PSTN protocol introduction

PSTN (Public Switched Telephone Network) is a switched network used for global voice communications. With approximately 800 million users, it is the largest telecommunications network in the world today.

We use this network in everyday life, for example when we make a call from a landline phone or dial up to the Internet over a telephone line at home. One thing worth emphasizing is that the PSTN was designed from the beginning to carry voice.

The PSTN is the telephone network we commonly use in daily life. As is well known, it is a circuit-switched network based on analog technology. Among the many wide area network interconnection technologies, interconnection through the PSTN has the lowest communication cost, but its data transmission quality and speed are also the worst, and the utilization of PSTN network resources is relatively low.

The PSTN is also referred to as POTS (plain old telephone service). It is the collection of all circuit-switched telephone networks built since Alexander Graham Bell invented the telephone. Today, except for the final connection between the subscriber and the local telephone exchange, the public switched telephone network is technically almost fully digital.

In relation to the Internet, the PSTN provides a considerable part of the Internet's long-distance infrastructure. To use this long-distance infrastructure, and to share circuits among many users by exchanging traffic over them, an ISP pays the equipment owner a fee.

In this way, Internet users only need to pay their Internet service provider. The public switched telephone network is a circuit-switched service based on standard telephone lines, used as a way of connecting remote endpoints. Typical applications are connecting remote endpoints to a local LAN and dial-up Internet access for remote users.

The PSTN consists of two parts: the switching system and the transmission system. The switching system is made up of telephone switches, and the transmission system is made up of transmission equipment and cables. As user demand has grown, both components have continued to develop and improve.

1. The development of the switching system roughly went through the following stages.

In the era of manual switching, connections were made by hand. Long ago, when you made a call, you reached an operator first, and the operator completed the connection for you.

In the era of automatic switching, step-by-step and crossbar switches were produced.

In the era of semi-electronic switching, electronic technology was introduced into the control part of the switch.

In the era of space-division switching, stored-program-controlled switches were created, but the signals transmitted were still analog.

In the era of digital switching, with the successful application of PCM (pulse code modulation) technology, digital stored-program-controlled switches appeared, and the signals transmitted became digital.

2. PSTN transmission equipment has evolved from carrier multiplexing equipment to SDH equipment, and cables have also evolved from copper wires to optical fibers.

The PSTN provides analog channels, and these channels are connected through a number of telephone exchanges. When two hosts or routers need to be connected via the PSTN, modems must be used at both network access points to perform the analog/digital and digital/analog signal conversion.

From the perspective of the OSI seven-layer model, the PSTN can be seen as a simple extension of the physical layer; it does not provide users with services such as flow control or error control. Moreover, because the PSTN uses circuit switching, once a path is established it is held until it is released, and its full bandwidth can be used only by the devices at the two ends of that path, even when they have no data to transmit. This circuit-switching approach therefore cannot make full use of the network bandwidth.

Access to the network via the PSTN is relatively simple and flexible; the usual approaches are as follows:

1. Access through an ordinary dial-up telephone line. A modem is connected to the existing telephone line at each of the two communicating sites, and the modem is then connected to the corresponding Internet device. Most Internet devices, such as PCs and routers, provide several serial ports, and serial interface standards such as RS-232 are used between the serial port and the modem (see the sketch after this list). This connection method is relatively economical and is charged at the same rate as an ordinary telephone call, so it suits situations where communication is infrequent.

2. Access through leased telephone lines. Compared with ordinary dial-up lines, leased lines provide higher communication speed and better data transmission quality, but the cost is correspondingly higher than the previous method. The access mode for a leased line is not much different from that of an ordinary dial-up line, except that the dial-up connection step is omitted.

3. Access a public data switching network (X.25, Frame Relay, etc.) from the PSTN via ordinary dial-up or a leased telephone line. This is a better way to connect to remote sites, because the public data switching network provides users with reliable, connection-oriented virtual circuit services, and its reliability and transmission rate are much better than those of the PSTN itself.
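As a side note to item 1 above, here is a minimal, hedged sketch of how a host might drive an external dial-up modem over its RS-232 serial port with Hayes AT commands, using the third-party pyserial package (pip install pyserial). The device path and phone number are placeholders, not values from this article.

```python
# Hedged sketch: talk to an external modem over a serial port with AT commands.
# "/dev/ttyUSB0" and the phone number are hypothetical placeholders.
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as modem:
    modem.write(b"ATZ\r")            # reset the modem to its default profile
    print(modem.readline())          # expect b"OK"
    modem.write(b"ATDT5551234\r")    # tone-dial the remote access number
    print(modem.readline())          # e.g. b"CONNECT 33600" once the carriers negotiate
```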

2020 SIP technology introduction

SIP (Session Initiation Protocol) is a multimedia communication protocol formulated by IETF (Internet Engineering Task Force).

It is an application-layer control protocol for multimedia communication over IP networks, used to create, modify, and terminate sessions with one or more participants. SIP is an IP voice session control protocol that originated on the Internet; it is flexible, easy to implement, and easy to extend.

SIP interoperates with the Resource Reservation Protocol (RSVP) responsible for voice quality. It also collaborates with several other protocols, including Lightweight Directory Access Protocol (LDAP) for location, Remote Authentication Dial-in User Service (RADIUS) for authentication, and RTP for real-time transmission.

With advances in computer science and technology, IP data networks based on packet switching have, thanks to their convenience and low cost, taken over the core position in communications from the traditional circuit-switched telephone network. As an application-layer signaling control protocol, SIP provides complete session creation and session modification services for a variety of instant messaging services, so the security of the SIP protocol plays a vital role in the security of instant messaging.

SIP appeared in the mid-1990s and originated in the research of Henning Schulzrinne and his team in the Computer Science Department of Columbia University. In 1996 he submitted a draft to the IETF that contained the important elements of SIP. In 1999, Schulzrinne removed the media-content-related material from the new draft he submitted, and the IETF subsequently released the first SIP specification, RFC 2543.

The SIP protocol is a protocol under development and continuous research. On the one hand, it draws on the design ideas of other Internet standards and protocols, follows the principles of simplicity, openness, compatibility, and scalability that the Internet has always adhered to in style, and fully pays attention to the security issues in the open and complex network environment of the Internet.

On the other hand, it also gives full consideration to supporting the services of the traditional public telephone network, including IN (intelligent network) services and ISDN services. SIP uses INVITE messages carrying session descriptions to create sessions, so that participants can negotiate media types through SIP exchanges. It supports user mobility by finding the user's current location through proxying and redirection, and users can also register their current location. The SIP protocol is independent of other conference control protocols and is designed to be independent of the underlying transport protocol, so additional functions can be added flexibly and conveniently.

SIP sessions use up to four main components: SIP user agent, SIP registration server, SIP proxy server, and SIP redirect server.

These components complete SIP sessions by exchanging messages that carry SDP (Session Description Protocol) payloads.
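As an illustration, here is a minimal sketch of such a message: a SIP INVITE whose body is an SDP offer for a single audio stream. This is not a full SIP stack; the addresses, tags, and branch values are illustrative only.

```python
# Minimal sketch: assemble a SIP INVITE carrying an SDP offer (illustrative values).

sdp_body = "\r\n".join([
    "v=0",
    "o=alice 2890844526 2890844526 IN IP4 192.0.2.10",
    "s=Example call",
    "c=IN IP4 192.0.2.10",
    "t=0 0",
    "m=audio 49170 RTP/AVP 0",        # offer one audio stream (PCMU)
    "a=rtpmap:0 PCMU/8000",
]) + "\r\n"

headers = [
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: <sip:alice@example.com>;tag=1928301774",
    "To: <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@192.0.2.10",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@192.0.2.10:5060>",
    "Content-Type: application/sdp",
    f"Content-Length: {len(sdp_body.encode())}",
]

invite = "\r\n".join(headers) + "\r\n\r\n" + sdp_body
print(invite)
```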

1. User agent

A SIP user agent (UA) is an end-user device, such as a mobile phone, multimedia handheld device, PC, or PDA, used to create and manage SIP sessions. The user agent client (UAC) sends requests; the user agent server (UAS) responds to them.

2. Registration server

The SIP registration server is a database containing the locations of all user agents in a domain. During SIP communication, these servers look up the parties' IP addresses and other related information and send them to the SIP proxy server.

3. Proxy server

The SIP proxy server accepts a session request from a SIP UA and queries the SIP registration server to obtain the address information of the called UA. It then forwards the session invitation either directly to the called UA (if it is in the same domain) or to the proxy server of the domain that UA belongs to. Its main functions include routing, authentication, billing monitoring, call control, and service provision.

4. Redirect server

The SIP redirect server maps the destination address in the request to zero or more new addresses, and then returns them to the client. The SIP redirect server can be on the same hardware as the SIP registration server and the SIP proxy server.

SIP uses the following logic functions to complete communication:

User location function: Determine the location of end users participating in communication.

User communication capability negotiation function: Determine the type and specific parameters of media terminals participating in communication.

User availability function: determine whether the called terminal is willing to join a specific session.

Call establishment and call control functions: including “ringing” to the called party, determining the call parameters of the calling party and the called party, call redirection, call transfer, call termination, etc.

SIP is not a vertically integrated communication system. SIP is more appropriately called a component, and it can be used as a part of other IETF protocols to construct a complete multimedia architecture.

Therefore, SIP should work together with other protocols to provide complete services to end users, although the basic SIP functional components do not depend on any of these protocols. SIP itself does not provide services; rather, it provides a foundation on which different services can be implemented.

SIP does not provide conference control services and does not prescribe how a conference should be managed; a conference can be set up by running another conference control protocol on top of SIP. Since SIP can manage the sessions of all parties in the conference, the conference can span heterogeneous networks. SIP cannot and does not intend to provide any form of network resource reservation management. Security is particularly important for the services provided; to achieve the desired degree of security, SIP offers a set of security services, including denial-of-service prevention, authentication (user-to-user and proxy-to-user), integrity protection, and encryption and privacy services.

Comparison of H.323 protocol and SIP protocol:

H.323 and SIP are protocols put forward by the telecommunications camp and the Internet camp respectively. H.323 tries to treat IP telephony like the familiar traditional telephone service, with only the transmission mode changing from circuit switching to packet switching.

The SIP protocol, on the other hand, treats IP telephony as just another application on the Internet; compared with other applications (such as FTP or e-mail), it adds signaling and QoS requirements. The services the two protocols support are basically the same, and both use RTP as the media transport protocol. H.323, however, is a relatively complicated protocol.

H.323 defines special protocols for supplementary services, such as H.450.1, H.450.2 and H.450.3. SIP does not specifically define a protocol for this purpose, but it conveniently supports supplementary services or intelligent services. As long as you make full use of SIP’s defined header fields, and simply extend SIP (such as adding several fields), you can implement these services.

In H.323, call establishment involves three signaling channels: the RAS signaling channel, the call signaling channel, and the H.245 control channel. An H.323 call can proceed only through the coordination of these three channels, so call setup takes a long time. In SIP, the session request and the media negotiation are carried out together.

Although H.323v2 improved the call establishment process, it still cannot compare with SIP, which needs only about 1.5 round-trip delays to establish a call.

The H.323 call signaling channel and H.245 control channel require a reliable transport protocol. SIP is independent of lower-layer protocols; it generally runs over connectionless protocols such as UDP and uses its own application-layer reliability mechanism to ensure that messages are delivered reliably.

In short, H.323 follows the traditional telephone signaling model. It conforms to the traditional design ideas of the telecommunications field, uses centralized, hierarchical control, and makes it easy to interconnect with traditional telephone networks.

The SIP protocol draws on the design ideas of other Internet standards and protocols, and follows the principles of simplicity, openness, compatibility, and scalability that the Internet has always adhered to in style, which is relatively simple.

What is the SSL protocol

SSL stands for Secure Sockets Layer. It is a security protocol that protects privacy: SSL prevents communication between client and server from being intercepted and eavesdropped on, verifies the identities of both communicating parties, and ensures the security of data transmitted over the network.

The traditional HTTP protocol has no corresponding security mechanism: it cannot guarantee the security and privacy of data transmission, cannot verify the identity of the communicating parties, and cannot prevent the transmitted data from being tampered with. SSL, developed by Netscape, uses data encryption, identity verification, and message integrity verification to provide security guarantees for network transmission.

The SSL protocol includes several security mechanisms: identity authentication, confidentiality of data transmission, and message integrity verification.

The authentication mechanism uses digital signatures to authenticate the server and the client; authentication of the client is optional.

Digital signatures are implemented with an asymmetric key algorithm: data signed with the private key can only be verified with the corresponding public key, so a user's identity can be judged by whether verification succeeds. If the verified result matches the expected message, authentication succeeds. When digital signatures are used to verify identity, it is necessary to ensure that the public key of the party being verified is authentic; otherwise, an illegitimate user could impersonate that party in the communication.
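The following is a minimal sketch of this idea using the third-party cryptography package (pip install cryptography). The challenge text and key size are illustrative, and real SSL/TLS additionally relies on certificates to bind public keys to identities.

```python
# Hedged sketch: sign a challenge with a private key, verify with the public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

challenge = b"random-challenge-from-the-verifier"   # illustrative value
# The party being authenticated signs the challenge with its private key ...
signature = private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# ... and the verifier checks the signature with that party's public key.
try:
    public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: identity check passed")
except InvalidSignature:
    print("signature invalid: authentication failed")
```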

Confidentiality of data transmission is provided by encrypting the transmitted data with a symmetric key algorithm: the sender encrypts the data with the encryption algorithm and encryption key before sending it, and the receiver uses the decryption algorithm and decryption key to recover the plaintext from the ciphertext. A third party without the decryption key cannot restore the ciphertext to plaintext, which ensures the confidentiality of the data.

A message authentication code (MAC) is used to verify the integrity of a message during transmission. A MAC algorithm takes a key and data of any length and produces a fixed-length output.

1. With the key as input, the sender uses the MAC algorithm to compute the MAC value of the message and then sends the message, together with the MAC value, to the receiver.

2. The receiver uses the same key and MAC algorithm to compute the MAC value of the message and compares it with the received MAC value.

If the two are the same, the message has not been changed. Otherwise, the message was modified in transit and the receiver discards it.
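A minimal sketch of this check with Python's standard library (hmac, hashlib) follows; the shared key and message are illustrative.

```python
# Hedged sketch: MAC-based integrity check with HMAC-SHA256.
import hmac, hashlib

key = b"shared-secret-key"                  # illustrative shared key
message = b"transfer 100 to account 42"     # illustrative message

# Sender: compute the MAC and transmit it together with the message.
mac_sent = hmac.new(key, message, hashlib.sha256).digest()

# Receiver: recompute the MAC over the received message with the same key
# and compare the two values in constant time.
mac_recv = hmac.new(key, message, hashlib.sha256).digest()
if hmac.compare_digest(mac_sent, mac_recv):
    print("MAC matches: message unchanged")
else:
    print("MAC mismatch: message was modified, discard it")
```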

2020 Knowledge points of wireless network coverage system

What is AP?

AP stands for wireless access point (Wireless Access Point). The AP plays the role of the hub in a traditional wired network and is the most commonly used device when building a small wireless LAN.

An AP is equivalent to a bridge connecting the wired and wireless networks. Its main function is to connect the various wireless clients together and then connect the wireless network to the Ethernet, achieving wireless coverage of the network.

What are thin and fat APs?

Thin AP (Fit AP):

Also known as wireless bridges, wireless gateways, and so-called “thin” APs.

A simple way to understand a thin AP: it cannot be configured on its own; a dedicated device (a wireless controller) is required for centralized control, management, and configuration.

The "controller + thin AP + router" architecture is generally used for wireless network coverage, because when there are many APs, managing the configuration only through the controller greatly simplifies the work.

Fat AP (FAT AP):

What the industry calls a fat AP is also known as a wireless router. A wireless router differs from a pure AP: in addition to the wireless access function, it generally has WAN and LAN interfaces and supports network address translation (NAT), a DHCP server, DNS, and MAC address cloning, as well as VPN access, a firewall, and other security features.

What is AC?

The wireless access controller (AC) is a network device used to centrally control the controllable wireless APs in a local area network. It is the core of a wireless network and is responsible for managing all wireless APs in it. Management of APs includes delivering configurations, modifying configuration parameters, intelligent radio frequency management, access security control, and so on. (The ACs and APs currently on the market generally need to be from the same manufacturer in order to manage one another.)

What is a POE switch?

PoE (Power over Ethernet), also known as a LAN-based power supply system (PoL, Power over LAN) or Active Ethernet, is a technology that, without any change to the existing Cat-5 Ethernet cabling infrastructure, transmits data signals to IP-based terminals (such as IP telephones, wireless LAN access points, and network cameras) while also supplying DC power to those devices over the same cable.

PoE technology can guarantee the safety of the existing structured cabling and the normal operation of the existing network while minimizing cost.

A PoE switch not only provides the transmission functions of an ordinary switch but can also supply power to the device at the other end of the network cable. With power supply and data transmission integrated, no additional power adapter or PoE injector module is needed to power the device, and a single Cat-5 cable does all the work.

PoE power supply differences

Standard PoE: according to the IEEE 802.3af/at specification, the powered device's 25 kΩ signature resistance must first be detected and a handshake performed; power is supplied only if the handshake succeeds, otherwise only data is passed.

Example: plug a standard PoE port into a computer's network card and the card will not be burned out; the computer simply accesses the network normally, because only data passes.

Non-standard PoE: also called the forced-supply type. Power is supplied as soon as the port is powered on; the receiving end is not detected first and no handshake is performed, and 48 V or 54 V is applied directly.

Example: plug a non-standard PoE port into a computer's network card and you may be able to access the network normally, but because 48 V or 54 V is supplied directly without any negotiation, it may burn out the device.

The output voltages (DC) on the market are roughly 48 V, 24 V and 12 V.

What software and hardware are needed to deploy a wireless project?

Basic hardware: router, PoE switch, AC controller, wireless APs.

High-end hardware: firewall, router, traffic and behavior management (bypass), main (core) switch, floor switches, PoE switch, AC controller, wireless APs.

Is higher AP power always better?

No. Higher AP power does mean higher transmitted signal strength, and taken literally that sounds better, but the stronger signal applies only to the AP itself. In a wireless network the signal belongs to both parties: the transmitter and the receiver both send data to each other. If the transmitter's signal is too strong relative to the receiver's, it will inevitably affect the return of data from the receiver, causing transmission delays or packet loss.

A simple analogy: in one room, you and another person are talking at the same time; the other person's voice is very loud while yours is very quiet, so the other person cannot hear what you are saying, which affects the quality of the conversation.

In a large-scale wireless project, what are the key points and the most important points?

Key points from an engineering perspective:

Design

Work from the actual construction drawings to determine the routing of the cabling, considering factors such as concealment, damage to the building (the characteristics of the building structure), avoiding power lines and other cabling while making use of the existing space, and providing necessary and effective protection for the cables on site.

The location of the router

The router is generally placed in a basement low-voltage (weak-current) room, far away from high-voltage rooms to avoid strong electromagnetic interference. Pay attention to ventilation and keep the room dry. Ideally it is mounted in a cabinet together with the core switch.

PoE switch location

The location of the PoE switch should be chosen sensibly, in the middle of its group of AP points, to reduce cabling costs and shorten the distance between the switch and the APs.

AP location selection

AP placement should start from the central area of the scene and radiate outward. The coverage areas of adjacent APs should overlap to reduce signal blind spots. The distance between an AP and its PoE switch should not exceed 80 meters (taking genuine AMP (Anpu) network cable as an example).

Network cable laying

As the transmission carrier of the network signal, the network cable should be protected during laying; there should be no breaks or sharp bends, and if necessary it should be run through iron conduit or placed in the ceiling cable tray. Pay special attention to keeping it away from high-voltage lines to reduce interference with the signal.

Precautions for practical debugging and post-maintenance:

a. External network and routing: connect the external network cable and make sure the line can access the Internet normally; connect the router and make sure the router itself can reach the Internet normally. During construction, connect the main switch and the floor switches to ensure the backbone network communicates normally.

b. Walkie-talkies for debugging: during the commissioning stage, a set of walkie-talkies should be arranged (for example, borrowed from the mall) to facilitate the debugging work.

c. During the construction and debugging stage, sufficient spare parts shall be reserved for AP, switch, network cable, and other construction and debugging hardware.

d. Construction drawings: before each construction job, ask the constructor to provide the following two construction drawings:

Construction network topology: required to show in detail the floor switches, the router information and location, the number of APs on each floor, and the connection methods.

Construction equipment connection and line identification diagram: required to show the router, switch, and AP connection information and the corresponding ports; every connection should list the approximate theoretical network cable length (router-switch-AP).

e. Construction wiring and line marking planning:

Information identification record: AP MAC information record: when the construction party installs an AP, the floor number and location number of the AP and the corresponding MAC information must be recorded (note the AP number on the corresponding floor plan; for example, AP No. 1 on the 1st floor is recorded as 1F-1 followed by its MAC address). This information should be recorded uniformly, by floor, in the Word document containing the mall construction drawings, or written by hand in the free space at the side of the construction drawing, to facilitate later maintenance and use.

Wire mark identification record:

(1) Switch input and output lines: the label or serial number must indicate which floor and location number of AP the far end of the cable connects to (note the AP number on the corresponding floor plan; for example, AP No. 1 on the 1st floor is labeled 1F-1). Cables coming in from the external network should also be labeled, for example "external network access".

(2) Interconnections between floor switches: the head of each switch-to-switch interconnection cable should be labeled with the source of the connection (note the floor and switch label; for example, switch 1 on the first floor is labeled 1F-1 SW).

Check on site whether each installed AP is powered on and working normally:

After construction is completed, the construction personnel should check every AP on site to confirm that it powers up normally; the normal powered-on state is that the green indicator on the AP stays lit. If the router is in place and running, software can be used to check whether each AP is transmitting signals normally and can reach the Internet.

If the above information is completely clear, the construction personnel do not need to be on site. If it is not clear, the construction personnel need to be on site to assist with each round of commissioning.

The difference between OSPFv3 and OSPFv2

OSPF is a link-state routing protocol. It has many advantages: open standards, rapid convergence, loop freedom, and easy hierarchical design. However, OSPFv2, which is widely used in IPv4 networks, is too closely tied to IPv4 addresses in its message contents and operating mechanisms, which greatly restricts its scalability and adaptability.

Therefore, when extending OSPF to support IPv6 was first considered, it was realized that this was an opportunity to improve and optimize the OSPF protocol itself. As a result, OSPFv2 was not merely extended for IPv6; a new, improved version of OSPF was created: OSPFv3.

OSPFv3 is described in detail in RFC 2740. The relationship between OSPFv3 and OSPFv2 is very similar to that between RIPng and RIPv2. Most importantly, OSPFv3 uses the same basic mechanisms as OSPFv2: the SPF algorithm, flooding, DR election, areas, and so on. Constants and variables such as timers and metrics are also the same. Another similarity with the RIPng/RIPv2 relationship is that OSPFv3 is not backward compatible with OSPFv2.

Whether it is OSPFv2 or OSPFv3, the basic operating principles of OSPF are the same. However, because IPv4 and IPv6 differ in semantics and address size, differences between the two versions are inevitable.

Similarities between OSPFv2 and OSPFv3: 

1. The router types are the same. Including internal routers, backbone routers, area border routers and autonomous system border routers.

2. The supported area types are the same. Including backbone area, standard area, stub area, NSSA and completely stub area.

3. Both OSPFv2 and OSPFv3 use SPF algorithm.

4. The election process of DR and BDR is the same.

5. The interface types are the same. Including point-to-point links, point-to-multipoint links, BMA links, NBMA links and virtual links.

6. The packet types are the same, including Hello, DBD, LSR, LSU, and LSAck, and the neighbor relationship establishment process is also the same.

7. The calculation method of the metric value has not changed.

The difference between OSPFv2 and OSPFv3:

1. In OSPFv3 the "subnet" concept of OSPFv2 is replaced by the "link" concept, and two neighbors on the same link but in different IPv6 subnets are allowed to exchange packets.

2. The router ID, area ID, and LSA link-state ID are still expressed as 32-bit values, so they can no longer be expressed as IPv6 addresses.

3. On broadcast and NBMA links, OSPFv2 identifies neighbors by their interface addresses, while on other link types neighbors are identified by router ID. OSPFv3 removes this inconsistency: neighbors on all link types are identified by router ID.

4. OSPFv3 retains the AS and area flooding scopes of OSPFv2 but adds a link-local flooding scope. A new Link LSA is added to carry information that is relevant only to neighbors on a single link.

5. IPv6 uses the Authentication extension header as its standard authentication mechanism, so OSPFv3 does not define its own authentication for OSPFv3 packets; it simply relies on IPv6 authentication.

6. Link-local addresses are used to discover neighbors and complete automatic configuration. IPv6 routers do not forward packets whose source address is a link-local address. OSPFv3 assumes that each router has assigned itself a link-local address on every physical segment (physical link) it connects to.

7. In OSPFv2, LSAs of unknown type are always discarded, whereas OSPFv3 can handle them, for example by flooding them with link-local scope.

8. If an IPv4 address is configured on one of the router's interfaces, or on a loopback interface, OSPFv3 will automatically select that IPv4 address as the router ID; otherwise, the router ID must be configured manually.

4 spam filtering methods that help your network security

E-mail is a means of exchanging information electronically and is the most heavily used service on the Internet. Through the network's e-mail system, users can communicate with network users in any corner of the world at very low cost and very quickly.

E-mail can be in various forms such as text, image, and sound. At the same time, users can get a lot of free news and special emails, and easily realize easy information search. The existence of e-mail greatly facilitates the communication and exchanges between people and promotes the development of society.

E-mail involves many protocols and components, such as SMTP, POP3, MUAs, MTAs, and so on.

Spam refers to e-mail sent forcibly without the user's permission, typically containing advertisements, viruses, and similar content. For users, besides interfering with normal mail reading, spam may carry harmful content such as viruses; for service providers, spam can congest mail servers, reduce network efficiency, and even become a tool for hackers to attack mail servers.

Spam is generally sent from dedicated servers and usually has the following characteristics:

1. It is sent without the user's consent and is irrelevant to the user.

2. Criminals obtain email addresses through deception.

3. The e-mails contain false advertising and are used to spread large volumes of further spam.

In technical terms, anti-spam methods basically divide into technical filtering and non-technical filtering, with technical filtering being the main approach: filtering mechanisms are established actively during the mail transmission process.

Non-technical measures include laws and regulations, unified technical specifications, social and moral advocacy, and so on. In terms of process, mail filtering is divided into server-side filtering and receiving-side filtering. Receiving-side filtering checks received mail through the server's software after the mail reaches the mail server; it is passive filtering, based mainly on IP addresses, keywords, and other obvious characteristics of spam. It is practical, has a low false-positive rate for normal mail, and is currently one of the main anti-spam methods.

Ever since spam appeared, network providers and Internet companies have been troubled by it. Even so, 30 years of development did not produce truly effective anti-spam technologies or methods; one important reason is the enormous volume of spam and the high complexity of filtering it. Only in recent years, with the development of artificial intelligence, machine learning, and related disciplines, has anti-spam work made real progress.

Common spam filtering methods:

1. Statistical method:

Bayesian algorithm: based on statistical methods, using weighted markers, it takes known spam and non-spam messages as samples for content analysis and statistics, calculates the probability that a new message is spam, and generates filtering rules from this (a toy sketch appears after this list).

Connection/bandwidth statistics: anti-spam is achieved by counting whether the number of connection attempts from a given IP address per unit time is within a predetermined range, or by limiting its effective bandwidth.

Mail quantity limit: Limit the number of mails that a single IP can send in a unit time.

2. List method:

A blacklist and a whitelist respectively record the IP addresses or e-mail addresses of known spammers and of trusted senders. This is one of the more common forms of mail filtering, although in the early days of anti-spam work this kind of list-based filtering was quite limited because list resources were scarce.

3. Source method:

DomainKeys: used to verify that the sender of an e-mail really corresponds to the claimed domain name and to verify the integrity of the message. This technology is based on public/private key signatures.

SPF (Sender Policy Framework): the purpose of SPF is to prevent forgery of sender addresses. SPF uses a reverse-lookup-style check to determine whether the e-mail's claimed domain name and its source IP address actually correspond.

4. Analysis method:

Content filtering: Filter spam by analyzing the content of emails and then using keyword filtering.

Multiple picture recognition technology: Recognize spam that hides malicious information through pictures.

Intent analysis technology: Email motivation analysis technology.
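As promised above, here is a toy sketch of Bayesian spam scoring, not a production filter. The per-word probabilities and priors below are made-up sample statistics that would normally be learned from labelled mail.

```python
# Toy sketch: combine per-word probabilities in the usual naive-Bayes fashion.

# P(word appears | spam) and P(word appears | ham), estimated from samples (made up here)
p_word_spam = {"free": 0.60, "winner": 0.40, "meeting": 0.05, "invoice": 0.10}
p_word_ham  = {"free": 0.05, "winner": 0.01, "meeting": 0.30, "invoice": 0.25}
p_spam, p_ham = 0.5, 0.5   # prior probabilities

def spam_probability(words):
    """Return P(spam | words) using only the words we have statistics for."""
    ps, ph = p_spam, p_ham
    for w in words:
        if w in p_word_spam:
            ps *= p_word_spam[w]
            ph *= p_word_ham[w]
    return ps / (ps + ph)

score = spam_probability("free winner prize".lower().split())
print(f"spam probability: {score:.3f}")   # classify as spam above a chosen threshold
```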

Sending and receiving mail generally passes through SMTP servers, which transfer messages using SMTP (Simple Mail Transfer Protocol).

The email transmission process mainly includes the following three steps:

① The sender's PC sends the mail to the designated (sender-side) SMTP server.

② The sender's SMTP server encapsulates the mail in SMTP messages and sends it to the receiver's SMTP server according to the mail's destination address.

③ The recipient receives (downloads) the mail from the receiving server.
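Here is a minimal sketch of the first hop of this process, handing a message to an SMTP server with Python's standard smtplib. The server name, addresses, and credentials are placeholders; a real deployment would use its own mail relay.

```python
# Hedged sketch: submit a message to a (hypothetical) SMTP relay.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message travels sender PC -> sender SMTP -> receiver SMTP.")

with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()                         # upgrade to TLS if the relay supports it
    smtp.login("alice@example.com", "app-password")   # placeholder credentials
    smtp.send_message(msg)                  # the relay forwards it toward bob's server
```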

POP3 (Post Office Protocol 3) and IMAP (Internet Mail Access Protocol) specify how a computer, through client software, manages and downloads e-mail held on the mail server.

Spam prevention here is an IP-based mail filtering technology that curbs the flood of spam by checking the legitimacy of the source IP of the sender's SMTP server. The proliferation of spam brings many problems:

① Occupy network bandwidth, cause mail server congestion, and reduce the operating efficiency of the entire network.

②Occupy the recipient’s mailbox space, affecting the reading and viewing of normal mail.

When a firewall serves as the security gateway, all external mail must be forwarded through the firewall; by checking the IP address of the sender's SMTP server, spam can be filtered effectively.
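One common way such an IP check is implemented is a DNS-based blocklist (DNSBL) lookup. The sketch below is a hedged illustration using the standard socket module; Spamhaus ZEN is used as an example zone, the IP shown is a documentation address, and some DNSBLs limit queries from public resolvers.

```python
# Hedged sketch: check whether a sender SMTP server's IP is listed on a DNSBL.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the IP has an A record in the blocklist zone."""
    query = ".".join(reversed(ip.split("."))) + "." + zone  # 192.0.2.1 -> 1.2.0.192.<zone>
    try:
        socket.gethostbyname(query)   # any answer means the IP is listed
        return True
    except socket.gaierror:           # NXDOMAIN -> not listed
        return False

print(dnsbl_listed("192.0.2.1"))      # documentation IP, expected: False
```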

Five advantages of NETCONF protocol

Today we will look at the NETCONF protocol in detail.

With the rise of SDN in recent years, a protocol more than ten years old has once again attracted attention: the NETCONF protocol.

The network configuration protocol NETCONF (Network Configuration Protocol) provides a mechanism for managing network devices. Users can use this mechanism to add, modify, and delete the configuration of network devices, and obtain configuration and status information of network devices.

Through the NETCONF protocol, network devices can expose standardized application programming interfaces (APIs), and applications can use these APIs directly to send configurations to, and retrieve configurations from, network devices.

NETCONF (Network Configuration Protocol) is a network configuration and management protocol based on Extensible Markup Language (XML). It uses a simple RPC (Remote Procedure Call)-based mechanism to implement communication between the client and the server. The client can be a script or an application running on the network management system.

The advantages of using the NETCONF protocol are:

1. The NETCONF protocol defines messages in XML format and uses the RPC mechanism to modify configuration information. This makes configuration information easier to manage and enables interoperability between devices from different manufacturers.

2. It can reduce network failures caused by manual configuration errors.

3. It can improve the efficiency of using the configuration tool to upgrade the system software.

4. Good scalability, devices of different manufacturers can define their own protocol operations to achieve unique management functions.

5. NETCONF provides security mechanisms such as authentication and authorization to ensure the security of message transmission.

The basic network architecture of NETCONF consists mainly of the following parts:

1. NETCONF Manager:

The NETCONF Manager acts as the client in the network and uses the NETCONF protocol to manage network devices.

It sends requests to the NETCONF Server to query or modify one or more specific parameter values.

It receives alarms and events sent proactively by the NETCONF Server to learn the current status of the managed device.

2. NETCONF Agent:

The NETCONF Agent acts as the server in the network; it maintains the information and data of the managed device and responds to requests from the NETCONF Manager.

After receiving the client's request, the server parses the data and then returns a response to the client.

When the device fails or other events occur, the server uses the Notification mechanism to proactively inform the client of the device's alarms and events and to report changes in the device's current status.

3. Configuration datastores:

NETCONF defines the existence of one or more configuration datastores and allows them to be configured. A configuration datastore is defined as the complete set of configuration data needed to take the device from its initial default state to the desired operating state.

The information that the NETCONF Manager obtains from a running NETCONF Agent includes both configuration data and state data.

The NETCONF Manager can modify the configuration data and, by operating on it, drive the state of the NETCONF Agent to the state the user desires.

The NETCONF Manager cannot modify the state data; the state data mainly concerns the running status and statistics of the NETCONF Agent.

Like ISO/OSI, the NETCONF protocol also adopts a layered structure. Each layer packages a certain aspect of the protocol and provides related services to the upper layer. The hierarchical structure allows each layer to focus on only one aspect of the protocol, making it easier to implement, and at the same time reasonably decouples the dependencies between each layer, which can minimize the impact of changes in the internal implementation mechanism of each layer on other layers.

The content layer represents the collection of managed objects. The content of this layer comes from data models; the original MIB-style data models have shortcomings for configuration management, such as not allowing rows to be created or deleted, and the corresponding MIBs do not support complex table structures.

The operation layer defines a series of basic primitive operation sets used in RPC. These operations will form the basic capabilities of NETCONF.

The RPC layer provides a simple, transport-independent framing mechanism for encoding RPCs. Request and response data between the NETCONF client and server are encapsulated in <rpc> and <rpc-reply> elements. Normally the <rpc-reply> element carries the data the client asked for, or a message indicating that the configuration succeeded; when the client's request contains an error or the server fails to process it, the server places an <rpc-error> element with detailed error information inside the <rpc-reply> element and returns it to the client.
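The sketch below illustrates this framing: a <get-config> request wrapped in an <rpc> element, built with Python's standard xml library. The message-id value is arbitrary, the base-1.0 namespace comes from the NETCONF RFCs, and device-specific filters are omitted.

```python
# Minimal sketch: build a NETCONF <rpc><get-config> request as XML.
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
ET.register_namespace("", NS)

rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": "101"})
get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
source = ET.SubElement(get_config, f"{{{NS}}}source")
ET.SubElement(source, f"{{{NS}}}running")       # read the running datastore

print(ET.tostring(rpc, encoding="unicode"))
# The server answers with an <rpc-reply> carrying <data> on success, or an
# <rpc-error> element describing what went wrong.
```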

Transport layer: the transport layer provides a communication path for the interaction between the NETCONF Manager and the NETCONF Agent. The NETCONF protocol can be carried by any transport protocol that meets its basic requirements.

The basic requirements for the bearer protocol are as follows:

The protocol must be connection-oriented: a persistent connection is established between the NETCONF Manager and the NETCONF Agent, and once established it must provide reliable, sequenced data delivery.

The NETCONF protocol relies on the transport layer for user authentication, data integrity, and security and confidentiality (encryption).

The bearer protocol must provide NETCONF with a mechanism for distinguishing the session type (client or server).

Detailed VRRP technology

In the VRRP standard protocol mode, only the Master router can forward packets; the Backup routers are in the listening state and cannot forward packets. Although creating multiple backup groups can achieve load sharing among multiple routers, the hosts in the LAN then need to be configured with different gateways, which increases configuration complexity.

VRRP load balancing mode adds load balancing on top of the virtual gateway redundancy provided by VRRP. Its principle is that one virtual IP address corresponds to multiple virtual MAC addresses, and each router in the VRRP backup group corresponds to one of these virtual MAC addresses, so every router can forward traffic.

In VRRP load balancing mode, only one backup group needs to be created to achieve load sharing among the routers in the group, avoiding the problem in standard VRRP of backup devices sitting idle and network resources being under-utilized.

Load balancing mode is based on the VRRP standard protocol mode: the working mechanisms of standard VRRP (such as Master election, preemption, and monitoring functions) are all supported in load balancing mode, and new mechanisms are added on top of them.

1. Virtual MAC address allocation:

In VRRP load balancing mode, the Master router is responsible for allocating virtual MAC addresses to the routers in the backup group, and it answers hosts' ARP requests (in IPv4 networks) or ND requests (in IPv6 networks) with different virtual MAC addresses according to the load balancing algorithm, so that traffic is shared among multiple routers. The Backup routers in the group do not respond to the hosts' ARP/ND requests.
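The toy sketch below illustrates only the idea: the Master answers each host's ARP request for the one virtual IP with one of several virtual MACs chosen by a load-balancing algorithm. The hash-by-host approach and the MAC values are illustrative, not the algorithm mandated by any vendor.

```python
# Toy sketch: steer different hosts to different virtual MACs (illustrative only).
import hashlib

VIRTUAL_MACS = ["0000-5e00-0101", "0000-5e00-0102", "0000-5e00-0103"]  # one per router

def arp_reply_mac(host_mac: str) -> str:
    """Pick the virtual MAC returned to this host's ARP request for the virtual IP."""
    digest = hashlib.md5(host_mac.encode()).digest()
    return VIRTUAL_MACS[digest[0] % len(VIRTUAL_MACS)]

for host in ["00e0-fc12-3456", "00e0-fc12-3457", "00e0-fc12-3458"]:
    print(host, "->", arp_reply_mac(host))   # different hosts land on different routers
```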

2. Virtual forwarder:

The allocation of virtual MAC addresses enables different hosts to send traffic to different routers in the backup group. To enable the routers in the backup group to forward the traffic sent by the host, a virtual forwarder needs to be created on the router. Each virtual forwarder corresponds to a virtual MAC address of the backup group, and is responsible for forwarding traffic whose destination MAC address is the virtual MAC address. 

The process of creating a virtual forwarder is:

(1) After a router in the backup group obtains a virtual MAC address assigned by the Master router, it creates a virtual forwarder corresponding to that MAC address. This router is called the VF Owner (Virtual Forwarder Owner) of the virtual forwarder for that virtual MAC address.

(2) The VF Owner advertises the virtual forwarder information to other routers in the backup group.

(3) After the routers in the backup group receive the virtual forwarder information, they create a corresponding virtual forwarder locally.

It can be seen that the routers in the backup group not only need to create a virtual forwarder corresponding to the virtual MAC address assigned by the Master router, but also need to create a virtual forwarder corresponding to the virtual MAC address advertised by other routers. 

3. Weight and priority of the virtual forwarder

The weight of a virtual forwarder indicates the forwarding capability of the device: the higher the weight, the stronger the forwarding capability. When the weight falls below a certain value (the failure lower limit), the device can no longer forward traffic for hosts. The priority of a virtual forwarder determines its state: the virtual forwarder with the highest priority is in the Active state, called the AVF (Active Virtual Forwarder), and is responsible for forwarding traffic. The virtual forwarder priority ranges from 0 to 255, with 255 reserved for the VF Owner. The device calculates the virtual forwarder's priority from its weight.

4. Virtual forwarder backup

If the weight of the VF Owner is at or above the failure lower limit, the VF Owner's priority is the maximum value 255 and, as the AVF, it forwards traffic whose destination MAC address is the corresponding virtual MAC address. The other routers, on receiving the Advertisement messages sent by the AVF, also create virtual forwarders for that MAC address; these virtual forwarders are in the Listening state and are called LVFs (Listening Virtual Forwarders).

The LVFs monitor the status of the AVF. When the AVF fails, the LVF with the highest virtual forwarder priority is elected as the new AVF. The virtual forwarder always works in preemptive mode: if an LVF receives an Advertisement message from the AVF in which the advertised virtual forwarder priority is lower than that of its own local virtual forwarder, the LVF preempts and becomes the AVF.

5. Packets in VRRP load balancing mode

Only one type of message is defined in the VRRP standard protocol mode: the VRRP Advertisement message, which only the Master router sends periodically; the Backup routers do not send it. VRRP load balancing mode uses the following four types of messages:

① Advertisement message: used not only to advertise the status of the backup group on the device but also to advertise the information of the virtual forwarders in the Active state on the device. Both the Master and Backup routers send this message periodically.

② Request message: if a router in the Backup state is not a VF Owner (Virtual Forwarder Owner), it sends a Request message asking the Master router to assign it a virtual MAC address.

③ Reply message: after receiving a Request message, the Master router assigns a virtual MAC address to the Backup router via a Reply message. On receiving the Reply, the Backup router creates the virtual forwarder corresponding to that virtual MAC address; this router becomes the owner of that virtual forwarder.

④ Release message: after the VF Owner has been unreachable for a certain time, the router that takes over its work sends a Release message notifying the routers in the backup group to delete the virtual forwarder corresponding to that VF Owner.

LACP technology explained

In short, link aggregation technology bundles multiple physical links into one logical link with higher bandwidth; the bandwidth of the logical link equals the sum of the bandwidths of the aggregated physical links.

The number of aggregated physical links can be configured according to the bandwidth requirements of the service. Therefore, link aggregation has the advantages of low cost and flexible configuration. In addition, link aggregation also has the function of link redundancy backup, and the aggregated links dynamically backup each other, which improves the stability of the network. 

There was no uniform standard for the realization of early link aggregation technology. Each manufacturer had its own proprietary solutions, which were not completely the same in function and incompatible with each other.

Therefore, the IEEE developed a standard specifically for link aggregation. The current official standard is IEEE 802.3ad, and the Link Aggregation Control Protocol (LACP), a protocol for dynamic link aggregation, is one of its main components.

After LACP is enabled on a port, the port advertises its system priority, system MAC address, port priority, port number, and operational key to the peer by sending LACPDUs.

After receiving this information, the peer compares it with the information stored for its other ports to select the ports that can be aggregated, so that the two ends can agree on a port joining or leaving a dynamic aggregation group.

The operational key is a configuration combination generated by LACP from the port configuration (speed, duplex, basic configuration, and management key) during port aggregation.

After the LACP protocol is enabled for the dynamic aggregation port, its management key defaults to zero. After LACP is enabled for a static aggregation port, the management key of the port is the same as the aggregation group ID.

In a dynamic aggregation group, members of the same group must have the same operational key; in manual and static aggregation groups, the selected (active) ports share the same operational key.

Port aggregation is the aggregation of multiple ports together to form an aggregation group, so as to realize the load sharing among the member ports in the aggregation group, and also provide higher connection reliability.

Introduction to the main LACPDU fields:

Actor_Port/Partner_Port: local/peer port information.

Actor_State/Partner_State: local/peer state.

Actor_System_Priority/Partner_System_Priority: local/peer system priority.

Actor_System/Partner_System: local/peer system ID.

Actor_Key/Partner_Key: local/peer operational key; interfaces with the same value can be aggregated.

Actor_Port_Priority/Partner_Port_Priority: local/peer port priority.

Overview of static and dynamic LACP:

Static LACP aggregation is configured manually by the user; the system is not allowed to add or remove ports in the aggregation group automatically. The aggregation group must contain at least one port.

When there is only one port in the aggregation group, the port can only be removed from the group by deleting the group itself. LACP is enabled on static aggregation ports. When a static aggregation group is deleted, its member ports form one or more dynamic LACP aggregations and keep LACP enabled. Users are not allowed to disable LACP on a static aggregation port.

Dynamic LACP aggregation is created and deleted automatically by the system; users are not allowed to add or remove member ports in a dynamic LACP aggregation.

Only ports that have the same rate and duplex attributes, are connected to the same device, and have the same basic configuration can be dynamically aggregated. A dynamic aggregation can be created even with a single port, in which case it is a single-port aggregation. In dynamic aggregation, LACP is enabled on the port.

Port status in static aggregation group:

In a static aggregation group, the port may be in two states: selected or standby.

Both selected and standby ports can send and receive LACP protocol frames, but standby ports cannot forward user traffic.

In a static aggregation group, the system sets the port in the selected or standby state according to the following principles:

The system places in the selected state the port with the highest priority according to the order full duplex/high speed, full duplex/low speed, half duplex/high speed, half duplex/low speed; the other ports are placed in the standby state.

Ports connected to a peer device different from the one connected to the lowest-numbered selected port, or connected to the same peer device but belonging to a different aggregation group on it, will be in the standby state.

Ports that cannot be aggregated with the lowest-numbered selected port because of hardware limitations (for example, aggregation across boards is not supported) will be in the standby state.

Ports whose basic configuration differs from that of the lowest-numbered selected port will be in the standby state.

Because the number of selected ports that the device can support in an aggregation group is limited, if the current number of member ports exceeds the maximum number of selected ports supported, the system selects some ports as selected ports in ascending order of port number, and the rest become standby ports.

Port status of dynamic aggregation group:

In a dynamic aggregation group, a port may likewise be in the selected or the standby state. Both selected and standby ports can send and receive LACP protocol frames, but standby ports cannot forward user traffic.

Because the maximum number of ports that the device can support in an aggregation group is limited, if the current number of member ports exceeds this maximum, the local system and the peer system negotiate: the end with the better device ID decides the state of the ports according to their port IDs.

The specific negotiation steps are as follows:

Compare the device IDs (system priority + system MAC address): first compare the system priorities; if they are the same, compare the system MAC addresses. The end with the smaller device ID is considered superior.

Compare the port IDs (port priority + port number): for each port on the end with the better device ID, first compare the port priorities; if they are the same, compare the port numbers. Ports with smaller port IDs become selected ports, and the remaining ports become standby ports.
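The toy sketch below mirrors this negotiation order: the device IDs (system priority, then system MAC) decide the superior end, which then ranks its ports by port ID (port priority, then port number). All values and the selected-port limit are illustrative; lower numbers win in both comparisons.

```python
# Toy sketch of the LACP selection comparisons (illustrative values only).
local_system = (32768, "00e0-fc00-0001")   # (system priority, system MAC)
peer_system  = (32768, "00e0-fc00-0002")

superior = "local" if local_system < peer_system else "peer"
print("superior end:", superior)           # tuple comparison: priority first, then MAC

ports = [                                   # (port priority, port number) on the superior end
    (32768, 3),
    (32768, 1),
    (100,   7),
]
max_selected = 2                            # assumed device limit on selected ports
selected = sorted(ports)[:max_selected]     # smaller port ID is better
print("selected ports:", selected)          # the rest stay in the standby state
```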

Within an aggregation group, the selected port with the smallest port number is the master port of the group, and the other selected ports are member ports of the group.

What is WLAN WDS technology

WDS (Wireless Distribution System) means that APs connect two or more otherwise independent local area networks through wireless links, forming an interconnected network over which data can be transmitted.

In a traditional WLAN, the wireless channel is used only as the transmission medium between the STA and the AP, and the AP's uplink is a wired network. To expand the coverage of the wireless network, devices such as switches are needed to interconnect the APs, which raises deployment cost and lengthens deployment time.

At the same time, when APs are deployed in complex environments (such as subways, tunnels, and docks), it is very difficult to connect the APs to the network by wire. With WDS technology, APs can be connected wirelessly, which makes it easier to deploy wireless LANs in complex environments, saves deployment cost, allows easy expansion, and enables flexible networking.

The advantages of WDS network include:

① Connect two independent LAN segments through a wireless bridge, and provide data transmission between them.

② Low cost and high performance.

③ The scalability is good, and there is no need to lay new wired connections and deploy more APs.

④ Suitable for companies, large warehousing, manufacturing, docks and other fields.

Service VAP: in a traditional WLAN, the AP is the entity that provides the WLAN service to STAs. A VAP (virtual access point) is a concept virtualized on the AP device: multiple VAPs can be created on one AP to serve the access needs of multiple user groups.

WDS VAP: in a WDS network, the AP is the entity that provides the WDS service to neighboring devices. WDS-type VAPs are divided into AP-type VAPs and STA-type VAPs, and an AP-type VAP provides the connection function for STA-type VAPs. For example, a VAP13 created on AP3 could be a STA-type VAP, while a VAP12 created on AP2 is an AP-type VAP.

Wireless virtual link: the WDS link established between a STA-type VAP on one AP and an AP-type VAP on an adjacent AP.

AP working mode: According to the actual location of the AP in the WDS network, the working mode of the AP is divided into root mode, middle mode and leaf mode.

(1) Root mode: the AP serves as the root node, connects to the AC over a wired link, and uses an AP-type VAP to establish wireless virtual links with STA-type VAPs.

(2) Middle mode: the AP serves as an intermediate node; it connects upward to an AP-type VAP using a STA-type VAP, and downward to STA-type VAPs using an AP-type VAP.

(3) Leaf mode: the AP serves as a leaf node and connects upward to an AP-type VAP using a STA-type VAP.

In terms of mode, WDS has three working modes, namely self-learning mode, relay mode and bridge mode.

The self-learning mode is a passive mode: the AP automatically recognizes and accepts WDS connections from other APs but does not actively connect to surrounding WDS APs. This mode can therefore only be used on the main access-point router or AP, that is, on the main AP being extended; it cannot be used to extend other APs via WDS.

The relay mode is the WDS mode with the most complete functions. In this mode, the AP can not only extend the wireless network range through WDS, but also has the function of the AP to accept wireless terminal connections.

The bridge mode is very similar to a bridge in a wired network: it receives a packet at one end and forwards it to the other end. WDS bridge mode is basically the same as relay mode except that the device no longer also acts as an AP: in bridge mode the AP no longer accepts connections from wireless terminals, and it cannot be discovered by scanning.

In terms of roles, the members of a WDS network can be divided into main, relay, and remote devices.

The device with the Internet connection or the LAN exit is usually the main device and connects to the backbone network via an Ethernet cable; the devices in the middle of the network that relay signals are relay devices; and the devices at the edge of the wireless WDS network that provide wireless access and forward data to the main device are remote base stations.

With the current generation of home wireless routers, the price of routers that support WDS has generally come down, so wireless users can spend relatively little money to expand the coverage of their wireless network, effectively enlarging the covered area and reducing wireless dead spots.
