CN115174482B: Message distribution method and device of network equipment

Info

Publication number: CN115174482B
Authority: CN (China)
Prior art keywords: message, header, processing unit, inner layer, network device
Legal status: Active (granted)
Application number: CN202210934791.XA
Other languages: Chinese (zh)
Other versions: CN115174482A
Inventors: 吴轩 (Wu Xuan), 李强 (Li Qiang)
Current and original assignee: XFusion Digital Technologies Co., Ltd.
Application filed by XFusion Digital Technologies Co., Ltd.
Priority: CN202210934791.XA
Publications: CN115174482A (application), CN115174482B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L 69/22: Parsing or analysis of headers

Abstract

The application provides a message distribution method and apparatus for a network device, where the network device includes multiple processing units. The method, performed by the network device, includes: receiving a first message; distributing the first message to a first processing unit among the multiple processing units for forwarding according to the tunneling-protocol header of the first message; and, if the first processing unit cannot process the first message, redistributing the first message according to the inner header of the first message. When messages encapsulated by a tunneling protocol are distributed among the multiple processing units, this reduces the latency with which the network device processes the first message.

Description

Message distribution method and device of network equipment
This application is a divisional application of the invention patent application entitled "Message distribution method and device of network equipment", filed with the China National Intellectual Property Administration under application number 201910423117.3 on May 21, 2019.
Technical Field
The present invention relates to the field of information technology, and in particular to a message distribution method and apparatus for a network device.
Background
A network device typically includes multiple processing units (for example, a multi-core processor) and typically adopts a multi-core polling packet-receiving model in which each core runs one forwarding process (thread). When multiple forwarding cores receive packets from one network port, the shared resource must be protected by a lock; that is, at any given time only one forwarding process (thread) can receive, process, and send packets on the same network port. To improve multi-core concurrency and balance the message-processing load across the cores, two distribution modes can be adopted: distribution based on the receive-side scaling (RSS) technique, or software hash distribution.
With RSS distribution, messages of the same message flow are processed by one processing unit; that is, messages with the same source IP address, destination IP address, source port number, and destination port number are treated as one message flow and forwarded to the same target processing unit among the multiple processing units.
RSS distribution computes a hash value over pre-specified fields of the message (for example, the 4-tuple in the header), and the processing unit that handles the message is determined from the computed hash value. When a large number of messages are encapsulated using a tunnel encapsulation technique, their tunneling-protocol headers are identical, or the specific fields (for example, the 4-tuple) in those headers are identical, so under the RSS technique described above the encapsulated messages are all forwarded to, and processed by, the same processing unit. This overloads some processing units in the network device and increases message-processing latency.
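The effect described above can be made concrete with a small C sketch: hashing only the outer (tunnel) 4-tuple maps every tunneled packet to one queue, no matter how many inner flows the tunnel carries. The FNV-1a hash stands in for the NIC's real RSS hash (typically a Toeplitz hash), and all names here are illustrative assumptions, not part of this application.
```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 4-tuple taken from a packet's outermost (tunnel) header;
 * the struct and field names are illustrative, not from this application. */
struct tuple4 {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* Toy stand-in for the NIC's RSS hash (real hardware typically uses a
 * Toeplitz hash keyed with a configured RSS key); FNV-1a is used here
 * only to keep the sketch self-contained. */
static uint32_t rss_hash(const struct tuple4 *t)
{
    const uint8_t *p = (const uint8_t *)t;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*t); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    /* Every packet of the same IPsec tunnel carries the same outer
     * 4-tuple, so every inner flow lands in the same queue. */
    struct tuple4 outer = { 0x0a000001u, 0x0a000002u, 4500, 4500 };
    unsigned num_queues = 8;
    printf("queue = %u\n", rss_hash(&outer) % num_queues);
    return 0;
}
```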
Disclosure of Invention
This application provides a message distribution method and apparatus for a network device, which help reduce the latency with which the network device processes a first message when messages encapsulated by a tunneling protocol are distributed among multiple processing units.
In a first aspect, a message distribution method for a network device is provided, where the network device includes multiple processing units. The method, performed by the network device, includes: receiving a first message; distributing the first message to a first processing unit among the multiple processing units for forwarding according to the tunneling-protocol header of the first message; and, if the first processing unit cannot process the first message, redistributing the first message according to the inner header of the first message.
In this embodiment of the application, when the first processing unit cannot process the first message, the first message can be redistributed based on its inner header. This helps reduce the latency with which the network device processes the first message and avoids the problem of the conventional message distribution method in which, after the first message is distributed to the first processing unit, the first processing unit cannot process it, causing message-transmission timeouts.
In one possible implementation, if the first processing unit cannot process the first message, redistributing the first message according to the inner header of the first message includes: if the load of the first processing unit satisfies a preset condition, redistributing the first message according to the inner header of the first message.
In this embodiment of the application, when the load of the first processing unit satisfies the preset condition, the first message can be redistributed based on its inner header, which helps reduce the load of the first processing unit and lays a foundation for load balancing among the multiple processing units. In conventional distribution based on the tunnel header of the message, the encapsulated messages are all forwarded to the same processing unit (for example, the first processing unit), so some processing units in the network device are overloaded while others are underloaded, causing load imbalance among the processing units in the network device.
In one possible implementation, the method further includes: obtaining a first scheduling rule for the first message; and determining, according to the first scheduling rule, that the network card of the network device redistributes the first message based on the inner header of the first message. Redistributing the first message according to its inner header then includes: redistributing, through the network card, the first message based on the inner header of the first message.
In this embodiment of the application, as indicated by the first scheduling rule, the network card redistributes the first message according to its inner header, so no new module needs to be added to implement the redistribution, which simplifies the design of the network device.
In one possible implementation, the method further includes: obtaining a second scheduling rule for the first message; and determining, according to the second scheduling rule, that a distribution module of the network device redistributes the first message based on the inner header of the first message. Redistributing the first message according to its inner header then includes: redistributing, through the distribution module, the first message based on the inner header of the first message.
In this embodiment of the application, as indicated by the second scheduling rule, the distribution module redistributes the first message according to its inner header, which broadens the scenarios in which this application can be used; for example, the method of this embodiment remains usable even when the network card does not support redistribution.
In one possible implementation, redistributing the first message according to the inner header of the first message includes: redistributing the first message, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
In one possible implementation, redistributing the first message according to the inner header of the first message includes: distributing the first message, according to its inner header, based on a receive-side scaling (RSS) policy.
In a second aspect, a network device is provided. The network device includes multiple processing units, including a first processing unit, and the network device is configured to: receive a first message; distribute the first message to the first processing unit for forwarding according to the tunneling-protocol header of the first message; and, if the first processing unit cannot process the first message, redistribute the first message according to the inner header of the first message.
In one possible implementation, the network device is further configured to: if the load of the first processing unit satisfies a preset condition, redistribute the first message according to the inner header of the first message.
In one possible implementation, the network device is further configured to: obtain a first scheduling rule for the first message; determine, according to the first scheduling rule, that the network card of the network device redistributes the first message based on its inner header; and redistribute, through the network card, the first message based on the inner header of the first message.
In one possible implementation, the network device is further configured to: obtain a second scheduling rule for the first message; determine, according to the second scheduling rule, that a distribution module of the network device redistributes the first message based on its inner header; and redistribute, through the distribution module, the first message based on the inner header of the first message.
In one possible implementation, the multiple processing units are further configured to: redistribute the first message, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
In a third aspect, a network device is provided, including a processor and a memory; the memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory, causing the network device to perform the methods of the above aspects.
In a fourth aspect, a network device is provided, including corresponding modules for performing the methods of the above aspects.
In a fifth aspect, a computer readable medium is provided, storing a computer program which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above aspects.
It should be noted that, the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and embodiments of the present application are not limited in this regard.
In a seventh aspect, a chip system is provided. The chip system includes a processor configured to support the network device in implementing the functions involved in the above aspects, for example generating, receiving, sending, or processing the data and/or information involved in the above methods. In one possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the terminal device. The chip system may consist of chips, or may include chips and other discrete devices.
Drawings
Fig. 1 shows a schematic diagram of a network device to which an embodiment of the present application is applicable.
Fig. 2 is a schematic block diagram of the format of an IPSec packet based on tunneling protocol encapsulation.
Fig. 3 is a schematic flow chart of a packet splitting method of a network device provided in the present application.
Fig. 4 is a schematic flow chart of a packet splitting method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a network device according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a network device of an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
A network device (for example, an enterprise router) is used to connect multiple logically separate networks, each of which represents a single network or subnet. A router accomplishes this when data is transferred from one subnet to another. An enterprise router mainly connects an enterprise local area network to a wide area network (the Internet), and can interconnect heterogeneous networks and multiple subnets.
For example, secure communication and resource sharing between a headquarters and a branch office can be achieved through a virtual private network (VPN): gateway device A and gateway device B, each integrating VPN functions, are deployed at the egress of the branch and headquarters networks respectively, and an Internet protocol security (IPsec) tunnel is established across operator 1 and operator 2 to provide end-to-end security for IP packet transmission through encryption, authentication, and similar means.
Over a period of time, a unidirectional message flow transmitted between a source IP address and a destination IP address consists of messages that all share the same source port number, destination port number, protocol number, and source and destination IP addresses; that is, their 5-tuples are identical.
To process network data traffic in real time, high-performance servers use multiple processors with multiple cores, and to make full use of every processor when handling large volumes of network data, network-data aggregation and dispersion techniques can be adopted: all data flows arriving from the external network are aggregated and then dispersed, according to a distribution policy, into multiple queues that correspond one-to-one with CPUs, so that each CPU processes the data in one queue and the resources of all cores are fully utilized.
A multi-core processor integrates two or more complete compute engines (cores) in a single processor. In a multi-core, multi-processor environment, access to a resource shared by multiple forwarding cores requires locking; that is, at any given time only one forwarding process can receive, process, and send packets on the same network port.
To avoid the overhead and performance penalty of locking, system designs generally try to avoid resource sharing. For network devices such as routers, switches, and network servers, messages belonging to the same message flow are processed by the same forwarding core, avoiding the cross-core resource sharing that would result from processing a message flow across forwarding cores.
In the following, related terms referred to in the present application will be briefly described.
1. Tunnel encapsulation
Tunneling is a way of transferring data between networks across an intermediate internetwork such as the Internet. The data (or payload) carried through a tunnel may be data frames or packets of a different protocol. The tunneling protocol re-encapsulates the frames or packets of the other protocol and then sends them through the tunnel; the new header provides routing information so that the encapsulated payload can be delivered across the intermediate network.
To create a tunnel, the tunnel client and server must use the same tunneling protocol. Tunneling can be based on layer 2 or layer 3 tunneling protocols. A layer 2 tunneling protocol corresponds to the data link layer of the OSI model and uses frames as the unit of data exchange: the point-to-point tunneling protocol (PPTP), the layer 2 tunneling protocol (L2TP), and the layer 2 forwarding protocol (L2F) are all layer 2 tunneling protocols, encapsulating user data in point-to-point protocol (PPP) frames for transmission over the Internet. A layer 3 tunneling protocol corresponds to the network layer of the OSI model and uses packets as the unit of data exchange: IP-in-IP (IPIP) and IPsec tunnel mode are layer 3 tunneling protocols, in which IP packets are carried across an IP network inside an additional IP header.
A tunneling protocol involves a carrier protocol for transport, an encapsulation format, and the user packets; the differences among tunneling protocols lie in how the user packets are encapsulated for transmission through the tunnel.
2. IPsec protocol
Internet protocol security (IPsec) is a protocol suite, a set of interrelated protocols, that encrypts and authenticates IP packets to protect the network transport protocol family of the IP protocol. The IPsec protocol can use Internet-routable addresses to encapsulate internal-network IP addresses, enabling interworking between different networks.
IPsec is a layer 3 tunnel encryption protocol suite defined by the IETF to provide high-quality, interoperable, cryptography-based security for data transmitted over the Internet. It defines an architecture for network data security at the IP layer, including the authentication header (AH) protocol, the encapsulating security payload (ESP) protocol, and the Internet key exchange (IKE) key-management protocol, along with the associated authentication and encryption algorithms. The AH and ESP protocols provide the security services, and the IKE protocol is used for key exchange.
The authentication header (AH) provides connectionless data integrity, message authentication, and anti-replay protection for IP datagrams; the encapsulating security payload (ESP) provides confidentiality, data-origin authentication, connectionless integrity, anti-replay protection, and limited traffic-flow confidentiality; and the security association (SA) supplies the algorithms, packet formats, and parameters required for AH and ESP to operate.
The AH protocol provides data-origin authentication, data-integrity verification, and anti-replay protection, protecting communications from tampering; it cannot resist eavesdropping, however, and is therefore suitable for transmitting non-confidential data. AH works by adding an authentication header to each packet, inserted after the standard IP header, to provide integrity protection for the data. Available authentication algorithms include message digest 5 (MD5) and the secure hash algorithm (SHA-1).
The ESP protocol provides encryption, data-origin authentication, data-integrity verification, and anti-replay protection. ESP adds an ESP header after the standard IP header of each packet and appends an ESP trailer to the packet. ESP encrypts the user data to be protected and encapsulates it in an IP packet to ensure data confidentiality. ESP encryption algorithms include DES, 3DES, and AES, and the MD5 or SHA-1 algorithm can be selected to ensure message integrity and authenticity.
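For orientation, the following C sketch lays out the ESP header and trailer fields just described in the standard ESP arrangement (SPI and sequence number in front; padding length, next header, and the integrity check value behind the payload); the struct names are illustrative assumptions.
```c
#include <stdint.h>

/* Sketch of the ESP encapsulation described above, in the standard ESP
 * arrangement: an ESP header before the protected payload, an ESP trailer
 * and integrity check value (ICV) after it. Struct names are illustrative. */
struct esp_header {
    uint32_t spi;      /* security parameters index: identifies the SA */
    uint32_t seq;      /* sequence number: supports anti-replay */
    /* encrypted payload (e.g. the encapsulated IP packet) follows */
};

struct esp_trailer {
    /* variable-length padding precedes these fields */
    uint8_t pad_len;   /* number of padding bytes */
    uint8_t next_hdr;  /* protocol of the encapsulated payload */
    /* ICV follows, e.g. an HMAC-MD5 or HMAC-SHA-1 output */
};
```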
The security mechanisms of IPsec include authentication and encryption. The authentication mechanism lets the data receiver in an IP communication confirm the true identity of the sender and whether the data was tampered with in transit; the encryption mechanism ensures data confidentiality by encrypting the data to prevent eavesdropping in transit.
IPsec security services include data confidentiality, data integrity, data-origin authentication, and anti-replay.
Data confidentiality means the IPsec sender encrypts packets before transmitting them over the network; data integrity means the IPsec receiver authenticates packets sent by the sender to ensure the data was not tampered with in transit; data-origin authentication means the IPsec receiver verifies whether the sender of an IPsec message is legitimate; and anti-replay means the IPsec receiver detects and rejects stale or duplicate messages.
3. Data message
A message is the unit of data exchanged and transmitted in a network and contains the complete data information to be sent. During transmission a message is successively encapsulated into packets and frames; encapsulation means prepending a header composed of control information, called the message header.
4. RSS splitting
Receive-side scaling (RSS) is a network-card (NIC) driver technique that efficiently distributes received messages among multiple processing units (for example, CPUs) in a multiprocessor system. With driver support for multi-queue NICs, incoming packets are hashed on their header fields to spread receive traffic across multiple processors, and the queues are bound to different cores via interrupts. The NIC parses each received message to obtain the IP addresses, protocol, and port numbers; computes a hash value over the specified header fields (for example, the 4-tuple) using the configured hash function; and uses the least significant bits (LSBs) of the hash value as an index into the redirection table (RETA) to determine the processing unit that will handle the message.
A NIC that supports multiple receive and transmit queues (a multi-queue NIC) can, when receiving a packet stream, send different packets to different queues so that they are processed in parallel by different CPUs. For each packet, the NIC uses a filter to assign the packet to one of a small number of flows; the packets of each flow are kept in a separate receive queue, and the queues are processed by the CPUs in turn. This is receive-side scaling (RSS).
The driver of a multi-queue NIC provides a kernel module parameter for specifying the number of hardware queues; for example, the bnx2x driver uses the parameter num_queues. If the device supports enough queues, the RSS configuration assigns one receive queue per CPU, or at least one receive queue per memory domain, where a memory domain is a set of CPUs that share a particular memory level (for example, L1, L2, or a NUMA node).
The indirection table of an RSS device is populated at driver initialization, with the default mapping spreading the queues evenly across the table. The indirection table can be viewed or modified at run time with the ethtool command (--show-rxfh-indir and --set-rxfh-indir); modifying it allows different weights to be assigned to different queues.
Each receive queue has a separate IRQ (interrupt number); the NIC notifies the CPU via this IRQ when a new packet arrives on the corresponding queue. PCIe devices use MSI-X to route each interrupt to a CPU, and the effective queue-to-IRQ mapping is shown in /proc/interrupts. In the default setting an interrupt can be handled by any CPU, because a significant part of packet processing occurs in the receive-interrupt handler.
RSS should be enabled when low latency is a concern or when receive-interrupt processing becomes a bottleneck. Spreading the load among CPUs shortens the queue lengths. For a low-latency network, as many queues as CPUs can be created; an efficient configuration uses the fewest queues that avoid overflow, because with interrupt coalescing enabled the total number of interrupts grows with each additional queue.
RSS distribution is what allows a NIC to make good use of a multi-core processor: a NIC with multiple RSS queues can direct different network connections into different queues and then to different CPU cores for processing, spreading the load to achieve load balancing and fully exploiting multi-core capacity.
5. Control plane and forwarding plane
The control plane and the forwarding plane of a network device may be physically or logically separated. For example, core switches and core routers may adopt physical separation: the processing unit (for example, a CPU) on the main control board does not forward messages and focuses on system control, while the processing units (for example, CPUs) on the service boards are dedicated to data-message forwarding.
The control plane is the part of the system that issues instructions and computes table entries; protocol-message exchange and protocol table-entry computation and maintenance fall within the control plane. For example, in a routing system, the process responsible for routing-protocol learning and routing-table maintenance belongs to the control plane.
The forwarding plane is the part of the system that encapsulates and forwards data messages; receiving, decapsulating, encapsulating, and forwarding data messages fall within the forwarding plane. For example, after the system receives an IP packet, it must decapsulate the packet, look up the routing table, and forward it out of an egress interface; the processes responsible for these actions belong to the forwarding plane.
After the control plane performs protocol interaction and route computation, it generates table entries and delivers them to the forwarding plane to guide message forwarding. For example, a router builds routing entries via the OSPF protocol and from them generates a forwarding information base (FIB) table, a fast-forwarding table, and the like, to guide the system in forwarding IP messages.
It should be noted that, in the embodiments of this application, the terms packet and data packet used above have the same meaning as the term message used below and may be used interchangeably.
Fig. 1 shows a schematic diagram of a network device to which an embodiment of the present application is applicable. The network device 100 shown in fig. 1 includes a network card 110, and a plurality of processing units 120.
The network card 110 has an RSS distribution function: it performs a hash calculation on each received network packet based on a 3-tuple or 5-tuple, completing the hardware distribution task by sending messages with different hash values to different processing units.
Each of the processing units 120 corresponds to a different receive queue, which stores the received messages awaiting processing; a processing unit fetches messages from its receive queue and processes them, for example by forwarding them.
It should be noted that a processing unit may be a core of a processor, in which case it is also called a forwarding core; alternatively, the multiple processing units may be separate processors. This is not limited in the embodiments of this application.
As noted above, RSS distribution computes a hash value over pre-specified fields of the message header (for example, the 5-tuple) and determines the processing unit from that hash value. When a large number of messages are encapsulated using a tunnel encapsulation technique, their tunneling-protocol headers, or the specific fields (for example, the 5-tuple) within them, are identical, so under RSS the encapsulated messages are all forwarded to, and processed by, the same processing unit. Some processing units in the network device are then overloaded while others are underloaded, causing load imbalance among the multiple processing units in the network device.
To avoid this problem, the embodiments of this application provide a message distribution scheme for multiple processing units: after a first message is distributed to a first processing unit according to its tunneling-protocol header, if the first processing unit cannot process the message, the message can be redistributed according to its inner header, which helps correct load imbalance among the multiple processing units in the network device.
For ease of understanding, the RSS distribution policy applicable to the embodiments of this application is first described with reference to Table 1; the format of an IPsec packet encapsulated by a tunneling protocol is then described with reference to fig. 2.
As shown in Table 1, the jump table (indirection table) has 256 entries and 8 receive queues (Q), with the sequence from queue 0 to queue 7 repeated cyclically across the table.
Table 1 jump table
(Table 1 contents: the jump-table entries map cyclically onto receive queues Q0 through Q7.)
RSS distribution requires pre-configuring the RSS hash key field of the network card; for example, the source IP, source port, destination IP, and destination port of the message serve as the hash key. The NIC hardware computes a hash value from the hash key and takes a number of LSBs of the hash value determined by the size of the jump table; for example, with a jump-table size of 128, the 7 LSBs are taken. The low 7 bits (values 0 to 127) correspond to the designated receive-queue numbers in the jump table, and each receive queue is bound to a processing unit, so that flows with different hash values are received and sent in parallel by the multiple processing units.
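A minimal C sketch of the jump-table lookup just described, assuming the 128-entry table and 8 queues used in the example; the hash value itself is taken as given.
```c
#include <stdint.h>

#define RETA_SIZE  128  /* jump-table size from the example above */
#define NUM_QUEUES 8

/* Jump table (indirection table): queues 0..7 repeated across all
 * entries, as in Table 1. */
static uint8_t reta[RETA_SIZE];

static void reta_init(void)
{
    for (int i = 0; i < RETA_SIZE; i++)
        reta[i] = (uint8_t)(i % NUM_QUEUES);
}

/* The 7 least significant bits of the hash (values 0..127) index the
 * table; each receive queue is bound to one processing unit. */
static uint8_t queue_for_hash(uint32_t hash)
{
    return reta[hash & (RETA_SIZE - 1)];
}
```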
Hereinafter, the message format of the present application will be briefly described.
Currently, IPsec messages are mainly tunnel-encapsulated based on the generic routing encapsulation (GRE) protocol; a GRE tunnel can encapsulate multicast and broadcast messages into unicast messages, which are then encrypted by IPsec.
A VPN can be deployed using a GRE tunnel protected by IPsec, where the IPsec-encrypted message is tunnel-encapsulated and then transmitted through the tunnel; this is known as GRE over IPsec. A virtual tunnel established over GRE runs a dynamic routing protocol; messages are filtered on the virtual tunnel interface device by configuring an access control list (ACL), quality of service (QoS), and similar techniques; and IPsec protects the data transmitted in the GRE tunnel. The specific encapsulation format of the message is shown in fig. 2.
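As a rough guide to fig. 2, the sketch below assumes one common GRE-over-IPsec layering (ESP tunnel mode carrying a GRE packet over IPv4 without options); the header lengths and the helper function are illustrative assumptions, and fig. 2 remains the authoritative format.
```c
#include <stdint.h>

/* Assumed layering (ESP tunnel mode carrying a GRE packet):
 *
 *   [ outer IP | ESP hdr | GRE delivery IP | GRE hdr | inner IP | payload | ESP trailer | ICV ]
 *
 * The outer IP and ESP headers form the tunneling-protocol header used for
 * the first-pass distribution; the decrypted part from the inner IP header
 * onward is what the second pass hashes. */
enum {
    GRE_DELIVERY_IP_LEN = 20, /* IPv4 header without options (assumed) */
    GRE_HDR_LEN         = 4,  /* basic GRE header, no optional fields (assumed) */
};

/* After ESP decryption, the plaintext starts at the GRE delivery IP
 * header; the inner IP header follows the GRE delivery and GRE headers. */
static const uint8_t *inner_ip_header(const uint8_t *esp_plaintext)
{
    return esp_plaintext + GRE_DELIVERY_IP_LEN + GRE_HDR_LEN;
}
```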
A flow chart of the message distribution method of the network device according to an embodiment of this application is described below with reference to fig. 3. The method shown in fig. 3 may be performed by the network device in which the multiple processing units are located, for example the network device shown in fig. 1; the first processing unit may be any one of the processing units. The method shown in fig. 3 includes steps 310 to 330.
310: Receive a first message.
The first message is a message encapsulated by a tunneling protocol and may include two headers: a tunneling-protocol header and an inner header. The tunneling-protocol header indicates that the first message is transmitted through a tunnel, and the inner header indicates that the first message is transmitted based on the IPsec protocol. One possible message format is shown in fig. 2.
320: Distribute the first message to the first processing unit for forwarding according to the tunneling-protocol header of the first message.
Optionally, based on the RSS distribution policy introduced above, a specific field is selected from the tunneling-protocol header, and the first message is distributed to the first processing unit according to that field.
It should be noted that the specific field may be the 5-tuple, 4-tuple, or 2-tuple of the tunneling-protocol header; this is not limited in the embodiments of this application.
330: If the first processing unit cannot process the first message, redistribute the first message according to the inner header of the first message.
It should be noted that the first processing unit may be unable to process the first message because it satisfies a preset condition, for example because the first processing unit is overloaded or its computational load reaches a preset threshold; for the specific conditions, see the description of case #A2 under the preset conditions below, which is not repeated here for brevity.
Optionally, step 330 includes: redistributing the first message, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units; one possible mapping is sketched below.
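One simple way to realize this exclusion during the second distribution is to map the inner-header hash onto the remaining units, as in the C sketch below; the function and its mapping are an illustrative assumption, not an algorithm prescribed by this application.
```c
#include <stdint.h>

/* Map the inner-header hash onto the remaining units so that the
 * overloaded first unit is excluded; name and mapping are assumptions. */
static unsigned pick_other_unit(uint32_t inner_hash,
                                unsigned num_units, unsigned first_unit)
{
    unsigned q = inner_hash % (num_units - 1); /* one fewer candidate */
    return (q >= first_unit) ? q + 1 : q;      /* skip the first unit */
}
```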
The preset conditions of the message distribution method in the embodiments of this application are described in detail below. It should be noted that when the network device includes a multi-core processor, the control unit may be a control core of the multi-core processor, and the processing unit may be a forwarding core of the multi-core processor; when the network device includes multiple processors, the control unit may be a processor used for control, and the processing unit may be a processor used for forwarding.
Case #A1
When the CPU occupancy of processing unit #1 is below a first threshold, the control unit determines from the load information of processing unit #1 that it uses the RSS distribution mode; that is, the message flow can be decrypted, decapsulated, and then forwarded directly on processing unit #1.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from the load information of processing unit #1 that it uses the RSS distribution mode.
Alternatively, when the port traffic of processing unit #1 is below a second threshold, the control unit determines from the load information of processing unit #1 that it uses the RSS distribution mode; that is, the message flow can be decrypted, decapsulated, and then forwarded directly on the processing unit.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from the load information of processing unit #1 that it uses the RSS distribution mode.
Case #A2
When the CPU occupancy of processing unit #1 exceeds the first threshold, the control unit determines from the load information of processing unit #1 that it uses the secondary distribution mode.
For example, when the CPU occupancy of processing unit #1 exceeds 90%, the control unit determines from the load information of processing unit #1 that it uses the secondary distribution mode.
Alternatively, when the port traffic of processing unit #1 exceeds the second threshold, the control unit determines from the load information of processing unit #1 that it uses the secondary distribution mode.
For example, when the port traffic of processing unit #1 exceeds 50%, the control unit determines from the load information of processing unit #1 that it uses the secondary distribution mode.
The distribution in step 330 may be implemented as hardware distribution performed by the network card or as software distribution performed by a software module; this is not limited in the embodiments of this application. Some network devices only support distributing messages received from outside and cannot process messages looped back through an internal physical channel; that is, when the network card does not support the secondary distribution of step 330, the message can be distributed in software. Conversely, if the network card supports the secondary distribution of step 330, the hardware scheme based on the network card can be used directly, implementing the message distribution without adding a software module.
It should be noted that, if the network card supports software loopback and hardware flow distribution, the message can be sent back to the network card in loopback mode, and the network card performs the second distribution according to a specific field of the original (inner) message; that is, the decapsulated message is sent directly to another processing unit.
If the network card does not support software loopback, the second distribution can be performed in software: the message is sent to a distribution module, which performs the second distribution according to the specific field of the original message; that is, the decapsulated message is sent directly to another processing unit.
In an embodiment of this application, the control plane of the network device may include a scheduling-analysis module, configured to monitor the traffic of tunnel-encapsulated message flows entering a tunnel and to determine the distribution mode of a message flow according to the load level of the processing unit.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from the load information of the processing unit that processing unit #1 uses the RSS distribution mode.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from the load information of the processing unit that processing unit #1 uses the RSS distribution mode.
For example, when the CPU occupancy of processing unit #1 exceeds 90%, the control unit determines from the load information of the processing unit that processing unit #1 uses the secondary distribution mode.
For example, when the port traffic of processing unit #1 exceeds 50%, the control unit determines from the load information of the processing unit that processing unit #1 uses the secondary distribution mode.
Optionally, the secondary distribution mode includes a hardware distribution mode and a software distribution mode.
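To make the scheduling decision concrete, the following C sketch encodes the rule from cases #A1 and #A2 together with the mode choice above. The 90% CPU-occupancy and 50% port-traffic thresholds are the examples given in the text; the type and function names are assumptions for illustration.
```c
#include <stdbool.h>

/* Distribution modes from the text: plain RSS on the tunnel header, or a
 * second-pass distribution on the inner header, done either in hardware
 * (NIC loopback) or in software (distribution module). */
enum split_mode { MODE_RSS, MODE_SECOND_HW, MODE_SECOND_SW };

/* Hypothetical load snapshot of one processing unit. */
struct unit_load {
    double cpu_occupancy; /* 0.0 .. 1.0 */
    double port_traffic;  /* 0.0 .. 1.0 of port capacity */
};

/* Cases #A1/#A2: exceed either example threshold (90% CPU occupancy or
 * 50% port traffic) and the flow goes to secondary distribution, in
 * hardware when the NIC supports loopback, otherwise in software. */
static enum split_mode choose_mode(const struct unit_load *u, bool nic_loopback)
{
    if (u->cpu_occupancy > 0.9 || u->port_traffic > 0.5)
        return nic_loopback ? MODE_SECOND_HW : MODE_SECOND_SW;
    return MODE_RSS;
}
```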
A flow chart of the message distribution method according to an embodiment of this application is described below with reference to fig. 4. The method shown in fig. 4 is performed by a network device that includes a network card, multiple forwarding cores, and a control core.
In S501, the network card (for example, an Ethernet card) receives the first message.
In S502, the network card forwards the first message to a first forwarding core among the multiple forwarding cores.
The network card may determine, through a hash calculation, the first forwarding core corresponding to the first message.
In S503, the first forwarding core reports the encrypted-message traffic of the first tunnel and its own load to the control core, where the first tunnel is the tunnel used to transmit the first message.
It should be noted that step S503 may be executed after the first forwarding core receives the first message, or the first forwarding core may report to the control core periodically; this is not limited in the embodiments of this application.
In S504, the control core generates a scheduling rule based on the encrypted-message traffic of the first tunnel and the load of the first forwarding core.
The scheduling rule may indicate different scheduling modes in different situations, as described below for three cases.
Scheduling mode 1: when the first forwarding core is overloaded and the network card supports the secondary distribution mode, the scheduling rule instructs the network card to perform the secondary distribution according to the inner header of the first message.
Scheduling mode 2: when the first forwarding core is overloaded and the network card does not support the secondary distribution mode, the scheduling rule instructs the distribution module to perform the secondary distribution according to the inner header of the first message.
Scheduling mode 3: when the first forwarding core is not overloaded, the scheduling rule instructs the first forwarding core to forward the first message directly.
In scheduling mode 3, the specific flow by which the first forwarding core forwards the first message can follow the conventional message-forwarding flow; for brevity it is not described here.
The load information of the first forwarding core may be the CPU occupancy of the forwarding core, the port-traffic information of the forwarding core, and the like.
Case #A1
When the CPU occupancy of forwarding core #1 is below a first threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode; that is, the message flow can be decrypted, decapsulated, and then forwarded directly on the forwarding core.
For example, when the CPU occupancy of forwarding core #1 is below 90%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode.
Alternatively, when the port traffic of forwarding core #1 is below a second threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode; that is, the message flow can be decrypted, decapsulated, and then forwarded directly on the forwarding core.
For example, when the port traffic of forwarding core #1 is below 50%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode.
Case #A2
When the CPU occupancy of forwarding core #1 exceeds the first threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
For example, when the CPU occupancy of forwarding core #1 exceeds 90%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
Alternatively, when the port traffic of forwarding core #1 exceeds the second threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
For example, when the port traffic of forwarding core #1 exceeds 50%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
In S505, the control core sends the scheduling rule to the first forwarding core.
In S506, the first forwarding core decapsulates the first message flow, so that the first message can be secondarily distributed according to its inner header.
In S507, if the scheduling rule instructs the network card to perform the secondary distribution of the first message, the first forwarding core forwards the decapsulated first message to the network card; if the scheduling rule instructs the distribution module to perform the secondary distribution of the first message, the first forwarding core forwards the decapsulated first message to the distribution module.
After the first message is forwarded to the network card, the network card performs the secondary distribution based on the inner header: specifically, it may compute a hash over the inner message and determine the forwarding core anew. Likewise, after the first message is forwarded to the distribution module, the distribution module performs the secondary distribution based on the inner header, computing a hash over the inner message and determining the forwarding core anew.
It should further be noted that, after the forwarding core is re-determined, the decapsulated first message can be delivered to the re-determined forwarding core in the conventional distribution manner. For example, the network card may send the decapsulated first message to a packet-receiving module, which forwards it to the re-determined forwarding core; similarly, the distribution module may send the decapsulated first message to the packet-receiving module, which forwards it to the re-determined forwarding core.
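As a rough consolidation of steps S501 to S507, the sketch below strings the flow together in C. Every type and function is a hypothetical stub standing in for real driver and forwarding-plane code; none of these names come from this application.
```c
/* All types and functions below are hypothetical stubs standing in for
 * real driver and forwarding-plane code. */
struct pkt; /* opaque packet handle */
enum split_mode { MODE_RSS, MODE_SECOND_HW, MODE_SECOND_SW };

extern int  rss_queue_for_outer_header(struct pkt *p);     /* S501-S502 */
extern void report_load(int core_id);                      /* S503 */
extern enum split_mode fetch_scheduling_rule(int core_id); /* S504-S505 */
extern void decrypt_and_decapsulate(struct pkt *p);        /* S506 */
extern void forward(struct pkt *p);
extern void loopback_to_nic(struct pkt *p);      /* NIC re-hashes inner header */
extern void send_to_split_module(struct pkt *p); /* software re-hash */

static void handle_first_message(struct pkt *p)
{
    int core = rss_queue_for_outer_header(p); /* first-pass distribution */
    report_load(core);                        /* traffic and load to control core */

    enum split_mode m = fetch_scheduling_rule(core);
    decrypt_and_decapsulate(p);

    switch (m) {                               /* S507 */
    case MODE_SECOND_HW: loopback_to_nic(p);      break; /* scheduling mode 1 */
    case MODE_SECOND_SW: send_to_split_module(p); break; /* scheduling mode 2 */
    case MODE_RSS:       forward(p);              break; /* scheduling mode 3 */
    }
}
```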
The methods of the embodiments of this application are described above with reference to figs. 1 to 4, and the apparatuses of the embodiments are described below with reference to figs. 5 and 6. It should be noted that the apparatuses shown in figs. 5 and 6 can implement each step of the above methods; for brevity, this is not repeated here.
Fig. 5 is a schematic diagram of a network device according to an embodiment of this application. The network device includes multiple processing units, and the network device 500 shown in fig. 5 may include a receiving module 510 and a processing module 520.
The network device receives the first message through the receiving module 510.
Through the processing module 520, the network device distributes the first message to the first processing unit for forwarding according to the tunneling-protocol header of the first message.
If the first processing unit cannot process the first message, the network device redistributes the first message, through the processing module 520, according to the inner header of the first message.
Optionally, as an embodiment, the network device is further configured to: obtain, through the processing module 520, a first scheduling rule for the first message; determine, through the processing module 520 and according to the first scheduling rule, that the network card of the network device redistributes the first message based on its inner header; and redistribute, through the network card, the first message based on the inner header of the first message.
Optionally, as an embodiment, the network device is further configured to: obtain, through the processing module 520, a second scheduling rule for the first message; determine, through the processing module 520 and according to the second scheduling rule, that the distribution module of the network device redistributes the first message based on its inner header; and redistribute, through the distribution module, the first message based on the inner header of the first message.
Optionally, as an embodiment, if the load of the first processing unit satisfies a preset condition, the network device redistributes the first message, through the processing module 520, according to the inner header of the first message.
Optionally, as an embodiment, the network device redistributes the first message, through the processing module 520 and according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
Optionally, as an embodiment, the network device distributes the first message, through the processing module 520 and according to its inner header, based on a receive-side scaling (RSS) policy.
In an alternative embodiment, the processing module 520 may be part of the cores of the processor 620, and the receiving module 510 may be the input/output interface 630. The network device 600 may also include a memory 610, as shown in fig. 6.
Fig. 6 is a schematic block diagram of a network device of an embodiment of this application. The network device 600 shown in fig. 6 may include a memory 610, a processor 620, and an input/output interface 630, connected through communication connections. The memory 610 is configured to store program instructions, and the processor 620 is configured to execute the program instructions stored in the memory 610 so as to control the input/output interface 630 to receive input data and information and to output data such as operation results.
The processor 620 includes multiple cores. Each core may correspond to a processing unit as described above.
It should be appreciated that in the embodiments of the present application, the processor 620 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (application specific integrated circuit, ASIC), or one or more integrated circuits for executing related programs to implement the solutions provided in the embodiments of the present application.
The memory 610 may include read-only memory and random access memory, and provides instructions and data to the processor 620. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device-type information.
In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware or instructions in software in the processor 620. The method disclosed in connection with the embodiments of the present application may be embodied directly in hardware processor execution or in a combination of hardware and software modules in a processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 610, and the processor 620 reads the information in the memory 610 and, in combination with its hardware, performs the steps of the method described above. To avoid repetition, a detailed description is not provided herein.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (central processing unit, CPU), the processor may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should also be appreciated that in embodiments of the present application, the memory may include read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device-type information.
It should also be understood that, in the embodiments of the present application, "first," "second," "third," etc. are merely for distinction and are not limited in order.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and other division manners may exist in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for packet distribution in a network device, wherein the network device comprises a multi-processing unit and the method is performed by the network device, the method comprising:
receiving a first message;
shunting the first message to a first processing unit of the multi-processing unit for forwarding according to the tunnel protocol message header of the first message; and
if the first processing unit cannot process the first message, shunting the first message again according to the inner layer message header of the first message; wherein the tunnel protocol message header is used for indicating that the first message is transmitted through a tunnel, and the inner layer message header is used for indicating that the first message is transmitted based on the IPSec protocol.
2. The method of claim 1, wherein, if the first processing unit cannot process the first message, the shunting the first message again according to the inner layer message header of the first message comprises:
shunting the first message again according to the inner layer message header of the first message if the load of the first processing unit meets a preset condition.
3. The method of claim 1 or 2, wherein the method further comprises:
acquiring a first scheduling rule of the first message; and
determining, according to the first scheduling rule, that a network card of the network device shunts the first message again according to the inner layer message header of the first message;
wherein the shunting the first message again according to the inner layer message header of the first message comprises:
shunting, by the network card, the first message again based on the inner layer message header of the first message.
4. The method of claim 3, wherein the method further comprises:
acquiring a second scheduling rule of the first message;
determining, according to the second scheduling rule, that a splitting module of the network device shunts the first message again according to the inner layer message header of the first message;
wherein the shunting the first message again according to the inner layer message header of the first message comprises:
shunting, by the splitting module, the first message again based on the inner layer message header of the first message.
5. The method of any one of claims 1, 2 or 4, wherein the shunting the first message again according to the inner layer message header of the first message comprises:
shunting, according to the inner layer message header of the first message, the first message again to a processing unit of the multi-processing unit other than the first processing unit.
6. A network device, wherein the network device comprises a multi-processing unit, the multi-processing unit comprises a first processing unit, and the network device is configured to:
receive a first message; and
shunt the first message to the first processing unit for forwarding according to the tunnel protocol message header of the first message;
wherein, if the first processing unit cannot process the first message, the first processing unit is further configured to shunt the first message again according to the inner layer message header of the first message; the tunnel protocol message header is used for indicating that the first message is transmitted through a tunnel, and the inner layer message header is used for indicating that the first message is transmitted based on the IPSec protocol.
7. The network device of claim 6, wherein the network device is further configured to:
shunt the first message again according to the inner layer message header of the first message if the load of the first processing unit meets a preset condition.
8. The network device of claim 6 or 7, wherein the network device is further configured to:
acquire a first scheduling rule of the first message;
determine, according to the first scheduling rule, that a network card of the network device shunts the first message again according to the inner layer message header of the first message; and
shunt, through the network card, the first message again based on the inner layer message header of the first message.
9. The network device of claim 8, wherein the network device is further configured to:
acquire a second scheduling rule of the first message;
determine, according to the second scheduling rule, that a splitting module of the network device shunts the first message again according to the inner layer message header of the first message; and
shunt, through the splitting module, the first message again based on the inner layer message header of the first message.
10. The network device of any one of claims 6, 7 or 9, wherein the multi-processing unit is further configured to:
shunt, according to the inner layer message header of the first message, the first message again to a processing unit of the multi-processing unit other than the first processing unit.
CN202210934791.XA 2019-05-21 2019-05-21 Message distribution method and device of network equipment Active CN115174482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210934791.XA CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210934791.XA CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN201910423117.3A CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910423117.3A Division CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Publications (2)

Publication Number Publication Date
CN115174482A (en) 2022-10-11
CN115174482B (en) 2023-06-02

Family

ID=73435839

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210934791.XA Active CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN201910423117.3A Active CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910423117.3A Active CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Country Status (1)

Country Link
CN (2) CN115174482B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112953841B * 2021-02-20 2022-05-27 Hangzhou DPtech Information Technology Co., Ltd. Message distribution method and system
CN113157445B * 2021-03-30 2022-04-08 Zhengzhou Xinda Jiean Information Technology Co., Ltd. Bidirectional message symmetric RSS processing method and system based on hash operation and index value comparison
CN116055397B * 2023-03-27 2023-08-18 Jingxin Microelectronics Technology (Tianjin) Co., Ltd. Queue entry maintenance method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055679A * 2011-01-28 2011-05-11 National University of Defense Technology Message scheduling method for power consumption control in forwarding engine
CN104468391A * 2014-12-16 2015-03-25 Centec Networks (Suzhou) Co., Ltd. Method and system for achieving load balance according to user information of tunnel message
WO2017162117A1 * 2016-03-25 2017-09-28 Alibaba Group Holding Ltd. Accurate speed limiting method and apparatus for cluster
CN108737239A * 2018-08-30 2018-11-02 New H3C Technologies Co., Ltd. Message forwarding method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7646708B2 * 2005-08-01 2010-01-12 Hewlett-Packard Development Company, L.P. Network resource teaming combining receive load-balancing with redundant network connections
CN100496024C * 2006-09-26 2009-06-03 Hangzhou H3C Technologies Co., Ltd. A method to forward the channel message and a network device
CN100508499C * 2006-11-02 2009-07-01 Hangzhou H3C Technologies Co., Ltd. Multi-core processor for realizing adaptive dispatching and multi-core processing method
EP2712128B1 * 2011-07-06 2016-01-13 Huawei Technologies Co., Ltd. Message processing method and related device thereof
CN102624611B * 2011-12-31 2015-01-21 Huawei Digital Technologies (Chengdu) Co., Ltd. Method, device, processor and network equipment for message dispersion
US9820182B2 * 2013-07-12 2017-11-14 Telefonaktiebolaget LM Ericsson (publ) Method for enabling control of data packet flows belonging to different access technologies
US9742626B2 * 2014-09-16 2017-08-22 CloudGenix, Inc. Methods and systems for multi-tenant controller based mapping of device identity to network level identity
CN106209664A * 2016-07-22 2016-12-07 Maipu Communication Technology Co., Ltd. Data transmission method, apparatus and system
US20180285151A1 * 2017-03-31 2018-10-04 Intel Corporation Dynamic load balancing in network interface cards for optimal system level performance
CN108270699B * 2017-12-14 2020-11-24 China UnionPay Co., Ltd. Message processing method, shunt switch and aggregation network

Also Published As

Publication number Publication date
CN111988211B (en) 2022-09-09
CN115174482A (en) 2022-10-11
CN111988211A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
JP6288802B2 (en) Improved IPsec communication performance and security against eavesdropping
US10404588B2 (en) Path maximum transmission unit handling for virtual private networks
US7835285B2 (en) Quality of service, policy enhanced hierarchical disruption tolerant networking system and method
US7596806B2 (en) VPN and firewall integrated system
CN115174482B (en) Message distribution method and device of network equipment
US7082477B1 (en) Virtual application of features to electronic messages
WO2018175140A1 (en) Hardware-accelerated secure communication management
US20070165638A1 (en) System and method for routing data over an internet protocol security network
US10601610B2 (en) Tunnel-level fragmentation and reassembly based on tunnel context
CN104272674A (en) Multi-tunnel virtual private network
JP2009506617A (en) System and method for processing secure transmission information
CN111385259B (en) Data transmission method, device, related equipment and storage medium
US10841840B2 (en) Processing packets in a computer system
US20190372948A1 (en) Scalable flow based ipsec processing
JP2016508682A (en) Method and arrangement for differentiating VPN traffic across domains by QOS
US20080133915A1 (en) Communication apparatus and communication method
CN106161386B (en) Method and device for realizing IPsec (Internet protocol Security) shunt
Liu et al. P4NIS: Improving network immunity against eavesdropping with programmable data planes
CN116260579A (en) Message encryption and decryption method for IP packet
Tennekoon et al. Prototype implementation of fast and secure traceability service over public networks
US10547532B2 (en) Parallelization of inline tool chaining
CN111669374A (en) Encryption and decryption performance expansion method for single tunnel software of IPsec VPN
CN114039795B (en) Software defined router and data forwarding method based on same
US20220210131A1 (en) System and method for secure file and data transfers
CN114731292A (en) Low latency medium access control security authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant