CN115174482A - Message distribution method and device of network equipment - Google Patents

Message distribution method and device of network equipment

Info

Publication number
CN115174482A
Authority
CN
China
Prior art keywords
message
packet
processing unit
header
shunting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210934791.XA
Other languages
Chinese (zh)
Other versions
CN115174482B (en)
Inventor
吴轩
李强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
XFusion Digital Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XFusion Digital Technologies Co Ltd filed Critical XFusion Digital Technologies Co Ltd
Priority to CN202210934791.XA priority Critical patent/CN115174482B/en
Publication of CN115174482A publication Critical patent/CN115174482A/en
Application granted granted Critical
Publication of CN115174482B publication Critical patent/CN115174482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/12 - Avoiding congestion; Recovering from congestion
    • H04L47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46 - Interconnection of networks
    • H04L12/4633 - Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a packet distribution method and apparatus for a network device, where the network device includes multiple processing units and the method is performed by the network device. The method includes: receiving a first packet; distributing the first packet, according to the tunneling protocol header of the first packet, to a first processing unit among the multiple processing units for forwarding; and, if the first processing unit cannot process the first packet, redistributing the first packet according to the inner header of the first packet. When packets encapsulated by a tunneling protocol are distributed among multiple processing units, this helps reduce the delay with which the network device processes the first packet.

Description

Message distribution method and device of network equipment
This application is a divisional application of invention patent application No. 201910423117.3, filed with the Chinese Patent Office on May 21, 2019, and entitled "Message shunting method and device of network equipment".
Technical Field
The present application relates to the field of information technologies, and in particular, to a packet distribution method and apparatus for a network device.
Background
A network device generally includes multiple processing units (e.g., a multi-core processor) and generally adopts a multi-core polling packet-receiving model in which each core corresponds to one forwarding process (thread). When multiple forwarding cores receive packets from one network port, the shared resource must be protected by a lock; that is, only one forwarding process (thread) at a time can receive, process, and send packets from the same network port. To improve multi-core concurrency and balance packet-processing load across the cores, two distribution approaches may be used: hardware distribution based on the receive-side scaling (RSS) technique, or software hash distribution.
In the RSS packet distribution technique, all packets of the same packet flow are processed by one processing unit; that is, packets having the same source IP address, destination IP address, source port number, and destination port number are treated as one packet flow and forwarded to a single target processing unit among the multiple processing units.
The RSS distribution technique calculates a hash value based on a pre-specified field of a packet (e.g., a 4-tuple in the packet header) and then determines, based on the calculated hash value, the processing unit that processes the packet. When a large number of packets are encapsulated using a tunneling technique, the tunneling protocol headers of those packets are the same, or specific fields (e.g., the 4-tuple) in those headers are the same. Under the RSS technique described above, the encapsulated packets are then all forwarded to, and processed by, the same processing unit. As a result, some processing units in the network device become overloaded and the packet-processing delay increases.
Disclosure of Invention
The application provides a packet distribution method and apparatus for a network device, which help reduce the delay with which the network device processes a first packet when packets encapsulated by a tunneling protocol are distributed among multiple processing units.
In a first aspect, a packet distribution method for a network device is provided, where the network device includes multiple processing units, and the method is performed by the network device and includes: receiving a first packet; distributing the first packet, according to the tunneling protocol header of the first packet, to a first processing unit among the multiple processing units for forwarding; and, if the first processing unit cannot process the first packet, redistributing the first packet according to the inner header of the first packet.
In this embodiment of the application, when the first processing unit cannot process the first packet, the first packet can be redistributed based on its inner header. This helps reduce the delay with which the network device processes the first packet and avoids the situation, common in conventional packet distribution, in which the first packet is distributed to the first processing unit, the first processing unit cannot process it, and packet transmission times out.
In a possible implementation, redistributing the first packet according to its inner header if the first processing unit cannot process it includes: redistributing the first packet according to its inner header if the load of the first processing unit meets a preset condition.
In this embodiment of the application, when the load of the first processing unit meets the preset condition, the first packet can be redistributed based on its inner header, which helps reduce the load of the first processing unit and thus provides a basis for load balancing among the multiple processing units. This avoids the conventional problem of distribution based solely on the tunnel header of a packet, in which all encapsulated packets are forwarded to the same processing unit (e.g., the first processing unit), leaving some processing units in the network device heavily overloaded and others underloaded, and causing a load imbalance among the processing units of the network device.
In a possible implementation, the method further includes: obtaining a first scheduling rule for the first packet; and determining, according to the first scheduling rule, that the network card of the network device is to redistribute the first packet based on its inner header. Redistributing the first packet according to its inner header then includes: redistributing the first packet through the network card based on the inner header of the first packet.
In this embodiment of the application, the network card redistributes the first packet according to its inner header as indicated by the first scheduling rule, so no new module needs to be added to implement redistribution, which helps reduce the complexity of the network device.
In a possible implementation, the method further includes: obtaining a second scheduling rule for the first packet; and determining, according to the second scheduling rule, that a distribution module of the network device is to redistribute the first packet based on its inner header. Redistributing the first packet according to its inner header then includes: redistributing the first packet through the distribution module based on the inner header of the first packet.
In this embodiment of the application, redistributing the first packet according to its inner header as indicated by the second scheduling rule broadens the application scenarios of this application; for example, the method of this embodiment can be used even when the network card does not support redistribution.
In a possible implementation, redistributing the first packet according to its inner header includes: redistributing the first packet, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
In a possible implementation, redistributing the first packet according to its inner header includes: distributing the first packet, according to its inner header, based on a receive-side scaling (RSS) policy.
In a second aspect, a network device is provided. The network device includes multiple processing units, including a first processing unit, and is configured to: receive a first packet; distribute the first packet, according to the tunneling protocol header of the first packet, to the first processing unit for forwarding; and, if the first processing unit cannot process the first packet, redistribute the first packet according to the inner header of the first packet.
In a possible implementation, the network device is further configured to: redistribute the first packet according to its inner header if the load of the first processing unit meets a preset condition.
In a possible implementation, the network device is further configured to: obtain a first scheduling rule for the first packet; determine, according to the first scheduling rule, that the network card of the network device is to redistribute the first packet based on its inner header; and redistribute the first packet through the network card based on the inner header of the first packet.
In a possible implementation, the network device is further configured to: obtain a second scheduling rule for the first packet; determine, according to the second scheduling rule, that the distribution module of the network device is to redistribute the first packet based on its inner header; and redistribute the first packet through the distribution module based on the inner header of the first packet.
In a possible implementation, the multiple processing units are further configured to: redistribute the first packet, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
In a third aspect, a network device is provided, which includes a processor and a memory, the memory is used for storing a computer program, and the processor is used for calling and running the computer program from the memory, so that the network device executes the method of the above aspects.
In a fourth aspect, a network device is provided that includes various modules for performing the methods in the various aspects described above.
In a fifth aspect, a computer-readable medium is provided, which stores a computer program, which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
It should be noted that, all or part of the computer program code may be stored in the first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and this is not specifically limited in this embodiment of the present application.
In a seventh aspect, a system on chip is provided, which includes a processor configured to implement the functions recited in the above aspects, such as generating, receiving, sending, or processing data and/or information recited in the above methods. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the terminal device. The chip system may be formed by a chip, or may include a chip and other discrete devices.
Drawings
Fig. 1 shows a schematic diagram of a network device to which the embodiment of the present application is applied.
Fig. 2 is a schematic block diagram of the format of an IPSec packet encapsulated based on a tunneling protocol.
Fig. 3 is a schematic flowchart of a packet distribution method of a network device provided in the present application.
Fig. 4 is a schematic flowchart of a packet distribution method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a network device according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a network device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Network devices (e.g., enterprise routers) are used to connect multiple logically separate networks, where a logical network represents a single network or a subnetwork. When data is transferred from one subnet to another, a router can perform the transfer. An enterprise router mainly connects an enterprise local area network (LAN) to the wide area network (the Internet), and can be used to interconnect heterogeneous enterprise networks and multiple subnets.
For example, secure communication and resource sharing between a headquarters and its branch offices may be implemented through a virtual private network (VPN): gateway device A and gateway device B, each integrating the VPN function, are deployed at the egress of the branch office and the headquarters respectively, an internet protocol security (IPsec) tunnel is established through operator 1 and operator 2, and end-to-end security services are provided for the transmission of IP packets by means of encryption, authentication, and the like.
A packet flow is a unidirectional stream of packets transmitted between a source IP address and a destination IP address over a period of time, in which all packets share the same source IP address, destination IP address, source port number, destination port number, and protocol number; that is, the 5-tuple is identical.
To process network traffic in real time, a high-performance server adopts multiple paths and multiple cores, and aggregation and dispersion techniques for network data can be used so that every processor is fully utilized for the large volume of network data. All data flows arriving from the external network are first aggregated, and the aggregated flows are then dispersed into multiple queues according to a distribution policy, with the queues mapped one-to-one to CPUs (i.e., one CPU processes the data in one queue), so that the multi-path, multi-core CPU resources can be fully utilized.
A multi-core processor integrates two or more complete compute engines (cores) in one processor. In a multi-core, multi-processor environment, access to a resource shared by multiple forwarding cores requires locking; that is, only one forwarding process at a time can receive, process, and send packets from the same network port.
To avoid the overhead of locking and its negative impact on system performance, resource sharing is usually avoided at design time. For network devices such as routers, switches, and network servers, packets belonging to the same packet flow are processed by the same forwarding core, avoiding the cross-core resource sharing that would result from processing a packet across forwarding cores.
In the following, related terms to which this application relates are briefly described.
1. Tunnel encapsulation
Tunneling is a way of transferring data between networks over the infrastructure of an internetwork. The data (or payload) carried through a tunnel may be data frames or packets of a different protocol. A tunneling protocol re-encapsulates the data frames or packets of other protocols and then sends them through the tunnel. The new header provides routing information for delivering the encapsulated payload across the internetwork.
To create a tunnel, the client and the server of the tunnel must use the same tunneling protocol. Tunneling may be based on layer 2 or layer 3 tunneling protocols. A layer 2 tunneling protocol corresponds to the data link layer of the OSI model and uses frames as its unit of data exchange. The point-to-point tunneling protocol (PPTP), the layer 2 tunneling protocol (L2TP), and layer 2 forwarding (L2F) are all layer 2 tunneling protocols; they encapsulate user data in point-to-point protocol (PPP) frames sent across the Internet. A layer 3 tunneling protocol corresponds to the network layer of the OSI model and uses packets as its unit of data exchange. IP over IP (IPIP) and the IPSec tunnel mode are layer 3 tunneling protocols; they encapsulate IP packets in an additional IP header for transmission across an IP network.
A tunneling protocol involves a transport bearer, an encapsulation format, and the user packets; tunneling protocols differ mainly in which encapsulation carries the user packets as they are transported through the tunnel.
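Conceptually, the encapsulation step just described prepends a new outer header to the original frame or packet. A trivial C sketch follows; the function and its names are illustrative only, not any protocol's actual format:

    /* Conceptual sketch of tunnel encapsulation: prepend a new outer
     * header to the existing packet so the result can be routed
     * through the internetwork. Purely illustrative. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Writes outer_hdr followed by the original packet into out;
     * returns the encapsulated length, or 0 if out is too small. */
    static size_t encapsulate(uint8_t *out, size_t out_cap,
                              const uint8_t *outer_hdr, size_t hdr_len,
                              const uint8_t *pkt, size_t pkt_len)
    {
        if (hdr_len + pkt_len > out_cap)
            return 0;
        memcpy(out, outer_hdr, hdr_len);      /* new routing header   */
        memcpy(out + hdr_len, pkt, pkt_len);  /* original packet body */
        return hdr_len + pkt_len;
    }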
2. IPsec protocol
Internet protocol security (IPsec) is a suite of interrelated protocols that encrypts and authenticates the packets of the IP protocol to protect IP-based network transport. Through packet encapsulation, IPsec can encapsulate the IP addresses of an internal network inside Internet-routable addresses, enabling interconnection between networks at different sites.
IPsec is a layer 3 tunnel encryption protocol suite established by the IETF that provides high-quality, interoperable, cryptography-based security guarantees for data transmitted over the Internet. IPsec provides an architecture for network data security at the IP layer, including the Authentication Header (AH) protocol, the Encapsulating Security Payload (ESP) protocol, the Internet Key Exchange (IKE) protocol, and algorithms for network authentication and encryption. The AH and ESP protocols provide security services, and the IKE protocol is used for key exchange.
The Authentication Header (AH) provides connectionless data integrity, message authentication, and anti-replay protection for IP datagrams; the Encapsulating Security Payload (ESP) provides confidentiality, data origin authentication, connectionless integrity, anti-replay protection, and limited traffic-flow confidentiality; and a security association (SA) provides the algorithms and parameters needed for AH and ESP operation.
The AH protocol provides data origin authentication, data integrity verification, and protection against message replay; it protects communications from tampering but does not prevent eavesdropping, so it is suitable for transmitting non-confidential data. AH works by adding an identity-authentication header to each packet, inserted after the standard IP header, to provide integrity protection for the data. Available authentication algorithms include message digest 5 (MD5) and secure hash algorithm 1 (SHA-1).
The ESP protocol provides encryption, data origin authentication, data integrity verification, and anti-replay protection. ESP adds an ESP header after the standard IP header of each packet and appends an ESP trailer after the payload. ESP encrypts the user data to be protected and encapsulates it in an IP packet to ensure confidentiality. ESP encryption algorithms include DES, 3DES, and AES, and the MD5 and SHA-1 algorithms can be selected to ensure the integrity and authenticity of messages.
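For reference, the ESP wire format just described can be sketched in C as follows; this is a minimal illustration based on the standard ESP layout (RFC 4303), not taken from this application, and the struct and field names are ours:

    /* Illustrative sketch of the ESP wire format (RFC 4303). The ESP
     * header precedes the encrypted payload; an ESP trailer (padding,
     * pad length, next header) and the ICV follow it. */
    #include <stdint.h>

    struct esp_header {
        uint32_t spi;     /* Security Parameters Index: selects the SA    */
        uint32_t seq;     /* sequence number, used for anti-replay checks */
        /* encrypted payload (the protected user data) follows here      */
    };

    struct esp_trailer {
        /* variable-length padding precedes these two fields */
        uint8_t pad_len;  /* number of padding bytes                      */
        uint8_t next_hdr; /* protocol type of the protected payload       */
        /* Integrity Check Value (ICV) follows; length is set by the SA  */
    };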
The security mechanisms of IPsec comprise authentication and encryption: the authentication mechanism enables the data receiver in IP communication to confirm the true identity of the data sender and whether the data was tampered with in transit, while the encryption mechanism guarantees data confidentiality by encrypting the data to prevent interception in transit.
IPsec security services include data confidentiality, data integrity, data origin authentication, and anti-replay protection.
Data confidentiality means the IPsec sender encrypts packets before transmitting them over the network; data integrity means the IPsec receiver authenticates packets sent by the sender to ensure the data was not tampered with in transit; data origin authentication means the receiver verifies whether the sender of an IPsec message is legitimate; and anti-replay protection means the IPsec receiver detects and rejects outdated or duplicate messages.
3. Data message
A message is a unit of data exchanged and transmitted in a network, i.e., a unit of network transmission, containing the complete data information to be sent. During transmission a message is successively encapsulated into packets and frames; each encapsulation adds a header composed of control information, called the message header.
4. RSS splitting
Receive-side scaling (RSS) is a network card (NIC) driver technology that enables efficient distribution of received packets among the multiple processing units (e.g., CPUs) of a multiprocessor system. With the support of a multi-queue NIC driver, this multi-queue technology hashes the header fields of incoming packets to distribute received network traffic among multiple processors, and binds each queue to a different core through its interrupt. The NIC parses each received packet to obtain the IP addresses, protocol, and port numbers. Using the configured hash function, the NIC computes a hash value from specific header fields (e.g., the 4-tuple), and the least significant bits (LSBs) of the hash value serve as an index into the redirection table (RETA) to determine the processing unit that will handle the packet.
A NIC that supports multiple receive and transmit queues is a multi-queue NIC. When receiving a packet stream, the NIC can send different packets to different queues so that processing is distributed among different CPUs. Through a filter, the NIC assigns each packet to one of a small number of flows; the packets of each flow are steered to a separate receive queue, and the queues are processed in turn by the CPUs. This is what is referred to as receive-side scaling (RSS).
The driver of a multi-queue NIC provides a kernel module parameter for specifying the number of hardware queues; for example, the parameter used by the bnx2x driver is num_queues. If the device supports enough queues, the RSS configuration assigns one receive queue per CPU; otherwise, it assigns at least one receive queue per memory domain, where a memory domain is a set of CPUs that share a particular memory level (e.g., L1, L2, or a NUMA node).
The indirection table of an RSS device is mapped when the driver initializes, and the default mapping distributes the queues evenly across the table. The indirection table can be viewed or modified at run time using the ethtool command (--show-rxfh-indir and --set-rxfh-indir); modifying it allows different weights to be set for different queues.
Each receive queue has a separate IRQ (interrupt number). When a new packet arrives in a given queue, the NIC notifies the bound CPU through this IRQ. PCIe devices use MSI-X to route each interrupt to a particular CPU, and the effective queue-to-IRQ mapping can be seen in /proc/interrupts. In the default setting an interrupt can be handled by any CPU; because the bulk of packet processing takes place in the receive interrupt handler, spreading receive interrupts across CPUs is beneficial.
RSS should be enabled when low latency is a concern or whenever receive interrupt processing becomes a bottleneck: spreading the load across CPUs shortens queue lengths. For a low-latency network, as many queues as there are CPUs can be created. The most efficient configuration is the one with the fewest queues that experiences no queue overflow, because with interrupt coalescing enabled by default the total number of interrupts grows with each additional queue.
RSS distribution is a prerequisite for a NIC to make good use of a multi-core processor: a NIC with multiple RSS queues can steer different network connections into different queues and dispatch them to different CPU cores, dispersing the load to achieve load balancing and making full use of the multi-core processor's capacity.
5. Control plane and forwarding plane
The control plane and forwarding plane of a network device may be physically or logically separated; for example, in core switches and core routers they may be physically separated. The processing unit (e.g., a CPU) on the main control board is not responsible for packet forwarding and is dedicated to system control, while the processing units (e.g., CPUs) on the service boards are dedicated to data packet forwarding.
The control plane is the part of the system used for transmitting instructions and computing table entries. Forwarding protocol packets and calculating and maintaining protocol table entries, for example, belong to the control plane; in a routing system, the process responsible for routing protocol learning and routing table maintenance belongs to the control plane.
The forwarding plane is the part of the system used for encapsulating and forwarding data packets. Receiving, decapsulating, encapsulating, and forwarding data packets, for example, belong to the forwarding plane; after the system receives an IP packet, decapsulating it, looking up the routing table, and forwarding it out of an egress interface are handled by forwarding-plane processes.
After the control plane performs protocol interaction and route calculation, it generates table entries and delivers them to the forwarding plane to guide packet forwarding. For example, a router establishes routing table entries through the OSPF protocol and further generates a forwarding information base (FIB) table, a fast forwarding table, and the like to guide the system in forwarding IP packets.
It should be noted that, in the embodiments of the present application, the terms packet and data packet used above have the same meaning as message used below, and the terms may be used interchangeably.
Fig. 1 shows a schematic diagram of a network device to which the embodiments of the present application apply. The network device 100 shown in fig. 1 includes a network card 110 and multiple processing units 120.
The network card 110 has an RSS distribution function: it performs a hash calculation over a 3-tuple or 5-tuple of each received network packet to accomplish hardware distribution, steering packets with different hash values to different processing units.
Each processing unit 120 corresponds to a different receive queue. The receive queue stores received packets awaiting processing, and the processing unit fetches packets from its corresponding receive queue and processes them, for example by forwarding.
It should be noted that the processing unit may be a core in a processor, and the processing unit is also called a forwarding core. The multiple processing units may also be independent processors, which is not limited in this embodiment of the present application.
The RSS distribution technique calculates a hash value from a pre-specified packet field (e.g., the 5-tuple in the packet header) and then determines, from that hash value, the processing unit that will handle the packet. When a large number of packets are encapsulated with a tunneling technique, their tunneling protocol headers are identical, or specific fields (e.g., the 5-tuple) within those headers are identical; under the RSS technique described above, the encapsulated packets are therefore all forwarded to, and processed by, the same processing unit because those fields are identical. Some processing units in the network device may thus be overloaded while others are underloaded, causing a load imbalance among the multiple processing units.
To avoid this problem, the embodiments of the present application provide a packet distribution scheme based on multiple processing units: after a first packet has been distributed to a first processing unit according to its tunneling protocol header, if the first processing unit cannot process the packet, the packet can be redistributed according to its inner header, which helps mitigate load imbalance among the multiple processing units in the network device.
For ease of understanding, the RSS distribution policy applicable to the embodiments of the present application is described in conjunction with Table 1, and the format of an IPSec packet encapsulated based on a tunneling protocol is described with reference to fig. 2.
The jump table shown in Table 1 has 256 entries, with 8 receive queues (Q) configured; queues 0 through 7 repeat 16 times in the jump table (indirection table).
TABLE 1 jump table
(Table 1 is provided as an image in the original publication: a jump table mapping table indices to receive queues Q0 through Q7.)
RSS distribution requires configuring the NIC's RSS hash key fields in advance; for example, a packet's source IP, source port, destination IP, and destination port may serve as the hash key. The NIC hardware computes a hash value from the hash key and takes the LSBs of the hash value according to the size of the jump table; for example, with a 128-entry jump table, the 7 low-order bits are taken. The low-order values 0 through 127 map to the designated receive queue numbers in the jump table, and each receive queue is bound to a processing unit, so that the multiple processing units receive and send the traffic corresponding to different hash values in parallel.
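As an illustration of this lookup, the following minimal C sketch assumes a 128-entry jump table and substitutes a simple mixing hash for the NIC's configured hash function (real hardware would typically use a Toeplitz hash); all names, the hash, and the example key are illustrative assumptions, not this application's implementation:

    /* Sketch of RSS queue selection: hash the configured key fields,
     * take the low-order bits as an index into the jump table, and
     * return the bound receive queue. */
    #include <stdint.h>
    #include <stdio.h>

    #define JUMP_TABLE_SIZE 128          /* power of two: 7-bit LSB index */

    struct hash_key {                    /* fields configured as RSS key  */
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
    };

    /* Stand-in for the NIC's configured hash function. */
    static uint32_t rss_hash(const struct hash_key *k)
    {
        uint32_t h = k->src_ip ^ (k->dst_ip * 2654435761u);
        h ^= ((uint32_t)k->src_port << 16) | k->dst_port;
        h ^= h >> 13;
        return h * 2246822519u;
    }

    /* jump_table[i] is the receive-queue number for hash LSBs == i. */
    static uint8_t jump_table[JUMP_TABLE_SIZE];

    static uint8_t rss_select_queue(const struct hash_key *k)
    {
        uint32_t lsb = rss_hash(k) & (JUMP_TABLE_SIZE - 1); /* 7-bit LSB */
        return jump_table[lsb];
    }

    int main(void)
    {
        for (int i = 0; i < JUMP_TABLE_SIZE; i++)
            jump_table[i] = (uint8_t)(i % 8);   /* queues 0..7 repeated */

        struct hash_key k = { 0x0a000001u, 0x0a000002u, 1024, 4500 };
        printf("selected queue = %u\n", rss_select_queue(&k));
        return 0;
    }

Because the table size is a power of two, masking with JUMP_TABLE_SIZE - 1 extracts exactly the 7-bit LSB index described above.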
Hereinafter, the message format of the present application will be briefly described.
Currently, IPSec packets are mainly tunnel-encapsulated over a Generic Routing Encapsulation (GRE) tunnel: the GRE tunnel can encapsulate multicast and broadcast packets into unicast packets, which are then encrypted with IPSec.
A GRE tunnel encrypted with IPSec can be used for VPN deployment: a packet encrypted with the IPSec technique is encapsulated into the tunnel and transmitted through it, an approach known as GRE over IPsec. A dynamic routing protocol runs over the virtual tunnel established by GRE; packets are filtered by configuring technologies such as access control lists (ACL) and quality of service (QoS) on the virtual tunnel interface; and the data transmitted in the GRE tunnel is protected by the IPSec technique. A specific encapsulation format of such a packet is shown in fig. 2.
A schematic flowchart of a packet distribution method of a network device according to an embodiment of the present application is described below with reference to fig. 3. The method shown in fig. 3 may be performed by the network device in which the multiple processing units are located, such as the network device shown in fig. 1. The first processing unit may be any one of the multiple processing units. The method shown in fig. 3 includes steps 310 to 330.
310: Receive a first packet.
The first packet is a packet encapsulated based on a tunneling protocol. It may include two headers: a tunneling protocol header and an inner header. The tunneling protocol header indicates that the first packet is transmitted through a tunnel, and the inner header indicates that the first packet is transmitted based on the IPSec protocol. One possible packet format is shown in fig. 2.
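As a rough illustration of this two-header structure (the concrete field layout depends on the tunneling protocol; see fig. 2 for the GRE over IPsec case), the following C sketch marks which header drives which distribution pass; the struct and its fields are illustrative assumptions:

    /* Illustrative-only view of the first packet: an outer tunneling
     * protocol header, an inner header, and the payload. Step 320
     * keys the first distribution pass on the outer header; step 330
     * keys the redistribution on the inner header. */
    #include <stddef.h>
    #include <stdint.h>

    struct pkt_view {
        const uint8_t *tunnel_hdr;  /* outer header: first-pass key  */
        size_t         tunnel_len;
        const uint8_t *inner_hdr;   /* inner header: second-pass key */
        size_t         inner_len;
        const uint8_t *payload;
        size_t         payload_len;
    };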
320: Distribute the first packet, according to the tunneling protocol header of the first packet, to the first processing unit for forwarding.
Optionally, based on the RSS distribution policy introduced above, a specific field is selected from the tunneling protocol header and the first packet is distributed to the first processing unit according to that field.
It should be noted that the specific field may be a 5-tuple, a 4-tuple, or a 2-tuple in the tunneling protocol header; this is not limited in this embodiment of the application.
330: If the first processing unit cannot process the first packet, redistribute the first packet according to the inner header of the first packet.
It should be noted that there are many reasons why the first processing unit may be unable to process the first packet; in particular, the first processing unit may meet a preset condition, for example, it is overloaded or its computational load reaches a preset threshold. For the specific conditions, refer to the description of case #A2 under the preset conditions below; for brevity, details are not repeated here.
Optionally, step 330 includes: redistributing the first packet, according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
The preset conditions of the packet distribution method according to the embodiments of the present application are described below. When the network device contains a multi-core processor, the control unit may be a control core of that processor and the processing unit a forwarding core of that processor. When the network device contains multiple processors, the control unit below may be a processor used for control and the processing unit a processor used for forwarding.
Case # A1
When the CPU occupancy of processing unit #1 is below a first threshold, the control unit determines from the load information of processing unit #1 that it should use the RSS distribution mode; that is, the packet stream can be decrypted, decapsulated, and forwarded directly on processing unit #1.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from its load information that processing unit #1 uses the RSS distribution mode.
Alternatively, when the port traffic of processing unit #1 is below a second threshold, the control unit determines from the load information of processing unit #1 that it should use the RSS distribution mode; that is, the packet stream can be decrypted, decapsulated, and forwarded directly on the processing unit.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from its load information that processing unit #1 uses the RSS distribution mode.
Case # A2
When the CPU occupancy of processing unit #1 is above the first threshold, the control unit determines from the load information of processing unit #1 that it should use the secondary distribution mode.
For example, when the CPU occupancy of processing unit #1 is above 90%, the control unit determines from its load information that processing unit #1 uses the secondary distribution mode.
Alternatively, when the port traffic of processing unit #1 is above the second threshold, the control unit determines from the load information of processing unit #1 that it should use the secondary distribution mode.
For example, when the port traffic of processing unit #1 is above 50%, the control unit determines from its load information that processing unit #1 uses the secondary distribution mode.
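The preset condition above can be read as a simple predicate. The following C sketch assumes the 90% CPU-occupancy and 50% port-traffic thresholds used in the examples; the names, thresholds, and structure are illustrative, not this application's implementation:

    /* Sketch of the preset condition in cases #A1/#A2: keep plain RSS
     * distribution while processing unit #1 is lightly loaded, switch
     * to secondary distribution once either threshold is crossed. */
    #include <stdio.h>

    enum dist_mode { MODE_RSS, MODE_SECONDARY };

    struct unit_load {
        double cpu_occupancy;  /* 0.0 .. 1.0, fraction of CPU in use    */
        double port_traffic;   /* 0.0 .. 1.0, fraction of port capacity */
    };

    static enum dist_mode select_mode(const struct unit_load *l)
    {
        if (l->cpu_occupancy > 0.90 || l->port_traffic > 0.50)
            return MODE_SECONDARY;  /* case #A2: redistribute on inner header */
        return MODE_RSS;            /* case #A1: keep the first-pass result   */
    }

    int main(void)
    {
        struct unit_load l = { .cpu_occupancy = 0.95, .port_traffic = 0.30 };
        printf("%s\n", select_mode(&l) == MODE_SECONDARY ? "secondary" : "rss");
        return 0;
    }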
The redistribution in step 330 may be implemented as hardware distribution performed by the network card or as software distribution performed by a software module; this is not limited in the embodiments of the application. Some network devices only support distributing packets received externally and do not support processing packets returned through an internal physical channel; that is, when the network card does not support the secondary distribution shown in step 330, the packets can be distributed in software. If the network card does support the secondary distribution shown in step 330, the hardware scheme based on the network card can be used directly, and no software module needs to be added to implement the software-based distribution described above.
It should be noted that if the network card supports loopback and hardware flow distribution, the packet may be sent back to the network card using the network card's internal loopback mode, and the network card performs the secondary distribution according to a specific field of the original (inner) packet; that is, the decapsulated packet is sent directly back to another processing unit.
If the network card does not support loopback, software flow distribution can be used for the secondary pass: the packet is sent to the distribution module, which performs the secondary distribution according to a specific field of the original packet, i.e., sends the decapsulated packet directly back to another processing unit.
In this embodiment, the control plane of the network device may include a scheduling analysis module configured to monitor the traffic of tunnel-encapsulated packet flows entering a tunnel and to determine the distribution mode of a packet flow according to the load level of the processing unit.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from the load information of the processing unit that processing unit #1 uses the RSS distribution mode.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from the load information of the processing unit that processing unit #1 uses the RSS distribution mode.
For example, when the CPU occupancy of processing unit #1 is above 90%, the control unit determines from the load information of the processing unit that processing unit #1 uses the secondary distribution mode.
For example, when the port traffic of processing unit #1 is above 50%, the control unit determines from the load information of the processing unit that processing unit #1 uses the secondary distribution mode.
Optionally, the secondary distribution mode includes a hardware distribution mode and a software distribution mode.
A schematic flowchart of a packet distribution method according to an embodiment of the present application is described below with reference to fig. 4. The method shown in fig. 4 is performed by a network device that includes a network card, multiple forwarding cores, and a control core.
In S501, a network card (e.g., an Ethernet card) receives a first packet.
In S502, the network card forwards the first packet to a first forwarding core among the multiple forwarding cores.
The network card may determine, through a hash calculation, the first forwarding core that is to process the first packet.
In S503, the first forwarding core reports the encrypted packet traffic of a first tunnel and the load of the first forwarding core to the control core, where the first tunnel is the tunnel used to transmit the first packet.
It should be noted that S503 may be performed after the first forwarding core receives the first packet, or the first forwarding core may report to the control core periodically; this is not limited in this embodiment of the application.
In S504, the control core generates a scheduling rule based on the encrypted packet traffic of the first tunnel and the load of the first forwarding core.
The scheduling rule may indicate different scheduling modes in different situations, as described in detail below with reference to three situations.
In the first scheduling mode, when the first forwarding core is overloaded and the network card supports the secondary distribution mode, the scheduling rule indicates that the network card performs secondary distribution according to the inner packet of the first packet.
In the second scheduling mode, when the first forwarding core is overloaded and the network card does not support the secondary distribution mode, the scheduling rule indicates that the distribution module performs secondary distribution according to the inner packet of the first packet.
In the third scheduling mode, when the first forwarding core is not overloaded, the scheduling rule indicates that the first forwarding core forwards the first packet directly.
It should be noted that, in the third scheduling mode, the specific process by which the first forwarding core forwards the first packet may follow a conventional packet forwarding procedure; for brevity, details are not repeated here.
The load information of the first forwarding core may be, for example, the CPU occupancy of the forwarding core or the port traffic information of the forwarding core.
Case # A1
When the CPU occupancy of forwarding core #1 is below the first threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode; that is, the packet stream can be decrypted, decapsulated, and forwarded directly on the forwarding core.
For example, when the CPU occupancy of forwarding core #1 is below 90%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode.
Alternatively, when the port traffic of forwarding core #1 is below the second threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode; that is, the packet stream can be decrypted, decapsulated, and forwarded directly on the forwarding core.
For example, when the port traffic of forwarding core #1 is below 50%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the RSS distribution mode.
Case # A2
When the CPU occupancy of forwarding core #1 is above the first threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
For example, when the CPU occupancy of forwarding core #1 is above 90%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
Alternatively, when the port traffic of forwarding core #1 is above the second threshold, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
For example, when the port traffic of forwarding core #1 is above 50%, the control core determines from the load information of the forwarding core that forwarding core #1 uses the secondary distribution mode.
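Combining this overload check with the network card's capability yields the three scheduling modes described above. A minimal C sketch of the rule selection in S504 follows; the enum values, names, and structure are illustrative assumptions:

    /* Sketch of scheduling-rule generation (S504): choose one of the
     * three scheduling modes from the forwarding core's load and
     * whether the NIC supports secondary distribution. */
    #include <stdbool.h>
    #include <stdio.h>

    enum sched_rule {
        RULE_NIC_SECONDARY,       /* mode 1: NIC redistributes on inner header    */
        RULE_SOFTWARE_SECONDARY,  /* mode 2: distribution module redistributes    */
        RULE_DIRECT_FORWARD       /* mode 3: first forwarding core forwards as is */
    };

    static enum sched_rule make_rule(bool core_overloaded, bool nic_secondary_ok)
    {
        if (!core_overloaded)
            return RULE_DIRECT_FORWARD;
        return nic_secondary_ok ? RULE_NIC_SECONDARY : RULE_SOFTWARE_SECONDARY;
    }

    int main(void)
    {
        /* overloaded core, NIC without secondary support -> mode 2 */
        printf("%d\n", make_rule(true, false));
        return 0;
    }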
In S505, the control core sends the scheduling rule to the first forwarding core.
In S506, the first forwarding core decapsulates the first packet so that it can subsequently be redistributed according to its inner header.
In S507, if the scheduling rule indicates that the network card performs the secondary distribution of the first packet, the first forwarding core forwards the decapsulated first packet to the network card; if the scheduling rule indicates that the distribution module performs the secondary distribution of the first packet, the first forwarding core forwards the decapsulated first packet to the distribution module.
It should be noted that after the first packet is forwarded to the network card, the network card performs the secondary distribution based on the inner header: specifically, it may perform a hash calculation over the inner packet and re-determine the forwarding core. Likewise, after the first packet is forwarded to the distribution module, the distribution module performs the secondary distribution based on the inner header, performing a hash calculation over the inner packet and re-determining the forwarding core.
It should further be noted that after the forwarding core is re-determined, the decapsulated first packet can be delivered to it in the conventional packet distribution manner. For example, the network card may send the decapsulated first packet to the packet-receiving module, which forwards it to the re-determined forwarding core; likewise, the distribution module may send the decapsulated first packet to the packet-receiving module, which forwards it to the re-determined forwarding core.
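Putting S506 and S507 together, the secondary distribution amounts to hashing the inner header and re-selecting a forwarding core, skipping the overloaded first core as described in the first aspect. A minimal C sketch, using an FNV-1a hash as a stand-in for whatever hash the NIC or distribution module actually applies; all names are illustrative:

    /* Sketch of secondary distribution: hash the inner header and
     * re-select a forwarding core, avoiding the overloaded first core. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_CORES 8

    /* FNV-1a, as a stand-in for the real hash function. */
    static uint32_t inner_hash(const uint8_t *p, size_t len)
    {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    static unsigned resplit_to_core(const uint8_t *inner_hdr, size_t len,
                                    unsigned first_core)
    {
        unsigned core = inner_hash(inner_hdr, len) % NUM_CORES;
        if (core == first_core)                /* skip the overloaded core */
            core = (core + 1) % NUM_CORES;
        return core;
    }

    int main(void)
    {
        uint8_t inner[] = { 10, 0, 0, 1, 10, 0, 0, 2, 0x04, 0x00, 0x11, 0x94 };
        printf("re-selected core = %u\n", resplit_to_core(inner, sizeof inner, 3));
        return 0;
    }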
The method of the embodiment of the present application is described above with reference to fig. 1 to 4, and the apparatus of the embodiment of the present application is described below with reference to fig. 5 to 6. It should be noted that the apparatuses shown in fig. 5 to fig. 6 can implement the steps in the above method, and are not described herein again for brevity.
Fig. 5 is a schematic diagram of a network device according to an embodiment of the present application. The network device includes multiple processing units, and the network device 500 shown in fig. 5 may include a receiving module 510 and a processing module 520.
The network device receives a first packet through the receiving module 510.
According to the tunneling protocol header of the first packet, the network device distributes the first packet, through the processing module 520, to the first processing unit for forwarding.
If the first processing unit cannot process the first packet, the network device redistributes the first packet, through the processing module 520, according to the inner header of the first packet.
Optionally, as an embodiment, the network device is further configured to: obtain a first scheduling rule for the first packet through the processing module 520; determine, through the processing module 520 according to the first scheduling rule, that the network card of the network device is to redistribute the first packet based on its inner header; and redistribute the first packet through the network card based on the inner header of the first packet.
Optionally, as an embodiment, the network device is further configured to: obtain a second scheduling rule for the first packet through the processing module 520; determine, through the processing module 520 according to the second scheduling rule, that the distribution module of the network device is to redistribute the first packet based on its inner header; and redistribute the first packet through the distribution module based on the inner header of the first packet.
Optionally, as an embodiment, if the load of the first processing unit meets a preset condition, the network device redistributes the first packet, through the processing module 520, according to the inner header of the first packet.
Optionally, as an embodiment, the network device redistributes the first packet, through the processing module 520 and according to its inner header, to a processing unit other than the first processing unit among the multiple processing units.
Optionally, as an embodiment, the network device distributes the first packet, through the processing module 520 and according to its inner header, based on a receive-side scaling (RSS) policy.
In an alternative embodiment, the processing module 520 may be implemented by some of the cores of the processor 620, and the receiving module 510 may be the input/output interface 630. The network device 600 may also include a memory 610, as shown in fig. 6.
Fig. 6 is a schematic block diagram of a network device according to an embodiment of the present application. The network device 600 shown in fig. 6 may include a memory 610, a processor 620, and an input/output interface 630, connected through a communication connection. The memory 610 stores program instructions, and the processor 620 executes the program instructions stored in the memory 610 to control the input/output interface 630 to receive input data and information and to output data such as operation results.
The processor 620 includes multiple cores. Each core may correspond to a processing unit as described above.
It should be understood that, in the embodiment of the present application, the processor 620 may adopt a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, for executing a relevant program to implement the technical solutions provided in the embodiments of the present application.
The memory 610 may include a read-only memory and a random access memory, and provides instructions and data to the processor 620. A portion of processor 620 may also include non-volatile random access memory. For example, the processor 620 may also store information of the device type.
In implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by instructions in the form of software in the processor 620. The methods disclosed in the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory 610; the processor 620 reads the information in the memory 610 and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
It should be understood that, in the embodiments of the present application, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will also be appreciated that in embodiments of the present application, the memory may comprise both read-only memory and random access memory, and may provide instructions and data to the processor. A portion of the processor may also include non-volatile random access memory. For example, the processor may also store information of the device type.
It should also be understood that in the embodiments of the present application, "first", "second", "third", and the like are merely for distinction and are not limited in sequence.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A message shunting method for a network device, the network device comprising a plurality of processing units, the method being performed by the network device,
the method comprises the following steps:
receiving a first message;
shunting, according to the tunnel protocol message header of the first message, the first message to a first processing unit of the plurality of processing units for forwarding;
if the first processing unit cannot process the first message, re-shunting the first message according to the inner layer message header of the first message; wherein the tunnel protocol message header is used to indicate that the first message is transmitted through a tunnel, and the inner layer message header is used to indicate that the first message is transmitted based on the IPSec protocol.
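By way of non-limiting illustration of the two-stage shunting recited in claim 1, the sketch below first hashes the tunnel protocol message header to select a processing unit and, if that unit cannot take the message, re-hashes the inner layer message header to select again. The header layouts, the hash function, the unit count and the unit_can_process() load check are illustrative assumptions, not definitions taken from the claims:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_UNITS 4                     /* illustrative unit count */

    /* Illustrative header layouts; the field choices are assumptions. */
    struct tunnel_hdr { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; };
    struct inner_hdr  { uint32_t src_ip, dst_ip, spi; };  /* spi: IPSec SPI */

    static unsigned queue_depth[NUM_UNITS]; /* per-unit load, updated elsewhere */

    /* Toy flow hash; a real network card would use e.g. a Toeplitz hash. */
    static uint32_t flow_hash(uint32_t a, uint32_t b, uint32_t c)
    {
        return (a * 2654435761u) ^ (b * 2246822519u) ^ (c * 3266489917u);
    }

    /* Stage 1: shunt by the outer, tunnel protocol message header. */
    static unsigned shunt_by_outer(const struct tunnel_hdr *t)
    {
        uint32_t ports = ((uint32_t)t->src_port << 16) | t->dst_port;
        return flow_hash(t->src_ip, t->dst_ip, ports) % NUM_UNITS;
    }

    /* Stage 2: re-shunt by the inner layer message header, so that the
     * flows carried inside one tunnel spread over several units. */
    static unsigned reshunt_by_inner(const struct inner_hdr *i)
    {
        return flow_hash(i->src_ip, i->dst_ip, i->spi) % NUM_UNITS;
    }

    /* Stand-in load check; claim 2 discusses one preset condition. */
    static bool unit_can_process(unsigned u)
    {
        return queue_depth[u] < 1024;
    }

    unsigned dispatch(const struct tunnel_hdr *t, const struct inner_hdr *i)
    {
        unsigned first = shunt_by_outer(t);
        return unit_can_process(first) ? first : reshunt_by_inner(i);
    }

Because the second hash keys on the inner layer fields, the flows carried inside a single tunnel can spread across several processing units instead of all landing on the first one.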
2. The method of claim 1, wherein, if the first processing unit cannot process the first message, the re-shunting the first message according to the inner layer message header of the first message comprises:
if the load of the first processing unit meets a preset condition, re-shunting the first message according to the inner layer message header of the first message.
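The claim leaves the preset condition open. One possible concretization, assumed here purely for illustration, is a queue-depth high-water mark with hysteresis so that the re-shunting decision does not flap when the load hovers near the threshold:

    #include <stdbool.h>

    #define HIGH_WATER 1024  /* assumed thresholds, in queued messages */
    #define LOW_WATER   256

    struct unit_load {
        unsigned queue_depth;  /* messages currently queued to the unit */
        bool     overloaded;   /* sticky state implementing hysteresis */
    };

    /* Returns true when the load "meets the preset condition", i.e. when
     * the first processing unit should stop taking new messages and the
     * message should be re-shunted by its inner layer message header. */
    static bool load_meets_preset(struct unit_load *u)
    {
        if (u->queue_depth > HIGH_WATER)
            u->overloaded = true;
        else if (u->queue_depth < LOW_WATER)
            u->overloaded = false;
        return u->overloaded;
    }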
3. The method of claim 1 or 2, further comprising:
acquiring a first scheduling rule of the first message; and
determining, according to the first scheduling rule, that a network card of the network device is to re-shunt the first message based on the inner layer message header of the first message;
wherein the re-shunting the first message according to the inner layer message header of the first message comprises:
re-shunting, through the network card, the first message based on the inner layer message header of the first message.
4. The method of any one of claims 1-3, further comprising:
acquiring a second scheduling rule of the first message; and
determining, according to the second scheduling rule, that a shunting module of the network device is to re-shunt the first message based on the inner layer message header of the first message;
wherein the re-shunting the first message according to the inner layer message header of the first message comprises:
re-shunting, through the shunting module, the first message based on the inner layer message header of the first message.
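Claims 3 and 4 let a per-message scheduling rule decide which component performs the re-shunting. The sketch below, with illustrative names and stub engines, dispatches the inner-header re-shunt either to the network card or to a software shunting module according to such a rule:

    #include <stdint.h>

    #define NUM_UNITS 4  /* illustrative unit count */

    enum reshunt_engine { ENGINE_NIC, ENGINE_SW_MODULE };

    /* A scheduling rule attached to the message; the layout is assumed. */
    struct sched_rule { enum reshunt_engine engine; };
    struct inner_hdr  { uint32_t src_ip, dst_ip, spi; };

    /* Stub for the network card path: in hardware this could mean
     * programming a receive-side flow-steering entry keyed on the
     * inner layer message header (an assumed capability). */
    static unsigned nic_reshunt(const struct inner_hdr *i)
    {
        return (i->src_ip ^ i->dst_ip ^ i->spi) % NUM_UNITS;
    }

    /* Stub for the software shunting module: hash the inner layer
     * message header and pick the target unit's queue. */
    static unsigned sw_reshunt(const struct inner_hdr *i)
    {
        return ((i->src_ip * 2654435761u) ^ i->spi) % NUM_UNITS;
    }

    unsigned reshunt(const struct sched_rule *r, const struct inner_hdr *i)
    {
        return r->engine == ENGINE_NIC ? nic_reshunt(i) : sw_reshunt(i);
    }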
5. The method of any one of claims 1-4, wherein the re-shunting the first message according to the inner layer message header of the first message comprises:
re-shunting, according to the inner layer message header of the first message, the first message to a processing unit other than the first processing unit among the plurality of processing units.
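Claim 5 requires the re-shunt to land on a unit other than the first one. One simple arrangement, assumed here rather than given in the claims, hashes the inner layer message header over NUM_UNITS - 1 slots and skips past the first unit so the result can never equal it:

    #include <stdint.h>

    /* Re-shunt over all units except `first`; requires num_units >= 2.
     * Hashing over num_units - 1 slots and stepping over `first`
     * guarantees the returned index never equals `first`. */
    static unsigned reshunt_excluding_first(uint32_t inner_hash,
                                            unsigned first,
                                            unsigned num_units)
    {
        unsigned slot = inner_hash % (num_units - 1);
        return slot >= first ? slot + 1 : slot;
    }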
6. A network device, comprising a plurality of processing units, the plurality of processing units comprising a first processing unit, wherein the network device is configured to:
receive a first message;
shunt, according to the tunnel protocol message header of the first message, the first message to the first processing unit for forwarding;
wherein, if the first processing unit cannot process the first message, the first processing unit is further configured to re-shunt the first message according to the inner layer message header of the first message; the tunnel protocol message header is used to indicate that the first message is transmitted through a tunnel, and the inner layer message header is used to indicate that the first message is transmitted based on the IPSec protocol.
7. The network device of claim 6, wherein the network device is further configured to:
if the load of the first processing unit meets a preset condition, re-shunt the first message according to the inner layer message header of the first message.
8. The network device of claim 6 or 7, wherein the network device is further configured to:
acquire a first scheduling rule of the first message;
determine, according to the first scheduling rule, that a network card of the network device is to re-shunt the first message based on the inner layer message header of the first message; and
re-shunt, through the network card, the first message based on the inner layer message header of the first message.
9. The network device of any one of claims 6-8, wherein the network device is further configured to:
acquire a second scheduling rule of the first message;
determine, according to the second scheduling rule, that a shunting module of the network device is to re-shunt the first message based on the inner layer message header of the first message; and
re-shunt, through the shunting module, the first message based on the inner layer message header of the first message.
10. The network device of any one of claims 6-9, wherein the plurality of processing units are further configured to:
re-shunt, according to the inner layer message header of the first message, the first message to a processing unit other than the first processing unit among the plurality of processing units.
CN202210934791.XA 2019-05-21 2019-05-21 Message distribution method and device of network equipment Active CN115174482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210934791.XA CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910423117.3A CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN202210934791.XA CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910423117.3A Division CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Publications (2)

Publication Number Publication Date
CN115174482A true CN115174482A (en) 2022-10-11
CN115174482B CN115174482B (en) 2023-06-02

Family

ID=73435839

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910423117.3A Active CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN202210934791.XA Active CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910423117.3A Active CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Country Status (1)

Country Link
CN (2) CN111988211B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112953841B (en) * 2021-02-20 2022-05-27 杭州迪普信息技术有限公司 Message distribution method and system
CN113157445B (en) * 2021-03-30 2022-04-08 郑州信大捷安信息技术股份有限公司 Bidirectional message symmetric RSS processing method and system based on Hash operation and index value comparison
CN116055397B (en) * 2023-03-27 2023-08-18 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070025252A1 (en) * 2005-08-01 2007-02-01 Mcgee Michael S Network resource teaming combining receive load-balancing with redundant network connections
CN102055679A (en) * 2011-01-28 2011-05-11 中国人民解放军国防科学技术大学 Message scheduling method for power consumption control in forwarding engine
CN104468391A (en) * 2014-12-16 2015-03-25 盛科网络(苏州)有限公司 Method and system for achieving load balance according to user information of tunnel message
US20160080268A1 (en) * 2014-09-16 2016-03-17 CloudGenix, Inc. Methods and systems for hub high availability and network load and scaling
US20160135074A1 (en) * 2013-07-12 2016-05-12 Telefonaktiebolaget L M Ericsson (Publ) Method for enabling control of data packet flows belonging to different access technologies
WO2017162117A1 (en) * 2016-03-25 2017-09-28 阿里巴巴集团控股有限公司 Accurate speed limiting method and apparatus for cluster
CN108737239A (en) * 2018-08-30 2018-11-02 新华三技术有限公司 A kind of message forwarding method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100496024C (en) * 2006-09-26 2009-06-03 杭州华三通信技术有限公司 A method to forward the channel message and a network device
CN100508499C (en) * 2006-11-02 2009-07-01 杭州华三通信技术有限公司 Multi-core processor for realizing adaptive dispatching and multi-core processing method
EP2712128B1 (en) * 2011-07-06 2016-01-13 Huawei Technologies Co., Ltd. Message processing method and related device thereof
CN102624611B (en) * 2011-12-31 2015-01-21 华为数字技术(成都)有限公司 Method, device, processor and network equipment for message dispersion
CN106209664A (en) * 2016-07-22 2016-12-07 迈普通信技术股份有限公司 A kind of data transmission method, Apparatus and system
US20180285151A1 (en) * 2017-03-31 2018-10-04 Intel Corporation Dynamic load balancing in network interface cards for optimal system level performance
CN108270699B (en) * 2017-12-14 2020-11-24 中国银联股份有限公司 Message processing method, shunt switch and aggregation network

Also Published As

Publication number Publication date
CN115174482B (en) 2023-06-02
CN111988211B (en) 2022-09-09
CN111988211A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
US11792046B2 (en) Method for generating forwarding information, controller, and service forwarding entity
JP6288802B2 (en) Improved IPsec communication performance and security against eavesdropping
EP3387812B1 (en) Virtual private network aggregation
US7835285B2 (en) Quality of service, policy enhanced hierarchical disruption tolerant networking system and method
EP1435716B1 (en) Security association updates in a packet load-balanced system
CN111988211B (en) Message distribution method and device of network equipment
US10601610B2 (en) Tunnel-level fragmentation and reassembly based on tunnel context
WO2018176961A1 (en) Load balancing system, method, and device
US20220394014A1 (en) Multi-uplink path quality aware ipsec
US20220394017A1 (en) Ipsec processing on multi-core systems
CN101572644B (en) Data encapsulation method and equipment thereof
US20220394016A1 (en) Dynamic path selection of vpn endpoint
US20190372948A1 (en) Scalable flow based ipsec processing
US20230118718A1 (en) Handling multipath ipsec in nat environment
CN113365267A (en) Communication method and device
WO2022260711A1 (en) Multi-uplink path quality aware ipsec
CN106209401B (en) A kind of transmission method and device
Liu et al. P4NIS: Improving network immunity against eavesdropping with programmable data planes
CN116260579A (en) Message encryption and decryption method for IP packet
Seggelmann et al. SSH over SCTP—Optimizing a multi-channel protocol by adapting it to SCTP
Tennekoon et al. Prototype implementation of fast and secure traceability service over public networks
EP2996291B1 (en) Packet processing method, device, and system
CN111669374B (en) Encryption and decryption performance expansion method for single tunnel software of IPsec VPN
KR101922980B1 (en) Network device and packet transmission method of the network device
CN113965518A (en) Message processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant