CN111988211A - Message distribution method and device of network equipment

Message distribution method and device of network equipment

Info

Publication number
CN111988211A (application CN201910423117.3A; granted as CN111988211B)
Authority
CN (China)
Prior art keywords
message, processing unit, packet, shunting, header
Legal status
Granted
Application number
CN201910423117.3A
Other languages
Chinese (zh)
Other versions
CN111988211B (en)
Inventor
吴轩 (Wu Xuan)
李强 (Li Qiang)
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202210934791.XA (published as CN115174482B)
Priority to CN201910423117.3A (published as CN111988211B)
Publication of CN111988211A
Application granted
Publication of CN111988211B
Legal status: Active
Anticipated expiration

Classifications

    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering (under H04L 47/00, Traffic control in data switching networks)
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling (under H04L 12/46, Interconnection of networks)
    • H04L 69/22: Parsing or analysis of headers (under H04L 69/00, Network arrangements, protocols or services independent of the application payload)


Abstract

The application provides a packet distribution method and device for a network device, where the network device includes multiple processing units and the method is performed by the network device. The method includes: receiving a first packet; shunting the first packet, according to its tunneling protocol header, to a first processing unit among the multiple processing units for forwarding; and, if the first processing unit cannot process the first packet, re-shunting the first packet according to its inner header. When packets encapsulated by a tunneling protocol are shunted among multiple processing units, this helps reduce the delay with which the network device processes the first packet.

Description

Message distribution method and device of network equipment
Technical Field
The present application relates to the field of information technology, and in particular, to a method and an apparatus for message distribution of a network device.
Background
A network device generally includes multiple processing units (e.g., a multi-core processor) and typically uses a multi-core polling packet-receiving model in which each core corresponds to one forwarding process (thread). When multiple forwarding cores receive packets from one network port, shared resources must be protected by a lock; that is, only one forwarding process (thread) at a time can receive, process, and send packets from the same port. To improve multi-core concurrency and balance packet-processing load across the cores, two shunting approaches may be used: receive-side scaling (RSS) or software hash shunting.
With RSS shunting, all packets of the same packet flow are processed by one processing unit; that is, packets having the same source IP address, destination IP address, source port number, and destination port number are treated as one packet flow and forwarded to the same target processing unit among the multiple processing units.
The RSS shunting technique computes a hash value over a pre-specified packet field (e.g., a 4-tuple in the packet header) and then selects the processing unit from the computed hash value. When a large number of packets are encapsulated with a tunneling technique, their tunneling protocol headers are identical, or the specific fields (e.g., the 4-tuple) in those headers are identical; under the RSS technique described above, all the encapsulated packets are therefore forwarded to, and processed by, the same processing unit. As a result, some processing units in the network device become overloaded, and the delay in processing packets increases.
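To make the failure mode concrete, the following is a minimal Python sketch, not taken from the patent: real network cards typically use a Toeplitz hash in hardware, and the hash function and field names here are placeholders. It shows how hashing only the outer tunnel 4-tuple sends every tunneled packet to the same queue.

    import hashlib

    def rss_queue(src_ip, dst_ip, src_port, dst_port, num_queues=8):
        """Hash a 4-tuple and map it to a receive-queue index."""
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        digest = hashlib.sha256(key).digest()   # stand-in for the NIC's hash
        return digest[-1] % num_queues          # low bits index the queue

    # Two different inner flows, but one and the same outer (tunnel) 4-tuple:
    outer = ("203.0.113.1", "198.51.100.1", 4500, 4500)
    print(rss_queue(*outer))  # queue for the first tunneled packet
    print(rss_queue(*outer))  # identical queue again: one unit gets all the load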
Disclosure of Invention
The application provides a packet shunting method and device for a network device, which help reduce the delay with which the network device processes a first packet when tunnel-encapsulated packets are shunted among multiple processing units.
In a first aspect, a method for shunting packets of a network device is provided, where the network device includes multiple processing units, and the method, performed by the network device, includes: receiving a first packet; shunting the first packet, according to its tunneling protocol header, to a first processing unit among the multiple processing units for forwarding; and, if the first processing unit cannot process the first packet, re-shunting the first packet according to its inner header.
In this embodiment of the application, when the first processing unit cannot process the first packet, the first packet may be shunted again using its inner header. This helps reduce the delay with which the network device processes the first packet, and avoids the situation in conventional packet shunting where, after the first packet has been shunted to the first processing unit, that unit cannot process it and packet transmission times out.
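As a rough illustration of this first-aspect flow, the sketch below shunts by the tunnel header first and falls back to the inner header only when the chosen unit cannot take the packet. All names, the load representation, and the overload test are assumptions for illustration, not the patent's implementation.

    from collections import namedtuple

    Packet = namedtuple("Packet", "tunnel_header inner_header")

    def dispatch(pkt, loads, overload_threshold=0.9):
        """Return the index of the processing unit that should handle pkt."""
        first = hash(pkt.tunnel_header) % len(loads)   # shunt by outer header
        if loads[first] < overload_threshold:
            return first                               # unit can process it
        return hash(pkt.inner_header) % len(loads)     # re-shunt by inner header

    pkt = Packet(tunnel_header=("10.0.0.1", "10.0.0.2", 4500, 4500),
                 inner_header=("192.168.1.5", "192.168.2.9", 443, 51000))
    print(dispatch(pkt, loads=[0.95, 0.10, 0.20, 0.15, 0.30, 0.25, 0.10, 0.40]))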
In a possible implementation, if the first processing unit cannot process the first packet, re-shunting the first packet according to its inner header includes: if the load of the first processing unit meets a preset condition, re-shunting the first packet according to the inner header of the first packet.
In this embodiment, when the load of the first processing unit meets the preset condition, the first packet may be shunted again using its inner header, which helps reduce the load on the first processing unit and so lays a basis for load balancing among the multiple processing units. This avoids the situation in conventional shunting based on the packet's tunnel header, where all encapsulated packets are forwarded to the same processing unit (e.g., the first processing unit), leaving some processing units in the network device overloaded and others underloaded and thus unbalancing the load across them.
In one possible implementation, the method further includes: acquiring a first scheduling rule for the first packet; and determining, according to the first scheduling rule, that the network card of the network device is to re-shunt the first packet according to its inner header. Re-shunting the first packet according to its inner header then includes: re-shunting the first packet through the network card based on the inner header of the first packet.
In this embodiment, following the indication of the first scheduling rule, the network card re-shunts the first packet according to its inner header, so no new module is needed to implement the re-shunting, which helps keep the design of the network device simple.
In one possible implementation, the method further includes: acquiring a second scheduling rule for the first packet; and determining, according to the second scheduling rule, that the shunting module of the network device is to re-shunt the first packet according to its inner header. Re-shunting the first packet according to its inner header then includes: re-shunting the first packet through the shunting module based on the inner header of the first packet.
In this embodiment, re-shunting the first packet according to its inner header under the indication of the second scheduling rule helps broaden the application scenarios of this application; for example, the method of this embodiment can be used even where the network card does not support re-shunting.
In a possible implementation, re-shunting the first packet according to its inner header includes: re-shunting the first packet, according to the inner header of the first packet, to processing units in the multi-processing unit other than the first processing unit.
In a possible implementation, re-shunting the first packet according to its inner header includes: shunting the first packet, according to its inner header, based on a receive-side scaling (RSS) policy.
In a second aspect, a network device is provided. The network device includes multiple processing units including a first processing unit, and is configured to: receive a first packet; shunt the first packet, according to its tunneling protocol header, to the first processing unit for forwarding; and, if the first processing unit cannot process the first packet, re-shunt the first packet according to its inner header.
In one possible implementation, the network device is further configured to: if the load of the first processing unit meets a preset condition, re-shunt the first packet according to its inner header.
In one possible implementation, the network device is further configured to: acquire a first scheduling rule for the first packet; determine, according to the first scheduling rule, that the network card of the network device is to re-shunt the first packet according to its inner header; and re-shunt the first packet through the network card based on the inner header of the first packet.
In one possible implementation, the network device is further configured to: acquire a second scheduling rule for the first packet; determine, according to the second scheduling rule, that the shunting module of the network device is to re-shunt the first packet according to its inner header; and re-shunt the first packet through the shunting module based on the inner header of the first packet.
In one possible implementation, the multi-processing unit is further configured to: re-shunt the first packet, according to its inner header, to processing units in the multi-processing unit other than the first processing unit.
In a third aspect, a network device is provided, which includes a processor and a memory, the memory is used for storing a computer program, and the processor is used for calling and running the computer program from the memory, so that the network device executes the method of the above aspects.
In a fourth aspect, a network device is provided that includes various modules for performing the methods in the various aspects described above.
In a fifth aspect, a computer-readable medium is provided, which stores a computer program, which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
It should be noted that, all or part of the computer program code may be stored in the first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and this is not specifically limited in this embodiment of the present application.
In a seventh aspect, a chip system is provided, which includes a processor for enabling a network device to implement the functions recited in the above aspects, such as generating, receiving, sending, or processing the data and/or information recited in the above methods. In one possible design, the chip system further includes a memory for storing the program instructions and data necessary for the network device. The chip system may consist of a chip alone, or may include a chip and other discrete devices.
Drawings
Fig. 1 shows a schematic diagram of a network device to which the embodiment of the present application is applied.
Fig. 2 is a schematic block diagram of the format of a tunneling protocol encapsulation based IPSec packet.
Fig. 3 is a schematic flow chart of a packet offloading method of a network device provided in the present application.
Fig. 4 is a schematic flow chart of a message offloading method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a network device according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a network device of an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Network devices (e.g., enterprise routers) are used to connect multiple logically separate networks, where a logical network represents a single network or a subnet. When data is transferred from one subnet to another, a router performs the transfer. An enterprise router mainly connects an enterprise local area network (LAN) to the wide area network (the Internet); with enterprise routers, heterogeneous enterprise networks and multiple subnets can be interconnected.
For example, secure communication and resource sharing between a headquarters and its branch offices may be implemented through a virtual private network (VPN): gateway device A and gateway device B, each integrating the VPN function, are deployed at the egress of the branch office and of the headquarters respectively, an internet protocol security (IPsec) tunnel is established across operator 1 and operator 2, and end-to-end security services are provided for the transmission of IP packets through encryption, authentication, and similar means.
In a unidirectional packet flow transmitted between a source IP address and a destination IP address over a period of time, all packets share the same source port number, destination port number, protocol number, and source and destination IP addresses; that is, their 5-tuple contents are identical.
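For illustration, a small sketch of the 5-tuple flow key just described; the field names and dictionary packet representation are placeholders for whatever parser the device actually uses.

    from collections import namedtuple

    FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

    def flow_key(pkt):
        """Packets sharing this key form one unidirectional flow."""
        return FlowKey(pkt["src_ip"], pkt["dst_ip"],
                       pkt["src_port"], pkt["dst_port"], pkt["proto"])

    print(flow_key({"src_ip": "192.0.2.1", "dst_ip": "198.51.100.2",
                    "src_port": 12345, "dst_port": 80, "proto": 6}))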
To process network traffic in real time, high-performance servers use multiple processors and multiple cores, and aggregation and scattering techniques can be applied so that every processor helps handle the large volume of network data. All data streams arriving from the external network are aggregated, and the aggregated stream is then scattered into multiple queues according to a distribution policy. The queues map one-to-one to CPUs, i.e., one CPU processes the data in one queue, so multi-processor and multi-core CPU resources are fully utilized.
A multi-core processor refers to the integration of two or more complete compute engines (cores) in one processor. In a multi-core and multi-processor environment, when accessing a resource shared by a plurality of forwarding cores, locking processing is required, that is, only one forwarding process can receive, process and send packets from the same network port at the same time.
In order to avoid overhead caused by locking and negative influence on system performance, resource sharing is usually avoided during system design, and for network devices such as routers, switches, network servers and the like, messages belonging to the same message flow are processed by the same forwarding core, so that resource sharing between the forwarding cores caused by message cross-forwarding core processing is avoided.
In the following, related terms to which the present application refers are briefly described.
1. Tunnel encapsulation
Tunneling is a way of transferring data between networks using the infrastructure of an internetwork. The data (or payload) carried through a tunnel may be the data frames or packets of another protocol. A tunneling protocol re-encapsulates the frames or packets of other protocols and then sends them through the tunnel; the new header provides routing information so the encapsulated payload can be delivered across the internetwork.
To create a tunnel, the tunnel's client and server must use the same tunneling protocol. Tunneling may be based on either a layer-2 or a layer-3 tunneling protocol. A layer-2 tunneling protocol corresponds to the data link layer of the OSI model and uses frames as the unit of data exchange: the point-to-point tunneling protocol (PPTP), the layer 2 tunneling protocol (L2TP), and layer 2 forwarding (L2F) all belong to this class, encapsulating user data in point-to-point protocol (PPP) frames for transmission over the Internet. A layer-3 tunneling protocol corresponds to the network layer of the OSI model and uses packets as the unit of data exchange: IP over IP (IPIP) and IPSec tunnel mode belong to this class, encapsulating IP packets in an additional IP header for transmission over an IP network.
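The following toy sketch, an assumption for illustration rather than any protocol's real on-wire format, shows the basic idea of layer-3 tunnel encapsulation: the original packet becomes the payload of a new outer header.

    def tunnel_encapsulate(inner_packet, tunnel_src, tunnel_dst):
        """Wrap the original packet as the payload of a new outer header."""
        outer_header = {"src": tunnel_src, "dst": tunnel_dst, "proto": "GRE"}
        return {"outer": outer_header, "payload": inner_packet}

    def tunnel_decapsulate(encapsulated):
        """Strip the outer header and recover the original inner packet."""
        return encapsulated["payload"]

    inner = {"src": "192.168.1.5", "dst": "192.168.2.9", "data": b"hello"}
    tunneled = tunnel_encapsulate(inner, "203.0.113.1", "198.51.100.1")
    assert tunnel_decapsulate(tunneled) == inner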
A tunneling protocol consists of a transport carrier, an encapsulation format, and the user packet; tunneling protocols differ in which kind of packet the user packet is encapsulated in before being carried through the tunnel.
2. IPsec protocol
Internet protocol security (IPsec) is a protocol suite, i.e., a set of interrelated protocols, that protects the IP protocol by encrypting and authenticating IP packets. Using packet encapsulation, IPsec can wrap the IP address of an internal network inside an Internet-routable address, enabling interworking between networks at different sites.
IPsec is a layer-3 tunnel encryption protocol defined by the IETF that provides high-quality, interoperable, cryptography-based security for data transmitted over the Internet. The IPsec suite provides an architecture for network data security at the IP layer, comprising the authentication header (AH) protocol, the encapsulating security payload (ESP) protocol, the Internet key exchange (IKE) protocol, and algorithms for network authentication and encryption. The AH and ESP protocols provide security services, while the IKE protocol handles key exchange.
The authentication header (AH) provides connectionless data integrity, message authentication, and replay-attack protection for IP datagrams; the encapsulating security payload (ESP) provides confidentiality, data-origin authentication, connectionless integrity, anti-replay, and limited traffic-flow confidentiality; and the security association (SA) provides the algorithms and parameters needed for AH and ESP operation.
The AH protocol provides data-origin authentication, data-integrity verification, and packet replay protection; it protects communications from tampering but does not prevent eavesdropping, so it is suitable for transmitting non-confidential data. AH works by adding an identity-authentication header to each packet, inserted after the standard IP header, to provide integrity protection for the data. Available authentication algorithms include message digest 5 (MD5) and the secure hash algorithm (SHA-1).
The ESP protocol provides encryption, data-origin authentication, data-integrity verification, and anti-replay functions. ESP inserts an ESP header after the standard IP header of each packet and appends an ESP trailer after the payload. ESP encrypts the user data to be protected and encapsulates it in an IP packet to ensure the confidentiality of the data. ESP encryption algorithms include DES, 3DES, AES, and others, and the MD5 or SHA-1 algorithm can be selected to ensure the integrity and authenticity of the packet.
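As a memory aid, the tunnel-mode ESP layout described above can be summarized as follows. The field ordering follows RFC 4303; the constant itself is only an illustrative annotation, not a working ESP implementation.

    ESP_TUNNEL_MODE_LAYOUT = [
        "outer IP header",    # new header added for the tunnel
        "ESP header",         # SPI and sequence number
        "inner IP header",    # original packet; encrypted from here on
        "payload data",       # user data being protected
        "ESP trailer",        # padding, pad length, next-header field
        "ESP auth data",      # integrity check value (e.g., HMAC with MD5/SHA-1)
    ]
    print(" | ".join(ESP_TUNNEL_MODE_LAYOUT))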
The security mechanism of the IPsec comprises authentication and encryption, wherein the authentication mechanism enables a data receiver of IP communication to confirm the real identity of a data sender and whether data is tampered in the transmission process; the encryption mechanism guarantees the confidentiality of data by carrying out encryption operation on the data so as to prevent the data from being intercepted in the transmission process.
IPsec security services include data confidentiality, data integrity, data-origin authentication, and anti-replay protection.
Data confidentiality means the IPsec sender encrypts packets before transmitting them over the network; data integrity lets the IPsec receiver authenticate packets sent by the sender to ensure the data was not tampered with in transit; data-origin authentication lets the IPsec receiver verify that the sending end of an IPsec packet is legitimate; and anti-replay lets the IPsec receiver detect and reject stale or duplicated packets.
3. Data message
A message is a unit of data exchanged and transmitted in the network, i.e., a network transmission unit that contains the complete data to be sent. During transmission a message is successively encapsulated into packets and frames; encapsulation prepends a header composed of control information, called the message header.
4. RSS splitting
RSS (receive-side scaling) is a network-card driver technology that efficiently distributes received packets among the multiple processing units (e.g., CPUs) of a multiprocessor system. With a multi-queue network card and driver support, the headers of incoming packets are hashed to spread received traffic across multiple processors, and each queue is bound to a different core via its interrupt. The network card parses each received packet to obtain its IP addresses, protocol, and 4-tuple; computes a hash value over a specific header field (e.g., the 4-tuple) using the configured hash function; and uses the least significant bits (LSBs) of the hash value as an index into the RETA (redirection table) to determine the processing unit that will handle the packet.
A network card can support multiple receive and transmit queues (multi-queue). When receiving a packet stream, the card can steer different packets to different queues for distributed processing across different CPUs. Through a filter, the card assigns each packet to one of a small number of flows; the packets of each flow are handled in a separate receive queue, and the queues are processed in turn by the CPUs. This is referred to as receive-side scaling (RSS).
The driver of a multi-queue network card provides a kernel module parameter for specifying the number of hardware queues; for example, the bnx2x driver uses the parameter num_queues. If the device supports enough queues, an RSS configuration assigns one receive queue per CPU, or at least one receive queue per memory domain, a memory domain being a set of CPUs that share a particular memory level (e.g., L1, L2, or a NUMA node).
The indirection table of an RSS device is set up when the driver initializes; the default mapping distributes the queues evenly across the table. The indirection table can be viewed or modified at run time using ethtool (--show-rxfh-indir and --set-rxfh-indir), and modifying it allows different queues to be given different weights.
Each receive queue has a separate IRQ (interrupt number), through which the NIC notifies a CPU when a new packet arrives on that queue. PCIe devices use MSI-X to route each interrupt to a particular CPU; the effective queue-to-IRQ mapping is shown by /proc/interrupts. In the default setting an interrupt can be processed by any CPU, because a non-negligible part of packet processing takes place in the receive interrupt handler.
RSS should be enabled when low latency is a concern or when receive interrupt processing becomes a bottleneck; spreading the load between CPUs shortens the queues. For a low-latency network, as many queues as there are CPUs can be created. The most efficient configuration uses the smallest number of queues at which no queue overflows, because with interrupt coalescing enabled by default, the total number of interrupts grows with each additional queue.
RSS shunting is what lets a network card make good use of a multi-core processor: a card with multiple RSS queues can assign different network connections to different queues and hand those queues to different CPU cores, spreading the load to achieve load balancing and fully exploiting the capacity of the multi-core processor.
5. Control plane and forwarding plane
The control plane and forwarding plane of a network device may be separated physically or logically; in core switches and core routers, for example, they may be physically separated. The processing unit (e.g., a CPU) on the main control board is not responsible for packet forwarding and is dedicated to system control, while the processing units (e.g., CPUs) on the service boards are dedicated to forwarding data packets.
The control plane refers to the portion of the system used to transmit instructions, compute table entries. Such as protocol packet forwarding, protocol table entry calculation, maintenance, etc., all belong to the category of the control plane. For example, in a routing system, the process responsible for route protocol learning and route table entry maintenance belongs to the control plane.
The forwarding plane refers to a part of the system used for encapsulating and forwarding data packets. Such as receiving, decapsulating, encapsulating, forwarding, etc. of data packets, fall within the scope of the forwarding plane. For example, after the system receives an IP packet, it needs to perform decapsulation, lookup of a routing table, forwarding from an egress interface, and the like, and a process responsible for the above actions in the system belongs to a forwarding plane.
After the control plane of the system carries out protocol interaction and route calculation, a plurality of table entries are generated and sent to the forwarding plane to guide the forwarding plane to forward the message. For example: the router establishes a routing table entry through an OSPF protocol, and further generates a Forwarding Information Base (FIB) table, a fast forwarding table and the like to guide the system to forward the IP packet.
It should be noted that, in the embodiments of this application, the terms packet, data packet, and message have the same meaning and may be used interchangeably.
Fig. 1 shows a schematic diagram of a network device to which the embodiment of the present application is applied. The network device 100 shown in fig. 1 includes a network card 110, and a plurality of processing units 120.
The network card 110 has an RSS shunting function: it performs a hash calculation on each received network packet based on a 3-tuple or 5-tuple, completing the hardware shunting task by steering packets with different hash values to different processing units.
Each processing unit 120 corresponds to a different receive queue. The receive queue stores received packets awaiting processing, and the processing unit fetches packets from its queue and processes them, for example by forwarding them.
It should be noted that the processing unit may be a core in a processor, and the processing unit is also called a forwarding core. The plurality of processing units may also be independent processors, which is not limited in this embodiment of the present application.
The RSS shunting technique computes a hash value over a pre-specified packet field (e.g., a 5-tuple in the packet header) and then selects the processing unit from the computed hash value. When a large number of packets are encapsulated with the tunnel encapsulation technique, their tunneling protocol headers, or the specific fields (e.g., the 5-tuple) within those headers, are identical; under the RSS technique described above, the identical fields cause all the encapsulated packets to be forwarded to, and processed by, the same processing unit. This can leave some processing units in the network device overloaded and others underloaded, unbalancing the load among the multiple processing units.
To avoid this problem, embodiments of this application provide a packet shunting scheme for multiple processing units: after a first packet has been shunted to a first processing unit according to its tunneling protocol header, if that unit cannot process the packet, the packet can be shunted again according to its inner header, which helps correct the load imbalance among the multiple processing units of the network device.
For ease of understanding, the RSS splitting policy applicable to the embodiment of the present application is described in conjunction with table 1. The format of the IPSec packet encapsulated based on the tunneling protocol is described with reference to fig. 2.
The indirection table shown in Table 1 has 256 entries and is configured with 8 receive queues (Q), queues 0 through 7 repeating cyclically across the table.
Table 1: Indirection table
(Table 1 appears in the original as an image: an indirection table mapping hash-LSB index values to receive queues Q0 through Q7.)
RSS shunting requires configuring the network card's RSS hash key fields in advance; for example, the packet's source IP, source port, destination IP, and destination port serve as the hash key. The network card hardware computes a hash value from the hash key and takes its LSBs according to the size of the indirection table: with a 128-entry table, the low 7 bits are taken. The low-7-bit values 0 to 127 map to the receive queue numbers specified in the indirection table, and each receive queue is bound to a processing unit, so that the multiple processing units receive and send the flows corresponding to different hash values in parallel.
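A minimal sketch of this lookup, assuming the 128-entry table and the even default spread described above; in hardware the hash value would come from the NIC rather than software.

    RETA_SIZE = 128                  # indirection-table entries, 7-bit index
    NUM_QUEUES = 8
    reta = [i % NUM_QUEUES for i in range(RETA_SIZE)]   # Q0..Q7 repeating

    def queue_for_hash(hash_value):
        lsb = hash_value & (RETA_SIZE - 1)   # take the low 7 bits (0-127)
        return reta[lsb]                     # receive queue bound to a unit

    print(queue_for_hash(0x9E3779B9))   # example hash value -> queue index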
Hereinafter, the message format of the present application will be briefly described.
Currently, IPSec packets are mainly tunnel-encapsulated using generic routing encapsulation (GRE): a GRE tunnel can encapsulate multicast and broadcast packets into unicast packets, which are then encrypted with IPSec.
A GRE tunnel protected by IPSec can be used for VPN deployment: packets encrypted with IPSec are encapsulated in the tunnel and then transmitted through it, an approach called GRE over IPsec. A dynamic routing protocol runs over the virtual tunnel established by GRE; packets are filtered by configuring access control lists (ACLs), quality of service (QoS), and other features on the virtual tunnel's interface device; and IPSec protects the data carried in the GRE tunnel. The specific encapsulation format of such a packet is shown in fig. 2.
A schematic flow chart of a packet offloading method of a network device according to an embodiment of the present application is described below with reference to fig. 3. The method shown in FIG. 3 may be performed by a network device in which the multi-processing unit is located, such as the network device shown in FIG. 1. The first processing unit may be any one of the multi-processing units. The method shown in fig. 3 includes steps 310 to 330.
A first message is received 310.
The first packet is a packet encapsulated by a tunneling protocol and may include two headers: a tunneling protocol header and an inner header. The tunneling protocol header indicates that the first packet is transmitted through the tunnel, and the inner header indicates that the first packet is transmitted based on the IPSec protocol. A possible packet format is shown in fig. 2.
And 320, according to the tunneling protocol header of the first packet, shunting the first packet to the first processing unit for forwarding.
Optionally, based on the RSS splitting policy introduced above, a specific field is selected from a header of a tunneling protocol, and the first packet is split to the first processing unit according to the specific field.
It should be noted that the specific field may be a 5-tuple, a 4-tuple, or a 2-tuple in the tunneling protocol header, which is not limited in this embodiment of the application.
330, if the first processing unit cannot process the first packet, re-shunting the first packet according to the inner layer packet header of the first packet.
It should be noted that there are many reasons why the first processing unit may be unable to process the first packet; specifically, the first processing unit may meet a preset condition, for example it is overloaded, or its computational load meets a preset threshold. For the specific conditions, see the description of case #A2 under the preset conditions below; for brevity they are not repeated here.
Optionally, the step 330 includes: and shunting the first message to other processing units except the first processing unit in the multi-processing unit again according to the inner layer message header of the first message.
The preset conditions of the packet shunting method in this embodiment are described in detail below. When the network device contains a multi-core processor, the control unit may be a control core of that processor and a processing unit may be a forwarding core. When the network device contains multiple processors, the control unit below may be the processor used for control among them, and a processing unit may be a processor used for forwarding.
Case # A1
When the CPU occupancy of processing unit #1 is below the first threshold, the control unit determines from the load information of processing unit #1 that it should use the RSS shunting mode; that is, the packet stream can be decrypted, decapsulated, and then forwarded directly on processing unit #1.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from its load information that processing unit #1 uses the RSS shunting mode.
Alternatively, when the port traffic of processing unit #1 is below the second threshold, the control unit determines from the load information of processing unit #1 that it should use the RSS shunting mode; that is, the packet stream can be decrypted, decapsulated, and then forwarded directly on that processing unit.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from its load information that processing unit #1 uses the RSS shunting mode.
Case # A2
When the CPU occupancy of processing unit #1 is above the first threshold, the control unit determines from the load information of processing unit #1 that it should use the secondary shunting mode.
For example, when the CPU occupancy of processing unit #1 is above 90%, the control unit determines from its load information that processing unit #1 uses the secondary shunting mode.
Alternatively, when the port traffic of processing unit #1 is above the second threshold, the control unit determines from the load information of processing unit #1 that it should use the secondary shunting mode.
For example, when the port traffic of processing unit #1 is above 50%, the control unit determines from its load information that processing unit #1 uses the secondary shunting mode.
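The two cases reduce to a simple threshold test; a hedged sketch follows, where the 90% and 50% figures are the examples given above and the function name is hypothetical.

    def choose_shunting_mode(cpu_occupancy, port_traffic,
                             cpu_threshold=0.90, traffic_threshold=0.50):
        """Case #A2 -> secondary shunting; case #A1 -> plain RSS."""
        if cpu_occupancy > cpu_threshold or port_traffic > traffic_threshold:
            return "secondary"   # re-shunt on the inner header
        return "rss"             # forward directly on the chosen unit

    print(choose_shunting_mode(0.95, 0.30))   # -> "secondary"
    print(choose_shunting_mode(0.40, 0.20))   # -> "rss"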
The shunting in step 330 may be implemented as hardware shunting performed by the network card or as software shunting performed by a software module; this embodiment does not limit the choice. Some network devices support shunting only packets received from outside and cannot process packets looped back inside the device through an internal physical channel; that is, when the network card does not support the secondary shunting shown in step 330, the packet can be shunted in software. Of course, if the network card does support the secondary shunting shown in step 330, the network-card-based hardware shunting scheme can be used directly, without adding a software module to implement software-based packet shunting.
It should be noted that, if the network card supports software loopback and hardware shunting, the packet may be sent back to the network card using the card's internal loopback mode, and the network card performs the secondary shunting according to a specific field of the original packet (the inner packet), i.e., it sends the decapsulated packet directly to the other processing units.
If the network card does not support software loopback, the secondary shunting can be done in software: the packet is sent to the shunting module, which performs the secondary shunting according to a specific field of the original packet, i.e., it sends the decapsulated packet directly to the other processing units.
In an embodiment of the present application, a control plane of the network device may include a scheduling analysis module, configured to perform traffic monitoring on a tunnel encapsulation packet flow entering a tunnel, and determine a flow splitting mode of the packet flow according to a load level of a processing unit.
For example, when the CPU occupancy of processing unit #1 is below 90%, the control unit determines from the unit's load information that processing unit #1 uses the RSS shunting mode.
For example, when the port traffic of processing unit #1 is below 50%, the control unit determines from the unit's load information that processing unit #1 uses the RSS shunting mode.
For example, when the CPU occupancy of processing unit #1 is above 90%, the control unit determines from the unit's load information that processing unit #1 uses the secondary shunting mode.
For example, when the port traffic of processing unit #1 is above 50%, the control unit determines from the unit's load information that processing unit #1 uses the secondary shunting mode.
Optionally, the secondary shunting mode includes a hardware shunting mode and a software shunting mode.
A schematic flow chart of a packet offloading method according to an embodiment of the present application is described below with reference to fig. 4, where the method shown in fig. 4 is executed by a network device, and the network device includes a network card, a plurality of forwarding cores, and a control core.
At S501, a network card (e.g., an Ethernet card) receives a first packet.
In S502, the network card forwards the first packet to a first forwarding core of the multiple forwarding cores.
The network card may determine, through a hash computation, the first forwarding core that will process the first packet.
In S503, the first forwarding core reports the encrypted packet traffic of the first tunnel and the load of the first forwarding core to the control core, where the first tunnel is a tunnel used for transmitting the first packet.
It should be noted that S503 may be executed after the first forwarding core receives the first packet, or the first forwarding core may report to the control core periodically; this embodiment does not limit the timing.
In S504, the control core generates a scheduling rule based on the encrypted packet traffic of the first tunnel and the load of the first forwarding core.
The above scheduling rules may indicate different scheduling modes in different situations, which are specifically described below with reference to three situations.
In the first scheduling mode, when the first forwarding core is overloaded and the network card supports the secondary shunting mode, the scheduling rule instructs the network card to perform secondary shunting according to the inner packet of the first packet.
In the second scheduling mode, when the first forwarding core is overloaded and the network card does not support the secondary shunting mode, the scheduling rule instructs the shunting module to perform secondary shunting according to the inner packet of the first packet.
In the third scheduling mode, when the first forwarding core is not overloaded, the scheduling rule instructs the first forwarding core to forward the first packet directly.
It should be noted that, in the third scheduling mode, a specific process of forwarding the first packet by the first forwarding core may refer to a conventional packet forwarding process, and for brevity, details are not described herein again.
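Putting the three modes together, the control core's rule generation in S504 might look like the following sketch; the mode labels are hypothetical names, since the text only describes the three conditions.

    def make_scheduling_rule(core_overloaded, nic_supports_secondary_shunt):
        if not core_overloaded:
            return "forward_directly"        # third scheduling mode
        if nic_supports_secondary_shunt:
            return "nic_secondary_shunt"     # first mode: hardware re-shunt
        return "module_secondary_shunt"      # second mode: shunting module

    print(make_scheduling_rule(core_overloaded=True,
                               nic_supports_secondary_shunt=False))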
The load information of the first forwarding core may be CPU occupancy of the forwarding core, port traffic information of the forwarding core, and the like.
Case # A1
When the CPU occupancy of the forwarding core #1 is smaller than the first threshold, the control core determines that the forwarding core #1 adopts the RSS splitting mode according to the load information of the forwarding core, that is, the message stream can be directly forwarded on the forwarding core after being decrypted and decapsulated.
For example, when the CPU occupancy of the forwarding core #1 is less than 90%, the control core determines that the forwarding core #1 adopts the RSS splitting mode according to the load information of the forwarding core.
Or, when the port flow of the forwarding core #1 is smaller than the second threshold, the control core determines that the forwarding core #1 adopts the RSS splitting mode according to the load information of the forwarding core, that is, the message stream can be directly forwarded on the forwarding core after being decrypted and decapsulated.
For example, when the port traffic of the forwarding core #1 is less than 50%, the control core determines that the forwarding core #1 adopts the RSS splitting mode according to the load information of the forwarding core.
Case # A2
And when the CPU occupancy rate of the forwarding core #1 is greater than a first threshold value, the control core determines that the forwarding core #1 adopts a secondary shunting mode according to the load information of the forwarding core.
For example, when the CPU occupancy of the forwarding core #1 is greater than 90%, the control core determines that the forwarding core #1 adopts the secondary offload mode according to the load information of the forwarding core.
Or, when the port traffic of the forwarding core #1 is greater than the second threshold, the control core determines that the forwarding core #1 adopts the secondary offloading mode according to the load information of the forwarding core.
For example, when the port traffic of the forwarding core #1 is greater than 50%, the control core determines that the forwarding core #1 adopts the secondary offload mode according to the load information of the forwarding core.
At S505, the control core transmits the scheduling rule to the first forwarding core.
At S506, the first forwarding core decapsulates the first packet so that the first packet can subsequently be secondarily shunted according to its inner header.
In S507, if the scheduling rule indicates that the network card performs secondary shunting on the first packet, the first forwarding core forwards the decapsulated first packet to the network card; and if the dispatching rule indicates that the shunting module performs secondary shunting on the first message, the first forwarding core forwards the decapsulated first message to the shunting module.
It should be noted that, after the first packet is forwarded to the network card, the network card performs secondary shunting on the first packet based on the inner layer packet header, and specifically, may perform hash calculation on the inner layer packet, and re-determine the forwarding core. After the first packet is forwarded to the distribution module, the distribution module performs secondary distribution on the first packet based on the inner layer packet header, specifically, hash calculation may be performed on the inner layer packet, and the forwarding core is determined again.
It should be further noted that, after the forwarding core is re-determined, the decapsulated first packet may be transmitted to the re-determined forwarding core in a conventional packet splitting manner. For example, the network card may send the decapsulated first packet to the packet receiving module, and the packet receiving module forwards the decapsulated first packet to the re-determined forwarding core. For another example, the offloading module may send the decapsulated first packet to the packet receiving module, and the packet receiving module forwards the decapsulated first packet to the re-determined forwarding core.
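The re-determination step common to both paths can be sketched as below. This is an assumption consistent with the "other processing units except the first" implementation; the hash and the core representation are placeholders.

    def pick_new_core(inner_header, cores, overloaded_core):
        """Hash the inner header over the cores other than the overloaded one."""
        candidates = [c for c in cores if c != overloaded_core]
        return candidates[hash(inner_header) % len(candidates)]

    new_core = pick_new_core(("192.168.1.5", "192.168.2.9", 443, 51000),
                             cores=list(range(8)), overloaded_core=0)
    print(new_core)   # the decapsulated packet is then delivered to this core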
The method of the embodiment of the present application is described above with reference to fig. 1 to 4, and the apparatus of the embodiment of the present application is described below with reference to fig. 5 to 6. It should be noted that the apparatuses shown in fig. 5 to fig. 6 can implement the steps in the above method, and are not described herein again for brevity.
Fig. 5 is a schematic diagram of a network device according to an embodiment of the present application, where the network device includes multiple processing units, and the network device 500 shown in fig. 5 may include a receiving module 510 and a processing module 520.
The network device receives a first message through a receiving module 510;
the network device shunts the first packet to the first processing unit for forwarding through a processing module 520 according to a tunneling protocol packet header of the first packet;
if the first processing unit cannot process the first packet, the network device shunts the first packet again through the processing module 520 according to the inner layer packet header of the first packet.
Optionally, as an embodiment, the network device is further configured to: acquire, through the processing module 520, a first scheduling rule for the first packet; determine, through the processing module 520 and according to the first scheduling rule, that the network card of the network device is to re-shunt the first packet according to its inner header; and re-shunt the first packet through the network card based on the inner header of the first packet.
Optionally, as an embodiment, the network device is further configured to: acquire, through the processing module 520, a second scheduling rule for the first packet; determine, through the processing module 520 and according to the second scheduling rule, that the shunting module of the network device is to re-shunt the first packet according to its inner header; and re-shunt the first packet through the shunting module based on the inner header of the first packet.
Optionally, as an embodiment, if the load of the first processing unit meets a preset condition, the network device shunts the first packet again through the processing module 520 according to the inner layer packet header of the first packet.
Optionally, as an embodiment, the network device shunts the first packet to other processing units except the first processing unit in the multi-processing unit again through the processing module 520 according to the inner layer packet header of the first packet.
Optionally, as an embodiment, the network device shunts the first packet based on a receiving-end extended RSS policy according to an inner-layer packet header of the first packet through the processing module 520.
In an alternative embodiment, the processing module 520 may be a part of the plurality of processing cores 620, and the receiving module 510 may be an input/output interface 630. The network device 600 may also include a memory 610, as shown in particular in fig. 6.
Fig. 6 is a schematic block diagram of a network device of an embodiment of the present application. The network device 600 shown in fig. 6 may include: a memory 610, a processor 620, and an input/output interface 630. The memory 610, the processor 620, and the input/output interface 630 are connected through a communication link; the memory 610 stores program instructions, and the processor 620 executes the program instructions stored in the memory 610 to control the input/output interface 630 to receive input data and information and to output data such as operation results.
The processor 620 includes multiple cores. Each core may correspond to a processing unit as described above.
It should be understood that, in the embodiment of the present application, the processor 620 may adopt a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, for executing a relevant program to implement the technical solutions provided in the embodiments of the present application.
The memory 610 may include a read-only memory and a random access memory, and provides instructions and data to the processor 620. A portion of processor 620 may also include non-volatile random access memory. For example, the processor 620 may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 620 or by instructions in the form of software. The method disclosed in the embodiments of the present application may be carried out directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 610; the processor 620 reads the information in the memory 610 and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not repeated here.
It should be understood that in the embodiments of the present application, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that in embodiments of the present application, the memory may comprise both read-only memory and random access memory, and may provide instructions and data to the processor. A portion of the processor may also include non-volatile random access memory. For example, the processor may also store information of the device type.
It should also be understood that in the embodiments of the present application, "first", "second", "third", and the like are merely for distinction and are not limited in sequence.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the described apparatus embodiments are merely illustrative: the division into units is only a logical functional division, and other divisions may be used in practice; multiple units or components may be combined or integrated into another system; or some features may be omitted or not performed. In addition, the shown or discussed mutual couplings, direct couplings, or communication connections may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for message distribution of a network device, the network device comprising multiple processing units, the method being performed by the network device and comprising:
receiving a first message;
distributing, according to a tunnel protocol header of the first message, the first message to a first processing unit of the multiple processing units for forwarding; and
if the first processing unit cannot process the first message, re-distributing the first message according to an inner header of the first message.
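Viewed as pseudocode, claim 1 describes a two-stage distribution. The following minimal Python sketch is illustrative only; the Unit class, its can_process predicate, the queue capacity, and the CRC32 hash are assumptions of the sketch, not limitations of the claim:

import zlib
from dataclasses import dataclass, field

@dataclass
class Unit:
    queue: list = field(default_factory=list)
    capacity: int = 64  # assumed per-unit queue limit

    def can_process(self) -> bool:
        return len(self.queue) < self.capacity

def distribute(units: list, tunnel_header: bytes, inner_header: bytes, message: bytes) -> int:
    # Stage 1: distribute by the tunnel protocol header of the message.
    first = zlib.crc32(tunnel_header) % len(units)
    if units[first].can_process():
        units[first].queue.append(message)
        return first
    # Stage 2: the first processing unit cannot process the message, so
    # re-distribute it according to the inner header instead.
    target = zlib.crc32(inner_header) % len(units)
    units[target].queue.append(message)
    return target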
2. The method of claim 1, wherein the re-distributing the first message according to the inner header of the first message if the first processing unit cannot process the first message comprises:
re-distributing the first message according to the inner header of the first message if a load of the first processing unit meets a preset condition.
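One possible "preset condition" of claim 2 is a queue-depth watermark; the 80% threshold below is purely an assumption for illustration:

def load_meets_preset_condition(queue_len: int, capacity: int, watermark: float = 0.8) -> bool:
    # True when the first processing unit's queue reaches the assumed watermark.
    return queue_len >= capacity * watermark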
3. The method of claim 1 or 2, further comprising:
acquiring a first scheduling rule for the first message; and
determining, according to the first scheduling rule, that a network card of the network device is to re-distribute the first message based on the inner header of the first message;
wherein the re-distributing the first message according to the inner header of the first message comprises:
re-distributing the first message through the network card based on the inner header of the first message.
4. The method of any one of claims 1-3, further comprising:
acquiring a second scheduling rule for the first message; and
determining, according to the second scheduling rule, that a distribution module of the network device is to re-distribute the first message based on the inner header of the first message;
wherein the re-distributing the first message according to the inner header of the first message comprises:
re-distributing the first message through the distribution module based on the inner header of the first message.
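Claims 3 and 4 differ only in where the second-stage distribution runs: the network card (hardware) or a software distribution module. A hedged sketch, in which the Engine enum, the hash, and the least-loaded software policy are all illustrative assumptions rather than claimed features:

import zlib
from enum import Enum, auto

class Engine(Enum):
    NETWORK_CARD = auto()         # claim 3: the network card re-distributes
    DISTRIBUTION_MODULE = auto()  # claim 4: a software module re-distributes

def redistribute(engine: Engine, inner_header: bytes, loads: list) -> int:
    if engine is Engine.NETWORK_CARD:
        # Model of an RSS-style stable hash a network card could compute
        # over the inner header.
        return zlib.crc32(inner_header) % len(loads)
    # A software module may consult richer state; picking the least-loaded
    # unit here is an illustrative policy, not one claimed by the patent.
    return loads.index(min(loads))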
5. The method of any one of claims 1-4, wherein the re-distributing the first message according to the inner header of the first message comprises:
re-distributing the first message, according to the inner header of the first message, to a processing unit other than the first processing unit among the multiple processing units.
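For claim 5, the re-distribution target is drawn only from the processing units other than the first one. A minimal sketch, with an assumed hash and naming, requiring at least two processing units:

import zlib

def redistribute_excluding_first(inner_header: bytes, num_units: int, first: int) -> int:
    # Hash the inner header onto the processing units other than the first.
    others = [u for u in range(num_units) if u != first]
    return others[zlib.crc32(inner_header) % len(others)]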
6. A network device comprising multiple processing units including a first processing unit, the network device being configured to:
receive a first message;
distribute, according to a tunnel protocol header of the first message, the first message to the first processing unit for forwarding; and
if the first processing unit cannot process the first message, re-distribute the first message according to an inner header of the first message.
7. The network device of claim 6, further configured to:
re-distribute the first message according to the inner header of the first message if a load of the first processing unit meets a preset condition.
8. The network device of claim 6 or 7, further configured to:
acquire a first scheduling rule for the first message;
determine, according to the first scheduling rule, that a network card of the network device is to re-distribute the first message based on the inner header of the first message; and
re-distribute the first message through the network card based on the inner header of the first message.
9. The network device of any one of claims 6-8, further configured to:
acquire a second scheduling rule for the first message;
determine, according to the second scheduling rule, that a distribution module of the network device is to re-distribute the first message based on the inner header of the first message; and
re-distribute the first message through the distribution module based on the inner header of the first message.
10. The network device of any one of claims 6-9, wherein the multiple processing units are further configured to:
re-distribute the first message, according to the inner header of the first message, to a processing unit other than the first processing unit among the multiple processing units.
CN201910423117.3A 2019-05-21 2019-05-21 Message distribution method and device of network equipment Active CN111988211B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210934791.XA CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN201910423117.3A CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910423117.3A CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210934791.XA Division CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Publications (2)

Publication Number Publication Date
CN111988211A true CN111988211A (en) 2020-11-24
CN111988211B CN111988211B (en) 2022-09-09

Family

ID=73435839

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210934791.XA Active CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment
CN201910423117.3A Active CN111988211B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210934791.XA Active CN115174482B (en) 2019-05-21 2019-05-21 Message distribution method and device of network equipment

Country Status (1)

Country Link
CN (2) CN115174482B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7646708B2 (en) * 2005-08-01 2010-01-12 Hewlett-Packard Development Company, L.P. Network resource teaming combining receive load-balancing with redundant network connections
CN102055679B (en) * 2011-01-28 2012-09-19 中国人民解放军国防科学技术大学 Message scheduling method for power consumption control in forwarding engine
US9820182B2 (en) * 2013-07-12 2017-11-14 Telefonaktiebolaget Lm Ericsson (Publ) Method for enabling control of data packet flows belonging to different access technologies
US9960958B2 (en) * 2014-09-16 2018-05-01 CloudGenix, Inc. Methods and systems for controller-based network topology identification, simulation and load testing
CN107231269B (en) * 2016-03-25 2020-04-07 阿里巴巴集团控股有限公司 Accurate cluster speed limiting method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1953420A (en) * 2006-09-26 2007-04-25 杭州华为三康技术有限公司 A method to forward the channel message and network device
CN1937591A (en) * 2006-11-02 2007-03-28 杭州华为三康技术有限公司 Multi-core processor for realizing adaptive dispatching and multi-core processing method
EP2712128A1 (en) * 2011-07-06 2014-03-26 Huawei Technologies Co., Ltd. Message processing method and related device thereof
CN102624611A (en) * 2011-12-31 2012-08-01 成都市华为赛门铁克科技有限公司 Method, device, processor and network equipment for message dispersion
CN104468391A (en) * 2014-12-16 2015-03-25 盛科网络(苏州)有限公司 Method and system for achieving load balance according to user information of tunnel message
CN106209664A (en) * 2016-07-22 2016-12-07 迈普通信技术股份有限公司 A kind of data transmission method, Apparatus and system
CN108694087A (en) * 2017-03-31 2018-10-23 英特尔公司 Dynamic load balancing in a network interface card for optimal system-level performance
CN108270699A (en) * 2017-12-14 2018-07-10 中国银联股份有限公司 Message processing method, shunting interchanger and converging network
CN108737239A (en) * 2018-08-30 2018-11-02 新华三技术有限公司 A kind of message forwarding method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112953841A (en) * 2021-02-20 2021-06-11 杭州迪普信息技术有限公司 Message distribution method and system
CN112953841B (en) * 2021-02-20 2022-05-27 杭州迪普信息技术有限公司 Message distribution method and system
CN113157445A (en) * 2021-03-30 2021-07-23 郑州信大捷安信息技术股份有限公司 Bidirectional message symmetric RSS processing method and system based on Hash operation and index value comparison
CN113157445B (en) * 2021-03-30 2022-04-08 郑州信大捷安信息技术股份有限公司 Bidirectional message symmetric RSS processing method and system based on Hash operation and index value comparison
CN116055397A (en) * 2023-03-27 2023-05-02 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device
CN116055397B (en) * 2023-03-27 2023-08-18 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device

Also Published As

Publication number Publication date
CN111988211B (en) 2022-09-09
CN115174482A (en) 2022-10-11
CN115174482B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US11792046B2 (en) Method for generating forwarding information, controller, and service forwarding entity
JP6288802B2 (en) Improved IPsec communication performance and security against eavesdropping
EP3387812B1 (en) Virtual private network aggregation
US7835285B2 (en) Quality of service, policy enhanced hierarchical disruption tolerant networking system and method
US10601610B2 (en) Tunnel-level fragmentation and reassembly based on tunnel context
US12107834B2 (en) Multi-uplink path quality aware IPsec
CN111988211B (en) Message distribution method and device of network equipment
US20160043996A1 (en) Secure path determination between devices
US20230118718A1 (en) Handling multipath ipsec in nat environment
US20220394017A1 (en) Ipsec processing on multi-core systems
US12113773B2 (en) Dynamic path selection of VPN endpoint
US20190372948A1 (en) Scalable flow based ipsec processing
US20220393981A1 (en) End-to-end qos provisioning for traffic over vpn gateway
CN113365267A (en) Communication method and device
EP4248621A1 (en) Multi-uplink path quality aware ipsec
CN106209401B (en) A kind of transmission method and device
KR101922980B1 (en) Network device and packet transmission method of the network device
Tennekoon et al. Prototype implementation of fast and secure traceability service over public networks
EP2996291B1 (en) Packet processing method, device, and system
CN111669374B (en) Encryption and decryption performance expansion method for single tunnel software of IPsec VPN
CN113965518A (en) Message processing method and device
WO2023040782A1 (en) Message processing method and system, and device and storage medium
CN114039795B (en) Software defined router and data forwarding method based on same
WO2024027419A1 (en) Packet sending method, apparatus and system
CN117041156A (en) Communication method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211231

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Applicant after: xFusion Digital Technologies Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant