CN114553774B - Message forwarding method, device, equipment and medium - Google Patents
- Publication number
- CN114553774B (application number CN202111599235.3A)
- Authority
- CN
- China
- Prior art keywords
- task
- scheduled
- pipeline
- message
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4633—Interconnection of networks using encapsulation techniques, e.g. tunneling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2483—Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/22—Parsing or analysis of headers
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a message forwarding method, apparatus, device, and medium. The method divides the service processing task in each pipeline into a first service processing task and a second service processing task, which reduces the possibility of congestion in service processing, avoids idle running of the CPU while it waits for the subsequent distributing and sending tasks, and reduces resource waste.
Description
Technical Field
The present invention relates to the field of packet forwarding technologies, and in particular, to a method, an apparatus, a device, and a medium for forwarding a packet.
Background
Message forwarding currently passes through network devices such as traditional routers, firewalls, and wireless products. Multiple services are mounted on these devices, and a message is forwarded only after being processed by them. These services include ones that cannot be executed in hardware, such as traffic identification, application-layer message analysis, and network address translation application level gateway (Network Address Translation Application Level Gateway, NAT ALG) services. A message that cannot be processed by hardware is forwarded to the control plane of the network device, and the control plane sends the message to the forwarding plane for service processing.
To execute services that cannot be executed in hardware, the network device adopts a software forwarding architecture. However, software forwarding occupies hardware resources such as the central processing unit (Central Processing Unit, CPU), the cache (Cache), and the memory, so hardware resources are consumed for every forwarded message; when a large number of messages need to be forwarded, hardware resource consumption rises and message forwarding is delayed.
An existing network device forwards a message with a pipeline. Fig. 1 is a schematic diagram of such a pipeline, which includes four tasks: receiving (Rx), service processing (Forward), distributing (Dispatch), and sending (Tx). Specifically, after a message is received through the network card driver, it is parsed and its type is determined; the parsed message then undergoes service processing, which includes pre-route services such as application identification, route selection, and destination internet protocol (Internet Protocol, IP) address translation, and post-route services such as source IP address translation and quality of service (Quality of Service, QoS). Service processing yields an egress; the message is distributed to that egress and sent back out through the network card driver.
In the existing pipeline, the CPU runs idle while the network device waits to execute the message distributing and message sending tasks, which wastes resources.
Disclosure of Invention
The invention provides a message forwarding method, a message forwarding device, message forwarding equipment and a message forwarding medium, which are used for solving the problem of resource waste in the prior art.
The invention provides a message forwarding method applied to a multi-core central processing unit (CPU) included in a network device, where the CPU further includes a pipeline corresponding to each network card of the network device, and each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. The method includes the following steps:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if the task to be scheduled is determined to be the sending task of the first pipeline, the task to be scheduled is called to acquire a message set to be processed from a sliding window of the task to be scheduled and sent; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
updating the task to be scheduled to the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
Further, after the adding the next task of the first pipeline to the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is the second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the method further includes, after the task to be scheduled is invoked to process the message in the message set to be processed:
transmitting the processed tunnel service message to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, the task to be scheduled is the receiving task of the first pipeline, adding the processed message into a sliding window of a next task of the first pipeline, including:
for each processed message, executing:
performing stream hash calculation according to the selected characteristic information of the message processed currently;
selecting a first business processing task from the first business processing tasks of at least two sub-pipelines of the first pipeline according to the calculation result;
and adding the current processed message into a sliding window of the selected first service processing task.
Correspondingly, the invention provides a message forwarding device, which comprises:
the acquisition module is used for acquiring tasks to be scheduled from the task scheduling sequence;
the determining module is used for determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
the processing module is used for calling the task to be scheduled to acquire a message set to be processed from a sliding window of the task to be scheduled and sending the message set to be processed if the task to be scheduled is determined to be the sending task of the first pipeline; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
And the updating module is used for updating the task to be scheduled into the next task of the task to be scheduled in the scheduling sequence, and turning to the acquisition module.
Further, the processing module is further configured to determine whether the task to be scheduled is a receiving task of the first pipeline after the adding of the next task of the first pipeline to the task scheduling sequence; if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence; and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is a second service processing task of the first pipeline, the sliding window of the task to be scheduled includes a message of a tunnel service, and the processing module is further configured to call the task to be scheduled to process the message in the message set to be processed, and then send the processed message of the tunnel service to a tunnel logic network card; and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to execute, for each processed message: performing stream hash calculation according to the selected characteristic information of the message processed currently; selecting a first business processing task from the first business processing tasks of at least two sub-pipelines of the first pipeline according to the calculation result; and adding the current processed message into a sliding window of the selected first service processing task.
Accordingly, the present invention provides an electronic device, comprising: a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory stores a computer program that, when executed by the processor, causes the processor to implement the steps of any one of the methods described above.
Accordingly, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.
The invention provides a message forwarding method, apparatus, device, and medium. The CPU to which the method is applied includes a pipeline corresponding to each network card of the network device, and each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. A task to be scheduled is acquired from the task scheduling sequence, and it is determined whether the task to be scheduled is the sending task of the first pipeline corresponding to the first pipeline identifier it carries. If it is, the task to be scheduled is invoked to acquire a message set to be processed from its sliding window and send it; if it is not, the task to be scheduled is invoked to process the messages in the message set to be processed, the processed messages are added into the sliding window of the next task of the first pipeline, and that next task is added into the task scheduling sequence. The task to be scheduled is then updated to its next task in the scheduling sequence, and the step of acquiring a task to be scheduled from the task scheduling sequence is executed again. By dividing the service processing task in each pipeline into a first service processing task and a second service processing task, the method reduces the possibility of congestion in service processing, avoids idle running of the CPU while waiting for the subsequent distributing and sending tasks, and reduces resource waste.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it will be apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a pipeline according to the prior art;
fig. 2 is a schematic process diagram of a message forwarding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a prior art pipeline in which the Forward task phase is the bottleneck;
FIG. 4 is a schematic diagram of a pipeline according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of message forwarding of a tunnel service according to an embodiment of the present invention;
- FIG. 6 is a schematic diagram of a 1-virtual-N pipeline according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a message forwarding device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to improve the message forwarding speed and avoid wasting system resources, an embodiment of the present invention provides a message forwarding method, whose process is described below.
Example 1:
fig. 2 is a schematic process diagram of a message forwarding method according to an embodiment of the present invention, where the process includes the following steps:
s201: and acquiring a task to be scheduled from the task scheduling sequence.
The message forwarding method provided by the embodiment of the present invention is applied to a multi-core central processing unit (CPU) included in a network device, where the network device may be a router, a server, a switch, a gateway, a computer, or other equipment.
The CPU includes a pipeline corresponding to each network card of the network device; that is, the number of pipelines in the CPU equals the number of network cards in the network device, and each network card corresponds to one pipeline.
In the existing pipeline, the service processing task needs to process multiple services, and these services can only be processed serially. As services increase, the number of messages waiting at the service processing task stage grows, so the service processing task stage becomes the bottleneck of the pipeline. Fig. 3 is a schematic diagram of the service processing task stage being the bottleneck of a pipeline: as shown in Fig. 3, the messages already handled by the receiving task stage (message 1, message 2, and so on) accumulate at the service processing task stage.
In order to avoid system resource waste, in the embodiment of the present invention, each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task and a sending task, where the tasks are ordered according to a task processing order, for example, when the first service processing task is a pre-route service processing task, the second service processing task is a post-route service processing task.
Fig. 4 is a schematic diagram of a pipeline provided in an embodiment of the present invention. As shown in Fig. 4, the pipeline includes a message receiving task, a pre-route service processing (Forward Ingress) task, a post-route service processing (Forward Egress) task, a message distributing task, and a message sending task.
In a task scheduling sequence stored in advance, the CPU acquires a task to be scheduled, wherein the task to be scheduled refers to any one task of a receiving task, a first service processing task, a second service processing task, a distributing task and a sending task.
S202: and determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled.
In order to process a task to be scheduled, the task to be scheduled carries a first pipeline identifier, and the CPU can determine a first pipeline corresponding to the first pipeline identifier according to the first pipeline identifier carried in the task to be scheduled, and determine whether the task to be scheduled is a sending task of the first pipeline.
S203: if the task to be scheduled is determined to be the sending task of the first pipeline, the task to be scheduled is called to acquire a message set to be processed from a sliding window of the task to be scheduled and sent; and if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence.
If the task to be scheduled is determined to be the sending task of the first pipeline, it is the last task of the first pipeline. The CPU therefore invokes the task to be scheduled, acquires the corresponding message set to be processed from the sliding window of the task to be scheduled, and sends the messages in that set.
Specifically, the CPU sends a message to be processed in the set of messages to a network card driver of the network device itself, and the network card driver forwards the message to be processed to the next network device.
If it is determined that the task to be scheduled is not the sending task of the first pipeline, it indicates that the task to be scheduled is not the last task of the first pipeline, and other tasks still exist after the task to be scheduled, where the task to be scheduled may be any one of a receiving task, a first service processing task, a second service processing task, and a distributing task in the first pipeline.
Therefore, the CPU invokes the task to be scheduled, acquires the corresponding message set to be processed from the sliding window of the task to be scheduled, and processes the messages in that set. Specifically, the messages are processed according to the specific type of the task to be scheduled: if it is the receiving task, the messages in the set are received and parsed; if it is the first service processing task, first service processing is performed on them; if it is the second service processing task, second service processing is performed on them; and if it is the distributing task, the messages are distributed.
Each processed message is then added into the sliding window of the next task of the task to be scheduled in the first pipeline, and that next task is added into the task scheduling sequence.
S204: and updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence, and turning to S201.
In order to continue with the next task of the scheduled task, in the embodiment of the present invention the task to be scheduled is updated to its next task in the task scheduling sequence, and the step of acquiring the task to be scheduled from the task scheduling sequence is executed again, so as to avoid wasting resources.
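The scheduling loop of S201 to S204 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Task` structure, the handler functions, and identifying the sending task by the absence of a next task (standing in for the first-pipeline-identifier check of S202) are all assumptions.

```python
from collections import deque

class Task:
    """One pipeline stage (hypothetical structure for illustration)."""
    def __init__(self, name, pipeline_id, handler, next_task=None):
        self.name = name                # e.g. "Rx", "ForwardIngress", ...
        self.pipeline_id = pipeline_id  # pipeline identifier carried by the task
        self.handler = handler          # stage-specific message processing
        self.next_task = next_task      # next stage in the same pipeline
        self.window = deque()           # sliding window of pending messages

def schedule_once(sequence, sent):
    """One pass over the task scheduling sequence (S201-S204)."""
    for task in list(sequence):                 # S201/S204: walk the sequence in order
        batch = list(task.window)               # acquire the message set to be processed
        task.window.clear()
        if task.next_task is None:              # S202/S203: the sending task of its pipeline
            sent.extend(batch)                  # send via the NIC driver
        elif batch:
            for msg in batch:                   # process and hand to the next stage's window
                task.next_task.window.append(task.handler(msg))
            if task.next_task not in sequence:  # add the next task to the sequence
                sequence.append(task.next_task)
```

Repeated passes drain each stage's sliding window into the next stage's window until the sending task forwards the messages, so the CPU always has a ready batch instead of idling.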
In the embodiment of the present invention, the CPU includes a pipeline corresponding to each network card of the network device, and each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. A task to be scheduled is acquired from the task scheduling sequence, and it is determined whether it is the sending task of the first pipeline corresponding to the first pipeline identifier it carries. If it is, the task to be scheduled is invoked to acquire a message set to be processed from its sliding window and send it; if it is not, the task to be scheduled is invoked to process the messages in the message set to be processed, the processed messages are added into the sliding window of the next task of the first pipeline, and that next task is added into the task scheduling sequence. The task to be scheduled is then updated to its next task in the scheduling sequence, and the step of acquiring a task to be scheduled from the task scheduling sequence is executed again. Because the method divides the service processing task in each pipeline into the first service processing task and the second service processing task, it reduces the possibility of congestion in service processing, avoids idle running of the CPU when processing the subsequent distributing and sending tasks, and reduces resource waste.
Example 2:
in order to reduce resource waste, on the basis of the foregoing embodiment, in an embodiment of the present invention, after adding the next task of the first pipeline to the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
In order to reduce resource waste, in the embodiment of the present invention it is further determined whether the task to be scheduled is the receiving task of the first pipeline. If it is, processing can start again from the first task of the first pipeline the next time a task to be scheduled is acquired from the task scheduling sequence, so the task to be scheduled is retained in the task scheduling sequence.
If the task to be scheduled is not the receiving task of the first pipeline, processing cannot start from the first task of the first pipeline through it, so the task to be scheduled is removed from the task scheduling sequence.
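As a hedged sketch, the retain-or-remove rule of this example might look like the following; the list-based scheduling sequence and the function name are illustrative assumptions, not the patent's data structures:

```python
def maintain_sequence(sequence, finished_task, receive_task):
    """After a task has handed its processed messages to the next stage,
    keep only the pipeline's receiving task resident in the scheduling
    sequence; every other finished task is removed."""
    if finished_task == receive_task:
        return sequence                 # the receiving task is retained
    if finished_task in sequence:
        sequence.remove(finished_task)  # non-receiving tasks leave once done
    return sequence
```

Keeping the receiving task resident is what lets the pipeline restart from its first stage on the next scheduling pass.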
Example 3:
in order to reduce resource waste, in the foregoing embodiments, in the embodiments of the present invention, if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, after the task to be scheduled is invoked to process the message in the set of messages to be processed, the method further includes:
transmitting the processed tunnel service message to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
In order to reduce resource waste, in the embodiment of the present invention a tunnel logical network card also exists in the network device. If the task to be scheduled is the second service processing task of the first pipeline and the sliding window of the task to be scheduled includes a tunnel service message, the tunnel service message is processed; specifically, second service processing is performed on the tunnel service message, the message is encapsulated based on the corresponding tunnel protocol, and the encapsulated tunnel service message is sent to the tunnel logical network card.
The tunnel may be a tunnel encapsulated by the multi-protocol label switching (Multi-Protocol Label Switching, MPLS) protocol, a tunnel encapsulated by the virtual extensible local area network (Virtual Extensible Local Area Network, VXLAN) protocol, a tunnel encapsulated by the generic routing encapsulation (Generic Routing Encapsulation, GRE) protocol, or a tunnel encapsulated by internet protocol version 6 (Internet Protocol Version 6, IPv6), which is not limited in the embodiment of the present invention.
After the processed tunnel service message is received through the tunnel logical network card, it is treated as a new message and added into the sliding window of the receiving task of the first pipeline.
Fig. 5 is a schematic flow chart of forwarding a tunnel service message provided in an embodiment of the present invention. As shown in Fig. 5, after the tunnel service message is received through the physical network card, it undergoes receiving task processing, pre-route service processing, and post-route service processing; after post-route service processing the message is encapsulated and sent to the tunnel logical network card; it is then received through the tunnel logical network card and, as a new message, undergoes the subsequent message receiving, pre-route service processing, post-route service processing, message distributing, and message sending.
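The re-injection path of Fig. 5 can be sketched as follows. The string-prefix "encapsulation" and the function name are purely illustrative stand-ins for real MPLS/VXLAN/GRE/IPv6 encapsulation and a real NIC interface:

```python
def reinject_tunnel_message(msg, tunnel_protocol, rx_window):
    """After the second (post-route) service processing, encapsulate the
    tunnel service message, hand it to the tunnel logical network card, and
    feed it back into the sliding window of the first pipeline's receiving
    task as a brand-new message."""
    encapsulated = f"{tunnel_protocol}({msg})"  # stand-in for protocol encapsulation
    # The tunnel logical NIC "receives" the encapsulated message and
    # re-injects it into the receiving task's sliding window:
    rx_window.append(encapsulated)
    return encapsulated
```

The encapsulated message then traverses the full pipeline a second time, exactly like a message arriving on a physical network card.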
Example 4:
in order to increase the packet forwarding speed, in the foregoing embodiments, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, adding the processed packet into a sliding window of a next task of the first pipeline, where the sliding window includes:
for each processed message, executing:
performing stream hash calculation according to the selected characteristic information of the message processed currently;
selecting a first business processing task from the first business processing tasks of at least two sub-pipelines of the first pipeline according to the calculation result;
and adding the current processed message into a sliding window of the selected first service processing task.
If each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled acquired from the task scheduling sequence is the receiving task, then when a processed message is to be placed into the sliding window of a first service processing task of the first pipeline, the message may be based on different protocols, such as the transmission control protocol (Transmission Control Protocol, TCP), the user datagram protocol (User Datagram Protocol, UDP), the internet control message protocol (Internet Control Message Protocol, ICMP), or raw internet protocol (RawIP). Messages are therefore forwarded in the form of flows of different protocols, and Table 1 lists the flow identifiers of message flows based on the different protocols according to the embodiment of the present invention.
TABLE 1
TCP flow: VRFID, source IP address, destination IP address, IP protocol, TCP source port, TCP destination port
UDP flow: VRFID, source IP address, destination IP address, IP protocol, UDP source port, UDP destination port
ICMP flow: VRFID, source IP address, destination IP address, IP protocol, ICMP ID, ICMP Type and Code
RawIP flow: VRFID, source IP address, destination IP address, IP protocol, 0x0000
As shown in Table 1, when a message is forwarded in the form of a TCP flow, its flow identifier includes the VRFID, source IP address, destination IP address, IP protocol, TCP source port, and TCP destination port; in the form of a UDP flow, the flow identifier includes the VRFID, source IP address, destination IP address, IP protocol, UDP source port, and UDP destination port; in the form of an ICMP flow, the flow identifier includes the VRFID, source IP address, destination IP address, IP protocol, ICMP ID, and ICMP Type and Code; and in the form of a RawIP flow, the flow identifier includes the VRFID, source IP address, destination IP address, IP protocol, and 0x0000.
In order to reduce resource waste, the following steps are executed for each processed message: a flow hash calculation is performed on the currently processed message according to its selected characteristic information, which may be any flow identifier of the message, to obtain a calculation result corresponding to the currently processed message.
A first service processing task is then selected from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result of the currently processed message. The first service processing task of each sub-pipeline corresponds to a different calculation result; the first service processing task of the sub-pipeline corresponding to the calculation result of the currently processed message is determined as the selected first service processing task, and the currently processed message is added to the sliding window of that selected first service processing task.
That is, the flow hash calculation is performed on the currently processed message according to any one of its flow identifiers: the first service processing task of the sub-pipeline corresponding to the flow identifier information is determined according to that information, and the currently processed message is added to the sliding window of that first service processing task. The flow identifier includes, for example, the source IP address, destination IP address, source port, and destination port.
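The dispatch step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: CRC32 stands in for whatever flow hash the forwarding plane actually uses, and the flow identifier is modeled as a plain tuple.

```python
import zlib

def select_sub_pipeline(flow_key: tuple, n_sub_pipelines: int) -> int:
    """Hash the flow identifier and pick one of the N sub-pipelines'
    first service processing tasks. Messages of the same flow always
    map to the same sub-pipeline, preserving per-flow ordering."""
    digest = zlib.crc32(repr(flow_key).encode())  # stable stand-in flow hash
    return digest % n_sub_pipelines
```

Because the hash is deterministic over the flow identifier, repeated packets of one flow land in the same sliding window, which is what keeps a flow's packets in order across the concurrent sub-pipelines.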
The at least two sub-pipelines of the first pipeline are also called a 1-virtual-N pipeline, where the number N is related to the actual scenario deployment, hardware configuration, and the like; in general, the total number of tasks on the pipelines is not more than twice the number of actual CPUs.
Fig. 6 is a schematic diagram of a 1-virtual-N pipeline according to an embodiment of the present invention. As shown in Fig. 6, the pipeline is a 1-virtual-3 pipeline, that is, three sub-pipelines run concurrently and share the same task for receiving messages.
Example 5:
Fig. 7 is a schematic structural diagram of a message forwarding device according to an embodiment of the present invention. On the basis of the foregoing embodiments, an embodiment of the present invention further provides a message forwarding device, which includes:
An obtaining module 701, configured to obtain a task to be scheduled from a task scheduling sequence;
a determining module 702, configured to determine whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
a processing module 703, configured to invoke the task to be scheduled to obtain a set of messages to be processed from a sliding window of the task to be scheduled and send the set of messages to be processed if it is determined that the task to be scheduled is a sending task of the first pipeline; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
and an updating module 704, configured to update the task to be scheduled to a task next to the task to be scheduled in the scheduling sequence, and turn to the obtaining module.
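The cooperation of the four modules above (obtain, determine, process, update) can be sketched as a simple round-robin loop. The dictionary-based task representation, window handling, and function names are simplified assumptions for illustration, not the patent's actual data structures.

```python
from collections import deque

def run_scheduler(schedule: deque, send_fn, process_fn) -> None:
    """Round-robin over the task scheduling sequence: a sending task
    drains its sliding window; any other task processes its pending
    messages, hands them to the next task's window, and schedules
    that next task. Receiving tasks stay resident in the sequence."""
    while schedule:
        task = schedule.popleft()               # acquire task to be scheduled
        if task.get("is_send"):                 # sending task of its pipeline
            send_fn(task["window"])             # fetch the pending set and send
            task["window"] = []
        else:
            for msg in task["window"]:          # process each pending message
                task["next"]["window"].append(process_fn(msg))
            task["window"] = []
            schedule.append(task["next"])       # add next task to the sequence
        if task.get("is_recv"):                 # receiving task stays resident
            schedule.append(task)
```

Note that with a resident receiving task the loop runs indefinitely, as a real forwarding plane would; the sketch terminates only once the sequence empties.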
Further, the processing module is further configured to determine whether the task to be scheduled is a receiving task of the first pipeline after the adding of the next task of the first pipeline to the task scheduling sequence; if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence; and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is a second service processing task of the first pipeline, the sliding window of the task to be scheduled includes a message of a tunnel service, and the processing module is further configured to call the task to be scheduled to process the message in the message set to be processed, and then send the processed message of the tunnel service to a tunnel logic network card; and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
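The tunnel loop-back just described can be sketched as follows; the list-based model of the tunnel logic network card and the function name are illustrative assumptions only.

```python
def loop_back_tunnel_messages(processed: list, tunnel_nic: list, recv_window: list) -> None:
    """Send processed tunnel-service messages to the tunnel logic network
    card, then add everything the card hands back into the sliding window
    of the first pipeline's receiving task for another pass."""
    for msg in processed:
        tunnel_nic.append(msg)                 # send to the tunnel logic NIC
    while tunnel_nic:
        recv_window.append(tunnel_nic.pop(0))  # NIC re-injects the message
```

The effect is that an encapsulated tunnel message traverses the pipeline a second time as if it had just been received from a physical network card.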
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task and the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to execute, for each processed message: performing a flow hash calculation according to the selected characteristic information of the currently processed message; selecting a first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and adding the currently processed message to the sliding window of the selected first service processing task.
Example 6:
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. On the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device, which includes a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with each other through the communication bus 804;
the memory 803 stores a computer program that, when executed by the processor 801, causes the processor 801 to perform the steps of:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if the task to be scheduled is determined to be the sending task of the first pipeline, the task to be scheduled is called to acquire a message set to be processed from a sliding window of the task to be scheduled and sent; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
Updating the task to be scheduled to the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
Further, the processor 801 is further configured to, after the adding the next task of the first pipeline to the task scheduling sequence, further comprise:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, the processor 801 is further configured to, if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, call the task to be scheduled to process a message in the set of messages to be processed, and then the method further includes:
transmitting the processed tunnel service message to a tunnel logic network card;
And after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task and the task to be scheduled is the receiving task of the first pipeline, the processor 801 is further configured such that adding the processed message to the sliding window of the next task of the first pipeline includes:
for each processed message, executing:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting a first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result;
and adding the currently processed message to the sliding window of the selected first service processing task.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface 802 is used for communication between the electronic device and other devices described above.
The memory may include a random access memory (Random Access Memory, RAM) or a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
Example 7:
on the basis of the above embodiments, the embodiments of the present invention also provide a computer-readable storage medium storing a computer program, the computer program being executed by a processor to:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
If the task to be scheduled is determined to be the sending task of the first pipeline, the task to be scheduled is called to acquire a message set to be processed from a sliding window of the task to be scheduled and sent; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
updating the task to be scheduled to the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
Further, after the adding the next task of the first pipeline to the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is the second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the method further includes, after the task to be scheduled is invoked to process the message in the message set to be processed:
transmitting the processed tunnel service message to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task and the task to be scheduled is the receiving task of the first pipeline, adding the processed message to the sliding window of the next task of the first pipeline includes:
for each processed message, executing:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting a first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result;
and adding the currently processed message to the sliding window of the selected first service processing task.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
Claims (10)
1. A message forwarding method, applied to a multi-core central processing unit (CPU) included in a network device, wherein the CPU further includes a pipeline corresponding to each network card included in the network device, and each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task, characterized in that the method includes:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if the task to be scheduled is determined to be the sending task of the first pipeline, the task to be scheduled is called to acquire a message set to be processed from a sliding window of the task to be scheduled and sent; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
updating the task to be scheduled to the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
2. The method of claim 1, wherein after adding the next task of the first pipeline to the task scheduling sequence, the method further comprises:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
3. The method of claim 1, wherein if the task to be scheduled is a second traffic processing task of the first pipeline, the sliding window of the task to be scheduled includes a message of a tunnel traffic, the method further comprises, after invoking the task to be scheduled to process the message in the set of messages to be processed:
transmitting the processed tunnel service message to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
4. The method according to any one of claims 1-3, wherein if each pipeline includes at least two sub-pipelines sharing the same receiving task and the task to be scheduled is the receiving task of the first pipeline, adding the processed message to a sliding window of a next task of the first pipeline includes:
for each processed message, executing:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting a first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result;
and adding the currently processed message to the sliding window of the selected first service processing task.
5. A message forwarding device, the device comprising:
the acquisition module is used for acquiring tasks to be scheduled from the task scheduling sequence;
the determining module is used for determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
the processing module is used for calling the task to be scheduled to acquire a message set to be processed from a sliding window of the task to be scheduled and sending the message set to be processed if the task to be scheduled is determined to be the sending task of the first pipeline; if the task to be scheduled is not the sending task of the first pipeline, calling the task to be scheduled to process the message in the message set to be processed, adding the processed message into a sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
And the updating module is used for updating the task to be scheduled into the next task of the task to be scheduled in the scheduling sequence, and turning to the acquisition module.
6. The apparatus of claim 5, wherein the processing module is further configured to determine whether the task to be scheduled is a receiving task of the first pipeline after the adding of the next task of the first pipeline to the task scheduling sequence; if the task to be scheduled is determined to be the receiving task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence; and if the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
7. The apparatus of claim 5, wherein if the task to be scheduled is a second service processing task of the first pipeline, the sliding window of the task to be scheduled includes a message of a tunnel service, the processing module is further configured to call the task to be scheduled to process the message in the set of messages to be processed, and then send the processed message of the tunnel service to a tunnel logic network card; and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
8. The apparatus according to any one of claims 5-7, wherein if each pipeline includes at least two sub-pipelines sharing the same receiving task and the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to, for each processed message, execute: performing a flow hash calculation according to the selected characteristic information of the currently processed message; selecting a first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and adding the currently processed message to the sliding window of the selected first service processing task.
9. An electronic device, comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to perform the message forwarding method of any of claims 1-4.
10. A computer readable storage medium, characterized in that it stores a computer program executable by a processor, which when run on the processor causes the processor to perform the message forwarding method of any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111599235.3A CN114553774B (en) | 2021-12-24 | 2021-12-24 | Message forwarding method, device, equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111599235.3A CN114553774B (en) | 2021-12-24 | 2021-12-24 | Message forwarding method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114553774A CN114553774A (en) | 2022-05-27 |
CN114553774B true CN114553774B (en) | 2023-06-16 |
Family
ID=81669532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111599235.3A Active CN114553774B (en) | 2021-12-24 | 2021-12-24 | Message forwarding method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114553774B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103237039A (en) * | 2013-05-10 | 2013-08-07 | 汉柏科技有限公司 | Message forwarding method and message forwarding device |
CN104618253A (en) * | 2015-01-22 | 2015-05-13 | 大唐移动通信设备有限公司 | Dynamically changed transmission message processing method and device |
CN106953807A (en) * | 2017-03-02 | 2017-07-14 | 北京星网锐捷网络技术有限公司 | Message forwarding method and device |
WO2019129167A1 (en) * | 2017-12-29 | 2019-07-04 | 华为技术有限公司 | Method for processing data packet and network card |
CN111209283A (en) * | 2020-01-10 | 2020-05-29 | 深圳前海微众银行股份有限公司 | Data processing method and device |
WO2021031092A1 (en) * | 2019-08-19 | 2021-02-25 | 华为技术有限公司 | Packet processing method and network device |
EP3893416A1 (en) * | 2020-04-09 | 2021-10-13 | Commissariat à l'Energie Atomique et aux Energies Alternatives | Deterministic equipment system for communication between at least one transmitter and at least one receiver, configured to statically and periodically schedule the data frames, and method for managing receiving of data frames |
- 2021-12-24 CN CN202111599235.3A patent/CN114553774B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103237039A (en) * | 2013-05-10 | 2013-08-07 | 汉柏科技有限公司 | Message forwarding method and message forwarding device |
CN104618253A (en) * | 2015-01-22 | 2015-05-13 | 大唐移动通信设备有限公司 | Dynamically changed transmission message processing method and device |
CN106953807A (en) * | 2017-03-02 | 2017-07-14 | 北京星网锐捷网络技术有限公司 | Message forwarding method and device |
WO2019129167A1 (en) * | 2017-12-29 | 2019-07-04 | 华为技术有限公司 | Method for processing data packet and network card |
WO2021031092A1 (en) * | 2019-08-19 | 2021-02-25 | 华为技术有限公司 | Packet processing method and network device |
CN111209283A (en) * | 2020-01-10 | 2020-05-29 | 深圳前海微众银行股份有限公司 | Data processing method and device |
EP3893416A1 (en) * | 2020-04-09 | 2021-10-13 | Commissariat à l'Energie Atomique et aux Energies Alternatives | Deterministic equipment system for communication between at least one transmitter and at least one receiver, configured to statically and periodically schedule the data frames, and method for managing receiving of data frames |
Non-Patent Citations (1)
Title |
---|
Zhang Xianfu; Feng Dongqin. Research and Implementation of an EPA Communication Scheduling Algorithm Based on Parallel Processing. High Technology Letters. 2009, (No. 04), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN114553774A (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11677851B2 (en) | Accelerated network packet processing | |
CN109952746B (en) | Integrating physical and virtual network functions in a business-linked network environment | |
CN110535813B (en) | Method and device for processing coexistence of kernel mode protocol stack and user mode protocol stack | |
US20200008067A1 (en) | Resource partitioning for network slices in segment routing networks | |
CN108270813B (en) | Heterogeneous multi-protocol stack method, device and system | |
JP2023553086A (en) | Method and apparatus for forwarding computing power application traffic | |
CN106878482B (en) | Network address translation method and device | |
JP5993817B2 (en) | Routing system and method in carrier network | |
Yi et al. | Gpunfv: a gpu-accelerated nfv system | |
CN110995595B (en) | Message sending method, device, storage medium and node equipment | |
CN113891396A (en) | Data packet processing method and device, computer equipment and storage medium | |
US11606418B2 (en) | Apparatus and method for establishing connection and CLAT aware affinity (CAA)-based scheduling in multi-core processor | |
CN108737239B (en) | Message forwarding method and device | |
CN114553774B (en) | Message forwarding method, device, equipment and medium | |
US20200336573A1 (en) | Network switching with co-resident data-plane and network interface controllers | |
CN111010346B (en) | Message processing method, device, storage medium and device based on dynamic routing | |
CN115801498A (en) | Vehicle-mounted Ethernet gateway system and operation method | |
CN104702505A (en) | Message transmission method and node | |
CN109818882B (en) | Method and device for executing QoS strategy | |
CN114979128A (en) | Cross-region communication method and device and electronic equipment | |
US20230040655A1 (en) | Network switching with co-resident data-plane and network interface controllers | |
CN117938767A (en) | Message forwarding method and device applied to SRv SFC system | |
CN117376232A (en) | Message transmission method, device and system | |
CN118842607A (en) | Data transmission method, data transmission device, computer equipment, storage medium and program product | |
CN114900458A (en) | Message forwarding method, device, medium and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |