CN114553774A - Message forwarding method, device, equipment and medium


Info

Publication number
CN114553774A
CN114553774A · Application CN202111599235.3A · Granted as CN114553774B
Authority
CN
China
Prior art keywords
task
scheduled
message
pipeline
processed
Prior art date
Legal status
Granted
Application number
CN202111599235.3A
Other languages
Chinese (zh)
Other versions
CN114553774B (en)
Inventor
林振彬
Current Assignee
Ruijie Networks Co Ltd
Original Assignee
Ruijie Networks Co Ltd
Priority date
Filing date
Publication date
Application filed by Ruijie Networks Co Ltd
Priority to CN202111599235.3A
Publication of CN114553774A
Application granted
Publication of CN114553774B
Legal status: Active
Anticipated expiration

Classifications

    • H04L 45/74: Routing or path finding of packets in data switching networks; address processing for routing
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 47/2483: Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
    • H04L 47/50: Queue scheduling
    • H04L 69/22: Parsing or analysis of headers
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a message forwarding method, apparatus, device, and medium.

Description

Message forwarding method, device, equipment and medium
Technical Field
The present invention relates to the field of packet forwarding technologies, and in particular, to a packet forwarding method, apparatus, device, and medium.
Background
Currently, messages are forwarded through network devices such as traditional routers, firewalls, and wireless products. Many services are attached to these devices, and a message is forwarded only after service processing. These services include work that hardware cannot perform, such as traffic identification, application-layer message analysis, and Network Address Translation Application Level Gateway (NAT ALG) processing. A message whose services cannot be processed by hardware is forwarded to the control plane of the network device, and after service processing the control plane hands the message back to the forwarding plane.
To execute the services that hardware cannot handle, network devices adopt a software forwarding architecture. Software forwarding, however, consumes hardware resources such as the Central Processing Unit (CPU), cache, and memory on every forwarded message; when a large number of messages must be forwarded, this consumption grows and message forwarding is delayed.
An existing network device forwards messages with a pipeline. Fig. 1 shows such a pipeline, which contains four tasks: receive (Rx), service processing (Forward), dispatch (Dispatch), and transmit (Tx). After a message is received through the network card driver, it is parsed to determine its type, and the parsed message then undergoes service processing, which includes pre-routing services such as application identification, routing, and destination Internet Protocol (IP) address translation, as well as post-routing services such as source IP address translation and Quality of Service (QoS). Service processing yields an egress; the message is distributed to that egress and sent back out through the network card driver.
In the existing pipeline, the CPU idles while the network device waits to execute the message distributing and message sending tasks, which wastes resources.
Disclosure of Invention
The invention provides a message forwarding method, apparatus, device, and medium to solve the resource-waste problem in the prior art.
The invention provides a message forwarding method applied to a multi-core Central Processing Unit (CPU) included in a network device. The CPU runs one pipeline corresponding to each network card of the network device, and each pipeline comprises a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. The method comprises the following steps:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if the task to be scheduled is determined to be the sending task of the first pipeline, calling the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set; if the task to be scheduled is determined not to be the sending task of the first pipeline, calling the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages into the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
and updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
Further, after adding the next task of the first pipeline to the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be the receiving task of the first pipeline, retaining the task to be scheduled in the task scheduling sequence;
and if the task to be scheduled is determined not to be the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, after the task to be scheduled is called to process a message in the message set to be processed, the method further includes:
sending the processed message of the tunnel service to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, adding the processed message into a sliding window of a next task of the first pipeline, including:
and aiming at each processed message, executing:
performing flow scatter calculation according to the selected characteristic information of the currently processed message;
selecting a first service processing task from first service processing tasks of at least two sub-pipelines of the first pipeline according to a calculation result;
and adding the currently processed message into the sliding window of the selected first service processing task.
Correspondingly, the invention provides a message forwarding device, which comprises:
the acquisition module is used for acquiring the tasks to be scheduled from the task scheduling sequence;
the determining module is used for determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
the processing module is used for calling the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set if the task to be scheduled is determined to be the sending task of the first pipeline; and if the task to be scheduled is determined not to be the sending task of the first pipeline, calling the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages into the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline into the task scheduling sequence;
and the updating module is used for updating the task to be scheduled to be the next task after the task to be scheduled in the scheduling sequence, and returning control to the acquisition module.
Further, the processing module is further configured to determine whether the task to be scheduled is the receiving task of the first pipeline after the next task of the first pipeline is added to the task scheduling sequence; if the task to be scheduled is determined to be the receiving task of the first pipeline, retain the task to be scheduled in the task scheduling sequence; and if the task to be scheduled is determined not to be the receiving task of the first pipeline, remove the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the processing module is further configured to send the processed message of the tunnel service to a tunnel logic network card after the task to be scheduled is called to process the messages in the set of messages to be processed; and after receiving the processed tunnel service message through the tunnel logic network card, add the processed tunnel service message into a sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to, for each processed packet, perform: performing flow scatter calculation according to the selected characteristic information of the currently processed message; selecting a first service processing task from first service processing tasks of at least two sub-pipelines of the first pipeline according to a calculation result; and adding the currently processed message into the sliding window of the selected first service processing task.
Accordingly, the present invention provides an electronic device comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of any of the above-described message forwarding methods.
Accordingly, the present invention provides a computer readable storage medium, in which a computer program is stored, which, when being executed by a processor, performs the steps of any of the above-mentioned message forwarding methods.
The invention provides a message forwarding method, apparatus, device, and medium. The CPU to which the method applies runs one pipeline for each network card of the network device, and each pipeline comprises a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. A task to be scheduled is obtained from the task scheduling sequence; whether the task to be scheduled is the sending task of the first pipeline corresponding to the first pipeline identifier carried by the task is determined; if so, the task is called to obtain a set of messages to be processed from its sliding window and send them; if not, the task is called to process the messages in the set, the processed messages are added to the sliding window of the next task of the first pipeline, and that next task is added to the task scheduling sequence; the task to be scheduled is then updated to the next task after it in the scheduling sequence, and the step of obtaining a task to be scheduled from the task scheduling sequence is executed again. By splitting the service processing work of each pipeline into a first service processing task and a second service processing task, the method reduces the chance that service processing becomes congested and keeps the CPU from idling while the subsequent distributing and sending tasks wait, thereby reducing resource waste.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a pipeline provided by the prior art;
fig. 2 is a process diagram of a message forwarding method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a Forward task stage becoming a bottleneck in a pipeline according to the prior art;
FIG. 4 is a schematic diagram of a pipeline provided by an embodiment of the present invention;
fig. 5 is a schematic flowchart of a packet forwarding process of a tunnel service according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a pipeline with one receiving task virtualized into N sub-pipelines ("1 virtual N") according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a message forwarding apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
To increase the message forwarding speed and avoid wasting system resources, an embodiment of the present invention provides a message forwarding method, whose process is described below.
Example 1:
fig. 2 is a schematic process diagram of a message forwarding method according to an embodiment of the present invention, where the process includes the following steps:
s201: and acquiring the task to be scheduled from the task scheduling sequence.
The message forwarding method provided by the embodiment of the invention is applied to a multi-core Central Processing Unit (CPU) included in network equipment, wherein the network equipment can be equipment such as a router, a server, a switch, a gateway, a computer and the like.
The CPU includes a pipeline corresponding to each network card included in the network device, that is, the number of the network cards included in the network device is the same as the number of the pipelines included in the CPU, and each network card corresponds to one pipeline.
In the existing pipeline, the service processing task must handle multiple services, and these services can only be processed serially; as services are added, the messages pending in the service processing task stage accumulate, and that stage becomes the bottleneck of the pipeline. Fig. 3 is a schematic diagram illustrating the service processing task stage becoming a bottleneck; as shown in Fig. 3, the messages processed in the receiving task stage include message 1, message 2, and so on.
To avoid wasting system resources, in the embodiment of the present invention each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task, ordered by processing order. For example, when the first service processing task is the pre-routing service processing task, the second service processing task is the post-routing service processing task.
Fig. 4 is a schematic diagram of a pipeline according to an embodiment of the present invention. As shown in Fig. 4, the pipeline includes a message receiving task, a pre-routing service processing (Forward Ingress) task, a post-routing service processing (Forward Egress) task, a message distributing task, and a message sending task.
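Under stated assumptions, the five-stage pipeline of Fig. 4 can be sketched as a small data structure. The stage names, the `Task` class, and `make_pipeline` are illustrative inventions for this sketch; the patent fixes only the five ordered tasks and the per-task sliding window, not any concrete representation.

```python
from collections import deque

# Hypothetical stage names mirroring Fig. 4 (Rx, Forward Ingress,
# Forward Egress, Dispatch, Tx); the patent prescribes the ordering,
# not the names or the in-memory representation.
STAGES = ["Rx", "ForwardIngress", "ForwardEgress", "Dispatch", "Tx"]

class Task:
    """One pipeline stage with its own sliding window of pending messages."""
    def __init__(self, name, pipeline_id):
        self.name = name
        self.pipeline_id = pipeline_id   # the "first pipeline identifier"
        self.window = deque()            # sliding window of messages

def make_pipeline(pipeline_id):
    """Build the five ordered tasks of the pipeline for one network card."""
    return [Task(name, pipeline_id) for name in STAGES]

p = make_pipeline(0)
print([t.name for t in p])
```

One such pipeline would be instantiated per network card, so the pipeline identifier doubles as the network card index in this sketch.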
The CPU obtains a task to be scheduled from a pre-stored task scheduling sequence, where the task to be scheduled is any one of the receiving task, the first service processing task, the second service processing task, the distributing task, and the sending task.
S202: and determining whether the task to be scheduled is a sending task of the first pipeline corresponding to the first pipeline identifier carried by the task to be scheduled.
In order to process the task to be scheduled, the task to be scheduled carries a first pipeline identifier, and the CPU can determine the first pipeline corresponding to the first pipeline identifier according to the first pipeline identifier carried in the task to be scheduled, and determine whether the task to be scheduled is a sending task of the first pipeline.
S203: if the task to be scheduled is determined to be a sending task of the first assembly line, calling the task to be scheduled to obtain a message set to be processed from a sliding window of the task to be scheduled and sending the message set to be processed; if the task to be scheduled is determined not to be the sending task of the first assembly line, the task to be scheduled is called to process the message in the message set to be processed, the processed message is added into a sliding window of the next task of the first assembly line, and the next task of the first assembly line is added into the task scheduling sequence.
If the task to be scheduled is determined to be the sending task of the first pipeline, it is the last task of the first pipeline. The CPU therefore calls the task to be scheduled, obtains the corresponding set of messages to be processed from the task's sliding window, and sends the messages in that set.
Specifically, the CPU sends the messages to be processed to the network card driver of the network device itself, and the network card driver forwards them to the next network device.
If the task to be scheduled is determined not to be the sending task of the first pipeline, it is not the last task of the first pipeline and other tasks still follow it; the task to be scheduled may then be any one of the receiving task, the first service processing task, the second service processing task, and the distributing task of the first pipeline.
The CPU therefore calls the task to be scheduled, obtains the corresponding set of messages to be processed from its sliding window, and processes the messages in the set according to the task's type: if the task to be scheduled is the receiving task, the messages in the set are received; if it is the first service processing task, first service processing is applied to them; if it is the second service processing task, second service processing is applied; and if it is the distributing task, the messages are distributed.
Each processed message is then added to the sliding window of the next task after the task to be scheduled in the first pipeline, and that next task is added to the task scheduling sequence.
S204: and updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence, and turning to S201.
To continue with the next task, in the embodiment of the present invention the task to be scheduled is updated to the next task after it in the task scheduling sequence, and the step of obtaining a task to be scheduled from the task scheduling sequence is executed again, which reduces resource waste.
The CPU to which the method of the embodiment of the invention applies runs one pipeline for each network card of the network device, and each pipeline comprises a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task. A task to be scheduled is obtained from the task scheduling sequence, and it is determined whether that task is the sending task of the first pipeline corresponding to the first pipeline identifier it carries. If it is, the task is called to obtain a set of messages to be processed from its sliding window and send them; if it is not, the task is called to process the messages in the set, the processed messages are added to the sliding window of the next task of the first pipeline, and that next task is added to the task scheduling sequence. The task to be scheduled is then updated to the next task after it in the scheduling sequence, and the step of obtaining a task to be scheduled from the task scheduling sequence is executed again. By splitting the service processing work of each pipeline into a first service processing task and a second service processing task, the method reduces the chance that service processing becomes congested and keeps the CPU from idling while the subsequent distributing and sending tasks wait, thereby reducing resource waste.
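The scheduling loop of steps S201 to S204 can be sketched as follows. This is a simplified single-pipeline model: the stage names, the `process` callback, and the list used as the sent-message sink are assumptions, since the patent specifies only the control flow (fetch a task, send if it is the sending task, otherwise process and enqueue the next task).

```python
from collections import deque

STAGES = ["Rx", "ForwardIngress", "ForwardEgress", "Dispatch", "Tx"]

def run_pipeline(messages, process=lambda stage, msg: msg):
    """Run one batch of messages through a single five-stage pipeline."""
    windows = {s: deque() for s in STAGES}    # per-task sliding windows
    windows["Rx"].extend(messages)
    schedule = deque(["Rx"])                  # task scheduling sequence
    sent = []
    while schedule:
        task = schedule.popleft()             # S201: fetch task to schedule
        if task == "Tx":                      # S202: is it the sending task?
            while windows["Tx"]:              # S203: drain the window and send
                sent.append(windows["Tx"].popleft())
        else:                                 # S203: process, hand to next task
            nxt = STAGES[STAGES.index(task) + 1]
            while windows[task]:
                windows[nxt].append(process(task, windows[task].popleft()))
            if windows[nxt]:
                schedule.append(nxt)          # enqueue the next pipeline task
        # S204 is implicit: the loop continues with the next scheduled task
    return sent

print(run_pipeline(["msg1", "msg2"]))
```

In a real device the loop would never drain to empty, because the receiving task stays scheduled and keeps polling the network card; here the loop simply terminates when the batch is sent.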
Example 2:
to reduce the resource waste, on the basis of the foregoing embodiment, in an embodiment of the present invention, after the adding the next task of the first pipeline into the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if the task to be scheduled is determined to be a received task of the first pipeline, the task to be scheduled is reserved in the task scheduling sequence;
and if the task to be scheduled is determined not to be the received task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
To reduce resource waste, in the embodiment of the present invention, before a task to be scheduled is obtained from the task scheduling sequence again, it is further determined whether the task to be scheduled is the receiving task of the first pipeline. If it is determined to be the receiving task of the first pipeline, processing can restart from the first task of the first pipeline, so the task to be scheduled is retained in the task scheduling sequence.
If the task to be scheduled is determined not to be the receiving task of the first pipeline, processing cannot restart from the first task of the first pipeline, so the task to be scheduled is removed from the task scheduling sequence.
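The retain-or-remove rule of this embodiment can be sketched in a few lines. The function name and its boolean argument are hypothetical; only the rule itself (the receiving task stays in the scheduling sequence, every other finished task is removed) comes from the text above.

```python
# Embodiment 2's rule: when a task finishes, only the receiving (Rx) task
# is kept in the task scheduling sequence; all other tasks are removed,
# so that new packets keep being polled from the head of the pipeline.

def update_schedule(schedule, finished_task, is_receiving_task):
    """Retain a finished receiving task; remove any other finished task."""
    if not is_receiving_task:
        schedule.remove(finished_task)   # non-Rx tasks leave when done
    return schedule                      # Rx stays and is polled again

print(update_schedule(["Rx", "Dispatch"], "Dispatch", False))
print(update_schedule(["Rx"], "Rx", True))
```

This keeps the scheduling sequence short: downstream tasks only re-enter it when an upstream task actually hands them messages.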
Example 3:
in order to reduce resource waste, on the basis of the foregoing embodiments, in an embodiment of the present invention, if the task to be scheduled is a second service processing task of the first pipeline, and a sliding window of the task to be scheduled includes a packet of a tunnel service, after the task to be scheduled is called to process a packet in the packet set to be processed, the method further includes:
sending the processed message of the tunnel service to a tunnel logic network card;
and after receiving the processed message of the tunnel service through the tunnel logic network card, adding the processed message of the tunnel service into a sliding window of a receiving task of the first flow line.
To reduce resource waste, in the embodiment of the present invention a tunnel logical network card is also present in the network device. If the task to be scheduled is the second service processing task of the first pipeline and the sliding window of the task to be scheduled contains a message of a tunnel service, that message is processed as follows: after the second service processing, the message of the tunnel service is encapsulated on the basis of the corresponding tunnel protocol, and the encapsulated tunnel service message is sent to the tunnel logical network card.
The tunnel may be one encapsulated with the Multi-Protocol Label Switching (MPLS) protocol, the Virtual Extensible Local Area Network (VXLAN) protocol, the Generic Routing Encapsulation (GRE) protocol, or Internet Protocol version 6 (IPv6); the embodiment of the present invention does not limit this.
After the processed message of the tunnel service is received through the tunnel logical network card, it is treated as a new message and added to the sliding window of the receiving task of the first pipeline.
Fig. 5 is a schematic diagram of the forwarding flow of a tunnel service message according to an embodiment of the present invention. As shown in Fig. 5, after a tunnel service message is received through the physical network card, it undergoes the receiving task, pre-routing service processing, and post-routing service processing; the message is then encapsulated and sent to the tunnel logical network card. The tunnel logical network card receives the encapsulated tunnel service message, which, as a new message, goes through the subsequent message receiving, pre-routing service processing, post-routing service processing, message distributing, and message sending stages.
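The loopback through the tunnel logical network card can be sketched as follows. The `encapsulate` format and the `post_routing_stage` signature are assumptions for illustration; the point shown is that an encapsulated tunnel service message re-enters the receiving task's sliding window as a new message instead of continuing to the distributing task.

```python
from collections import deque

def encapsulate(msg, proto="GRE"):
    # Hypothetical encapsulation: prepend a tunnel-protocol header marker.
    return f"{proto}[{msg}]"

def post_routing_stage(msg, is_tunnel_service, rx_window):
    """Second service processing step: loop tunnel messages back to Rx."""
    if is_tunnel_service:
        # The tunnel logical network card receives the encapsulated message
        # and re-injects it into the receiving task's sliding window.
        rx_window.append(encapsulate(msg))
        return None                # not forwarded on this pass
    return msg                     # ordinary messages continue to Dispatch

rx = deque()
post_routing_stage("payload", True, rx)
print(list(rx))
```

On its second pass through the pipeline the encapsulated message is an ordinary message, so it flows through distribution and sending like any other.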
Example 4:
in order to increase the packet forwarding speed, on the basis of the foregoing embodiments, in an embodiment of the present invention, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is a receiving task of the first pipeline, adding a processed packet to a sliding window of a next task of the first pipeline, includes:
For each processed message, the following is executed:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and
adding the currently processed message to the sliding window of the selected first service processing task.
If each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled obtained from the task scheduling sequence is that receiving task, the processed messages are to be added to the sliding windows of the first service processing tasks of the first pipeline. The processed messages may be based on different protocols, such as the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), the Internet Control Message Protocol (ICMP), or raw IP (RawIP), so messages are forwarded in the form of message flows based on these protocols. Table 1 shows the flow identifiers of message flows based on different protocols according to an embodiment of the present invention.
TABLE 1

Flow type | Flow identifier
TCP       | VRF ID, source IP address, destination IP address, IP protocol, TCP source port, TCP destination port
UDP       | VRF ID, source IP address, destination IP address, IP protocol, UDP source port, UDP destination port
ICMP      | VRF ID, source IP address, destination IP address, IP protocol, ICMP ID, ICMP Type & Code
RawIP     | VRF ID, source IP address, destination IP address, IP protocol, 0x0000
As shown in Table 1, when a message is forwarded as a TCP flow, the flow identifier of the message includes a VRF ID, a source IP address, a destination IP address, an IP protocol, a TCP source port, and a TCP destination port; when the message is forwarded as a UDP flow, the flow identifier includes a VRF ID, a source IP address, a destination IP address, an IP protocol, a UDP source port, and a UDP destination port; when the message is forwarded as an ICMP flow, the flow identifier includes a VRF ID, a source IP address, a destination IP address, an IP protocol, an ICMP ID, and an ICMP Type & Code; and when the message is forwarded as a RawIP flow, the flow identifier includes a VRF ID, a source IP address, a destination IP address, an IP protocol, and 0x0000.
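The per-protocol flow identifiers of Table 1 can be sketched as a small helper. This is an illustrative Python reading only: the dict field names are invented for the example, and a tuple stands in for the flow identifier.

```python
def flow_id(pkt):
    """Build the per-protocol flow identifier of Table 1 from a packet
    represented as a dict (field names are illustrative)."""
    base = (pkt["vrf_id"], pkt["src_ip"], pkt["dst_ip"], pkt["proto"])
    if pkt["proto"] in ("TCP", "UDP"):
        # TCP/UDP flows: identifier ends with source and destination ports.
        return base + (pkt["src_port"], pkt["dst_port"])
    if pkt["proto"] == "ICMP":
        # ICMP flows: identifier ends with ICMP ID and Type & Code.
        return base + (pkt["icmp_id"], pkt["icmp_type_code"])
    # RawIP flows: identifier is padded with 0x0000.
    return base + (0x0000,)
```

All four variants share the first four fields, so two packets of the same flow always yield the same tuple and can later be hashed to the same sub-pipeline.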
In order to reduce resource waste, the following is executed for each processed message: a flow hash calculation is performed on the currently processed message according to its selected characteristic information, where the selected characteristic information may be any flow identifier of the message, to obtain a calculation result corresponding to the currently processed message.
One first service processing task is then selected from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result of the currently processed message. Specifically, the first service processing task of each sub-pipeline corresponds to a different calculation result; the first service processing task of the sub-pipeline corresponding to the calculation result of the currently processed message is determined as the selected first service processing task, and the currently processed message is added to the sliding window of the selected first service processing task.
For example, a flow hash calculation is performed on the currently processed message according to any flow identifier of the message; that is, according to the information in the flow identifier of the currently processed message, the first service processing task of the sub-pipeline corresponding to that information is determined, and the currently processed message is added to the sliding window of that first service processing task. Here the flow identifier includes, for example, a destination IP address, a source port, and a destination port.
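The flow hash calculation and sub-pipeline selection can be sketched as follows. CRC32 over the flow identifier is only one possible hash (the patent does not prescribe a particular function), and `select_sub_pipeline` is an invented name; the point is that all packets of one flow deterministically map to the same sub-pipeline's first service processing task.

```python
import zlib

def select_sub_pipeline(flow_identifier, n_sub_pipelines):
    """Hash the flow identifier and map the result onto one of the N
    sub-pipelines' first service processing tasks.  CRC32 is an
    arbitrary stand-in for whatever hash the implementation uses."""
    digest = zlib.crc32(repr(flow_identifier).encode("utf-8"))
    return digest % n_sub_pipelines

fid = (1, "10.0.0.1", "10.0.0.2", "TCP", 1234, 80)
index = select_sub_pipeline(fid, 3)   # index of the chosen sub-pipeline, 0..2
```

Because the mapping is a pure function of the flow identifier, packets of one flow keep their ordering within a single sub-pipeline while different flows spread across the N sub-pipelines.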
The at least two sub-pipelines of the first pipeline are also called a 1-virtual-N pipeline. The value of N depends on the actual deployment scenario, the hardware configuration, and the like; in general, the total number of stage tasks on the pipeline does not exceed twice the number of actual CPUs.
Fig. 6 is a schematic diagram of a 1-virtual-N pipeline according to an embodiment of the present invention. As shown in Fig. 6, the pipeline is a 1-virtual-3 pipeline; that is, three sub-pipelines run concurrently and share the same message receiving task.
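The sizing rule above can be expressed as a small calculation. The formula below is one illustrative reading of the text, not taken from the patent: with N sub-pipelines sharing a single receiving task, a five-stage pipeline runs 1 + 4N tasks, and that count is kept within twice the CPU count.

```python
def max_sub_pipelines(num_cpus, stages_per_pipeline=5):
    """Illustrative sizing rule: keep the total task count of a
    1-virtual-N pipeline (one shared receiving task plus N copies of
    the remaining stages) within twice the number of CPUs."""
    budget = 2 * num_cpus                        # stated ceiling on stage tasks
    n = (budget - 1) // (stages_per_pipeline - 1)
    return max(1, n)                             # always at least one sub-pipeline
```

Under this reading, 8 CPUs give a budget of 16 tasks, allowing a 1-virtual-3 pipeline (1 + 4 x 3 = 13 tasks), which matches the shape of Fig. 6.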
Example 5:
fig. 7 is a schematic structural diagram of a message forwarding apparatus according to an embodiment of the present invention, and on the basis of the foregoing embodiments, an embodiment of the present invention further provides a message forwarding apparatus, where the apparatus includes:
an obtaining module 701, configured to obtain a task to be scheduled from a task scheduling sequence;
a determining module 702, configured to determine whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
a processing module 703, configured to: if it is determined that the task to be scheduled is the sending task of the first pipeline, invoke the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and send the set of messages to be processed; and if it is determined that the task to be scheduled is not the sending task of the first pipeline, invoke the task to be scheduled to process the messages in the set of messages to be processed, add the processed messages to the sliding window of the next task of the first pipeline, and add the next task of the first pipeline to the task scheduling sequence;
an updating module 704, configured to update the task to be scheduled to the next task after it in the scheduling sequence, and turn to the obtaining module.
Further, the processing module is further configured to: after the next task of the first pipeline is added to the task scheduling sequence, determine whether the task to be scheduled is the receiving task of the first pipeline; if it is determined that the task to be scheduled is the receiving task of the first pipeline, retain the task to be scheduled in the task scheduling sequence; and if it is determined that the task to be scheduled is not the receiving task of the first pipeline, remove the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is the second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the processing module is further configured to: after the task to be scheduled is invoked to process the messages in the set of messages to be processed, send the processed tunnel service message to a tunnel logical network card; and after the processed tunnel service message is received through the tunnel logical network card, add it to the sliding window of the receiving task of the first pipeline.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to execute the following for each processed message: performing a flow hash calculation according to the selected characteristic information of the currently processed message; selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and adding the currently processed message to the sliding window of the selected first service processing task.
Example 6:
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and on the basis of the foregoing embodiments, an electronic device according to an embodiment of the present invention is further provided, where the electronic device includes a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 complete communication with each other through the communication bus 804;
the memory 803 has stored therein a computer program which, when executed by the processor 801, causes the processor 801 to perform the steps of:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if it is determined that the task to be scheduled is the sending task of the first pipeline, invoking the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set of messages to be processed; and if it is determined that the task to be scheduled is not the sending task of the first pipeline, invoking the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages to the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline to the task scheduling sequence;
and updating the task to be scheduled to the next task after it in the scheduling sequence, and returning to the step of acquiring a task to be scheduled from the task scheduling sequence.
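The scheduling steps above can be sketched in Python. Lists stand in for sliding windows, strings for messages, and a deque for the task scheduling sequence; the stage names, the `run_scheduler` helper, and the string-wrapping "processing" are invented for the example. The keep/remove rule applied here to receiving tasks follows the step described in the surrounding embodiments.

```python
from collections import deque

def run_scheduler(schedule, pipelines, max_rounds=100):
    """Sketch of the scheduling loop: pop the task to be scheduled; if
    it is its pipeline's sending task, drain its sliding window and
    transmit; otherwise process the window's messages, move them to
    the next task's window, and enqueue that next task.  A receiving
    task stays in the schedule; other tasks are removed after running."""
    sent = []
    for _ in range(max_rounds):
        if not schedule:
            break
        pipeline_id, stage = schedule.popleft()
        stages = pipelines[pipeline_id]["stages"]
        windows = pipelines[pipeline_id]["windows"]
        if stage == stages[-1]:                   # sending task: drain and transmit
            sent.extend(windows[stage])
            windows[stage].clear()
            continue
        nxt = stages[stages.index(stage) + 1]     # hand messages to the next task
        processed = [f"{stage}({p})" for p in windows[stage]]
        windows[stage].clear()
        windows[nxt].extend(processed)
        schedule.append((pipeline_id, nxt))
        if stage == stages[0]:                    # receiving tasks stay scheduled
            schedule.append((pipeline_id, stage))
    return sent

# One pipeline with the five stages named in the patent.
stages = ["receive", "first_service", "second_service", "distribute", "send"]
pipelines = {"p0": {"stages": stages, "windows": {s: [] for s in stages}}}
pipelines["p0"]["windows"]["receive"].append("pkt")
sent = run_scheduler(deque([("p0", "receive")]), pipelines)
```

Running the fixture pushes the single packet through all five stages, so `sent` ends up holding one fully processed message.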
Further, after the processor 801 is further configured to add the next task of the first pipeline into the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if it is determined that the task to be scheduled is the receiving task of the first pipeline, retaining the task to be scheduled in the task scheduling sequence;
and if it is determined that the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is the second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the processor 801 is further configured to perform the following after the task to be scheduled is invoked to process the messages in the set of messages to be processed:
sending the processed message of the tunnel service to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of a receiving task of the first flow line.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, the processor 801 is further configured to add the processed messages to the sliding window of the next task of the first pipeline by:
executing, for each processed message:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and
adding the currently processed message to the sliding window of the selected first service processing task.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 802 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
Example 7:
on the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to perform the following steps:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if it is determined that the task to be scheduled is the sending task of the first pipeline, invoking the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set of messages to be processed; and if it is determined that the task to be scheduled is not the sending task of the first pipeline, invoking the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages to the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline to the task scheduling sequence;
and updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
Further, after the adding the next task of the first pipeline into the task scheduling sequence, the method further includes:
determining whether the task to be scheduled is the receiving task of the first pipeline;
if it is determined that the task to be scheduled is the receiving task of the first pipeline, retaining the task to be scheduled in the task scheduling sequence;
and if it is determined that the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
Further, if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, after the task to be scheduled is called to process a message in the message set to be processed, the method further includes:
sending the processed message of the tunnel service to a tunnel logic network card;
and after receiving the processed message of the tunnel service through the tunnel logic network card, adding the processed message of the tunnel service into a sliding window of a receiving task of the first flow line.
Further, if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, adding the processed messages to the sliding window of the next task of the first pipeline includes:
executing, for each processed message:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and
adding the currently processed message to the sliding window of the selected first service processing task.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A message forwarding method, applied to a multi-core Central Processing Unit (CPU) included in a network device, wherein the CPU further includes a pipeline corresponding to each network card included in the network device, and each pipeline includes a receiving task, a first service processing task, a second service processing task, a distributing task, and a sending task, the method comprising:
acquiring a task to be scheduled from a task scheduling sequence;
determining whether the task to be scheduled is a sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
if it is determined that the task to be scheduled is the sending task of the first pipeline, invoking the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set of messages to be processed; and if it is determined that the task to be scheduled is not the sending task of the first pipeline, invoking the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages to the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline to the task scheduling sequence;
and updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence, and executing the step of acquiring the task to be scheduled from the task scheduling sequence.
2. The method of claim 1, wherein after the adding the next task of the first pipeline into the task scheduling sequence, the method further comprises:
determining whether the task to be scheduled is a receiving task of the first pipeline;
if it is determined that the task to be scheduled is the receiving task of the first pipeline, retaining the task to be scheduled in the task scheduling sequence;
and if it is determined that the task to be scheduled is not the receiving task of the first pipeline, removing the task to be scheduled from the task scheduling sequence.
3. The method according to claim 1, wherein if the task to be scheduled is a second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a packet of a tunnel service, after the task to be scheduled is called to process a packet in the set of packets to be processed, the method further comprises:
sending the processed message of the tunnel service to a tunnel logic network card;
and after receiving the processed tunnel service message through the tunnel logic network card, adding the processed tunnel service message into a sliding window of a receiving task of the first flow line.
4. The method according to any one of claims 1-3, wherein if each pipeline comprises at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, adding the processed messages to the sliding window of the next task of the first pipeline comprises:
executing, for each processed message:
performing a flow hash calculation according to the selected characteristic information of the currently processed message;
selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and
adding the currently processed message to the sliding window of the selected first service processing task.
5. A message forwarding apparatus, the apparatus comprising:
the acquisition module is used for acquiring the tasks to be scheduled from the task scheduling sequence;
the determining module is used for determining whether the task to be scheduled is the sending task of a first pipeline corresponding to a first pipeline identifier carried by the task to be scheduled;
the processing module is used for: if it is determined that the task to be scheduled is the sending task of the first pipeline, invoking the task to be scheduled to obtain a set of messages to be processed from the sliding window of the task to be scheduled and sending the set of messages to be processed; and if it is determined that the task to be scheduled is not the sending task of the first pipeline, invoking the task to be scheduled to process the messages in the set of messages to be processed, adding the processed messages to the sliding window of the next task of the first pipeline, and adding the next task of the first pipeline to the task scheduling sequence;
and the updating module is used for updating the task to be scheduled to be the next task of the task to be scheduled in the scheduling sequence and turning to the obtaining module.
6. The apparatus of claim 5, wherein the processing module is further configured to: after the next task of the first pipeline is added to the task scheduling sequence, determine whether the task to be scheduled is the receiving task of the first pipeline; if it is determined that the task to be scheduled is the receiving task of the first pipeline, retain the task to be scheduled in the task scheduling sequence; and if it is determined that the task to be scheduled is not the receiving task of the first pipeline, remove the task to be scheduled from the task scheduling sequence.
7. The apparatus according to claim 5, wherein if the task to be scheduled is the second service processing task of the first pipeline, and the sliding window of the task to be scheduled includes a message of a tunnel service, the processing module is further configured to: after the task to be scheduled is invoked to process the messages in the set of messages to be processed, send the processed tunnel service message to a tunnel logical network card; and after the processed tunnel service message is received through the tunnel logical network card, add it to the sliding window of the receiving task of the first pipeline.
8. The apparatus according to any one of claims 5 to 7, wherein if each pipeline includes at least two sub-pipelines sharing the same receiving task, and the task to be scheduled is the receiving task of the first pipeline, the processing module is specifically configured to execute, for each processed message: performing a flow hash calculation according to the selected characteristic information of the currently processed message; selecting one first service processing task from the first service processing tasks of the at least two sub-pipelines of the first pipeline according to the calculation result; and adding the currently processed message to the sliding window of the selected first service processing task.
9. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to execute the message forwarding method of any of claims 1-4.
10. A computer-readable storage medium, characterized in that it stores a computer program executable by a processor, which program, when run on the processor, causes the processor to carry out the message forwarding method according to any one of claims 1-4.
CN202111599235.3A 2021-12-24 2021-12-24 Message forwarding method, device, equipment and medium Active CN114553774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111599235.3A CN114553774B (en) 2021-12-24 2021-12-24 Message forwarding method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111599235.3A CN114553774B (en) 2021-12-24 2021-12-24 Message forwarding method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114553774A true CN114553774A (en) 2022-05-27
CN114553774B CN114553774B (en) 2023-06-16

Family

ID=81669532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111599235.3A Active CN114553774B (en) 2021-12-24 2021-12-24 Message forwarding method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114553774B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237039A (en) * 2013-05-10 2013-08-07 汉柏科技有限公司 Message forwarding method and message forwarding device
CN104618253A (en) * 2015-01-22 2015-05-13 大唐移动通信设备有限公司 Dynamically changed transmission message processing method and device
CN106953807A (en) * 2017-03-02 2017-07-14 北京星网锐捷网络技术有限公司 Message forwarding method and device
WO2019129167A1 (en) * 2017-12-29 2019-07-04 华为技术有限公司 Method for processing data packet and network card
CN111209283A (en) * 2020-01-10 2020-05-29 深圳前海微众银行股份有限公司 Data processing method and device
WO2021031092A1 (en) * 2019-08-19 2021-02-25 华为技术有限公司 Packet processing method and network device
EP3893416A1 (en) * 2020-04-09 2021-10-13 Commissariat à l'Energie Atomique et aux Energies Alternatives Deterministic equipment system for communication between at least one transmitter and at least one receiver, configured to statically and periodically schedule the data frames, and method for managing receiving of data frames

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237039A (en) * 2013-05-10 2013-08-07 汉柏科技有限公司 Message forwarding method and message forwarding device
CN104618253A (en) * 2015-01-22 2015-05-13 大唐移动通信设备有限公司 Dynamically changed transmission message processing method and device
CN106953807A (en) * 2017-03-02 2017-07-14 北京星网锐捷网络技术有限公司 Message forwarding method and device
WO2019129167A1 (en) * 2017-12-29 2019-07-04 华为技术有限公司 Method for processing data packet and network card
WO2021031092A1 (en) * 2019-08-19 2021-02-25 华为技术有限公司 Packet processing method and network device
CN111209283A (en) * 2020-01-10 2020-05-29 深圳前海微众银行股份有限公司 Data processing method and device
EP3893416A1 (en) * 2020-04-09 2021-10-13 Commissariat à l'Energie Atomique et aux Energies Alternatives Deterministic equipment system for communication between at least one transmitter and at least one receiver, configured to statically and periodically schedule the data frames, and method for managing receiving of data frames

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Xianfu; FENG Dongqin: "Research and Implementation of an EPA Communication Scheduling Algorithm Based on Parallel Processing" *

Also Published As

Publication number Publication date
CN114553774B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
US11677851B2 (en) Accelerated network packet processing
CN109952746B (en) Integrating physical and virtual network functions in a business-linked network environment
US10932136B2 (en) Resource partitioning for network slices in segment routing networks
CN107819663B (en) Method and device for realizing virtual network function service chain
EP3110084B1 (en) Method for generating forwarding information, controller and service forwarding entity
US20170366605A1 (en) Providing data plane services for applications
US8634415B2 (en) Method and system for routing network traffic for a blade server
CN108270813B (en) Heterogeneous multi-protocol stack method, device and system
CN108432194B (en) Congestion processing method, host and system
US10812393B2 (en) Packet distribution based on an identified service function
WO2018013443A1 (en) Multiple core software forwarding
Davoli et al. Implementation of service function chaining control plane through OpenFlow
CN113395212B (en) Network device, method of operating the same, and non-transitory computer readable medium
US10541842B2 (en) Methods and apparatus for enhancing virtual switch capabilities in a direct-access configured network interface card
CN108737239B (en) Message forwarding method and device
CN113726915A (en) Network system, message transmission method therein and related device
US9473396B1 (en) System for steering data packets in communication network
US20200336573A1 (en) Network switching with co-resident data-plane and network interface controllers
CN114553774B (en) Message forwarding method, device, equipment and medium
CN104702505A (en) Message transmission method and node
CN111866046A (en) Method for realizing cluster and related equipment
CN110932979B (en) Method and device for rapidly forwarding message
US20230040655A1 (en) Network switching with co-resident data-plane and network interface controllers
JP2006238161A (en) Packet-switching apparatus and packet processing method
CN117938767A (en) Message forwarding method and device applied to SRv SFC system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant