CN113282525B - Message distribution method and device - Google Patents

Message distribution method and device

Info

Publication number
CN113282525B
CN113282525B (application CN202110586085.6A)
Authority
CN
China
Prior art keywords
processed
message
cpu
physical interface
forwarding chip
Prior art date
Legal status
Active
Application number
CN202110586085.6A
Other languages
Chinese (zh)
Other versions
CN113282525A (en)
Inventor
孙军伟
秦德楼
赵旭东
Current Assignee
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd filed Critical Hangzhou DPTech Technologies Co Ltd
Priority to CN202110586085.6A
Publication of CN113282525A
Application granted
Publication of CN113282525B

Classifications

    • G06F 16/33: Querying (information retrieval of unstructured textual data)
    • G06F 13/1673: Details of memory controller using buffers
    • G06F 13/1678: Details of memory controller using bus width
    • G06F 13/28: Access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a message distribution method and device. The method includes: when a message to be processed is received, acquiring and recording the identifier of the target physical interface on the physical CPU through which the message passed when it was uploaded from the forwarding chip to the virtual CPU; and, when processing of the message is complete, issuing the processed message to the forwarding chip through that target physical interface according to the recorded identifier. Because a physical interface has the hardware characteristic of limiting traffic bandwidth, in this scheme each virtual CPU issues its processed messages to the forwarding chip along the original path, through the same physical interface by which they were uploaded. Since the upload traffic was already constrained by the interface bandwidth, the issued traffic cannot exceed that bandwidth either, which avoids the problem of processed messages failing to be forwarded because the bandwidth limit of a physical interface is exceeded.

Description

Message distribution method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for message offloading.
Background
In a network device configured with multiple virtual CPUs and a forwarding chip, the virtual CPUs actually run on physical CPUs. Multiple physical interfaces are provided on the physical CPU and on the forwarding chip; these interfaces correspond one to one and form multiple physical links for transmitting data between the physical CPU and the forwarding chip. A message processed by a virtual CPU is transmitted through a physical interface on the physical CPU to the corresponding physical interface on the forwarding chip, so that it reaches the forwarding chip to be forwarded; the physical interfaces on the physical CPU are shared by all the virtual CPUs.
In the related art, the index of a virtual CPU is taken modulo the number of physical interfaces on the physical CPU to determine the specific physical interface through which a message processed by that virtual CPU is issued to the forwarding chip. For example, the network device runs 3 virtual CPUs, denoted VCPU 0, VCPU1 and VCPU2, and the physical CPU has two physical interfaces, denoted physical interface 0 and physical interface 1; messages processed by VCPU 0 and VCPU2 are issued to the forwarding chip through physical interface 0, and messages processed by VCPU1 are issued to the forwarding chip through physical interface 1.
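For illustration only, the following C sketch reproduces the related-art assignment rule described above, in which the index of a virtual CPU taken modulo the number of physical interfaces decides the issuing interface; the constant and function names are hypothetical and do not come from the patent.

```c
#include <stdio.h>

/* Hypothetical constants: 3 virtual CPUs share 2 physical interfaces
 * between the physical CPU and the forwarding chip. */
#define NUM_VCPU     3
#define NUM_PHYS_IF  2

/* Related-art rule: the issuing interface depends only on the VCPU index,
 * not on where the message originally came up from. */
static int select_tx_interface_related_art(int vcpu_id)
{
    return vcpu_id % NUM_PHYS_IF;
}

int main(void)
{
    for (int vcpu = 0; vcpu < NUM_VCPU; vcpu++) {
        printf("VCPU %d issues via physical interface %d\n",
               vcpu, select_tx_interface_related_art(vcpu));
    }
    /* VCPU 0 -> interface 0, VCPU 1 -> interface 1, VCPU 2 -> interface 0,
     * so interface 0 carries the traffic of two VCPUs and can be overloaded. */
    return 0;
}
```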
Because the number of virtual CPUs is not necessarily an integer multiple of the number of physical interfaces, and the amount of traffic processed by each virtual CPU is also uneven, the traffic issued by the virtual CPUs may exceed the bandwidth limit of a physical interface on the physical CPU, so that processed messages cannot be delivered to the forwarding chip for forwarding, which negatively affects message forwarding.
Disclosure of Invention
The application provides a method and a device for message distribution, which are used for solving the problems in the related art.
According to a first aspect of the embodiments of the present application, a method for message distribution is provided, where a forwarding chip and a physical CPU are configured in a network device, a plurality of corresponding physical interfaces exist on the forwarding chip and the physical CPU, the plurality of corresponding physical interfaces form a plurality of physical links for message transmission between the forwarding chip and the physical CPU, a plurality of virtual CPUs are run in the physical CPU, and the method is applied to the virtual CPUs, and includes:
when a message to be processed is received, acquiring and recording an identifier of a target physical interface on a physical CPU through which the message to be processed passes when the message to be processed is uploaded to a virtual CPU by a forwarding chip;
and when the message to be processed is processed, issuing the processed message to be processed to the forwarding chip through the target physical interface according to the recorded identification of the target physical interface.
According to a second aspect of the embodiments of the present application, a device for message distribution is provided, where a forwarding chip and a physical CPU are configured in a network device, there are a plurality of corresponding physical interfaces on the forwarding chip and the physical CPU, the corresponding physical interfaces constitute a plurality of physical links for transmitting messages between the forwarding chip and the physical CPU, a plurality of virtual CPUs are run in the physical CPU, and the device is applied to the virtual CPU, and includes a recording unit and an issuing unit:
the recording unit is used for acquiring and recording the identifier of a target physical interface on a physical CPU (central processing unit) through which the message to be processed passes when the message to be processed is uploaded to the virtual CPU by a forwarding chip when the message to be processed is received;
and the issuing unit is used for issuing the processed message to be processed to the forwarding chip through the target physical interface according to the recorded identification of the target physical interface when the message to be processed is processed.
In the technical solution of the application, given that a physical interface has the hardware characteristic of limiting traffic bandwidth, each virtual CPU issues its processed messages to the forwarding chip along the original path, through the same physical interface by which the messages were uploaded. Because the upload traffic was already constrained by the interface bandwidth, the issued traffic cannot exceed that bandwidth either, which avoids the problem of processed messages failing to be forwarded because the bandwidth limit of a physical interface is exceeded.
Drawings
Fig. 1 is a flowchart of a method for message distribution provided in the present application;
fig. 2 is a schematic diagram illustrating that a message to be processed is uploaded to a virtual CPU by a forwarding chip in the present application;
fig. 3 is a schematic diagram illustrating that a processed message in the present application is issued to a forwarding chip by a virtual CPU;
fig. 4 is a schematic diagram of a hardware structure of a network device where a packet offloading apparatus provided in the present application is located;
fig. 5 is a block diagram of a message offloading device provided in the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when..." or "upon..." or "in response to determining...", depending on the context.
In view of the problem in the related art that, because the virtual CPUs use the physical interfaces unevenly, the traffic of messages issued to the forwarding chip may exceed the bandwidth limit of a physical interface on the physical CPU, the present application provides a message distribution method.
The forwarding chip may be an ordinary switching chip or an FPGA (Field Programmable Gate Array) chip. The virtual CPUs may be multiple virtual CPUs run in software on one physical CPU through hyper-threading technology, or virtual CPUs each run by one of the physical cores of a multi-core physical CPU; it is understood that the plurality of virtual CPUs may be one or more VCPUs located on the same physical CPU, or one or more VCPUs running on different physical CPUs. The specific type of the forwarding chip and the specific distribution and number of the virtual CPUs on the physical CPUs of the network device are not limited in the present application.
The forwarding chip and the physical CPU are located on the same board, and multiple physical interfaces are provided on each of them; a physical interface on the forwarding chip corresponds to a physical interface on the physical CPU, so that multiple physical links for transmitting messages can be formed between the forwarding chip and the physical CPU. It can be understood that the physical interfaces on the forwarding chip and those on the physical CPU correspond one to one and should therefore be equal in number, but the specific number of physical interfaces is not limited.
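As a minimal illustration of the topology just described, the following C declarations sketch one possible way to model the corresponding interfaces, links and per-VCPU queues; all type and field names are the author's assumptions and are not structures defined by the application.

```c
#include <stdint.h>

#define NUM_PHYS_IF  2   /* assumed: interfaces 0 and 1 on the physical CPU,      */
                         /* matching interfaces 0' and 1' on the forwarding chip  */
#define NUM_VCPU     3
#define QUEUE_DEPTH  256

/* One buffered message plus the metadata the scheme needs to keep. */
struct pkt {
    uint8_t  data[2048];
    uint32_t len;
    int      src_phys_if;   /* identifier of the target physical interface       */
                            /* recorded when the packet was uploaded (step 102)   */
};

/* A simple ring used both as a packet receiving and a packet sending queue. */
struct pkt_queue {
    struct pkt *slots[QUEUE_DEPTH];
    uint32_t    head, tail;
};

/* Per-VCPU view: one RX queue and one TX queue per physical interface,
 * mirroring RXn-0/RXn-1 and TXn-0/TXn-1 in the examples below. */
struct vcpu_ctx {
    int              vcpu_id;
    struct pkt_queue rx[NUM_PHYS_IF];
    struct pkt_queue tx[NUM_PHYS_IF];
};
```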
Fig. 1 is a flowchart of a method for message distribution provided by the present application, where the method for message distribution includes the following specific steps:
Step 102: when any virtual CPU in the network device receives a message to be processed, acquire and record the identifier of the target physical interface on the physical CPU through which the message passed when it was uploaded from the forwarding chip to the virtual CPU.
For better understanding of the scheme of the present application, first, a brief description is given to relevant contents of the message to be processed before reaching the virtual CPU.
A message to be processed that is sent to the network device is received through a network card of the network device or a network interface such as an optical fiber interface, and is buffered in a packet receiving queue of the forwarding chip of the network device; the forwarding chip then uploads the message from its packet receiving queue, successively through a physical interface on the forwarding chip and the corresponding physical interface on the physical CPU, into the packet receiving queue of a virtual CPU in the network device for subsequent processing.
Before uploading a message to be processed to a virtual CPU, the forwarding chip needs to determine the target virtual CPU corresponding to the message, i.e. the virtual CPU that will actually process it. There are several selectable ways to determine the target virtual CPU, including selecting an idle virtual CPU according to the resource usage of each virtual CPU, or computing a hash value of the five-tuple of the message and selecting the virtual CPU corresponding to that hash value; other ways are not repeated here.
After determining which virtual CPU of the network device the message is to be uploaded to, it must further be determined which data channel between the forwarding chip and the physical CPU is used to upload the message to the target virtual CPU, i.e. through which physical interface on the forwarding chip and which corresponding physical interface on the physical CPU the message is transmitted into the packet receiving queue of the target virtual CPU. There are likewise several selectable ways to determine the physical interface on the forwarding chip through which the message is uploaded, including computing a hash value of the five-tuple of the message and selecting the physical interface corresponding to that hash value; other ways are not repeated here.
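One of the selectable ways mentioned above is hashing the five-tuple of the message; the sketch below shows such a hash used to pick both a target virtual CPU and an upload physical interface. The hash function and all names are illustrative assumptions, not the application's concrete algorithm.

```c
#include <stdint.h>

#define NUM_VCPU     3
#define NUM_PHYS_IF  2

/* Five-tuple of a packet to be processed. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* Illustrative FNV-1a style mixing over the five fields. */
static uint32_t hash_five_tuple(const struct five_tuple *t)
{
    uint32_t h = 2166136261u;
    h = (h ^ t->src_ip)   * 16777619u;
    h = (h ^ t->dst_ip)   * 16777619u;
    h = (h ^ t->src_port) * 16777619u;
    h = (h ^ t->dst_port) * 16777619u;
    h = (h ^ t->protocol) * 16777619u;
    return h;
}

/* The forwarding chip could pick the target VCPU from the hash ... */
static int select_target_vcpu(const struct five_tuple *t)
{
    return hash_five_tuple(t) % NUM_VCPU;
}

/* ... and, separately, the physical interface used for the upload. */
static int select_upload_interface(const struct five_tuple *t)
{
    return (hash_five_tuple(t) >> 16) % NUM_PHYS_IF;
}
```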
Because a physical interface has the hardware characteristic of limiting traffic bandwidth, if messages are uploaded through physical interfaces on the forwarding chip and the physical CPU whose bandwidth is limited to 10 Gbps, the upload traffic of each physical interface is always held within 10 Gbps. Combined with the description in the background, although the messages received by each virtual CPU therefore do not exceed the bandwidth limit of a physical interface, the virtual CPUs use the physical interfaces unevenly when issuing messages, so the part of the traffic that exceeds the bandwidth limit of a physical interface cannot be successfully issued to the forwarding chip for forwarding. This is the problem to be solved by the present application.
The following describes the packet offloading method provided in the present application, with any virtual CPU in the network device receiving a packet to be processed as a starting point.
In the scheme of the application, when any virtual CPU in the network device receives a message to be processed, the virtual CPU obtains the identifier of the target physical interface on the physical CPU through which the message specifically passed when it was uploaded by the forwarding chip of the network device to the target virtual CPU, the target physical interface being one of the multiple physical interfaces on the physical CPU. After obtaining the identifier of the target physical interface, the virtual CPU records it, so that it can be used when the processed message is subsequently issued to the forwarding chip.
In step 102, there are multiple selectable implementation manners to acquire the identifier of the target physical interface used when the to-be-processed packet is uploaded to the target virtual CPU by the forwarding chip.
In an optional specific implementation manner, when a virtual CPU receives a to-be-processed message that is directly uploaded to the virtual CPU by a forwarding chip, an identifier of a target physical interface on a physical CPU through which the to-be-processed message passes when being uploaded to the virtual CPU by the forwarding chip may be determined according to a packet receiving queue in which the to-be-processed message is located; the packet receiving queue of the virtual CPU has a corresponding relation with a physical interface on the physical CPU.
In this implementation manner, packet receiving queues corresponding to physical interfaces on a physical CPU may be preconfigured on each virtual CPU in the network device, and a corresponding relationship exists between a physical interface on the physical CPU through which the to-be-processed packet is uploaded and a packet receiving queue of the virtual CPU to which the to-be-processed packet is uploaded.
For example, a physical interface 0 and a physical interface 1 exist on the physical CPU, and corresponding physical interfaces 0' and 1' exist on the forwarding chip; a packet receiving queue RX1-0 and a packet receiving queue RX1-1, which have a corresponding relationship with physical interface 0 and physical interface 1, are preconfigured on the virtual CPU1. If a message to be processed is uploaded to the target virtual CPU1 successively through physical interface 0' and physical interface 0, the message is uploaded to the packet receiving queue RX1-0 of the target virtual CPU1; if the message is uploaded to the target virtual CPU1 successively through physical interface 1' and physical interface 1, the message is uploaded to the packet receiving queue RX1-1 of the target virtual CPU1.
Specifically, the uploading of the message to be processed from the forwarding chip into a packet receiving queue of the virtual CPU may be executed by a DMA controller.
The DMA controller, which uses DMA (Direct Memory Access) technology, can master the data bus and write data into or read data from the memory of the network device without involving the CPU.
When the virtual CPU1 receives the message to be processed, it determines, according to the packet receiving queue RX1-1 in which the message is located, that the target physical interface on the physical CPU through which the message passed when uploaded by the forwarding chip is physical interface 1.
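Assuming, as in the example, that receive queue RXn-k of a virtual CPU is statically bound to physical interface k, the determination just described reduces to reading back that binding. The sketch below reuses the hypothetical declarations from the earlier topology sketch and is illustrative only.

```c
/* Reuses the hypothetical struct vcpu_ctx / struct pkt_queue / struct pkt
 * declarations sketched earlier, where vcpu->rx[k] is the receive queue
 * bound to physical interface k. */

/* Step 102, first case: the packet arrived directly from the forwarding chip.
 * The receive queue it was DMA'd into identifies the target physical interface. */
static int target_if_from_rx_queue(const struct vcpu_ctx *vcpu,
                                   const struct pkt_queue *rxq)
{
    for (int k = 0; k < NUM_PHYS_IF; k++) {
        if (&vcpu->rx[k] == rxq)
            return k;           /* e.g. RX1-1 of VCPU1 -> physical interface 1 */
    }
    return -1;                  /* not a receive queue of this VCPU */
}

/* Record the identifier in the per-packet metadata for later use in step 104. */
static void record_target_interface(struct pkt *p, int phys_if)
{
    p->src_phys_if = phys_if;
}
```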
In some specific scenarios, a message to be processed is uploaded by the forwarding chip to the target virtual CPU for processing and then needs to be forwarded to another virtual CPU for further processing. For example, the virtual CPUs in the network device undertake different message processing tasks: virtual CPU2 is responsible for encrypting messages, and messages processed by virtual CPU0 or virtual CPU1 need to be encrypted by virtual CPU2 before being sent out. Therefore, after a message to be processed is uploaded to the target virtual CPU by the forwarding chip, it may be forwarded by the target virtual CPU to another virtual CPU, and possibly forwarded onward by further virtual CPUs.
In these scenarios, when a virtual CPU receives a message to be processed that was forwarded by another virtual CPU rather than uploaded by the forwarding chip, that is, when the virtual CPU is not the target virtual CPU corresponding to the message, one way to acquire the identifier of the target physical interface is as follows:
when any virtual CPU receives a message to be processed forwarded by another virtual CPU, it obtains the identifier, recorded by that other virtual CPU, of the target physical interface on the physical CPU through which the message passed when it was uploaded by the forwarding chip to that other virtual CPU.
In step 102, there are multiple selectable implementation manners for recording the identifier of the target physical interface through which the message to be processed is uploaded to the target virtual CPU by the forwarding chip.
In an alternative specific implementation manner, when the virtual CPU receives the to-be-processed packet uploaded by the forwarding chip and determines the identifier of the target physical interface, the identifier of the target physical interface may be recorded in the management structure of the to-be-processed packet.
The management structure of a message to be processed is used to manage the storage, processing and forwarding of that message; it may be a structure whose format is preset by the system for each message to be processed, or a structure whose format can be adjusted by a technician for each message to be processed. For example, in the Linux system the management structure includes the sk_buff structure.
Because the management structure remains bound to the message throughout its processing, recording the identifier of the target physical interface in the management structure improves efficiency whenever the identifier needs to be retrieved.
When the virtual CPU receives a message to be processed forwarded by another virtual CPU, it copies the management structure of the message, thereby acquiring and recording the identifier of the target physical interface through which the message passed when it was uploaded by the forwarding chip to the target virtual CPU.
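The case in which a message is handed on between virtual CPUs can be sketched as follows: the receiving virtual CPU copies the per-message management structure in which the first virtual CPU already recorded the interface identifier. The pkt structure and its src_phys_if field are a hypothetical stand-in for an sk_buff-like structure, reused from the earlier sketches, and not the application's concrete layout.

```c
#include <string.h>

/* Reuses the hypothetical struct pkt from the earlier sketches, standing in
 * for an sk_buff-like management structure bound to the message. */

/* Step 102, second case: VCPU A forwards the packet to VCPU B.  Copying the
 * management structure carries the recorded target interface along with it,
 * so VCPU B needs no knowledge of the original upload path. */
static void forward_to_other_vcpu(const struct pkt *from_vcpu_a,
                                  struct pkt *for_vcpu_b)
{
    memcpy(for_vcpu_b, from_vcpu_a, sizeof(*for_vcpu_b));
    /* for_vcpu_b->src_phys_if now holds the identifier recorded by VCPU A. */
}
```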
Step 104: when the virtual CPU finishes processing the message to be processed, issue the processed message to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface.
After the virtual CPU completes all processing of the message, the processed message must be issued to the forwarding chip. At this point, the virtual CPU issues the processed message to the forwarding chip through the target physical interface according to the recorded identifier, so that the forwarding chip forwards the processed message to the target device.
In step 104, there are several selectable ways to issue the processed message to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface.
In an optional specific implementation, when the virtual CPU finishes processing the message, it may transfer the processed message to the packet sending queue on the virtual CPU that corresponds to the identifier of the target physical interface, so that the processed message is issued to the forwarding chip through the target physical interface; the packet sending queues on the virtual CPU have a corresponding relationship with the physical interfaces on the physical CPU.
In this implementation manner, packet sending queues corresponding to the physical interfaces of the physical CPU may be preconfigured on the virtual CPUs of the network device, and a corresponding relationship exists between the physical interface of the physical CPU through which the processed packet is sent and the packet sending queue of the virtual CPU in which the processed packet is located.
For example, a physical interface 0 and a physical interface 1 exist on the physical CPU, and corresponding physical interfaces 0' and 1' exist on the forwarding chip; a packet sending queue TX2-0 and a packet sending queue TX2-1, which have a corresponding relationship with physical interface 0 and physical interface 1, are preconfigured on the virtual CPU2. The processed packets in packet sending queue TX2-0 are sequentially sent to the forwarding chip through physical interface 0 and physical interface 0', and the processed packets in packet sending queue TX2-1 are sequentially sent to the forwarding chip through physical interface 1 and physical interface 1'.
Specifically, issuing the processed message from a packet sending queue of the virtual CPU to the forwarding chip may likewise be executed by the DMA controller.
When the virtual CPU2 finishes processing the message to be processed, it transfers the processed message, according to the recorded identifier of the corresponding target physical interface (physical interface 1), to the packet sending queue TX2-1 of the virtual CPU, so that the processed message is issued to the forwarding chip successively through physical interface 1 on the physical CPU and physical interface 1' on the forwarding chip.
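Step 104 can then be sketched as a lookup of the packet sending queue by the recorded identifier; the queue layout and enqueue helper below are illustrative assumptions reusing the earlier hypothetical declarations, and the actual transfer to the forwarding chip would be performed by the DMA controller as described above.

```c
/* Reuses the hypothetical struct vcpu_ctx / struct pkt / struct pkt_queue
 * declarations sketched earlier. */

static int enqueue(struct pkt_queue *q, struct pkt *p)
{
    uint32_t next = (q->tail + 1) % QUEUE_DEPTH;
    if (next == q->head)
        return -1;                    /* queue full */
    q->slots[q->tail] = p;
    q->tail = next;
    return 0;
}

/* Step 104: issue the processed packet back through the same physical
 * interface it came up on, by picking the TX queue bound to that interface. */
static int issue_processed_packet(struct vcpu_ctx *vcpu, struct pkt *p)
{
    int phys_if = p->src_phys_if;     /* identifier recorded in step 102 */
    if (phys_if < 0 || phys_if >= NUM_PHYS_IF)
        return -1;
    /* e.g. physical interface 1 -> packet sending queue TX2-1 of VCPU2; the
     * DMA controller then moves it to the forwarding chip over that link.   */
    return enqueue(&vcpu->tx[phys_if], p);
}
```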
Further, the method of the present application also includes step 106: when the virtual CPU detects that the processed message has been successfully forwarded by the forwarding chip, it deletes the recorded identifier of the target physical interface.
After issuing the processed message to the forwarding chip through the target physical interface according to the recorded identifier, the virtual CPU can detect, for example by means of a packet-sent indication, whether the message has been successfully forwarded outward by the forwarding chip. If it has, the virtual CPU can release the memory in which the data of the processed message is cached and delete the recorded identifier of the target physical interface corresponding to that message, thereby avoiding repeated sending of the message and misuse of the physical interface.
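Finally, step 106 amounts to a completion handler run when the forwarding chip reports that the message has left the device; the packet-sent flag and the free-based release below are assumptions made for illustration, again reusing the hypothetical pkt structure.

```c
#include <stdlib.h>

/* Reuses the hypothetical struct pkt from the earlier sketches; assumes the
 * packet buffer was heap-allocated. */

/* Step 106: once a packet-sent indication shows the forwarding chip has
 * successfully forwarded the packet outward, release the cached packet data
 * and drop the recorded interface identifier so neither can be reused. */
static void on_tx_complete(struct pkt *p, int forwarded_ok)
{
    if (!forwarded_ok)
        return;                       /* e.g. retry or error handling elsewhere */
    p->src_phys_if = -1;              /* delete the recorded identifier */
    free(p);                          /* release the buffered packet data */
}
```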
In the technical solution of the application, given that a physical interface has the hardware characteristic of limiting traffic bandwidth, each virtual CPU issues its processed messages to the forwarding chip along the original path, through the same physical interface by which the messages were uploaded. Because the upload traffic was already constrained by the interface bandwidth, the issued traffic cannot exceed that bandwidth either, which avoids the problem of processed messages failing to be forwarded because the bandwidth limit of a physical interface is exceeded.
In order to make those skilled in the art better understand the technical solution in the present application, the method shown in fig. 1 is further described in detail below with reference to the accompanying drawings, and the embodiments described later are only a part of the embodiments of the present application, and not all embodiments.
In the network device, there are two physical interfaces on the physical CPU, identified as physical interface 0 and physical interface 1; there are two corresponding physical interfaces on the forwarding chip, identified as physical interface 0' and physical interface 1'; and 3 virtual CPUs, identified as VCPU 0, VCPU1 and VCPU2, run in the physical CPU.
As shown in fig. 2, each virtual CPU is pre-configured with a corresponding packet receiving queue for physical interface 0 and physical interface 1, respectively; the packet receiving queue identifier preconfigured for physical interface 0 in the VCPU 0 is RX0-0, the packet receiving queue identifier preconfigured for physical interface 1 is RX0-1, and so on, packet receiving queues RX1-0 and RX1-1 are preconfigured in the VCPU1, and packet receiving queues RX2-0 and RX2-1 are preconfigured in the VCPU2.
As shown in fig. 3, each virtual CPU is pre-configured with a corresponding packet sending queue for physical interface 0 and physical interface 1, respectively; the packet sending queue identifier preconfigured for physical interface 0 in the VCPU 0 is TX0-0, the packet sending queue identifier preconfigured for physical interface 1 is TX0-1, and so on, packet sending queues TX1-0 and TX1-1 are preconfigured in the VCPU1, and packet sending queues TX2-0 and TX2-1 are preconfigured in the VCPU2.
At a certain moment, a forwarding chip receives a message to be processed, determines that a target virtual CPU corresponding to the message to be processed is a VCPU1, and uploads the message to be processed to a packet receiving queue RX1-1 of the VCPU1 through a physical interface 1. The message to be processed is firstly transmitted to the physical interface 1 on the physical CPU by the packet transmitting queue TX 1 of the forwarding chip through the physical link formed by the physical interface 1' and the physical interface 1 on the board card, and then is DMA-transmitted to the packet receiving queue RX1-1 of the VCPU 1.
The VCPU1 thus receives the message to be processed, determines from the packet receiving queue RX1-1 of VCPU1 in which the message is located that the target physical interface on the physical CPU through which the message passed during uploading is physical interface 1, and records this physical interface identifier in the management structure sk_buff of the message.
After VCPU1 finishes its processing of the message, it forwards the message to VCPU2 for further processing; when VCPU2 receives the message, it also obtains the identifier of the target physical interface recorded by VCPU1 in the management structure sk_buff of the message: physical interface 1.
After finishing all processing of the message, VCPU2 transfers it, according to the recorded identifier of the target physical interface (physical interface 1), to its packet sending queue TX2-1. The message is DMA-transferred from packet sending queue TX2-1 of VCPU2 to physical interface 1, and then reaches the packet receiving queue RX2 of the forwarding chip through the physical link formed on the board card by physical interface 1 and physical interface 1'.
After detecting that the message has been successfully sent out by the packet sending queue of the forwarding chip, VCPU2 releases the memory in which the data of the message is stored and deletes the recorded identifier of the target physical interface; the network device has then completed the processing and forwarding of the message.
Corresponding to the foregoing method embodiment for message offloading, the present application further provides an embodiment of a device for message offloading.
The embodiment of the message distribution device provided in the application can be applied to any network device. The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the device is formed, as a logical means, by the processor of the network device in which it is located reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 4 shows a hardware structure diagram of a network device in which a message distribution device provided in the application is located; besides the processor, memory, network interface and nonvolatile memory shown in fig. 4, the network device in which the device of the embodiment is located may also include other hardware according to its actual functions, which is not described again.
Referring to fig. 5, in order to provide a block diagram of a message offloading device provided in the present application, a forwarding chip and a physical CPU are configured in a network device, the forwarding chip and the physical CPU have a plurality of corresponding physical interfaces, the plurality of corresponding physical interfaces form a plurality of physical links for transmitting a message between the forwarding chip and the physical CPU, the physical CPU runs a plurality of virtual CPUs, and the device is applied to the virtual CPU and includes a recording unit 510 and an issuing unit 520:
the recording unit 510 is configured to, when receiving a to-be-processed packet, obtain and record an identifier of a target physical interface on a physical CPU through which the to-be-processed packet passes when being uploaded to a virtual CPU by a forwarding chip;
the issuing unit 520 is configured to, when the to-be-processed packet is processed, issue the processed packet to be processed to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface.
Optionally, when acquiring the identifier of the target physical interface on the physical CPU through which the to-be-processed packet is uploaded to the virtual CPU by the forwarding chip, the recording unit 510 is specifically configured to:
when the message to be processed is directly uploaded to the virtual CPU by the forwarding chip, the virtual CPU determines the identifier of the physical interface corresponding to the packet receiving queue as the identifier of the target physical interface on the physical CPU through which the message to be processed passes when the message to be processed is uploaded to the virtual CPU by the forwarding chip according to the packet receiving queue of the virtual CPU in which the message to be processed is located;
when the message to be processed is uploaded to other virtual CPUs by the forwarding chip and then is transferred to the virtual CPU by the other virtual CPUs, the virtual CPU obtains the identification of the target physical interface on the physical CPU, through which the message to be processed recorded by the other virtual CPUs passes when being uploaded to the other virtual CPUs by the forwarding chip.
Optionally, the identifier of the target physical interface is recorded in the management structure of the message to be processed.
Optionally, the issuing unit 520 issues the processed message to be processed to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface, and is specifically configured to:
and transferring the processed message to be processed to a packet sending queue corresponding to the recorded identification of the target physical interface so that the message to be processed is issued to the forwarding chip from the packet sending queue through the target physical interface.
Further, the message offloading apparatus provided in the present application further includes a deleting unit 530, configured to delete the recorded identifier of the target physical interface when it is detected that the processed message to be processed is successfully forwarded by the forwarding chip.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware comprising the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The essential components of a computer include a central processing unit for implementing or executing instructions, and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (8)

1. A method for distributing messages is characterized in that a forwarding chip and a physical CPU are configured in a network device, a plurality of corresponding physical interfaces exist on the forwarding chip and the physical CPU, the plurality of corresponding physical interfaces form a plurality of physical links used for transmitting messages between the forwarding chip and the physical CPU, a plurality of virtual CPUs are operated in the physical CPU, and the method is applied to the virtual CPU and comprises the following steps:
when a message to be processed is received, acquiring and recording an identifier of a target physical interface on a physical CPU through which the message to be processed passes when the message to be processed is uploaded to a virtual CPU by a forwarding chip;
when the message to be processed is processed, the processed message to be processed is sent to the forwarding chip through the target physical interface according to the recorded identification of the target physical interface,
the obtaining of the identifier of the target physical interface on the physical CPU through which the to-be-processed packet passes when being uploaded to the virtual CPU by the forwarding chip includes:
when the message to be processed is directly uploaded to the virtual CPU by the forwarding chip, the virtual CPU determines the identifier of the physical interface corresponding to the packet receiving queue as the identifier of the target physical interface on the physical CPU through which the message to be processed passes when the message to be processed is uploaded to the virtual CPU by the forwarding chip according to the packet receiving queue of the virtual CPU in which the message to be processed is located;
when the message to be processed is uploaded to other virtual CPUs by the forwarding chip and then is transferred to the virtual CPU by the other virtual CPUs, the virtual CPU obtains the identification of the target physical interface on the physical CPU, through which the message to be processed recorded by the other virtual CPUs passes when being uploaded to the other virtual CPUs by the forwarding chip.
2. The method of claim 1, wherein the identification of the target physical interface is recorded in a management structure of the pending message.
3. The method according to claim 1, wherein the issuing the processed packet to be processed to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface includes:
and transferring the processed message to be processed to a packet sending queue corresponding to the recorded identifier of the target physical interface, so that the message to be processed is issued to the forwarding chip by the packet sending queue through the target physical interface.
4. The method of claim 1, further comprising:
and when the message to be processed after the processing is detected to be successfully forwarded by the forwarding chip, deleting the recorded identification of the target physical interface.
5. A message distribution device is characterized in that a forwarding chip and a physical CPU are configured in a network device, a plurality of corresponding physical interfaces exist on the forwarding chip and the physical CPU, the plurality of corresponding physical interfaces form a plurality of physical links used for transmitting messages between the forwarding chip and the physical CPU, a plurality of virtual CPUs run in the physical CPU, and the device is applied to the virtual CPU and comprises a recording unit and an issuing unit:
the recording unit is used for acquiring and recording the identifier of a target physical interface on a physical CPU (central processing unit) through which the message to be processed passes when the message to be processed is uploaded to the virtual CPU by a forwarding chip when the message to be processed is received;
the issuing unit is used for issuing the processed message to be processed to the forwarding chip through the target physical interface according to the recorded identification of the target physical interface when the message to be processed is processed,
the recording unit, when acquiring the identifier of the target physical interface on the physical CPU through which the to-be-processed packet is uploaded to the virtual CPU by the forwarding chip, is specifically configured to:
when the message to be processed is directly uploaded to the virtual CPU by the forwarding chip, the virtual CPU determines the identifier of the physical interface corresponding to the packet receiving queue as the identifier of the target physical interface on the physical CPU through which the message to be processed passes when being uploaded to the virtual CPU by the forwarding chip according to the packet receiving queue of the virtual CPU in which the message to be processed is located;
when the message to be processed is uploaded to other virtual CPUs by the forwarding chip and then is transferred to the virtual CPU by the other virtual CPUs, the virtual CPU obtains the identification of the target physical interface on the physical CPU, through which the message to be processed recorded by the other virtual CPUs passes when being uploaded to the other virtual CPUs by the forwarding chip.
6. The apparatus of claim 5, wherein the identification of the target physical interface is recorded in a management structure of the pending message.
7. The apparatus according to claim 5, wherein the issuing unit is configured to issue the processed packet to be processed to the forwarding chip through the target physical interface according to the recorded identifier of the target physical interface, and is specifically configured to:
and transferring the processed message to be processed to a packet sending queue corresponding to the recorded identifier of the target physical interface, so that the message to be processed is issued to the forwarding chip by the packet sending queue through the target physical interface.
8. The apparatus according to claim 5, further comprising a deleting unit, configured to delete the recorded identifier of the target physical interface when it is detected that the processed packet to be processed is successfully forwarded by the forwarding chip.
CN202110586085.6A 2021-05-27 2021-05-27 Message distribution method and device Active CN113282525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586085.6A CN113282525B (en) 2021-05-27 2021-05-27 Message distribution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110586085.6A CN113282525B (en) 2021-05-27 2021-05-27 Message distribution method and device

Publications (2)

Publication Number Publication Date
CN113282525A CN113282525A (en) 2021-08-20
CN113282525B (en) 2023-03-28

Family

ID=77282269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586085.6A Active CN113282525B (en) 2021-05-27 2021-05-27 Message distribution method and device

Country Status (1)

Country Link
CN (1) CN113282525B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015031274A1 (en) * 2013-08-26 2015-03-05 Vmware, Inc. Virtual machine monitor configured to support latency sensitive virtual machines
CN109274592B (en) * 2018-11-22 2021-03-09 新华三技术有限公司 MAC address table item processing method and device and computer readable medium
CN112148422A (en) * 2019-06-29 2020-12-29 华为技术有限公司 IO processing method and device
CN110677358A (en) * 2019-09-25 2020-01-10 杭州迪普科技股份有限公司 Message processing method and network equipment
US11249804B2 (en) * 2019-10-07 2022-02-15 International Business Machines Corporation Affinity based optimization of virtual persistent memory volumes
CN112491821B (en) * 2020-11-12 2022-05-31 杭州迪普科技股份有限公司 IPSec message forwarding method and device

Also Published As

Publication number Publication date
CN113282525A (en) 2021-08-20

Similar Documents

Publication Publication Date Title
US10725684B2 (en) Method and apparatus for cost-based load balancing for port selection
CN110413542B (en) Control method, equipment and system for data read-write command in NVMe over Fabric architecture
JP5132812B2 (en) Method, system, and manufactured product for deciding whether to queue an I / O request using priority (queue input / output (I / O) requests sent to a storage device using priority To decide whether or not to enter)
TWI351615B (en) Apparatus,method,and system for controller link fo
TWI399649B (en) Method, hub device, memory controller and memory system for providing indeterminate read data latency
US9344490B2 (en) Cross-channel network operation offloading for collective operations
US10169948B2 (en) Prioritizing storage operation requests utilizing data attributes
JP7010598B2 (en) QoS-aware I / O management methods, management systems, and management devices for PCIe storage systems with reconfigurable multiports.
EP2446606B1 (en) Method and system for the transmission of data between data memories by remote direct memory access
CN104216865B (en) Mapping and reduction operation acceleration system and method
CN109388597B (en) Data interaction method and device based on FPGA
US10534563B2 (en) Method and system for handling an asynchronous event request command in a solid-state drive
CN111723030A (en) Memory system and control method of memory system
JP2008199542A (en) Data encryption apparatus, data decryption apparatus, data encrypting method, data decrypting method, and data relaying device
CN110535861B (en) Method and device for counting SYN packet number in SYN attack behavior identification
CN113282525B (en) Message distribution method and device
CN107797893A (en) A kind of method and apparatus for the duration for calculating hard disk processing read write command
CN107291641B (en) Direct memory access control device for a computing unit and method for operating the same
US20150254102A1 (en) Computer-readable recording medium, task assignment device, task execution device, and task assignment method
CN115242813A (en) File access method, network card and computing device
US9990159B2 (en) Apparatus, system, and method of look-ahead address scheduling and autonomous broadcasting operation to non-volatile storage memory
EP2220820B1 (en) Usage of persistent information unit pacing protocol in fibre channel communications
US8918559B2 (en) Partitioning of a variable length scatter gather list
TW201635764A (en) Technologies for network packet pacing during segmentation operations
KR102128832B1 (en) Network interface apparatus and data processing method for network interface apparauts thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant