WO2024007844A1 - Packet forwarding method, apparatus, computing device and offload card - Google Patents

Packet forwarding method, apparatus, computing device and offload card

Info

Publication number
WO2024007844A1
WO2024007844A1 (PCT/CN2023/100772)
Authority
WO
WIPO (PCT)
Prior art keywords
message
forwarding
processor
target
hash value
Prior art date
Application number
PCT/CN2023/100772
Other languages
English (en)
French (fr)
Inventor
李春辉
谢红
王志达
曾德勋
徐新海
唐新晨
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024007844A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/745 Address table lookup; Address filtering
    • H04L45/7453 Address table lookup; Address filtering using hashing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/70 Virtual switches

Definitions

  • the present application relates to the field of communication technology, and in particular to a message forwarding method, device, computing device and offload card.
  • the multiple packets need to be distributed to multiple nodes (a node can be a gateway in the network) to implement packet forwarding.
  • This message distribution method allocates message forwarding to different nodes, avoiding the situation where one node needs to forward multiple messages in a short time.
  • the processor in the device is responsible for distributing the multiple packets to the multiple nodes, for example through an open virtual switch (OVS) deployed on the processor. However, if the processor undertakes such distribution operations, it occupies more processor resources.
  • This application provides a message forwarding method, device, computing device and offload card to speed up message forwarding and reduce processor usage.
  • embodiments of the present application provide a message forwarding method.
  • the message forwarding method can be executed by an offload card in the computing device.
  • the offload card can forward some messages instead of the processor in the computing device.
  • the forwarding process of the first message is explained here.
  • the offload card queries the message forwarding flow table according to the first message, and determines the target forwarding rule from a plurality of forwarding rules included in the message forwarding flow table.
  • the offload card can also extract the key information of the first message from the first message, perform hash processing on the key information of the first message, and obtain the hash value of the first message.
  • the offload card forwards the first message according to the forwarding action corresponding to the hash value of the first message in the target forwarding rule.
  • the offload card can query the packet forwarding flow table and use the hash value of the packet to determine the corresponding forwarding action from the target forwarding rule, forwarding the packet without the participation of the processor. This effectively reduces the occupancy of the processor and releases its computing power.
  • the target forwarding rule includes matching information and action information.
  • the matching information indicates the fields in the message that need to be matched.
  • the action information is the action that needs to be performed when the message is consistent with the matching information.
  • the action information may directly indicate an action.
  • in that case, the offload card need not calculate the hash value of the first message; it can directly perform the action indicated by the action information to complete the forwarding of the message.
  • the action information may also be a composite action.
  • the action information includes multiple sets of forwarding actions, and further matching is required to select a set of forwarding actions from the multiple sets of actions.
  • the action information includes multiple buckets, each bucket includes at least one forwarding action, and each bucket is associated with a different hash code.
  • the offload card determines the target bucket among the multiple buckets of the target forwarding rule based on the hash value of the first message, and then forwards the first message according to the forwarding action included in the target bucket.
  • the action information in the target forwarding rule includes multiple buckets, and the offload card can conveniently and quickly use the hash value of the message to select one bucket from multiple buckets to forward the message.
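The rule structure described above (action information holding multiple buckets, each bucket associated with a hash code) can be sketched as follows. This is a minimal illustration only; the names `ForwardingRule` and `Bucket`, and the modulo mapping from hash value to bucket, are our assumptions rather than anything specified in the application.

```python
from dataclasses import dataclass, field

@dataclass
class Bucket:
    # Each bucket holds at least one forwarding action.
    actions: list

@dataclass
class ForwardingRule:
    match: dict                                   # fields the packet must match
    buckets: list = field(default_factory=list)   # composite action: one bucket per hash code

    def select_bucket(self, pkt_hash: int) -> Bucket:
        # Associate each hash value with a bucket; here a simple modulo
        # stands in for the hash-code association described in the text.
        return self.buckets[pkt_hash % len(self.buckets)]

# A rule whose composite action spreads packets over two nodes.
rule = ForwardingRule(
    match={"in_port": 1},
    buckets=[Bucket(actions=["output:node1"]),
             Bucket(actions=["output:node2"])],
)
```

Packets whose hash values differ modulo the bucket count are forwarded through different nodes, which is the load-sharing effect the application aims at.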
  • the processor and the offload card are located in the same device, and the processor and the offload card can be connected through a bus.
  • the bus can be a PCIe bus, a bus of the CXL or USB protocol, or another type of bus.
  • after the offload card updates the target forwarding rule into the packet forwarding flow table, when it receives the first packet or other packets belonging to the same packet flow as the first packet, it can obtain the updated target forwarding rule from the updated packet forwarding flow table.
  • the target forwarding rule can thus be found in the packet forwarding flow table to implement packet forwarding, avoiding processor involvement and reducing processor consumption.
  • the second packet and the first packet belong to the same packet flow, and the second packet is the first packet in the packet flow.
  • the offload card will receive the second message before receiving the first message.
  • the offload card can query the packet forwarding flow table based on the second packet. If no matching forwarding rule is found (because the target forwarding rule has not yet been updated into the packet forwarding flow table at this time), the offload card can send the second packet to the processor. After obtaining the second packet, the processor can generate the target forwarding rule based on the second packet and send the target forwarding rule to the offload card. The processor may also use the target forwarding rule to forward the second packet.
  • the offload card can submit the second message for which no matching forwarding rule is found to the processor so that the processor can process the message.
  • the offload card can also obtain the target forwarding rule from the processor, so that after subsequently receiving a packet that has the same packet header as the second packet or belongs to the same packet flow, it can use the target forwarding rule to forward that packet.
  • the offload card can hash the key information in the second message to obtain the hash value of the second message, and send that hash value to the processor.
  • the processor can obtain the hash value of the second message from the offload card. If the action information in the target forwarding rule is a composite action, the processor determines, based on that hash value, the forwarding action corresponding to the hash value of the second message in the target forwarding rule, and forwards the second message according to that forwarding action.
  • the offload card can provide the hash value of the second message to the processor, sparing the processor from calculating the hash value itself, reducing the occupation of the processor, and further releasing its computing power.
  • the key information of the first message is taken as an example to describe the key information of the message.
  • the embodiments of this application do not limit the specific content included in the key information of the first message.
  • the key information of the first message includes part or all of the following: destination address, source address, protocol of the message, source port, and destination port. These fields are only examples; the key information of the first message can also include other types of information.
  • before the offload card hashes the key information of the first message, it can parse the message header of the first message, extract the key information from the header, and then hash that key information.
  • the offload card has message parsing capabilities and certain computing capabilities to ensure that key information can be extracted from the first message and hash calculation is performed to obtain the hash value of the first message.
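The extraction and hashing of key information can be sketched as below. The 5-tuple fields follow the list given above; the use of SHA-256 is our assumption for this sketch (offload hardware would typically use something like a Toeplitz hash), and the names `extract_key_info` and `flow_hash` are illustrative.

```python
import hashlib

def extract_key_info(header: dict) -> tuple:
    # Key information per the text: source/destination address,
    # protocol, and source/destination port.
    return (header["src_ip"], header["dst_ip"], header["proto"],
            header["src_port"], header["dst_port"])

def flow_hash(key_info: tuple) -> int:
    # Hash the key information to a 32-bit value. Packets of the same
    # flow share key information, so they always get the same hash.
    digest = hashlib.sha256("|".join(map(str, key_info)).encode()).digest()
    return int.from_bytes(digest[:4], "big")

header = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
          "proto": "tcp", "src_port": 1234, "dst_port": 80}
```

Because the hash is computed only from header fields, every packet of one packet flow maps to the same hash value, which is what lets the hash pick a stable forwarding action per flow.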
  • embodiments of the present application also provide a message forwarding device.
  • the message forwarding device has the function of realizing the behavior of the offload card in the method example of the first aspect.
  • the functions described can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the structure of the message forwarding device includes a determination module and a forwarding module. These modules can perform the corresponding functions in the method examples of the first aspect; for details, please refer to the detailed description in the method examples, which is not elaborated here.
  • embodiments of the present application also provide an offload card, which has the function of implementing the offload card behavior in the method example of the first aspect.
  • the beneficial effects can be found in the description of the first aspect and will not be described again here.
  • the structure of the device includes a processor and, optionally, a memory.
  • the processor is configured to support the offload card in performing corresponding functions in the method of the first aspect.
  • the memory is coupled to the processor and stores computer program instructions and data necessary for the communication device (such as a message forwarding flow table).
  • the structure of the offload card also includes an interface for communicating with a processor or other devices, such as receiving a target forwarding rule, sending a second message, and sending a hash value of the second message.
  • embodiments of the present application also provide a computing device.
  • the computing device includes an offload card and a processor.
  • the offload card can forward a first message by using the target forwarding rule obtained from the processor.
  • a second message that matches no forwarding rule is sent to the processor for processing.
  • the offload card has the function of realizing the offload card behavior in the method example of the first aspect. The beneficial effects can be found in the description of the first aspect and will not be described again here.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the method described in the above first aspect and in each possible implementation of the first aspect.
  • the present application also provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the method described in the above-mentioned first aspect and each possible implementation of the first aspect.
  • the present application also provides a computer chip. The chip is connected to a memory and is used to read and execute the software program stored in the memory, executing the method performed by the offload card as described in the above first aspect and in each possible implementation of the first aspect.
  • Figure 1 is a schematic diagram of a processor forwarding messages.
  • Figure 2 is a schematic structural diagram of a computing device provided by this application.
  • Figure 3 is a schematic diagram of a message forwarding method provided by this application.
  • Figure 4 is a schematic structural diagram of a message forwarding device provided by this application.
  • Figure 5 is a schematic structural diagram of a computing device provided by this application.
  • packet transmission is no longer limited to physical devices; it also occurs between computing instances such as virtual machines or containers.
  • multiple computing instances can be deployed on one computing device, and packet transmission is allowed between computing instances on the same computing device or between computing instances on different computing devices.
  • a virtual switch can be run on the processor of the computing device.
  • a virtual switch is a switch in software form. A common virtual switch is the open virtual switch (OVS).
  • OVS can implement not only Layer 2 forwarding based on media access control address (MAC), but also Layer 3 forwarding.
  • OVS also adopts a software defined network (SDN) mechanism.
  • the SDN controller can deliver packet forwarding rules to OVS through the OpenFlow protocol.
  • the set of forwarding rules on the OVS forms a packet forwarding flow table.
  • the matching information indicates the fields in the packet that need to be matched, such as the port number of the packet that needs to be matched, the source IP address of the packet, the source MAC, the destination Internet Protocol (IP) address of the packet, and the destination MAC.
  • the action information is used to indicate the action that OVS needs to perform when the fields indicated by the matching information completely match.
  • the action information can indicate the forwarding port of the packet, an update to the destination MAC of the packet, etc.
  • after receiving a message, OVS will first parse the message header and extract the key information of the message.
  • the key information can be the source MAC, destination MAC, Layer 2 protocol type, source IP address, destination IP address, Layer 3 protocol type, transmission control protocol/internet protocol (TCP/IP) port number, and so on. This is only a list of information that may be included in the key information; the embodiments of this application do not limit the specific information included in the key information.
  • OVS searches for matching forwarding rules in the packet forwarding flow table issued by the SDN controller based on the extracted key information. After finding a matching forwarding rule, OVS will perform the action indicated by the action information in the matching forwarding rule.
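The lookup just described — comparing the fields named by each rule's matching information against the extracted key information — can be sketched as follows. This is a simplified model: real OVS flow tables also have rule priorities and wildcard masks, which are omitted here, and the field names are illustrative.

```python
def matches(match_info: dict, key_info: dict) -> bool:
    # A rule matches when every field named in its matching
    # information agrees with the packet's key information.
    return all(key_info.get(field) == value
               for field, value in match_info.items())

def lookup(flow_table: list, key_info: dict):
    # Return the first matching forwarding rule, or None on a miss.
    for rule in flow_table:
        if matches(rule["match"], key_info):
            return rule
    return None

flow_table = [
    {"match": {"in_port": "port1", "dst_ip": "192.168.1.3"},
     "action": "output:port3"},
    {"match": {"in_port": "port1"},
     "action": "output:port4"},
]
```

On a hit, the action of the matching rule is executed; on a miss (`None`), the packet must be handled elsewhere, which is exactly the slow path handed to the processor later in this document.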
  • in Figure 1, three virtual machines are deployed on the server, namely VM1, VM2, and VM3.
  • a network card is installed on the server, and OVS is running on the server's processor.
  • the virtual functions (VFs) on the network card are formed based on single root I/O virtualization (SR-IOV). Each VF acts as a "virtual" network card, is directly connected to a virtual machine, and transmits messages for that virtual machine.
  • VM1 is connected to the virtual port (port) 1 of OVS through the network card. That is, packets sent by VM1 flow into port1 of OVS through the network card, and packets that need to be sent to VM1 flow out of port1 of OVS and are sent to VM1 through the network card.
  • VM2 is connected to port2 of OVS through a network card.
  • VM3 is connected to port3 of OVS through a network card.
  • two forwarding rules are recorded in the packet forwarding flow table received by OVS. In the matching information of both rules, the input port is port1.
  • after receiving a packet, OVS can select a forwarding rule that matches the packet from the packet forwarding flow table, and then forward the packet according to the matching forwarding rule.
  • packet 1 and packet 2 sent by VM1 enter OVS via port1.
  • the source IP addresses and destination IP addresses of message 1 and message 2 are as follows:
  • for packet 1, OVS extracts the key information in packet 1, such as the source IP address and destination IP address, and uses the extracted key information to find a matching forwarding rule. Through matching, packet 1 matches forwarding rule 1. OVS executes the action indicated by the action information in forwarding rule 1 and forwards packet 1 through port3. Packet 1 flows out from port3 and is transmitted to VM3 through the network card.
  • for packet 2, OVS extracts the key information in packet 2, such as the source IP address and destination IP address, and uses the extracted key information to find a matching forwarding rule. Packet 2 matches forwarding rule 2. OVS executes the action indicated by the action information in forwarding rule 2 and forwards packet 2 through port4. Packet 2 flows out from port4 and is transmitted to the gateway outside the server through the network card and physical port.
  • the server needs to distribute the packets to multiple nodes (such as gateways) for forwarding, so as to achieve load sharing.
  • in software such as OVS, the load balancing algorithm is implemented through the action group table (group table action).
  • Group table action can be understood as the action information in forwarding rules, and group table action can be regarded as a composite action.
  • a group table action includes multiple buckets, and each bucket can contain one or more forwarding actions. Each bucket corresponds to a hash value. The forwarding actions in different buckets indicate that messages are forwarded through different nodes. That is, different buckets correspond to different nodes.
  • after OVS receives a message and finds the forwarding rule that matches it, if the action information in the rule is a group table action, OVS can hash some or all of the key information in the message to obtain a hash value, select the bucket corresponding to that hash value from the group table action, and perform the forwarding actions included in the bucket to complete the forwarding of the message. In this way, OVS performs the forwarding actions of different buckets for different types of packets, thereby achieving load balancing.
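The group-table selection just described can be sketched as below: hash the key information, then pick the bucket associated with the resulting hash value. SHA-256 and the modulo mapping are our stand-ins for whatever hash function and hash-to-bucket association a real deployment uses, and the gateway names are hypothetical.

```python
import hashlib

def group_select(buckets: list, key_info: tuple) -> list:
    # Hash some or all of the key information, then select the bucket
    # corresponding to the hash value. All packets of one flow share
    # key information, so a flow is never split across buckets.
    h = int.from_bytes(
        hashlib.sha256(repr(key_info).encode()).digest()[:4], "big")
    return buckets[h % len(buckets)]

# Each bucket forwards through a different node (hypothetical gateways).
group_table_action = [
    ["output:gateway1"],
    ["output:gateway2"],
    ["output:gateway3"],
]
flow = ("10.0.0.1", "10.0.0.2", "tcp", 1234, 80)
```

Distinct flows hash to different values and so tend to land in different buckets, spreading the load across the gateways while keeping each flow pinned to one of them.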
  • Action information: type group table. This indicates that the action group table needs to be searched.
  • the packet forwarding process requires the steps of packet parsing, forwarding rule search, and forwarding action execution. These steps usually need to be executed by the processor.
  • the forwarding of messages undoubtedly occupies processor resources, especially for messages that need to be load balanced: the number of such messages is large, so more processor resources are occupied, leading to more resource consumption.
  • embodiments of the present application provide a message forwarding method, in which an offload card in the computing device can replace the processor to implement message forwarding.
  • the offload card can parse a packet of the computing device, extract the key information of the packet, find the matching target forwarding rule in the packet forwarding flow table, determine the forwarding action from the target forwarding rule according to the hash value of the packet, and execute that forwarding action for the packet.
  • the processor is no longer required to participate in the forwarding of messages. Instead, the offload card forwards the messages, which reduces the consumption of the processor and can release more processor resources.
  • Figure 2 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • the computing device 10 includes a processor 100 and an offload card 200 .
  • Offload card 200 may be connected to processor 100 via a bus.
  • This application does not limit the location of the offload card 200 in the computing device 10.
  • the offload card 200 can be directly inserted into a card slot on the motherboard or backplane of the computing device 10 and exchange data with the processor 100 through a bus.
  • the embodiment of the present application does not limit the specific form of the offload card 200.
  • the offload card 200 can be installed on the computing device 10 in the form of a smart network card, or can be deployed on the computing device 10 in other forms.
  • the offload card 200 stores a packet forwarding flow table, which includes one or more forwarding rules.
  • the offload card 200 is responsible for most of the packet forwarding operations in the computing device 10 . A small portion of the message forwarding operations may be performed by the processor 100.
  • "most of the messages" here refers to the messages for which the offload card 200 can find forwarding rules matching the message (for example, matching its key information); "a small part of the messages" refers to the messages for which the offload card 200 cannot find matching forwarding rules.
  • Messages to be sent in the computing device 10 may arrive at the offload card 200 first. After the offload card 200 receives the message to be sent, the offload card 200 extracts key information of the message and uses the key information to search for forwarding rules from the message forwarding flow table.
  • after finding a matching target forwarding rule, the offload card 200 can forward the message according to the target forwarding rule. If the action information of the target forwarding rule includes an action group table, the offload card 200 can obtain the hash value of the message and forward the message according to the forwarding action corresponding to that hash value in the target forwarding rule; for example, the offload card 200 determines a bucket from the action group table according to the hash value and executes the actions included in the bucket. If the action information of the forwarding rule does not include an action group table but directly indicates an action, the offload card 200 executes that action to forward the message.
  • the offload card 200 can hand the message to the processor 100 for processing.
  • the offload card 200 can also calculate the hash value of the message and send the hash value of the message to the processor 100 .
  • the offload card 200 can also obtain, from the processor 100, the target forwarding rule that matches the key information of the message, and update the target forwarding rule into the message forwarding flow table, so that subsequent messages with the same key information can be forwarded.
  • a software forwarding unit is deployed on the processor 100.
  • the processor 100 calls the software forwarding unit to undertake the forwarding operation of a small part of the messages.
  • when the processor 100 receives, from the offload card 200, a message that the offload card 200 cannot process, the processor 100 generates a target forwarding rule for this message and forwards the message according to that target forwarding rule.
  • the processor 100 may also deliver the target forwarding rule to the offload card 200 .
  • the processor 100 obtains the hash value of the message, and forwards the message according to the forwarding action corresponding to the hash value of the message in the target forwarding rule. For example, the processor 100 determines a bucket from the action group table according to the hash value of the message, and executes the actions included in the bucket to forward the message. If the action information of the target forwarding rule does not include an action group table but directly indicates an action, the processor 100 executes the action to forward the message.
  • the transmission interface between the processor 100 and the offload card 200 can be defined based on the data plane development kit (DPDK).
  • the processor 100 obtains the messages transmitted by the offload card 200, and the offload card 200 obtains the forwarding rules delivered by the processor 100; both can be transmitted through this transmission interface.
  • the offload card 200 can maintain a data cache queue, which is used to cache data that needs to be sent to the processor 100.
  • the data includes the message and the hash value of the message.
  • the offload card 200 can put the message and the hash value of the message in the data cache queue.
  • the processor 100 can extract the message and the hash value of the message from the data cache queue through the transmission interface.
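The queue behaviour described above can be modelled as below. The real interface between the offload card 200 and the processor 100 would be defined over DPDK (for example a ring); this Python class only illustrates the producer/consumer behaviour, and the class and method names are ours.

```python
from collections import deque

class DataCacheQueue:
    # Models the data cache queue maintained by the offload card:
    # packets are cached together with their hash values so the
    # processor never has to recompute a hash.
    def __init__(self):
        self._entries = deque()

    def put(self, packet: bytes, pkt_hash: int) -> None:
        # Offload-card side: enqueue a packet and its hash.
        self._entries.append((packet, pkt_hash))

    def get(self):
        # Processor side: dequeue the next (packet, hash) pair,
        # or None when the queue is empty.
        return self._entries.popleft() if self._entries else None

queue = DataCacheQueue()
queue.put(b"first-packet", 0x1234)
```

Enqueueing the hash alongside the packet is what lets the processor skip the hash computation on the slow path, as the surrounding text emphasises.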
  • the offload card 200 performs most of the packet forwarding operations on behalf of the processor 100. For the packets that the offload card 200 cannot process, the offload card 200 can obtain the relevant forwarding rules (such as the target forwarding rule) from the processor 100, so that for subsequently received packets with the same key information, the offload card 200 can process such packets itself.
  • this type of processing method is suitable for a computing device 10 that needs to send a message flow composed of multiple messages. The message headers of the messages in the flow are the same, that is, the key information of these messages is the same, while the data carried in the messages differs.
  • after the offload card 200 receives the first message in the message flow, if it cannot process that message, it can hand it over to the processor 100 for processing and forwarding, and obtain the relevant forwarding rule from the processor 100. The offload card 200 can then use that forwarding rule to process subsequent messages of the message flow.
  • Figure 3 shows a message forwarding method provided by an embodiment of the present application.
  • the method includes two parts. One part is the forwarding process of the first message in the message flow, see steps 301 to 307. The other part is the forwarding process of subsequent messages in the message flow, see steps 308 to 311. It should be noted that, for other messages, the offload card 200 may forward them in a manner similar to steps 301 to 307, or may forward them in a manner similar to steps 308 to 311.
  • Step 301 The offload card 200 obtains the first message in the message flow.
  • Computing device 10 may send multiple messages to other computing devices, the multiple messages forming a message stream.
  • the computing instance deployed on the computing device 10 can also send multiple messages to other computing devices or other computing instances, and the multiple messages form a message flow.
  • the packets in the packet flow will arrive at the offload card 200 in sequence, and the offload card 200 first obtains the first packet in the packet flow.
  • Step 302 The offload card 200 parses the first message, extracts key information of the first message, and searches for matching forwarding rules from the message forwarding flow table based on the key information.
  • when the offload card 200 performs step 302, it mainly parses the message header of the first message and extracts the key information of the first message.
  • the content included in the key information is related to the matching information of each forwarding rule in the packet forwarding flow table. That is to say, the offload card 200 extracts, as key information, only the information related to the matching information of the forwarding rules in the packet forwarding flow table.
  • after extracting the key information of the first message, the offload card 200 searches for a matching forwarding rule based on the key information, that is, it looks for a forwarding rule whose fields indicated by the matching information are consistent with the key information.
  • the offload card 200 may then perform step 303.
  • Step 303 The offload card 200 does not find a matching forwarding rule in the packet forwarding flow table, and the offload card 200 sends the first packet to the processor 100.
  • the offload card 200 can also use the extracted key information to obtain the hash value of the first message.
  • the offload card 200 may send the hash value to the processor 100 together with the first message.
  • the offload card 200 writes the first message into the data cache queue. If the offload card 200 has also calculated the hash value of the first message, it can write that hash value into the data cache queue as well.
  • the embodiment of the present application does not limit the way in which the offload card 200 obtains the hash value of the first message.
  • the offload card 200 can hash part or all of the key information to obtain the hash value of the first message.
  • Step 304 The processor 100 obtains the first message and generates a target forwarding rule based on the first message.
  • the processor 100 can extract the first message from the data cache queue through the transmission interface with the offload card 200. If the data cache queue also includes the hash value of the first message, the processor 100 can extract that hash value from the data cache queue as well.
  • the processor 100 can parse the message header of the first message, extract key information of the first message, and generate target forwarding rules based on the key information.
  • the processor 100 may be preset with a forwarding rule generation strategy.
  • the forwarding rule generation strategy describes the setting strategy for action information and matching information in the forwarding rule when generating the forwarding rule.
  • the forwarding rule generation policy can indicate the fields of the packets that need to be matched, the specific values of the fields, and the forwarding actions that need to be performed.
  • the embodiments of this application do not limit the way in which the forwarding rule generation policy is preset.
  • the forwarding rule generation strategy is set based on the openflow protocol.
  • the processor 100 generates a target forwarding rule based on the preset forwarding rule generation policy and the first message.
  • the target forwarding rule generated by the processor 100 for the packet may exist in two forms: one is a target forwarding rule including an action group table, that is, a target forwarding rule used to support load balancing of the packet. The other is a target forwarding rule that does not contain an action group table.
  • if the target forwarding rule does not contain an action group table, the processor 100 can directly perform the action indicated by the action information in the target forwarding rule and forward the first message.
  • if the target forwarding rule contains an action group table, the processor 100 executes step 305.
  • Step 305 The processor 100 determines the target bucket corresponding to the hash value from the target forwarding rule according to the hash value of the first message.
  • the processor 100 extracts the hash value of the first message from the data cache queue, and the processor 100 determines the target bucket corresponding to the hash value from the target forwarding rule based on the extracted hash value.
  • the embodiment of the present application does not limit the manner in which the processor 100 uses the hash value to determine the target bucket corresponding to the hash value from the target forwarding rule.
  • the correspondence between hash values and buckets can be formed based on preset rules.
  • the preset rules are different in different scenarios, and the correspondence between hash values and buckets is also different.
  • the rule may be that the remainder of the hash value is the identifier of the corresponding bucket. In this case, the processor 100 may determine the corresponding target bucket after taking the remainder of the hash value of the first message.
  • the rule can be that the last digit of the hash value is the identifier of the corresponding bucket. In this case, the processor 100 may determine the corresponding target bucket based on the last digit of the hash value of the first message.
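The two example mappings just described (remainder of the hash, or its last digit) can be sketched as follows. This is an illustration only: the function names and the decimal reading of "last digit" are assumptions, not part of the embodiment.

```python
# Two illustrative hash-to-bucket mappings, as described above.

def bucket_by_remainder(hash_value: int, num_buckets: int) -> int:
    # The remainder of the hash value is the identifier of the bucket.
    return hash_value % num_buckets

def bucket_by_last_digit(hash_value: int) -> int:
    # The last decimal digit of the hash value is the bucket identifier.
    return hash_value % 10
```

Either mapping is deterministic for a given hash value, so every message of the same flow lands in the same bucket.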
  • Step 306 The processor 100 forwards the first packet according to the forwarding action included in the target bucket.
  • each bucket includes one or more forwarding actions, and the processor 100 executes the one or more forwarding actions to complete the forwarding of the first packet.
  • the target bucket includes two forwarding actions.
  • One forwarding action indicates modifying the destination MAC of the message, and indicates the modified destination MAC.
  • Another forwarding action indicates the output port of the message.
  • the processor 100 can perform these two forwarding actions, modify the destination MAC of the first message, and transmit the first message with the modified destination MAC through the indicated output port.
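A minimal sketch of executing such a bucket, assuming a simple dict-based encoding of packets and actions (the field names are illustrative, not taken from the embodiment):

```python
def apply_bucket(packet: dict, bucket: list) -> dict:
    """Perform each forwarding action in the target bucket on a packet."""
    for action in bucket:
        if action["type"] == "set_dst_mac":
            # One forwarding action: modify the destination MAC.
            packet["dst_mac"] = action["mac"]
        elif action["type"] == "output":
            # Another forwarding action: record the indicated output port.
            packet["out_port"] = action["port"]
    return packet

target_bucket = [
    {"type": "set_dst_mac", "mac": "00:02:03:04:05:01"},
    {"type": "output", "port": "port4"},
]
pkt = apply_bucket({"dst_mac": "ff:ff:ff:ff:ff:ff"}, target_bucket)
```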
  • Step 307 The processor 100 sends the target forwarding rule to the offload card 200.
  • the offload card 200 receives the target forwarding rule and updates the target forwarding rule into the packet forwarding flow table.
  • the processor 100 can deliver the target forwarding rule to the offload card 200 through the transmission interface. After receiving the target forwarding rule, the offload card 200 updates the locally saved message forwarding flow table.
  • the forwarding of the first message in the message flow is completed, and the offload card 200 can process subsequent messages in the message flow. For details, see step 308.
  • Step 308 The offload card 200 obtains subsequent messages in the message flow.
  • Step 309 The offload card 200 parses the subsequent message and extracts key information of the subsequent message.
  • the offload card 200 searches for matching target forwarding rules from the message forwarding flow table based on the key information.
  • the manner in which the offload card 200 extracts the key information of the subsequent message is similar to the manner in which the offload card 200 extracts the key information of the first message. For details, please refer to the foregoing content and will not be repeated here.
  • the target forwarding rule is the target forwarding rule issued by the processor 100 to the offload card 200 in step 307.
  • if the target forwarding rule does not include an action group table, the offload card 200 directly performs the action indicated by the action information in the target forwarding rule for the subsequent message, and forwards the subsequent message.
  • if the target forwarding rule includes an action group table, the offload card 200 executes step 310.
  • Step 310 The offload card 200 obtains the hash value of the subsequent message.
  • the offload card 200 determines the target bucket corresponding to the hash value from the target forwarding rule based on the hash value.
  • the way in which the offload card 200 obtains the hash value of the subsequent message is similar to the way in which the offload card 200 obtains the hash value of the first message. For details, please refer to the foregoing content and will not be described again here.
  • the manner in which the offload card 200 executes step 310 is similar to the manner in which the processor 100 executes step 305. For details, please refer to the foregoing description; it will not be described again here.
  • because the subsequent message has the same header, and therefore the same key information and hash value, as the first message, the target bucket determined by the offload card 200 is the same as the target bucket determined by the processor 100 in step 305.
  • Step 311 The offload card 200 forwards the message according to the forwarding action included in the target bucket.
  • the manner in which the offload card 200 executes step 311 is similar to the manner in which the processor 100 executes step 306. For details, please refer to the foregoing description; it will not be described again here.
  • the offload card 200 hands the first message to the processor 100 for processing, and can obtain the target forwarding rule from the processor 100 .
  • the offload card uses the target forwarding rule to forward subsequent messages in the message flow without requiring the processor 100 to participate, thereby reducing the consumption of the processor 100.
  • the target forwarding rule may include the action group table.
  • with the action group table, the offload card 200 can conveniently use the hash value of a message to find the corresponding target bucket and achieve load balancing of messages.
  • a target forwarding rule that includes the action group table also has a more compact representation and does not occupy much storage space on the offload card 200.
  • the message forwarding method provided by this application is introduced above with reference to Figures 1 to 3. Based on the same inventive concept as the method embodiments, the embodiments of this application also provide a message forwarding device.
  • the message forwarding device is used to execute the method performed by the offload card described in the method embodiment shown in Figure 3. For the relevant features, refer to the above method embodiment; they are not repeated here.
  • the message forwarding device 400 includes a determining module 401 and a forwarding module 402.
  • the determining module 401 is configured to determine a target forwarding rule from a plurality of forwarding rules included in the packet forwarding flow table according to the first message.
  • the forwarding module 402 is configured to hash the key information in the first message to obtain the hash value of the first message; according to the forwarding action corresponding to the hash value of the first message in the target forwarding rule, Forward the first message.
  • the message forwarding device 400 in the embodiments of this application can be implemented by a central processing unit (CPU), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD).
  • the above PLD can be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), a data processing unit (DPU), a smart network interface card (Smart NIC, also called an intelligence network interface card, iNIC), an infrastructure processing unit (IPU), or any combination thereof.
  • when the message forwarding methods shown in Figures 1 to 3 are implemented by software, the message forwarding device 400 and its respective modules can also be software modules.
  • the target forwarding rule includes matching information and action information.
  • the action information includes multiple buckets, each bucket includes at least one forwarding action, and each bucket corresponds to a different hash value.
  • when forwarding the first message, the forwarding module 402 may determine the target bucket from the multiple buckets of the target forwarding rule based on the hash value of the first message, and forward the first message based on the forwarding action included in the target bucket.
  • the offload card and the processor are located in the same device, and the offload card and the processor are connected through a bus.
  • the determination module 401 obtains the target forwarding rules from the processor and updates the target forwarding rules into the message forwarding flow table.
  • the determination module 401 can also receive a second message, which belongs to the same message stream as the first message; the second message is the first message in the message stream. When the determination module 401 does not find a forwarding rule in the message forwarding flow table according to the second message, it can send the second message to the processor. After obtaining the second message, the processor can generate a target forwarding rule based on the second message and send it to the determination module 401, which can receive the target forwarding rule.
  • the determination module 401 can also perform hash processing on the key information in the second message to obtain the hash value of the second message, and send the hash value of the second message to the processor.
  • the key information of the first message includes part or all of the following: destination address, source address, protocol supported by the message, source port, and destination port.
  • the forwarding module 402 can extract the key information of the first message from the message header of the first message and perform hash processing on the key information of the first message.
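As a hedged sketch of this extract-and-hash step: the five-tuple fields named above are hashed to a stable integer. The dict field names and the choice of MD5 are assumptions made for illustration; a real offload card may use any hardware hash.

```python
import hashlib

def key_info(packet: dict) -> tuple:
    # The five-tuple key information named above; field names are assumptions.
    return (packet.get("dst_ip"), packet.get("src_ip"), packet.get("proto"),
            packet.get("src_port"), packet.get("dst_port"))

def hash_key_info(packet: dict) -> int:
    # Any stable hash works; MD5 is used here only for illustration.
    digest = hashlib.md5(repr(key_info(packet)).encode()).hexdigest()
    return int(digest, 16)
```

Because only the header fields enter the hash, every message of one flow hashes to the same value regardless of its payload.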
  • the message forwarding device 400 may correspond to performing the methods described in the embodiments of this application, and the above and other operations and/or functions of each unit in the message forwarding device 400 are respectively intended to implement the corresponding processes of the methods in Figures 1 to 3; for the sake of brevity, they are not described again here.
  • FIG. 5 is a schematic structural diagram of a computing device 10 provided by this application.
  • the computing device 10 includes a bus 300, a processor 100, an offload card 200, and a memory 400.
  • the processor 100, the offload card 200, and the memory 400 communicate through the bus 300.
  • the bus 300 may be a line based on the peripheral component interconnect express (PCIe) standard, or may be a bus of the compute express link (CXL), unified bus (Ubus or UB), cache coherent interconnect for accelerators (CCIX) protocol, or another protocol.
  • the bus 300 can be divided into an address bus, a data bus, a control bus, etc.
  • the processor 100 may be a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an artificial intelligence (AI) chip, a system on chip (SoC), a complex programmable logic device (CPLD), a graphics processing unit (GPU), etc.
  • the memory 400 may include volatile memory, such as random access memory (RAM) or dynamic random access memory (DRAM). It may also include non-volatile memory, such as storage class memory (SCM), read-only memory (ROM), a hard disk drive (HDD), or a solid state drive (SSD), or a combination of volatile memory and non-volatile memory.
  • the memory 400 may also include the computer program instructions required by the operating system and other running processes.
  • the operating system can be LINUX™, UNIX™, WINDOWS™, etc.
  • the memory 400 can also store other data, such as data that needs to be carried in the message.
  • the processor 100 can execute the method executed by the processor 100 in the data forwarding method provided by the embodiment of the present application by calling the computer program instructions stored in the memory 400 .
  • the offload card 200 includes a processor 210 and a memory 220.
  • the offload card may also include other components such as power supply circuits, interfaces, and buses.
  • the processor 210 and the memory 220 are connected through a bus.
  • the processor 210 can interact with other components of the computing device 10 (such as the processor 100) through interfaces, such as receiving target forwarding rules, sending messages, and sending hash values of messages.
  • the processor 210 is similar to the processor 100, and the processor 210 may be a CPU, ASIC, FPGA, AI chip, SoC, CPLD, or GPU, etc.
  • the processor 210 in the offload card 200 may be deployed in the computing device 10 as a co-processor of the processor 100 to cooperate with the processor 100 to perform operations.
  • the processor 210 may also be a DPU or IPU.
  • the offload card 200 may have a separate memory 220, which may store computer program instructions required by the processor 210, and may also be used as a cache to store data, such as packet forwarding flow tables, packets, or hash values of packets.
  • the processor 100 and the processor 210 in the offload card 200 can share the memory 400; that is, the memory 400 can take over all or part of the functions of the memory 220.
  • the memory 400 can store all the computer program instructions that the processor 210 needs to call, and the memory 220 can obtain from the memory 400 the portion of those computer program instructions that the processor 210 needs, for the processor 210 to call.
  • the processor 210 can execute the method of offloading card execution in the data forwarding method provided by the embodiment of the present application by calling computer program instructions stored in the memory 220 (for example, when the processor 210 is a CPU, an AI chip, a GPU, or a DPU).
  • the processor 210 may also run the computer program instructions burned onto it or the processing logic of its hardware circuits (for example, when the processor 210 is an ASIC, FPGA, SoC, or CPLD) to perform the method executed by the offload card in the data forwarding method provided by the embodiments of this application.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another; e.g., the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center that contains one or more sets of available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, tape), optical media (eg, DVD), or semiconductor media.
  • the semiconductor medium may be a solid state disk (SSD).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A message forwarding method, apparatus, computing device, and offload card, relating to the field of communication technology. The method includes: after obtaining a first message, the offload card queries a message forwarding flow table according to the first message and determines a target forwarding rule from multiple forwarding rules included in the flow table. The offload card extracts key information of the first message from the first message and hashes the key information to obtain a hash value of the first message. The offload card forwards the first message according to the forwarding action in the target forwarding rule that corresponds to the hash value of the first message. The offload card can query the message forwarding flow table in place of the processor of the computing device and use the hash value of the message to determine the corresponding forwarding action from the target forwarding rule so as to forward the message, without the processor's participation, which effectively reduces processor occupation and frees up the processor's computing power.

Description

Message forwarding method, apparatus, computing device, and offload card
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 7, 2022, with application number 202210802430.X and entitled "A data processing method", the entire contents of which are incorporated herein by reference; this application also claims priority to the Chinese patent application filed with the Chinese Patent Office on August 18, 2022, with application number 202210993314.0 and entitled "Message forwarding method, apparatus, computing device, and offload card", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of communication technology, and in particular to a message forwarding method, apparatus, computing device, and offload card.
Background
To achieve fast, stable, and reliable data transmission within a network, when messages carrying data are forwarded in the network and the number of messages is large, the multiple messages need to be distributed to multiple nodes (a node may be a gateway in the network) to implement message forwarding. This distribution method spreads message forwarding across different nodes, avoiding the situation where one node has to forward many messages in a short time.
The processor in a device undertakes the operation of distributing the multiple messages to the multiple nodes, for example via an open virtual switch (OVS) deployed on the processor. However, having the processor undertake such distribution operations occupies considerable processor resources.
Summary
The present application provides a message forwarding method, apparatus, computing device, and offload card, which are used to speed up message forwarding and reduce processor occupation.
In a first aspect, an embodiment of this application provides a message forwarding method. The method may be executed by an offload card in a computing device; the offload card can forward some messages in place of the processor of the computing device. The forwarding process of a first message is described here as an example. After obtaining the first message, the offload card queries a message forwarding flow table according to the first message and determines a target forwarding rule from multiple forwarding rules included in the flow table. The offload card may also extract key information of the first message from the first message and hash the key information to obtain a hash value of the first message. The offload card then forwards the first message according to the forwarding action in the target forwarding rule that corresponds to the hash value of the first message.
Through the above method, the offload card can query the message forwarding flow table and use the hash value of the message to determine the corresponding forwarding action from the target forwarding rule so as to forward the message, without the processor's participation. This effectively reduces processor occupation and frees up the processor's computing power.
In a possible implementation, the target forwarding rule includes matching information and action information. The matching information indicates the fields of a message that need to be matched, and the action information is the action to be performed when the message is consistent with the matching information. The action information may directly indicate an action; in this case, the offload card may skip computing the hash value of the first message and directly perform the action indicated by the action information to complete the forwarding. The action information may also be a composite action: it includes multiple groups of forwarding actions and requires further matching to select one group from them. For example, the action information includes multiple buckets, each bucket includes at least one forwarding action, and each bucket corresponds to a different hash value. When forwarding the first message, the offload card determines a target bucket from the multiple buckets of the target forwarding rule according to the hash value of the first message, and then forwards the first message according to the forwarding actions included in the target bucket.
Through the above method, the action information in the target forwarding rule includes multiple buckets, and the offload card can conveniently and quickly use the hash value of a message to select one bucket from the multiple buckets to forward the message.
In a possible implementation, the processor and the offload card are located in the same device and may be connected through a bus; the bus may be a PCIe bus, a CXL or USB protocol bus, or another type of bus. Before hashing the key information in the first message, the offload card may obtain the target forwarding rule from the processor and update the target forwarding rule into the message forwarding flow table.
Through the above method, after the offload card updates the target forwarding rule into the message forwarding flow table, when it receives the first message or another message belonging to the same message flow as the first message, it can find the target forwarding rule in the updated message forwarding flow table and forward the message without processor involvement, reducing the load on the processor.
In a possible implementation, take the case where a second message and the first message belong to the same message flow and the second message is the first message of the flow. The offload card receives the second message before receiving the first message. The offload card may query the message forwarding flow table according to the second message; if no matching forwarding rule is found (because the target forwarding rule has not yet been updated into the flow table at this point), the offload card may send the second message to the processor. After obtaining the second message, the processor may generate the target forwarding rule according to the second message and send it to the offload card. The processor may also use the target forwarding rule to forward the second message.
Through the above method, the offload card can submit the second message, for which no matching forwarding rule was found, to the processor so that the processor can process it. The offload card can also obtain the target forwarding rule from the processor, so that after subsequently receiving messages with the same header as the second message or belonging to the same message flow, it can forward them using the target forwarding rule.
In a possible implementation, the offload card may hash the key information in the second message to obtain a hash value of the second message, and send the hash value of the second message to the processor.
The processor can obtain the hash value of the second message from the offload card. If the action information in the target forwarding rule is a composite action, the processor may determine, according to the hash value of the second message, the forwarding action in the target forwarding rule corresponding to that hash value, and forward the second message according to that forwarding action.
Through the above method, the offload card can provide the processor with the hash value of the second message, sparing the processor from computing the hash value itself, reducing processor occupation and further freeing up the processor's computing power.
In a possible implementation, the key information of a message is described by taking the key information of the first message as an example. The embodiments of this application do not limit the specific content included in the key information of the first message. For example, the key information of the first message includes some or all of the following: destination address, source address, protocol supported by the message, source port, and destination port. These items are only examples; the key information of the first message may also include other types of information. When hashing the key information of the first message, the offload card may parse the message header of the first message, extract the key information of the first message from the header, and hash the key information.
Through the above method, the offload card has message parsing capability and certain computing capability, ensuring that it can extract the key information from the first message and perform hash computation to obtain the hash value of the first message.
In a second aspect, an embodiment of this application further provides a message forwarding apparatus. The apparatus has the function of implementing the behavior of the offload card in the method example of the first aspect; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function. In a possible design, the structure of the message forwarding apparatus includes a determining module and a forwarding module. These modules may perform the corresponding functions in the method example of the first aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In a third aspect, an embodiment of this application further provides an offload card. The offload card has the function of implementing the behavior in the method example of the first aspect; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The structure of the apparatus includes a processor and, optionally, a memory. The processor is configured to support the offload card in performing the corresponding functions of the method of the first aspect. The memory is coupled to the processor and stores the computer program instructions and data (such as the message forwarding flow table) necessary for the communication apparatus. The structure of the offload card further includes an interface for communicating with the processor or other devices, for example to receive the target forwarding rule, send the second message, and send the hash value of the second message.
In a fourth aspect, an embodiment of this application further provides a computing device. The computing device includes an offload card and a processor. The offload card can forward the first message using the target forwarding rule obtained from the processor, and can also send a second message that matches no forwarding rule to the processor for processing. The offload card has the function of implementing the behavior of the offload card in the method example of the first aspect; for beneficial effects, refer to the description of the first aspect, which is not repeated here.
In a fifth aspect, this application further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method described in the first aspect and each possible implementation thereof.
In a sixth aspect, this application further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method described in the first aspect and each possible implementation thereof.
In a seventh aspect, this application further provides a computer chip. The chip is connected to a memory and is used to read and execute a software program stored in the memory to perform the method executed by the offload card as described in the first aspect and each possible implementation thereof.
On the basis of the implementations provided in the above aspects, this application may further combine them to provide more implementations.
Brief description of the drawings
Figure 1 is a schematic diagram of a processor forwarding messages;
Figure 2 is a schematic structural diagram of a computing device provided by this application;
Figure 3 is a schematic diagram of a message forwarding method provided by this application;
Figure 4 is a schematic structural diagram of a message forwarding apparatus provided by this application;
Figure 5 is a schematic structural diagram of a computing device provided by this application.
Detailed description
With the spread of virtualization technology, message transmission is no longer limited to physical devices. A large volume of messages is also transmitted between computing instances (such as virtual machines or containers). With virtualization, multiple computing instances can be deployed on one computing device, and messages may be transmitted between computing instances on the same computing device or between computing instances on different computing devices.
To implement message transmission between computing instances on the same computing device or on different computing devices, a virtual switch can run on the processor of the computing device. A virtual switch is a "switch" in software form; a common virtual switch is the open virtual switch (OVS).
The functions of a virtual switch are described below using OVS as an example. OVS can implement both layer-2 forwarding based on media access control (MAC) addresses and layer-3 forwarding. OVS also adopts a software defined network (SDN) mechanism: an SDN controller can deliver message forwarding rules to the OVS through the openflow protocol. The set of forwarding rules on the OVS forms a message forwarding flow table.
Any forwarding rule can be divided into two parts: matching (match) information and action (action) information. The matching information indicates the fields of a message that need to be matched, such as the port number of the message, the source IP address, the source MAC, the destination internet protocol (IP) address, and the destination MAC of the message. The action information indicates the action the OVS needs to perform when the fields indicated by the matching information match exactly; for example, the action information may indicate the output port of the message, or indicate that the destination MAC of the message should be updated.
After receiving a message, the OVS first parses the message header and extracts the key information of the message. The key information may be the source MAC, destination MAC, layer-2 protocol type, source IP address, destination IP address, layer-3 protocol type, transmission control protocol/internet protocol (TCP/IP) port number, and so on. These are only examples of what the key information may include; the embodiments of this application do not limit the specific information included in the key information.
The OVS then uses the extracted key information to look up a matching forwarding rule in the message forwarding flow table delivered by the SDN controller. After finding a matching forwarding rule, the OVS performs the action indicated by the action information in that rule.
The way OVS implements message forwarding is described below with reference to Figure 1. As shown in Figure 1, three virtual machines, VM1, VM2, and VM3, are deployed on a server. A network card is installed on the server, and an OVS runs on the server's processor. The virtual functions (VFs) on the network card are formed based on single root I/O virtualization (SR-IOV); each VF acts as a "virtual" network card passed through to one virtual machine and transmits messages for that virtual machine.
VM1 is connected through the network card to virtual port (port) 1 of the OVS; that is, messages sent by VM1 flow through the network card into port1 of the OVS, and messages destined for VM1 flow out of port1 of the OVS and through the network card to VM1. VM2 is connected through the network card to port2 of the OVS. VM3 is connected through the network card to port3 of the OVS.
The two forwarding rules recorded in the message forwarding flow table received by the OVS are as follows:
Forwarding rule 1:
Matching information:
type=match_in_port, port=port1. This part indicates that the input port of the message needs to be matched; the input port is port1.
type=MATCH_IP, dst_ip=192.168.1.3. This part indicates that the destination IP address (dst_ip) of the message needs to be matched; the destination IP address is 192.168.1.3.
Action information: type=output, port=port3. This part indicates the output port, which is port3.
Forwarding rule 2:
Matching information:
type=MATCH_IP, dst_ip=192.168.2.3, mask=255.255.255.0. This part indicates that the destination IP address of the message needs to be matched; the destination IP address is 192.168.2.3 and the corresponding mask is 255.255.255.0.
Action information: type=output, port=port4. This part indicates the output port, which is port4.
After receiving a message, the OVS can select a forwarding rule that matches the message from the message forwarding flow table, and then forward the message according to the matching rule.
For example, message 1 and message 2 sent by VM1 enter the OVS via port1. The source and destination IP addresses of message 1 and message 2 are as follows:
Message 1: source IP = 192.168.1.1, destination IP = 192.168.1.3.
Message 2: source IP = 192.168.1.1, destination IP = 192.168.2.3.
For message 1, the OVS extracts the key information of message 1, such as the source IP address and destination IP address, and uses the extracted key information to look up a matching forwarding rule. Message 1 hits forwarding rule 1; the OVS performs the action indicated by the action information in forwarding rule 1 and forwards message 1 through port3. Message 1 flows out of port3 and is transmitted through the network card to VM3.
For message 2, the OVS extracts the key information of message 2, such as the source IP address and destination IP address, and uses the extracted key information to look up a matching forwarding rule. Message 2 hits forwarding rule 2; the OVS performs the action indicated by the action information in forwarding rule 2 and forwards message 2 through port4. Message 2 flows out of port4 and is transmitted through the network card and a physical port to a gateway outside the server.
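The rule matching just illustrated can be sketched as a first-match lookup over the two rules. The dict encoding and field names are assumptions made for this sketch, not OVS flow syntax.

```python
import ipaddress

# The two forwarding rules above, modelled as dicts.
RULES = [
    {"in_port": "port1", "dst_ip": "192.168.1.3/32", "action": ("output", "port3")},
    {"dst_ip": "192.168.2.3/24", "action": ("output", "port4")},
]

def match(packet: dict):
    """Return the action of the first rule whose fields all match, else None."""
    for rule in RULES:
        if "in_port" in rule and rule["in_port"] != packet["in_port"]:
            continue  # input port does not match this rule
        net = ipaddress.ip_network(rule["dst_ip"], strict=False)
        if ipaddress.ip_address(packet["dst_ip"]) in net:
            return rule["action"]
    return None  # flow-table miss
```

Message 1 hits rule 1 (exact destination IP), message 2 falls through to rule 2 (masked destination IP).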
In addition, to achieve faster, more stable, and more reliable network communication, when messages are forwarded over the network and the number of messages sent by a server is large, the server needs to distribute the messages to multiple nodes (such as gateways) for forwarding, so as to spread the load. Before forwarding, software on the server (such as OVS) needs to determine, according to a load balancing algorithm, the node to which each message is forwarded. Taking OVS as an example, the load balancing algorithm is implemented through a group table action.
A group table action can be understood as the action information in a forwarding rule and can be regarded as a composite action. One group table action includes multiple buckets, and each bucket can contain one or more forwarding actions. Each bucket corresponds to one hash value. The forwarding actions in different buckets direct messages to be relayed through different nodes; that is, different buckets correspond to different nodes.
After receiving a message and finding the forwarding rule matching it, if the action information in that rule is a group table action, the OVS can hash some or all of the key information in the message to obtain a hash value, select from the group table action the bucket corresponding to that hash value, and perform the forwarding actions included in that bucket to complete the forwarding of the message. In this way, the OVS performs the forwarding actions of different buckets for different types of messages, thereby achieving load balancing.
A forwarding rule containing a group table action is listed below:
Matching information: type=MATCH_IP, dst_ip=192.168.2.0, mask=255.255.255.0. This part indicates that the destination IP address of the message needs to be matched; the destination IP address is 192.168.2.0.
Action information: type=group table. This indicates that the action group table needs to be looked up.
Bucket 1:
Forwarding action 1: type=set_dst_mac, mac=00:02:03:04:05:01. This part indicates that the destination MAC is updated to 00:02:03:04:05:01.
Forwarding action 2: type=output, port=port4. This part indicates that the output port is port4.
Bucket 2:
Forwarding action 1: type=set_dst_mac, mac=00:02:03:04:05:02. This part indicates that the destination MAC is updated to 00:02:03:04:05:02.
Forwarding action 2: type=output, port=port4.
Bucket 3:
Forwarding action 1: type=set_dst_mac, mac=00:02:03:04:05:03. This part indicates that the destination MAC is updated to 00:02:03:04:05:03.
Forwarding action 2: type=output, port=port4.
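The group-table rule listed above (one match, three buckets that differ only in the rewritten destination MAC) could be represented as below. The dict layout and the modulo bucket selection are assumptions for illustration, not a definitive implementation.

```python
GROUP_TABLE_RULE = {
    "match": {"dst_ip": "192.168.2.0/24"},
    "action": {
        "type": "group_table",
        "buckets": [
            [{"type": "set_dst_mac", "mac": "00:02:03:04:05:01"},
             {"type": "output", "port": "port4"}],
            [{"type": "set_dst_mac", "mac": "00:02:03:04:05:02"},
             {"type": "output", "port": "port4"}],
            [{"type": "set_dst_mac", "mac": "00:02:03:04:05:03"},
             {"type": "output", "port": "port4"}],
        ],
    },
}

def select_bucket(rule: dict, hash_value: int) -> list:
    # One simple hash-to-bucket mapping: hash modulo the number of buckets.
    buckets = rule["action"]["buckets"]
    return buckets[hash_value % len(buckets)]
```

Each bucket rewrites the destination MAC to a different next-hop node, so flows with different hash values are spread across the three nodes.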
As can be seen from the above, the forwarding of a message involves parsing the message, looking up a forwarding rule, and executing forwarding actions. These steps usually need to be executed by the processor, so message forwarding inevitably occupies processor resources. This is especially true for messages that require load balancing: such messages are numerous and occupy even more processor resources, leading to considerable resource consumption.
To this end, an embodiment of this application provides a message forwarding method in which an offload card in the computing device forwards messages in place of the processor. The offload card can parse a message of the computing device, extract its key information, find the matching target forwarding rule in the message forwarding flow table according to the message, determine a forwarding action from the matching target forwarding rule according to the hash value of the message, and perform that forwarding action on the message. In the embodiments of this application, the processor no longer needs to participate in message forwarding; the offload card forwards messages instead, reducing the load on the processor and freeing up more processor resources.
Figure 2 is a schematic structural diagram of a computing device provided by an embodiment of this application. The computing device 10 includes a processor 100 and an offload card 200. The offload card 200 may be connected to the processor 100 through a bus. This application does not limit the position of the offload card 200 in the computing device 10; for example, the offload card 200 may be inserted directly into a card slot on the motherboard or backplane of the computing device 10 and exchange data with the processor 100 through the bus.
The embodiments of this application do not limit the specific form of the offload card 200; it may be installed on the computing device 10 in the form of a smart network card, or deployed on the computing device 10 in other forms.
The offload card 200 stores a message forwarding flow table that includes one or more forwarding rules. The offload card 200 undertakes most of the message forwarding operations of the computing device 10; a small portion of the forwarding operations may be performed by the processor 100.
Here, "most messages" refers to messages for which the offload card 200 can find a forwarding rule matching the message (e.g., the key information of the message), while "a small portion of messages" refers to messages for which the offload card 200 cannot find a matching forwarding rule.
Messages to be sent by the computing device 10 may first arrive at the offload card 200. After receiving a message to be sent, the offload card 200 extracts its key information and uses it to look up a forwarding rule in the message forwarding flow table.
If the offload card 200 finds a target forwarding rule matching the key information of the message, it can forward the message according to the target forwarding rule. If the action information of the target forwarding rule includes an action group table, the offload card 200 can obtain the hash value of the message and forward the message according to the forwarding action in the target forwarding rule corresponding to that hash value. For example, the offload card 200 determines a bucket from the action group table according to the hash value of the message and performs the actions included in that bucket to forward the message. If the action information of the rule does not include an action group table but directly indicates an action, the offload card 200 performs that action to forward the message.
If the offload card 200 does not find a forwarding rule matching the key information of the message, it can hand the message over to the processor 100. The offload card 200 can also compute the hash value of the message and send it to the processor 100 as well. The offload card 200 can further obtain from the processor 100 a target forwarding rule matching the key information of the message and update it into the message forwarding flow table, so that subsequently received messages with the same key information can be forwarded.
A software forwarding unit is deployed on the processor 100. The processor 100 invokes the software forwarding unit to undertake the forwarding of the small portion of messages. When the processor 100 receives from the offload card 200 a message that the offload card 200 cannot handle, the processor 100 generates a target forwarding rule for the message and forwards the message according to that rule. The processor 100 can also deliver the target forwarding rule to the offload card 200.
If the action information of the target forwarding rule includes an action group table, the processor 100 obtains the hash value of the message and forwards the message according to the forwarding action in the target forwarding rule corresponding to that hash value. For example, the processor 100 determines a bucket from the action group table according to the hash value of the message and performs the actions included in that bucket to forward the message. If the action information of the target forwarding rule does not include an action group table but directly indicates an action, the processor 100 performs that action to forward the message.
A transmission interface may be defined between the processor 100 and the offload card 200 based on the data plane development kit (DPDK). Messages transmitted by the offload card 200, as well as forwarding rules delivered downward by the processor 100, can be transmitted through this interface.
In addition, the offload card 200 can maintain a data cache queue for buffering the data to be sent to the processor 100, including a message and its hash value. When the offload card 200 needs to send a message and its hash value to the processor 100, it can place them in the data cache queue. The processor 100 can extract the message and its hash value from the data cache queue through the transmission interface.
As can be seen from the above description, the offload card 200 performs most message forwarding operations in place of the processor 100. For the messages that the offload card 200 cannot handle, it can obtain the relevant forwarding rule (such as the target forwarding rule) from the processor 100, so that it can handle subsequently received messages with the same key information. This approach is suitable when the computing device 10 needs to send a message flow consisting of multiple messages: the messages in the flow have the same header, i.e., the same key information, but carry different data. After receiving the first message of the flow, if the offload card 200 cannot handle it, it can hand it to the processor 100 for processing and forwarding, and obtain the relevant forwarding rule from the processor 100, so that it can use that rule to process the subsequent messages of the flow.
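This division of labour (first packet of a flow takes the slow path through the processor, subsequent packets hit the installed rule on the card) can be sketched as a toy model. All class and method names here are illustrative assumptions, not the embodiment's interfaces.

```python
class Processor:
    """Slow path: the software forwarding unit on the host processor."""
    def handle_miss(self, key):
        # Hypothetical rule generation; a real implementation would build the
        # rule from a preset forwarding rule generation policy.
        return {"action": ("output", "port4")}

class OffloadCard:
    """Fast path: forwards on flow-table hits; hands misses to the processor."""
    def __init__(self, processor):
        self.flow_table = {}          # key information -> forwarding rule
        self.processor = processor

    def receive(self, key):
        rule = self.flow_table.get(key)
        if rule is None:
            # First packet of the flow: the processor generates the rule,
            # which is then installed into the card's flow table.
            rule = self.processor.handle_miss(key)
            self.flow_table[key] = rule
            return ("slow_path", rule)
        # Subsequent packets: forwarded on the card, no processor involved.
        return ("fast_path", rule)
```

Only the first packet of each flow reaches the processor; every later packet with the same key information is handled entirely on the card.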
The message forwarding method provided by the embodiments of this application is described below, taking the forwarding of messages in a message flow as an example. Figure 3 shows a message forwarding method provided by an embodiment of this application. The method has two parts: one is the forwarding process of the first message of the flow, see steps 301 to 307; the other is the forwarding process of subsequent messages of the flow, see steps 308 to 311. It should be noted that, for other messages, the offload card 200 may forward them in a manner similar to steps 301 to 307 or in a manner similar to steps 308 to 311.
Step 301: The offload card 200 obtains the first message of the message flow.
The computing device 10 may send multiple messages to other computing devices, and these messages form a message flow. A computing instance deployed on the computing device 10 may also send multiple messages to other computing devices or other computing instances, and these messages form a message flow. Inside the computing device 10, the messages of the flow arrive at the offload card 200 in sequence, and the offload card 200 first obtains the first message of the flow.
Step 302: The offload card 200 parses the first message, extracts its key information, and looks up a matching forwarding rule in the message forwarding flow table according to the key information.
When performing step 302, the offload card 200 mainly parses the header of the first message and extracts its key information. The content of the key information is related to the matching information of the forwarding rules in the message forwarding flow table; that is, the offload card 200 may extract, as the key information, only the information related to the matching information of the forwarding rules in the flow table.
After extracting the key information of the first message, the offload card 200 looks up a matching forwarding rule according to the key information, that is, finds a rule whose matching information indicates fields consistent with the key information.
Assume here that the offload card 200 does not find a forwarding rule matching the key information in the message forwarding flow table. The offload card 200 may then perform step 303.
Step 303: Having found no matching forwarding rule in the message forwarding flow table, the offload card 200 sends the first message to the processor 100. The offload card 200 may also use the extracted key information to obtain the hash value of the first message and send the hash value to the processor 100 together with the first message.
The offload card 200 writes the first message into the data cache queue; if it has also computed the hash value of the first message, it may write the hash value into the data cache queue as well.
The embodiments of this application do not limit how the offload card 200 obtains the hash value of the first message; the offload card 200 may hash some or all of the key information to obtain the hash value of the first message.
Step 304: The processor 100 obtains the first message and generates a target forwarding rule according to it.
The processor 100 can extract the first message from the data cache queue through the transmission interface with the offload card 200; if the queue also contains the hash value of the first message, the processor 100 can extract that hash value from the queue as well.
The processor 100 can parse the header of the first message, extract its key information, and generate the target forwarding rule according to the key information.
A forwarding rule generation policy may be preset on the processor 100 side. The policy describes how the action information and matching information of a forwarding rule are set when the rule is generated. For example, the policy may indicate the message fields to be matched, the specific values of those fields, and the forwarding actions to be performed. The embodiments of this application do not limit how the policy is preset; for example, it may be set based on the openflow protocol.
The processor 100 generates the target forwarding rule according to the preset forwarding rule generation policy and the first message.
The target forwarding rule generated by the processor 100 for the message may take two forms: one includes an action group table, i.e., a target forwarding rule that supports load balancing of messages; the other does not include an action group table.
If the target forwarding rule generated by the processor 100 does not include an action group table, the processor 100 can directly perform, for the message, the action indicated by the action information in the rule and forward the first message.
If the target forwarding rule generated by the processor 100 includes an action group table, the processor 100 performs step 305.
Step 305: The processor 100 determines, according to the hash value of the first message, the target bucket corresponding to that hash value from the target forwarding rule.
Having extracted the hash value of the first message from the data cache queue, the processor 100 determines the target bucket corresponding to the extracted hash value from the target forwarding rule.
The embodiments of this application do not limit how the processor 100 uses the hash value to determine the corresponding target bucket from the target forwarding rule. The correspondence between hash values and buckets may be formed based on preset rules; the preset rules differ between scenarios, and so does the correspondence. For example, the rule may be that the remainder of the hash value is the identifier of the corresponding bucket; in this case, the processor 100 may take the remainder of the hash value of the first message to determine the corresponding target bucket. As another example, the rule may be that the last digit of the hash value is the identifier of the corresponding bucket; in this case, the processor 100 may determine the corresponding target bucket according to the last digit of the hash value of the first message.
Step 306: The processor 100 forwards the first message according to the forwarding actions included in the target bucket.
According to the foregoing description of the action group table, each bucket includes one or more forwarding actions, and the processor 100 performs the one or more forwarding actions to complete the forwarding of the first message.
For example, the target bucket includes two forwarding actions: one indicates modifying the destination MAC of the message and indicates the modified destination MAC; the other indicates the output port of the message. The processor 100 can perform these two forwarding actions, modifying the destination MAC of the first message and transmitting the first message with the modified destination MAC through the indicated output port.
Step 307: The processor 100 sends the target forwarding rule to the offload card 200; the offload card 200 receives the target forwarding rule and updates it into the message forwarding flow table.
The processor 100 can deliver the target forwarding rule to the offload card 200 through the transmission interface; after receiving the target forwarding rule, the offload card 200 updates the locally stored message forwarding flow table.
At this point, with the cooperation of the offload card 200 and the processor 100, the forwarding of the first message of the flow is complete, and the offload card 200 can process the subsequent messages of the flow. For details, see step 308.
Step 308: The offload card 200 obtains a subsequent message of the message flow.
Step 309: The offload card 200 parses the subsequent message, extracts its key information, and looks up the matching target forwarding rule in the message forwarding flow table according to the key information. The way the offload card 200 extracts the key information of the subsequent message is similar to the way it extracts the key information of the first message; for details, refer to the foregoing content, which is not repeated here.
The target forwarding rule matching the key information is found in the message forwarding flow table; this is the target forwarding rule delivered by the processor 100 to the offload card 200 in step 307.
If the target forwarding rule does not include an action group table, the offload card 200 directly performs, for the subsequent message, the action indicated by the action information in the rule and forwards the subsequent message.
If the target forwarding rule includes an action group table, the offload card 200 performs step 310.
Step 310: The offload card 200 obtains the hash value of the subsequent message and determines, according to that hash value, the target bucket corresponding to it from the target forwarding rule. The way the offload card 200 obtains the hash value of the subsequent message is similar to the way it obtains the hash value of the first message; for details, refer to the foregoing content, which is not repeated here. The way the offload card 200 performs step 310 is similar to the way the processor 100 performs step 305; for details, refer to the foregoing description, which is not repeated here. Since the header of the subsequent message is the same as that of the first message, i.e., the key information is the same, the hash value of the subsequent message is also the same as that of the first message, and thus the target bucket determined by the offload card 200 in step 310 is the same as the target bucket determined by the processor 100 in step 305.
Step 311: The offload card 200 forwards the message according to the forwarding actions included in the target bucket. The way the offload card 200 performs step 311 is similar to the way the processor 100 performs step 306; for details, refer to the foregoing description, which is not repeated here.
In the embodiment shown in Figure 3, for the first message of a message flow, the offload card 200 hands the first message to the processor 100 for processing and can obtain the target forwarding rule from the processor 100. Using the target forwarding rule, the offload card can forward the subsequent messages of the flow without further involvement of the processor 100, reducing the load on the processor 100. In addition, the target forwarding rule may include an action group table, so the offload card 200 can conveniently use the hash value of a message to find the corresponding target bucket and achieve load balancing of messages; moreover, a target forwarding rule that includes an action group table has a more compact representation and does not occupy much storage space on the offload card 200.
The message forwarding method provided by this application has been described above with reference to Figures 1 to 3. Based on the same inventive concept as the method embodiments, an embodiment of this application further provides a message forwarding apparatus. The apparatus is used to perform the method executed by the offload card in the method embodiment shown in Figure 3; for the related features, refer to the above method embodiments, which are not repeated here. As shown in Figure 4, the message forwarding apparatus 400 includes a determining module 401 and a forwarding module 402.
The determining module 401 is configured to determine a target forwarding rule from multiple forwarding rules included in the message forwarding flow table according to a first message.
The forwarding module 402 is configured to hash the key information in the first message to obtain the hash value of the first message, and to forward the first message according to the forwarding action in the target forwarding rule that corresponds to the hash value of the first message.
It should be understood that the message forwarding apparatus 400 of the embodiments of this application may be implemented by a central processing unit (CPU), by an application-specific integrated circuit (ASIC), or by a programmable logic device (PLD). The PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), a data processing unit (DPU), a smart network interface card (Smart NIC, also called an intelligence network interface card, iNIC), an infrastructure processing unit (IPU), or any combination thereof. When the message forwarding methods shown in Figures 1 to 3 are implemented by software, the message forwarding apparatus 400 and its modules may also be software modules.
In a possible implementation, the target forwarding rule includes matching information and action information. The action information includes multiple buckets, each bucket includes at least one forwarding action, and each bucket corresponds to a different hash value. When forwarding the first message, the forwarding module 402 may determine a target bucket from the multiple buckets of the target forwarding rule according to the hash value of the first message, and forward the first message according to the forwarding actions included in the target bucket.
In a possible implementation, the offload card and the processor are located in the same device and are connected through a bus. The determining module 401 obtains the target forwarding rule from the processor and updates the target forwarding rule into the message forwarding flow table.
In a possible implementation, the determining module 401 may also receive a second message that belongs to the same message flow as the first message, the second message being the first message of the flow. When the determining module 401 finds no forwarding rule in the message forwarding flow table according to the second message, it may send the second message to the processor. After obtaining the second message, the processor may generate the target forwarding rule according to it and send the rule to the determining module 401, which can receive the target forwarding rule.
In a possible implementation, the determining module 401 may also hash the key information in the second message to obtain the hash value of the second message, and send that hash value to the processor.
In a possible implementation, the key information of the first message includes some or all of the following: destination address, source address, protocol supported by the message, source port, and destination port. The forwarding module 402 may extract the key information of the first message from the message header of the first message and hash the key information of the first message.
The message forwarding apparatus 400 according to the embodiments of this application may correspond to performing the methods described in the embodiments of this application, and the above and other operations and/or functions of the units in the message forwarding apparatus 400 are respectively intended to implement the corresponding processes of the methods in Figures 1 to 3; for brevity, they are not repeated here.
Figure 5 is a schematic structural diagram of a computing device 10 provided by this application. The computing device 10 includes a bus 300, a processor 100, an offload card 200, and a memory 400. The processor 100, the offload card 200, and the memory 400 communicate through the bus 300. The bus 300 may be a line based on the peripheral component interconnect express (PCIe) standard, or a bus of the compute express link (CXL), unified bus (Ubus or UB), cache coherent interconnect for accelerators (CCIX) protocol, or another protocol. The bus 300 can be divided into an address bus, a data bus, a control bus, and so on.
The processor 100 may be a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an artificial intelligence (AI) chip, a system on chip (SoC), a complex programmable logic device (CPLD), a graphics processing unit (GPU), or the like.
The memory 400 may include volatile memory, such as random access memory (RAM) or dynamic random access memory (DRAM). It may also include non-volatile memory, such as storage class memory (SCM), read-only memory (ROM), a hard disk drive (HDD), or a solid state drive (SSD), or a combination of volatile memory and non-volatile memory.
The memory 400 may also include an operating system and other computer program instructions required by running processes. The operating system may be LINUX™, UNIX™, WINDOWS™, etc. The memory 400 may also store other data, such as data to be carried in messages.
The processor 100 can execute the method performed by the processor 100 in the data forwarding method provided by the embodiments of this application by calling the computer program instructions stored in the memory 400.
In terms of hardware, the offload card 200 includes a processor 210 and a memory 220; optionally, the offload card may also include other components such as a power supply circuit, interfaces, and a bus. The processor 210 and the memory 220 are connected through the bus. The processor 210 can interact with other components of the computing device 10 (such as the processor 100) through the interfaces, for example receiving target forwarding rules, sending messages, and sending hash values of messages.
The processor 210 is similar to the processor 100 and may be a CPU, ASIC, FPGA, AI chip, SoC, CPLD, GPU, or the like. The processor 210 in the offload card 200 may be deployed in the computing device 10 as a co-processor of the processor 100 to perform operations in cooperation with the processor 100. The processor 210 may also be a DPU or an IPU.
The offload card 200 may have its own memory 220, which can store the computer program instructions needed by the processor 210 and can also serve as a cache to store data such as the message forwarding flow table, messages, or hash values of messages. In one possible case, the processor 100 and the processor 210 in the offload card 200 may share the memory 400; that is, the memory 400 can take over all or part of the functions of the memory 220. The memory 400 can store all the computer program instructions the processor 210 needs to call, and the memory 220 can fetch from the memory 400 the portion of the computer program instructions the processor 210 needs, for the processor 210 to call.
The processor 210 may execute the method performed by the offload card in the data forwarding method provided by the embodiments of this application by calling the computer program instructions stored in the memory 220 (for example, when the processor 210 is a CPU, AI chip, GPU, or DPU). The processor 210 may also run the computer program instructions burned onto it or the processing logic of its hardware circuits (for example, when the processor 210 is an ASIC, FPGA, SoC, or CPLD) to execute the method performed by the offload card in the data forwarding method provided by the embodiments of this application.
上述实施例,可以全部或部分地通过软件、硬件、固件或其他任意组合来实现。当使用软件实现时,上述实施例可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载或执行所述计算机程序指令时,全部或部分地产生按照本发明本申请实施例所述的流程或功能。所述计算机可以为通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同 轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集合的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质。半导体介质可以是固态硬盘(solid state disk,SSD)。
The foregoing descriptions are merely specific implementations of this application. Any variation or replacement that a person skilled in the art can conceive based on the specific implementations provided in this application shall fall within the protection scope of this application.

Claims (14)

  1. A message forwarding method, wherein the method is performed by an offload card and comprises:
    determining a target forwarding rule from multiple forwarding rules included in a message forwarding flow table according to a first message;
    performing hash processing on key information in the first message to obtain a hash value of the first message; and
    forwarding the first message according to a forwarding action that is in the target forwarding rule and corresponds to the hash value of the first message.
  2. The method according to claim 1, wherein the target forwarding rule includes action information, the action information includes multiple buckets, each bucket includes at least one forwarding action, each bucket corresponds to a different hash value, and the forwarding, by the offload card, the first message according to the forwarding action that is in the target forwarding rule and corresponds to the hash value of the first message comprises:
    determining a target bucket from the multiple buckets of the target forwarding rule according to the hash value of the first message; and
    forwarding the first message according to the forwarding action included in the target bucket.
  3. The method according to claim 1, wherein a processor and the offload card are located in a same device, and before the offload card performs hash processing on the key information in the first message, the method further comprises:
    obtaining the target forwarding rule from the processor; and
    updating the target forwarding rule into the message forwarding flow table.
  4. The method according to claim 3, wherein the obtaining, by the offload card, the target forwarding rule from the processor comprises:
    when no forwarding rule is found in the message forwarding flow table according to a second message, sending the second message to the processor, wherein the second message and the first message belong to a same message flow, the second message is an initial message of the message flow, and the first message is a message following the second message; and
    receiving the target forwarding rule generated by the processor according to the second message.
  5. The method according to claim 4, wherein the method further comprises:
    performing hash processing on key information in the second message to obtain a hash value of the second message; and
    sending the hash value of the second message to the processor, wherein
    the processor forwards the second message according to a forwarding action that is in the target forwarding rule and corresponds to the hash value of the second message.
  6. The method according to any one of claims 1 to 5, wherein the key information of the first message includes some or all of the following: a destination address, a source address, a protocol supported by the message, a source port, and a destination port; and
    the performing, by the offload card, hash processing on the key information of the first message comprises:
    extracting the key information of the first message from a header of the first message, and performing hash processing on the key information of the first message.
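The method claims above describe a first-packet slow path: on a flow-table miss, the offload card sends the initial message and its hash to the host processor, receives the generated target forwarding rule, and installs it so that later messages of the flow are forwarded locally. A minimal Python sketch of that interaction follows; every class, method, and field name is invented for illustration and is not part of the claimed subject matter:

```python
import zlib

class HostProcessor:
    """Stand-in for the host CPU that generates the target forwarding rule."""
    def generate_rule(self, first_msg, msg_hash):
        # Assumption: the rule carries two buckets. The processor also forwards
        # the initial message itself, using the hash the offload card supplied,
        # so the first message takes the same path as the rest of the flow.
        rule = {"buckets": [["node-A"], ["node-B"]]}
        self.forwarded_first = rule["buckets"][msg_hash % len(rule["buckets"])][0]
        return rule

class OffloadCard:
    def __init__(self, processor):
        self.flow_table = {}   # message forwarding flow table
        self.processor = processor

    def receive(self, msg):
        flow = msg["flow"]
        h = zlib.crc32(flow.encode())  # hash of the message's key information
        rule = self.flow_table.get(flow)
        if rule is None:
            # Miss: send the initial message and its hash to the processor,
            # then install the returned rule into the flow table.
            rule = self.processor.generate_rule(msg, h)
            self.flow_table[flow] = rule
            return None  # the processor forwards this first message
        buckets = rule["buckets"]
        return buckets[h % len(buckets)][0]

card = OffloadCard(HostProcessor())
first = card.receive({"flow": "tcp:10.0.0.1->10.0.0.2:80"})   # slow path
later = card.receive({"flow": "tcp:10.0.0.1->10.0.0.2:80"})   # fast path
```

Because both sides hash the same key information, the bucket the processor picks for the initial message matches the bucket the offload card picks for every later message of the flow, so the whole flow lands on one node.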
  7. A message forwarding apparatus, wherein the message forwarding apparatus includes a determining module and a forwarding module;
    the determining module is configured to determine a target forwarding rule from multiple forwarding rules included in a message forwarding flow table according to a first message; and
    the forwarding module is configured to perform hash processing on key information in the first message to obtain a hash value of the first message, and forward the first message according to a forwarding action that is in the target forwarding rule and corresponds to the hash value of the first message.
  8. The apparatus according to claim 7, wherein the target forwarding rule includes match information and action information, the action information includes multiple buckets, each bucket includes at least one forwarding action, and each bucket corresponds to a different hash value; and the forwarding module is configured to:
    determine a target bucket from the multiple buckets of the target forwarding rule according to the hash value of the first message; and
    forward the first message according to the forwarding action included in the target bucket.
  9. The apparatus according to claim 7, wherein the apparatus and a processor are located in a same device, and the determining module is further configured to:
    obtain the target forwarding rule from the processor; and
    update the target forwarding rule into the message forwarding flow table.
  10. The apparatus according to claim 9, wherein the determining module is configured to:
    when no forwarding rule is found in the message forwarding flow table according to a second message, send the second message to the processor, wherein the second message and the first message belong to a same message flow, the second message is an initial message of the message flow, and the first message is a message following the second message; and
    receive the target forwarding rule sent by the processor.
  11. The apparatus according to claim 10, wherein the determining module is further configured to:
    perform hash processing on key information in the second message to obtain a hash value of the second message; and
    send the hash value of the second message to the processor.
  12. The apparatus according to any one of claims 7 to 11, wherein the key information of the first message includes some or all of the following: a destination address, a source address, a protocol supported by the message, a source port, and a destination port; and the forwarding module is configured to:
    extract the key information of the first message from a header of the first message, and perform hash processing on the key information of the first message.
  13. A computing device, wherein the computing device includes an offload card, and the offload card is configured to implement the operation steps of the method according to any one of claims 1 to 6.
  14. An offload card, wherein the offload card includes a data processing unit (DPU) and a power supply circuit, the power supply circuit is configured to supply power to the DPU, and the DPU is configured to perform the operation steps of the method according to any one of claims 1 to 6.
PCT/CN2023/100772 2022-07-07 2023-06-16 Message forwarding method, apparatus, computing device, and offload card WO2024007844A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210802430 2022-07-07
CN202210802430.X 2022-07-07
CN202210993314.0A CN115567446A (zh) 2022-07-07 2022-08-18 Message forwarding method, apparatus, computing device, and offload card
CN202210993314.0 2022-08-18

Publications (1)

Publication Number Publication Date
WO2024007844A1 true WO2024007844A1 (zh) 2024-01-11

Family

ID=84739103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/100772 WO2024007844A1 (zh) 2022-07-07 2023-06-16 Message forwarding method, apparatus, computing device, and offload card

Country Status (2)

Country Link
CN (1) CN115567446A (zh)
WO (1) WO2024007844A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118250235A (zh) * 2024-05-22 2024-06-25 北京华耀科技有限公司 Traffic distribution method, apparatus, device, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115567446A (zh) * 2022-07-07 2023-01-03 华为技术有限公司 Message forwarding method, apparatus, computing device, and offload card
CN116010130B * 2023-01-30 2024-04-19 中科驭数(北京)科技有限公司 Cross-card link aggregation method, apparatus, device, and medium for DPU virtual ports
CN116170404B * 2023-02-17 2023-09-29 通明智云(北京)科技有限公司 DPDK-based data forwarding method and apparatus
CN115941598B * 2023-03-09 2023-05-16 珠海星云智联科技有限公司 Flow table semi-offload method, device, and medium
CN117312329B * 2023-11-29 2024-02-23 苏州元脑智能科技有限公司 Data flow table generation method, apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711577A (zh) * 2020-07-24 2020-09-25 杭州迪普信息技术有限公司 流控设备的报文转发方法及装置
WO2021128089A1 (zh) * 2019-12-25 2021-07-01 华为技术有限公司 转发设备、网卡及报文转发方法
CN113765804A (zh) * 2021-08-05 2021-12-07 中移(杭州)信息技术有限公司 报文转发方法、装置、设备及计算机可读存储介质
CN114363256A (zh) * 2020-09-28 2022-04-15 华为云计算技术有限公司 基于网卡的报文解析方法以及相关装置
CN115567446A (zh) * 2022-07-07 2023-01-03 华为技术有限公司 报文转发方法、装置、计算设备及卸载卡

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104025520B (zh) * 2012-12-25 2017-04-26 华为技术有限公司 查找表的创建方法、查询方法、控制器、转发设备和系统
CN109361609B (zh) * 2018-12-14 2021-04-20 东软集团股份有限公司 防火墙设备的报文转发方法、装置、设备及存储介质
CN114079625A (zh) * 2020-08-17 2022-02-22 华为技术有限公司 数据中心中的通信方法、装置和系统


Also Published As

Publication number Publication date
CN115567446A (zh) 2023-01-03


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 23834618; Country of ref document: EP; Kind code of ref document: A1)