WO2024037366A1 - Forwarding rule delivery method, smart network card and storage medium - Google Patents

Forwarding rule delivery method, smart network card and storage medium

Info

Publication number
WO2024037366A1
Authority
WO
WIPO (PCT)
Prior art keywords
message
forwarding
forwarding rule
hardware acceleration
data
Prior art date
Application number
PCT/CN2023/111395
Other languages
English (en)
French (fr)
Inventor
吕怡龙
Original Assignee
阿里云计算有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里云计算有限公司
Publication of WO2024037366A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 — Routing or path finding of packets in data switching networks
    • H04L 45/74 — Address processing for routing
    • H04L 49/00 — Packet switching elements
    • H04L 49/70 — Virtual switches

Definitions

  • the embodiments of this application relate to the field of cloud computing technology, and specifically relate to a method for issuing forwarding rules, a smart network card, and a storage medium.
  • virtualization functions such as network virtualization can be offloaded to smart network cards.
  • the virtual switch (Vswitch) running on the host can be offloaded to the smart network card to achieve high-performance forwarding of packets.
  • the offloading referred to here can be understood as the process of offloading software functions to hardware so that they can be implemented by the hardware.
  • virtual switches can be used to forward packets of virtual machines.
  • a virtual machine can send and receive packets through a virtual switch, which forwards them based on forwarding rules.
  • One of the key technologies for offloading a virtual switch to a smart network card is to deliver the forwarding rules of the virtual switch to the hardware part of the smart network card, so that the smart network card can implement hardware acceleration of packet forwarding.
  • how to improve the delivery performance and reliability of forwarding rules to smart network cards has therefore become an urgent technical problem that those skilled in the art need to solve.
  • embodiments of the present application provide a forwarding rule delivery method, a smart network card, and a storage medium to improve the delivery performance and reliability of forwarding rules to the smart network card.
  • In a first aspect, embodiments of this application provide a method for issuing forwarding rules.
  • In a second aspect, embodiments of this application provide a method for issuing forwarding rules in which the message carries the forwarding rule of the message.
  • embodiments of the present application further provide a smart network card, including an on-chip processor and a hardware acceleration engine; the on-chip processor is configured to execute the method for issuing forwarding rules described in the first aspect above, and the hardware acceleration engine is configured to execute the method for issuing forwarding rules described in the second aspect above.
  • embodiments of the present application provide a storage medium that stores one or more computer-executable instructions; when the one or more computer-executable instructions are executed, the method for issuing forwarding rules described in the first aspect above is implemented.
  • In the embodiment of the present application, the on-chip processor can generate the forwarding rules of the message and carry the generated forwarding rules in the information structure of the message; the on-chip processor can then generate a message from that information structure, so that the generated message carries the forwarding rules of the message; furthermore, the on-chip processor can use the data channel for data message transmission with the hardware acceleration engine to deliver the generated message, so that the hardware acceleration engine obtains the message carrying the forwarding rules through the data channel, thereby realizing delivery of the message's forwarding rules from the on-chip processor to the hardware acceleration engine.
  • Because the embodiment of the present application carries the forwarding rules in the message when delivering the message to the hardware acceleration engine through the data channel, the forwarding rules are delivered through the data channel. The delivery performance of the data channel between the on-chip processor and the hardware acceleration engine at the data packet level is far greater than the performance of delivering configuration information and commands over the configuration channel, so the embodiments of this application can greatly improve the performance of forwarding rule delivery. At the same time, because the forwarding rules are delivered together with the message, the delivery of the forwarding rules is a synchronous event, which avoids the reliability issues caused by asynchronously delivered forwarding rules; the delivery method can therefore meet the needs of business scenarios such as network short links, thereby improving the reliability of forwarding rule delivery.
  • In summary, the embodiment of the present application delivers forwarding rules through the data channel between the on-chip processor and the hardware acceleration engine, and carries the forwarding rules in the information structure of the message so that the forwarding rules are delivered to the hardware acceleration engine synchronously with the message; the embodiment of the present application can therefore improve the delivery performance and reliability of forwarding rules.
  • Figure 1 is an example diagram of a system structure with a virtual switch.
  • Figure 2A is an example structural diagram of a smart network card.
  • Figure 2B is another structural example diagram of a smart network card.
  • Figure 3 shows an example of issuing forwarding rules in a network short link scenario.
  • Figure 4 is a flow chart of a forwarding rule issuing method provided by an embodiment of the present application.
  • Figure 5 is another flowchart of a forwarding rule issuing method provided by an embodiment of the present application.
  • Figure 6 is an example of the structure of a flow table.
  • Figure 7 is an example diagram of the information structure of the message.
  • Figure 8 is an example diagram of the process of issuing forwarding rules provided by the embodiment of this application.
  • Figure 1 schematically shows an example system structure diagram with a virtual switch.
  • the system can include multiple virtual machines 101 to 10n (n is the number of virtual machines, which can be determined according to the actual situation) and virtual switch 110.
  • the message sending and receiving of the multiple virtual machines 101 to 10n can be realized through the virtual switch 110; for example, the virtual switch can forward the messages of the virtual machines to their destination ports based on the forwarding rules, and can forward messages received by the host to the virtual machines based on the forwarding rules.
  • Virtual switch 110 originally runs on the host.
  • Running on the host's processor (such as its CPU), the packet forwarding performance of the virtual switch can no longer meet the performance requirements.
  • the virtual switch 110 of the host can be offloaded to the smart network card 120 to achieve high-performance forwarding of packets.
  • smart network cards can also improve application performance and significantly reduce processor (such as CPU) consumption in communication through a built-in programmable and configurable hardware acceleration engine.
  • the smart network card can be understood as a network card that uses hardware to complete the packet forwarding operation of the virtual switch and has an on-chip processor (such as an on-chip CPU).
  • FIG. 2A schematically shows an example structural diagram of a smart network card. As shown in FIG. 2A , the smart network card may include an on-chip processor 210 and a hardware acceleration engine 220 .
  • the on-chip processor 210 can be understood as a part of the processor (eg, CPU) and operating system of the smart network card.
  • the on-chip processor 210 can be responsible for the network management, configuration and other functions originally running on the host; in addition, the on-chip processor 210 can also be responsible for processing business that does not require high performance.
  • the process of the virtual switch can run on the on-chip processor of the smart network card, and the packet forwarding and configuration management of the virtual switch can be implemented through the processor cores of the on-chip processor; further, as shown in Figure 2A, the processor cores of the on-chip processor 210 may include at least one data core 211 and at least one control core 212.
  • the data core 211 is responsible for packet forwarding of the virtual switch
  • the control core 212 is responsible for the configuration management of the virtual switch.
  • the hardware acceleration engine 220 is a hardware component of the smart network card.
  • the hardware acceleration engine 220 has higher performance than the on-chip processor 210, but is less flexible and can be used to process high-performance services.
  • the hardware acceleration engine 220 may include hardware processing devices such as an NP (Network Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
  • the forwarding rule table 221 can be used to record the forwarding rules of packets, thereby implementing accelerated forwarding at the hardware level and improving the forwarding performance of messages.
  • a message forwarding method based on the forwarding rule table 221 may be: when the smart network card obtains a message, the hardware acceleration engine 220 in the smart network card can query whether the forwarding rule table 221 records a forwarding rule that matches the message; if so, it forwards the message according to the recorded forwarding rule; if not, it passes the message to the on-chip processor 210 of the smart network card.
  • After obtaining a message passed up by the hardware acceleration engine 220 (corresponding to the situation where the forwarding rule table 221 does not record a forwarding rule for the message), the on-chip processor 210 can query multiple pieces of forwarding configuration information of the virtual switch to generate the forwarding rule of the message.
  • the on-chip processor 210 can deliver the generated forwarding rules to the hardware acceleration engine 220, and the hardware acceleration engine 220 records the forwarding rules in the forwarding rule table 221, thereby delivering the forwarding rules of the virtual switch to the hardware part of the smart network card.
  • Subsequently, the hardware acceleration engine 220 can forward a packet according to the forwarding rule hit in the forwarding rule table, without passing the message to the on-chip processor 210, thereby accelerating the forwarding of the message through hardware and improving the forwarding performance of the message.
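The fast-path/slow-path split described above can be sketched as follows; modeling the hardware forwarding rule table 221 as a Python dict keyed by five-tuple, and the `to_cpu` callback, are illustrative assumptions only, not the patent's actual hardware interface.

```python
# Sketch of the lookup-then-fallback flow (illustrative only: the real
# forwarding rule table 221 is a hardware structure, not a dict).

forwarding_rule_table = {}  # five-tuple -> forwarding action

def handle_packet(five_tuple, payload, to_cpu):
    """Forward in hardware on a rule hit; otherwise pass to the on-chip processor."""
    rule = forwarding_rule_table.get(five_tuple)
    if rule is not None:
        return ("forwarded", rule["port"])   # hardware-accelerated path
    # Miss: hand the packet up to the on-chip processor (slow path).
    return to_cpu(five_tuple, payload)

def on_chip_processor(five_tuple, payload):
    # Here the on-chip processor would generate a rule from the virtual
    # switch's forwarding configuration and deliver it back to hardware.
    return ("to_cpu", five_tuple)

result = handle_packet(("1.1.1.1", 1111, "192.168.0.1", 80, "tcp"),
                       b"...", on_chip_processor)
```

Once the slow path has recorded a rule for the flow, subsequent calls for the same five-tuple take the hardware path without involving the on-chip processor.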
  • the on-chip processor needs to generate the forwarding rules and deliver them to the hardware acceleration engine, so that the hardware acceleration engine records the forwarding rules of the packet (for example, records the forwarding rules of the packet in the forwarding rule table).
  • the above process involves the process of the on-chip processor delivering the forwarding rules to the hardware acceleration engine; that is, the process of delivering the forwarding rules of the virtual switch to the hardware part of the smart network card as referred to in the embodiment of this application.
  • FIG. 2B illustrates another structural example diagram of a smart network card. As shown in FIG. 2A and FIG. 2B , there are a configuration channel 231 and a data channel 232 between the on-chip processor 210 and the hardware acceleration engine 220 .
  • the configuration channel 231 is used for configuration management between the on-chip processor 210 and the hardware acceleration engine 220; for example, the software running on the on-chip processor 210 can issue various configuration commands and information to the hardware of the hardware acceleration engine 220 through the configuration channel 231.
  • the data channel 232 is used for data and message transmission between the on-chip processor 210 and the hardware acceleration engine 220.
  • In a typical solution, forwarding rules, as a kind of configuration information, are delivered to the hardware acceleration engine 220 by the on-chip processor 210 through the configuration channel 231, so that the forwarding rules of the virtual switch are delivered to the hardware part of the smart network card.
  • the control core or data core of the virtual switch can generate forwarding rules, and then deliver the forwarding rules to the hardware acceleration engine of the smart network card through the configuration channel.
  • However, the transmission rate of the configuration channel is low, resulting in low performance in issuing forwarding rules; for example, the transmission rate of the configuration channel is generally on the order of 1 megabit per second, which is not suitable for network short-link business scenarios with a high CPS (connections per second).
  • In a network short-link business scenario, a configuration channel with a low transmission rate cannot meet the delivery requirements of forwarding rules and may cause a large number of forwarding rules to fail to be delivered; for example, such a scenario requires that forwarding rules be recorded at the hardware level of the smart network card within a short period of time, but the low transmission rate of the configuration channel cannot meet this demand, which may result in a large number of forwarding rules failing to be delivered.
  • Furthermore, the configuration channel delivers forwarding rules asynchronously: the forwarding rules are first stored in a memory space shared with the hardware acceleration engine, and the hardware acceleration engine then obtains the forwarding rules one by one from the shared memory space and records them in the forwarding rule table. In a network short-link business scenario, this asynchronous delivery may mean that by the time a forwarding rule is actually delivered to the hardware acceleration engine (for example, actually recorded in hardware), the network short link has already terminated. This not only prevents the hardware acceleration engine from accelerating packet forwarding, but may also waste the overhead spent issuing the forwarding rules.
  • Figure 3 illustrates an example diagram of issuing forwarding rules in a network short link scenario.
  • A connection is established between the client and the server through a three-way handshake: the client sends a SYN (Synchronize Sequence Numbers) request to the server, and the server replies to the client with a SYN-ACK.
  • At this point, the software of the on-chip processor of the smart network card begins to issue the forwarding rules. Due to the low delivery performance of the configuration channel and the delay caused by asynchronous delivery, the forwarding rules may not actually be issued until after the client and server complete the three-way handshake. By the time the forwarding rules are successfully delivered (for example, actually recorded in the forwarding rule table), the client and server may already have interacted k times and the client may have sent interaction completion information to the server; that is, the link between the client and the server has entered the completed state or is about to complete. As a result, the issued forwarding rules cannot be used for this network short link; that is, the hardware acceleration engine of the smart network card cannot accelerate the forwarding of packets during this network short link.
  • When the client completes its interaction with the server, the client can send interaction completion information to the server, and the server replies with an interaction completion confirmation; when the server completes its interaction with the client, the server sends interaction completion information to the client, and the client replies with an interaction completion confirmation.
  • embodiments of the present application provide an improved forwarding rule delivery solution to improve forwarding rule delivery performance and reliability.
  • Specifically, embodiments of the present application consider using the data channel between the on-chip processor and the hardware acceleration engine to deliver forwarding rules, and carrying the forwarding rules in the information structure of the message, so that the forwarding rules can be synchronously delivered together with the message from the software level of the smart network card (that is, the level of the on-chip processor) to the hardware level of the smart network card (that is, the level of the hardware acceleration engine), thereby improving the performance and reliability of forwarding rule delivery.
  • Figure 4 exemplarily shows an optional flow chart of the method for issuing forwarding rules provided by the embodiment of the present application.
  • the method process can be implemented by the smart network card, for example, executed by the on-chip processor and hardware acceleration engine of the smart network card; referring to Figure 4, the method flow may include the following steps.
  • step S410 the on-chip processor generates forwarding rules for the message.
  • When the smart network card obtains a message, it can first query whether there is a forwarding rule matching the message in the hardware acceleration engine of the smart network card; if there is no matching forwarding rule in the hardware acceleration engine, the hardware acceleration engine can pass the message to the on-chip processor of the smart network card; the on-chip processor can then generate a forwarding rule matching the message according to the forwarding configuration information of the virtual switch, so that the generated forwarding rule can be delivered to the hardware acceleration engine, and the hardware acceleration engine can record the forwarding rule and forward the packet.
  • In some embodiments, a forwarding rule table can be stored in the hardware acceleration engine, recording the forwarding rules of messages; after obtaining a message, the hardware acceleration engine can query whether the forwarding rule table records a forwarding rule matching the message, thereby querying whether a matching forwarding rule exists in the hardware acceleration engine. If no matching forwarding rule is recorded in the forwarding rule table, the hardware acceleration engine can pass the message to the on-chip processor of the smart network card; the on-chip processor can then generate a forwarding rule matching the message based on the forwarding configuration information of the virtual switch, and deliver the generated forwarding rule to the hardware acceleration engine, so that the hardware acceleration engine can record the forwarding rule and forward the message.
  • Forwarding rules are rules used to describe packet forwarding behavior. They include packet matching information, such as the five-tuple (source IP, destination IP, source port, destination port, and protocol number) and any other information that can distinguish the data flow of the message, as well as forwarding actions and other information. Forwarding actions include NAT (Network Address Translation), tunnel encapsulation and/or decapsulation, the port used to forward the packet, and so on.
  • The forwarding configuration information of the virtual switch is the information used by the on-chip processor when generating forwarding rules, such as ACL (Access Control List) information, QoS (Quality of Service) information, routing information, and so on.
  • In some embodiments, the on-chip processor can query each forwarding table for the message and take the set of query results from the forwarding tables as the forwarding rule matching the message; that is to say, the forwarding rule of the message can be regarded as the set of the message's query results across the forwarding tables.
  • An example ACL table and an example routing table are given here (table contents not reproduced).
  • The set of query results can then be used as the packet's forwarding rule; for example, the forwarding rule may be: 1.1.1.1:1111->192.168.0.1:80, tcp, accept, port1.
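Combining per-table query results into one rule can be sketched as follows; the ACL and routing table contents, and the rule's field names, are hypothetical illustrations built around the example rule above.

```python
# Illustrative sketch: a forwarding rule as the set of query results
# across forwarding tables (table contents here are hypothetical).

acl_table = {
    ("1.1.1.1", 1111, "192.168.0.1", 80, "tcp"): "accept",
}
routing_table = {
    "192.168.0.1": "port1",  # destination IP -> output port
}

def generate_forwarding_rule(five_tuple):
    """Combine per-table query results into one forwarding rule for the flow."""
    dst_ip = five_tuple[2]
    acl_action = acl_table.get(five_tuple, "drop")
    out_port = routing_table.get(dst_ip)
    # The rule is the set of query results: match info plus forwarding actions.
    return {"match": five_tuple, "acl": acl_action, "port": out_port}

rule = generate_forwarding_rule(("1.1.1.1", 1111, "192.168.0.1", 80, "tcp"))
# Corresponds to: 1.1.1.1:1111->192.168.0.1:80, tcp, accept, port1
```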
  • step S411 the on-chip processor carries the forwarding rule in the information structure of the message.
  • When the hardware acceleration engine determines that no forwarding rule is recorded for a packet, it needs to pass the packet to the on-chip processor (for example, through the data channel), and the on-chip processor then generates the forwarding rule of the packet. After generating the forwarding rule, the packet passed up by the hardware acceleration engine also needs to be passed back to the hardware acceleration engine (for example, through the data channel), so that the hardware acceleration engine can use the forwarding rule issued by the on-chip processor to perform hardware-accelerated forwarding of the message. When passing the message back, the on-chip processor needs to carry the data information of the message (for example, the data content of the message) in the information structure of the message, so that the message passed by the on-chip processor to the hardware acceleration engine is generated based on the information structure.
  • In the embodiment of the present application, the on-chip processor does not send the forwarding rules of the message to the hardware acceleration engine through the configuration channel, but instead uses the data channel. To allow the forwarding rules to be sent through the data channel, the forwarding rules generated by the on-chip processor are carried in the messages transmitted on the data channel, so that the forwarding rules are delivered to the hardware acceleration engine together with the messages.
  • Based on this, embodiments of the present application can use the information structure of the message to carry the forwarding rule of the message generated by the on-chip processor; the on-chip processor can then generate a message based on the information structure carrying the forwarding rule, so that the generated message carries the forwarding rule of the message.
  • In some embodiments, the forwarding rule may be carried in the header space (headroom) field of the information structure of the message. The header space field may be located between the header field and the data field of the information structure. The space size of the header space field can be set; for example, the embodiment of the present application can set the space size of the header space field to correspond to a preset length of the forwarding rule. A forwarding rule of the preset length can thus be carried in front of the data field of the information structure, so that the forwarding rule is carried in the header space field of the information structure.
  • step S412 the on-chip processor generates a message carrying the forwarding rule according to the information structure carrying the forwarding rule.
  • step S413 the on-chip processor uses the data channel to send the generated message to the hardware acceleration engine.
  • After the on-chip processor carries the forwarding rules of the message in the information structure of the message, it can generate a message based on that information structure, so that the generated message carries the forwarding rules. The on-chip processor can then use the data channel to deliver the generated message to the hardware acceleration engine, so that the hardware acceleration engine obtains the forwarding rules carried in the message at the same time as it obtains the message.
  • In some embodiments, when sending the message to the hardware acceleration engine through the data channel, the on-chip processor may simultaneously pass preset flag information to indicate that the message sent to the hardware acceleration engine carries forwarding rules. The preset flag information can be flag information that can be directly transferred between the software and hardware of the smart network card; that is, the flag information can be transferred directly between the on-chip processor and the hardware acceleration engine. The preset flag information can be a bit, a number, or flag information in any other form; it tells the hardware acceleration engine that the currently transmitted message carries a forwarding rule, and it can be passed from the on-chip processor to the hardware acceleration engine through the data channel.
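The carrying scheme can be sketched as follows; the 64-byte preset rule length, the 4-byte header, and the concrete flag value are all illustrative assumptions, not values from this application.

```python
# Sketch: carry a fixed-length forwarding rule in the headroom between the
# header field and the data field, and signal it with preset flag information.

RULE_LEN = 64        # assumed preset length of the serialized forwarding rule
FLAG_HAS_RULE = 0x1  # assumed flag meaning "this message carries a rule"

def build_message(header: bytes, rule: bytes, data: bytes):
    """Pad the rule to the preset length and place it in front of the data field."""
    assert len(rule) <= RULE_LEN
    headroom = rule.ljust(RULE_LEN, b"\x00")  # fixed-size headroom field
    message = header + headroom + data
    # The flag travels alongside the message over the data channel.
    return FLAG_HAS_RULE, message

flag, msg = build_message(b"HDR!", b"accept,port1", b"payload")
```

Because the headroom length is fixed and agreed in advance, the receiving side needs no extra length field to find where the rule ends and the data begins.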
  • step S414 the hardware acceleration engine parses the information structure of the message to determine the forwarding rule carried in the message.
  • step S415 the hardware acceleration engine records the forwarding rules of the message.
  • After the hardware acceleration engine obtains the message sent by the on-chip processor through the data channel, it can parse the message and determine the forwarding rules from it. In some embodiments, the hardware acceleration engine may parse the header space (headroom) field of the information structure of the message to determine the forwarding rule carried in the header space field.
  • As described above, the space size of the header space field can be set to correspond to the preset length of the forwarding rule. The hardware acceleration engine can therefore, based on the preset length, separate the forwarding rule from the front of the data field of the message's information structure to determine the forwarding rule carried in the header space field. Since the header space field is located between the header field and the data field of the information structure, immediately in front of the data field, splitting off the preset-length information in front of the data field yields the forwarding rule carried in the message.
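The separation step can be sketched as follows; the 64-byte preset rule length, 4-byte header, and flag value are illustrative assumptions only.

```python
# Sketch: split the preset-length rule off the front of the data field,
# guarded by the preset flag information.

RULE_LEN = 64   # assumed preset length, agreed between software and hardware
HEADER_LEN = 4  # assumed header field length for this sketch

def parse_message(flag: int, message: bytes):
    """Return (rule, data); rule is None when the flag is not set."""
    if not (flag & 0x1):  # no flag: the message carries no forwarding rule
        return None, message[HEADER_LEN:]
    headroom = message[HEADER_LEN:HEADER_LEN + RULE_LEN]
    data = message[HEADER_LEN + RULE_LEN:]
    rule = headroom.rstrip(b"\x00")  # drop padding added on the sending side
    return rule, data

rule, data = parse_message(
    0x1, b"HDR!" + b"accept,port1".ljust(64, b"\x00") + b"payload")
```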
  • In further embodiments, the hardware acceleration engine may determine that a message carries a forwarding rule based on the preset flag information passed by the on-chip processor, and only then perform the step of parsing the message to determine the forwarding rule carried in it. Conversely, if the on-chip processor does not pass the preset flag information when sending a message, the hardware acceleration engine may determine that the message does not carry a forwarding rule and may skip the step of determining the forwarding rule carried in the message.
  • After determining the forwarding rule of the message, the hardware acceleration engine may record it. Based on the recorded forwarding rules, the hardware acceleration engine can also forward packets: when it subsequently obtains a message whose forwarding rule has already been recorded, the message can be forwarded directly based on the recorded forwarding rule.
  • In some embodiments, the hardware acceleration engine can maintain a forwarding rule table and record the forwarding rules determined from messages in that table.
  • The forwarding rule table can record the correspondence between the message identifier of a message and the forwarding rule of the message; in one example, the message identifier is, for example, five-tuple information, which can also be regarded as part of the forwarding rule. Based on the message identifier of the message sent by the on-chip processor and the forwarding rule carried in the message, the hardware acceleration engine can record the correspondence between the message identifier and the forwarding rule in the forwarding rule table.
  • the message identifier of the message can be determined from the information structure of the message.
  • In some embodiments, the message identifier of the message may be indicated by the data flow identifier of the data flow corresponding to the message, and the forwarding rule of the message may be the forwarding rule of that data flow. Correspondingly, the forwarding rule table can be a flow table, used to record the forwarding rules of each data flow; the hardware acceleration engine can record the forwarding rule of the data flow corresponding to the message in the flow table (for example, record the correspondence between the flow identifier of the data flow and the forwarding rule), thereby recording the forwarding rule of the message in the forwarding rule table.
  • note that the preset flag information is different from the packet identification (the match) in the forwarding rule table.
  • a flow table entry is divided into a match part and an action part.
  • the match is information such as the message's five-tuple and is used to distinguish the data flow the message belongs to; the action specifies how that data flow should be forwarded.
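The match/action split described above can be sketched in software as follows (a minimal illustration only; the field names and table layout are assumptions, not the patent's actual hardware layout):

```python
# Minimal sketch of a flow-table entry split into "match" (the five-tuple that
# distinguishes the data flow) and "action" (how that flow is forwarded).
from collections import namedtuple

Match = namedtuple("Match", "src_ip dst_ip src_port dst_port proto")
Action = namedtuple("Action", "verdict out_port")

flow_table = {}

def install_entry(match, action):
    # keyed on the five-tuple, so later packets of the same flow hit the entry
    flow_table[match] = action

def lookup(match):
    # returns the recorded action, or None on a flow-table miss
    return flow_table.get(match)

m = Match("1.1.1.1", "192.168.0.1", 1111, 80, "tcp")
install_entry(m, Action("accept", "port1"))
```

A packet whose five-tuple equals `m` then hits the entry, while any other five-tuple misses and would be handed to the slow path.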
  • the on-chip processor can generate the forwarding rule for a message and carry it in the message's information structure; from that information structure, it can then generate a message that carries the forwarding rule; finally, the on-chip processor can deliver the generated message over the data channel used for data-message transmission with the hardware acceleration engine, so that the hardware acceleration engine obtains the rule-carrying message through the data channel, thereby delivering the message's forwarding rule from the on-chip processor to the hardware acceleration engine.
  • when the embodiments of this application deliver a message to the hardware acceleration engine over the data channel, the forwarding rule is carried in the message at the same time, so the rule is delivered through the data channel.
  • the delivery performance of rules over the data channel equals the data-message transfer performance between the on-chip processor and the hardware acceleration engine, which is far greater than the performance of delivering configuration information and commands over the configuration channel; the embodiments of this application can therefore greatly improve the performance of forwarding rule delivery. Moreover, because the forwarding rule is delivered together with the message, rule delivery is a synchronous event, which avoids the reliability problems of asynchronous rule delivery.
  • this enables the delivery method to meet the needs of business scenarios such as short-lived network connections, improving delivery reliability. In summary, the embodiments of this application deliver forwarding rules over the data channel between the on-chip processor and the hardware acceleration engine, carrying the rules in the message's information structure so that they reach the hardware acceleration engine synchronously with the message; both the delivery performance and the reliability of rule delivery are thereby improved.
  • Figure 5 shows another optional flow chart of the forwarding rule delivery method provided by an embodiment of this application.
  • the method can be carried out by an on-chip processor and a hardware acceleration engine. Referring to Figure 5, the method may include the following steps.
  • step S510: the hardware acceleration engine obtains a packet of the data flow.
  • after the smart NIC obtains packets of a data flow sent by a virtual machine, or packets of a data flow destined for a virtual machine, it can hardware-accelerate their forwarding through its hardware acceleration engine, which thereby obtains the packets of the data flow.
  • the packets of the data flow obtained by the hardware acceleration engine can be regarded as packets of a data flow awaiting forwarding (for example, awaiting hardware-accelerated forwarding).
  • step S511: the hardware acceleration engine queries the flow table for a flow entry matching the packet's data flow; if one exists, step S512 is executed, otherwise step S513 is executed.
  • step S512: the hardware acceleration engine forwards the packet based on the flow entry matching the packet's data flow.
  • step S513: the hardware acceleration engine passes the packet of the data flow to the on-chip processor through the data channel.
  • after obtaining a packet of the data flow, the hardware acceleration engine can query whether the flow table records a flow entry matching the packet's data flow (a flow entry in the flow table can indicate the forwarding rule of a data flow). If such an entry is recorded, the hardware acceleration engine can hardware-accelerate the forwarding of the packet based on it, improving the flow's forwarding performance. If no matching entry is recorded, the hardware acceleration engine needs to pass the packet to the on-chip processor through the data channel, so that the on-chip processor generates a flow entry matching the packet's data flow.
  • a data flow may be formed from a group of messages of the same type (for example, data packets).
  • the type of a message can be determined by the value of its match fields, i.e., the matching fields in the message.
  • the flow table may record multiple flow entries, and one flow entry may be used to indicate the forwarding rule of a data flow.
  • a flow table entry can record the data flow identifier of a data flow and the forwarding rules corresponding to the data flow.
  • Figure 6 schematically shows an example structure of a flow table. As shown in Figure 6, the flow table records multiple flow entries 601 to 60m (m is the number of flow entries, which may be determined according to the actual situation).
  • One flow table entry can record the data flow identifier of a data flow and the forwarding rules corresponding to the data flow.
  • the hardware acceleration engine can query, according to the data flow identifier of the packet's data flow, whether the flow table records a flow entry matching that identifier; if so, the hardware acceleration engine has recorded a forwarding rule matching the packet's data flow; if not, it has not recorded such a rule.
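The hit/miss dispatch of steps S511 to S513 can be sketched as follows (a software illustration only; in the patent this dispatch is performed by the hardware acceleration engine, and the return values here are illustrative assumptions):

```python
def handle_packet(flow_table, flow_id, pkt):
    """Sketch of the engine's dispatch: a flow-table hit is forwarded on the
    fast path (S512); a miss is handed to the on-chip processor (S513)."""
    entry = flow_table.get(flow_id)
    if entry is not None:
        return ("forward", entry)            # S512: forward by the recorded rule
    return ("to_on_chip_processor", pkt)     # S513: slow path over the data channel

table = {"flow-A": "accept, port1"}
```

The first packet of an unknown flow thus takes the slow path exactly once; every later packet of a recorded flow is handled without involving the on-chip processor.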
  • step S514 the on-chip processor generates a flow entry matching the data flow of the message according to the forwarding configuration information of the virtual switch.
  • step S515 the on-chip processor carries the generated flow entry in the header space field of the information structure of the message.
  • the on-chip processor can query various forwarding configuration information of the virtual switch (such as its ACL information, QoS information, and routing information) and, based on this configuration information, generate a flow entry matching the packet's data flow (the entry can be used to indicate a forwarding rule matching the packet's data flow).
  • after generating a flow entry matching the packet's data flow, the on-chip processor can carry the entry in the header space field of the packet's information structure, so that the entry is carried in the packet.
  • the on-chip processor runs the data core of the virtual switch, which can generate forwarding rules and carry them in the header space field of the message's information structure.
  • the data core can generate flow entries matching the packet's data flow and carry them in the header space field of the packet's information structure.
  • FIG. 7 schematically shows an example diagram of the information structure of a message.
  • the information structure of the message includes a data (data) field used to carry message data.
  • in front of the data field is the head room field, whose size can be set; the embodiments of this application can therefore set the size of the header space field according to the preset length of the forwarding rule (such as the preset length of a flow entry), and then carry a forwarding rule of the preset length in front of the data field, so that the rule is carried in the header space field preceding the data field.
  • the header space field is preceded by a private field, and the private field is preceded by the structure fields of the information structure; the private field and structure fields can be regarded as the header fields of the message's information structure.
  • the information structure of the message is illustrated here using the mbuf (memory buffer) structure as an example.
  • the data field in the mbuf structure can store the data information of the message.
  • the mbuf structure can be applied to scenarios such as DPDK (Data Plane Development Kit).
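The head-room mechanism above can be sketched as a byte-level round trip: the sender places a rule of a fixed, preset length immediately in front of the data field, and the receiver splits it back off. `RULE_LEN` and the zero-padding are assumptions for illustration; a real mbuf keeps the rule in its head room in place rather than in a copied buffer:

```python
RULE_LEN = 16  # assumed preset length of a serialized flow entry

def carry_rule_in_headroom(rule: bytes, data: bytes) -> bytes:
    # pad the rule to the preset length and place it just before the data
    # field, as the head-room field of the information structure would hold it
    assert len(rule) <= RULE_LEN
    return rule.ljust(RULE_LEN, b"\x00") + data

def split_rule_from_headroom(buf: bytes):
    # the receiver separates the preset-length rule from the front of the data
    return buf[:RULE_LEN], buf[RULE_LEN:]
```

Because the rule length is fixed in advance, the receiver needs no extra framing: it always strips exactly `RULE_LEN` bytes from the front of the data field.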
  • step S516: the on-chip processor generates a message carrying the flow entry according to the information structure carrying the flow entry.
  • step S517: the on-chip processor delivers the generated message to the hardware acceleration engine through the data channel, and passes preset flag information to the hardware acceleration engine to indicate that the delivered message carries the flow entry.
  • after carrying the flow entry in the message's information structure, the on-chip processor can generate a message carrying the entry and send it to the hardware acceleration engine through the data channel, so that the hardware acceleration engine obtains the entry-carrying message through the data channel.
  • while delivering the message, the on-chip processor can also pass preset flag information to the hardware acceleration engine; the preset flag information indicates that the delivered message carries the flow entry.
  • step S518: based on the preset flag information, the hardware acceleration engine determines that the message delivered by the on-chip processor through the data channel carries the flow entry, and determines the flow entry from the header space field of the message's information structure.
  • after obtaining the message delivered by the on-chip processor through the data channel, together with the preset flag information passed by the on-chip processor, the hardware acceleration engine can determine from the flag that the delivered message carries a flow entry; the hardware acceleration engine can then determine the flow entry (i.e., the flow entry matching the packet's data flow) from the header space field of the message's information structure. In some embodiments, the hardware acceleration engine can separate information of the preset length from the front of the data field of the message's information structure according to the preset length of the flow entry, thereby obtaining the flow entry carried in the message.
  • step S519: the hardware acceleration engine records the flow entry in the flow table, and forwards the message delivered by the on-chip processor based on the flow entry.
  • after determining, from the message delivered by the on-chip processor, the flow entry matching the packet's data flow, the hardware acceleration engine can record the entry in the flow table (for example, insert it into the flow table) and forward the delivered message based on the forwarding rule of the data flow the entry indicates.
  • when the hardware acceleration engine subsequently obtains packets of the same data flow, it can look up the flow entry matching the flow's identifier and, based on the queried entry, accelerate the forwarding of the flow's packets at the hardware level, improving the flow's forwarding performance.
  • the hardware acceleration engine can directly forward a packet based on the flow entry the packet hits; only when the packet misses the flow entries in the hardware acceleration engine does the engine pass the packet to the on-chip processor, which generates the flow entry for the packet.
  • the flow table entries generated by the on-chip processor can be carried in the packet and sent to the hardware acceleration engine through the data channel.
  • if the hardware acceleration engine fails to insert a flow entry into the flow table for some abnormal reason (for example, a hash conflict between the entry to be inserted and an entry already recorded in the flow table causes the insertion to fail), then, because subsequent packets of the flow cannot match any recorded entry, those packets continue to be passed to the on-chip processor, which generates the matching flow entry and delivers it using the solution provided by this embodiment.
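The insertion-failure case above can be sketched with a single-slot hash table (the one-entry-per-bucket scheme is an assumption for illustration; real engines typically use multi-way buckets):

```python
def try_insert(buckets, n_buckets, flow_id, action):
    """Insertion fails when the bucket is already held by a different flow
    (the 'hash conflict' case); such flows simply stay on the slow path."""
    i = hash(flow_id) % n_buckets
    if i in buckets and buckets[i][0] != flow_id:
        return False  # conflict: leave the existing entry untouched
    buckets[i] = (flow_id, action)
    return True

b = {}
```

Note that a failed insertion is not fatal: the colliding flow's packets keep missing the table and keep being handed to the on-chip processor, which retries delivery with each returned packet.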
  • Figure 8 illustrates an example of the forwarding rule delivery process provided by an embodiment of this application. As shown in Figure 8, taking data-flow messages in the form of data packets as an example, the process can be as follows:
  • when the smart NIC obtains a data packet sent by a virtual machine or destined for a virtual machine, it hands the packet to the hardware acceleration engine for processing; the hardware acceleration engine determines that no flow entry matching the packet's data flow is found in the flow table;
  • the hardware acceleration engine passes the data packet to the on-chip processor through the data channel;
  • the on-chip processor (such as its data core) can generate a flow entry matching the packet's data flow based on the virtual switch's forwarding configuration information, and carry the entry in the data packet to be returned to the smart NIC's hardware;
  • the data packet can carry the flow entry, the packet's data information, and so on;
  • the on-chip processor sends the data packet carrying the flow table entry to the hardware acceleration engine through the data channel (further, the on-chip processor can transmit the preset flag information at the same time);
  • the hardware acceleration engine can determine the flow table items carried in the data packets sent by the on-chip processor, and record the flow table items in the flow table;
  • the hardware acceleration engine forwards data packets based on flow table entries.
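The steps above can be simulated end to end in a few lines (a software sketch of the miss, slow path, rule-in-packet, insert, forward cycle; all names and return values are illustrative assumptions):

```python
flow_table = {}

def slow_path(flow_id, data):
    # on-chip processor: generate the flow entry and carry it with the packet,
    # as the head-room field of the information structure would
    entry = f"rule-for-{flow_id}"
    return (entry, data)

def engine_receive(flow_id, pkt):
    # hardware acceleration engine: fast path on a hit, otherwise take the
    # slow path, record the carried entry, and forward the returned packet
    if flow_id in flow_table:
        return ("fast-forward", flow_table[flow_id])
    entry, data = slow_path(flow_id, pkt)
    flow_table[flow_id] = entry
    return ("forward-after-insert", entry)

first = engine_receive("flow-A", b"pkt1")
second = engine_receive("flow-A", b"pkt2")
```

Because the entry travels inside the same packet exchange, recording the rule and forwarding the packet happen in one synchronous round trip, which is the property the embodiment relies on for short-lived connections.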
  • the forwarding rule delivery solution provided by the embodiments of this application carries the forwarding rule in the message, so that when the on-chip processor delivers the message to the hardware acceleration engine through the data channel, the rule in the message is delivered synchronously over the data channel.
  • the embodiments of this application thus use the data channel to deliver forwarding rules, improving delivery performance; and because the rule reaches the hardware acceleration engine synchronously with the message, delivery is a synchronous event, which avoids the reliability problems of asynchronous rule delivery and improves the reliability of rule delivery.
  • the forwarding rule delivery solution provided by the embodiments of this application can therefore improve both the performance and the reliability of rule delivery; it can be effectively applied to business scenarios such as short-lived network connections and increases the application value of cloud computing and virtualization technologies.
  • the smart network card provided by the embodiment of the present application is introduced below.
  • the functions of the smart network card described below can be cross-referenced with the description above.
  • the smart network card provided by the embodiment of the present application may include an on-chip processor and a hardware acceleration engine.
  • the functions of the on-chip processor described below can be implemented in software; the functions of the hardware acceleration engine can be implemented in hardware, or, based on the engine's programmability, implemented by programming software functions onto the hardware acceleration engine.
  • the on-chip processor may be configured to: generate a forwarding rule for the message; carry the rule in the message's information structure; generate, according to the information structure carrying the rule, a message carrying the forwarding rule; and deliver the generated message to the hardware acceleration engine over the data channel.
  • the hardware acceleration engine may be configured to: obtain, over the data channel, the message delivered by the on-chip processor, the message carrying its forwarding rule; parse the message's information structure to determine the forwarding rule carried in the message; and record the message's forwarding rule.
  • the on-chip processor being configured to carry the forwarding rule in the information structure of the message includes:
  • carrying the forwarding rule in the header space field of the information structure, where the header space field is located between the header fields and the data field of the information structure.
  • the space size of the header space field is set to correspond to the preset length of the forwarding rule.
  • the on-chip processor being configured to carry the forwarding rule in the header space field of the information structure includes:
  • carrying a forwarding rule of the preset length in front of the data field, where the header space field is located in front of the data field.
  • the on-chip processor can also be configured to pass preset flag information to the hardware acceleration engine; the preset flag information indicates that the delivered message carries a forwarding rule.
  • the on-chip processor may also be configured to: when the message's forwarding rule is not recorded in the hardware acceleration engine, obtain the message passed by the hardware acceleration engine through the data channel, so that the on-chip processor proceeds to the step of generating the message's forwarding rule;
  • the on-chip processor being configured to generate the message's forwarding rule includes: generating a forwarding rule matching the message according to the forwarding configuration information of the virtual switch.
  • the hardware acceleration engine being configured to parse the message's information structure to determine the forwarding rule carried in the message includes:
  • parsing the header space field of the information structure to determine the forwarding rule carried in that field, where the header space field is located between the header fields and the data field of the information structure.
  • the space size of the header space field is set to correspond to the preset length of the forwarding rule; the hardware acceleration engine being configured to parse the header space field of the information structure to determine the forwarding rule carried in that field includes:
  • separating the forwarding rule of the preset length from the front of the data field of the information structure, where the header space field is located in front of the data field.
  • when obtaining the delivered message over the data channel, the hardware acceleration engine can also be configured to obtain the passed preset flag information, which indicates that the delivered message carries a forwarding rule.
  • the hardware acceleration engine can also be configured to: when the message's forwarding rule is not recorded, pass the message to the on-chip processor through the data channel, so that the on-chip processor generates a message carrying the forwarding rule.
  • the packet is a packet of a data flow; the forwarding rule of the packet is a flow table entry that matches the data flow of the packet; the flow table entry is recorded in the flow table, and There are multiple flow entries recorded in the flow table, and one flow entry is used to indicate the forwarding rule of a data flow.
  • the hardware acceleration engine being configured to record the message's forwarding rule includes: recording the forwarding rule determined from the message in the forwarding rule table.
  • Embodiments of this application also provide a storage medium storing one or more computer-executable instructions.
  • when the one or more computer-executable instructions are executed, they implement the forwarding rule delivery method performed by the on-chip processor as provided in the embodiments of this application, or the forwarding rule delivery method performed by the hardware acceleration engine as provided in the embodiments of this application.
  • Embodiments of this application also provide a computer program that, when executed, implements the forwarding rule delivery method performed by the on-chip processor as provided in the embodiments of this application, or the forwarding rule delivery method performed by the hardware acceleration engine as provided in the embodiments of this application.


Abstract

Embodiments of this application provide a forwarding rule delivery method, a smart network card, and a storage medium. The method includes: generating a forwarding rule for a message; carrying the forwarding rule in the message's information structure; generating, according to the information structure carrying the forwarding rule, a message carrying the forwarding rule; and delivering the generated message over a data channel. The embodiments of this application deliver forwarding rules over the data channel between the on-chip processor and the hardware acceleration engine, and carry the rules in the message's information structure so that they are delivered to the hardware acceleration engine synchronously with the message; the embodiments of this application thereby improve both the performance and the reliability of forwarding rule delivery.

Description

Forwarding rule delivery method, smart network card, and storage medium
This application claims priority to Chinese patent application No. 202210981665.X, filed with the China Patent Office on August 15, 2022 and entitled "Forwarding rule delivery method, smart network card, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of cloud computing technology, and in particular to a forwarding rule delivery method, a smart network card, and a storage medium.
Background
With the development of cloud computing and virtualization technology, in order to cope with ever-increasing network bandwidth and to support virtualization functions at low cost, virtualization functions such as network virtualization can be offloaded to smart network cards. For example, the virtual switch (Virtual switch, Vswitch) running on the host can be offloaded to the smart NIC to achieve high-performance message forwarding. Offloading here refers to moving a software function into hardware so that it is performed by the hardware.
In virtualization technology, the virtual switch is responsible for forwarding the messages of virtual machines. For example, the sending and receiving of a virtual machine's messages can be carried out by the virtual switch based on forwarding rules. One key technique in offloading the virtual switch to a smart NIC is delivering the virtual switch's forwarding rules to the hardware part of the NIC, so that the NIC can hardware-accelerate message forwarding. Against this background, how to improve the performance and reliability of delivering forwarding rules to the smart NIC has become a technical problem that those skilled in the art urgently need to solve.
Summary
In view of this, embodiments of this application provide a forwarding rule delivery method, a smart network card, and a storage medium, to improve the performance and reliability of delivering forwarding rules to a smart NIC.
To achieve the above purpose, the embodiments of this application provide the following technical solutions.
In a first aspect, an embodiment of this application provides a forwarding rule delivery method, including:
generating a forwarding rule for a message;
carrying the forwarding rule in the information structure of the message;
generating, according to the information structure carrying the forwarding rule, a message carrying the forwarding rule;
delivering the generated message over a data channel.
In a second aspect, an embodiment of this application provides a forwarding rule delivery method, including:
obtaining a delivered message over a data channel, the message carrying the message's forwarding rule;
parsing the information structure of the message to determine the forwarding rule carried in the message, where the forwarding rule is carried in the information structure of the message;
recording the forwarding rule of the message.
In a third aspect, an embodiment of this application provides a smart network card, including an on-chip processor and a hardware acceleration engine; the on-chip processor is configured to perform the forwarding rule delivery method of the first aspect, and the hardware acceleration engine is configured to perform the forwarding rule delivery method of the second aspect.
In a fourth aspect, an embodiment of this application provides a storage medium storing one or more computer-executable instructions; when executed, the instructions implement the forwarding rule delivery method of the first aspect or the forwarding rule delivery method of the second aspect.
In the forwarding rule delivery method provided by the embodiments of this application, the on-chip processor can generate a forwarding rule for a message and carry the generated rule in the message's information structure; from the rule-carrying information structure, the on-chip processor can generate a message so that the generated message carries its forwarding rule; the on-chip processor can then deliver the generated message over the data channel used for data-message transmission with the hardware acceleration engine, so that the hardware acceleration engine obtains the rule-carrying message through the data channel, thereby delivering the message's forwarding rule from the on-chip processor to the hardware acceleration engine.
When delivering a message to the hardware acceleration engine over the data channel, the embodiments of this application can simultaneously carry the forwarding rule in the message, so that the rule is delivered through the data channel. The delivery performance of rules over the data channel equals the data-message transfer performance between the on-chip processor and the hardware acceleration engine, which is far greater than the performance of delivering configuration information and commands over the configuration channel; the embodiments of this application therefore greatly improve the performance of forwarding rule delivery. Moreover, because the forwarding rule is delivered together with the message, rule delivery is a synchronous event, which avoids the reliability problems of asynchronous rule delivery and enables the delivery method to meet the needs of business scenarios such as short-lived network connections, improving delivery reliability. In summary, the embodiments of this application deliver forwarding rules over the data channel between the on-chip processor and the hardware acceleration engine, and carry the rules in the message's information structure so that they are delivered to the hardware acceleration engine synchronously with the message; the embodiments of this application thereby improve both the performance and the reliability of forwarding rule delivery.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the following briefly introduces the drawings needed in the description of the embodiments or the prior art. The drawings described below are merely embodiments of this application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is an example diagram of a system structure with a virtual switch.
Figure 2A is an example structural diagram of a smart network card.
Figure 2B is another example structural diagram of a smart network card.
Figure 3 is an example diagram of forwarding rule delivery in a short-lived network connection scenario.
Figure 4 is a flow chart of the forwarding rule delivery method provided by an embodiment of this application.
Figure 5 is another flow chart of the forwarding rule delivery method provided by an embodiment of this application.
Figure 6 is an example structural diagram of a flow table.
Figure 7 is an example diagram of the information structure of a message.
Figure 8 is an example diagram of the forwarding rule delivery process provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. The described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
Figure 1 shows an example system structure with a virtual switch. As shown in Figure 1, the system may include multiple virtual machines 101 to 10n (n is the number of virtual machines, which may be determined according to the actual situation) and virtual switch 110. Message transmission and reception for virtual machines 101 to 10n can be carried out through virtual switch 110; for example, the virtual switch can forward a virtual machine's messages to the destination port based on forwarding rules, and can forward messages received by the host to a virtual machine based on forwarding rules.
Virtual switch 110 originally runs on the host. For example, the host's processor (e.g., CPU) can virtualize multiple virtual machines 101 to 10n through virtualization technology and implement virtual switch 110 in software; virtual switch 110 is then responsible for message transmission and reception for virtual machines 101 to 10n.
With the continuous development of cloud computing and network technology, users' requirements for network performance keep rising, and the message forwarding performance of a virtual switch running on the host can no longer meet those requirements; to achieve high-performance message forwarding, virtual switch 110 running on the host can be offloaded to smart NIC 120.
Besides the network transmission functions of a standard NIC, a smart NIC can improve application performance and sharply reduce the processor (e.g., CPU) overhead of communication through a built-in programmable, configurable hardware acceleration engine. When the virtual switch is offloaded to a smart NIC, the smart NIC can be understood as a NIC that performs the virtual switch's message forwarding in hardware and has an on-chip processor (e.g., an on-chip CPU). Figure 2A shows an example structure of a smart NIC; as shown in Figure 2A, the smart NIC may include on-chip processor 210 and hardware acceleration engine 220.
On-chip processor 210 can be understood as the NIC's processor (e.g., CPU) and operating system part. When virtualization functions such as network virtualization are offloaded to the smart NIC, on-chip processor 210 can take over the network management, configuration, and other functions originally run on the host, and can also handle business that does not require high performance. When the virtual switch is offloaded to the smart NIC, the virtual switch's process can run on the NIC's on-chip processor, and the virtual switch's message forwarding and configuration management can be carried out by the processor cores of the on-chip processor. As further shown in Figure 2A, the processor cores of on-chip processor 210 may include at least one data core 211 and at least one control core 212, where data core 211 is responsible for the virtual switch's message forwarding and control core 212 is responsible for the virtual switch's configuration management.
Hardware acceleration engine 220 is a hardware component of the smart NIC; compared with on-chip processor 210 it offers higher performance but less flexibility, and can be used to handle high-performance business. In one example, hardware acceleration engine 220 may include a hardware processing device such as an NP (Network Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit). When the virtual switch is offloaded to the smart NIC, hardware acceleration engine 220 (e.g., an FPGA, ASIC, or other hardware processing device) can maintain forwarding rule table 221, which records the forwarding rules of messages, so that messages are forwarded with acceleration at the hardware level, improving forwarding performance.
One message forwarding method based on forwarding rule table 221 can be as follows: when the smart NIC obtains a message, hardware acceleration engine 220 in the NIC queries whether forwarding rule table 221 records a forwarding rule matching the message; if so, the message is forwarded according to the recorded rule; if not, the message is passed to the NIC's on-chip processor 210. After obtaining the message passed by hardware acceleration engine 220 (corresponding to the case where forwarding rule table 221 does not record the message's forwarding rule), on-chip processor 210 can query multiple pieces of the virtual switch's forwarding configuration information and generate the message's forwarding rule; on-chip processor 210 can then deliver the generated rule to hardware acceleration engine 220, which records it in forwarding rule table 221, thereby delivering the virtual switch's forwarding rule to the hardware part of the smart NIC. Subsequently, when a message hits the forwarding rule table (i.e., the table records the message's forwarding rule), hardware acceleration engine 220 can forward the message according to the hit rule without passing it to on-chip processor 210, accelerating message forwarding in hardware and improving forwarding performance.
It can be seen that when a message matches no forwarding rule in the hardware acceleration engine (for example, the engine's forwarding rule table does not record the message's rule), the on-chip processor needs to generate a forwarding rule and deliver it to the hardware acceleration engine, so that the engine records the message's rule (for example, in the forwarding rule table). This involves the process of the on-chip processor delivering forwarding rules to the hardware acceleration engine, i.e., the process referred to in the embodiments of this application of delivering the virtual switch's forwarding rules to the hardware part of the smart NIC.
One way for the on-chip processor to deliver a generated forwarding rule to the hardware acceleration engine is to use the configuration channel between them. For ease of understanding, Figure 2B shows another example structure of a smart NIC; as shown in Figures 2A and 2B, configuration channel 231 and data channel 232 exist between on-chip processor 210 and hardware acceleration engine 220. Configuration channel 231 is used for configuration management between on-chip processor 210 and hardware acceleration engine 220; for example, software running on on-chip processor 210 can deliver various configuration commands and information to the hardware of hardware acceleration engine 220 through configuration channel 231. Data channel 232 is used for data and message transmission between on-chip processor 210 and hardware acceleration engine 220.
Given that configuration channel 231 is meant for delivering configuration commands and information, a forwarding rule, as configuration information, can be delivered by on-chip processor 210 to hardware acceleration engine 220 through configuration channel 231, so that the virtual switch's forwarding rules reach the hardware part of the smart NIC. For example, in the on-chip processor, the virtual switch's control core or data core can generate a forwarding rule and then deliver it to the NIC's hardware acceleration engine through the configuration channel.
However, during research the inventors of this application found that delivering forwarding rules to the hardware acceleration engine through the configuration channel has at least the following problems:
The configuration channel's transfer rate is low, so rule delivery performance is low. For example, the configuration channel's transfer rate is generally around 1 megabit per second; in business scenarios with short-lived network connections, such as CPS (Cyber Physical Systems), this low transfer rate cannot meet the demand for rule delivery and may cause large numbers of rules to fail to be delivered. A short-lived-connection scenario requires that forwarding rules be recorded at the NIC's hardware level within a short time, which the configuration channel's low transfer rate cannot satisfy, potentially causing many rules to fail to be delivered.
The configuration channel delivers forwarding rules asynchronously. For example, when rules are delivered through the configuration channel, they are placed in memory shared with the hardware acceleration engine, which must fetch the rules one by one from the shared memory and record them in the forwarding rule table. In short-lived-connection scenarios, this asynchronous delivery may mean that by the time a rule actually reaches the hardware acceleration engine (e.g., is actually recorded in its forwarding rule table), the short-lived connection has already terminated; not only does the hardware acceleration engine then fail to accelerate message forwarding, but the overhead spent delivering the rule is wasted.
To facilitate understanding of the problems pointed out above, taking the TCP (Transmission Control Protocol) as an example, Figure 3 illustrates forwarding rule delivery in a short-lived network connection scenario. As shown in Figure 3, the client and server establish a connection through a three-way handshake: the client sends a SYN (Synchronize Sequence Numbers) request to the server, the server sends SYN-ACK (Acknowledge Character) information to the client, and the client sends ACK information to the server, completing the three-way handshake. After the handshake completes, the software of the smart NIC's on-chip processor begins delivering the forwarding rule. Because of the configuration channel's low delivery performance, together with the latency introduced by asynchronous delivery, by the time the rule is actually delivered successfully (e.g., actually recorded in the forwarding rule table), the connection between client and server has already entered, or is about to enter, the completed state, so the delivered rule cannot be used in this short-lived connection; that is, the NIC's hardware acceleration engine cannot accelerate message forwarding during this connection. As shown in Figure 3, between the start of rule delivery and its successful completion, the client and server have already performed k rounds of interaction and the client has sent interaction-complete information to the server, meaning that by the time the rule is actually delivered the connection has entered, or is about to enter, the completed state. It should further be noted that when the client finishes interacting with the server, the client sends interaction-complete information to the server and the server returns an interaction-complete acknowledgment; when the server finishes interacting with the client, the server sends interaction-complete information to the client and the client returns an interaction-complete acknowledgment.
It can be seen that delivering forwarding rules through the configuration channel suffers from low delivery performance and low reliability; for example, in short-lived-connection scenarios it may lead to unreliable situations such as rule delivery failures or rules that cannot be used within the current short-lived connection.
On this basis, the embodiments of this application provide an improved forwarding rule delivery solution to increase the performance and reliability of rule delivery. To this end, the embodiments of this application consider using the data channel between the on-chip processor and the hardware acceleration engine to deliver forwarding rules, and carrying the rules in the message's information structure, so that a rule can be delivered synchronously with the message from the NIC's software level (the on-chip processor level) to the NIC's hardware level (the hardware acceleration engine level), thereby improving delivery performance and reliability.
Based on the above idea, Figure 4 shows an optional flow chart of the forwarding rule delivery method provided by an embodiment of this application. The method can be carried out by a smart NIC, for example by the NIC's on-chip processor and hardware acceleration engine; referring to Figure 4, the method may include the following steps.
In step S410, the on-chip processor generates a forwarding rule for the message.
In some embodiments, after the smart NIC obtains a message, it can first query whether a forwarding rule matching the message exists in the NIC's hardware acceleration engine; if no matching rule exists in the hardware acceleration engine, the engine can pass the message to the NIC's on-chip processor; the on-chip processor can then generate a matching forwarding rule according to the virtual switch's forwarding configuration information, so that the generated rule can be delivered to the hardware acceleration engine, enabling the engine to record the rule and forward the message.
As an optional implementation, the hardware acceleration engine can store a forwarding rule table that records the forwarding rules of messages. After obtaining a message, the hardware acceleration engine can query whether the table records a rule matching the message, thereby determining whether a matching rule exists in the engine; if the table records no matching rule, the engine can pass the message to the NIC's on-chip processor; the on-chip processor can then generate a matching rule according to the virtual switch's forwarding configuration information, so that the generated rule can be delivered to the hardware acceleration engine, enabling the engine to record the rule and forward the message.
It should be noted that a forwarding rule can be a rule describing message forwarding behavior; it contains the message's match information, such as the five-tuple (source IP, destination IP, source port, destination port, and protocol number) and other necessary information that can distinguish the message's data flow, as well as forwarding actions. Forwarding actions include, for example, NAT (Network Address Translation), tunnel encapsulation and/or decapsulation, and the port out of which the message is forwarded.
As an optional implementation, the virtual switch's forwarding configuration information on which the on-chip processor bases rule generation includes, for example, ACL (Access Control Lists) information, QoS (Quality of Service) information, and routing information. In one implementation example of generating a rule matching a message, the on-chip processor can look the message up in each forwarding table and take the collection of the lookup results across the tables as the rule matching the message; that is, the message's forwarding rule can be regarded as the collection of the message's lookup results in the forwarding tables.
In one example, suppose forwarding tables such as an ACL table and a routing table exist. An example ACL table is:
192.168.0.0/24  accept;
192.168.1.0/24  drop.
An example routing table is:
192.168.0.0/24  port1;
192.168.1.0/24  port2.
Then for the message 1.1.1.1:1111->192.168.0.1:80, tcp (Transmission Control Protocol), after the message is looked up in the ACL table and the routing table, the collection of lookup results can serve as the message's forwarding rule, for example: 1.1.1.1:1111->192.168.0.1:80, tcp, accept, port1.
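The lookup-union example above can be sketched as follows (the /24 prefix match is simplified to a toy string operation for illustration; a real implementation would use longest-prefix matching):

```python
# Example tables mirroring the ACL and routing tables above.
acl_table = {"192.168.0.0/24": "accept", "192.168.1.0/24": "drop"}
route_table = {"192.168.0.0/24": "port1", "192.168.1.0/24": "port2"}

def prefix24(ip):
    # toy /24 prefix derivation, sufficient for the example tables above
    return ".".join(ip.split(".")[:3]) + ".0/24"

def build_rule(src, dst, proto):
    # the forwarding rule is the collection of per-table lookup results
    p = prefix24(dst.split(":")[0])
    return f"{src}->{dst},{proto},{acl_table[p]},{route_table[p]}"
```

For the message in the example, `build_rule` concatenates the ACL result and the routing result onto the match information, reproducing the rule shown above.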
In step S411, the on-chip processor carries the forwarding rule in the information structure of the message.
There is a need for message exchange over the data channel between the smart NIC's on-chip processor and its hardware acceleration engine. For example, when the hardware acceleration engine determines that no forwarding rule is recorded for a message, it needs to pass the message to the on-chip processor (for instance, through the data channel); then, after generating the message's forwarding rule, the on-chip processor must not only deliver the rule to the hardware acceleration engine but also pass the message handed over by the engine back to it (for instance, through the data channel), so that the hardware acceleration engine can hardware-accelerate the message's forwarding using the rule delivered by the on-chip processor.
On this basis, in some embodiments the on-chip processor needs to carry the message's data information (such as the message's data content) in the message's information structure, and generate from that structure the message it passes to the hardware acceleration engine. In the embodiments of this application, the on-chip processor delivers the message's forwarding rule to the hardware acceleration engine not through the configuration channel but through the data channel; to make this possible, the embodiments of this application can carry the rule generated by the on-chip processor in the message transmitted over the data channel, so that the generated rule travels to the hardware acceleration engine together with the message.
In some embodiments, to carry the forwarding rule in the message, the embodiments of this application can invoke the message's information structure and carry in it the message's forwarding rule generated by the on-chip processor; the on-chip processor can then generate the message based on the rule-carrying information structure, so that the generated message can carry the generated forwarding rule.
In some embodiments, the forwarding rule can be carried in the head room field of the message's information structure. The head room field can be located between the header fields and the data field of the message's information structure. As an optional implementation, the size of the head room field is settable; on this basis, the embodiments of this application can set its size to correspond to the preset length of the forwarding rule, and then carry a rule of the preset length in front of the data field of the information structure, so that the rule is carried in the head room field of the information structure.
In step S412, the on-chip processor generates a message carrying the forwarding rule according to the information structure carrying the forwarding rule.
In step S413, the on-chip processor delivers the generated message to the hardware acceleration engine over the data channel.
After carrying the message's forwarding rule in its information structure, the on-chip processor can generate the message based on the rule-carrying structure, so that the generated message carries the rule. The on-chip processor can then deliver the generated message to the hardware acceleration engine over the data channel, so that when the hardware acceleration engine obtains the message it simultaneously obtains the forwarding rule carried in it.
In some further embodiments, so that the hardware acceleration engine can determine that the message passed by the on-chip processor carries a forwarding rule, the on-chip processor can, while delivering the message to the hardware acceleration engine over the data channel, simultaneously pass preset flag information indicating that the delivered message carries a forwarding rule. The preset flag information can be implemented as flag information that can be passed directly between the smart NIC's software and hardware; that is, the flag information can be passed directly between the on-chip processor and the hardware acceleration engine.
It should be noted that the preset flag information is a piece of flag information; it can be a bit, a number, or flag information in any form. Through it, the hardware acceleration engine can be informed that the currently passed message carries a forwarding rule. The preset flag information can be passed from the on-chip processor to the hardware acceleration engine through the data channel.
In step S414, the hardware acceleration engine parses the information structure of the message to determine the forwarding rule carried in the message.
In step S415, the hardware acceleration engine records the forwarding rule of the message.
After obtaining the message delivered by the on-chip processor through the data channel, the hardware acceleration engine can parse the message and determine the forwarding rule from it. In some embodiments, the hardware acceleration engine can parse the head room field of the message's information structure to determine the forwarding rule carried in that field.
As an optional implementation, the size of the head room field is settable; for example, it can be set to correspond to the preset length of the forwarding rule. The hardware acceleration engine can then separate the rule from the front of the data field of the message's information structure according to the preset length of the rule, thereby determining the rule carried in the head room field. Note that in the message's information structure the head room field is located between the header fields and the data field, in front of the data field; separating information of the preset length from the front of the data field based on the rule's preset length therefore yields the forwarding rule carried in the message.
In some further embodiments, the hardware acceleration engine can determine, based on the preset flag information passed by the on-chip processor, that the message carries a forwarding rule, and accordingly perform the step of parsing the message to determine the carried rule. Further, if the on-chip processor does not simultaneously pass the preset flag information directly when delivering the message, the hardware acceleration engine can determine that the delivered message carries no forwarding rule, and can skip determining a rule carried in the message.
硬件加速引擎在从报文中确定出转发规则后,可记录所述报文的转发规则。在进一步的一些实施例中,硬件加速引擎还可基于报文的转发规则,转发报文。后续,硬件加速引擎在获得该报文后,也可在已记录报文的转发规则的情况下,使得报文能够直接基于硬件加速引擎所记录的转发规则进行转发。
在一些实施例中,硬件加速引擎可设置转发规则表,硬件加速引擎可将从报文中确定 的转发规则记录在转发规则表中。作为可选实现,转发规则表可记录报文的报文标识与报文的转发规则的对应关系;在一个示例中,报文标识例如五元组信息,其也可以视为是转发规则的一部分;硬件加速引擎可基于片上处理器所下发的报文的报文标识以及报文中携带的转发规则,在转发规则表中记录报文的报文标识与报文的转发规则的对应关系,以实现硬件加速引擎记录报文的转发规则。作为可选实现,报文的报文标识可从报文的信息结构中确定。
In one implementation example, if the message is a message of a data flow, the message identifier may be indicated by the flow identifier of the data flow to which the message belongs, and the message's forwarding rule may be the forwarding rule corresponding to that data flow. Accordingly, the forwarding rule table may be a flow table, which records the forwarding rule of each data flow; the hardware acceleration engine may then record in the flow table the forwarding rule of the data flow to which the message belongs (for example, recording the correspondence between the flow identifier of that data flow and the forwarding rule), thereby recording the message's forwarding rule in the forwarding rule table.
It is worth noting that the preset flag information and the message identifier (e.g., the match) in the forwarding rule table (e.g., the flow table) are different things. A flow table entry consists of a match part and an action part: the match is information such as the message's five-tuple, used to distinguish which data flow the message belongs to, while the action specifies how the data flow to which the message belongs should be forwarded.
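The match/action split of a flow entry can be illustrated with a small Python structure; the five-tuple field set and the placeholder action string are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    """The match part: identifies which data flow a message belongs to."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int

@dataclass
class FlowEntry:
    match: FiveTuple  # distinguishes the data flow
    action: str       # how messages of that flow are to be forwarded
```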
In the forwarding rule delivery method provided by the embodiments of this application, the on-chip processor may generate the forwarding rule of a message and carry the generated forwarding rule in the message's information structure; the on-chip processor may then generate the message based on the information structure carrying the forwarding rule, so that the generated message carries the message's forwarding rule; further, the on-chip processor may deliver the generated message through the data channel used for data message transmission with the hardware acceleration engine, so that the hardware acceleration engine obtains, through the data channel, the message carrying the forwarding rule, thereby realizing delivery of the message's forwarding rule from the on-chip processor to the hardware acceleration engine.
When delivering a message to the hardware acceleration engine through the data channel, the embodiments of this application can simultaneously carry the forwarding rule in the message, so that the forwarding rule is delivered through the data channel. Since the delivery performance of forwarding rules over the data channel equals the data-message transfer performance between the on-chip processor and the hardware acceleration engine, which far exceeds the transfer performance of configuration information and commands over the configuration channel, the embodiments of this application can greatly improve the delivery performance of forwarding rules. At the same time, since the forwarding rule is delivered together with the message, its delivery is a synchronous event, which resolves the reliability problems caused by asynchronous delivery of forwarding rules and allows the delivery scheme to meet the needs of business scenarios such as short-lived network connections, thereby improving the reliability of forwarding rule delivery. It can be seen that the embodiments of this application deliver forwarding rules using the data channel between the on-chip processor and the hardware acceleration engine, with the forwarding rule carried in the message's information structure so that it is delivered to the hardware acceleration engine synchronously with the message; the embodiments of this application can thus improve both the performance and the reliability of forwarding rule delivery.
As an optional implementation, the forwarding rule delivery method provided by the embodiments of this application is described below taking a message of a data flow as an example. As an optional implementation, Figure 5 exemplarily shows another optional flowchart of the forwarding rule delivery method provided by the embodiments of this application; this method flow may be performed by the on-chip processor and the hardware acceleration engine. Referring to Figure 5, the method flow may include the following steps.
In step S510, the hardware acceleration engine obtains a message of a data flow.
After the smart network card obtains a message of a data flow sent by a virtual machine, or a message of a data flow destined for a virtual machine, the message may be hardware-accelerated and forwarded by the smart network card's hardware acceleration engine; the hardware acceleration engine thus obtains the message of the data flow. The message obtained by the hardware acceleration engine may be regarded as a data-flow message to be forwarded (for example, to be forwarded by means of hardware acceleration).
In step S511, the hardware acceleration engine queries whether the flow table records a flow entry matching the message's data flow; if so, step S512 is performed; if not, step S513 is performed.
In step S512, the hardware acceleration engine forwards the message of the data flow based on the flow entry matching the message's data flow.
In step S513, the hardware acceleration engine passes the message of the data flow to the on-chip processor through the data channel.
After obtaining the message of the data flow, the hardware acceleration engine may query whether the flow table records a flow entry matching the message's data flow (a flow entry in the flow table may indicate the forwarding rule of a data flow). If the flow table records a matching flow entry, the hardware acceleration engine may hardware-accelerate the forwarding of the message based on that entry, improving the forwarding performance of the data flow's messages. If the flow table records no matching flow entry, the hardware acceleration engine needs to pass the message to the on-chip processor through the data channel, so that the on-chip processor generates the flow entry matching the message's data flow.
In some embodiments, a data flow may be formed by a group of messages of the same type (a message being, for example, a data packet); the type of a message may be determined by the value of its match field, which may be regarded as the value of the matching fields in the message.
In some embodiments, the flow table may record multiple flow entries, each of which may indicate the forwarding rule of one data flow. In one example, a flow entry may record the flow identifier of a data flow and the forwarding rule corresponding to that data flow. For ease of understanding, as an implementation example, Figure 6 exemplarily shows the structure of a flow table. As shown in Figure 6, the flow table records multiple flow entries 601 to 60m (m being the number of flow entries, which depends on the actual situation); a flow entry may record the flow identifier of a data flow and the forwarding rule corresponding to that data flow.
As an optional implementation of step S511, the hardware acceleration engine may query, according to the flow identifier of the message's data flow, whether the flow table records a flow entry matching that flow identifier. If so, the hardware acceleration engine has recorded a forwarding rule matching the message's data flow; if not, it has not.
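Steps S511 to S513 amount to a table lookup keyed by the flow identifier; a Python sketch, with all names hypothetical:

```python
flow_table = {}  # flow identifier -> forwarding rule (one entry per data flow)

def handle_message(flow_id, message):
    """S511: query the flow table; S512 on a hit, S513 on a miss."""
    rule = flow_table.get(flow_id)
    if rule is not None:
        return ("hw_forward", rule)           # S512: forward by the matching entry
    return ("to_on_chip_processor", message)  # S513: pass to the on-chip processor
```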
In step S514, the on-chip processor generates, according to the forwarding configuration information of the virtual switch, a flow entry matching the message's data flow.
In step S515, the on-chip processor carries the generated flow entry in the headroom field of the message's information structure.
After obtaining the message of the data flow passed by the hardware acceleration engine through the data channel, the on-chip processor may query various kinds of forwarding configuration information of the virtual switch (for example, the virtual switch's ACL information, QoS information, routing information, and so on), and, according to this forwarding configuration information, generate a flow entry matching the message's data flow (which may be used to indicate the forwarding rule matching the message's data flow).
After generating the flow entry matching the message's data flow, the on-chip processor may carry the flow entry in the headroom field of the message's information structure, so that the flow entry can be carried in the message.
In some embodiments, the data core of the on-chip processor that runs the virtual switch may generate the forwarding rule and carry it in the headroom field of the message's information structure; for example, the data core may generate the flow entry matching the message's data flow and carry the flow entry in the headroom field of the message's information structure.
In one example, Figure 7 exemplarily shows the information structure of a message. As shown in Figure 7, the information structure includes a data field for carrying the message's data. In front of the data field is the headroom field, whose size is configurable; the embodiments of this application may therefore set the size of the headroom field according to the preset length of the forwarding rule (for example, the preset length of a flow entry), and then carry the preset-length forwarding rule (for example, a preset-length flow entry) in front of the data field, thereby carrying the forwarding rule in the headroom field in front of the data field. In front of the headroom field is a private field, and in front of the private field is the structure field of the information structure. The private field and the structure field may be regarded as the header fields of the message's information structure.
In one implementation example, taking the case where the message's information structure adopts the mbuf (memory buffer) structure: the data field of the mbuf structure may store the message's data, and in front of the data field there is a headroom field that may be used to store the message's control information, the size of which is configurable. The embodiments of this application may therefore carry the forwarding rule (for example, a flow entry) in the headroom field of the mbuf structure. As an optional implementation, the mbuf structure may be used in scenarios such as DPDK (Data Plane Development Kit).
In step S516, the on-chip processor generates, based on the information structure carrying the flow entry, a message carrying the flow entry.
In step S517, the on-chip processor delivers the generated message to the hardware acceleration engine through the data channel, and passes preset flag information to the hardware acceleration engine to indicate that the delivered message carries a flow entry.
After carrying the flow entry in the message's information structure, the on-chip processor may generate the message carrying the flow entry and deliver it to the hardware acceleration engine through the data channel, so that the hardware acceleration engine obtains the message carrying the flow entry through the data channel. Further, to indicate to the hardware acceleration engine that the delivered message carries a flow entry, the on-chip processor may, while delivering the message, also pass preset flag information to the hardware acceleration engine; this preset flag information may indicate that the message delivered by the on-chip processor carries a flow entry.
In step S518, the hardware acceleration engine determines, according to the preset flag information, that the message delivered by the on-chip processor through the data channel carries a flow entry, and determines the flow entry from the headroom field of the message's information structure.
After obtaining the message delivered by the on-chip processor through the data channel, together with the preset flag information passed by the on-chip processor, the hardware acceleration engine may determine, based on the preset flag information, that the delivered message carries a flow entry; the hardware acceleration engine may then determine, from the headroom field of the message's information structure, the flow entry (i.e., the flow entry matching the message's data flow). In some embodiments, the hardware acceleration engine may, according to the preset length of the flow entry, separate out the preset-length information from in front of the data field of the message's information structure, thereby obtaining the flow entry carried in the message.
In step S519, the hardware acceleration engine records the flow entry in the flow table, and forwards the message delivered by the on-chip processor based on the flow entry.
After determining, from the message delivered by the on-chip processor, the flow entry matching the message's data flow, the hardware acceleration engine may record the flow entry in the flow table (for example, insert the flow entry into the flow table), and may forward the message delivered by the on-chip processor based on the data-flow forwarding rule indicated by that flow entry.
In some further embodiments, after the hardware acceleration engine subsequently obtains messages of the same data flow, it can find in the flow table the flow entry matching the flow identifier of the data flow, and thus accelerate the forwarding of the data flow's messages at the hardware level based on the found flow entry, improving forwarding performance. That is, if a message hits a flow entry in the hardware acceleration engine, the message is no longer passed to the on-chip processor; the hardware acceleration engine may forward the message directly based on the flow entry it hits. Only when a message misses the flow entries in the hardware acceleration engine does the hardware acceleration engine pass the message to the on-chip processor, which generates the message's flow entry; the flow entry generated by the on-chip processor may be carried in the message and delivered to the hardware acceleration engine through the data channel.
In some further embodiments, if for some abnormal reason the hardware acceleration engine fails to insert the flow entry into the flow table (for example, the entry to be inserted has a hash conflict with an entry already recorded in the flow table, causing the insertion to fail), subsequent messages of the data flow will fail to hit any flow entry recorded in the flow table and will therefore continue to be passed to the on-chip processor, which generates the flow entry matching the message's data flow and delivers it according to the solution provided by the embodiments of this application.
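The failure mode described above can be mimicked with a fixed-bucket table in Python, where a colliding insert is simply rejected and the flow keeps taking the slow path; the bucket count and structure are invented for this sketch.

```python
N_BUCKETS = 8  # assumed fixed table size

class TinyFlowTable:
    """A toy hash table whose insert can fail on a bucket collision."""
    def __init__(self):
        self.buckets = [None] * N_BUCKETS

    def insert(self, flow_id, rule) -> bool:
        i = hash(flow_id) % N_BUCKETS
        if self.buckets[i] is not None and self.buckets[i][0] != flow_id:
            return False  # collision with another flow: insert fails
        self.buckets[i] = (flow_id, rule)
        return True

    def lookup(self, flow_id):
        i = hash(flow_id) % N_BUCKETS
        entry = self.buckets[i]
        return entry[1] if entry and entry[0] == flow_id else None
```

A failed insert is harmless to correctness in this scheme: later messages of the flow simply miss the table and are handed back to the on-chip processor, which regenerates and re-delivers the entry.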
In one implementation example, Figure 8 exemplarily shows the forwarding rule delivery process provided by the embodiments of this application. As shown in Figure 8, taking the case where the messages of the data flow are in the form of data packets, the process may be as follows:
① After obtaining a data packet sent by a virtual machine or destined for a virtual machine, the smart network card may hand it to the hardware acceleration engine; the hardware acceleration engine determines that no flow entry matching the data packet's data flow is found in the flow table;
② the hardware acceleration engine passes the data packet to the on-chip processor through the data channel;
③ the on-chip processor (for example, a data core in the on-chip processor) may generate, based on the forwarding configuration information of the virtual switch, a flow entry matching the data packet's data flow, and carry the flow entry in the data packet to be delivered to the smart network card; this data packet may carry the flow entry, the data packet's data information, and so on;
④ the on-chip processor delivers the data packet carrying the flow entry to the hardware acceleration engine through the data channel (further, the on-chip processor may simultaneously pass the preset flag information);
⑤ the hardware acceleration engine may determine the carried flow entry from the data packet delivered by the on-chip processor and record the flow entry in the flow table;
⑥ the hardware acceleration engine forwards the data packet based on the flow entry.
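The six steps ① to ⑥ can be traced end to end in a short Python simulation; the rule encoding, the names, and the 16-byte preset length are assumptions of the sketch, not part of the embodiments.

```python
RULE_LEN = 16
flow_table = {}

def on_chip_processor(flow_id: str, data: bytes):
    """Steps ②/③: generate the flow entry and carry it in front of the data (headroom)."""
    rule = flow_id.encode().ljust(RULE_LEN, b"\0")  # illustrative serialization
    return rule + data, True  # step ④: packet with rule in headroom, plus the preset flag

def hardware_engine(flow_id: str, data: bytes):
    if flow_id in flow_table:                     # later packets of the flow: table hit
        return ("hw_forward", flow_table[flow_id])
    msg, flag = on_chip_processor(flow_id, data)  # step ①: miss, take the slow path
    if flag:                                      # step ⑤: record the carried entry
        flow_table[flow_id] = msg[:RULE_LEN]
    return ("forward_after_install", flow_table[flow_id])  # step ⑥
```

Running two packets of the same flow shows the first one installing the entry and the second one hitting it directly in "hardware".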
In the forwarding rule delivery solution provided by the embodiments of this application, the forwarding rule is carried in the message, so that while the on-chip processor delivers the message to the hardware acceleration engine through the data channel, the forwarding rule in the message is synchronously delivered to the hardware acceleration engine through the data channel. The embodiments of this application use the data channel to deliver forwarding rules, improving delivery performance; and since the forwarding rule is delivered to the hardware acceleration engine synchronously with the message, as a synchronous event, the reliability problems caused by asynchronous delivery of forwarding rules are resolved, improving the reliability of forwarding rule delivery. The forwarding rule delivery solution provided by the embodiments of this application can improve both the delivery performance and the delivery reliability of forwarding rules, can be applied effectively to business scenarios such as short-lived network connections, and enhances the application value of cloud computing and virtualization technology.
The smart network card provided by the embodiments of this application is described below; the functions of the smart network card described below may be cross-referenced with the content described above. As shown in Figures 2A and 2B, the smart network card provided by the embodiments of this application may include an on-chip processor and a hardware acceleration engine. As an optional implementation, the functions of the on-chip processor described below may be implemented in software; the functions of the hardware acceleration engine described below may be implemented in hardware, or, on the basis of the hardware acceleration engine's programmability, implemented by software programmed into the hardware acceleration engine.
In the embodiments of this application, the on-chip processor may be configured to: generate a forwarding rule of a message; carry the forwarding rule in the message's information structure; generate, based on the information structure carrying the forwarding rule, a message carrying the forwarding rule; and deliver the generated message to the hardware acceleration engine through the data channel.
The hardware acceleration engine may be configured to: obtain, through the data channel, the message delivered by the on-chip processor, the message carrying the message's forwarding rule; parse the message's information structure to determine the forwarding rule carried in the message, where the forwarding rule is carried in the message's information structure; and record the message's forwarding rule.
In some embodiments, the on-chip processor being configured to carry the forwarding rule in the message's information structure includes: carrying the forwarding rule in the headroom field of the information structure, the headroom field being located between the header field and the data field of the information structure.
In some embodiments, the size of the headroom field is set to correspond to a preset length of the forwarding rule; the on-chip processor being configured to carry the forwarding rule in the headroom field of the information structure includes: carrying the forwarding rule of the preset length in front of the data field of the information structure, where the headroom field is located in front of the data field.
In some further embodiments, while delivering the generated message to the hardware acceleration engine through the data channel, the on-chip processor may further be configured to pass preset flag information to the hardware acceleration engine, the preset flag information indicating that the delivered message carries a forwarding rule.
In some further embodiments, the on-chip processor may further be configured to: when the message's forwarding rule is not recorded in the hardware acceleration engine, obtain the message passed by the hardware acceleration engine through the data channel, so that the on-chip processor enters the step of generating the message's forwarding rule.
In some embodiments, the on-chip processor being configured to generate the forwarding rule of the message includes: generating the message's forwarding rule according to the forwarding configuration information of the virtual switch.
In some embodiments, the hardware acceleration engine being configured to parse the message's information structure to determine the forwarding rule carried in the message includes: parsing the headroom field of the information structure to determine the forwarding rule carried in the headroom field, the headroom field being located between the header field and the data field of the information structure.
In some embodiments, the size of the headroom field is set to correspond to a preset length of the forwarding rule; the hardware acceleration engine being configured to parse the headroom field of the information structure to determine the forwarding rule carried in the headroom field includes: separating out, according to the preset length, the forwarding rule of the preset length from in front of the data field of the information structure, where the headroom field is located in front of the data field.
In some further embodiments, when obtaining the delivered message through the data channel, the hardware acceleration engine may further be configured to obtain the passed preset flag information, the preset flag information indicating that the delivered message carries a forwarding rule.
In some further embodiments, the hardware acceleration engine may further be configured to: when no forwarding rule of the message is recorded, pass the message to the on-chip processor through the data channel, so that the on-chip processor generates a message carrying the forwarding rule.
In some embodiments, the message is a message of a data flow; the message's forwarding rule is a flow entry matching the message's data flow; flow entries are recorded in a flow table, the flow table records multiple flow entries, and one flow entry is used to indicate the forwarding rule of one data flow.
Accordingly, the hardware acceleration engine being configured to record the message's forwarding rule includes: inserting the flow entry matching the message's data flow into the flow table.
The embodiments of this application further provide a storage medium storing one or more computer-executable instructions which, when executed, implement the forwarding rule delivery method performed by the on-chip processor as provided by the embodiments of this application, or the forwarding rule delivery method performed by the hardware acceleration engine as provided by the embodiments of this application.
The embodiments of this application further provide a computer program which, when executed, implements the forwarding rule delivery method performed by the on-chip processor as provided by the embodiments of this application, or the forwarding rule delivery method performed by the hardware acceleration engine as provided by the embodiments of this application.
Multiple embodiment solutions provided by the embodiments of this application are described above; the optional approaches introduced in the respective embodiments may, where no conflict arises, be combined and cross-referenced with one another to derive a variety of possible embodiment solutions, all of which may be regarded as embodiment solutions disclosed by the embodiments of this application.
Although the embodiments of this application are disclosed above, this application is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of this application; the scope of protection of this application shall therefore be defined by the claims.

Claims (12)

  1. A forwarding rule delivery method, applied to an on-chip processor, the method comprising:
    generating a forwarding rule of a message;
    carrying the forwarding rule in an information structure of the message;
    generating, based on the information structure carrying the forwarding rule, a message carrying the forwarding rule;
    delivering the generated message through a data channel.
  2. The method according to claim 1, wherein the carrying the forwarding rule in the information structure of the message comprises:
    carrying the forwarding rule in a headroom field of the information structure; the headroom field is located between a header field and a data field of the information structure.
  3. The method according to claim 2, wherein a size of the headroom field is set to correspond to a preset length of the forwarding rule; the carrying the forwarding rule in the headroom field of the information structure comprises:
    carrying the forwarding rule of the preset length in front of the data field of the information structure; wherein the headroom field is located in front of the data field.
  4. The method according to claim 1, wherein, when delivering the generated message through the data channel, the method further comprises:
    passing preset flag information, the preset flag information indicating that the delivered message carries a forwarding rule;
    the method further comprises:
    when the forwarding rule of the message is not recorded in a hardware acceleration engine, obtaining the message passed by the hardware acceleration engine through the data channel, so as to enter the step of generating the forwarding rule of the message;
    the generating the forwarding rule of the message comprises:
    generating the forwarding rule of the message according to forwarding configuration information of a virtual switch.
  5. The method according to any one of claims 1 to 4, wherein the message is a message of a data flow; the forwarding rule of the message is a flow entry matching the data flow of the message; the flow entry is recorded in a flow table stored by a hardware acceleration engine, the flow table records multiple flow entries, and one flow entry is used to indicate the forwarding rule of one data flow.
  6. A forwarding rule delivery method, applied to a hardware acceleration engine, the method comprising:
    obtaining a delivered message through a data channel; the message carries a forwarding rule of the message;
    parsing an information structure of the message to determine the forwarding rule carried in the message; wherein the forwarding rule is carried in the information structure of the message;
    recording the forwarding rule of the message.
  7. The method according to claim 6, wherein the parsing the information structure of the message to determine the forwarding rule carried in the message comprises:
    parsing a headroom field of the information structure to determine the forwarding rule carried in the headroom field; the headroom field is located between a header field and a data field of the information structure.
  8. The method according to claim 7, wherein a size of the headroom field is set to correspond to a preset length of the forwarding rule; the parsing the headroom field of the information structure to determine the forwarding rule carried in the headroom field comprises:
    separating out, according to the preset length, the forwarding rule of the preset length from in front of the data field of the information structure; wherein the headroom field is located in front of the data field.
  9. The method according to claim 6, wherein, when obtaining the delivered message through the data channel, the method further comprises:
    obtaining passed preset flag information, the preset flag information indicating that the delivered message carries a forwarding rule;
    the method further comprises:
    when no forwarding rule of the message is recorded, passing the message to an on-chip processor through the data channel, so that the on-chip processor generates a message carrying the forwarding rule.
  10. The method according to any one of claims 6 to 9, wherein the message is a message of a data flow; the forwarding rule of the message is a flow entry matching the data flow of the message; the flow entry is recorded in a flow table, the flow table records multiple flow entries, and one flow entry is used to indicate the forwarding rule of one data flow; the recording the forwarding rule of the message comprises:
    inserting the flow entry matching the data flow of the message into the flow table.
  11. A smart network card, comprising an on-chip processor and a hardware acceleration engine; the on-chip processor is configured to perform the forwarding rule delivery method according to any one of claims 1 to 5; the hardware acceleration engine is configured to perform the forwarding rule delivery method according to any one of claims 6 to 10.
  12. A storage medium, wherein the storage medium stores one or more computer-executable instructions which, when executed, implement the forwarding rule delivery method according to any one of claims 1 to 5, or the forwarding rule delivery method according to any one of claims 6 to 10.
PCT/CN2023/111395 2022-08-15 2023-08-07 Forwarding rule delivery method, smart network card, and storage medium WO2024037366A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210981665.XA CN115460145A (zh) 2022-08-15 2022-08-15 Forwarding rule delivery method, smart network card, and storage medium
CN202210981665.X 2022-08-15

Publications (1)

Publication Number Publication Date
WO2024037366A1 true WO2024037366A1 (zh) 2024-02-22








Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854269

Country of ref document: EP

Kind code of ref document: A1