WO2020063298A1 - Method for processing TCP packets, TOE component, and network device - Google Patents

Method for processing TCP packets, TOE component, and network device

Info

Publication number
WO2020063298A1
WO2020063298A1 (PCT/CN2019/104721)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
tcp
sent
storage address
data
Prior art date
Application number
PCT/CN2019/104721
Other languages
English (en)
French (fr)
Inventor
魏启坤
张明礼
韩艳飞
赵泓博
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP19867965.6A (EP3846405B1)
Publication of WO2020063298A1
Priority to US17/213,582 (US11489945B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L69/162 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields involving adaptations of sockets based mechanisms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/19 Flow control; Congestion control at layers above the network layer
    • H04L47/193 Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/12 Protocol engines
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163 In-band adaptation of TCP data exchange; In-band control procedures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/166 IP fragmentation; TCP segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9015 Buffering arrangements for supporting a linked list

Definitions

  • the present application relates to the field of computer communications, and in particular, to a method for processing Transmission Control Protocol (TCP) messages, a TCP offload engine (TOE) component, and a network device.
  • TCP: Transmission Control Protocol
  • TOE: TCP offload engine
  • TCP traffic accounts for 90% of the total Internet traffic.
  • CPU: central processing unit
  • TOE technology uses dedicated hardware to process the TCP/IP protocol stack, thereby greatly reducing the processing load of the CPU.
  • the dedicated hardware circuit that handles the TCP/IP protocol stack is called a TOE component.
  • the application provides a method for processing a TCP message, a TOE component, and a chip and a network device including the TOE component, which are used to improve the processing efficiency of the TCP message.
  • the first aspect of the present application provides a method for processing a TCP message.
  • the TOE component of the TCP offload engine obtains a first storage address, where the first storage address is an address of a first storage block in the memory, and the first storage block stores a target TCP packet, where the target TCP packet includes a packet header and TCP load.
  • the TOE component obtains the packet header from the first storage block according to the first storage address.
  • the TOE component performs TCP-related protocol processing according to the packet header; during this processing, the TCP load is not read out of the first storage block by the TOE component.
  • the TOE component does not need to read the first storage block where the target TCP message is located during the process of performing TCP-related protocol processing according to the header of the target TCP message.
  • the problem of low processing efficiency caused by frequently reading the target TCP packet during the processing of the target TCP packet is avoided.
  • the TCP load of the TCP packet does not need to be read out from the first storage block where the target TCP packet is located. There is no need to allocate separate TCP buffers for different TCP processes, saving memory storage resources.
  • the TOE component further sends a second storage address to the central processing unit CPU; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is a start block of at least one storage block, and the at least one storage block includes the first storage block.
  • the TOE component sends a second storage address to the CPU so that the CPU can determine the data to be sent according to the second storage address, which avoids the waste of interface resources that would be caused by the TOE component sending the TCP packet or the TCP load directly to the CPU.
  • after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the start block of the storage chain.
  • the TOE component generates a storage chain according to the storage addresses of multiple TCP packets, and sends the address of the start block of the storage chain to the CPU. In this way, the TOE component can send multiple TCP packet storage addresses to the CPU at one time, further saving the interface resources between the TOE component and the CPU, and improving processing efficiency.
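The storage chain described above can be sketched as a singly linked list of storage addresses: each node holds the address of one packet's storage block, and only the head of the list is handed to the CPU. The sketch below is illustrative only; the patent does not specify a data structure, and all names (`chain_node`, `chain_append`) are invented for the example.

```c
#include <stddef.h>
#include <stdlib.h>

/* One node of the storage chain: the storage address of one TCP
 * packet's block, plus a link to the next packet of the same flow. */
typedef struct chain_node {
    void              *block_addr;
    struct chain_node *next;
} chain_node;

/* Append a packet's storage address to the chain and return the head.
 * The head's address is what would be sent to the CPU as the "second
 * storage address". Error handling is omitted for brevity. */
static chain_node *chain_append(chain_node *head, void *block_addr)
{
    chain_node *n = malloc(sizeof *n);
    n->block_addr = block_addr;
    n->next = NULL;
    if (head == NULL)
        return n;
    chain_node *t = head;
    while (t->next != NULL)
        t = t->next;
    t->next = n;
    return head;
}

/* Number of packet addresses the chain carries. */
static size_t chain_len(const chain_node *head)
{
    size_t n = 0;
    for (; head != NULL; head = head->next)
        n++;
    return n;
}
```

With such a chain, a single address transfer to the CPU covers an arbitrary number of received packets, which is the interface saving the paragraph above describes.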
  • the TOE component receives a third storage address sent by the CPU, where the third storage address indicates a storage block where data to be sent determined by the CPU is located, and the data to be sent includes the TCP load.
  • the TOE component obtains the data to be sent according to the third storage address, and the storage position of the data to be sent in the memory remains unchanged until the data to be sent is successfully sent.
  • the data to be sent includes the TCP load of the target TCP message, and the storage position of the data to be sent in the memory remains unchanged until the data to be sent is successfully sent.
  • the TOE component receives a third storage address sent by the central processing unit CPU, where the third storage address indicates a storage block where the data to be sent determined by the CPU is located.
  • the TOE component acquires the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • the data to be sent sent by the CPU does not include the TCP load of the target TCP packet.
  • the position of the data to be sent in the memory does not change, that is, no additional transmission buffer needs to be allocated for transmitting the data to be sent, which saves memory resources.
  • the second aspect of the present application provides a transmission control protocol offload engine TOE component, which TOE component includes an interface and a processor.
  • the processor obtains a first storage address through the interface; the first storage address is an address of a first storage block in a memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP load.
  • the processor obtains the packet header from the first storage block according to the first storage address, and performs TCP-related protocol processing according to the packet header.
  • the TCP load is not read out of the first storage block by the TOE component.
  • the processor is further configured to send a second storage address to the central processing unit CPU through the interface; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is a start block of at least one storage block, and the at least one storage block includes the first storage block.
  • after acquiring the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the processor generates a storage chain according to the storage addresses of the multiple TCP packets.
  • the second storage address is an address of a start block in the storage chain.
  • the processor further receives, through the interface, a third storage address sent by the CPU, where the third storage address indicates a storage block where the data to be sent determined by the CPU is located, and the data to be sent includes the TCP load.
  • the processor also acquires the data to be sent according to the third storage address, and the storage position of the data to be sent in the memory remains unchanged until the data to be sent is successfully sent.
  • the processor further receives, via the interface, a third storage address sent by the central processing unit CPU, where the third storage address indicates a storage block where the data to be sent determined by the CPU is located.
  • the processor acquires the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • a third aspect of the present application provides a chip, which includes the TOE component and the network processor in the second aspect or any implementation manner of the second aspect.
  • the chip may further include other components.
  • a fourth aspect of the present application provides a network device, which includes the chip provided by the third aspect and a central processing unit CPU.
  • a fifth aspect of the present application provides another network device, including a transmission control protocol offload engine TOE component and a memory.
  • the memory stores a transmission control protocol TCP message.
  • the TOE component obtains a first storage address, where the first storage address is an address of a first storage block in the memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP load.
  • the TOE component also obtains the packet header from the first storage block according to the first storage address, and performs TCP-related protocol processing according to the packet header. During this processing, the TCP load is not read out of the first storage block by the TOE component.
  • the network device further includes a central processing unit CPU.
  • the TOE component also sends a second storage address to the CPU; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is a start block of at least one storage block, and the at least one storage block includes the first storage block.
  • the CPU receives the second storage address and determines the data to be sent and a third storage address according to the second storage address; the third storage address indicates a storage block where the data to be sent is located, and the data to be sent includes the TCP load.
  • after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the start block of the storage chain.
  • the CPU runs a socket.
  • the TOE component further sends the second storage address to the socket.
  • the socket is configured to receive the second storage address.
  • the CPU also runs an application program, the socket sends the second storage address to the application program, and receives the third storage address sent by the application program.
  • the application program receives the second storage address, determines the data to be sent and the third storage address according to the second storage address, and sends the third storage address to the socket.
  • the socket further sends the third storage address to the TOE component.
  • the TOE component further receives the third storage address and acquires the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage location of the data to be sent in the memory does not change.
  • the network device further includes a central processing unit CPU.
  • the CPU sends a third storage address to the TOE component, where the third storage address indicates a storage block where the data to be sent determined by the CPU is located.
  • the TOE component also obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • FIG. 1 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a process for processing a TCP packet according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for processing a TCP packet according to an embodiment of the present application.
  • FIG. 3A is a schematic diagram of a storage manner of TCP packets according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another process for processing a TCP packet according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an mbuf according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an mbuf chain according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a TOE component according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • the terms "first" and "second" in the specification and claims of the present application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • for example, the first storage address and the second storage address are used to distinguish different storage addresses, rather than to specify a specific order of the storage addresses.
  • the network device 100 includes a packet processing engine (PPE) 101, a buffer management unit (BMU) 102, a memory 103, a network processor (NP) 104, a TOE component 105, and a CPU 106.
  • PPE 101, BMU 102, memory 103, NP 104, TOE component 105 and CPU 106 communicate with each other through bus 107.
  • PPE 101, BMU 102, NP 104, and TOE component 105 can be integrated on the same chip, or they can be deployed on different chips.
  • the network device 100 may be a gateway, a router, a bridge, a wireless access point, a switch, a firewall, or the like.
  • the network device 100 is capable of processing TCP messages.
  • FIG. 2 is a schematic diagram of the network device 100 processing a TCP packet according to an embodiment of the present application.
  • the memory 103 is divided into a plurality of storage areas such as a BMU storage area, a TCP receiving buffer area, a user mode storage area, and a TCP sending buffer area for use by different components.
  • Figure 2 takes the case where PPE 101, BMU 102, NP 104, and TOE component 105 are deployed separately on different chips as an example.
  • the TOE component 105 in FIG. 2 is divided into a TOE receiving module 105-1 and a TOE transmitting module 105-2.
  • the TOE component 105 may also be divided into more modules, or not divided into modules at all.
  • the process of transmitting a TCP load or a complete TCP message is indicated by a thick solid line with an arrow; the process of transmitting a message header is indicated by a thin solid line with an arrow; the process of transmitting other control messages is indicated by a dotted line with an arrow.
  • the process of the network device 100 receiving a TCP packet includes steps S201-S208.
  • the PPE 101 requests buffer space from the BMU 102, and the BMU 102 allocates buffer space for the PPE 101 according to the request.
  • the allocated buffer space is called a BMU storage area in this embodiment.
  • the BMU storage area usually includes multiple storage blocks, and each storage block has a corresponding storage address, and the storage block can be found by using the storage address.
  • the PPE 101 receives the TCP packet A, writes the TCP packet A into the BMU storage area, and sends a first storage address to the NP 104 to notify the NP 104 that there is a new TCP packet to be processed.
  • the first storage address is a storage address of the TCP packet A.
  • the first storage address indicates a storage block of the TCP packet A in the BMU storage area.
  • TCP message A is any TCP message.
  • the TCP packet includes a packet header and a payload, and the packet header includes a Layer 2 header, a Layer 3 header, and a TCP header.
  • NP 104 reads the packet header A of the TCP packet A from the BMU storage area according to the first storage address, and according to the information in the packet header A, such as the 5-tuple (source IP address, source port number, protocol type, destination IP address, destination port number), and the flow table stored in NP 104, determines that the TCP packet A is to be processed by the TOE component.
  • Each entry in the flow table stores a correspondence between a 5-tuple and an operation.
  • NP 104 looks up the flow table according to the 5-tuple in the packet header to get the corresponding operation. The operation can be "forward to the CPU" or "forward to the TOE component". When the operation is "forward to the TOE component", it indicates that the TCP packet A is to be processed by the TOE component.
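The flow-table lookup described above can be sketched as follows. This is a hedged illustration: the patent does not give a table layout, a real NP would typically use a hash table or TCAM rather than a linear scan, and all names (`five_tuple`, `flow_lookup`) are invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* The 5-tuple named in the text: source/destination IP address,
 * source/destination port, and protocol type. */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
} five_tuple;

/* Each flow-table entry maps a 5-tuple to an operation: forward the
 * packet to the CPU or to the TOE component. */
typedef enum { TO_CPU, TO_TOE } fwd_action;

typedef struct {
    five_tuple key;
    fwd_action act;
} flow_entry;

/* Field-by-field comparison (memcmp would also compare padding). */
static int tuple_eq(const five_tuple *a, const five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Look up the operation for a packet's 5-tuple; packets with no
 * matching entry default to the CPU in this sketch. */
static fwd_action flow_lookup(const flow_entry *tbl, size_t n,
                              const five_tuple *t)
{
    for (size_t i = 0; i < n; i++)
        if (tuple_eq(&tbl[i].key, t))
            return tbl[i].act;
    return TO_CPU;
}
```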
  • the NP 104 sends the first storage address to the TOE receiving module 105-1, the TOE receiving module 105-1 obtains the first storage address, and reads the TCP packet from the BMU storage area according to the first storage address. A, and process the TCP packet A.
  • processing the TCP packet A includes calculating a checksum of the TCP packet A, separating the packet header A and the TCP load A of the TCP packet A, and storing the TCP load A in a temporary buffer (not shown in the figure); TCP-related protocol processing is performed according to the packet header A, such as synchronization, acknowledgement, out-of-order reordering, and the like.
  • after processing the TCP message A, the TOE receiving module 105-1 writes the TCP load A of the TCP message A into the TCP receiving buffer, and sends the second storage address to the socket of the CPU.
  • the second storage address is the storage address of the TCP load A in the TCP receiving buffer.
  • the TOE receiving module 105-1 triggers the BMU 102 to release the storage space corresponding to the first storage address.
  • the socket acquires the second storage address.
  • the socket reads the TCP load A from the TCP receive buffer according to the second storage address.
  • the socket writes the TCP load A into the specified storage location of the application in the user mode storage area, and the specified storage location corresponds to the third storage address.
  • the process in which the network device 100 sends a TCP packet includes steps S209-S216.
  • the application program calls the sending data interface of the socket, and sends the third storage address to the socket.
  • This embodiment is described by using the TCP load A as an example.
  • before calling the socket's send data interface, the application may or may not have processed the TCP load A.
  • the socket reads the TCP load A from the user mode storage area according to the third storage address, writes the TCP load A into the TCP sending buffer, and sends the fourth storage address to the TOE sending module 105-2.
  • the TOE sending module 105-2 applies for a buffer space from the BMU 102, and obtains a storage block corresponding to the fifth storage address.
  • the TOE sending module 105-2 reads the TCP load A from the TCP sending buffer according to the fourth storage address, and encapsulates a TCP header for the TCP load A.
  • the execution order of steps S211 and S212 is not limited.
  • the TOE sending module 105-2 writes the TCP load A encapsulated with the TCP header into the storage block corresponding to the fifth storage address in the BMU storage area, and sends the fifth storage address to the NP 104.
  • NP 104 obtains the TCP load A with the TCP header encapsulated from the BMU storage area according to the fifth storage address, adds a Layer 3 header and a Layer 2 header to it to obtain a TCP packet B, and sends the fifth storage address to PPE 101.
  • a TCP packet B obtained by adding a Layer 3 header and a Layer 2 header to the load A that encapsulates the TCP header is still stored in a storage block corresponding to the fifth storage address.
  • PPE 101 receives the fifth storage address sent by NP 104 and reads the TCP packet B from the BMU storage area according to the fifth storage address and sends it.
  • the PPE 101 instructs the BMU 102 to release the storage block corresponding to the fifth storage address.
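Step S212's "encapsulates a TCP header for the TCP load" can be illustrated with a minimal header-prepend sketch. This example carries assumptions: the struct below follows the standard 20-byte TCP header layout, but network byte order, options, and checksum computation are omitted, and the function name is invented.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Standard 20-byte TCP header, options omitted. Byte order and
 * checksum handling are left out of this sketch. */
typedef struct {
    uint16_t src_port, dst_port;
    uint32_t seq, ack;
    uint8_t  data_off;   /* upper 4 bits: header length in 32-bit words */
    uint8_t  flags;
    uint16_t window;
    uint16_t checksum;
    uint16_t urgent;
} tcp_hdr;

/* Prepend a TCP header in front of a payload that already sits in a
 * buffer with headroom, so the payload itself is never copied; this
 * mirrors the "data stays in place" idea used throughout the text.
 * Returns the length of the resulting TCP segment. */
static size_t encap_tcp(uint8_t *buf, size_t headroom, size_t pay_len,
                        const tcp_hdr *h)
{
    memcpy(buf + headroom - sizeof *h, h, sizeof *h);
    return sizeof *h + pay_len;
}
```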
  • the TOE receiving module 105-1 and the TOE sending module 105-2 perform TCP-related processing.
  • the CPU 106 only needs to process the load of the TCP messages, which reduces the processing burden of the CPU 106.
  • the process of processing TCP packets shown in FIG. 2 requires dividing the memory 103 into different storage areas for different components, and each component needs to frequently read and write TCP loads or complete TCP packets.
  • the process shown in FIG. 2 therefore has the problems of wasting memory resources and memory bandwidth.
  • this application further provides another method for processing a TCP packet, which is performed by a network device including a TOE component 105.
  • the method includes at least steps S301-S303.
  • the TOE component 105 obtains a first storage address.
  • the first storage address is an address of a first storage block in the memory, and the first storage block stores a target TCP packet, where the target TCP packet includes a packet header and a TCP payload.
  • the target TCP packet refers to any pending TCP packet.
  • the packet header includes a Layer 2 header, a Layer 3 header, and a TCP header.
  • the TCP payload is the payload of the target TCP packet.
  • the layer 2 header includes information related to the layer 2 protocol, and the layer 3 header may include information related to the layer 3 protocol.
  • the layer 2 protocol may be, for example, an Ethernet protocol, a Spanning Tree Protocol (STP), a Link Aggregation Control Protocol (LACP), or the like.
  • the Layer 3 protocol may be, for example, the Internet Protocol (IP), the Internet Group Management Protocol, or the like.
  • Each storage block has a corresponding storage address, and the storage block can be found by using the storage address.
  • the storage block includes a data description of the TCP packet, a packet header of the TCP packet, and a TCP load.
  • the data description records the offset value of the TCP packet header relative to the storage block and the length of the TCP packet header.
  • the packet header and the TCP load can be obtained from the storage block by using the offset value and the length of the TCP packet header.
  • FIG. 3A shows M storage blocks, and each storage block stores one TCP packet.
  • Each memory block includes a data description, a message header, and a TCP payload.
  • the data description of a TCP packet includes an offset value, the total length of the TCP packet, and the length of the packet header of the TCP packet.
  • the length of the packet header can be the total length of the packet header, or it can include the Layer 2 header length, the Layer 3 header length, and the TCP header length.
  • the offset value refers to the offset of the Layer 2 header from the start byte of the storage block. For example, in FIG. 3A, the offset value is 200 bytes.
  • the Layer 2 header can be read from the 201st byte of the storage block.
  • the size of each storage block is 2000 bytes, which is larger than the length of the TCP message, so one TCP message can be stored in one storage block.
  • if the storage block is small, for example, 128 bytes, a TCP packet may occupy multiple storage blocks.
  • the header of the TCP packet is not included in the storage blocks other than the first storage block.
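The storage-block layout of FIG. 3A can be sketched as a C struct. This is illustrative only: field and type names are invented, the 2000-byte block size is the example value from the text, and in this sketch the offset is taken relative to the data area of the block rather than to the data description.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical data description, following the text above: the offset
 * of the Layer 2 header, the total length of the TCP packet, and the
 * combined length of the packet header (L2 + L3 + TCP). */
typedef struct {
    uint32_t offset;
    uint32_t total_len;
    uint32_t hdr_len;
} data_desc;

enum { BLOCK_SIZE = 2000 };  /* example block size from the text */

typedef struct {
    data_desc desc;
    uint8_t   bytes[BLOCK_SIZE];
} storage_block;

/* Locate the packet header inside the block. Nothing is copied; the
 * TCP load stays where it is, which is the point of the scheme. */
static const uint8_t *get_header(const storage_block *b, uint32_t *hdr_len)
{
    *hdr_len = b->desc.hdr_len;
    return b->bytes + b->desc.offset;
}

/* The TCP load immediately follows the header in the same block. */
static const uint8_t *get_payload(const storage_block *b, uint32_t *pay_len)
{
    *pay_len = b->desc.total_len - b->desc.hdr_len;
    return b->bytes + b->desc.offset + b->desc.hdr_len;
}
```

Reading only `get_header`'s span is exactly what lets the TOE component run the protocol processing without ever touching the payload bytes.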
  • the TOE component receives the first storage address sent by PPE 101.
  • the TOE component 105 obtains a packet header of the target TCP packet from the first storage block according to the first storage address.
  • the TOE component does not need to process the TCP load of the TCP packet when performing TCP protocol stack processing; therefore, it only obtains the packet header of the target TCP packet.
  • the TOE component obtains the packet header according to the offset value and the length of the packet header recorded in the first storage block.
  • the TOE component performs TCP-related protocol processing according to the packet header.
  • the TOE component does not need to migrate TCP packets or TCP loads from one buffer to another during TCP-related protocol processing; that is, while the TOE component performs TCP-related protocol processing, the TCP load of the target TCP message is not read out of the first storage block, and the storage position of the target TCP message in the memory remains unchanged.
  • the TOE component does not need to process the TCP load, and only needs to obtain the packet header of the TCP packet from the memory 103, which can avoid the waste of memory bandwidth caused by frequently reading the TCP load from the memory.
  • the embodiment of the present application may further include step S304.
  • the TOE component sends a second storage address to the central processing unit CPU.
  • the second storage address indicates a TCP load that can be processed by the CPU.
  • the second storage address is the first storage address.
  • the TCP load that can be processed by the CPU is the TCP load of the target TCP message.
  • the second storage address indicates a second storage block
  • the second storage block is a starting block of at least one storage block
  • the at least one storage block includes the first storage block.
  • the TCP load that can be processed by the CPU includes the load of multiple TCP messages
  • the load of the multiple TCP messages includes the TCP load of the target TCP message.
  • because the CPU only processes the TCP load, after receiving the second storage address, the CPU learns that the TCP load in the at least one storage block found according to the second storage address can be processed. Therefore, the second storage address is used to notify the CPU of the TCP load that can be processed by the CPU.
  • the CPU 106 includes a socket and an application program.
  • the socket is used to receive the second storage address sent by the TOE component and send the second storage address to the application program.
  • the application program determines whether to process the TCP load obtained through the second storage address.
  • the application program in the CPU 106 obtains and processes the TCP load according to the second storage address; in another embodiment, the application program in the CPU 106 does not need to process the TCP load.
  • after receiving the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component 105 generates a first storage chain according to the storage addresses of the multiple TCP packets; the second storage address sent to the central processing unit is the address of the first storage chain, that is, the address of the start block of the first storage chain. Further, as shown in FIG. 3, the embodiment of the present application may further include step S305.
  • the TOE component receives the third storage address sent by the CPU.
  • the third storage address indicates a storage block where the data to be sent determined by the CPU is located, and thus indicates the data to be sent.
  • the data to be sent may include a TCP load that has been processed by an application program in the CPU 106, or a TCP load that the application program in the CPU 106 considers it does not need to process.
  • the data to be sent may include a TCP load of one or more TCP messages.
  • the data to be sent includes the TCP load in S301.
  • the data to be sent may not include the TCP load in S301.
  • the third storage address is the first storage address. In this case, the third storage address only indicates the first storage block, that is, the data to be sent is the TCP load of the target TCP packet in S301.
  • the third storage address indicates a third storage block, and the third storage block is a starting block of at least one storage block.
  • the at least one storage block may include the first storage block or may not include the first storage block.
  • when the third storage address indicates a plurality of storage blocks including the first storage block, and the first storage block is the starting block of the plurality of storage blocks, the third storage address is the first storage address.
  • when the third storage address indicates a plurality of storage blocks including the first storage block, and the first storage block is not the starting block of the plurality of storage blocks, the third storage address is different from the first storage address.
  • the TOE component determines the data to be sent according to the third storage address, and the storage position of the data to be sent in the memory remains unchanged before the data to be sent is successfully sent.
  • the process of the TOE component processing the data to be sent includes steps S306-S309.
  • the TOE component determines window data according to the third storage address, and the window data is part or all of the data to be sent.
  • window data refers to data that the TOE component can send at one time. The process of determining window data by the TOE component will be described in detail later.
  • the TOE component sends a fourth storage address of the window data; the fourth storage address indicates a storage block of the window data in the memory.
  • the storage position of the window data in the memory does not change.
  • taking the window data as the TCP load of the target TCP packet as an example, the TCP load is always stored in the first storage block before it is sent successfully.
  • the TOE component receives a TCP acknowledgement message, which indicates that the window data is sent successfully.
  • the data to be sent may need to be sent in multiple batches before it is completely sent.
  • in that case, the TOE component needs to execute S306-S308 multiple times.
  • the TOE component notifies the memory to release a memory resource corresponding to the window data.
  • in one embodiment, after receiving a message indicating that the window data is sent successfully, the TOE component immediately informs the memory to release the memory resource corresponding to the window data. In another embodiment, after receiving TCP acknowledgement messages corresponding to all the window data included in the data to be sent, the TOE component notifies the memory to release the storage blocks corresponding to the fourth storage addresses. In the foregoing implementation manners of the present application, the storage block that stores a TCP packet when the network device receives the TCP packet is reused.
  • the network device does not need to allocate separate receive buffers and send buffers for the TOE component, and after the data to be sent determined by the CPU is sent, the TOE component notifies the memory 103 to release the storage blocks occupied by the data to be sent, thereby avoiding a waste of memory resources.
  • this embodiment of the present application further provides another schematic diagram of a network device 100 processing a TCP packet, and the processing process includes steps S401 to S413.
  • the process of transmitting a TCP load or a complete TCP packet is indicated by thick solid lines with arrows; the process of transmitting a packet header is indicated by thin solid lines with arrows; the processes of transmitting other control messages are indicated by dotted lines with arrows.
  • the processing process of FIG. 4 does not need to allocate corresponding storage areas in the memory 103 for different components.
  • the method shown in FIG. 3 and the process shown in FIG. 4 can refer to each other.
  • the PPE 101 requests buffer space from the BMU 102, and the BMU 102 allocates buffer space for the PPE 101 from the memory 103 according to the request; the allocated buffer space is called the BMU storage area in this embodiment.
  • PPE 101, NP 104, TOE component 105, and CPU 106 share the BMU storage area.
  • the BMU storage area is divided into multiple storage blocks, and each storage block corresponds to a storage address.
  • the plurality of storage blocks are the same size.
  • each storage block can store a complete TCP message.
  • the size of the storage block can be arbitrarily set.
  • the TCP packet can be stored in multiple storage blocks.
  • the PPE 101 receives the target TCP packet, performs a TCP check on the target TCP packet, writes the target TCP packet to the first storage block in the BMU storage area after the TCP check passes, and sends the first storage address of the first storage block to the NP 104.
  • the target TCP message includes a TCP header and a TCP payload.
  • the first storage block can be found by using the first storage address.
  • the first storage block includes a data description, a packet header of the target TCP packet, and a TCP load.
  • the data description includes the offset value of the packet header of the target TCP packet within the first storage block, the total length of the target TCP packet, and the length of the packet header.
  • in FIG. 3A, for example, the offset value is 200 bytes.
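As a rough illustration of how the data description locates the header and payload inside a storage block, the following Python sketch models the block as a byte buffer. The function name, the 200-byte offset, and the 54-byte combined header are assumptions based on the figures, not code from the patent.

```python
# Illustrative sketch: locating the header and payload of a TCP packet
# inside a storage block using the data description (offset, total length,
# header length). All names are hypothetical.

def locate_payload(block: bytes, offset: int, total_len: int, header_len: int):
    """Return (header, payload) slices of the TCP packet stored in `block`.

    offset     - offset of the packet header within the storage block
    total_len  - total length of the stored TCP packet
    header_len - combined length of the L2 + L3 + TCP headers
    """
    header = block[offset:offset + header_len]
    payload = block[offset + header_len:offset + total_len]
    return header, payload

# A 2000-byte storage block: a 200-byte data-description region, then a
# 54-byte header (14 B L2 + 20 B L3 + 20 B TCP) and a 26-byte payload.
block = bytes(200) + b"H" * 54 + b"P" * 26 + bytes(2000 - 200 - 80)
header, payload = locate_payload(block, offset=200, total_len=80, header_len=54)
```

Because the payload is addressed in place, no copy of the packet is needed to hand the payload boundaries to another component.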
  • the PPE 101 uses the mbuf structure to write the target TCP packet to the BMU storage area.
  • one storage block stores one mbuf structure.
  • FIG. 5 it is a schematic diagram of an mbuf structure (hereinafter abbreviated as mbuf).
  • the total length of the mbuf in Figure 5 is 1000 bytes, including a 20-byte mbuf header (corresponding to the data description in FIG. 3A).
  • one mbuf can therefore store up to 980 bytes of data, so when the length of the message is greater than 980 bytes, the message needs to be stored in 2 mbufs (that is, 2 storage blocks).
  • the total length of an mbuf can be other lengths, for example, 2000 bytes.
  • the header of an mbuf can also be other lengths, for example, 100 bytes.
  • the mbuf header can include multiple fields, where m_next points to the next mbuf that stores the same message.
  • m_nextpkt points to the first mbuf of another message.
  • m_len indicates the size of the data stored in the mbuf, for example, 80 bytes in Figure 5.
  • m_data is a pointer to the stored data.
  • m_type indicates the type of data contained in the mbuf; MT_header in Figure 5 indicates that the data contains a TCP header.
  • m_flags can be M_PKTHDR, 0, or M_EXT, where M_PKTHDR indicates that this mbuf is the first in the mbuf linked list, that is, the head of the linked list; 0 means that this mbuf contains only data; M_EXT indicates that this mbuf uses an external cluster to store larger data.
  • the header of the first mbuf of a message can also include m_pkthdr.len and m_pkthdr.rcvif, where m_pkthdr.len indicates the length of the message header and m_pkthdr.rcvif indicates a pointer to the interface structure; mbufs other than the first mbuf do not need m_pkthdr.len and m_pkthdr.rcvif.
  • the mbuf shown in Figure 5 includes a packet header, and the data part stores a 14-byte Layer 2 header, a 20-byte Layer 3 header, a 20-byte TCP header, and a 26-byte payload.
  • the gray part indicates the unoccupied bytes of the mbuf.
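The mbuf layout described above can be modeled informally as follows. This is an illustrative Python sketch, not BSD source code; the 1000-byte total and 20-byte header simply follow the Figure 5 example, and the helper name is hypothetical.

```python
# Illustrative model of the mbuf described around FIG. 5: a 1000-byte mbuf
# with a 20-byte header leaves 980 bytes for data, so a longer message
# spills into a second mbuf linked via m_next.

MBUF_TOTAL = 1000                   # assumed total mbuf size from FIG. 5
MBUF_HDR = 20                       # assumed mbuf header size
MBUF_DATA = MBUF_TOTAL - MBUF_HDR   # 980 usable bytes

class Mbuf:
    def __init__(self, data: bytes, first: bool = False):
        assert len(data) <= MBUF_DATA
        self.m_data = data           # stored bytes
        self.m_len = len(data)       # size of data stored in this mbuf
        self.m_next = None           # next mbuf of the same message
        self.m_nextpkt = None        # first mbuf of the next message
        self.m_flags = "M_PKTHDR" if first else "0"

def store_message(msg: bytes) -> Mbuf:
    """Split one message across as many mbufs as needed."""
    head = Mbuf(msg[:MBUF_DATA], first=True)
    cur, rest = head, msg[MBUF_DATA:]
    while rest:
        cur.m_next = Mbuf(rest[:MBUF_DATA])
        cur, rest = cur.m_next, rest[MBUF_DATA:]
    return head

chain = store_message(b"x" * 1200)   # 1200 > 980, so two mbufs are used
```

The head mbuf carries M_PKTHDR, matching the description of the linked-list head above.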
  • the NP 104 reads the header of the target TCP packet from the first storage block in the BMU storage area according to the first storage address, and determines, from the information in the packet header, that the target TCP packet needs to be processed by the TOE component 105.
  • the packet header of the target TCP packet includes a Layer 2 header, a Layer 3 header, and a TCP header.
  • whether the target TCP packet needs to be processed by the TOE component 105 may be determined according to the flow table and the flow characteristics indicated by the packet header.
  • Each entry in the flow table includes flow characteristics and corresponding actions.
  • the actions include forwarding to the CPU, forwarding to the outbound interface, or forwarding to the TOE component.
  • when the action corresponding to the flow characteristics of the target TCP packet is forwarding to the TOE component, the NP 104 determines that the target TCP packet needs to be processed by the TOE component 105.
  • the flow feature may be at least one of a flow identifier or a five-tuple (source IP address, destination IP address, source port number, destination port number, and transport layer protocol).
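The flow-table lookup described above can be sketched as a simple mapping from five-tuples to actions. The table contents, addresses, and function name here are purely hypothetical.

```python
# Minimal sketch (assumed structure) of the flow-table lookup performed by
# the NP 104: each entry maps a flow feature - here a five-tuple - to an
# action such as forwarding to the TOE component or the CPU.

flow_table = {
    # (src IP, dst IP, src port, dst port, protocol) -> action
    ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP"): "forward_to_toe",
    ("10.0.0.3", "10.0.0.4", 5678, 443, "TCP"): "forward_to_cpu",
}

def classify(five_tuple):
    """Return the configured action, defaulting to normal forwarding."""
    return flow_table.get(five_tuple, "forward_to_outbound_interface")

action = classify(("10.0.0.1", "10.0.0.2", 1234, 80, "TCP"))
```

A flow identifier could replace the five-tuple as the key without changing the lookup shape.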
  • the NP 104 sends the first storage address to the TOE component 105.
  • the TOE component 105 receives the first storage address, obtains a packet header of the target TCP packet from the first storage block according to the first storage address, and performs TCP-related protocol processing according to the packet header.
  • while the TOE component 105 performs TCP-related protocol processing according to the packet header, the storage position of the target TCP packet in the BMU storage area does not change, and the load of the target TCP packet is not read out of the first storage block by the TOE component 105.
  • the NP 104 sends the first storage address to the TOE receiving module 105-1 of the TOE component 105.
  • the TCP-related protocol processing includes one or more of the following processes: state transition, congestion control, out-of-order reordering, packet loss retransmission, round-trip time (RTT) calculation, and so on.
  • TCP-related protocol processing may use any algorithm or manner well known to those skilled in the art.
  • the TOE component 105 sends a second storage address to the CPU 106.
  • the second storage address indicates a TCP load that can be processed by the CPU.
  • the TOE component 105 sends the second storage address to the socket of the CPU 106 through the TOE receiving module 105-1.
  • in one embodiment, the second storage address is the first storage address; in another embodiment, the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block.
  • the TOE component 105 records the first storage address of the target TCP packet. After determining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component 105 generates a first storage chain according to the storage addresses of the multiple TCP packets, and sends the address of the first storage chain, that is, the address of the starting block of the first storage chain, to the CPU.
  • the plurality of TCP messages include the target TCP message.
  • the address of the first storage chain is the second storage address.
  • the address of the first storage chain is used by the CPU to obtain the TCP load in each storage block included in the first storage chain.
  • the first storage chain is an mbuf chain.
  • FIG. 6 it is a schematic diagram of the structure of an mbuf chain.
  • the mbuf chain indicates two TCP packets, namely, TCP packet A and TCP packet B.
  • TCP packet A occupies two mbufs and TCP packet B occupies one mbuf; the m_nextpkt pointer of the first mbuf of TCP packet A points to the first mbuf of TCP packet B, and the m_nextpkt of TCP packet B is null.
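The two-level chain of FIG. 6 can be sketched as follows. The field names mirror the description (m_next links mbufs of the same packet, m_nextpkt links the first mbufs of successive packets), while the lengths and helper function are illustrative assumptions.

```python
# Sketch of the mbuf chain of FIG. 6: TCP packet A occupies two mbufs,
# TCP packet B one mbuf; m_nextpkt of packet A's first mbuf points to the
# first mbuf of packet B, whose m_nextpkt stays None.

class Mbuf:
    def __init__(self, m_len):
        self.m_len = m_len
        self.m_next = None       # next mbuf of the same packet
        self.m_nextpkt = None    # first mbuf of the next packet

a1, a2, b1 = Mbuf(980), Mbuf(120), Mbuf(300)
a1.m_next = a2                   # packet A spans two mbufs
a1.m_nextpkt = b1                # chain head of packet B

def packet_lengths(head):
    """Walk the chain and return the stored length of each packet."""
    lengths = []
    while head is not None:
        total, m = 0, head
        while m is not None:
            total += m.m_len
            m = m.m_next
        lengths.append(total)
        head = head.m_nextpkt
    return lengths
```

Passing only the address of the chain head is what lets the CPU reach every payload in the chain without any data being copied.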
  • the socket in the CPU 106 receives the second storage address, and sends the second storage address to the application program.
  • the socket receives the address of the mbuf chain through the mbuf_recv interface (an interface, extended in this application, capable of receiving an mbuf chain address).
  • the application sends a third storage address to the socket.
  • the third storage address indicates a storage block where the data to be sent determined by the application program is located.
  • the data to be sent includes a TCP load of the target TCP packet. In another embodiment, the data to be sent does not include the TCP load of the target TCP packet.
  • the application program may process the TCP load indicated by the second storage address, for example, modify the content in the TCP load, or may not process the TCP load indicated by the second storage address.
  • when the data to be sent determined by the application includes the loads of multiple TCP packets, or when the data to be sent determined by the application is stored in multiple mbufs, the application generates a second storage chain according to the data to be sent, calls the socket's mbuf_send interface (an interface, extended in this application, capable of sending an mbuf chain address), and sends the address of the second storage chain to the socket through the mbuf_send interface.
  • the address of the second storage chain is an address of a start block of the second storage chain.
  • the second storage chain may be the same as or different from the first storage chain.
  • the address of the second storage chain is the third storage address.
  • S406 and S407 can be summarized as follows: the CPU 106 receives the second storage address, and determines, according to the second storage address, data to be sent and a third storage address of a storage block in which the data to be sent is located.
  • the data to be sent is all or part of the data in the TCP load indicated by the second storage address. Accordingly, the third storage address and the second storage address may be the same or different.
  • the data to be sent includes a TCP load of the target TCP packet.
  • the CPU 106 sends the third storage address to the TOE component 105.
  • the socket in the CPU 106 sends the third storage address to the TOE sending module 105-2 of the TOE component 105.
  • the TOE component 105 modifies the TCP header corresponding to each TCP load indicated by the third storage address.
  • the TOE sending module 105-2 finds each TCP load to be sent according to the third storage address, and modifies the TCP header corresponding to the TCP load as needed, for example, modifies the TCP port number in the TCP header.
  • S409 is an optional step of the present application.
  • the TOE component 105 determines window data according to the third storage address, where the window data is part or all of the data to be sent.
  • the TOE component 105 sends a fourth storage address to the NP 104, where the fourth storage address indicates a storage block of the window data in the memory.
  • the TOE sending module 105-2 in the TOE component 105 determines the window data according to the congestion window of the network device 100, the receiving window of the opposite end, and the data to be sent stored in the storage block corresponding to the third storage address.
  • the TOE sending module 105-2 first determines the amount of window data according to the congestion window of the network device 100 and the receiving window of the opposite end, and then determines the window data, according to that amount, from the data to be sent stored in the storage blocks corresponding to the third storage address.
  • the amount of window data can be a number of mbufs, or a number of bytes to be sent.
  • the fourth storage address is determined according to the window data transmitted each time.
  • for example, assume the data to be sent includes 5 mbufs, namely mbuf1, mbuf2, mbuf3, mbuf4, and mbuf5.
  • the TOE component 105 determines, according to the congestion window of the network device 100 and the receiving window of the peer, that it can send 3 mbufs the first time.
  • a third storage chain is then generated from mbuf1, mbuf2, and mbuf3, and the address of the third storage chain (that is, the storage address of the storage block where mbuf1 is located) is sent to the NP 104 as the fourth storage address.
  • for the next transmission, the TOE component 105 may compose the remaining two mbufs, namely mbuf4 and mbuf5, into a fourth storage chain, and send the address of the fourth storage chain (that is, the storage address of the storage block where mbuf4 is located) to the NP 104 as the fourth storage address.
  • when the window data is part of the data in one mbuf, the TOE component 105 may split the mbuf into multiple mbufs, where each of the newly split mbufs occupies a storage block.
  • the first mbuf in the plurality of mbufs includes the window data, and the fourth storage address is the storage address of the storage block in which the first mbuf is located.
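A minimal sketch of the window-data selection in the step above, under the assumption that the window amount is counted in whole mbufs bounded by the smaller of the congestion window and the peer's receive window; the function name and all numbers are illustrative.

```python
# Hedged sketch of window-data selection: take leading mbufs from the
# to-be-sent chain until min(congestion window, peer receive window) bytes
# would be exceeded.

def select_window(mbuf_lens, cwnd_bytes, rwnd_bytes):
    """Return how many leading mbufs fit into min(cwnd, rwnd) bytes."""
    budget = min(cwnd_bytes, rwnd_bytes)
    taken = 0
    for length in mbuf_lens:
        if length > budget:
            break
        budget -= length
        taken += 1
    return taken

# Five mbufs to send; a 3000-byte first window admits three of them,
# echoing the mbuf1-mbuf3 / mbuf4-mbuf5 split described above.
to_send = [980, 980, 980, 980, 500]
first_batch = select_window(to_send, cwnd_bytes=3000, rwnd_bytes=4000)
```

The untaken tail (here the last two mbufs) would form the next storage chain on a later transmission.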
  • the NP 104 receives the fourth storage address, modifies the Layer 3 header and Layer 2 header in the storage block corresponding to the fourth storage address, and obtains the modified TCP packet.
  • the modified TCP packet may be one or more.
  • the NP 104 sends the fourth storage address to the PPE 101.
  • the PPE 101 receives the fourth storage address, reads the modified TCP packet from the storage block corresponding to the fourth storage address, calculates the TCP checksum, adds the calculated TCP checksum to the modified TCP packet, and sends the modified TCP packet.
  • the modified TCP packet is still stored in a storage block corresponding to the fourth storage address.
  • the TOE component 105 (via the TOE receiving module 105-1) receives a TCP acknowledgement message, which the peer device uses to confirm that it has received the data.
  • the TOE component 105 determines that the window data is sent successfully according to the TCP confirmation message, and notifies the BMU 102 to release the storage block corresponding to the fourth storage address.
  • S414 is an implementation manner of S308 and S309.
  • the foregoing embodiments of the present application do not need to allocate independent sending buffers and receiving buffers for TCP connections, which saves a lot of storage resources. Assume that the method shown in FIG. 2 needs to allocate a 64 KB buffer for each TCP connection. Using the embodiment of the present application, when there are 10M data streams, 640 GB of memory space can be saved.
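A quick check of the savings figure above (10M connections, each sparing a 64 KB per-connection buffer), using decimal units as the text's round numbers suggest:

```python
# Sanity check of the stated memory saving: 10M connections x 64 KB each.
connections = 10_000_000
buffer_kb = 64
saved_gb = connections * buffer_kb / 1_000_000   # KB -> GB (decimal units)
```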
  • the length of an Ethernet packet is 64 bytes to 1500 bytes, and the length of the packet header is 20 bytes to 64 bytes.
  • after the PPE 101 writes a TCP packet to the BMU storage area, the NP 104 and TOE component 105, when processing the TCP packet, only need to read the header of the TCP packet instead of frequently reading the load of the TCP packet from the BMU storage area, which can reduce the access bandwidth of the memory 103 and improve processing efficiency.
  • the present application further provides a TOE component 700, which includes an interface 701 and a processor 702.
  • the interface 701 is used for the TOE component 700 to communicate with other components of the network device.
  • the processor 702 obtains a first storage address through the interface 701.
  • the first storage address is an address of a first storage block in the memory, and the first storage block stores a target TCP packet.
  • the target TCP packet includes a packet header and a TCP payload.
  • the processor 702 obtains the packet header from the first storage block according to the first storage address.
  • the processor 702 performs TCP-related protocol processing according to the packet header; in the process of performing TCP-related protocol processing according to the packet header, the TCP load is not read out of the first storage block by the TOE component.
  • the processor 702 also sends a second storage address to the central processing unit CPU through the interface 701; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block.
  • after the processor 702 obtains the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, it generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the starting block of the storage chain.
  • the processor 702 further receives, via the interface 701, a third storage address sent by the CPU, where the third storage address indicates a storage block determined by the CPU where data to be sent is located, and the data to be sent includes the TCP load.
  • the processor 702 also acquires the data to be sent according to the third storage address, and the storage position of the data to be sent in the memory remains unchanged until the data to be sent is successfully sent.
  • the processor 702 further receives, via the interface 701, a third storage address sent by the central processing unit CPU, where the third storage address indicates a storage block where the data to be sent determined by the CPU is located.
  • the processor 702 acquires the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • This application further provides a chip, which includes a TOE component and a network processor as shown in FIG. 7.
  • the network processor may be the NP 104 in FIG. 1, FIG. 2, or FIG. 3. Further, the chip may further include one or all of PPE 101 and BMU 102 in FIG. 1.
  • the network device includes a TOE component 801, a memory 802, and a CPU 803.
  • the CPU 803 runs a socket and an application program.
  • the TOE component 801, the memory 802, and the CPU 803 communicate with each other through a bus 804.
  • the memory 802 stores a transmission control protocol TCP message.
  • the TOE component 801 obtains a first storage address, where the first storage address is an address of a first storage block in the memory, and the first storage block stores a target TCP packet, where the target TCP packet includes a packet header and a TCP payload.
  • the TOE component 801 also obtains the packet header from the first storage block according to the first storage address, and performs TCP-related protocol processing according to the packet header; in the process of performing TCP-related protocol processing according to the packet header, the TCP load is not read out of the first storage block by the TOE component.
  • the TOE component 801 also sends a second storage address to the CPU 803; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block.
  • the CPU 803 receives the second storage address, and determines data to be sent and a third storage address according to the second storage address.
  • the third storage address indicates a storage block where the data to be sent is located, and the data to be sent includes the TCP load.
  • after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component 801 generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the start block of the storage chain.
  • the TOE component 801 sends the second storage address to the socket.
  • the socket receives the second storage address, sends the second storage address to the application program, and receives the third storage address sent by the application program.
  • the application program receives the second storage address, determines the data to be sent and the third storage address according to the second storage address, and sends the third storage address to the socket.
  • the socket also sends the third storage address to the TOE component 801.
  • the TOE component 801 also receives the third storage address and obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • the CPU 803 sends a third storage address to the TOE component, and the third storage address indicates a storage block where the data to be sent determined by the CPU is located.
  • the TOE component 801 also acquires the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
  • when the TOE component, chip, and network device of the present application process a TCP packet, they only need to read the header of the TCP packet instead of frequently reading the load of the TCP packet from the memory, which can reduce the memory access bandwidth and improve processing efficiency.


Abstract

This application discloses a method for processing Transmission Control Protocol (TCP) packets, a TOE component, and a network device including the TOE component, which are used to reduce the burden on the CPU of a network device and save the storage resources of the network device when processing TCP packets. A TCP offload engine (TOE) component obtains a first storage address, where the first storage address is the address of a first storage block in a memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP payload. The TOE component obtains the packet header from the first storage block according to the first storage address. The TOE component performs TCP-related protocol processing according to the packet header, where, during the TCP-related protocol processing performed by the TOE component according to the packet header, the TCP payload is not read out of the first storage block by the TOE component.

Description

Method for processing TCP packets, TOE component, and network device
This application claims priority to Chinese Patent Application No. 201811134308.X, filed with the China Intellectual Property Office on September 27, 2018 and entitled "Method for processing TCP packets, TOE component, and network device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computer communications, and in particular to a method for processing Transmission Control Protocol (TCP) packets, a TCP offload engine (TOE) component, and a network device.
Background
Currently, TCP traffic accounts for 90% of total Internet traffic. When a network device processes TCP packets through a software protocol stack, a large amount of central processing unit (CPU) resources is consumed.
To improve the processing efficiency of TCP packets, TOE technology was introduced. TOE technology processes the TCP/IP protocol stack with dedicated hardware, thereby greatly reducing the processing burden on the CPU. The dedicated hardware circuit that processes the TCP/IP protocol stack is called a TOE component.
Summary
This application provides a method for processing TCP packets, a TOE component, and a chip and network device including the TOE component, which are used to improve the processing efficiency of TCP packets.
A first aspect of this application provides a method for processing TCP packets. A TCP offload engine (TOE) component obtains a first storage address, where the first storage address is the address of a first storage block in a memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP payload. The TOE component obtains the packet header from the first storage block according to the first storage address. The TOE component performs TCP-related protocol processing according to the packet header; during the TCP-related protocol processing performed by the TOE component according to the packet header, the TCP payload is not read out of the first storage block by the TOE component.
In this application, when performing TCP-related protocol processing according to the packet header of the target TCP packet, the TOE component does not need to read the TCP payload of the packet out of the first storage block in which the target TCP packet is stored, which avoids the low processing efficiency caused by frequently reading the target TCP packet during its processing. Moreover, because the TCP payload does not need to be read out of the first storage block during the TCP-related protocol processing, this application does not need to allocate separate TCP buffers for different TCP processes, which saves the storage resources of the memory.
In one implementation, the TOE component further sends a second storage address to a central processing unit (CPU); the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block.
In this implementation, the TOE component sends the second storage address to the CPU so that the CPU can determine the data to be sent according to the second storage address, which avoids the waste of interface resources between the TOE component and the CPU that would be caused by directly sending TCP packets or TCP payloads to the CPU.
In one implementation, after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the starting block of the storage chain.
In this implementation, the TOE component generates a storage chain from the storage addresses of multiple TCP packets and sends the address of the starting block of the storage chain to the CPU. In this way, the TOE component can send the storage addresses of multiple TCP packets to the CPU at once, which further saves the interface resources between the TOE component and the CPU and improves processing efficiency.
Further, the TOE component receives a third storage address sent by the CPU, where the third storage address indicates the storage block, determined by the CPU, in which the data to be sent is located, and the data to be sent includes the TCP payload. The TOE component obtains the data to be sent according to the third storage address, and before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
In this implementation, the data to be sent includes the TCP payload of the target TCP packet, and before the data to be sent is successfully sent, its storage position in the memory remains unchanged. This means that, in this implementation, only one storage block needs to be allocated for a TCP packet during its processing. Therefore, this implementation saves the storage space of the memory.
In another implementation, the TOE component receives a third storage address sent by the central processing unit (CPU), where the third storage address indicates the storage block, determined by the CPU, in which the data to be sent is located. The TOE component obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
In this implementation, the data to be sent by the CPU does not include the TCP payload of the target TCP packet. Before the data to be sent is successfully sent, its position in the memory remains unchanged; that is, the data can be sent without allocating an additional send buffer for it, which saves memory resources.
A second aspect of this application provides a TCP offload engine (TOE) component, which includes an interface and a processor. The processor obtains a first storage address through the interface; the first storage address is the address of a first storage block in a memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP payload. The processor obtains the packet header from the first storage block according to the first storage address and performs TCP-related protocol processing according to the packet header. During the TCP-related protocol processing performed by the processor according to the packet header, the TCP payload is not read out of the first storage block by the TOE component.
In one implementation, the processor is further configured to send a second storage address to a central processing unit (CPU) through the interface; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block.
In one implementation, after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the processor further generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the starting block of the storage chain.
In one implementation, the processor further receives, through the interface, a third storage address sent by the CPU, where the third storage address indicates the storage block, determined by the CPU, in which the data to be sent is located, and the data to be sent includes the TCP payload. The processor further obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
In one implementation, the processor further receives, through the interface, a third storage address sent by the central processing unit (CPU), where the third storage address indicates the storage block, determined by the CPU, in which the data to be sent is located. The processor obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
A third aspect of this application provides a chip, which includes the TOE component of the second aspect or any implementation of the second aspect and a network processor. The chip may further include other components.
A fourth aspect of this application provides a network device, which includes the chip provided in the third aspect and a central processing unit (CPU).
A fifth aspect of this application provides another network device, which includes a TCP offload engine (TOE) component and a memory. The memory stores Transmission Control Protocol (TCP) packets. The TOE component obtains a first storage address, where the first storage address is the address of a first storage block in the memory, the first storage block stores a target TCP packet, and the target TCP packet includes a packet header and a TCP payload. The TOE component further obtains the packet header from the first storage block according to the first storage address and performs TCP-related protocol processing according to the packet header. During the TCP-related protocol processing performed by the TOE component according to the packet header, the TCP payload is not read out of the first storage block by the TOE component.
In one implementation, the network device further includes a central processing unit (CPU). The TOE component further sends a second storage address to the CPU; the second storage address is the first storage address, or the second storage address indicates a second storage block, where the second storage block is the starting block of at least one storage block, and the at least one storage block includes the first storage block. The CPU receives the second storage address and determines, according to the second storage address, the data to be sent and a third storage address, where the third storage address indicates the storage block in which the data to be sent is located, and the data to be sent includes the TCP payload.
Further, after obtaining the storage addresses of multiple TCP packets of the data stream to which the target TCP packet belongs, the TOE component generates a storage chain according to the storage addresses of the multiple TCP packets; the second storage address is the address of the starting block of the storage chain.
Further, the CPU runs a socket, and the TOE component further sends the second storage address to the socket; the socket is used to receive the second storage address.
Further, the CPU also runs an application program. The socket sends the second storage address to the application program and receives the third storage address sent by the application program. The application program receives the second storage address, determines the data to be sent and the third storage address according to the second storage address, and sends the third storage address to the socket.
Further, the socket also sends the third storage address to the TOE component. The TOE component receives the third storage address and obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
In another implementation, the network device further includes a central processing unit (CPU). The CPU sends a third storage address to the TOE component, where the third storage address indicates the storage block, determined by the CPU, in which the data to be sent is located. The TOE component further obtains the data to be sent according to the third storage address; before the data to be sent is successfully sent, the storage position of the data to be sent in the memory remains unchanged.
For the beneficial effects of the second to fifth aspects and their implementations, refer to the descriptions of the first aspect and its implementations.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a network device according to an embodiment of this application;
FIG. 2 is a schematic diagram of a process of processing TCP packets according to an embodiment of this application;
FIG. 3 is a flowchart of a method for processing TCP packets according to an embodiment of this application;
FIG. 3A is a schematic diagram of a storage manner of TCP packets according to an embodiment of this application;
FIG. 4 is a schematic diagram of another process of processing TCP packets according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of an mbuf according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an mbuf chain according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a TOE component according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of a network device according to an embodiment of this application.
Detailed Description
The terms "first" and "second" in the specification and claims of the present invention are used to distinguish different objects, not to describe a specific order of the objects. For example, the first storage address and the second storage address are used to distinguish different storage addresses, not to specify a particular order among them. There is no logical or temporal dependency among "first", "second", ..., "n-th".
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings.
An embodiment of this application provides a network device 100. As shown in FIG. 1, the network device 100 includes a packet processing engine (PPE) 101, a buffer management unit (BMU) 102, a memory 103, a network processor (NP) 104, a TOE component 105, and a CPU 106. The PPE 101, BMU 102, memory 103, NP 104, TOE component 105, and CPU 106 communicate with each other through a bus 107. A socket and an application program run on the CPU 106. The functions of these components are described in detail below. The PPE 101, BMU 102, NP 104, and TOE component 105 may be integrated on the same chip or deployed on different chips. The network device 100 may be a gateway, router, bridge, wireless access point, switch, firewall, or the like.
The network device 100 can process TCP packets. FIG. 2 is a schematic diagram of a process in which the network device 100 processes TCP packets according to an embodiment of this application. In the network device, the memory 103 is divided into multiple storage areas, such as a BMU storage area, a TCP receive buffer, a user-mode storage area, and a TCP send buffer, for use by different components. To describe the TCP packet processing process more clearly, FIG. 2 takes as an example the case where the PPE 101, BMU 102, NP 104, and TOE component 105 are deployed on different chips. Also, for ease of description, the TOE component 105 in FIG. 2 is divided into a TOE receiving module 105-1 and a TOE sending module 105-2. In practical applications, the TOE component 105 may be divided into more modules or not divided at all. In FIG. 2, the process of transmitting a TCP payload or a complete TCP packet is indicated by thick solid lines with arrows; the process of transmitting a packet header is indicated by thin solid lines with arrows; the processes of transmitting other control messages are indicated by dotted lines with arrows.
As shown in FIG. 2, the process in which the network device 100 receives a TCP packet includes steps S201-S208.
In S201, the PPE 101 requests buffer space from the BMU 102, and the BMU 102 allocates buffer space for the PPE 101 according to the request; the allocated buffer space is called the BMU storage area in this embodiment.
The BMU storage area usually includes multiple storage blocks, and each storage block has a corresponding storage address through which the storage block can be found.
In S202, the PPE 101 receives TCP packet A, writes TCP packet A into the BMU storage area, and sends a first storage address to the NP 104 to notify the NP 104 that a new TCP packet needs to be processed. The first storage address is the storage address of TCP packet A and indicates the storage block of TCP packet A in the BMU storage area. TCP packet A is an arbitrary TCP packet. A TCP packet includes a packet header and a payload, and the packet header includes a Layer 2 header, a Layer 3 header, and a TCP header.
In S203, the NP 104 reads packet header A of TCP packet A from the BMU storage area according to the first storage address, and determines, according to information in packet header A, for example the five-tuple (source IP address, source port number, protocol type, destination IP address, destination port number), and the flow table stored by the NP 104, that TCP packet A is to be processed by the TOE component. Each entry of the flow table stores the correspondence between a five-tuple and an action. The NP 104 looks up the flow table according to the five-tuple in the packet header to obtain the corresponding action, which may be forwarding, sending to the CPU, sending to the TOE component, and so on. When the action is sending to the TOE component, it indicates that TCP packet A is to be processed by the TOE component. In S204, the NP 104 sends the first storage address to the TOE receiving module 105-1, and the TOE receiving module 105-1 obtains the first storage address, reads TCP packet A from the BMU storage area according to the first storage address, and processes TCP packet A.
Processing TCP packet A includes calculating the checksum of TCP packet A, separating packet header A and TCP payload A of TCP packet A, storing TCP payload A in a temporary buffer (not shown in the figure), and performing TCP-related protocol processing according to packet header A, for example synchronization, acknowledgement, and out-of-order reordering.
In S205, after processing TCP packet A, the TOE receiving module 105-1 writes TCP payload A of TCP packet A into the TCP receive buffer and sends to the socket of the CPU the storage address of TCP payload A in the TCP receive buffer, namely, a second storage address.
In S206, the TOE receiving module 105-1 triggers the BMU 102 to release the storage space corresponding to the first storage address.
In S207, the socket obtains the second storage address. When the application program calls the receive-data interface of the socket, the socket reads TCP payload A from the TCP receive buffer according to the second storage address.
In S208, the socket writes TCP payload A into the storage position specified by the application program in the user-mode storage area; the specified storage position corresponds to a third storage address.
Further, as shown in FIG. 2, the process in which the network device 100 sends a TCP packet includes steps S209-S216.
In S209, the application program calls the send-data interface of the socket and sends the third storage address to the socket.
This embodiment is described by taking the sending of TCP payload A as an example. Before calling the send-data interface of the socket, the application program may or may not have processed TCP payload A.
In S210, the socket reads TCP payload A from the user-mode storage area according to the third storage address, writes TCP payload A into the TCP send buffer, and sends a fourth storage address to the TOE sending module 105-2, where the fourth storage address indicates the storage position of TCP payload A in the TCP send buffer.
In S211, the TOE sending module 105-2 requests buffer space from the BMU 102 and obtains the storage block corresponding to a fifth storage address.
In S212, the TOE sending module 105-2 reads TCP payload A from the TCP send buffer according to the fourth storage address and encapsulates a TCP header for TCP payload A.
There is no restriction on the execution order of steps S211 and S212.
In S213, the TOE sending module 105-2 writes TCP payload A encapsulated with the TCP header into the storage block corresponding to the fifth storage address in the BMU storage area, and sends the fifth storage address to the NP 104.
In S214, the NP 104 obtains TCP payload A encapsulated with the TCP header from the BMU storage area according to the fifth storage address, adds a Layer 3 header and a Layer 2 header to it to obtain TCP packet B, and sends the fifth storage address to the PPE 101.
In this embodiment, TCP packet B, obtained after adding the Layer 3 header and Layer 2 header to payload A encapsulated with the TCP header, is still stored in the storage block corresponding to the fifth storage address.
In S215, after receiving the fifth storage address sent by the NP 104, the PPE 101 reads TCP packet B from the BMU storage area according to the fifth storage address and sends it.
In S216, after confirming that TCP packet B has been sent successfully, the PPE 101 notifies the BMU 102 to release the storage block corresponding to the fifth storage address.
In the TCP packet processing process shown in FIG. 2 of this application, TCP-related processing is performed by the TOE receiving module 105-1 and the TOE sending module 105-2, and the CPU 106 only needs to process the payload of the TCP packet, which reduces the processing burden on the CPU 106.
However, the TCP packet processing process shown in FIG. 2 requires that different storage areas be partitioned in the memory 103 for different components, and each component needs to frequently read and write TCP payloads or complete TCP packets. Therefore, the process shown in FIG. 2 wastes memory resources and memory bandwidth.
基于图1所示的网络设备，如图3所示，本申请进一步提供了另一种处理TCP报文的方法，该处理TCP报文的方法由TOE组件105执行。该方法至少包括步骤S301-S303。
在S301中,TOE组件105获取第一存储地址。该第一存储地址为存储器中第一存储块的地址,该第一存储块存储目标TCP报文,该目标TCP报文包括报文头和TCP负荷。
该目标TCP报文是指任意一个待处理的TCP报文,该报文头包括二层头,三层头和TCP头,该TCP负荷为目标TCP报文的负荷。该二层头包括二层协议相关的信息,该三层头可以包括三层协议相关的信息。该二层协议例如可以是以太网协议,生成树协议(Spanning Tree Protocol,STP),链路聚合控制协议(Link aggregation Control Protocol,LACP)等。该三层协议例如可以是网际协议(Internet Protocol,IP),因特网组控制协议等。
其中,每个存储块都有对应的存储地址,通过该存储地址可查找到该存储块。当存储块中存储有TCP报文时,该存储块包括该TCP报文的数据描述,该TCP报文的报文头以及TCP负荷。该数据描述记录了该TCP报文头的相对该存储块的偏移值以及该TCP报文头的长度,通过该偏移值以及该TCP报文头的长度,可以从该存储块中获取到该TCP负荷。
在一个实施方式中，TCP报文在存储器103中的一种存储方式如图3A所示。图3A中示出了M个存储块，每个存储块存储一个TCP报文。每个存储块包括数据描述，报文头以及TCP负荷。相应地，一个TCP报文的数据描述包括偏移值，该TCP报文的总长度，以及该TCP报文的报文头的长度。该报文头的长度可以是报文头的总长度，也可以包括二层头长度，三层头长度以及TCP头长度。其中，偏移值指二层头相对该存储块的起始字节的偏移大小。例如，图3A中，该偏移值为200字节。当存储块的大小为2000字节，偏移值为200时，则从该存储块的第201个字节开始可以读出二层头。图3A中，每个存储块的大小为2000字节，大于TCP报文的长度，因此一个TCP报文可以存储于一个存储块中。当存储块较小，例如，128字节时，一个TCP报文可能需要占用多个存储块。当一个TCP报文占用多个存储块时，除第一个存储块外的其他存储块中不包括该TCP报文的报文头。
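作为参考，下面给出一段示意性的Python草图，演示如何按上述数据描述(偏移值、报文总长度、报文头长度)从一个存储块中分离报文头与TCP负荷。其中的函数名与字段布局均为本示例的假设，并非专利方案的限定实现：

```python
# 示意性草图：按图3A的数据描述从存储块中取出报文头与TCP负荷。
# block 为一个存储块的原始字节；offset 为二层头相对块起始字节的偏移。
def parse_block(block: bytes, offset: int, total_len: int, hdr_len: int):
    header = block[offset: offset + hdr_len]               # 二层头+三层头+TCP头
    payload = block[offset + hdr_len: offset + total_len]  # TCP负荷
    return header, payload
```

例如，在一个2000字节、偏移值为200的存储块中，若报文总长80字节、报文头54字节，则从第201个字节起读出54字节报文头，其后26字节即为TCP负荷。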
在一个实施方式中,该TOE组件接收PPE 101发送的该第一存储地址。
在S302中,TOE组件105根据该第一存储地址从该第一存储块获取该目标TCP报文的报文头。
图3所示实施例中,TOE组件执行TCP协议栈的处理时不需要处理TCP报文的TCP负荷,因此,只获取该第一TCP报文的报文头。
在一个实施方式中，TOE组件根据该第一存储块中的偏移值和报文头的长度获取该报文头。
在S303中,TOE组件根据该报文头执行TCP相关的协议处理。
由于TCP相关的协议处理方式和算法为本领域技术人员所熟知,本申请不做进一步探讨。
本申请中,TOE组件在执行TCP相关的协议处理的过程中不需要将TCP报文或TCP负荷从一个缓冲区迁移到另一个缓冲区,即在该TOE组件根据该报文头执行TCP相关的协议处理的过程中,该目标TCP报文的TCP负荷没有被读出该第一存储块,该目标TCP报文在该存储器中的存储位置不变。
本申请实施方式中,TOE组件不需要处理TCP负荷,只需要从存储器103中获取TCP报文的报文头,能够避免频繁从存储器中读取TCP负荷带来的存储器带宽的浪费。
进一步地,如图3所示,本申请实施例还可以包括步骤S304。
在S304中,TOE组件向中央处理单元CPU发送第二存储地址。该第二存储地址指示可被该CPU处理的TCP负荷。
在一个实施方式中,该第二存储地址为该第一存储地址。在这种情况下,可被该CPU处理的TCP负荷即为该目标TCP报文的TCP负荷。
在另一个实施方式中,该第二存储地址指示第二存储块,该第二存储块为至少一个存储块的起始块,该至少一个存储块包括该第一存储块。在这种情况下,可被该CPU处理的TCP负荷包括多个TCP报文的负荷,该多个TCP报文的负荷包括该目标TCP报文的TCP负荷。
由于CPU只处理TCP负荷,因此,CPU接收到第二存储地址后,即获知根据该第二存储地址查找到的至少一个存储块中的TCP负荷可以被处理。因此该第二存储地址用于向该CPU通知可被该CPU处理的TCP负荷。
CPU 106包括套接字以及应用程序,套接字用于接收TOE组件发送的第二存储地址,并将该第二存储地址发送给应用程序,应用程序决定是否处理通过该第二存储地址获取到的TCP负荷。
在一个实施方式中,CPU 106中的应用程序根据该第二存储地址获取并处理该TCP负荷;在另一个实施方式中,CPU 106中的应用程序并不需要处理该TCP负荷。
在一个实施方式中，该TOE组件105在接收到该目标TCP报文所属数据流的多个TCP报文的存储地址后，根据该多个TCP报文的存储地址生成第一存储链；该第二存储地址即为该第一存储链的地址，该第一存储链的地址为该第一存储链的起始块的地址。

进一步地，如图3所示，本申请实施例还可以包括步骤S305。
在S305中,TOE组件接收CPU发送的第三存储地址。
该第三存储地址指示CPU确定的待发送数据所在的存储块,进而指示该待发送数据,该待发送数据可以包括CPU 106中的应用程序处理过的TCP负荷,也可以包括CPU 106中的应用程序认为不需要处理的TCP负荷。
该待发送数据可以包括一个或多个TCP报文的TCP负荷。在一个实现方式中,该待发送数据包括S301中的该TCP负荷。在另一个实现方式中,该待发送数据也可以不包括S301中的该TCP负荷。在一个实施方式中,该第三存储地址为该第一存储地址。在这种情况下,该第三存储地址仅指示该第一存储块,即该待发送数据为S301中的目标TCP报文的TCP负荷。
在另一个实施方式中,该第三存储地址指示第三存储块,该第三存储块为至少一个存储块的起始块,该至少一个存储块可以包括该第一存储块,也可以不包括该第一存储块。当该第三存储地址指示包括该第一存储块的多个存储块,且该第一存储块为该多个存储块中的起始块时,该第三存储地址为该第一存储地址。当该第三存储地址指示包括该第一存储块的多个存储块,且该第一存储块不是该多个存储块中的起始块时,该第三存储地址与该第一存储地址不同。
进一步地,TOE组件根据该第三存储地址确定该待发送数据,在该待发送数据被发送成功前,该待发送数据在该存储器中的存储位置不变。
在一个实施方式中,TOE组件处理该待发送数据的过程包括步骤S306-S309。
在S306中,TOE组件根据该第三存储地址确定窗口数据,该窗口数据为该待发送数据的部分或全部。
本申请中,窗口数据是指TOE组件一次能够发送的数据。TOE组件确定窗口数据的过程后续将详细描述。
在S307中,TOE组件发送该窗口数据的第四存储地址;该第四存储地址指示该窗口数据在该存储器中的存储块。
在被发送成功前,该窗口数据在该存储器中的存储位置不变。以窗口数据为该目标TCP报文的TCP负荷为例,该TCP负荷被发送成功前一直存储在该第一存储块中。
在S308中,TOE组件接收TCP确认消息,该TCP确认消息指示该窗口数据发送成功。
当窗口数据为待发送数据的部分数据时，该待发送数据需要经过多次发送才能发送完。在这种情况下，TOE组件需要多次执行S306-S308。
在S309中,TOE组件通知该存储器释放该窗口数据对应的存储器资源。
在一个实施方式中，TOE组件收到指示该窗口数据发送成功的消息后，立即通知该存储器释放该窗口数据对应的存储器资源。在另一个实施方式中，TOE组件收到该待发送数据包括的所有窗口数据对应的TCP确认消息后，通知该存储器释放该第四存储地址对应的存储块。

本申请上述实施方式中，复用了网络设备在接收到TCP报文时存储该TCP报文的存储块，网络设备不需要为TOE组件分配单独的接收缓冲区和发送缓冲区，并且，TOE组件在发送完CPU确定的待发送数据后，通知存储器103释放该待发送数据占用的存储块，避免了存储器资源的浪费。
基于图3所示的方法,如图4所示,本申请实施例进一步提供了另一种网络设备100处理TCP报文的过程示意图,该处理过程包括步骤S401-S413。图4中,传输TCP负荷或完整TCP报文的过程用带箭头的粗实线表示;传输报文头的过程用带箭头的细实线表示;传输其他控制消息的过程用带箭头的虚线表示。图4的处理过程不需要在存储器103中为不同组件分配对应的存储区。图3所示的方法和图4所示的过程可互为参考。
在S401中，PPE 101向BMU 102申请缓存空间，BMU 102根据该申请从存储器103中为PPE 101分配缓存空间，该分配的缓存空间在本实施例中称为BMU存储区。本实施例中，PPE 101，NP 104，TOE组件105，CPU 106共用该BMU存储区。其中，该BMU存储区又被划分为多个存储块，每个存储块对应一个存储地址。
在一个实施方式中,该多个存储块大小相同。优选地,每个存储块能够存储一个完整的TCP报文。当然,存储块的大小可以随意设置,当存储块的大小小于TCP报文时,该TCP报文可以被存储到多个存储块中。
在S402中,PPE 101接收目标TCP报文,对该目标TCP报文做TCP校验,并在TCP校验通过后将该目标TCP报文写入BMU存储区中的第一存储块,将该第一存储块的第一存储地址发送给NP 104。该目标TCP报文包括TCP头和TCP负荷。
通过该第一存储地址可以查找到该第一存储块。该第一存储块包括数据描述,该目标TCP报文的报文头和TCP负荷。该数据描述包括偏移值,该目标TCP报文的长度,该报文头的长度,该偏移值为该目标TCP报文的报文头在该第一存储块中的偏移大小,例如图3A中,该偏移值为200字节。
在一个实施方式中，PPE 101采用mbuf结构将该目标TCP报文写入BMU存储区，这种情况下，一个存储块中存储一个mbuf结构。如图5所示，为mbuf结构(后续简称为mbuf)的示意图。图5中的mbuf的总长为1000字节，其中包括20字节的头部(对应图3A中的数据描述)，则一个mbuf最多可存储980字节的数据，因此当报文的长度大于980字节时，该报文需要被存储在2个mbuf(即2个存储块)中。mbuf的总长可以是其他长度，例如，2000字节。当mbuf长度为2000字节时，一个mbuf存储一个TCP报文。mbuf的头部也可以是其他长度，例如，100字节。如图5所示，mbuf的头部可以包括多个字段，其中，m_next指向存储同一个报文的下一个mbuf，只有当一个报文被存储于多个mbuf时有效，否则为null；m_nextpkt指向另一个报文的第一个mbuf；m_len指示该mbuf中存储的数据的大小，例如，图5中为80字节；m_data为指向存储的数据的指针；m_type指示包含在mbuf中的数据的类型，图5中为MT_header，指示该数据包含TCP头；m_flags可以为M_PKTHDR，0或M_EXT，其中，M_PKTHDR表示这个mbuf是此mbuf链表中的第一个，即链表的头，0表示此mbuf只包含数据，M_EXT表示此mbuf用到了外部的簇来存储较大的数据。此外，一个报文的首个mbuf的头部还可以包括m_pkthdr.len以及m_pkthdr.rcvif，m_pkthdr.len指示该报文的数据总长度，m_pkthdr.rcvif为指向接收接口结构的指针，而非首个mbuf则不需要m_pkthdr.len和m_pkthdr.rcvif。图5所示的mbuf包括报文头，且数据部分存储了14字节的二层头、20字节的三层头、20字节的TCP头和26字节的负荷，灰色部分表示mbuf中未被占用的字节。
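为便于理解图5的mbuf结构，下面用Python给出一个极简的示意性定义。字段名沿用正文(m_next、m_nextpkt、m_len、m_data、m_flags)，标志位取值与辅助函数均为本示例的假设，并非BSD mbuf或专利方案的完整实现：

```python
from dataclasses import dataclass
from typing import Optional

M_PKTHDR, M_EXT = 0x1, 0x2  # 标志位取值为本示例假设

@dataclass
class Mbuf:
    m_data: bytes = b""                  # 该mbuf中存储的数据
    m_flags: int = 0
    m_next: Optional["Mbuf"] = None      # 同一报文的下一个mbuf
    m_nextpkt: Optional["Mbuf"] = None   # 下一个报文的首个mbuf

    @property
    def m_len(self) -> int:              # 该mbuf中数据的字节数
        return len(self.m_data)

def pkt_len(first: Mbuf) -> int:
    """沿m_next链累加同一报文的数据总长度(对应首个mbuf的m_pkthdr.len)。"""
    n, cur = 0, first
    while cur is not None:
        n += cur.m_len
        cur = cur.m_next
    return n
```

例如，一个1000字节的报文占用两个mbuf(980字节+20字节)时，首个mbuf带M_PKTHDR标志，沿m_next链累加即得到报文总长。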
在S403中,NP 104根据该第一存储地址从BMU存储区中的第一存储块读取该目标TCP报文的报文头,根据该报文头中的信息确定该目标TCP报文需要由TOE组件105处理。
在一个实施方式中,该目标TCP报文的报文头包括二层头,三层头和TCP头。根据该目标TCP报文的报文头中的信息确定该目标TCP报文需要由TOE组件105处理可以是根据流表以及该报文头指示的流特征确定该目标TCP报文需要由TOE组件105处理。流表中的每个表项包括流特征和对应的动作,该动作包括向CPU转发,向出接口转发或向TOE组件转发。当该报文头包括的流特征在该流表中对应的动作为向TOE组件转发时,NP 104确定该目标TCP报文需要由TOE组件105处理。其中,流特征可以是流标识,或者五元组(源IP地址,目的IP地址,源端口号,目的端口号,传输层协议)中的至少一项。
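NP 104根据流表分流的查找逻辑可以用如下示意性草图表示：以五元组为键查表得到动作，未命中时执行默认动作。表项内容、动作名称与默认行为均为本示例的假设：

```python
# 示意性草图：以五元组(源IP, 目的IP, 源端口, 目的端口, 传输层协议)为键的流表。
FORWARD, TO_CPU, TO_TOE = "forward", "to_cpu", "to_toe"

flow_table = {
    ("10.0.0.1", "10.0.0.2", 12345, 80, "TCP"): TO_TOE,  # 示例表项
}

def classify(src_ip, dst_ip, sport, dport, proto, default=FORWARD):
    """按五元组查流表；命中返回表项中的动作，否则返回默认动作。"""
    return flow_table.get((src_ip, dst_ip, sport, dport, proto), default)
```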
在S404中,NP 104向TOE组件105发送该第一存储地址。TOE组件105接收该第一存储地址,根据该第一存储地址从该第一存储块获取该目标TCP报文的报文头,根据该报文头执行TCP相关的协议处理。在该TOE组件105根据该报文头执行TCP相关的协议处理时,该目标TCP报文在该BMU存储区中的存储位置不变,且该目标TCP报文的负荷没有被该TOE组件105读出该第一存储块。
在一个实施方式中,NP 104向TOE组件105的TOE接收模块105-1发送该第一存储地址。
在一个实施方式中,该TCP相关的协议处理包括以下处理中的一个或多个:状态迁移,拥塞控制,乱序重排,丢包重传,往返时延(round-trip time,RTT)计算等。TCP相关的协议处理可以采用本领域技术人员熟知的任意算法或方式。
在S405中,TOE组件105向CPU 106发送第二存储地址。该第二存储地址指示可被该CPU处理的TCP负荷。
在一个实施方式中,TOE组件105通过TOE接收模块105-1向CPU 106的套接字发送该第二存储地址。
如前该,在一个实施方式中,该第二存储地址为该第一存储地址;在另一个实施方式中,该第二存储地址指示第二存储块,该第二存储块为至少一个存储块的起始块,该至少一个存储块包括该第一存储块。
在一个实施方式中,TOE组件105记录该目标TCP报文的第一存储地址,TOE组件105在确定接收到该目标TCP报文所属数据流的多个TCP报文的存储地址后,根据该多个TCP报文的存储地址生成第一存储链,并向CPU发送该第一存储链的地址,即该第一存储链的起始块的地址。在一个实施方式中,该多个TCP报文包括该目标TCP报文。本实施方式中,该第一存储链的地址即为该第二存储地址。
其中,该第一存储链的地址用于该CPU获取该第一存储链包括的每个存储块中的TCP负荷。
在一个实施方式中，该第一存储链为mbuf链。如图6所示，为一个mbuf链的结构示意图，该mbuf链指示两个TCP报文，即TCP报文A和TCP报文B，TCP报文A占用了两个mbuf，TCP报文B占用了一个mbuf，TCP报文A的首个mbuf的m_nextpkt指针指向TCP报文B的首个mbuf，TCP报文B的m_nextpkt为空(null)。
在S406中,CPU 106中的套接字接收该第二存储地址,并将该第二存储地址发送给应用程序。
在一个实施方式中,套接字通过mbuf_recv接口(为本申请扩展的能够接收mbuf链地址的接口)接收该mbuf链的地址。
在S407中,应用程序向该套接字发送第三存储地址。该第三存储地址指示该应用程序确定的待发送数据所在的存储块。
在一个实施方式中,该待发送数据包括该目标TCP报文的TCP负荷。在另一个实施方式中,该待发送数据不包括该目标TCP报文的TCP负荷。
应用程序接收到该第二存储地址后,可以处理该第二存储地址指示的TCP负荷,例如,修改TCP负荷中的内容,也可以不处理该第二存储地址指示的TCP负荷。
在一个实施方式中,该应用程序确定的待发送数据包括多个TCP报文的负荷,或该应用程序确定的待发送数据存储在多个mbuf中时,该应用程序根据待发送数据生成第二存储链,调用套接字的mbuf_send接口(为本申请扩展的能够发送mbuf链地址的接口),通过该mbuf_send接口向套接字发送该第二存储链的地址。其中,该第二存储链的地址为该第二存储链的起始块的地址。该第二存储链可以与该第一存储链相同或不同。该第二存储链的地址即为该第三存储地址。
在一个实施方式中,S406和S407可以概括为:CPU 106接收该第二存储地址,并根据该第二存储地址确定待发送数据以及该待发送数据所在的存储块的第三存储地址。在一个实施方式中,该待发送数据为该第二存储地址指示的TCP负荷中的全部或部分数据。相应地,该第三存储地址与该第二存储地址可以相同也可以不同。在一个实施方式中,该待发送数据包括该目标TCP报文的TCP负荷。
在S408中,CPU 106向TOE组件105发送该第三存储地址。
在一个实现方式中,CPU 106中的套接字向TOE组件105的TOE发送模块105-2发送该第三存储地址。
在S409中,TOE组件105修改该第三存储地址指示的每个TCP负荷对应的TCP头。
在一个实施方式中，TOE发送模块105-2根据该第三存储地址查找到待发送的每个TCP负荷，并根据需要修改该TCP负荷对应的TCP头，例如，修改TCP头中的TCP端口号。其中，S409为本申请的可选步骤。
在S410中,TOE组件105根据该第三存储地址确定窗口数据,该窗口数据为该待发送数据中的部分或全部数据。
进一步地,TOE组件105向NP 104发送第四存储地址,该第四存储地址指示该窗口数据在该存储器中的存储块。
在一个实现方式中，TOE组件105中的TOE发送模块105-2根据网络设备100的拥塞窗口、对端的接收窗口以及该第三存储地址对应的存储块中存储的待发送数据确定窗口数据。TOE发送模块105-2先根据网络设备100的拥塞窗口以及对端的接收窗口确定窗口数据量，然后根据该窗口数据量从该第三存储地址对应的存储块中存储的待发送数据中确定窗口数据。该窗口数据量可以为mbuf个数，或者需要发送的字节数。该第四存储地址是根据每次发送的窗口数据确定的。
例如，待发送数据包括5个mbuf，即mbuf 1，mbuf 2，mbuf 3，mbuf 4和mbuf 5，TOE组件105根据网络设备100的拥塞窗口以及对端的接收窗口确定第一次可以发送3个mbuf，则根据mbuf 1，mbuf 2和mbuf 3生成第三存储链，并将第三存储链的地址(即mbuf 1所在的存储块的存储地址)作为第四存储地址发送给NP 104。当第三存储链中的数据发送完成后，TOE组件105可以将剩余的2个mbuf，即mbuf 4和mbuf 5组成第四存储链，并将该第四存储链的地址(即mbuf 4所在的存储块的存储地址)作为第四存储地址发送给NP 104。
在另一个实施方式中,窗口数据为一个mbuf中的部分数据,则TOE组件105可以将该mbuf拆分为多个mbuf,新拆分的多个mbuf中的每个mbuf占用一个存储块,该多个mbuf中的第一mbuf包括该窗口数据,该第四存储地址即为该第一mbuf所在的存储块的存储地址。
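确定窗口数据量并从待发送数据中取出本次可发送的mbuf前缀，可以用如下示意性草图表示。其中按"min(拥塞窗口, 对端接收窗口)"确定预算、且只整块发送mbuf，均为本示例的简化假设：

```python
# 示意性草图：根据拥塞窗口与对端接收窗口确定本次可整块发送的mbuf个数。
def pick_window(mbuf_lens, cwnd_bytes, peer_rwnd_bytes):
    """mbuf_lens 为各待发送mbuf的数据长度列表；返回可发送的前缀mbuf个数。"""
    budget = min(cwnd_bytes, peer_rwnd_bytes)  # 本次发送的字节预算
    count, used = 0, 0
    for n in mbuf_lens:
        if used + n > budget:                  # 放不下整个mbuf则停止
            break
        used += n
        count += 1
    return count
```

例如5个各1000字节的mbuf在预算3000字节时先发送前3个，与正文的例子一致。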
在S411中,NP 104接收该第四存储地址,修改该第四存储地址对应的存储块中的三层头和二层头,得到修改后的TCP报文。
根据该第四存储地址对应的存储块的数量,该修改后的TCP报文可以是一个或多个。
在S412中,NP 104向PPE 101发送该第四存储地址。
在S413中,PPE 101接收该第四存储地址,从该第四存储地址对应的存储块读取修改后的TCP报文,计算TCP校验和,将计算得到的TCP校验和添加在修改后的TCP报文中,发送修改后的TCP报文。
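S413中PPE 101计算的TCP校验和采用16位反码求和(RFC 1071风格)；完整计算还需覆盖TCP伪首部，下面的草图仅演示核心求和逻辑，函数名为本示例假设：

```python
# 示意性草图：16位反码校验和(RFC 1071风格)的核心求和逻辑。
def checksum16(data: bytes) -> int:
    if len(data) % 2:            # 奇数长度时在末尾补一个零字节
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)   # 进位回卷
    return (~s) & 0xFFFF
```

校验性质：将计算得到的校验和填回数据后再求一次校验和，结果应为0，接收端即据此验证。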
上述过程中，该修改后的TCP报文仍然存储在该第四存储地址对应的存储块中。

在S414中，TOE组件105(通过TOE接收模块105-1)接收TCP确认消息，该TCP确认消息用于指示对端设备已接收的数据。TOE组件105根据该TCP确认消息确定该窗口数据发送成功，并通知BMU 102释放该第四存储地址对应的存储块。
其中,S414为S308和S309的一种实现方式。
本申请上述实施例不需要为TCP连接分配独立的发送缓冲区和接收缓冲区,节省了大量存储资源。假设图2所示的方法需要为每个TCP连接分配64KB的缓冲区,采用本申请实施例,在有10M条数据流的情况下,可以节省640GB的存储器空间。
通常,一个以太报文的长度在64字节到1500字节,而报文头的长度在20字节到64字节,本申请中,PPE 101将TCP报文写入BMU存储区后,NP 104和TOE组件105在处理TCP报文的时候,只需要读取该TCP报文的报文头,而不需要频繁从BMU存储区读取该TCP报文的负荷,能够降低存储器103的访问带宽,提高处理效率。
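上述两项估算可以用一段简单的Python演算核对。其中64KB按十进制口径取64,000字节、报文与报文头长度取正文给出的上界，均为本示例的口径假设：

```python
# 示意性演算：核对正文中节省存储与降低访问带宽的估算。
flows = 10_000_000                  # 10M条数据流
buf_per_flow = 64_000               # 每条TCP连接64KB缓冲区(十进制口径)
saved_bytes = flows * buf_per_flow  # 不再为每条连接分配缓冲区而节省的空间

full_read, hdr_read = 1500, 64      # 完整以太报文与报文头长度的上界
saving_ratio = 1 - hdr_read / full_read  # 只读报文头时访问量的下降比例
```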
为了实现本申请实施例上述方法，如图7所示，本申请还提供了一种TOE组件700，该TOE组件包括接口701和处理器702。其中，该接口701用于TOE组件700与网络设备的其他组件通信。
处理器702通过接口701获取第一存储地址;该第一存储地址为存储器中第一存储块的地址,该第一存储块存储目标TCP报文,该目标TCP报文包括报文头和TCP负荷。处理器702根据该第一存储地址从该第一存储块获取该报文头。处理器702根据该报文头执行TCP相关的协议处理;在根据该报文头执行TCP相关的协议处理的过程中,该TCP负荷没有被该TOE组件读出该第一存储块。
在一个实施方式中，处理器702还通过接口701向中央处理单元CPU发送第二存储地址；该第二存储地址为该第一存储地址，或该第二存储地址指示第二存储块，该第二存储块为至少一个存储块的起始块，该至少一个存储块包括该第一存储块。
在一个实施方式中,处理器702还在获取到该目标TCP报文所属数据流的多个TCP报文的存储地址后,根据该多个TCP报文的存储地址生成存储链;该第二存储地址为该存储链中的起始块的地址。
在一个实施方式中,处理器702还通过接口701接收该CPU发送的第三存储地址,该第三存储地址指示该CPU确定的待发送数据所在的存储块,该待发送数据包括该TCP负荷。处理器702还根据该第三存储地址获取该待发送数据,在该待发送数据被发送成功前,该待发送数据在该存储器中的存储位置不变。
在一个实施方式中，处理器702还通过接口701接收中央处理单元CPU发送的第三存储地址，该第三存储地址指示该CPU确定的待发送数据所在的存储块。处理器702根据该第三存储地址获取该待发送数据；在该待发送数据被发送成功前，该待发送数据在该存储器中的存储位置不变。
本申请进一步提供了一种芯片,该芯片包括图7所示的TOE组件以及网络处理器。该网络处理器可以是图1,图2或图3中的NP 104。进一步地,该芯片还可以包括图1中的PPE 101和BMU 102中的一个或全部。
本申请进一步提供了一种网络设备,该网络设备如图8所示,包括TOE组件801,存储器802和CPU 803,该CPU 803运行套接字和应用程序。TOE组件801,存储器802和CPU 803通过总线804相互通信。存储器802存储传输控制协议TCP报文。
TOE组件801获取第一存储地址,该第一存储地址为该存储器中第一存储块的地址,该第一存储块存储目标TCP报文,该目标TCP报文包括报文头和TCP负荷。TOE组件801还根据该第一存储地址从该第一存储块获取该报文头,并根据该报文头执行TCP相关的协议处理;在根据该报文头执行TCP相关的协议处理的过程中,该TCP负荷没有被该TOE组件读出该第一存储块。
在一个实施方式中,TOE组件801还向该CPU 803发送第二存储地址;该第二存储地址为该第一存储地址,或该第二存储地址指示第二存储块,该第二存储块为至少一个存储块的起始块,该至少一个存储块包括该第一存储块。该CPU 803接收该第二存储地址,根据该第二存储地址确定待发送数据以及第三存储地址,该第三存储地址指示该待发送数据所在的存储块,该待发送数据包括该TCP负荷。
在一个实施方式中,该TOE组件801还在获取到该目标TCP报文所属数据流的多个TCP报文的存储地址后,根据该多个TCP报文的存储地址生成存储链;该第二存储地址为该存储链的起始块的地址。
在一个实施方式中,该TOE组件801向该套接字发送该第二存储地址。该套接字接收该第二存储地址,向该应用程序发送该第二存储地址,并接收该应用程序发送的该第三存储地址。该应用程序接收该第二存储地址,根据该第二存储地址确定该待发送数据以及该第三存储地址,并向该套接字发送该第三存储地址。
进一步地，该套接字还向该TOE组件801发送该第三存储地址。该TOE组件801还接收该第三存储地址，根据该第三存储地址获取该待发送数据；在该待发送数据被发送成功前，该待发送数据在该存储器中的存储位置不变。
在另一个实施方式中,该CPU 803向该TOE组件发送第三存储地址,该第三存储地址指示该CPU确定的待发送数据所在的存储块。该TOE组件801还根据该第三存储地址获取该待发送数据;在该待发送数据被发送成功前,该待发送数据在该存储器中的存储位置不变。
本申请的TOE组件、芯片和网络设备在处理TCP报文的时候,只需要读取该TCP报文的报文头,而不需要频繁从存储器中读取该TCP报文的负荷,能够降低存储器的访问带宽,提高处理效率。
本申请提供的各实施方式在不冲突的情况下可以互相参考和应用。
以上仅是本发明的部分实施方式，应当指出，对于本技术领域的普通技术人员来说，还可以做出若干改进和润饰，这些改进和润饰也应视为本发明的保护范围。

Claims (18)

  1. 一种处理传输控制协议TCP报文的方法,其特征在于,包括:
    TCP卸载引擎TOE组件获取第一存储地址,所述第一存储地址为存储器中第一存储块的地址,所述第一存储块存储目标TCP报文,所述目标TCP报文包括报文头和TCP负荷;
    所述TOE组件根据所述第一存储地址从所述第一存储块获取所述报文头;
    所述TOE组件根据所述报文头执行TCP相关的协议处理;
    其中,在所述TOE组件根据所述报文头执行TCP相关的协议处理的过程中,所述TCP负荷没有被所述TOE组件读出所述第一存储块。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    所述TOE组件向中央处理单元CPU发送第二存储地址;
    所述第二存储地址为所述第一存储地址,或所述第二存储地址指示第二存储块,所述第二存储块为至少一个存储块的起始块,所述至少一个存储块包括所述第一存储块。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    所述TOE组件在获取到所述目标TCP报文所属数据流的多个TCP报文的存储地址后,根据所述多个TCP报文的存储地址生成存储链;
    所述第二存储地址为所述存储链的起始块的地址。
  4. 根据权利要求2或3所述的方法,其特征在于,还包括:
    所述TOE组件接收所述CPU发送的第三存储地址,所述第三存储地址指示所述CPU确定的待发送数据所在的存储块,所述待发送数据包括所述TCP负荷;
    根据所述第三存储地址获取所述待发送数据,在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
  5. 根据权利要求1所述的方法,其特征在于,还包括:
    所述TOE组件接收中央处理单元CPU发送的第三存储地址,所述第三存储地址指示所述CPU确定的待发送数据所在的存储块;
    根据所述第三存储地址获取所述待发送数据;在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
  6. 一种传输控制协议卸载引擎TOE组件,其特征在于,包括:接口和处理器,所述处理器用于:
    通过所述接口获取第一存储地址;所述第一存储地址为存储器中第一存储块的地址,所述第一存储块存储目标TCP报文,所述目标TCP报文包括报文头和TCP负荷;
    根据所述第一存储地址从所述第一存储块获取所述报文头;
    根据所述报文头执行TCP相关的协议处理;
    其中,在所述处理器根据所述报文头执行TCP相关的协议处理的过程中,所述TCP负荷没有被所述TOE组件读出所述第一存储块。
  7. 根据权利要求6所述的TOE组件,其特征在于:
    所述处理器还用于通过所述接口向中央处理单元CPU发送第二存储地址;
    所述第二存储地址为所述第一存储地址,或所述第二存储地址指示第二存储块,所述第二存储块为至少一个存储块的起始块,所述至少一个存储块包括所述第一存储块。
  8. 根据权利要求7所述的TOE组件,其特征在于,所述处理器还用于:
    在获取到所述目标TCP报文所属数据流的多个TCP报文的存储地址后,根据所述多个TCP报文的存储地址生成存储链;
    所述第二存储地址为所述存储链中的起始块的地址。
  9. 根据权利要求7或8所述的TOE组件,其特征在于,所述处理器还用于:
    通过所述接口接收所述CPU发送的第三存储地址,所述第三存储地址指示所述CPU确定的待发送数据所在的存储块,所述待发送数据包括所述TCP负荷;
    根据所述第三存储地址获取所述待发送数据,在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
  10. 根据权利要求6所述的TOE组件,其特征在于,所述处理器还用于:
    通过所述接口接收中央处理单元CPU发送的第三存储地址,所述第三存储地址指示所述CPU确定的待发送数据所在的存储块;
    根据所述第三存储地址获取所述待发送数据;在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
  11. 一种芯片,其特征在于,包括权利要求6-10中任意一项所述的TOE组件以及网络处理器。
  12. 一种网络设备,其特征在于,包括传输控制协议卸载引擎TOE组件和存储器;
    所述存储器用于存储传输控制协议TCP报文;
    所述TOE组件用于:
    获取第一存储地址,所述第一存储地址为所述存储器中第一存储块的地址,所述第一存储块存储目标TCP报文,所述目标TCP报文包括报文头和TCP负荷;
    根据所述第一存储地址从所述第一存储块获取所述报文头;
    根据所述报文头执行TCP相关的协议处理;其中,在所述TOE组件根据所述报文头执行TCP相关的协议处理的过程中,所述TCP负荷没有被所述TOE组件读出所述第一存储块。
  13. 根据权利要求12所述的网络设备,其特征在于,还包括中央处理单元CPU:
    所述TOE组件还用于向所述CPU发送第二存储地址;所述第二存储地址为所述第一存储地址,或所述第二存储地址指示第二存储块,所述第二存储块为至少一个存储块的起始块,所述至少一个存储块包括所述第一存储块;
    所述CPU接收所述第二存储地址,根据所述第二存储地址确定待发送数据以及第三存储地址,所述第三存储地址指示所述待发送数据所在的存储块,所述待发送数据包括所述TCP负荷。
  14. 根据权利要求13所述的网络设备,其特征在于,所述TOE组件还用于在获取到所述目标TCP报文所属数据流的多个TCP报文的存储地址后,根据所述多个TCP报文 的存储地址生成存储链;所述第二存储地址为所述存储链的起始块的地址。
  15. 根据权利要求13或14所述的网络设备,其特征在于,所述CPU运行套接字,
    所述TOE组件还用于向所述套接字发送所述第二存储地址;所述套接字用于接收所述第二存储地址。
  16. 根据权利要求15所述的网络设备,其特征在于,所述CPU还运行应用程序,
    所述套接字用于向所述应用程序发送所述第二存储地址,并接收所述应用程序发送的所述第三存储地址;
    所述应用程序用于接收所述第二存储地址,根据所述第二存储地址确定所述待发送数据以及所述第三存储地址,并向所述套接字发送所述第三存储地址。
  17. 根据权利要求16所述的网络设备,其特征在于:
    所述套接字还用于向所述TOE组件发送所述第三存储地址;
    所述TOE组件还用于接收所述第三存储地址,根据所述第三存储地址获取所述待发送数据;在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
  18. 根据权利要求12所述的网络设备,其特征在于,还包括中央处理单元CPU;
    所述CPU用于向所述TOE组件发送第三存储地址,所述第三存储地址指示所述CPU确定的待发送数据所在的存储块;
    所述TOE组件还用于根据所述第三存储地址获取所述待发送数据;在所述待发送数据被发送成功前,所述待发送数据在所述存储器中的存储位置不变。
PCT/CN2019/104721 2018-09-27 2019-09-06 处理tcp报文的方法、toe组件以及网络设备 WO2020063298A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19867965.6A EP3846405B1 (en) 2018-09-27 2019-09-06 Method for processing tcp message, toe assembly, and network device
US17/213,582 US11489945B2 (en) 2018-09-27 2021-03-26 TCP packet processing method, toe component, and network device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811134308.X 2018-09-27
CN201811134308.XA CN110958213B (zh) 2018-09-27 2018-09-27 处理tcp报文的方法、toe组件以及网络设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/213,582 Continuation US11489945B2 (en) 2018-09-27 2021-03-26 TCP packet processing method, toe component, and network device

Publications (1)

Publication Number Publication Date
WO2020063298A1 true WO2020063298A1 (zh) 2020-04-02

Family

ID=69951167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104721 WO2020063298A1 (zh) 2018-09-27 2019-09-06 处理tcp报文的方法、toe组件以及网络设备

Country Status (4)

Country Link
US (1) US11489945B2 (zh)
EP (1) EP3846405B1 (zh)
CN (1) CN110958213B (zh)
WO (1) WO2020063298A1 (zh)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019645B (zh) * 2020-07-06 2021-07-30 中科驭数(北京)科技有限公司 基于toe的网络地址管理方法及装置
CN112073434B (zh) * 2020-09-28 2022-06-07 山东产研集成电路产业研究院有限公司 降低基于toe的高频交易终端接收通道传输延迟的方法
CN112953967A (zh) * 2021-03-30 2021-06-11 扬州万方电子技术有限责任公司 网络协议卸载装置和数据传输系统
CN113179327B (zh) * 2021-05-14 2023-06-02 中兴通讯股份有限公司 基于大容量内存的高并发协议栈卸载方法、设备、介质
CN115460300A (zh) * 2021-06-08 2022-12-09 中兴通讯股份有限公司 数据处理方法、toe硬件及计算机可读存储介质
CN115883672A (zh) * 2021-08-11 2023-03-31 大唐联仪科技有限公司 组包方法、数据传输方法、装置、电子设备及存储介质
CN115941762A (zh) * 2021-09-03 2023-04-07 华为技术有限公司 一种数据传输方法、电子设备和装置
CN113824706B (zh) * 2021-09-10 2023-07-25 杭州迪普信息技术有限公司 报文解析方法及网络设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050226238A1 (en) * 2004-03-31 2005-10-13 Yatin Hoskote Hardware-based multi-threading for packet processing
CN1747444A (zh) * 2004-09-10 2006-03-15 国际商业机器公司 数据处理系统网络中从主机单元分担数据流的方法及引擎
CN103546424A (zh) * 2012-07-10 2014-01-29 华为技术有限公司 一种tcp数据传输方法、tcp卸载引擎及系统
CN106034084A (zh) * 2015-03-16 2016-10-19 华为技术有限公司 一种数据传输方法及装置
CN106789708A (zh) * 2016-12-06 2017-05-31 中国电子科技集团公司第三十二研究所 Tcp/ip卸载引擎中的多通道处理方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7313623B2 (en) * 2002-08-30 2007-12-25 Broadcom Corporation System and method for TCP/IP offload independent of bandwidth delay product
US7397800B2 (en) * 2002-08-30 2008-07-08 Broadcom Corporation Method and system for data placement of out-of-order (OOO) TCP segments
US20050060538A1 (en) * 2003-09-15 2005-03-17 Intel Corporation Method, system, and program for processing of fragmented datagrams
US7275152B2 (en) * 2003-09-26 2007-09-25 Intel Corporation Firmware interfacing with network protocol offload engines to provide fast network booting, system repurposing, system provisioning, system manageability, and disaster recovery
US7562158B2 (en) * 2004-03-24 2009-07-14 Intel Corporation Message context based TCP transmission
US7783769B2 (en) * 2004-03-31 2010-08-24 Intel Corporation Accelerated TCP (Transport Control Protocol) stack processing
US7620071B2 (en) * 2004-11-16 2009-11-17 Intel Corporation Packet coalescing
US7523179B1 (en) * 2004-12-14 2009-04-21 Sun Microsystems, Inc. System and method for conducting direct data placement (DDP) using a TOE (TCP offload engine) capable network interface card
US7475167B2 (en) * 2005-04-15 2009-01-06 Intel Corporation Offloading data path functions
US8028071B1 (en) * 2006-02-15 2011-09-27 Vmware, Inc. TCP/IP offload engine virtualization system and methods
US7849214B2 (en) * 2006-12-04 2010-12-07 Electronics And Telecommunications Research Institute Packet receiving hardware apparatus for TCP offload engine and receiving system and method using the same
JP4921142B2 (ja) * 2006-12-12 2012-04-25 キヤノン株式会社 通信装置
US8194667B2 (en) * 2007-03-30 2012-06-05 Oracle America, Inc. Method and system for inheritance of network interface card capabilities
US8006297B2 (en) * 2007-04-25 2011-08-23 Oracle America, Inc. Method and system for combined security protocol and packet filter offload and onload
US8316276B2 (en) * 2008-01-15 2012-11-20 Hicamp Systems, Inc. Upper layer protocol (ULP) offloading for internet small computer system interface (ISCSI) without TCP offload engine (TOE)
KR101221045B1 (ko) * 2008-12-22 2013-01-10 한국전자통신연구원 패킷 처리 방법 및 이를 이용한 toe 장치
KR101712199B1 (ko) 2010-03-02 2017-03-03 삼성전자주식회사 메시징 서비스와 소셜 네트워크 서비스 간의 상호 연동을 통한 연락처 제공 장치 및 방법
US9154427B2 (en) * 2012-12-31 2015-10-06 Emulex Corporation Adaptive receive path learning to facilitate combining TCP offloading and network adapter teaming
US9286225B2 (en) * 2013-03-15 2016-03-15 Saratoga Speed, Inc. Flash-based storage system including reconfigurable circuitry
CN106330788B (zh) * 2016-08-19 2018-05-22 北京网迅科技有限公司杭州分公司 报文分片传输方法和装置
CN109714302B (zh) * 2017-10-25 2022-06-14 阿里巴巴集团控股有限公司 算法的卸载方法、装置和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050226238A1 (en) * 2004-03-31 2005-10-13 Yatin Hoskote Hardware-based multi-threading for packet processing
CN1747444A (zh) * 2004-09-10 2006-03-15 国际商业机器公司 数据处理系统网络中从主机单元分担数据流的方法及引擎
CN103546424A (zh) * 2012-07-10 2014-01-29 华为技术有限公司 一种tcp数据传输方法、tcp卸载引擎及系统
CN106034084A (zh) * 2015-03-16 2016-10-19 华为技术有限公司 一种数据传输方法及装置
CN106789708A (zh) * 2016-12-06 2017-05-31 中国电子科技集团公司第三十二研究所 Tcp/ip卸载引擎中的多通道处理方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422617A (zh) * 2021-12-30 2022-04-29 苏州浪潮智能科技有限公司 一种报文处理方法、系统及计算机可读存储介质
CN114422617B (zh) * 2021-12-30 2023-07-25 苏州浪潮智能科技有限公司 一种报文处理方法、系统及计算机可读存储介质

Also Published As

Publication number Publication date
US20210218831A1 (en) 2021-07-15
US11489945B2 (en) 2022-11-01
EP3846405A1 (en) 2021-07-07
EP3846405B1 (en) 2023-11-15
CN110958213A (zh) 2020-04-03
EP3846405A4 (en) 2021-09-22
CN110958213B (zh) 2021-10-22

Similar Documents

Publication Publication Date Title
WO2020063298A1 (zh) 处理tcp报文的方法、toe组件以及网络设备
WO2017215392A1 (zh) 一种网络拥塞控制方法、设备及系统
US10534601B1 (en) In-service software upgrade of virtual router with reduced packet loss
EP2928135B1 (en) Pcie-based host network accelerators (hnas) for data center overlay network
EP2928134B1 (en) High-performance, scalable and packet drop-free data center switch fabric
US6907042B1 (en) Packet processing device
US8175116B2 (en) Multiprocessor system for aggregation or concatenation of packets
US8311059B2 (en) Receive coalescing and automatic acknowledge in network interface controller
CN110022264B (zh) 控制网络拥塞的方法、接入设备和计算机可读存储介质
JP2009152953A (ja) ゲートウェイ装置およびパケット転送方法
US7139268B1 (en) Performance of intermediate nodes with flow splicing
WO2021013046A1 (zh) 通信方法和网卡
KR101401874B1 (ko) 통신제어 시스템, 스위칭 노드, 통신제어 방법, 및 통신제어용 프로그램
US20070291782A1 (en) Acknowledgement filtering
CN112333094B (zh) 数据传输处理方法、装置、网络设备及可读存储介质
JP2016504810A (ja) コンテンツベースの過負荷保護
WO2017162117A1 (zh) 一种集群精确限速方法和装置
US10701189B2 (en) Data transmission method and apparatus
WO2019001484A1 (zh) 一种实现发送端调速的方法、装置和系统
CN115002023B (zh) 一种链路聚合方法、链路聚合装置、电子设备及存储介质
CN114268518B (zh) 一种实现sdwan数据隧道转发加速的方法及系统
CN115348108A (zh) 用于维持互联网协议安全隧道的方法和设备
US8509235B2 (en) Layer-2 packet return in proxy-router communication protocol environments
CN113497767A (zh) 传输数据的方法、装置、计算设备及存储介质
JP6279970B2 (ja) プロセッサ、通信装置、通信システム、通信方法およびコンピュータプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19867965

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019867965

Country of ref document: EP

Effective date: 20210331