CN115103036A - Efficient TCP/IP datagram processing method and system - Google Patents

Efficient TCP/IP datagram processing method and system

Info

Publication number
CN115103036A
CN115103036A
Authority
CN
China
Prior art keywords
data
dma
datagram
address
network card
Prior art date
Legal status
Pending
Application number
CN202210557371.4A
Other languages
Chinese (zh)
Inventor
王潇南
郝沁汾
叶笑春
范东睿
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN202210557371.4A
Publication of CN115103036A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L69/162 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields involving adaptations of sockets based mechanisms
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/34 Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters

Abstract

The invention provides an efficient TCP/IP datagram processing method and system. In TCP-based communication, the datagram to be sent is handed over directly by DMA, so only a memory address and a data size are processed in the protocol stack; the bulk data copying that would otherwise occur in the protocol stack is avoided, which effectively reduces the time overhead of the process, and because DMA performs the data exchange with memory, CPU resource occupation is reduced. The datagram processing method therefore effectively increases the transmission rate of network datagrams between the application program and the network card while lowering CPU resource occupancy.

Description

Efficient TCP/IP datagram processing method and system
Technical Field
The invention relates to the field of computer communication, in particular to efficient network message transmission, and more particularly to an efficient TCP/IP (Transmission Control Protocol/Internet Protocol) datagram processing method.
Background
At present, the general way a computer operating system (e.g., Linux) processes network data is that the application program sends a datagram (data packet) to the network protocol stack through a Socket interface, and the network card driver then obtains the processed data from the protocol stack. In practice, the part of the stack that exchanges datagrams with the application program is the user-mode network protocol stack, while the part that exchanges data with the network card driver is the kernel-mode network protocol stack.
A TCP server and client are two processes that communicate over TCP/IP through sockets. The server is the process that creates a socket from its local information and listens for connections on it; the client is the process that creates a socket from the server's information and uses it as the target to request a connection.
When a kernel-mode program and a user-mode program exchange datagrams through the Socket mechanism, multiple datagram copies are required, both from the user program to the protocol stack and from the protocol stack to the network card driver. Taking conventional TCP/IP communication in the prior art as an example, conventional network card data transmission involves three CPU-mediated datagram copies: 1. user mode to kernel mode (the data is sent to the network protocol stack via the socket); 2. protocol stack to network card driver (i.e., the data is received into the sk_buff of the network card driver); 3. the network card driver encapsulates the data packet into a MAC frame (which contains the user data). This process consumes a large number of CPU time slices and is a major bottleneck for increasing the network datagram transmission rate. Eliminating the time slices consumed by these copies would be an effective way to improve network datagram transmission efficiency.
Disclosure of Invention
The invention aims to solve two technical problems: first, the low datagram transmission rate caused by the time slices consumed by the large number of copies currently required during network datagram transmission; and second, the CPU resource occupation caused by copying data between processes.
To address the defects of the prior art, the invention provides an efficient TCP/IP datagram processing method comprising the following steps:
step 1 is executed when the server application program sends data, and step 2 is executed when the server application program receives data;
step 1: the server application program constructs a TCP server process to establish a Socket connection, writes the datagram to be sent into DMA (direct memory access) memory, and obtains the memory offset address and written data size fed back by the DMA; the memory offset address and data size are stored into the sk_buff of the network protocol stack through the Socket interface; the network card driver maps the datagram from the DMA to a virtual address according to the memory offset address and data size in the sk_buff, reads the datagram in the DMA through that virtual address to replace part of the data in the sk_buff, and encapsulates the replaced sk_buff into a MAC (media access control) frame data packet, which the physical network card then sends out;
step 2: after the physical network card receives a MAC frame data packet, the network card driver unpacks it to obtain an sk_buff data packet; the data portion of the sk_buff is moved into the DMA, and the memory address returned by the DMA together with the written data size is used as the new data portion of the sk_buff to form a replacement data packet, which is delivered to the network protocol stack; the server application program receives the memory address and data size through the network protocol stack, maps the memory address into a user-mode virtual address, and accesses that virtual address to receive the datagram held in the DMA.
In the efficient TCP/IP datagram processing method, replacing part of the data in the sk_buff in step 1 specifically means: the offset address and data size of the user data held in the sk_buff are replaced with the datagram read from the DMA.
The DMA, instead of the network protocol stack, processes the datagram the user wants to send and provides a read-write interface to the user-mode server application program through a DMA read-write and management driver. This driver performs DMA read-write address allocation and conflict detection: it allocates write addresses and decides whether an address may be written again by checking whether the data written there has already been read out.
In the efficient TCP/IP datagram processing method, the network card driver is ported according to the device in use, so that it is suitable for various network card devices and virtual network cards.
In step 1, the network card driver obtains a data packet from the kernel network protocol stack, parses it to obtain the offset address and data size written by the DMA, and maps the address into the driver, thereby obtaining the server application program's datagram, replacing the data portion of the sk_buff, and encapsulating it into the MAC frame format.
In step 2, the network card driver parses the MAC frame into an sk_buff, writes the parsed data into the DMA, replaces the corresponding data in the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced sk_buff to the network protocol stack for processing.
The invention also provides an efficient TCP/IP datagram processing system comprising:
a sending module that is invoked when the server application program sends data, and a receiving module that is invoked when the server application program receives data;
the sending module is used for enabling the server application program to construct a TCP server process to establish a Socket connection, write the datagram to be sent into DMA (direct memory access) memory, obtain the memory offset address and written data size fed back by the DMA, and store the memory offset address and data size into the sk_buff of the network protocol stack through the Socket interface; the network card driver maps the datagram from the DMA to a virtual address according to the memory offset address and data size in the sk_buff, reads the datagram in the DMA through that virtual address to replace part of the data in the sk_buff, and encapsulates the replaced sk_buff into a MAC frame data packet, which the physical network card then sends out;
the receiving module is used for enabling the physical network card to receive a MAC frame data packet, after which the network card driver unpacks it to obtain an sk_buff data packet; the data portion of the sk_buff is moved into the DMA, and the memory address returned by the DMA together with the written data size is used as the new data portion of the sk_buff to form a replacement data packet, which is delivered to the network protocol stack; the server application program receives the memory address and data size through the network protocol stack, maps the memory address into a user-mode virtual address, and accesses that virtual address to receive the data packet held in the DMA.
In the efficient TCP/IP datagram processing system, replacing part of the data in the sk_buff in the sending module specifically means: the offset address and data size of the user data held in the sk_buff are replaced with the datagram read from the DMA.
The DMA, instead of the network protocol stack, processes the datagram the user wants to send and provides a read-write interface to the user-mode server application program through a DMA read-write and management driver. This driver performs DMA read-write address allocation and conflict detection: it allocates write addresses and decides whether an address may be written again by checking whether the data written there has already been read out.
In the efficient TCP/IP datagram processing system, the network card driver is ported according to the device in use, so that it is suitable for various network card devices and virtual network cards.
In the sending module, the network card driver obtains a data packet from the kernel network protocol stack, parses it to obtain the offset address and data size written by the DMA, and maps the address into the driver, thereby obtaining the server application program's datagram, replacing the data portion of the sk_buff, and encapsulating it into the MAC frame format.
In the receiving module, the network card driver parses the MAC frame into an sk_buff, writes the parsed data into the DMA, replaces the corresponding data in the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced sk_buff to the network protocol stack for processing.
The present invention also provides a storage medium for storing a program for executing any one of the efficient TCP/IP datagram processing methods.
The invention also provides a client used for any one of the efficient TCP/IP datagram processing systems.
Compared with the prior art, the invention has the following advantages:
The datagram processing method provided by the invention can be applied to network devices that have the basic functions and network subsystem of an operating system, a user-mode network protocol stack, and a kernel network protocol stack, and that also include a direct memory access (DMA) module. The method effectively increases the datagram transmission rate and reduces CPU resource occupation.
Drawings
FIG. 1 shows the DMA read-write and management driver of the present invention;
FIG. 2 is a flow chart of the process of sending data packets by the network card driver according to the present invention;
FIG. 3 is a flow chart of the process of receiving data packets by the network card driver according to the present invention;
FIG. 4 is a flow chart of a server application sending and receiving datagrams;
FIG. 5 is a general framework and flow diagram of datagram sending according to the present invention;
FIG. 6 is a general framework and flow diagram of datagram reception according to the present invention;
FIG. 7 is a diagram of a prior art data transmission process;
fig. 8 is a diagram of a data transmission process according to the present invention.
Detailed Description
The invention solves the above technical problems by providing an efficient TCP/IP datagram processing method. Unlike the conventional method, its data sending process is as follows:
After the application program establishes a Socket connection, the datagram to be sent is written into the DMA, and the memory offset address fed back after the write, together with the written data size, is sent to the network protocol stack through the Socket interface. After the protocol stack has processed the sk_buff (skb), filling in address and connection information, checksums, and other content, the network card driver obtains the skb from the protocol stack. Because the driver can only operate on virtual addresses inside the operating system, it maps the memory offset address and data size in the skb to a virtual address in the driver and reads the user's datagram there; the replacement of the skb data portion and the encapsulation into a MAC frame are completed inside the network card driver, and the network card then sends the packet out. The sending target is the connection target established by the socket, and the skb data packet carries the source IP address and the destination IP address.
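To make this user-side sending path concrete, the following is a minimal C sketch. The /dev/netdma device node, the dma_desc structure, and the convention that the DMA driver's write() returns the memory offset of the stored datagram are assumptions made for illustration only; the patent does not specify these interfaces.

    /* Hypothetical user-mode send path: write the payload into the DMA
     * region, then pass only {offset, size} through the socket. */
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    struct dma_desc {            /* descriptor handed to the protocol stack */
        unsigned long offset;    /* memory offset returned by the DMA driver */
        unsigned long size;      /* number of bytes written into the DMA */
    };

    int send_via_dma(int sock, const void *buf, size_t len)
    {
        int dma_fd = open("/dev/netdma", O_WRONLY);   /* assumed DMA device node */
        if (dma_fd < 0)
            return -1;

        /* Assumption: the DMA driver's write() returns the memory offset
         * at which the datagram was stored, as described in the text. */
        long off = write(dma_fd, buf, len);
        close(dma_fd);
        if (off < 0)
            return -1;

        struct dma_desc d = { .offset = (unsigned long)off, .size = len };
        /* Only this small descriptor travels through the kernel protocol stack. */
        if (send(sock, &d, sizeof(d), 0) != (ssize_t)sizeof(d))
            return -1;
        return 0;
    }

In this sketch the protocol stack never touches the payload itself; the address, connection, and checksum handling described above still happen on the descriptor-carrying skb inside the kernel.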
The data receiving process is as follows:
After the network card receives a data packet, the network card driver unpacks it and writes the data portion into the DMA; the original network data portion is replaced with the memory offset address of the write and the data size, completing the replacement of the skb data portion, and the replaced skb is handed to the protocol stack. After the protocol stack has processed the skb, the application program receives the memory offset address and data size and completes reception by reading the datagram from the mapped virtual memory.
The method introduces DMA to handle, in place of the protocol stack, the datagrams a user wants to send. As shown in fig. 1, the DMA read-write and management driver provides DMA read-write address allocation and conflict detection: it allocates write addresses and decides whether an address may be written again by checking whether the data written there has been read out, thereby avoiding data being overwritten during writes and data being misread during reads (i.e., reading back data that differs from what was written because a write is in progress at that address).
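A minimal sketch of what such address allocation and conflict detection could look like follows, assuming a fixed pool of equally sized slots with a simple free/written state per slot; the slot size, pool layout, and locking scheme are illustrative assumptions rather than the design specified by the patent.

    /* Illustrative DMA slot manager: an address may be written again only
     * after the data previously written there has been read out. */
    #include <pthread.h>
    #include <stddef.h>

    #define SLOT_SIZE  2048
    #define NUM_SLOTS  1024

    enum slot_state { SLOT_FREE, SLOT_WRITTEN };

    static enum slot_state slot_state_tab[NUM_SLOTS];
    static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Allocate a write address: returns a byte offset into the DMA region,
     * or -1 if every slot still holds unread data (write conflict). */
    long dma_alloc_write_offset(void)
    {
        long off = -1;
        pthread_mutex_lock(&slot_lock);
        for (size_t i = 0; i < NUM_SLOTS; i++) {
            if (slot_state_tab[i] == SLOT_FREE) {
                slot_state_tab[i] = SLOT_WRITTEN;
                off = (long)(i * SLOT_SIZE);
                break;
            }
        }
        pthread_mutex_unlock(&slot_lock);
        return off;
    }

    /* Mark a slot as read out so its address may be written again. */
    void dma_mark_read(long offset)
    {
        size_t i = (size_t)offset / SLOT_SIZE;
        pthread_mutex_lock(&slot_lock);
        if (i < NUM_SLOTS)
            slot_state_tab[i] = SLOT_FREE;
        pthread_mutex_unlock(&slot_lock);
    }

Tracking the read-out state per slot is what prevents the overwrite and misread cases mentioned above: a writer never receives an offset whose previous contents have not yet been consumed.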
The method also implements a network card driver that can be ported to the device in use and is suitable for various network card devices and virtual network cards. As shown in fig. 2, when the driver sends a datagram, it obtains a data packet from the kernel protocol stack, parses it to obtain the offset address and data size produced by the DMA write, maps that address into the driver to obtain the application program's datagram, and replaces the data portion of the skb before encapsulating it into the MAC frame format. When the driver processes a received datagram, as shown in fig. 3, it parses the MAC frame into the sk_buff format, writes the parsed data portion into the DMA, replaces the data portion of the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced skb to the protocol stack for processing via netif_rx().
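A rough kernel-style sketch of this receive-side driver step is shown below. The dma_write_payload() helper is hypothetical and stands in for the DMA write-and-feedback operation; for brevity the sketch treats everything after the MAC header as payload rather than isolating only the user-data portion of the sk_buff.

    /* Sketch of the driver receive step: move the payload into DMA memory
     * and hand the protocol stack an skb whose data portion is only the
     * {offset, size} descriptor. */
    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>
    #include <linux/string.h>

    struct dma_desc {
        u32 offset;   /* DMA memory offset fed back by the DMA driver */
        u32 size;     /* length of the payload written into the DMA */
    };

    /* Hypothetical helper: copies the payload into the DMA region and
     * returns the offset at which it was stored (not a kernel API). */
    extern u32 dma_write_payload(const void *data, u32 len);

    int rx_replace_and_deliver(struct sk_buff *skb, struct net_device *dev)
    {
        struct dma_desc desc;

        skb->protocol = eth_type_trans(skb, dev);    /* strip the MAC header */

        desc.size   = skb->len;
        desc.offset = dma_write_payload(skb->data, skb->len);

        /* Shrink the skb and store the descriptor as its new data portion. */
        skb_trim(skb, 0);
        memcpy(skb_put(skb, sizeof(desc)), &desc, sizeof(desc));

        return netif_rx(skb);   /* deliver the replaced skb to the protocol stack */
    }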
In order to make the aforementioned features and effects of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Sending flow of the datagram:
Fig. 5 shows the general framework and flow for sending a datagram. In user mode, as shown in fig. 4, a TCP server process is first established following the conventional TCP server setup flow: a socket descriptor is created and bound to an IP address, protocol type, port number, and so on, and the process listens for and accepts TCP client connections. The datagram to be sent is then written into the DMA and the memory address returned by the DMA's write function is saved; the flow of the DMA driver's write function is shown in fig. 1. The write() function is then called to write this memory address and the written data size to the socket descriptor, at which point the user-mode application has finished sending. As shown in fig. 2, after the data written to the socket descriptor has been processed by the network protocol stack, the network card driver obtains a data packet in sk_buff format from the kernel-mode protocol stack and parses it to obtain the data portion of the sk_buff, i.e., the memory address and data size. Based on the data size, the driver maps the memory address to a kernel-mode virtual address with the ioremap() function; the data at that virtual address is the network datagram the application process wants to send. The network card driver replaces the data portion of the sk_buff with this datagram, encapsulates the skb into a MAC frame data packet, and hands it to the network card for processing. This completes the datagram sending flow.
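A kernel-style sketch of this transmit-side mapping step is shown below. DMA_REGION_PHYS and the dma_desc layout are illustrative assumptions; the patent only specifies that the driver uses ioremap() to turn the memory address found in the sk_buff into a kernel virtual address before reading the datagram.

    /* Sketch of the driver transmit step: map the DMA offset carried in the
     * skb data portion, fetch the real payload, and rebuild the skb around it. */
    #include <linux/skbuff.h>
    #include <linux/io.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    struct dma_desc {
        u32 offset;   /* DMA memory offset stored in the skb data portion */
        u32 size;     /* size of the user datagram held in the DMA */
    };

    /* Assumed physical base address of the DMA region (illustration only). */
    #define DMA_REGION_PHYS 0x80000000UL

    int tx_fetch_user_data(struct sk_buff *skb)
    {
        struct dma_desc desc;
        void __iomem *va;

        /* The skb data portion currently holds only the {offset, size} descriptor. */
        memcpy(&desc, skb->data, sizeof(desc));

        /* Map the DMA memory holding the real datagram to a kernel virtual
         * address, since the driver can only operate on virtual addresses. */
        va = ioremap(DMA_REGION_PHYS + desc.offset, desc.size);
        if (!va)
            return -ENOMEM;

        /* Replace the descriptor with the real payload; the skb is assumed
         * to have enough tailroom.  The caller then encapsulates the skb
         * into a MAC frame and hands it to the network card. */
        skb_trim(skb, 0);
        memcpy_fromio(skb_put(skb, desc.size), va, desc.size);
        iounmap(va);

        return 0;
    }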
Fig. 7 shows the data transmission process of the prior art, in contrast to fig. 8, which shows the data transmission process of the invention. In the invention's process, only the DMA offset address at which the user datagram is stored and the datagram size are passed through the stack, so the data carried in the sk_buff inside the network card driver is not what actually needs to be encapsulated into the MAC frame; the offset address and datagram size held in the sk_buff must therefore be replaced with the outgoing data packet, fetched from the DMA using that information. Compared with the prior art, the invention improves transmission efficiency because only two key items, the DMA offset address and the datagram size, are passed through the stack, and these amount to only a tiny fraction of the full user datagram.
The receiving process of the datagram:
Fig. 6 shows the general framework and flow for receiving a datagram. The network card driver shown in fig. 3 obtains a data packet in MAC frame format from the physical network card, parses it into an sk_buff-format data packet, writes the data portion of the sk_buff into the DMA, uses the memory offset address returned by the DMA and the written data size as the new data portion of the sk_buff, and delivers the replaced skb to the kernel protocol stack for processing through the netif_rx() function. After the protocol stack has processed the skb, the application program, as shown in fig. 4, reads the socket descriptor with the read() function to obtain the memory address and datagram size, calls the mmap() function to map the received memory address into the current process, i.e., to a user-mode virtual address, and accesses the data at that address, which is the network datagram sent by the sender. This completes the datagram receiving flow.
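To make the user-side receive path concrete, the following is a minimal C sketch; as before, /dev/netdma, the dma_desc layout, and the use of the DMA offset as an mmap() offset are illustrative assumptions rather than interfaces defined by the patent.

    /* Hypothetical user-mode receive path: read only {offset, size} from the
     * socket, then mmap() the DMA region to reach the actual datagram. */
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>

    struct dma_desc {
        unsigned long offset;   /* DMA memory offset of the received payload */
        unsigned long size;     /* payload length */
    };

    ssize_t recv_via_dma(int sock, void *out, size_t cap)
    {
        struct dma_desc d;
        if (read(sock, &d, sizeof(d)) != (ssize_t)sizeof(d) || d.size > cap)
            return -1;

        int dma_fd = open("/dev/netdma", O_RDONLY);   /* assumed DMA device node */
        if (dma_fd < 0)
            return -1;

        /* Map the DMA memory holding the datagram into this process; the
         * offset is assumed to be page-aligned (alignment handling omitted). */
        void *va = mmap(NULL, d.size, PROT_READ, MAP_SHARED, dma_fd, (off_t)d.offset);
        close(dma_fd);
        if (va == MAP_FAILED)
            return -1;

        memcpy(out, va, d.size);    /* the application now has the datagram */
        munmap(va, d.size);
        return (ssize_t)d.size;
    }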
Therefore, because DMA is introduced into the process, the method provided by the invention involves the CPU in data copying only for the tiny descriptor: 1. in copy 1 above, the DMA is responsible for carrying the data and the CPU does not intervene; 2. in copy 2 above, the transfer is likewise handled by the DMA; 3. in copy 3 above, the network card also fetches the data by DMA, and the CPU does not intervene. Although the CPU still has to copy a very small amount of content, namely the DMA offset address and the datagram size, this amount is negligible compared with the conventional approach, so the invention eliminates at least three copy operations relative to the prior art.
The above describes the datagram processing method of the invention through a TCP-based communication process. Because the datagram to be sent is handled directly by DMA, only a memory address and a data size are processed in the protocol stack, which removes the bulk data copying that would otherwise occur there and effectively reduces the time overhead of the process; and because DMA performs the data exchange with memory, CPU resource occupation is reduced. The datagram processing method therefore effectively increases the transmission rate of network datagrams between the application program and the network card while reducing CPU resource occupancy.
The following are system embodiments corresponding to the above method embodiments, and this embodiment can be implemented in cooperation with the above embodiments. The related technical details mentioned in the above embodiments remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the above embodiments.
The invention also provides an efficient TCP/IP datagram processing system comprising:
a sending module that is invoked when the server application program sends data, and a receiving module that is invoked when the server application program receives data;
the sending module is used for enabling the server application program to construct a TCP server process to establish a Socket connection, write the datagram to be sent into DMA (direct memory access) memory, obtain the memory offset address and written data size fed back by the DMA, and store the memory offset address and data size into the sk_buff of the network protocol stack through the Socket interface; the network card driver maps the datagram from the DMA to a virtual address according to the memory offset address and data size in the sk_buff, reads the datagram in the DMA through that virtual address to replace part of the data in the sk_buff, and encapsulates the replaced sk_buff into a MAC frame data packet, which the physical network card then sends out;
the receiving module is used for enabling the physical network card to receive a MAC frame data packet, after which the network card driver unpacks it to obtain an sk_buff data packet; the data portion of the sk_buff is moved into the DMA, and the memory address returned by the DMA together with the written data size is used as the new data portion of the sk_buff to form a replacement data packet, which is delivered to the network protocol stack; the server application program receives the memory address and data size through the network protocol stack, maps the memory address into a user-mode virtual address, and accesses that virtual address to receive the data packet held in the DMA.
In the efficient TCP/IP datagram processing system, replacing part of the data in the sk_buff in the sending module specifically means: the offset address and data size of the user data held in the sk_buff are replaced with the datagram read from the DMA.
The DMA, instead of the network protocol stack, processes the datagram the user wants to send and provides a read-write interface to the user-mode server application program through a DMA read-write and management driver. This driver performs DMA read-write address allocation and conflict detection: it allocates write addresses and decides whether an address may be written again by checking whether the data written there has already been read out.
In the efficient TCP/IP datagram processing system, the network card driver is ported according to the device in use, so that it is suitable for various network card devices and virtual network cards.
In the sending module, the network card driver obtains a data packet from the kernel network protocol stack, parses it to obtain the offset address and data size written by the DMA, and maps the address into the driver, thereby obtaining the server application program's datagram, replacing the data portion of the sk_buff, and encapsulating it into the MAC frame format.
In the receiving module, the network card driver parses the MAC frame into an sk_buff, writes the parsed data into the DMA, replaces the corresponding data in the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced sk_buff to the network protocol stack for processing.
The present invention also provides a storage medium for storing a program for executing any one of the efficient TCP/IP datagram processing methods.
The invention also provides a client used for any one of the efficient TCP/IP datagram processing systems.

Claims (10)

1. An efficient TCP/IP datagram processing method, comprising:
step 1 is executed when the server application program sends data, and step 2 is executed when the server application program receives data;
step 1: the server application program constructs a TCP server process to establish a Socket connection, writes the datagram to be sent into DMA (direct memory access) memory, and obtains the memory offset address and written data size fed back by the DMA; the memory offset address and data size are stored into the sk_buff of the network protocol stack through the Socket interface; the network card driver maps the datagram from the DMA to a virtual address according to the memory offset address and data size in the sk_buff, reads the datagram in the DMA through that virtual address to replace part of the data in the sk_buff, and encapsulates the replaced sk_buff into a MAC frame data packet, which the physical network card then sends out;
step 2: after the physical network card receives a MAC frame data packet, the network card driver unpacks it to obtain an sk_buff data packet; the data portion of the sk_buff is moved into the DMA, and the memory address returned by the DMA together with the written data size is used as the new data portion of the sk_buff to form a replacement data packet, which is delivered to the network protocol stack; the server application program receives the memory address and data size through the network protocol stack, maps the memory address into a user-mode virtual address, and accesses that virtual address to receive the datagram held in the DMA.
2. The efficient TCP/IP datagram processing method as claimed in claim 1, wherein replacing part of the data in the sk_buff in step 1 specifically comprises: replacing the offset address and data size of the user data held in the sk_buff with the datagram read from the DMA.
3. The efficient TCP/IP datagram processing method as claimed in claim 1, wherein the DMA, instead of the network protocol stack, processes the datagram the user wants to send and provides a read-write interface to the user-mode server application program through a DMA read-write and management driver, the DMA read-write and management driver comprising DMA read-write address allocation and conflict detection, allocating write addresses and determining whether an address can be written again according to whether the data written there has been read out.
4. The efficient TCP/IP datagram processing method as claimed in claim 1, wherein the network card driver is ported according to the device in use, so that it is suitable for various network card devices and virtual network cards;
in step 1, the network card driver obtains a data packet from the kernel network protocol stack, parses it to obtain the offset address and data size written by the DMA, and maps the address into the network card driver, thereby obtaining the server application program's datagram, replacing the data portion of the sk_buff, and encapsulating it into the MAC frame format;
in step 2, the network card driver parses the MAC frame into an sk_buff, writes the parsed data into the DMA, replaces the corresponding data in the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced sk_buff to the network protocol stack for processing.
5. An efficient TCP/IP datagram processing system, comprising:
a sending module that is invoked when the server application program sends data, and a receiving module that is invoked when the server application program receives data;
the sending module is used for enabling the server application program to construct a TCP server process to establish a Socket connection, write the datagram to be sent into DMA (direct memory access) memory, obtain the memory offset address and written data size fed back by the DMA, and store the memory offset address and data size into the sk_buff of the network protocol stack through the Socket interface; the network card driver maps the datagram from the DMA to a virtual address according to the memory offset address and data size in the sk_buff, reads the datagram in the DMA through that virtual address to replace part of the data in the sk_buff, and encapsulates the replaced sk_buff into a MAC frame data packet, which the physical network card then sends out;
the receiving module is used for enabling the physical network card to receive a MAC frame data packet, after which the network card driver unpacks it to obtain an sk_buff data packet; the data portion of the sk_buff is moved into the DMA, and the memory address returned by the DMA together with the written data size is used as the new data portion of the sk_buff to form a replacement data packet, which is delivered to the network protocol stack; the server application program receives the memory address and data size through the network protocol stack, maps the memory address into a user-mode virtual address, and accesses that virtual address to receive the data packet held in the DMA.
6. The efficient TCP/IP datagram processing system as claimed in claim 5, wherein replacing part of the data in the sk_buff in the sending module specifically comprises: replacing the offset address and data size of the user data held in the sk_buff with the datagram read from the DMA.
7. The efficient TCP/IP datagram processing system as claimed in claim 5, wherein the DMA, instead of the network protocol stack, processes the datagram the user wants to send and provides a read-write interface to the user-mode server application program through a DMA read-write and management driver, the DMA read-write and management driver comprising DMA read-write address allocation and conflict detection, allocating write addresses and determining whether an address can be written again according to whether the data written there has been read out.
8. The efficient TCP/IP datagram processing system as claimed in claim 5, wherein the network card driver is ported according to the device in use, so that it is suitable for various network card devices and virtual network cards;
the network card driver in the sending module obtains a data packet from the kernel network protocol stack, parses it to obtain the offset address and data size written by the DMA, and maps the address into the network card driver, thereby obtaining the server application program's datagram, replacing the data portion of the sk_buff, and encapsulating it into the MAC frame format;
the network card driver in the receiving module parses the MAC frame into an sk_buff, writes the parsed data into the DMA, replaces the corresponding data in the sk_buff with the memory offset address and data size fed back by the DMA, and then delivers the replaced sk_buff to the network protocol stack for processing.
9. A storage medium storing a program for executing the efficient TCP/IP datagram processing method according to any one of claims 1 to 4.
10. A client for use in an efficient TCP/IP datagram processing system as claimed in any of claims 5 to 8.
CN202210557371.4A 2022-05-20 2022-05-20 Efficient TCP/IP datagram processing method and system Pending CN115103036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557371.4A CN115103036A (en) 2022-05-20 2022-05-20 Efficient TCP/IP datagram processing method and system

Publications (1)

Publication Number Publication Date
CN115103036A true CN115103036A (en) 2022-09-23

Family

ID=83288131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557371.4A Pending CN115103036A (en) 2022-05-20 2022-05-20 Efficient TCP/IP datagram processing method and system

Country Status (1)

Country Link
CN (1) CN115103036A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101135980A (en) * 2006-08-29 2008-03-05 飞塔信息科技(北京)有限公司 Device and method for realizing zero copy based on Linux operating system
CN101340574A (en) * 2008-08-04 2009-01-07 中兴通讯股份有限公司 Method and system realizing zero-copy transmission of stream media data
CN102402487A (en) * 2011-11-15 2012-04-04 北京天融信科技有限公司 Zero copy message reception method and system
CN108494679A (en) * 2018-06-01 2018-09-04 武汉绿色网络信息服务有限责任公司 A kind of SSH message forwarding methods and device for realizing router based on linux system
CN109413106A (en) * 2018-12-12 2019-03-01 中国航空工业集团公司西安航空计算技术研究所 A kind of ICP/IP protocol stack implementation method
CN109766187A (en) * 2019-01-10 2019-05-17 烽火通信科技股份有限公司 Network packet high speed processing retransmission method and system
CN113973091A (en) * 2020-07-23 2022-01-25 华为技术有限公司 Message processing method, network equipment and related equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450058A (en) * 2023-06-19 2023-07-18 浪潮电子信息产业股份有限公司 Data transfer method, device, heterogeneous platform, equipment and medium
CN116450058B (en) * 2023-06-19 2023-09-19 浪潮电子信息产业股份有限公司 Data transfer method, device, heterogeneous platform, equipment and medium
CN117318892A (en) * 2023-11-27 2023-12-29 阿里云计算有限公司 Computing system, data processing method, network card, host computer and storage medium
CN117318892B (en) * 2023-11-27 2024-04-02 阿里云计算有限公司 Computing system, data processing method, network card, host computer and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination