CN111538694B - Data caching method for network interface to support multiple links and retransmission


Info

Publication number
CN111538694B
Authority
CN
China
Prior art keywords
data
pointer
link
shared cache
cache space
Prior art date
Legal status
Active
Application number
CN202010654252.1A
Other languages
Chinese (zh)
Other versions
CN111538694A (en)
Inventor
王自伟
王志奇
伍楠
Current Assignee
Changzhou Nanfei Microelectronics Co ltd
Original Assignee
Changzhou Nanfei Microelectronics Co ltd
Priority date
Filing date
Publication date
Application filed by Changzhou Nanfei Microelectronics Co ltd
Priority to CN202010654252.1A
Publication of CN111538694A
Application granted
Publication of CN111538694B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 - Information transfer, e.g. on bus
    • G06F13/40 - Bus structure
    • G06F13/4004 - Coupling between buses
    • G06F13/4022 - Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/901 - Indexing; Data structures therefor; Storage structures
    • G06F16/9024 - Graphs; Linked lists
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 - Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • G06F2213/00 - Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0026 - PCI express

Abstract

The invention provides a data caching method for a network interface to support multiple links and retransmission, in which caching and releasing data on the network interface device comprises the following steps: searching for free storage locations in a shared cache space and storing the data requesting caching, which is link-oriented data, into those free locations; and releasing the link-oriented data from the shared cache space according to a polling (round-robin) scheduling mode, and, if no acknowledgement from the data message receiving end is received within a period of time after a data message is sent, reading the previous data from the shared cache space again and releasing it again. The invention supports multi-link data caching, with different links sharing the data storage space, and also supports a configurable data message retransmission function, ensuring that all user data can be sent successfully.

Description

Data caching method for network interface to support multiple links and retransmission
Technical Field
The invention relates to the field of computer network communication, in particular to a data caching method for a network interface to support multiple links and retransmission.
Background
With the rapid development of big data, the scale of data computation has expanded quickly, posing great challenges to existing processing equipment. One approach is to upgrade the computing power of the existing devices; the other is cooperative computing, which decomposes a computation into small units that are processed on different computing devices. The latter requires a large amount of data exchange between the devices and therefore places high demands on the data transmission performance of the network interface.
A general-purpose network interface card is attached to the system through a PCIE bus, and both its configuration and its data transfers go through PCIE. The core computation is completed on a CPU or GPU, so if CPUs and GPUs on different devices want to exchange data, the transfers must pass through the PCIE bus. Many existing big-data devices, however, are implemented with programmable devices so that the algorithm of the core operation unit can be updated or changed; such devices can, of course, also communicate with the CPU over PCIE. But if the network interface function is realized directly on the programmable device, data can be transmitted directly between the core computing devices, eliminating the transmission delay on the PCIE bus. This requires a data buffer space for the core computing device: the buffer on a conventional network interface card is very small and only buffers messages, so the data buffer has to be placed in host memory, and sending or receiving data then requires moving it between the memory and the core computing device.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the above problems in the prior art, the invention provides a data caching method for a network interface to support multiple links and retransmission, in which the data cache is placed directly on the network interface device and the core operation device accesses the data cache of the network interface directly, instead of using host memory for data transport between the core operation device and the network interface, so that the delay caused by data transport can be reduced.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a data caching method for a network interface to support multiple links and retransmission, wherein data caching and releasing on a network interface device comprises the following steps:
1) searching for a free storage location in the shared cache space and storing the data requesting caching into the free locations of the shared cache space, wherein the data requesting caching is link-oriented data;
2) releasing the link-oriented data in the shared cache space from the shared cache space according to a polling scheduling mode, and, if no acknowledgement from the data message receiving end is received within a period of time after a data message is sent, reading the previous data from the shared cache space again and releasing it from the shared cache space again.
Furthermore, the shared cache space is managed as a single linked list.
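For illustration, the elements this single-linked-list management needs (shared cache units, head/tail/next-hop pointer RAMs, a free-pointer mapping, and a free-pointer FIFO) can be modelled as plain data structures. The following C sketch is a software stand-in under assumed sizes (128 links and 1024 storage units of 256 B each, matching the worked example later in the description); all names are illustrative assumptions, and the actual design is hardware RAMs and a FIFO rather than C structs.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINKS  128            /* links (connections) cached simultaneously */
#define NUM_UNITS  1024           /* storage units in the shared cache space   */
#define UNIT_BYTES 256            /* bytes per storage unit                    */

typedef uint16_t ptr_t;           /* 10-bit pointer, one per storage unit      */
#define NULL_PTR ((ptr_t)0xFFFF)  /* marker for "no next hop"                  */

typedef struct {
    uint8_t data[NUM_UNITS][UNIT_BYTES]; /* shared cache space                 */
    ptr_t   head[NUM_LINKS];             /* head-pointer RAM, one per link     */
    ptr_t   tail[NUM_LINKS];             /* tail-pointer RAM, one per link     */
    ptr_t   next[NUM_UNITS];             /* next-hop-pointer RAM               */
    bool    in_use[NUM_UNITS];           /* free-pointer mapping RAM           */
    ptr_t   fifo[NUM_UNITS];             /* free-pointer FIFO storage          */
    int     fifo_rd, fifo_wr, fifo_cnt;  /* FIFO read/write/count state        */
} shared_cache_t;
```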
Further, step 1) is preceded by a pre-configuration step, specifically including: setting the size of the shared cache space according to the TCP message parameters and the number of links cached simultaneously; setting the size of the head-pointer RAM space according to the number of link head pointers; setting the size of the tail-pointer RAM space according to the number of link tail pointers; setting the size of the next-hop-pointer RAM space according to the number of next-hop pointers, one for the next-hop address of each storage unit in the shared cache space; and setting the size of the free-pointer-mapping RAM space for free-pointer mapping according to the number of storage units in the shared cache space.
Further, the process of storing the data into the shared cache space in step 1) is a process of establishing a linked list, and the specific steps of establishing the linked list include:
A1) idle operation: after reset, the linked-list establishment process is in an idle (no-operation) state;
A2) initialization operation: initialize the linked-list elements; after initialization is complete, send an initialization-completion report and listen for requests; if the linked-list elements need re-initialization, restart the initialization operation, and if a port request is received, enter step A3);
A3) pointer acquisition operation: check the empty/full state of the free-pointer FIFO; if the FIFO is not full, scan the free-pointer mapping and acquire a pointer (pointers correspond one-to-one to the storage units in the shared cache space) and write the acquired pointer into the free-pointer FIFO; if the FIFO is not empty, enter step A4);
A4) pointer update operation: read a pointer from the free-pointer FIFO and mark its state as used, then receive the request data; if the request data is the first data of a link, write the read pointer as the head pointer into the head-pointer RAM at the corresponding link position; if the request data is the last data of the link, write the read pointer as the tail pointer into the tail-pointer RAM at the corresponding link position; if the request data is intermediate data of the link, write the read pointer as a next-hop pointer into the next-hop-pointer RAM;
A5) data write operation: after the write address is acquired, write the request data into the storage unit corresponding to the head, tail, or next-hop pointer according to its position in the link; if the request data is the last data, return to step A1); if the current storage unit is full, enter step A3) to acquire the next free pointer and use the address of the corresponding next storage unit as the next-hop address of the current storage unit.
Further, the process of releasing the data from the shared cache space in step 2) is a process of releasing the linked list, and the specific steps of releasing the linked list include:
B1) idle operation: after reset, the linked-list release process is in an idle state; when polling scheduling arbitrates out a link that needs to be released, a release-linked-list request for that link is sent and step B2) is entered;
B2) pointer acquisition operation: look up the head-pointer RAM according to the link scheduled in the release-linked-list request to obtain the corresponding head pointer, send the release-linked-list request to the core operation module or the network interface at the same time, and enter step B3) after receiving the request response;
B3) data read operation: find the corresponding storage unit in the shared cache space according to the head pointer, read the data stored in that unit, and enter step B4) after the data has been fully released;
B4) pointer update operation: update the storage unit's state to free; if the next-hop address of the unit corresponding to the head pointer is not null, look up the next-hop-pointer RAM with that address to obtain the next-hop pointer, take it as the new head pointer, and return to step B3) to release the data of the next storage unit; if the next-hop address is null or the released data is the last data of the link, return to step B1) to wait to release the data of the next link.
Further, in step B3), if the destination of the data released from the linked list is the core operation module, the data is released until all of it has been released; if the destination is the network interface, the release stops once the released data reaches the payload size of the current data message and the head pointer and next-hop pointer are saved; the release continues after an acknowledgement is received, at which point the saved head and next-hop pointers are set as free pointers, and if the wait times out, the data of the previous message payload is fetched from the shared cache space again according to the saved head and next-hop pointers and the linked-list release is performed anew.
Further, step B1) also includes a null-pointer collection step, specifically: after reset, if the free-pointer FIFO is not full, search the shared cache space for a free storage unit and write the corresponding free pointer into the free-pointer FIFO; once the FIFO is full, stop searching the shared cache space for free storage units.
The invention also provides a data caching system for a network interface supporting multiple links and retransmission, which comprises a computer device programmed or configured to execute the data caching method for the network interface supporting multiple links and retransmission.
The invention also provides a data caching system for a network interface to support multi-link and retransmission, which comprises a data caching module arranged on network interface equipment, wherein the data caching module comprises:
the core unit is used for automatic initialization of the queue-controlled linked lists, for polling scheduling of connection-oriented cached data to complete the release of data in the data cache, for storage and management of connection-oriented data, and for real-time reporting of the shared cache space status;
a null pointer FIFO unit for providing a free pointer FIFO space;
a head pointer unit for providing a head pointer RAM space;
a next pointer unit for providing a next-hop pointer RAM space;
a tail pointer unit for providing a tail pointer RAM space;
the linked list establishing unit is used for replying with a data-storage request response according to the state of the data storage unit after receiving a data-storage request signal; if the data can be stored, it acquires a null pointer from the null pointer FIFO unit, updates the head pointer, next pointer, or tail pointer according to the identification of the received data, and writes the data into the data storage unit;
a null pointer mapping storage unit for providing a free pointer mapping RAM space;
a null pointer acquisition unit for automatically scanning pointer states; if a null pointer is found, it updates the null-pointer state information, writes the null pointer into the null pointer mapping storage unit and the null pointer FIFO unit, and updates the null-pointer mapping state;
the polling arbitration unit is used for arbitrating the data of the link to be released through polling scheduling;
the data storage unit is used for providing a shared cache space;
the release linked list unit is used for acquiring a head pointer from the head pointer unit and reading data from the data storage unit according to it, then acquiring a next-hop pointer from the next pointer unit, reading data from the data storage unit according to it, and updating the head pointer; data is acquired in this way until the tail pointer is obtained from the tail pointer unit and the data read according to it, which completes the link's data transmission;
the high-level extensible interface unit is used for converting between the interface of the data cache module and the high-level extensible interface (AXI) bus interface;
and the instruction pointer register unit is used for configuring the functional registers of all the modules.
The present invention also proposes a computer-readable storage medium storing a computer program programmed or configured to execute the above-mentioned data caching method for a network interface supporting multiple links and retransmissions.
Compared with the prior art, the invention has the advantages that:
the invention supports multi-link data caching, different link data share data storage space, simultaneously supports the function of configurable data message retransmission, ensures that all user data can be successfully sent, and can design the capacity of the shared cache space according to the transmission speed of a network interface, the data load of the message and the requirement of a core operation module.
Drawings
FIG. 1 is a flowchart of a linked list establishment process according to an embodiment of the present invention.
FIG. 2 is a flowchart of a process for releasing a linked list according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a data caching module according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments, without thereby limiting the scope of protection of the invention.
The invention provides a data caching method for a network interface to support multi-link and retransmission, wherein the data caching and releasing on network interface equipment comprises the following steps:
1) searching for a free storage location in the shared cache space and storing the data requesting caching into the free locations of the shared cache space, wherein the data requesting caching is link-oriented data;
2) releasing the link-oriented data in the shared cache space from the shared cache space according to a polling scheduling mode, and, if no acknowledgement from the data message receiving end is received within a period of time after a data message is sent, reading the previous data from the shared cache space again and releasing it from the shared cache space again.
Through the above steps, the method supports multi-link data caching: different links share the data storage space, and a configurable data message retransmission function ensures that all user data can be sent successfully.
In this embodiment, the capacity of the shared cache space can be designed according to the transmission speed of the network interface, the payload size of the messages, and the requirements of the core operation module. Step 1) is preceded by a pre-configuration step, specifically: setting the size of the shared cache space according to the TCP message parameters and the number of links cached simultaneously; setting the size of the head-pointer RAM space according to the number of link head pointers; setting the size of the tail-pointer RAM space according to the number of link tail pointers; setting the size of the next-hop-pointer RAM space according to the number of next-hop pointers, one for the next-hop address of each storage unit in the shared cache space; and setting the size of the free-pointer-mapping RAM space for free-pointer mapping according to the number of storage units in the shared cache space.
For example, the payload size is calculated from the length of the entire TCP packet minus the layer-2 header, IP header, TCP header, and CRC fields. Assuming the entire TCP packet does not exceed 1500 B, with a 14 B layer-2 header, 20 B IP header, 28 B TCP header, and 4 B CRC field, the payload is at most 1434 B, which is about 179.25 × 8 B. The data bus width is determined by the device interface and is 2^n times 8 B; the size of a storage unit of the data storage space is 2^m times 8 B, with m and n integers greater than 0. Taking the greatest common fit, the payload is capped at 1024 B, so the whole message is at most 1090 B long.
The shared cache space is sized for simultaneously caching 128 (number of connections) × 2 (number of packets) × 1024 B = 256 KB (i.e., 2048 Kb). Assuming a data bus width of 8 B × 2^1 = 16 B, the depth of the shared cache space is 256 KB / 16 B = 2^k words, with k = 14. Assuming the storage unit size is 8 B × 2^5 = 256 B, which divides the payload size exactly (i.e., 4 storage units hold exactly one payload), the shared cache space contains 256 KB / (32 × 8 B) = 1K storage units in total. The next-hop-pointer RAM, which caches the next-hop address of each storage unit, is implemented as a true dual-port RAM with a 10-bit address depth (1024 entries) and a 10-bit width; the head-pointer RAM and tail-pointer RAM are also true dual-port RAMs, with a 7-bit address depth and a 10-bit width. Each link has exactly one head pointer and one tail pointer, so storing the head and tail pointers of 128 links requires 128 locations each, and a RAM with a 7-bit address depth stores exactly 128. Since the number of connections is assumed to be 128, the valid mapping of the corresponding head and tail pointers is implemented with a 128-bit-wide register. The free-pointer mapping RAM is likewise a true dual-port RAM with a 10-bit address depth and a 10-bit width. All links share the pointers of the whole cache space, and therefore all links share the data cache space.
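The arithmetic of this worked example can be checked with a short program; the figures follow the text above, while the variable names are illustrative:

```c
#include <stdio.h>

int main(void) {
    const int mtu = 1500, l2 = 14, ip = 20, tcp = 28, crc = 4;
    const int max_payload = mtu - l2 - ip - tcp - crc;       /* 1434 B          */
    const int payload     = 1024;                            /* capped at 2^m*8B */
    const int links = 128, pkts_per_link = 2;
    const int cache_bytes = links * pkts_per_link * payload; /* 256 KB          */
    const int bus_width   = 16;                              /* 8 B * 2^1       */
    const int unit_bytes  = 256;                             /* 8 B * 2^5       */

    printf("max payload      : %d B\n", max_payload);            /* 1434        */
    printf("shared cache     : %d KB\n", cache_bytes / 1024);    /* 256         */
    printf("cache depth      : %d words\n", cache_bytes / bus_width); /* 16384 = 2^14 */
    printf("storage units    : %d\n", cache_bytes / unit_bytes); /* 1024 = 1K   */
    printf("units per payload: %d\n", payload / unit_bytes);     /* 4           */
    return 0;
}
```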
According to the application scenario, the assumed data cache size can be expanded or cut accordingly, and the depth and width of the RAMs storing the head, tail, and next-hop pointers are adjusted to match.
In this embodiment, data storage and release in the shared cache space identify the position of data through pointers, so the data of one link can be stored anywhere in the shared cache space rather than at consecutive pointer positions, as long as the target storage units are empty. When data is released, it is released in the order in which it was stored, i.e., by following the sequence of pointers.
The shared cache space is managed as a single linked list, and linked-list element initialization and linked-list management are realized through linked-list establishment and linked-list recycling, as follows:
as shown in fig. 1, the process of storing data into the shared cache space in step 1) is a process of establishing a linked list, and the specific steps of establishing a linked list include:
A1) and (3) idle operation: after resetting, the linked list establishing process is in a null operation state;
A2) initialization operation: initializing the linked list elements, sending an initialization completion report and monitoring a request after the initialization is completed, if the linked list elements are initialized, restarting the linked list element initialization operation, and if a port request is received, entering the step A3);
A3) and (3) pointer acquisition operation: judging the empty and full state of the free pointer FIFO, if the free pointer FIFO is in a non-full state, scanning and acquiring a pointer in an idle pointer mapping, wherein the pointer corresponds to a storage unit in a shared cache space one by one, the acquired pointer is written into the free pointer FIFO, and if the free pointer FIFO is in a non-empty state, the step A4 is carried out), the FIFO is a first-in first-out memory and is divided into a writing special area and a reading special area, the reading operation and the writing operation can be carried out asynchronously, and the data written into the writing special area is read from the reading special area according to the writing sequence;
A4) and (3) pointer updating operation: reading a pointer from the free pointer FIFO and changing the state of the pointer to be used, then receiving request data, and if the request data is the first data of a link, writing the read pointer into a head pointer RAM space of a corresponding link position as a head pointer; if the request data is the last data of the link, writing the read pointer as a tail pointer into a tail pointer RAM space of the corresponding link position; if the request data is the linked intermediate data, writing the read pointer as a next-hop pointer into a next-hop pointer RAM space;
A5) and (3) data writing operation: after the write address is acquired, the request data is respectively written into the storage units corresponding to the head pointer, the tail pointer or the next jump pointer according to the position in the link, if the request data is the last data, the step A1) is carried out, if the current storage unit is full, the step A3) is carried out, the next pointer in an idle state is acquired, the address of the corresponding next storage unit is used as the next jump address of the current storage unit, and after the write data operation is carried out once, the ready signal of the corresponding link data is set.
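As a reading aid for steps A1)-A5), the enqueue path can be sketched in C, continuing the illustrative shared_cache_t above. The sketch folds the three A4) cases (head, next-hop, tail) into one software-style chain update, assumes the free-pointer FIFO is already non-empty, and omits the initialization of step A2); the function names are invented.

```c
#include <string.h>

/* A3)/A4): pop a pre-scanned free pointer from the FIFO and mark the
 * corresponding storage unit as used.                                  */
static ptr_t alloc_unit(shared_cache_t *c) {
    ptr_t p = c->fifo[c->fifo_rd];
    c->fifo_rd = (c->fifo_rd + 1) % NUM_UNITS;
    c->fifo_cnt--;
    c->in_use[p] = true;
    c->next[p]   = NULL_PTR;
    return p;
}

/* Store one unit of request data for `link`; `first` marks the first
 * data of the link and `last` the last.                                */
void store_data(shared_cache_t *c, int link, const uint8_t *buf,
                bool first, bool last) {
    ptr_t p = alloc_unit(c);
    if (first)
        c->head[link] = p;           /* A4): pointer becomes the head   */
    else
        c->next[c->tail[link]] = p;  /* A4): chain as next-hop address  */
    c->tail[link] = p;               /* A4): tail tracks the newest unit */
    memcpy(c->data[p], buf, UNIT_BYTES); /* A5): data write operation   */
    if (last) {
        /* A5): in hardware this raises the link's data-ready signal
         * that the polling scheduler of step 2) watches.                */
    }
}
```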
In step 2) of this embodiment, releasing the link-oriented data in the shared cache space according to a polling scheduling mode specifically includes: polling all link-data ready signals and, once the link whose data is to be released has been arbitrated out, releasing that link's data from the shared cache space. Because releasing one storage unit takes a long time, a single-level poll schedules inefficiently; the scheduling can instead be split into two levels, each of which is a polling (round-robin) stage, as sketched below.
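A C sketch of such a two-level round-robin arbiter follows; the 8 × 16 group split and the ready[] bitmap are illustrative assumptions, not taken from the patent.

```c
#include <stdbool.h>

#define NUM_LINKS 128
#define GROUPS    8
#define GROUP_SZ  (NUM_LINKS / GROUPS)

/* Returns the next ready link to release, or -1 if none is ready.      */
int poll_next_link(const bool ready[NUM_LINKS]) {
    static int grp_rr = 0, link_rr[GROUPS] = {0};
    for (int g = 0; g < GROUPS; g++) {               /* level-1 round robin */
        int grp = (grp_rr + g) % GROUPS;
        for (int i = 0; i < GROUP_SZ; i++) {         /* level-2 round robin */
            int link = grp * GROUP_SZ + (link_rr[grp] + i) % GROUP_SZ;
            if (ready[link]) {
                link_rr[grp] = (link % GROUP_SZ + 1) % GROUP_SZ;
                grp_rr = (grp + 1) % GROUPS;         /* rotate both levels  */
                return link;
            }
        }
    }
    return -1;
}
```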
In this embodiment, the process of releasing the data from the shared cache space in step 2) is a process of releasing the linked list (a software sketch follows this list), and the specific steps of releasing the linked list include:
B1) idle operation: after reset, the linked-list release process is in an idle state; when polling scheduling arbitrates out a link that needs to be released, a release-linked-list request for that link is sent and step B2) is entered;
B2) pointer acquisition operation: look up the head-pointer RAM according to the link scheduled in the release-linked-list request to obtain the corresponding head pointer, send the release-linked-list request to the core operation module or the network interface at the same time, and enter step B3) after receiving the request response;
B3) data read operation: find the corresponding storage unit in the shared cache space according to the head pointer, read the data stored in that unit, and enter step B4) after the data has been fully released;
B4) pointer update operation: update the storage unit's state to free; if the next-hop address of the unit corresponding to the head pointer is not null, look up the next-hop-pointer RAM with that address to obtain the next-hop pointer, take it as the new head pointer, and return to step B3) to release the data of the next storage unit; if the next-hop address is null or the released data is the last data of the link, return to step B1) to wait to release the data of the next link.
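A matching C sketch of the release walk B1)-B4), continuing the illustrative shared_cache_t. This variant models the core-operation-module direction, in which a unit is freed as soon as its data is read; the network-interface direction, which must hold units until an acknowledgement arrives, is sketched after the next paragraph. The consume callback is a stand-in for the data output port.

```c
/* Release one link's data by walking the chain from the head pointer.  */
void release_link(shared_cache_t *c, int link,
                  void (*consume)(const uint8_t *unit)) {
    ptr_t p = c->head[link];              /* B2): fetch the head pointer */
    while (p != NULL_PTR) {
        consume(c->data[p]);              /* B3): data read operation    */
        ptr_t nxt = c->next[p];           /* B4): look up the next hop   */
        c->in_use[p] = false;             /* B4): unit becomes free      */
        if (p == c->tail[link])           /* last data of the link       */
            break;
        p = nxt;                          /* B4): next hop is new head   */
    }
    c->head[link] = NULL_PTR;             /* B1): back to the idle state */
}
```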
In step B3), if the destination of the data released from the linked list is the core operation module, the data is released until all of it has been released; if the destination is the network interface, the release stops once the released data reaches the payload size of the current data message, the head pointer and next-hop pointer are saved, and the release continues after an acknowledgement is received, at which point the saved head and next-hop pointers are set as free pointers; if the wait times out, the data of the previous message payload is fetched from the shared cache space again according to the saved head and next-hop pointers and the linked-list release is performed anew.
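For the network-interface direction, this retransmission rule can be modelled as a checkpoint of the chain position: pointers stay allocated until the acknowledgement arrives, and a timeout simply rewinds release to the checkpoint. A sketch under the same assumptions (the timer itself and the configurable timeout value are omitted):

```c
typedef struct {
    ptr_t saved_head;   /* head pointer saved when the payload went out  */
    ptr_t resume;       /* where release continues after an ACK          */
    int   units_sent;   /* storage units in the in-flight payload        */
} retx_ctx_t;

/* Timeout path: no ACK arrived, so the previous payload is read from
 * the shared cache again, starting at the saved head pointer.           */
ptr_t rewind_for_retransmit(const retx_ctx_t *rc) {
    return rc->saved_head;
}

/* ACK path: the saved head and next-hop pointers become idle pointers
 * and release continues with the rest of the link's chain.              */
void on_ack(shared_cache_t *c, retx_ctx_t *rc) {
    ptr_t p = rc->saved_head;
    for (int i = 0; i < rc->units_sent && p != NULL_PTR; i++) {
        ptr_t nxt = c->next[p];
        c->in_use[p] = false;         /* pointer set free only on ACK    */
        p = nxt;
    }
    rc->resume = p;                   /* new head for the next payload   */
}
```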
Step B1) of this embodiment further includes a null-pointer collection step, specifically: after reset, if the free-pointer FIFO is not full, search the shared cache space for a free storage unit and write the corresponding free pointer into the free-pointer FIFO; once the FIFO is full, stop searching the shared cache space for free storage units.
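In software terms, the null-pointer collection step is a background scan that keeps the free-pointer FIFO topped up. A sketch on the same illustrative structure; here the in_use flag doubles as a queued mark so that a unit is never pushed into the FIFO twice:

```c
void collect_free_pointers(shared_cache_t *c) {
    static ptr_t scan = 0;                    /* resumable scan position  */
    for (int n = 0; n < NUM_UNITS; n++) {
        if (c->fifo_cnt == NUM_UNITS)         /* FIFO full: stop the scan */
            return;
        if (!c->in_use[scan]) {
            c->fifo[c->fifo_wr] = scan;       /* queue the free pointer   */
            c->fifo_wr = (c->fifo_wr + 1) % NUM_UNITS;
            c->fifo_cnt++;
            c->in_use[scan] = true;           /* mark queued/reserved     */
        }
        scan = (scan + 1) % NUM_UNITS;
    }
}
```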
The invention also provides a data caching system for a network interface supporting multiple links and retransmission, which comprises a computer device programmed or configured to execute the data caching method for the network interface supporting multiple links and retransmission.
As shown in fig. 3, the data caching system for a network interface supporting multiple links and retransmission of the present invention further includes a data caching module disposed on the network interface device, where the data caching module includes:
a core unit (not shown in the figure) for automatic initialization of the queue-controlled linked lists, for polling scheduling of connection-oriented cached data to complete the release of data in the data cache, for storage and management of connection-oriented data, and for real-time reporting of the shared cache space status;
a null pointer FIFO unit for providing a free pointer FIFO space;
a head pointer unit for providing a head pointer RAM space;
a next pointer unit for providing a next-hop pointer RAM space;
a tail pointer unit for providing a tail pointer RAM space;
the linked list establishing unit is used for replying with a data-storage request response according to the state of the data storage unit after receiving a data-storage request signal; if the data can be stored, it acquires a null pointer from the null pointer FIFO unit, updates the head pointer, next pointer, or tail pointer according to the identification of the received data, and writes the data into the data storage unit;
a null pointer mapping storage unit for providing a free pointer mapping RAM space;
a null pointer acquisition unit for automatically scanning pointer states; if a null pointer is found, it updates the null-pointer state information, writes the null pointer into the null pointer mapping storage unit and the null pointer FIFO unit, and updates the null-pointer mapping state;
the polling arbitration unit is used for arbitrating the data of the link to be released through polling scheduling;
the data storage unit is used for providing a shared cache space;
the release linked list unit is used for acquiring a head pointer from the head pointer unit and reading data from the data storage unit according to it, then acquiring a next-hop pointer from the next pointer unit, reading data from the data storage unit according to it, and updating the head pointer; data is acquired in this way until the tail pointer is obtained from the tail pointer unit and the data read according to it, which completes the link's data transmission;
the high-level extensible interface unit is used for converting between the interface of the data cache module and the high-level extensible interface (AXI) bus; a FIFO memory inside the unit performs the conversion to the AXI-Stream bus interface and serves on the one hand for clock isolation and on the other for data bit-width conversion;
and the instruction pointer register unit is used for configuring the function registers of each module, such as the initialization enable register, the initialization complete register, the data transmission complete register, and the data message transmit-count register.
As shown in Fig. 3, the data cache module of this embodiment further includes a data input port and a data output port. The data input port is connected to the linked list establishing unit, the null pointer FIFO unit, the head pointer unit, the next pointer unit, the tail pointer unit, and the data storage unit; the data output port is connected to the release linked list unit, the head pointer unit, the next pointer unit, the tail pointer unit, the polling arbitration unit, and the data storage unit. In addition, the high-level extensible interface unit is connected to the user module, the linked list establishing unit, and the release linked list unit, and the instruction pointer register unit is connected to the linked list establishing unit, the release linked list unit, and the null pointer acquisition unit.
The data caching system for a network interface supporting multiple links and retransmission places the data caching module directly on the network interface device, and the core operation device directly accesses the data cached in the module, without moving data between the core operation device and the network interface, so the delay caused by data transport is reduced; meanwhile, the capacity of the module's shared cache space can be designed according to the transmission speed of the network interface, the message payload size, and the requirements of the core operation module.
The present invention also proposes a computer-readable storage medium storing a computer program programmed or configured to execute the above-mentioned data caching method for a network interface supporting multiple links and retransmissions.
The foregoing is merely a description of preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the invention has been described with reference to preferred embodiments, it is not limited thereto. Any simple modification, equivalent change, or adaptation of the above embodiments within the technical spirit of the invention falls within the protection scope of its technical scheme.

Claims (5)

1. A data caching method for a network interface supporting multiple links and retransmissions, wherein the caching and releasing of data on a network interface device comprises the steps of:
setting the size of a shared cache space according to the parameters of the TCP message and the number of links cached simultaneously, wherein the size of the shared cache space is the product of the number of simultaneously cached links, the number of packets per connection, and the maximum payload of the TCP message; setting the size of a head-pointer RAM space according to the number of link head pointers; setting the size of a tail-pointer RAM space according to the number of link tail pointers; setting the size of a next-hop-pointer RAM space according to the number of next-hop pointers for the next-hop addresses of the storage units in the shared cache space; and setting the size of a free-pointer-mapping RAM space for free-pointer mapping according to the number of storage units in the shared cache space;
1) searching for a free storage location in the shared cache space and storing the data requesting caching into the free locations of the shared cache space, wherein the data requesting caching is link-oriented data and the process of storing the data into the shared cache space is a process of establishing a linked list, the specific steps of establishing the linked list comprising:
A1) idle operation: after reset, the linked-list establishment process is in an idle state;
A2) initialization operation: initializing the linked-list elements; after initialization is complete, sending an initialization-completion report and listening for requests; if the linked-list elements need re-initialization, restarting the initialization operation, and if a port request is received, entering step A3);
A3) pointer acquisition operation: checking the empty/full state of the free-pointer FIFO; if the FIFO is not full, scanning the free-pointer mapping to acquire a pointer, the pointers corresponding one-to-one to the storage units in the shared cache space, and writing the acquired pointer into the free-pointer FIFO; if the FIFO is not empty, entering step A4);
A4) pointer update operation: reading a pointer from the free-pointer FIFO and marking its state as used, then receiving the request data; if the request data is the first data of a link, writing the read pointer as the head pointer into the head-pointer RAM at the corresponding link position; if the request data is the last data of the link, writing the read pointer as the tail pointer into the tail-pointer RAM at the corresponding link position; if the request data is intermediate data of the link, writing the read pointer as a next-hop pointer into the next-hop-pointer RAM;
A5) data write operation: after the write address is acquired, writing the request data into the storage unit corresponding to the head, tail, or next-hop pointer according to its position in the link; if the request data is the last data, entering step A1); if the current storage unit is full, entering step A3) to acquire the next free pointer and using the address of the corresponding next storage unit as the next-hop address of the current storage unit;
2) releasing the link-oriented data in the shared cache space from the shared cache space according to a polling scheduling mode and, if no acknowledgement from the data message receiving end is received within a period of time after a data message is sent, reading the previous data from the shared cache space again and releasing it from the shared cache space again, wherein the process of releasing the data from the shared cache space is a process of releasing the linked list, the specific steps of releasing the linked list comprising:
B1) idle operation: after reset, the linked-list release process is in an idle state; when polling scheduling arbitrates out a link that needs to be released, sending a release-linked-list request for that link and entering step B2);
B2) pointer acquisition operation: looking up the head-pointer RAM according to the link scheduled in the release-linked-list request to obtain the corresponding head pointer, sending the release-linked-list request to a core operation module or the network interface at the same time, and entering step B3) after receiving the request response;
B3) data read operation: finding the corresponding storage unit in the shared cache space according to the head pointer, reading the data stored in that unit, and entering step B4) after the data has been fully released; if the destination of the released data is the network interface, stopping the release once the released data reaches the payload size of the current data message, saving the head pointer and next-hop pointer, continuing the release after an acknowledgement is received and setting the saved head and next-hop pointers as free pointers, and, if the wait times out, fetching the data of the previous message payload from the shared cache space according to the saved head and next-hop pointers and performing the linked-list release anew;
B4) pointer update operation: updating the storage unit's state to free; if the next-hop address of the storage unit corresponding to the head pointer is not null, looking up the next-hop-pointer RAM with that address to obtain the next-hop pointer as the new head pointer, and returning to step B3) to release the data of the next storage unit; if the next-hop address of the storage unit corresponding to the head pointer is null or the released data is the last data of the link, returning to step B1) to wait to release the data of the next link.
2. The method as claimed in claim 1, wherein the shared buffer space is managed in a single linked list manner.
3. The data caching method for a network interface supporting multiple links and retransmission according to claim 1, wherein step B1) further includes a null-pointer collection step, specifically: after reset, if the free-pointer FIFO is not full, searching the shared cache space for a free storage unit and writing the corresponding free pointer into the free-pointer FIFO; once the FIFO is full, stopping the search for free storage units in the shared cache space.
4. A data caching system for a network interface supporting multiple links and retransmissions, comprising a computer device, wherein the computer device is programmed or configured to perform the data caching method for a network interface supporting multiple links and retransmissions of any of claims 1 to 3.
5. A computer-readable storage medium storing a computer program programmed or configured to perform the data caching method for network interface supporting multiple links and retransmissions of any one of claims 1 to 3.
CN202010654252.1A 2020-07-09 2020-07-09 Data caching method for network interface to support multiple links and retransmission Active CN111538694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010654252.1A CN111538694B (en) 2020-07-09 2020-07-09 Data caching method for network interface to support multiple links and retransmission

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010654252.1A CN111538694B (en) 2020-07-09 2020-07-09 Data caching method for network interface to support multiple links and retransmission

Publications (2)

Publication Number Publication Date
CN111538694A CN111538694A (en) 2020-08-14
CN111538694B (en) 2020-11-10

Family

ID=71979758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010654252.1A Active CN111538694B (en) 2020-07-09 2020-07-09 Data caching method for network interface to support multiple links and retransmission

Country Status (1)

Country Link
CN (1) CN111538694B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112650449B (en) * 2020-12-23 2022-12-27 展讯半导体(南京)有限公司 Method and system for releasing cache space, electronic device and storage medium
CN113014308B (en) * 2021-02-23 2022-08-02 湖南斯北图科技有限公司 Satellite communication high-capacity channel parallel Internet of things data receiving method
CN115190085A (en) * 2022-05-26 2022-10-14 中科驭数(北京)科技有限公司 Data sharing method and device based on SMB transmission and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102195783B (en) * 2010-03-11 2015-04-08 瑞昱半导体股份有限公司 Network interface card capable of sharing buffer and buffer sharing method
CN106059957B (en) * 2016-05-18 2019-09-10 中国科学院信息工程研究所 Quickly flow stream searching method and system under a kind of high concurrent network environment
CN108111329A (en) * 2016-11-25 2018-06-01 广东亿迅科技有限公司 Mass users cut-in method and system based on TCP long links
CN109842585B (en) * 2017-11-27 2021-04-13 中国科学院沈阳自动化研究所 Network information safety protection unit and protection method for industrial embedded system
CN107995129B (en) * 2017-11-30 2021-12-17 锐捷网络股份有限公司 NFV message forwarding method and device
CN111371920A (en) * 2020-03-16 2020-07-03 广州根链国际网络研究院有限公司 DNS front-end analysis method and system

Also Published As

Publication number Publication date
CN111538694A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111538694B (en) Data caching method for network interface to support multiple links and retransmission
US6307789B1 (en) Scratchpad memory
US8719456B2 (en) Shared memory message switch and cache
US5752078A (en) System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory
KR0169248B1 (en) Message sending apparatus and message sending controlling method in packet internetwork
US6970921B1 (en) Network interface supporting virtual paths for quality of service
US6895457B2 (en) Bus interface with a first-in-first-out memory
EP1826677A1 (en) Apparatus and method for performing DMA data transfer
CN111221759B (en) Data processing system and method based on DMA
US7447872B2 (en) Inter-chip processor control plane communication
US7860120B1 (en) Network interface supporting of virtual paths for quality of service with dynamic buffer allocation
CN113590512A (en) Self-starting DMA device capable of directly connecting peripheral equipment and application
US7552232B2 (en) Speculative method and system for rapid data communications
US20060259648A1 (en) Concurrent read response acknowledge enhanced direct memory access unit
JPH08241186A (en) Unit and method for buffer memory management
US10095643B2 (en) Direct memory access control device for at least one computing unit having a working memory
US9288163B2 (en) Low-latency packet receive method for networking devices
US20060184708A1 (en) Host controller device and method
US6535942B1 (en) Method for reducing processor interrupt load
US20170147517A1 (en) Direct memory access system using available descriptor mechanism and/or pre-fetch mechanism and associated direct memory access method
JP2924783B2 (en) Remote read processing method and device
US7620702B1 (en) Providing real-time control data for a network processor
JP3044653B2 (en) Gateway device
CN117312197A (en) Message processing method and device, electronic equipment and nonvolatile storage medium
CN117373508A (en) Multiport memory, read-write method and device of multiport memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant