CN117294660A - Network packet hybrid processing framework and data packet distribution system applied to an intelligent network card

Network packet hybrid processing framework and data packet distribution system applied to an intelligent network card

Info

Publication number
CN117294660A
CN117294660A
Authority
CN
China
Prior art keywords
data
packet
network
buffer
packets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311212734.1A
Other languages
Chinese (zh)
Inventor
饶东
王蓬
马勇
郁凯
孙健
贺再平
黄传明
余毅
刘军涛
章浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Zhuanwei Technology Co ltd
Original Assignee
Hubei Zhuanwei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Zhuanwei Technology Co ltd
Priority to CN202311212734.1A
Publication of CN117294660A
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/43 Assembling or disassembling of packets, e.g. segmentation and reassembly [SAR]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9047 Buffering arrangements including multiple buffers, e.g. buffer pools
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9057 Arrangements for supporting packet reassembly or resequencing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card, comprising: a first communication subsystem, a data processing subsystem and a second communication subsystem. The first communication subsystem is used for network communication and for sending control instructions; the data processing subsystem is used for receiving and processing the control instructions, obtaining processed instructions and sending them to the second communication subsystem; the second communication subsystem is configured to receive the processed instructions. The invention combines the advantages of two structures, fixed-function Offload network acceleration and customized network data processing, effectively improving network transmission bandwidth while maintaining cost.

Description

Network packet hybrid processing framework and data packet distribution system applied to an intelligent network card
Technical Field
The invention belongs to the technical field of network packet processing and data packet distribution, and in particular relates to a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card.
Background
With the increase of Internet backbone transmission speeds and the demand for more reliable access mechanisms on the user side, desktop systems and data centers often have to handle large computing workloads while communicating over the network. When the network speed exceeds 1 Gb/s, the performance cost of network protocol processing rises sharply, so that other computing demands cannot be satisfied. The Offload feature in network virtualization was proposed for this problem: it mainly refers to moving operations such as IP fragmentation, TCP segmentation, reassembly and checksum verification, which were originally performed in the operating system protocol stack, into hardware, thereby reducing system CPU consumption and improving processing performance. On the transmit side it mainly comprises TSO (TCP Segmentation Offload), UFO (UDP Fragmentation Offload) and GSO (Generic Segmentation Offload). For the TCP and UDP protocols respectively, TSO and UFO move packet segmentation and the generation of the protocol header and frame header of each segment into hardware, greatly reducing CPU load. GSO is a deferred-segmentation technique that delays segmentation until the last moment before the data is handed to the network card; if the hardware supports TSO or UFO, hardware segmentation is used, otherwise software segmentation is performed. On the receive side it mainly comprises LRO (Large Receive Offload) and GRO (Generic Receive Offload). Both techniques aggregate received packets into larger packets before handing them to the protocol stack, reducing the CPU performance loss and delay of processing packets one by one. In addition, checksum calculation can be implemented in hardware, or the network data flow can be distributed across queues, to further reduce the CPU burden of processing network packets.
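As a concrete illustration of the per-packet work that checksum offload moves off the CPU, the following is a minimal software sketch of the RFC 1071 Internet checksum used by the IP, TCP and UDP headers; the function name internet_checksum is chosen for the example and is not taken from the patent.

    def internet_checksum(data: bytes) -> int:
        """One's-complement sum over 16-bit words (RFC 1071), as used by IP/TCP/UDP headers."""
        if len(data) % 2:                       # pad odd-length input with a trailing zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back into the low 16 bits
        return ~total & 0xFFFF

    if __name__ == "__main__":
        # Worked example from RFC 1071: words 0x0001, 0xF203, 0xF4F5, 0xF6F7
        print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))   # 0x220d

In the framework described below this loop disappears from the host CPU entirely; the checksum generation unit on the network card produces and inserts the value in hardware.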
Disclosure of Invention
In order to solve the above technical problems, the invention provides a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card, which combine the structural advantages of the Offload mechanism and a customized network card, improving bandwidth, reducing CPU consumption and enhancing flexibility.
In order to achieve the above object, the present invention provides a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card, comprising: a first communication subsystem, a data processing subsystem and a second communication subsystem;
the first communication subsystem is used for network communication and sending control instructions;
the data processing subsystem is used for receiving and processing the control instruction, acquiring the processed instruction and sending the processed instruction to the second communication subsystem;
the second communication subsystem is configured to receive the processed instruction.
Optionally, the first communication subsystem includes: a GTX/GTP high-speed transceiver and a network MAC layer IP core, which implement the network communication.
Optionally, the data processing subsystem includes: a network packet processing module, a network data processing module and a data buffer module;
the network packet processing module is used for splitting the network data, obtaining split data and sending the split data to the network data processing module;
the network data processing module is used for receiving the split data and performing data processing to obtain processed data;
the data buffer module is used for receiving and storing the processed data.
Optionally, the second communication subsystem includes: a PCIe hard core, through which the network card communicates with the host computer.
Optionally, the network data transmission process in the network packet processing module includes:
the host computer transmits the data to be sent and the IP frame header information to a transmit buffer;
the data packets are split by a splitting unit into TCP packets, UDP packets and other packets, wherein the other packets comprise ARP packets, ICMP packets and other unidentified Ethernet packets;
a TCP packet passes through the TCP segmentation unit, the TCP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into a to-be-sent buffer; a UDP packet passes through the IP fragmentation unit, the UDP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into the to-be-sent buffer; other packets are formed directly by the host computer and stored directly into the to-be-sent buffer;
the data packets in the to-be-sent buffer are transmitted to the network MAC, completing the network data transmission.
Optionally, the network data receiving process in the network packet processing module includes:
the received network data packets are stored in a receive buffer;
the received data packets are split by the splitting unit into TCP packets, UDP packets, other packets and hardware-processed packets, wherein the other packets comprise ARP packets, ICMP packets and other unidentified Ethernet packets;
a TCP packet passes through IP frame header checksum verification, TCP frame header processing and TCP reassembly, and the complete received data is then stored into a receiving buffer area; a UDP packet passes through IP frame header checksum verification, the UDP frame header processing unit and IP reassembly, and the complete received data is then stored into the receiving buffer area; other packets are stored directly into the receiving buffer area;
a hardware-processed packet passes through frame check and frame header processing, its valid data is extracted and passed on, and after processing by the reconfigurable data processing unit the result is framed and sent directly to the to-be-sent buffer.
Optionally, the splitting unit includes: a control subunit and a data path subunit;
the control subunit is used for fetching a control instruction, executing it and producing an output that is sent to the data path subunit;
the data path subunit is used for splitting the buffered input packets according to their labels and outputting them.
Optionally, the data buffer module includes: a custom switching structure and three data buffers, whose specific working process is as follows:
in the network data transmission flow, all three buffers wait for the host computer to write data packets and frame header information; the transmit side is first connected to buffer 1, and once the data packet and frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit; the host computer is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, data can still be written into transmit buffer 2, and so on;
in the network data receiving flow, all three buffers wait for data packets and frame header information to be received; the receive side is first connected to buffer 1, and once the data packet and frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit; the receiving unit is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, received data can still be written into receive buffer 2, and so on.
The technical effects of the invention are as follows: the invention discloses a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card, whose technical advantage lies in a high-speed and effective implementation framework for Offload and network packet distribution. On the basis of implementing the Offload mechanism, a network packet analysis and distribution mechanism is added, which enhances the parallel working efficiency of the system relative to the traditional architecture, combines the advantages of the two structures of fixed-function Offload network acceleration and customized network data processing, effectively improves the network transmission bandwidth and maintains cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a hardware block diagram of an intelligent network card according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a network packet hybrid processing frame and a packet splitting system applied to an intelligent network card according to an embodiment of the present invention;
FIG. 3 is a logic block diagram of a network packet processing module according to an embodiment of the present invention;
FIG. 4 is a block diagram of the split-core logic according to an embodiment of the present invention;
FIG. 5 is a block diagram of multiple split cores in parallel according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the triple-buffer structure according to an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that shown herein.
As shown in fig. 2, the present invention provides a network packet hybrid processing framework and a data packet distribution system applied to an intelligent network card, comprising: a first communication subsystem, a data processing subsystem and a second communication subsystem. The first communication subsystem is used for network communication and for sending control instructions; the data processing subsystem is used for receiving and processing the control instructions, obtaining processed instructions and sending them to the second communication subsystem; the second communication subsystem is used for receiving the processed instructions. The logic part of the intelligent network card mainly comprises a GTX/GTP high-speed transceiver, a network MAC layer IP core, a network packet processing module, a network data processing module, a data buffer area and a PCIe hard core. The GTX/GTP high-speed transceiver and the network MAC layer IP core implement the network communication, and the PCIe hard core implements communication between the network card and the host computer. The network packet processing module implements Offload operations such as network packet segmentation, checksum and framing, as well as the network packet splitting operation, and forwards data flows suitable for hardware processing to the network data processing module; the network data processing module processes the corresponding data and sends it back through the network packet processing module, and it can be adapted to data processing for various scenarios by means of a partial reconfiguration technique.
As shown in fig. 1, the Offload mechanism of the intelligent network card mainly refers to moving operations such as IP fragmentation, TCP segmentation, reassembly and checksum verification, which were originally performed in the software protocol stack, into the network card hardware, so as to reduce the system CPU load and improve processing performance. The invention is mainly based on a mechanism for implementing Offload on an FPGA and adds a customized network data processing function on top of it.
The network card hardware is simple to implement and is mainly divided into a network interface part and an FPGA part. For a gigabit Ethernet interface, the interface part either uses a traditional physical-layer chip connected to the FPGA through an RGMII interface, or an optical-electrical conversion chip connected to the FPGA through an SGMII interface. If a 10 Gb or higher-speed network interface is used, line-rate communication above 10 Gb/s is achieved through the high-speed transceivers of the FPGA; in practice, data transmission and reception are implemented with the GTX or GTZ transceivers provided by Xilinx, where GTX reaches up to 12.5 Gb/s and GTZ up to 28.05 Gb/s.
As shown in fig. 3, the network data transmission and network data receiving processes involved in the network packet processing module are specifically as follows:
the network data transmission flow comprises the following steps:
(1) The host computer transmits the data to be sent and the IP frame header information to the transmit buffer;
(2) The data packets are split by the splitting unit into TCP packets, UDP packets and other packets, such as ARP packets, ICMP packets and other unidentified Ethernet packets.
(3) A TCP packet passes through the TCP segmentation unit, the TCP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into the to-be-sent buffer; a UDP packet passes through the IP fragmentation unit, the UDP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into the to-be-sent buffer; other packets are formed directly by the host computer and stored directly into the to-be-sent buffer (a software sketch of this segmentation step is given after this list);
(4) The data packets in the to-be-sent buffer are transmitted to the network MAC, completing the network data transmission.
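The following is a minimal software sketch of the segmentation step that the framework places in the TCP segmentation unit: a large send buffer is cut into MSS-sized pieces and each piece is assigned its own sequence number, while real header construction and checksum insertion are left to the frame header processing and checksum generation units. The function name tcp_segment and its parameters are illustrative assumptions, not identifiers from the patent.

    from typing import List, Tuple

    def tcp_segment(payload: bytes, start_seq: int, mss: int = 1460) -> List[Tuple[int, bytes]]:
        """Cut a large send buffer into MSS-sized TCP segments.

        Returns (sequence_number, segment_payload) pairs; the TCP frame header processing
        unit and checksum generation unit would then build the real headers around each piece.
        """
        segments = []
        offset = 0
        while offset < len(payload):
            chunk = payload[offset:offset + mss]
            segments.append((start_seq + offset, chunk))
            offset += len(chunk)
        return segments

    if __name__ == "__main__":
        segs = tcp_segment(b"x" * 4000, start_seq=1000)
        print([(seq, len(data)) for seq, data in segs])
        # [(1000, 1460), (2460, 1460), (3920, 1080)]

In the patent's design this loop runs entirely in FPGA logic, so the host only hands over the full payload and the IP frame header information once per send.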
The network data receiving flow comprises the following steps:
(1) The received network data packets are stored in the receive buffer;
(2) The received data packets are split by the splitting unit into TCP packets, UDP packets, other packets (such as ARP packets, ICMP packets and other unidentified Ethernet packets) and hardware-processed packets.
(3) A TCP packet passes through IP frame header checksum verification, TCP frame header processing and TCP reassembly, and the complete received data is then stored into the receiving buffer area; a UDP packet passes through IP frame header checksum verification, the UDP frame header processing unit and IP reassembly, and the complete received data is then stored into the receiving buffer area; other packets are stored directly into the receiving buffer area;
(4) A hardware-processed packet passes through frame check and frame header processing, its valid data is extracted and passed on; after being processed by the reconfigurable data processing unit, the result is framed and sent directly to the to-be-sent buffer. The hardware processing logic can be reconfigured for different scenarios: it may be as simple as generating incremental heartbeat-packet data, or as complex as closed-loop computation of a complicated model. This path effectively reduces CPU consumption and is suitable for a variety of low-latency, high-throughput scenarios (a sketch of such a processing path follows this list).
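As an illustration only, the sketch below models one possible instance of this reconfigurable hardware path in software: valid data is extracted from an incoming frame, a heartbeat counter is incremented, and the reply is re-framed and queued for transmission without ever touching the host CPU. The frame layout (a 2-byte type field followed by a 4-byte big-endian counter) and all names are assumptions made for the example, not definitions from the patent.

    import struct
    from collections import deque

    to_send = deque()   # stands in for the to-be-sent buffer

    def process_heartbeat_frame(frame: bytes) -> None:
        """Assumed frame layout: 2-byte type field + 4-byte big-endian counter."""
        ftype, counter = struct.unpack("!HI", frame[:6])   # frame check / header parse
        if ftype != 0x0001:                                # not a heartbeat, ignore in this sketch
            return
        reply = struct.pack("!HI", 0x0001, counter + 1)    # incremental heartbeat data
        to_send.append(reply)                              # framed reply goes straight out

    if __name__ == "__main__":
        process_heartbeat_frame(struct.pack("!HI", 0x0001, 41))
        print(struct.unpack("!HI", to_send[0]))   # (1, 42)

In the actual framework this logic would be one partial-reconfiguration image of the network data processing module; swapping the image changes the scenario without touching the Offload pipeline around it.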
As shown in fig. 4, the splitting unit needs to split network packets according to frame header information or other configured conditions; in the implementation, the splitting unit is built from several parallel split cores. Each split core has its own parameter control and packet storage buffer, and the packets in the receive buffer or transmit buffer are split by the parallel split cores together. A split core consists of a data path and a controller. The controller is pipelined in three stages, fetch, execute and output, which process packets quickly and assign a split label to each packet. The pipelined design reduces per-packet delay and raises throughput within a single split core. The parallel split cores work together, and high-speed, complex split decisions can be made by applying logic operations to their labels. Typically three parallel cores are sufficient for common splitting tasks, for example selecting all packets to or from a specific address or port range; more split cores can be used if more complex filtered splitting is required.
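To illustrate the label-and-combine idea in software, the following sketch runs several rule "cores" over a packet header, each producing a boolean label, and combines the labels with a logic expression to reach the final split decision. The rules, the field names and the combining expression are assumptions chosen for the example, not conditions defined by the patent.

    from typing import Callable, Dict, List

    Header = Dict[str, object]
    Rule = Callable[[Header], bool]

    # Each "core" checks one condition on the frame/IP header and emits a label.
    cores: List[Rule] = [
        lambda h: h.get("proto") == "UDP",                          # core 0: protocol match
        lambda h: str(h.get("dst_ip", "")).startswith("192.168.1."),  # core 1: address range
        lambda h: 5000 <= int(h.get("dst_port", 0)) <= 5010,        # core 2: port range
    ]

    def split_decision(header: Header) -> bool:
        labels = [core(header) for core in cores]   # produced in parallel by the split cores
        # combining logic (assumed): take the hardware path only if all three labels are set
        return labels[0] and labels[1] and labels[2]

    if __name__ == "__main__":
        pkt = {"proto": "UDP", "dst_ip": "192.168.1.20", "dst_port": 5005}
        print(split_decision(pkt))   # True -> forwarded to the hardware processing path

In hardware each lambda corresponds to one parameterized split core, and the final expression is the logic operation applied to the three labels.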
As shown in fig. 5, in order to improve the throughput of network packets, in the implementation the receive buffer and the transmit buffer each adopt a triple-buffer structure, and three FIFOs representing different work queues are maintained through a custom interconnect switch. The custom interconnect switch is connected to exactly one of the three buffers at any given time.
As shown in fig. 6, in the network data transmission flow all three buffers wait for the host computer to write a data packet and its frame header information; the transmit side is first connected to buffer 1, and once the data packet and frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit. The host computer is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, data can still be written into transmit buffer 2, and so on. In the network data receiving flow all three buffers wait for data packets and frame header information to arrive; the receive side is first connected to buffer 1, and once a data packet and its frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit. The receiving unit is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, received data can still be written into receive buffer 2, and so on. A software sketch of this rotation follows.
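The sketch below is a simplified software model of this rotation, assuming the writer (host computer or MAC) and the splitting unit are never attached to the same buffer at the same time; the class name TripleBuffer and its methods are illustrative, not taken from the patent.

    class TripleBuffer:
        """Three buffers rotated between a writer and the splitting unit."""

        def __init__(self):
            self.buffers = [[], [], []]
            self.write_idx = 0        # buffer currently attached to the writer
            self.process_idx = None   # buffer currently attached to the splitting unit

        def write(self, packet: bytes) -> None:
            self.buffers[self.write_idx].append(packet)

        def switch(self) -> list:
            """Hand the just-filled buffer to the splitting unit; attach the writer to the next one."""
            self.process_idx = self.write_idx
            self.write_idx = (self.write_idx + 1) % 3
            return self.buffers[self.process_idx]

    if __name__ == "__main__":
        tb = TripleBuffer()
        tb.write(b"pkt0")
        batch = tb.switch()          # splitting unit now works on buffer 0 ...
        tb.write(b"pkt1")            # ... while new data is written into buffer 1
        print(len(batch), tb.write_idx)   # 1 1

The third buffer gives the custom interconnect switch slack when the splitting unit and the writer finish at different rates, which is what allows writing and splitting to overlap continuously.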
The technical advantage of the invention lies in a high-speed and effective implementation framework for Offload and network packet distribution. On the basis of implementing the Offload mechanism, a network packet analysis and distribution mechanism is added, which enhances the parallel working efficiency of the system relative to the traditional architecture, combines the advantages of the two structures of fixed-function Offload network acceleration and customized network data processing, effectively improves the network transmission bandwidth and maintains cost.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A network packet hybrid processing framework and data packet distribution system applied to an intelligent network card, characterized by comprising: a first communication subsystem, a data processing subsystem and a second communication subsystem;
the first communication subsystem is used for network communication and sending control instructions;
the data processing subsystem is used for receiving and processing the control instruction, acquiring the processed instruction and sending the processed instruction to the second communication subsystem;
the second communication subsystem is configured to receive the processed instruction.
2. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 1, wherein the first communication subsystem comprises: a GTX/GTP high-speed transceiver and a network MAC layer IP core, which implement the network communication.
3. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 1, wherein the data processing subsystem comprises: a network packet processing module, a network data processing module and a data buffer module;
the network packet processing module is used for splitting the network data, obtaining split data and sending the split data to the network data processing module;
the network data processing module is used for receiving the split data and performing data processing to obtain processed data;
the data buffer module is used for receiving and storing the processed data.
4. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 1, wherein the second communication subsystem comprises: a PCIe hard core, through which the network card communicates with the host computer.
5. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 3, wherein the network data transmission process in the network packet processing module comprises:
the host computer transmits the data to be sent and the IP frame header information to a transmit buffer;
the data packets are split by a splitting unit into TCP packets, UDP packets and other packets, wherein the other packets comprise ARP packets, ICMP packets and other unidentified Ethernet packets;
a TCP packet passes through the TCP segmentation unit, the TCP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into a to-be-sent buffer; a UDP packet passes through the IP fragmentation unit, the UDP frame header processing unit, the checksum generation unit and the IP frame header processing unit, and the complete network packet is then stored into the to-be-sent buffer; other packets are formed directly by the host computer and stored directly into the to-be-sent buffer;
the data packets in the to-be-sent buffer are transmitted to the network MAC, completing the network data transmission.
6. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 5, wherein the network data receiving process in the network packet processing module comprises:
the received network data packets are stored in a receive buffer;
the received data packets are split by the splitting unit into TCP packets, UDP packets, other packets and hardware-processed packets, wherein the other packets comprise ARP packets, ICMP packets and other unidentified Ethernet packets;
a TCP packet passes through IP frame header checksum verification, TCP frame header processing and TCP reassembly, and the complete received data is then stored into a receiving buffer area; a UDP packet passes through IP frame header checksum verification, the UDP frame header processing unit and IP reassembly, and the complete received data is then stored into the receiving buffer area; other packets are stored directly into the receiving buffer area;
a hardware-processed packet passes through frame check and frame header processing, its valid data is extracted and passed on, and after processing by the reconfigurable data processing unit the result is framed and sent directly to the to-be-sent buffer.
7. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 5, wherein the splitting unit comprises: a control subunit and a data path subunit;
the control subunit is used for fetching a control instruction, executing it and producing an output that is sent to the data path subunit;
the data path subunit is used for splitting the buffered input packets according to their labels and outputting them.
8. The network packet hybrid processing framework and data packet distribution system applied to an intelligent network card according to claim 3, wherein the data buffer module comprises: a custom switching structure and three data buffers, whose specific working process is as follows:
in the network data transmission flow, all three buffers wait for the host computer to write data packets and frame header information; the transmit side is first connected to buffer 1, and once the data packet and frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit; the host computer is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, data can still be written into transmit buffer 2, and so on;
in the network data receiving flow, all three buffers wait for data packets and frame header information to be received; the receive side is first connected to buffer 1, and once the data packet and frame header information have been written into buffer 1, the custom switching structure connects buffer 1 to the splitting unit; the receiving unit is then connected to buffer 2, so that while the packet in buffer 1 is being processed by the splitting unit, received data can still be written into receive buffer 2, and so on.
CN202311212734.1A 2023-09-20 2023-09-20 Network packet mixed processing frame and data packet distribution system applied to intelligent network card Pending CN117294660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311212734.1A CN117294660A (en) 2023-09-20 2023-09-20 Network packet mixed processing frame and data packet distribution system applied to intelligent network card

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311212734.1A CN117294660A (en) 2023-09-20 2023-09-20 Network packet mixed processing frame and data packet distribution system applied to intelligent network card

Publications (1)

Publication Number Publication Date
CN117294660A (en) 2023-12-26

Family

ID=89258123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311212734.1A Pending CN117294660A (en) 2023-09-20 2023-09-20 Network packet mixed processing frame and data packet distribution system applied to intelligent network card

Country Status (1)

Country Link
CN (1) CN117294660A (en)

Similar Documents

Publication Publication Date Title
US10210113B2 (en) SAN fabric online path diagnostics
JP4807861B2 (en) Host Ethernet adapter for networking offload in server environments
US10223314B2 (en) PCI express connected network switch
US8340120B2 (en) User selectable multiple protocol network interface device
JP6192803B2 (en) Method, apparatus and system for data scheduling and exchange
US8335884B2 (en) Multi-processor architecture implementing a serial switch and method of operating same
CN107426246B (en) FPGA-based high-speed data exchange system between gigabit Ethernet and RapidIO protocol
CN101325497B (en) Autonegotiation over an interface for which no autonegotiation standard exists
CN101227296B (en) Method, system for transmitting PCIE data and plate card thereof
EP3573297A1 (en) Packet processing method and apparatus
US9772968B2 (en) Network interface sharing
US11558315B2 (en) Converged network interface card, message coding method and message transmission method thereof
JP2004532457A (en) Network to increase transmission link layer core speed
CN102185833A (en) Fiber channel (FC) input/output (I/O) parallel processing method based on field programmable gate array (FPGA)
WO2019085815A1 (en) Method and relevant device for processing data of flexible ethernet
WO2018233560A1 (en) Dynamic scheduling method, device, and system
CN117294660A (en) Network packet mixed processing frame and data packet distribution system applied to intelligent network card
WO2012106905A1 (en) Message processing method and device
EP1302030B1 (en) In-band management of a stacked group of switches by a single cpu
CN114661650A (en) Communication device, electronic device, and communication method
CN117596309A (en) Message conversion system for multiple high-speed interface protocols
CN117221417A (en) TCP/IP protocol unloading engine device
CN117642855A (en) Data exchange device and data exchange method
CN113254202A (en) 5G base station forward-transmission lossless packet capturing method based on gigabit Ethernet port
JP2001345824A (en) Tag-converting device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination