CN113485823A - Data transmission method, device, network equipment and storage medium - Google Patents

Data transmission method, device, network equipment and storage medium

Info

Publication number
CN113485823A
CN113485823A
Authority
CN
China
Prior art keywords
data
network node
protocol stack
network card
target network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011323780.5A
Other languages
Chinese (zh)
Inventor
金浩
屠要峰
韩银俊
郭斌
许军宁
杨洪章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202011323780.5A priority Critical patent/CN113485823A/en
Publication of CN113485823A publication Critical patent/CN113485823A/en
Priority to PCT/CN2021/131823 priority patent/WO2022105884A1/en
Pending legal-status Critical Current

Classifications

    • G06F9/5016: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/544: Interprogram communication; Buffers; Shared memory; Pipes
    • G06F9/546: Interprogram communication; Message passing systems or structures, e.g. queues
    • H04L12/02: Data switching networks; Details

Abstract

The embodiments of the present application relate to the field of stored-data transmission, and in particular to a data transmission method, a data transmission apparatus, a network device and a storage medium. In the embodiments of the invention, data is read from a source network node into a memory space through a source network card; the data in the memory space is passed to a user-mode protocol stack, the data processed by the user-mode protocol stack is obtained, and the processed data is sent to the target network node through a target network card. With the memory as the center, the user-mode protocol stack shares the same memory data and the data does not pass through the kernel-mode protocol stack; the data is processed in user mode, which reduces the CPU overhead caused by copying memory data into the operating-system protocol stack and by switching between user mode and kernel mode, and gives better data-forwarding performance.

Description

Data transmission method, device, network equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of stored-data transmission, and in particular to a data transmission method, a data transmission apparatus, a network device and a storage medium.
Background
When a source network transmits data to a target network and the protocol stacks of the source network and the target network are not the same, a network device needs to process the transmitted data so that it meets the target network's requirements on the data.
However, when the data is processed in the operating-system protocol stack, it must be copied from user mode to kernel mode and the system state must be switched, so the overhead on system CPU resources is large.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a data transmission method, a data transmission apparatus, a network device and a storage medium that reduce the CPU resource overhead of data transmission.
In order to achieve the above object, an embodiment of the present application provides a data transmission method, including: reading data from a source network node to a memory space through a source network card; transmitting the data in the memory space to a user mode protocol stack to acquire the data processed by the user mode protocol stack; and sending the processed data to a target network node through a target network card.
In order to achieve the above object, an embodiment of the present application further provides a data transmission device, including: the data reading module is used for reading data from a source network node to the memory space through a source network card; the data processing module is used for acquiring data processed by the user mode protocol stack according to the data in the memory space; and the data sending module is used for sending the processed data to the target network node through the target network card.
In order to achieve the above object, an embodiment of the present application further provides a network device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data transmission method described above.
To achieve the above object, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the data transmission method described above.
Compared with the related art, the embodiments of the present invention read data from the source network node into a memory space through the source network card, pass the data in the memory space to a user-mode protocol stack, obtain the data processed by the user-mode protocol stack, and send the processed data to the target network node through the target network card. The data does not need to pass through the kernel-mode protocol stack; with the memory as the center, the user-mode protocol stack shares the same memory data and the data is processed in user mode, which reduces the CPU overhead caused by copying memory data into the operating-system protocol stack and by switching between user mode and kernel mode, and gives better data-forwarding performance.
Drawings
Fig. 1 is a schematic diagram of a data transmission network according to the related art;
fig. 2 is a flow chart of a data transmission method according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of a data transmission network according to a first embodiment of the invention;
fig. 4 is a flow chart of a data transmission method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of a data transmission network according to a second embodiment of the invention;
fig. 6 is a schematic diagram of a data transmission apparatus according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of a network device according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments of the present application in order to provide a better understanding of the application; however, the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and does not limit the specific implementation of the present application, and the embodiments may be combined with and refer to each other where they do not contradict one another.
In the data transmission process, the network stacks of the sending side and the receiving side may not be the same. In the related art, the operating-system protocol stack is used to convert between the protocols; during the conversion, data must be copied and the system state switched many times, so the system resource overhead is large. The data transmission process in the related art is described below, taking as an example a source (sending-end) network that is an RDMA network, a target (receiving-end) network that is ordinary Ethernet, and a client in a distributed-storage scenario performing a remote read operation. As shown in fig. 1, node1 and node2 are network nodes in the RDMA network, i.e. source network nodes; the ENIC network card is an ordinary Ethernet network card, and the RNIC network card is an RDMA network card with an RDMA engine. The client accesses the master node of the distributed file system through a TCP link; the master node reads the required target data from the other nodes through RDMA, writes the target data into local RDMA memory (and possibly into local storage), and returns the data to the client through the TCP link. The process is as follows:
Step 1: the storage-system master node reads data from the other nodes in the cluster into the MR (Memory Region) space of the local node through the RNIC network card.
Step 2: the storage-system master node copies the data in the MR into the socket send buffer TxBuf, and may also write the data into a local cache for storage if necessary.
Step 3: the buffered data is sent to the operating-system (OS) protocol stack. The data in the buffer TxBuf belongs to the application layer, so sending it to the operating-system protocol stack requires a copy and a switch from user mode to kernel mode.
Step 4: the operating-system protocol stack sends the TCP message to the network-card queue.
Step 5: finally, the ENIC network card sends the TCP message to the network, and the message is delivered to the target node through the network.
The whole process requires system-state switching and data copying; the overhead on system CPU resources is high, the response latency seen by the client is noticeable, and this forwarding flow becomes the bottleneck of the whole storage system.
Therefore, a first embodiment of the present invention provides a data transmission method applicable to network devices such as gateways. The embodiment includes: reading data from a source network node into a memory space through a source network card; passing the data in the memory space to a user-mode protocol stack to obtain the data processed by the user-mode protocol stack; and sending the processed data to a target network node through a target network card. When the network stacks are not the same and protocol conversion is needed, the kernel-mode protocol stack is not used; instead, with the memory as the center, the user-mode protocol stack shares the same memory data and the data is processed in user mode, which reduces the CPU overhead caused by copying memory data into the operating-system protocol stack and by switching between user mode and kernel mode, and gives better data-forwarding performance. Implementation details of the data transmission method of this embodiment are described below; the following is provided only for ease of understanding and is not required for implementing this embodiment. A flow chart of the data transmission method according to the first embodiment of the present invention is shown in fig. 2.
Step 201, reading data from a source network node to a memory space through a source network card.
In one example, before reading data from a source network node to a memory space through a source network card, a forwarding rule of the data in the memory is configured according to links of the source network node and a target network node, where the forwarding rule includes: source network node information and target network node information.
In one example, the forwarding rule includes memory-space information, input information and output information. The input information includes the source network node information, and the output information includes the target network information. The source network node information includes a protocol and source information; the target network information includes a protocol, an action and target information.
Illustratively, taking remote data download by a mobile-phone client as an example, port 80 of the server provides a web access service to the outside and requests data from the source network. When the requested data reaches a data gateway on which the data gateway service is configured, the data gateway enables the data gateway service according to the configuration. The data gateway is compatible with the two protocols RDMA and TCP, supports forwarding memory data to two target systems, and allocates a memory buffer. For example, in the configured forwarding rule, the memory-space information is the memory-space address 0x12323455 and a memory-space size of 16 MB; the input information includes the source network node information, namely the protocol of the source network, RDMA, and the source information, i.e. node1 and node2; the output information includes the protocol of the target network, TCP, the action "trans", meaning that data conversion is performed, and the target information 1000, meaning that the data is forwarded to the TCP link numbered 1000. The forwarding rule of the memory is configured with a json file, and the configured forwarding rule is as follows:
{
    "memory space": { "address": "0x12323455", "size": "16MB" },
    "input":
        { "protocol": "rdma", "source information": ["node1", "node2"] },
    "output":
    [
        { "protocol": "tcp", "action": "trans", "target information": 1000 }
    ]
}
The rule specifies the corresponding memory space and size. As shown in fig. 3, data is read from the node1 and node2 nodes into the memory space, and there is one output stream: the data is forwarded to the TCP link numbered 1000. The link is established in advance; after the link is established, the forwarding rule is configured, and when data needs to be forwarded it can be forwarded directly according to the rule, which improves forwarding efficiency.
In this embodiment, a memory space is created according to the configured forwarding rule, and data is read from the source network nodes node1 and node2 through the source network card RNIC into the memory space with address 0x12323455 and size 16 MB.
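Purely as an illustration, the following minimal Python sketch shows how a forwarding rule in the json form above could be loaded into a structured object before the memory space is allocated. The field names follow the translated example above; the names ForwardingRule, Output and load_rule are hypothetical and are not part of the embodiment.

import json
from dataclasses import dataclass

@dataclass
class Output:
    protocol: str   # e.g. "tcp" or "nvme"
    action: str     # e.g. "trans" (convert and forward) or "write"
    target: object  # e.g. the TCP link number 1000 or the device name "nvme"

@dataclass
class ForwardingRule:
    address: str    # start address of the memory space, e.g. "0x12323455"
    size: str       # size of the memory space, e.g. "16MB"
    protocol: str   # input protocol, e.g. "rdma"
    sources: list   # source nodes, e.g. ["node1", "node2"]
    outputs: list   # one Output entry per output stream

def load_rule(text: str) -> ForwardingRule:
    raw = json.loads(text)
    mem, inp = raw["memory space"], raw["input"]
    outs = [Output(o["protocol"], o["action"], o["target information"])
            for o in raw["output"]]
    return ForwardingRule(mem["address"], mem["size"],
                          inp["protocol"], inp["source information"], outs)

rule = load_rule('''
{
    "memory space": { "address": "0x12323455", "size": "16MB" },
    "input": { "protocol": "rdma", "source information": ["node1", "node2"] },
    "output": [ { "protocol": "tcp", "action": "trans", "target information": 1000 } ]
}
''')
print(rule.protocol, rule.sources, "->", [(o.protocol, o.target) for o in rule.outputs])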
Step 202, transmitting the data in the memory space to a user mode protocol stack, and acquiring the data processed by the user mode protocol stack.
In one example, the data gateway transmits the data to the user-mode TCP/IP protocol stack according to the configuration and obtains the data processed by the user-mode TCP/IP protocol stack.
In one example, the user-mode TCP/IP protocol stack includes a completion queue.
For example, the user-mode TCP/IP protocol stack in this embodiment draws on the implementation of an RDMA software stack and is a customized modification of a user-mode TCP/IP protocol stack based on the DPDK (Data Plane Development Kit) framework. A send queue SQ (Send Queue), a completion queue CQ (Completion Queue) and related memory instructions are defined; the data gateway service executes a write_cmd instruction to post a message to the SQ, and the message carries memory information, target information and so on. The software protocol stack, i.e. the user-mode TCP/IP protocol stack, reads the SQ and processes the entries one by one, and a write_cmd instruction triggers the TCP sending flow. The customized user-mode TCP/IP protocol stack constructs TCP messages with the header and the data kept separate: each mbuf carries the header content and a data pointer, and the pointer refers to the memory data to be sent. The network card detects a message to be sent, and the header and the data form a complete message that is sent to the network. After the memory data has been sent, the protocol stack stores a completion event CE (Completion Event) into the CQ; the gateway service checks the CQ to obtain the result of the send instruction, and DB (Doorbell) signaling is supported.
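For ease of understanding only, the Python sketch below models the SQ/CQ interaction described above: the gateway service posts a write_cmd entry carrying memory and target information to the send queue, the user-mode stack drains the queue, builds a header plus a reference to the memory data, and places a completion event in the completion queue for the gateway to poll. It is a toy model of the mechanism under these assumptions, not the DPDK-based implementation itself; the names write_cmd, stack_poll_sq and gateway_poll_cq are chosen here for illustration.

from collections import deque

sq = deque()   # send queue (SQ): work requests posted by the data gateway service
cq = deque()   # completion queue (CQ): completion events written by the protocol stack

def write_cmd(mem, offset, length, target):
    # Gateway side: post a send request; only metadata is queued, not the data itself.
    sq.append({"op": "write_cmd", "mem": mem, "off": offset,
               "len": length, "target": target})

def stack_poll_sq():
    # Protocol-stack side: read the SQ entries one by one; a write_cmd entry
    # triggers the send flow, with the header kept separate from the data pointer.
    while sq:
        wr = sq.popleft()
        payload = memoryview(wr["mem"])[wr["off"]:wr["off"] + wr["len"]]
        header = b"TCP-HDR(link %d) " % wr["target"]
        frame = header + bytes(payload)   # header and data joined only at send time
        cq.append({"op": wr["op"], "target": wr["target"],
                   "status": "ok", "sent": len(frame)})   # completion event (CE)

def gateway_poll_cq():
    # Gateway side: check the CQ to obtain the result of each send instruction.
    return [cq.popleft() for _ in range(len(cq))]

mr = bytearray(b"memory data to be sent")   # stands in for the shared memory space
write_cmd(mr, 0, len(mr), target=1000)      # 1000 = the TCP link number in the rule
stack_poll_sq()
print(gateway_poll_cq())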
Step 203: the processed data is sent to the target network node through the target network card.
In one example, after the processed data is sent to the target network node through the target network card, a completion event is stored into the completion queue, and the sending result is obtained according to the completion event in the completion queue.
In the above example, according to the carried target information, for example the target information "1000" in the forwarding rule configured above, the processed data is sent through the target network card ENIC to the target network node. That is, the network card detects a message to be sent, the header and the data form a complete message that is sent to the network, and the message is transmitted to the Ethernet node through the network. After the memory data has been sent, the protocol stack stores a completion event CE (Completion Event) into the CQ, and the gateway service checks the CQ to obtain the result of the send instruction; DB (Doorbell) signaling is supported.
In this embodiment, a completion queue that stores completion events is defined in the user-mode protocol stack. Compared with the related art, in which the sending result is returned only after the network card has finished sending, this embodiment can perform the sending and the obtaining of the sending result concurrently. In addition, because the defined user-mode TCP/IP protocol stack is implemented with reference to an RDMA software stack, the customized user-mode TCP/IP protocol stack matches the RDMA protocol stack more closely.
The description above takes the source network card to be an RNIC (RDMA network interface card), the source network node to be an RDMA network node, the target network card to be an ENIC (Ethernet network interface card) and the target network node to be an Ethernet node; that is, the RDMA network node transmits data through the RNIC network card, and after being processed by the user-mode TCP/IP protocol stack the data is transmitted to the Ethernet node through the ENIC network card. Data transmission is bidirectional, or the configuration file can be changed: when the source network card is an ENIC network card, the source network node is an Ethernet node, the target network card is an RNIC network card and the target network node is an RDMA network node, the data obtained by the ENIC network card is processed by the user-mode RDMA protocol stack, the data processed by the RDMA protocol stack is obtained, and the data is transmitted to the RDMA network node. In this way multiple user-mode protocol stacks, namely the RDMA protocol stack and the user-mode TCP/IP protocol stack, share the data in the memory, which solves the problem that existing RDMA network elements cannot interoperate with traditional Ethernet nodes.
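As a purely illustrative sketch of this bidirectional selection, the snippet below chooses which user-mode protocol stack processes the shared memory data based on the input and output protocols taken from the forwarding rule; the names pick_stack, USER_TCPIP_STACK and USER_RDMA_STACK are placeholders introduced here and are not part of the embodiment.

# Placeholder names standing in for the two user-mode protocol stacks.
USER_TCPIP_STACK = "user-mode TCP/IP protocol stack"
USER_RDMA_STACK = "user-mode RDMA protocol stack"

def pick_stack(input_protocol: str, output_protocol: str) -> str:
    # RDMA in, TCP out: the RNIC reads into memory and the user-mode TCP/IP
    # stack prepares the data that the ENIC network card sends out.
    if (input_protocol, output_protocol) == ("rdma", "tcp"):
        return USER_TCPIP_STACK
    # TCP in, RDMA out: the ENIC reads into memory and the user-mode RDMA
    # stack prepares the data that the RNIC network card sends out.
    if (input_protocol, output_protocol) == ("tcp", "rdma"):
        return USER_RDMA_STACK
    raise ValueError("unsupported protocol pair")

print(pick_stack("rdma", "tcp"))   # forward direction described above
print(pick_stack("tcp", "rdma"))   # reverse direction after changing the configuration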
In one example, after the processed data is sent to the target network node through the target network card, the number of completed forwardings of the data in the memory space is obtained; if the number of completed forwardings in the memory space equals the number of data output streams, the memory space is reclaimed. The number of output streams is determined from the output information in the forwarding rule, and the output information includes the target network node information.
In the above example, the created memory space is treated as a node of a directed graph, and the input and output streams of this data node, i.e. the input information and the output information, are configured according to the service characteristics. In this embodiment the data gateway acts as such a node: data needs to be read from other nodes and forwarded to the client, so the data node is configured with an in-degree of 1 and an out-degree of 1; the input stream is the RDMA network and the output stream is the TCP network. The data gateway records the number of completed forwardings of the node, and when that number equals the out-degree the memory space of the node can be reclaimed. This allows the memory space to be reclaimed in a timely manner.
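A minimal sketch of this reclamation scheme follows, assuming a hypothetical MemorySpace record whose out-degree is taken from the number of outputs in the forwarding rule: each completed forwarding increments a counter, and the memory space is reclaimed once the counter reaches the out-degree.

class MemorySpace:
    def __init__(self, address, size, out_degree):
        self.address = address        # e.g. 0x12323455
        self.size = size              # e.g. 16 MB
        self.out_degree = out_degree  # number of output streams in the forwarding rule
        self.completed = 0            # forwardings finished so far
        self.buffer = bytearray(size) # the shared buffer itself

    def on_forward_complete(self):
        # Called once per output stream (e.g. the TCP link) when its forwarding finishes.
        self.completed += 1
        if self.completed == self.out_degree:
            self.reclaim()

    def reclaim(self):
        # All output streams are done, so the memory can be reused or freed.
        self.buffer = None
        print("memory space at 0x%x reclaimed" % self.address)

ms = MemorySpace(address=0x12323455, size=16 * 1024 * 1024, out_degree=1)
ms.on_forward_complete()  # the single TCP output stream completes, so the space is reclaimed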
In this embodiment, data is read from the source network node into a memory space through the source network card; the data in the memory space is passed to the user-mode protocol stack, the data processed by the user-mode protocol stack is obtained, and the processed data is sent to the target network node through the target network card. The data does not need to pass through the kernel-mode protocol stack; with the memory as the center, the user-mode protocol stack shares the same memory data and the data is processed in user mode, which reduces the CPU overhead caused by copying memory data into the operating-system protocol stack and by switching between user mode and kernel mode, and gives better data-forwarding performance.
A second embodiment of the present invention relates to a data transmission method and is substantially the same as the first embodiment. The main differences are as follows: the forwarding rule also includes local storage information, and the data processed by the user-mode protocol stack is transmitted to local storage according to the local storage information configured in the forwarding rule. The data gateway is thus compatible with the conversion of multiple protocols, multiple user-mode protocol stacks share the memory space, and both local storage and data forwarding in the network can be realized. A flow chart of the second embodiment of the invention is shown in fig. 4.
Step 401, reading data from a source network node to a memory space through a source network card.
For example, before data is read from the source network node into the memory space through the source network card, again taking remote data download by a mobile-phone client as an example, port 80 of the server provides a web access service to the outside and requests data from the source network. When the requested data reaches the data gateway on which the data gateway service is configured, the data gateway enables the data gateway service according to the configuration. In this embodiment the data gateway is compatible with three protocols, namely RDMA, TCP and nvme, and supports forwarding memory data to three target systems. The forwarding rule configured in the data gateway includes memory-space information, and a memory buffer is allocated according to it: for example, if the memory-space information is the address "0x12323455" and the size 16 MB, a memory-space buffer with starting address "0x12323455" and size 16 MB is allocated. The forwarding rule further includes input information and output information; the input information includes the protocol RDMA and the source information node1 and node2, and the output information includes local storage information, such as the protocol nvme, the action write and the target information nvme, as well as target network node information, such as the protocol tcp, the action trans and the target information 1000. The json file configures the forwarding rule of the memory as follows:
{
    "memory space": { "address": "0x12323455", "size": "16MB" },
    "input":
        { "protocol": "rdma", "source information": ["node1", "node2"] },
    "output":
    [
        { "protocol": "tcp", "action": "trans", "target information": 1000 },
        { "protocol": "nvme", "action": "write", "target information": "nvme" }
    ]
}
The rule specifies that the corresponding memory reads data from node1 and node2 and that the output stream is split into 2 paths: the data is forwarded to the TCP link numbered 1000 and is written to the nvme device, with the address information provided by the file system. The output information therefore contains two kinds of entries: one for transmitting data to a network node, and one for writing data to local storage, for example an nvme device.
Step 402, transmitting the data in the memory space to a user mode protocol stack, and acquiring the data processed by the user mode protocol stack.
For example, the data gateway takes the data in memory as the center, and multiple user-mode protocol stacks, such as TCP, RDMA and nvme, share the same memory data, which realizes zero-copy forwarding of the data between the different protocol stacks.
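The following toy Python sketch only illustrates the idea of several consumers operating on one shared buffer without copying it, using a memoryview as a stand-in for the shared memory space; forward_to_tcp and write_to_nvme are hypothetical stubs, not the actual user-mode protocol stacks.

# One buffer filled by the source network card stands in for the MR space.
shared = bytearray(b"payload read from node1/node2 via RDMA")

def forward_to_tcp(view: memoryview, link_id: int):
    # Stand-in for the user-mode TCP/IP stack: it sees the same bytes, no copy is made.
    print("tcp link", link_id, "sends", len(view), "bytes")

def write_to_nvme(view: memoryview, device: str):
    # Stand-in for the user-mode nvme stack: it also sees the same bytes, no copy is made.
    print(device, "writes", len(view), "bytes")

view = memoryview(shared)      # both output paths share this view of the memory
forward_to_tcp(view, 1000)     # output 1: forward to the TCP link numbered 1000
write_to_nvme(view, "nvme")    # output 2: write to the local nvme device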
Step 403, transmitting the data processed by the user mode protocol stack to a local storage according to the local storage information configured in the forwarding rule.
Illustratively, in this embodiment the data is transmitted to the local storage nvme according to the local storage information configured in the forwarding rule, i.e. the entry "protocol": "nvme", "action": "write", "target information": "nvme" configured in the output.
Step 404, the processed data is sent to the target network node through the target network card.
Step 404 is substantially the same as step 203 of the first embodiment of the present invention, and will not be described herein again.
In one example, after the processed data is sent to the target network node through the target network card, the number of completed forwardings of the data in the memory space is obtained; if the number of completed forwardings in the memory space equals the number of data output streams, the memory space is reclaimed. The number of output streams is determined from the output information in the forwarding rule, and the output information includes the target network node information and the local storage information.
In the above example, the created memory space is treated as a node of a directed graph, and the input and output streams of the data node, i.e. the input information and the output information, are configured according to the service characteristics. In this embodiment the master node needs to read data from other nodes and forward the data to the client, so the data node is configured with an in-degree of 1 and an out-degree of 2; the input stream is the RDMA network and the output streams are the TCP network and the nvme storage. The data gateway records the number of completed forwardings of the node, and when that number equals the out-degree, i.e. when the number of completed forwardings in the memory space equals the number of data output streams, the memory space of the node can be reclaimed. This allows the memory space to be reclaimed in a timely manner.
Fig. 5 shows a schematic diagram of a data transmission network in this embodiment. The data gateway synchronizes data from the other nodes in the cluster into the local node's MR (Memory Region) space through the RNIC; according to the configuration, the data gateway forwards the data to the user-mode TCP protocol stack and the user-mode nvme protocol stack, writes the data processed by the user-mode nvme protocol stack into the nvme storage device, and transmits the message produced by the user-mode TCP/IP protocol stack to the Ethernet node, which obtains the data through its network card and processes it.
The data gateway in this embodiment includes: a source network card, a target network card and a user mode protocol stack; the source network card is used for reading data from a source network node to the memory space, the user mode protocol stack is used for processing the data in the memory space, and the target network card is used for sending the data processed by the user mode protocol stack to the target network node.
It should be noted that the conventional Ethernet transport protocols TCP and UDP are defined in terms of packets and cannot be forwarded through the data gateway service directly. The data transmission method of this embodiment therefore customizes the user-mode transport protocol stack, while the RDMA and NVMe protocols already take memory as their operation object and can access the data gateway service seamlessly. The memory data thus reaches the ENIC network-card channel directly, and the network-card hardware reads the data by DMA to complete the transmission process.
The data transmission method of the embodiment is compatible with a network protocol and a local storage protocol, and can realize zero-copy forwarding of data between a network node and local storage according to rules.
The steps of the above methods are divided in this way for clarity of description; in implementation they may be combined into one step, or a step may be split into several steps, as long as the same logical relationship is preserved, and this falls within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without changing its core design also falls within the protection scope of this patent.
A third embodiment of the present invention relates to a data transmission apparatus, as shown in fig. 6, including: a data reading module 601, configured to read data from a source network node to a memory space through a source network card; a data processing module 602, configured to transmit the data in the memory space to a user mode protocol stack, and obtain the data processed by the user mode protocol stack; a data sending module 603, configured to send the processed data to a target network node through a target network card.
In an example, the data reading module 601 is further configured to configure forwarding rules of data in the memory according to links of the source network node and the target network node, where the forwarding rules include: source network node information and target network node information.
In an example, the data sending module 603 is further configured to send the processed data to a target network node through a target network card according to the source network node information and the target network node information in the forwarding rule.
In one example, the forwarding rule further includes local storage information, and the data sending module 603 is further configured to transmit the data processed by the user-mode protocol stack to local storage according to the local storage information configured in the forwarding rule.
In one example, the data sending module is further configured to obtain the number of completed data forwardings for the memory space, and to reclaim the memory space if that number equals the number of data output streams, where the number of output streams is obtained from the number of target network nodes and the number of local storage nodes.
In an example, the source network card is an RNIC network card, the source network node is an RDMA network node, the target network card is an ENIC network card, the target network node is an ethernet node, and the data processing module 602 is further configured to transmit data in the memory space to the user state TCP/IP protocol stack, and acquire data processed by the user state TCP/IP protocol stack.
In an example, the source network card is an ENIC network card, the source network node is an ethernet node, the target network card is an RNIC network card, the target network node is an RDMA network node, and the data processing module 602 is further configured to transmit the data in the memory space to a user-state RDMA protocol stack, and acquire the data processed by the user-state RDMA protocol stack.
In one example, the user-mode TCP/IP protocol stack includes a completion queue; the data sending module 603 is further configured to store a completion event into the completion queue after the processed data is sent to the target network node through the target network card, and to obtain the sending result according to the completion event in the completion queue.
It should be understood that this embodiment is a system example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications a logical unit may be a physical unit, part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements are present in this embodiment.
A fourth embodiment of the present invention relates to a network device, as shown in fig. 7, including: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data transmission method described above.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as will be understood by those skilled in the art, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A method of data transmission, comprising:
reading data from a source network node to a memory space through a source network card;
transmitting the data in the memory space to a user mode protocol stack to acquire the data processed by the user mode protocol stack;
and sending the processed data to a target network node through a target network card.
2. The data transmission method according to claim 1, wherein before reading the data from the source network node to the memory space through the source network card, the method further comprises:
configuring a forwarding rule of data in a memory according to links of the source network node and the target network node, wherein the forwarding rule comprises: source network node information and target network node information;
the sending the processed data to the target network node through the target network card includes:
and sending the processed data to a target network node through a target network card according to the source network node information and the target network node information in the forwarding rule.
3. The data transmission method according to claim 2, wherein the forwarding rule further comprises: local storage information;
after the data processed by the user mode protocol stack is obtained according to the data in the memory space, the method further includes:
and transmitting the data processed by the user mode protocol stack to a local storage according to the local storage information configured in the forwarding rule.
4. The data transmission method according to claim 3, wherein after the sending the processed data to the target network node through the target network card, the method further comprises:
acquiring the quantity of completed data forwarding in the memory space;
if the number of the completed data forwarding in the memory space is equal to the number of the data output streams, recovering the memory space, wherein the number of the data output streams is determined according to output information in the forwarding rule, and the output information includes: the target network node information and the locally stored information.
5. The data transmission method according to any one of claims 1 to 4, wherein the source network card is an RNIC network card, the source network node is an RDMA network node, the target network card is an ENIC network card, and the target network node is an Ethernet node;
or, the source network card is an ENIC network card, the source network node is an Ethernet node, the target network card is an RNIC network card, and the target network node is an RDMA network node.
6. The data transmission method according to claim 5, wherein the user mode protocol stack comprises: a user state RDMA protocol stack and a user state TCP/IP protocol stack;
if the source network card is an RNIC network card, the source network node is an RDMA network node, the target network card is an ENIC network card, and the target network node is an ethernet node, the transmitting the data in the memory space to a user mode protocol stack to obtain the data processed by the user mode protocol stack includes:
transmitting the data in the memory space to a user state TCP/IP protocol stack to acquire the data processed by the user state TCP/IP protocol stack;
if the source network card is an ENIC network card, the source network node is an ethernet node, the target network card is an RNIC network card, and the target network node is an RDMA network node, the transmitting the data in the memory space to a user mode protocol stack to obtain the data processed by the user mode protocol stack includes:
and transmitting the data in the memory space to a user state RDMA protocol stack, and acquiring the data processed by the user state RDMA protocol stack.
7. The data transmission method according to claim 6, wherein the user-state TCP/IP protocol stack comprises: a completion queue;
after the processed data is sent to the target network node through the target network card, the method further comprises the following steps:
storing a completion event into the completion queue;
and acquiring a sending result according to the completion event in the completion queue.
8. A data transmission apparatus, comprising:
the data reading module is used for reading data from a source network node to the memory space through a source network card;
the data processing module is used for transmitting the data in the memory space to a user mode protocol stack and acquiring the data processed by the user mode protocol stack;
and the data sending module is used for sending the processed data to the target network node through the target network card.
9. A network device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data transfer method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the data transmission method according to any one of claims 1 to 7.
CN202011323780.5A 2020-11-23 2020-11-23 Data transmission method, device, network equipment and storage medium Pending CN113485823A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011323780.5A CN113485823A (en) 2020-11-23 2020-11-23 Data transmission method, device, network equipment and storage medium
PCT/CN2021/131823 WO2022105884A1 (en) 2020-11-23 2021-11-19 Data transmission method and apparatus, network device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011323780.5A CN113485823A (en) 2020-11-23 2020-11-23 Data transmission method, device, network equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113485823A true CN113485823A (en) 2021-10-08

Family

ID=77932626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011323780.5A Pending CN113485823A (en) 2020-11-23 2020-11-23 Data transmission method, device, network equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113485823A (en)
WO (1) WO2022105884A1 (en)


Also Published As

Publication number Publication date
WO2022105884A1 (en) 2022-05-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination