CN111404986A - Data transmission processing method, device and storage medium - Google Patents

Data transmission processing method, device and storage medium

Info

Publication number
CN111404986A
CN111404986A (application number CN201911277583.1A)
Authority
CN
China
Prior art keywords
memory
end device
receiving
memory block
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911277583.1A
Other languages
Chinese (zh)
Other versions
CN111404986B (en)
Inventor
唐盛武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN201911277583.1A priority Critical patent/CN111404986B/en
Publication of CN111404986A publication Critical patent/CN111404986A/en
Application granted granted Critical
Publication of CN111404986B publication Critical patent/CN111404986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/14 Session management
    • H04L 67/141 Setup of application sessions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1016 Performance improvement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/15 Use in a specific computing environment
    • G06F 2212/154 Networked environment
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a data transmission processing method, a device, and a storage medium. The method includes: after the receiving end device establishes a connection with the sending end device, it allocates a first memory block from a created memory pool; the receiving end device creates a receiving request in a receiving queue, where the receiving request includes the address of the first memory block; and the receiving end device stores the data sent by the sending end device into the first memory block according to the receiving request. The method of the embodiment of the invention improves data transmission efficiency.

Description

Data transmission processing method, device and storage medium
Technical Field
The present invention relates to the field of network data transmission technologies, and in particular, to a data transmission processing method, device, and storage medium.
Background
With the development of communication technology, transmitting data over networks has become an important way for people to exchange information. The mainstream data transmission modes today are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) over Ethernet. During TCP/UDP transmission, the operating system must copy the data between buffers many times, so transmission efficiency is low.
Disclosure of Invention
The invention provides a data transmission processing method, a device, and a storage medium for improving transmission efficiency.
In a first aspect, the present invention provides a data transmission processing method, including:
after the receiving end equipment establishes connection with the sending end equipment, a first memory block is allocated from the established memory pool;
the receiving end device creates a receiving request in a receiving queue, where the receiving request includes an address of the first memory block;
and the receiving end equipment stores the data sent by the sending end equipment into the first memory block according to the receiving request.
In a second aspect, the present invention provides a data transmission processing method, including:
after a sending end device establishes connection with a receiving end device, the sending end device allocates a first memory block from a created memory pool and writes data to be sent into the first memory block;
the sending end equipment creates a sending request in a sending queue so as to send the data to be sent to the receiving end equipment; the sending request includes an address of the first memory block.
In a third aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method described in any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a receiving end device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the first aspects via execution of the executable instructions.
In a fifth aspect, an embodiment of the present invention provides a sending-end device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the second aspects via execution of the executable instructions.
In the data transmission processing method, device, and storage medium provided in the embodiments of the present invention, after a receiving end device establishes a connection with a sending end device, it allocates a first memory block from a created memory pool; the receiving end device creates a receiving request in a receiving queue, where the receiving request includes the address of the first memory block; and the receiving end device stores the data sent by the sending end device into the first memory block according to the receiving request. The user service and network transmission share the first memory block, that is, they share memory, so memory-to-memory copying is avoided and transmission efficiency is improved; in addition, using a memory pool reduces the time spent on memory registration.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an application scenario diagram provided in an embodiment of the present invention;
Fig. 2 is a schematic flow chart of a data transmission processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an embodiment of a method provided by the present invention;
Fig. 4 is a flow chart illustrating a data transmission processing method according to another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a data transmission processing apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another embodiment of a data transmission processing apparatus according to the present invention;
Fig. 7 is a schematic structural diagram of an embodiment of a receiving end device provided in the present invention;
Fig. 8 is a schematic structural diagram of an embodiment of a sending end device provided in the present invention.
With the above drawings, specific embodiments of the disclosure are described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description, claims, and drawings of this invention are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
First, the application scenario of the invention is introduced:
the method provided by the embodiment of the invention is applied to a network data transmission scene to improve the transmission efficiency.
TCP transmission requires operating system intervention in every operation, including buffer copies at both end nodes of the network. In a byte-stream-oriented network there is no notion of message boundaries. When an application sends a data packet, the operating system first places the bytes in an anonymous buffer in operating system memory; when transmission completes at the receiving side, the operating system copies the data from its buffer into the application's receive buffer. This is repeated for each arriving packet until the entire byte stream has been received.
The method provided by the embodiment of the invention is applied to the scenario shown in fig. 1, which includes a sending end device and a receiving end device between which data is transmitted. The sending end device may be a client device or a server, and the receiving end device may likewise be a client device or a server; the embodiment of the present invention does not limit this. The sending end device and the receiving end device are connected through a network, such as a wired network. The wired network may be an InfiniBand (IB) network, or an Ethernet network supporting RDMA over Converged Ethernet (RoCE) or the Internet Wide Area RDMA Protocol (iWARP), also known as RDMA over TCP.
In the following embodiments, the sending end device is taken to be a client device and the receiving end device a server device, by way of example.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart illustrating a data transmission processing method according to an embodiment of the present invention. As shown in fig. 2, the method provided by this embodiment includes:
step 201, after the receiving end device establishes a connection with the sending end device, the receiving end device allocates a first memory block from the created memory pool.
Specifically, after the receiving end device establishes a connection with the sending end device, it allocates one or more first memory blocks from a pre-created memory pool.
In one implementation, after receiving a memory application instruction from a user, the receiving end device allocates memory blocks from the memory pool; the number of allocated memory blocks may be determined by the user's application instruction, i.e., the instruction may specify how many memory blocks are requested.
Step 202, the receiving end device creates a receiving request in the receiving queue, where the receiving request includes an address of the first memory block.
Specifically, as shown in fig. 3, the receiving end device creates a Receive Request (RR) in a Receive Queue (RQ), where the receive request RR includes the address of the first memory block. One or more receiving requests RR may be created, and each receiving request RR may correspond to one or more first memory blocks.
Pre-posting ensures that the memory behind the receive queue RQ is ready before the sending end submits a send request SR to its send queue SQ, so that performance is not degraded by entering the Receiver Not Ready (RNR) flow. When a connection is established, N receiving requests RR (i.e., memory prepared in advance for storing the data sent by the sending end) are created in the RQ, where N is an integer greater than 0 determined according to the concurrency of the sending end.
In one possible implementation, several receive requests RR are created in the receive queue RQ at once by calling the RDMA interface ibv_post_recv, without waiting for their completion status, and the work request identifier wr_id in the struct ibv_recv_wr of each RR is set to the address of the first memory block of that receiving request.
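The pre-posting step above can be sketched in plain C. This is a simplified model with illustrative names, not the patent's code: real code would fill a struct ibv_recv_wr and call ibv_post_recv from libibverbs; here the receive queue is modeled as an array of wr_id values, each carrying a block address.

```c
#include <stddef.h>
#include <stdint.h>

/* Model of a receive queue: each posted entry records only the wr_id,
 * which by convention holds the address of a pool memory block. */
#define RQ_DEPTH 16

typedef struct {
    uint64_t wr_id[RQ_DEPTH];  /* block address per posted receive request */
    size_t   count;            /* number of requests currently posted      */
} recv_queue;

/* Stand-in for ibv_post_recv: record the block address as wr_id. */
static int post_recv(recv_queue *rq, void *block) {
    if (rq->count >= RQ_DEPTH)
        return -1;                           /* queue full */
    rq->wr_id[rq->count++] = (uint64_t)(uintptr_t)block;
    return 0;
}

/* At connection setup: post N requests at once, without waiting for
 * completions, so the sender never hits the RNR flow. */
static void prepost(recv_queue *rq, char blocks[][4096], size_t n) {
    for (size_t i = 0; i < n; i++)
        (void)post_recv(rq, blocks[i]);
}
```

The design point is that the RQ always has registered memory standing by before the peer posts its first send request.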
Step 203, the receiving end device stores the data sent by the sending end device into the first memory block according to the receiving request.
Specifically, the receiving end device stores the data sent by the sending end device in the first memory block according to the address of the first memory block included in the receiving request, and the subsequent user can read the data from the first memory block according to the address of the first memory block and process the service according to the data.
Further, step 203 may be followed by the following steps:
the receiving end equipment monitors whether a completion event notification exists in the completion queue;
if the completion event notification exists and the completion event notification includes the receiving completion state indication, the receiving end device notifies the address of the first memory block included in the completion event notification to the user, so that the user processes the data stored in the first memory block.
Specifically, as shown in fig. 3, the receiving end device monitors the completion queue. If a completion event notification is detected (for example, new data appears in the completion queue CQ), the receiving end device obtains the reception completion status indication from it (for example, the opcode is IBV_WC_RECV), which indicates that the data sent by the sending end has been stored in the first memory block. It then notifies the user of the address of the first memory block carried in the completion event notification, so that the user can read the data stored there according to that address and process the service accordingly.
Using the event notification mechanism described above reduces CPU overhead.
In one implementation, the interface ibv_poll_cq is called to obtain a completion event notification from the completion queue CQ; the work request identifier wr_id in the struct ibv_wc entry is the address of the memory block whose receive operation has completed, so data reception and transmission are completed asynchronously.
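The wr_id convention just described can be reduced to a pair of casts. This is a plain-C sketch with illustrative names, not libibverbs code: in real code the 64-bit wr_id would be set when posting and read back from struct ibv_wc after ibv_poll_cq; the point is that the completion handler recovers the block with a cast rather than a lookup table.

```c
#include <stdint.h>

/* Set when posting the request: the block's address becomes the wr_id. */
static uint64_t block_to_wr_id(void *block) {
    return (uint64_t)(uintptr_t)block;
}

/* Read when the completion arrives: the wr_id is cast back to the block
 * pointer, identifying which memory block the completed operation used. */
static void *wr_id_to_block(uint64_t wr_id) {
    return (void *)(uintptr_t)wr_id;
}
```

Because wr_id is opaque to the hardware, the round trip is exact and costs nothing per completion.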
In the above embodiment, the creation of the reception request and the checking of the completion status of the completion queue are performed asynchronously, making full use of the network bandwidth.
Further, the receiving end device releases the first memory block to the memory pool after receiving a release instruction from the user.
In this embodiment, a memory pool is used: the memory blocks in the pool are registered once and reused many times, which avoids repeated memory-registration overhead and improves transmission efficiency; the memory is shared between user service processing and network transmission, which avoids memory-to-memory copying; and memory scheduling is performed through the receiving queue and completed via event notification indications, which improves data transmission performance.
Based on the foregoing embodiment, further, before allocating the first memory block for receiving data for the current connection, the receiving end device needs to know the size of the first memory block to be allocated, that is, before step 201, the following operations may be performed:
the receiving end device receives a connection establishment request of the sending end device, wherein the connection establishment request carries the size of the first memory block.
Specifically, when the receiving end device establishes a connection with the sending end, the size of the first memory block can be known.
In other embodiments of the present invention, the receiving end device may also send a connection request to the sending end device, where the connection request carries the size of the first memory block.
The size of the first memory block may be set according to actual service requirements, and may be set to the size of the data block with the highest transmission frequency.
In an alternative implementation, the size of the first memory block is generally determined by the client device, and is notified to the server.
On the basis of the foregoing embodiment, further, before establishing the connection, a memory pool may be created in advance, which may specifically be implemented by the following method:
The receiving end device applies to its operating system for a first preset number of second memory blocks and registers them with its network card to form the memory pool. The first preset number is determined according to the preset minimum available memory count and the maximum application memory count of the memory pool. The second memory blocks include the first memory block.
Specifically, before creating the memory pool, the minimum available memory count MIN and the maximum application memory count MAX of the memory pool may be set. The minimum available memory count can be determined according to business requirements; the maximum application memory count is determined according to the memory conditions of the device, to prevent unbounded memory use.
When the memory pool is created, a number of second memory blocks are requested from the operating system of the device and registered with the device's network card; the first preset number may be determined from MIN and MAX, for example MIN + (MAX - MIN)/2 memory blocks. The second memory blocks may be of the same or different sizes.
Further, the receiving end device monitors the usage of the memory blocks in the memory pool; when the number of available memory blocks falls below a threshold, it can apply for and register new memory blocks into the pool, which may be implemented as follows:
if the number of available memory blocks in the memory pool is smaller than the minimum available memory count, and the total number of memory blocks applied for by the pool is smaller than the maximum application memory count, the receiving end device applies to its operating system for a second preset number of third memory blocks and registers them with its network card into the memory pool.
Specifically, when the number of available memory blocks in the memory pool is smaller than the threshold MIN, a second preset number of third memory blocks is newly applied for and registered into the pool; the second preset number is, for example, the minimum available memory count, or is determined according to the most recent available memory count.
When the total number of memory blocks applied for by the pool reaches the threshold MAX, no further memory is applied for, preventing unbounded memory use. That is, when the number of available memory blocks is smaller than MIN and the total applied memory is smaller than MAX, a certain number of memory blocks may be applied for and registered into the pool, for example MIN blocks. The second preset number of third memory blocks may all have the same size; for example, if the number of available blocks of the first memory block's size is insufficient, new blocks of that size are applied for and registered.
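The sizing and replenishment rules above (initial size MIN + (MAX - MIN)/2, grow when free blocks drop below MIN, never exceed MAX) can be sketched as pool bookkeeping in plain C. The names are illustrative, not from the patent, and real code would also register each block with the network card when it is added.

```c
#include <stddef.h>

/* Bookkeeping for the memory pool: counts only, blocks themselves and
 * their network-card registration are omitted from this sketch. */
typedef struct {
    size_t min_free;   /* MIN: minimum number of free blocks to keep ready */
    size_t max_total;  /* MAX: hard cap on blocks the pool may register    */
    size_t total;      /* blocks applied for and registered so far         */
    size_t free_cnt;   /* blocks currently available                       */
} mem_pool;

/* On creation, register MIN + (MAX - MIN)/2 blocks, as in the text. */
static void pool_init(mem_pool *p, size_t min_free, size_t max_total) {
    p->min_free = min_free;
    p->max_total = max_total;
    p->total    = min_free + (max_total - min_free) / 2;
    p->free_cnt = p->total;
}

/* Replenish: if free blocks drop below MIN and the cap is not reached,
 * apply for up to MIN more blocks, never exceeding MAX in total. */
static void pool_replenish(mem_pool *p) {
    if (p->free_cnt < p->min_free && p->total < p->max_total) {
        size_t grow = p->min_free;
        if (p->total + grow > p->max_total)
            grow = p->max_total - p->total;
        p->total    += grow;
        p->free_cnt += grow;
    }
}

static int pool_alloc(mem_pool *p) {            /* returns 1 on success */
    if (p->free_cnt == 0) return 0;
    p->free_cnt--;
    return 1;
}

static void pool_release(mem_pool *p) { p->free_cnt++; }
```

With MIN = 4 and MAX = 12, the pool starts with 8 registered blocks; after 5 allocations only 3 remain free, so a replenish adds 4 more, reaching the cap of 12.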
Memory in the pool can be used many times after a single application and registration; it does not need to be registered again. After use, a block is released back into the pool and can be reused without re-registration, which reduces the time spent on memory registration. The memory pool also reduces the overhead of repeatedly applying for and releasing memory.
Further, step 203 may be followed by the following steps:
the receiving end equipment allocates a new first memory block from the memory pool and creates a new receiving request in a receiving queue; the new received request includes an address of the new first memory block.
Specifically, in the embodiment of the present invention, the first memory blocks are memory blocks of the same size, which avoids multiple information interactions between the sending end and the receiving end and improves transmission efficiency.
And the memory blocks with the same size are fixedly used for each connection established by the sending end equipment and the receiving end equipment.
The receiving end device monitors the completion queue. If a completion event notification is detected (for example, new data appears in the completion queue CQ), the completion status indication is obtained from it (for example, the opcode is IBV_WC_RECV), which indicates that data sent by the sending end has been stored in a first memory block and that a memory block in the reception queue RQ has been consumed. At this time a new first memory block is allocated and a new receiving request is created, the new receiving request including the address of the new first memory block. One or more new receiving requests may be created, and each may correspond to one or more first memory blocks. In this way, consumed memory blocks in the RQ are replenished in time.
In one possible implementation, the number of new first memory chunks may be the same as the number of consumed first memory chunks.
In this embodiment, for one connection, the negotiated first memory block with the fixed size is used, so that multiple interactions in the information transmission process are avoided, and the transmission efficiency is improved.
Fig. 4 is a flowchart illustrating a data transmission processing method according to another embodiment of the present invention. The main execution body of the method of this embodiment is the sending end device. As shown in fig. 4, the method of the present embodiment includes:
step 401, after the sending end device establishes connection with the receiving end device, the sending end device allocates a first memory block from the created memory pool, and writes data to be sent into the first memory block.
Specifically, after the sending end device establishes a connection with the receiving end device, the sending end device allocates a first memory block from a pre-created memory pool, and writes data to be sent into the first memory block, where the number of the first memory blocks may be one or more.
In one implementation, after receiving a memory application instruction from a user, the sending end device allocates memory blocks from the memory pool; the number of allocated memory blocks may be determined by the user's application instruction, i.e., the instruction may specify how many memory blocks are requested.
Step 402, a sending end device creates a sending request in a sending queue to send data to be sent to a receiving end device; the sending request includes an address of the first memory block.
Specifically, the sending end device creates a Send Request (SR) in a Send Queue (SQ), where the send request SR includes the address of the first memory block. One or more sending requests SR may be created, and each sending request SR may correspond to one or more first memory blocks.
In one implementation, several send requests SR are created in the send queue SQ at once by calling ibv_post_send, without waiting for their completion status, and the work request identifier wr_id in the struct ibv_send_wr of each send request may be set to the address of the first memory block.
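The send path above mirrors the receive path and can be sketched the same way. This is a plain-C model with illustrative names, not libibverbs code: real code would fill a struct ibv_send_wr, set wr_id to the block address, and call ibv_post_send; the model shows how the completion later identifies the block to hand back for release.

```c
#include <stdint.h>
#include <string.h>

enum { BLOCK_SIZE = 4096 };

/* Model of a send request: only the wr_id matters here, carrying the
 * address of the pool block whose contents are being sent. */
typedef struct {
    uint64_t wr_id;
} send_request;

/* Write the data to send into the pool block, then build the request. */
static send_request post_send(char *block, const char *data) {
    strncpy(block, data, BLOCK_SIZE - 1);
    block[BLOCK_SIZE - 1] = '\0';
    send_request sr = { (uint64_t)(uintptr_t)block };
    return sr;
}

/* On an IBV_WC_SEND-style completion: recover the block so the user
 * can instruct that it be released back to the memory pool. */
static char *completed_block(send_request sr) {
    return (char *)(uintptr_t)sr.wr_id;
}
```

Because the user writes directly into the registered pool block, the same memory serves both the service logic and the network transfer, which is the zero-copy sharing the text describes.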
On the basis of the above embodiment, optionally, the following operations may also be performed after step 402:
the sending end equipment monitors whether a completion event notification exists in a completion queue;
if the completion event notification exists and the completion event notification includes a sending completion state indication, the sending end device notifies a user of an address of the first memory block included in the completion event notification, so that the user indicates to release the first memory block to the memory pool.
Specifically, the sending end device monitors the completion queue. If a completion event notification exists (for example, new data appears in the completion queue CQ), the sending completion status indication is obtained from it (for example, the opcode is IBV_WC_SEND), which indicates that the data in the first memory block has been sent. The sending end device then notifies the user of the address of the first memory block carried in the completion event notification, so that the user can instruct that the first memory block be released back to the memory pool.
In one implementation, ibv_poll_cq is called to obtain a completion event notification from the completion queue CQ; the wr_id in the struct ibv_wc entry is the address of the first memory block whose send operation has completed, so the sending and receiving of memory data are completed asynchronously.
The sending-end device may release the first memory block to the memory pool after receiving the release instruction of the user.
Optionally, the following operations may be performed before step 401:
the sending end device sends a connection establishment request to the receiving end device, where the connection establishment request carries the size of the first memory block.
Specifically, reference may be made to the receiving end side method embodiment, which is not described herein again.
Optionally, the following operations may be performed before step 401:
the sending end device applies for an operating system of the sending end device and registers a first preset number of second memory blocks in a network card of the sending end device to form the memory pool; the first preset number is determined according to the minimum available memory number and the maximum application memory number of the memory pool, and the second memory block includes the first memory block.
Specifically, reference may be made to the receiving end side method embodiment, which is not described herein again.
Further, the method of this embodiment further includes:
if the number of available memory blocks in the memory pool is smaller than the minimum available memory count, and the total number of memory blocks applied for by the pool is smaller than the maximum application memory count, the sending end device applies to its operating system for a second preset number of third memory blocks and registers them with its network card into the memory pool.
Specifically, reference may be made to the receiving end side method embodiment, which is not described herein again.
The first preset number and the second preset number on the sending end side may be the same as or different from those on the receiving end side, and the minimum available memory number and the maximum applied memory number of the sending end side's memory pool may likewise be the same as or different from those of the receiving end side.
In a possible implementation manner, data may be transmitted directly from the memory block, via the network card and over the network, to a memory block of another device, thereby implementing network data transmission between devices. The data is first copied to the network card by direct memory access (DMA), then transmitted over the network to the peer device's network card, and then written directly into that device's memory, without the operating system copying the data between buffers multiple times and without CPU participation; this reduces bandwidth and processor overhead and significantly reduces latency.
In this embodiment, by using the memory pool, memory blocks registered once can be reused many times, avoiding the cost of repeated memory registration and improving transmission efficiency; the memory can be shared between user processing services and network transmission, avoiding memory data copies and further improving efficiency. In addition, the sending queue is used for memory scheduling, and completion of memory scheduling is indicated through event notifications, improving data transmission performance.
Fig. 5 is a structural diagram of an embodiment of a data transmission processing apparatus provided in the present invention, and as shown in fig. 5, the data transmission processing apparatus of the embodiment is applied to a receiving end device, and the data transmission processing apparatus includes:
an allocating module 501, configured to allocate a first memory block from an established memory pool after a connection is established between a receiving end device and a sending end device;
a creating module 502, configured to create a receive request in a receive queue, where the receive request includes an address of the first memory block;
a processing module 503, configured to store the data sent by the sending-end device in the first memory block according to the receiving request.
In a possible implementation manner, the processing module 503 is further configured to: after the receiving end device stores the data sent by the sending end device into the first memory block according to the receiving request, monitor whether a completion event notification exists in a completion queue; and
if the completion event notification exists and the completion event notification includes a reception completion status indication, notify a user of the address of the first memory block included in the completion event notification, so that the user processes the data stored in the first memory block.
In one possible implementation manner, the apparatus further includes:
a receiving module, configured to receive a connection establishment request of the sending-end device, where the connection establishment request carries a size of the first memory block.
In a possible implementation manner, the allocating module 501 is further configured to, after storing the data sent by the sending-end device in the first memory block according to the receiving request, allocate a new first memory block from the memory pool, and create a new receiving request in the receiving queue; the new receiving request includes an address of the new first memory block.
In one possible implementation manner, the processing module 503 is configured to:
apply to an operating system of the receiving end device for a first preset number of second memory blocks and register them with a network card of the receiving end device to form the memory pool; the first preset number is determined according to the preset minimum available memory number and the maximum applied memory number of the memory pool; the second memory blocks include the first memory block.
In one possible implementation manner, the processing module 503 is configured to:
if the number of available memory blocks in the memory pool is smaller than the minimum available memory number, and the total number of memory blocks applied for by the memory pool is smaller than the maximum applied memory number, apply to the operating system of the receiving end device for a second preset number of third memory blocks and register them with the network card of the receiving end device, adding them to the memory pool.
The apparatus of this embodiment may be configured to implement the technical solution of the above-mentioned receiving end side method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 6 is a structural diagram of an embodiment of a data transmission processing apparatus provided in the present invention, and as shown in fig. 6, the data transmission processing apparatus of the embodiment is applied to a sending end device, and the data transmission processing apparatus includes:
an allocating module 601, configured to allocate a first memory block from an established memory pool after a connection is established between a sending end device and a receiving end device, and write data to be sent into the first memory block;
a processing module 602, configured to create a sending request in a sending queue, so as to send the to-be-sent data to the receiving end device; the sending request includes an address of the first memory block.
In a possible implementation manner, the processing module 602 is further configured to:
after a sending request is created in the sending queue, monitor whether a completion event notification exists in a completion queue; and
if the completion event notification exists and the completion event notification includes a sending completion status indication, notify a user of the address of the first memory block included in the completion event notification, so that the user indicates that the first memory block is to be released to the memory pool.
In one possible implementation manner, the apparatus further includes: a receiving module, configured to send a connection establishment request to the receiving end device before the first memory block is allocated from the created memory pool, where the connection establishment request carries the size of the first memory block.
In a possible implementation manner, the processing module 602 is further configured to:
apply to an operating system of the sending end device for a first preset number of second memory blocks and register them with a network card of the sending end device to form the memory pool; the first preset number is determined according to the minimum available memory number and the maximum applied memory number of the memory pool, and the second memory blocks include the first memory block.
In a possible implementation manner, the processing module 602 is further configured to:
if the number of available memory blocks in the memory pool is less than the minimum available memory number, and the total number of memory blocks applied for by the memory pool is less than or equal to the maximum applied memory number, apply to the operating system of the sending end device for a second preset number of third memory blocks and register them with the network card of the sending end device, adding them to the memory pool.
The apparatus of this embodiment may be configured to implement the technical solution of the foregoing method embodiment on the sender side, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 7 is a structural diagram of an embodiment of a receiving end device provided in the present invention, and as shown in fig. 7, the receiving end device includes:
a processor 701, and a memory 702 for storing executable instructions of the processor 701.
Optionally, the receiving end device may further include: a communication interface 703 for communicating with other devices.
The above components may communicate over one or more buses.
Optionally, the receiving end device may further include: a network card, such as an InfiniBand (IB) network card.
The processor 701 is configured to execute the corresponding method in the foregoing method embodiments by executing the executable instructions; for the specific implementation process, reference may be made to the foregoing method embodiments, which are not described herein again.
Fig. 8 is a structural diagram of an embodiment of a sending end device provided in the present invention, and as shown in fig. 8, the sending end device includes:
a processor 801, and a memory 802 for storing executable instructions for the processor 801.
Optionally, the sending end device may further include: a communication interface 803 for communicating with other devices.
Optionally, the sending end device may further include: a network card, such as an InfiniBand (IB) network card.
The above components may communicate over one or more buses.
The processor 801 is configured to execute the corresponding method in the foregoing method embodiments by executing the executable instructions; for the specific implementation process, reference may be made to the foregoing method embodiments, which are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the methods in the foregoing method embodiments are implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A data transmission processing method, comprising:
after the receiving end equipment establishes connection with the sending end equipment, a first memory block is allocated from the established memory pool;
the receiving end device creates a receiving request in a receiving queue, where the receiving request includes an address of the first memory block;
and the receiving end equipment stores the data sent by the sending end equipment into the first memory block according to the receiving request.
2. The method according to claim 1, wherein after the receiving end device stores the data sent by the sending end device in the first memory block according to the receiving request, the method further includes:
the receiving end equipment monitors whether a completion event notification exists in a completion queue;
if the completion event notification exists and the completion event notification includes a reception completion status indication, the receiving end device notifies a user of the address of the first memory block included in the completion event notification, so that the user processes data stored in the first memory block.
3. The method according to claim 1 or 2, wherein before allocating the first memory block from the created memory pool, further comprising:
and the receiving end equipment receives a connection establishment request of the sending end equipment, wherein the connection establishment request carries the size of the first memory block.
4. The method according to claim 1 or 2, wherein after the receiving end device stores the data sent by the sending end device in the first memory block according to the receiving request, the method further includes:
the receiving end device allocates a new first memory block from the memory pool, and creates a new receiving request in the receiving queue; the new receiving request includes an address of the new first memory block.
5. The method according to claim 1 or 2, wherein before allocating the first memory block from the created memory pool, further comprising:
the receiving end device applies to an operating system of the receiving end device for a first preset number of second memory blocks and registers them in a network card of the receiving end device to form the memory pool; the first preset number is determined according to a preset minimum available memory number and a maximum applied memory number of the memory pool; the second memory blocks include the first memory block.
6. The method of claim 5, further comprising:
if the number of available memory blocks in the memory pool is smaller than the minimum available memory number, and the total number of memory blocks applied for by the memory pool is smaller than the maximum applied memory number, applying to the operating system of the receiving end device for a second preset number of third memory blocks and registering them in the network card of the receiving end device, the third memory blocks being added to the memory pool.
7. A data transmission processing method, comprising:
after a sending end device establishes connection with a receiving end device, the sending end device allocates a first memory block from a created memory pool and writes data to be sent into the first memory block;
the sending end equipment creates a sending request in a sending queue so as to send the data to be sent to the receiving end equipment; the sending request includes an address of the first memory block.
8. The method of claim 7, wherein after the sending end device creates the sending request in a sending queue, further comprising:
the sending end equipment monitors whether a completion event notification exists in a completion queue;
if the completion event notification exists and the completion event notification includes a sending completion state indication, the sending end device notifies a user of an address of the first memory block included in the completion event notification, so that the user indicates to release the first memory block to the memory pool.
9. The method according to claim 7 or 8, wherein before the sending-end device allocates the first memory block from the created memory pool, the method further comprises:
the sending end device sends a connection establishment request to the receiving end device, where the connection establishment request carries the size of the first memory block.
10. The method according to claim 7 or 8, wherein before the sending-end device allocates the first memory block from the created memory pool, the method further comprises:
the sending end device applies to an operating system of the sending end device for a first preset number of second memory blocks and registers them in a network card of the sending end device to form the memory pool; the first preset number is determined according to the minimum available memory number and the maximum applied memory number of the memory pool, and the second memory blocks include the first memory block.
11. The method of claim 10, further comprising:
if the number of available memory blocks in the memory pool is less than the minimum available memory number, and the total number of memory blocks applied for by the memory pool is less than or equal to the maximum applied memory number, applying to the operating system of the sending end device for a second preset number of third memory blocks and registering them in the network card of the sending end device, the third memory blocks being added to the memory pool.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-6, 7-11.
13. A receiving-end device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-6 via execution of the executable instructions.
14. A transmitting-end device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 7-11 via execution of the executable instructions.
CN201911277583.1A 2019-12-11 2019-12-11 Data transmission processing method, device and storage medium Active CN111404986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911277583.1A CN111404986B (en) 2019-12-11 2019-12-11 Data transmission processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911277583.1A CN111404986B (en) 2019-12-11 2019-12-11 Data transmission processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111404986A true CN111404986A (en) 2020-07-10
CN111404986B CN111404986B (en) 2023-07-21

Family

ID=71432504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911277583.1A Active CN111404986B (en) 2019-12-11 2019-12-11 Data transmission processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111404986B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1972215A (en) * 2006-12-06 2007-05-30 中国科学院计算技术研究所 A remote internal memory sharing system and its implementation method
CN101286878A (en) * 2008-04-22 2008-10-15 中兴通讯股份有限公司 Management method of memory pool for terminal
CN101504617A (en) * 2009-03-23 2009-08-12 华为技术有限公司 Data transmitting and receiving method and device based on processor sharing internal memory
CN102981773A (en) * 2011-09-02 2013-03-20 深圳市快播科技有限公司 Storage device access method and storage device access system and storage device access supervisor
CN103838859A (en) * 2014-03-19 2014-06-04 厦门雅迅网络股份有限公司 Method for reducing data copy among multiple processes under linux
CN104796337A (en) * 2015-04-10 2015-07-22 京信通信系统(广州)有限公司 Method and device for forwarding message
US20160342567A1 (en) * 2015-05-18 2016-11-24 Red Hat Israel, Ltd. Using completion queues for rdma event detection
CN106598736A (en) * 2016-12-13 2017-04-26 深圳中科讯联科技股份有限公司 Memory block calling method and memory block releasing method for memory pool and server
CN109144894A (en) * 2018-08-01 2019-01-04 浙江大学 Memory access patterns guard method based on data redundancy
CN109547727A (en) * 2018-10-16 2019-03-29 视联动力信息技术股份有限公司 Data cache method and device
CN109918203A (en) * 2019-03-18 2019-06-21 深圳市网心科技有限公司 Access server memory management optimization method, access server and communication system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOSE: "CMMI-ACQ: A Formal Implementation Sequences of the Processes Areas at Maturity Level 2", IEEE *
Pei Pengfei: "Design and Implementation of Distributed Message Queue Middleware Supporting Transactions", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112822300A (en) * 2021-04-19 2021-05-18 北京易捷思达科技发展有限公司 RDMA (remote direct memory Access) -based data transmission method and device and electronic equipment
CN112822300B (en) * 2021-04-19 2021-07-13 北京易捷思达科技发展有限公司 RDMA (remote direct memory Access) -based data transmission method and device and electronic equipment
CN114003366A (en) * 2021-11-09 2022-02-01 京东科技信息技术有限公司 Network card packet receiving processing method and device
WO2023082921A1 (en) * 2021-11-09 2023-05-19 京东科技信息技术有限公司 Network card packet receiving processing method and apparatus
CN114003366B (en) * 2021-11-09 2024-04-16 京东科技信息技术有限公司 Network card packet receiving processing method and device

Also Published As

Publication number Publication date
CN111404986B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
EP3719657A1 (en) Communication with accelerator via rdma-based network adapter
CN108536543B (en) Receive queue with stride-based data dispersal
KR100992282B1 (en) Apparatus and method for supporting connection establishment in an offload of network protocol processing
US7822053B2 (en) Apparatus and method for TCP buffer copy distributed parallel processing
US20090086732A1 (en) Obtaining a destination address so that a network interface device can write network data without headers directly into host memory
WO2000041365A1 (en) Method and system for credit-based data flow control
KR20070030285A (en) Apparatus and method for supporting memory management in an offload of network protocol processing
WO2014082562A1 (en) Method, device, and system for information processing based on distributed buses
CN112631788B (en) Data transmission method and data transmission server
US8539089B2 (en) System and method for vertical perimeter protection
CN110535811B (en) Remote memory management method and system, server, client and storage medium
CN111404986B (en) Data transmission processing method, device and storage medium
WO2017032152A1 (en) Method for writing data into storage device and storage device
CN113127139B (en) Memory allocation method and device based on DPDK of data plane development kit
CN100486248C (en) Zero-copy communication method under real-time environment
CN112311694B (en) Priority adjustment method and device
CN111404842B (en) Data transmission method, device and computer storage medium
CN111698274B (en) Data processing method and device
CN116471242A (en) RDMA-based transmitting end, RDMA-based receiving end, data transmission system and data transmission method
KR100812680B1 (en) Interprocessor communication protocol providing guaranteed quality of service and selective broadcasting
CN114095550A (en) Remote procedure calling method for directly reading reference parameter by server
CN112019450A (en) Inter-device streaming communication
WO2023222077A1 (en) Resource configuration method and apparatus, and related device
KR102211005B1 (en) A middleware apparatus of data distribution services for providing a efficient message processing
CN114116239A (en) Transmission method of reference parameter and remote procedure calling method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant