CN115328408B - Method, apparatus, device and medium for data processing - Google Patents


Info

Publication number
CN115328408B
CN115328408B (application CN202211256303.0A)
Authority
CN
China
Prior art keywords
data operation
operation request
read
memory access
direct memory
Prior art date
Legal status
Active
Application number
CN202211256303.0A
Other languages
Chinese (zh)
Other versions
CN115328408A (en)
Inventor
汪权
柯克
韩月
韦新伟
刘军
Current Assignee
Lenovo Netapp Technology Ltd
Original Assignee
Lenovo Netapp Technology Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Netapp Technology Ltd filed Critical Lenovo Netapp Technology Ltd
Priority to CN202211256303.0A
Publication of CN115328408A
Application granted
Publication of CN115328408B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • G06F3/0611Improving I/O performance in relation to response time
    • G06F3/0613Improving I/O performance in relation to throughput
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal

Abstract

The present disclosure provides a method, apparatus, device, and medium for data processing, wherein the method comprises: receiving a Server Message Block (SMB)-based data operation request; in response to the data operation request being a read-type or write-type data operation request, processing the received request through remote direct memory access (RDMA); and in response to the data operation request being neither a read-type nor a write-type data operation request, processing the received request through SMB. The method classifies received data operation requests and processes read-type and write-type requests through RDMA, thereby reducing memory copy operations, lowering latency, and reducing consumption of system resources.

Description

Method, apparatus, device and medium for data processing
Technical Field
The present disclosure relates to the field of data processing, and more particularly, to a method, an apparatus, a device, and a medium for data processing.
Background
In existing data processing based on Server Message Block (SMB), for example in applications implemented on top of SMB (such as the Samba software), a conventional SMB-based application uses the Transmission Control Protocol/Internet Protocol (TCP/IP) to send and receive requests. A request must go through multiple memory copy operations from the network card to the kernel and then to the application layer, which not only increases latency but also consumes system resources.
In addition, conventional SMB-based data processing handles all requests, including read and write requests, in a single-process mode and cannot process multiple requests concurrently, so request processing is relatively slow.
Therefore, a new data processing method is required to solve the above problems.
Disclosure of Invention
In view of the above problems, the present disclosure provides a method for data processing that classifies received SMB-based data operation requests and, when a request is a read-type or write-type data operation request, processes it through Remote Direct Memory Access (RDMA), thereby reducing memory copy operations, lowering latency, and reducing consumption of system resources.
An embodiment of the present disclosure provides a method for data processing, the method including: receiving a Server Message Block (SMB)-based data operation request; in response to the data operation request being a read-type or write-type data operation request, processing the received request through remote direct memory access; and in response to the data operation request being neither a read-type nor a write-type data operation request, processing the received request through SMB.
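The request-classification step above can be sketched as a simple dispatcher. The sketch below is illustrative only: the names `SmbOp` and `dispatch` are not from the disclosure, and a real implementation would operate on SMB2 command codes and RDMA verbs rather than Python objects.

```python
from enum import Enum, auto

class SmbOp(Enum):
    READ = auto()
    WRITE = auto()
    CREATE = auto()   # example of a non-read, non-write request

def dispatch(request_op: SmbOp) -> str:
    """Route a received SMB request to the fast RDMA path or to the
    ordinary SMB service process, mirroring steps S120/S130."""
    if request_op in (SmbOp.READ, SmbOp.WRITE):
        return "rdma"   # read/write handled directly via RDMA
    return "smb"        # everything else goes to the SAMBA service process

assert dispatch(SmbOp.READ) == "rdma"
assert dispatch(SmbOp.CREATE) == "smb"
```

The key design point is that only the two hot-path operation types take the RDMA fast path; every other SMB command falls through to the existing service process.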
According to an embodiment of the present disclosure, processing the received data operation request through remote direct memory access in response to the request being a read-type or write-type data operation request includes: in the case that the data operation request is a read-type request, reading the requested first data content from storage and storing the read first data content into a first cache region through remote direct memory access; and in the case that the data operation request is a write-type request, writing second data content to be written from a second cache region into storage through remote direct memory access.
According to an embodiment of the present disclosure, the method further includes: after the read first data content is stored in the first cache region through remote direct memory access, sending the first data content to an external device through remote direct memory access; and after the second data content to be written is written from the second cache region into storage through remote direct memory access, storing a response to the write-type data operation request in the first cache region and sending the response to the external device through remote direct memory access.
According to an embodiment of the present disclosure, processing the received data operation request through SMB in response to the request being neither a read-type nor a write-type data operation request includes: obtaining, through SMB and according to the data operation request, third data content based on storage; and storing the third data content in the first cache region.
According to an embodiment of the present disclosure, the method further includes: after the third data content is stored in the first cache region, sending the third data content to an external device through remote direct memory access.
According to an embodiment of the present disclosure, processing the received data operation request through remote direct memory access in response to the request being a read-type or write-type data operation request includes: in response to the data operation request being a plurality of read-type or write-type data operation requests, processing the received requests in parallel through remote direct memory access.
An embodiment of the present disclosure provides an apparatus for data processing, the apparatus including: a receiving module configured to receive a Server Message Block (SMB)-based data operation request; and a processing module configured to process the received data operation request through remote direct memory access in response to the request being a read-type or write-type data operation request, and to process the received request through SMB in response to the request being neither a read-type nor a write-type data operation request.
According to an embodiment of the present disclosure, processing the received data operation request through remote direct memory access in response to the request being a read-type or write-type data operation request includes: in the case that the data operation request is a read-type request, reading the requested first data content from storage and storing the read first data content into a first cache region through remote direct memory access; and in the case that the data operation request is a write-type request, writing second data content to be written from a second cache region into storage through remote direct memory access.
According to an embodiment of the present disclosure, the apparatus further includes: a first sending module configured to send the read first data content to an external device through remote direct memory access after the first data content is stored in the first cache region through remote direct memory access; and a second sending module configured to store a response to the write-type data operation request in the first cache region after the second data content to be written is written from the second cache region into storage through remote direct memory access, and to send the response to the external device through remote direct memory access.
According to an embodiment of the present disclosure, processing the received data operation request through SMB in response to the request being neither a read-type nor a write-type data operation request includes: obtaining, through SMB and according to the data operation request, third data content based on storage; and storing the third data content in the first cache region.
According to an embodiment of the present disclosure, the apparatus further includes: a third sending module configured to send the third data content to an external device through remote direct memory access after the third data content is stored in the first cache region.
According to an embodiment of the present disclosure, processing the received data operation request through remote direct memory access in response to the request being a read-type or write-type data operation request includes: in response to the data operation request being a plurality of read-type or write-type data operation requests, processing the received requests in parallel through remote direct memory access.
An embodiment of the present disclosure provides an apparatus for data processing, including: a processor, and a memory storing computer-executable instructions that, when executed by the processor, cause the processor to perform the method as described above.
The disclosed embodiments provide a computer-readable recording medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, cause the processor to perform the method as described above.
The present disclosure provides a method, apparatus, device, and medium for data processing. The method processes received data operation requests in different ways according to their type: it classifies requests, processes read-type and write-type data operation requests directly through RDMA, and processes other requests through SMB, thereby reducing memory copy operations, lowering latency, and reducing consumption of system resources. In addition, because the method can process multiple received data operation requests concurrently, it further lowers latency, further reduces consumption of system resources, and improves throughput, thereby greatly improving performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description are only exemplary embodiments of the disclosure, and that other drawings may be derived from those drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 shows a flow diagram of a method 100 for data processing according to an embodiment of the present disclosure;
FIG. 2 illustrates a memory diagram based on RDMA services according to an embodiment of the disclosure;
FIG. 3A illustrates a structural schematic diagram of a traditional SAMBA application based on an SMB implementation in accordance with an embodiment of the present disclosure;
FIG. 3B illustrates a request processing diagram for a conventional SAMBA application based on an SMB implementation in accordance with an embodiment of the present disclosure;
FIG. 4A illustrates a simplified schematic diagram of an RDMA service model;
FIG. 4B shows a schematic structural diagram of an improved SAMBA application based on an SMB implementation employing the methods provided by the present disclosure, in accordance with an embodiment of the present disclosure;
FIG. 4C illustrates a request processing diagram for an improved SAMBA application based on an SMB implementation that employs methods provided by the present disclosure, in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a process flow of a data operation request according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an apparatus 600 for data processing according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of an apparatus 700 for data processing according to an embodiment of the present disclosure;
fig. 8 illustrates a schematic diagram of a computer-readable recording medium 800 according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and the like in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
In the prior art, traditional SMB-based applications use TCP/IP to send and receive requests. A request must go through multiple memory copy operations from the network card to the kernel and then to the application layer, which not only increases latency but also consumes system resources.
To solve the above problems, the present disclosure adopts Remote Direct Memory Access (RDMA), a technology not used in conventional SMB-based applications. As network cards supporting RDMA become increasingly common, RDMA-based applications can directly access data on the network card. On this basis, the method provided by the present disclosure classifies received SMB-based data operation requests and, in the case that a request is a read-type or write-type data operation request, processes it through RDMA, thereby reducing memory copy operations, lowering latency, and reducing consumption of system resources.
The above-described method provided by the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flow diagram of a method 100 for data processing according to an embodiment of the present disclosure. The method 100 may be performed by an SMB-based server device (e.g., a server).
Referring to fig. 1, in step S110, an SMB-based data operation request may be received.
As an example, in a SAMBA application implemented on top of SMB, a user may initiate a data operation request at a SAMBA client; the request may be transmitted to the SAMBA server over TCP/IP; and the SAMBA server may receive the SMB-based data operation request.
As an example, the data operation request may be any type of data operation request. For example, it may be a read-type or write-type data operation request. As another example, it may be a metadata operation on a file, such as a CREATE (file creation) request.
In step S120, in response to the data operation request being a read type data operation request or a write type data operation request, the received data operation request is processed through RDMA.
According to an embodiment of the present disclosure, in response to the data operation request being a read-type or write-type data operation request, processing the received request through RDMA may include: in the case that the data operation request is a read-type request, reading the requested first data content from storage; and storing the read first data content into the first cache region through remote direct memory access.
By way of example, the storage may be an actual back-end storage device, such as a storage server, memory, or hard disk. The first cache region may be a portion of memory registered with RDMA in advance, that is, a buffer mapped to the network card for sending data to an external device; in this case, the first cache region may be referred to as the send area.
As an example, when first data content needs to be read from storage — for example, the portion concerning user registration management of a file named "registration management" — the read content may be stored in the above-mentioned send area through RDMA.
According to an embodiment of the present disclosure, in response to the data operation request being a read-type or write-type data operation request, processing the received request through RDMA may include: in the case that the data operation request is a write-type request, writing the second data content to be written from the second cache region into storage through remote direct memory access.
By way of example, the storage may be an actual back-end storage device, such as a storage server, memory, or hard disk. The second cache region may be a portion of memory registered with RDMA in advance, that is, a buffer mapped to the network card for receiving all external requests and their accompanying data; in this case, the second cache region may be referred to as the receive area. Receive areas may be registered with RDMA in pairs with send areas. As shown in fig. 2, which shows a memory diagram of the RDMA service according to an embodiment of the disclosure, multiple pairs of receive and send areas are registered in advance, and each pair may receive or process one data operation request. Each time a request sent, for example, from a SAMBA client is processed, one such pair is occupied: the receive area stores the request sent by the client, and the send area stores the reply (e.g., a response or read data content) to be sent back to the client. When the request has been processed, the pair is released to await the next request. That is, each pair of receive and send areas is recycled.
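The pre-registered receive/send pairs and their recycling can be modeled with a small pool, sketched below under stated assumptions: the class names, the buffer size, and the use of a Python `queue.Queue` are illustrative, and a real implementation would register each region with the RDMA NIC via memory registration rather than allocate `bytearray`s.

```python
import queue
from dataclasses import dataclass, field

BUF_SIZE = 64 * 1024  # illustrative size of each registered region

@dataclass
class BufferPair:
    """One pre-registered receive/send pair, as in FIG. 2."""
    recv: bytearray = field(default_factory=lambda: bytearray(BUF_SIZE))
    send: bytearray = field(default_factory=lambda: bytearray(BUF_SIZE))

class BufferPool:
    """Fixed set of pairs; each request occupies one pair and releases
    it when the reply has been sent, so pairs are recycled."""
    def __init__(self, n_pairs: int):
        self._free: "queue.Queue[BufferPair]" = queue.Queue()
        for _ in range(n_pairs):
            self._free.put(BufferPair())

    def acquire(self) -> BufferPair:
        return self._free.get()      # blocks until a pair is free

    def release(self, pair: BufferPair) -> None:
        self._free.put(pair)

pool = BufferPool(n_pairs=4)
pair = pool.acquire()          # request arrives: occupy one pair
pair.recv[:5] = b"WRITE"       # client request lands in the receive area
pair.send[:2] = b"OK"          # reply staged in the send area
pool.release(pair)             # reply sent: pair recycled for next request
```

Blocking in `acquire` when all pairs are busy corresponds to the bounded number of pre-registered regions: a new request simply waits until a pair is released.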
As an example, upon receiving a write-type data operation request, the second data content to be written (e.g., content for the administrator registration management portion of the file named "registration management") may be written from the second cache region to, for example, a hard disk through RDMA.
According to the embodiment of the disclosure, after the read first data content is stored in the first cache region through remote direct memory access, the first data content is sent to an external device through remote direct memory access.
As an example, after the read first data content concerning user registration management is stored in the send area through RDMA, it is sent through RDMA to an external device, for example a mobile or fixed terminal implementing a SAMBA client.
According to the embodiment of the disclosure, after the second data content required to be written is written into the storage from the second cache region through remote direct memory access, a response to the data operation request of the written type is stored into the first cache region, and the response is sent to the external device through remote direct memory access.
As an example, after the second data content concerning administrator registration management (e.g., for the file named "registration management") is successfully written to, for example, a hard disk, the write-success response is stored in the above-mentioned send area, and the response is then sent through RDMA to an external device, for example a mobile or fixed terminal implementing a SAMBA client.
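The write path just described — the payload arrives in the receive area, is written to back-end storage, and a success response is staged in the send area — can be sketched as follows. Everything here is illustrative: the framing with a NUL separator, the dict standing in for back-end storage, and the function name `handle_write` are assumptions, not the disclosure's implementation.

```python
def handle_write(recv_area: bytes, storage: dict, send_area: bytearray) -> None:
    """Sketch of the write path: the payload already sits in the receive
    area (placed there by the client via RDMA); write it to back-end
    storage, then stage a success response in the send area."""
    filename, payload = recv_area.split(b"\x00", 1)  # assumed framing
    storage[filename.decode()] = payload             # write to back-end storage
    response = b"WRITE_OK"
    send_area[:len(response)] = response             # staged for the RDMA send

storage: dict = {}
send_area = bytearray(16)
handle_write(b"registration management\x00data-bytes", storage, send_area)
assert storage["registration management"] == b"data-bytes"
assert bytes(send_area[:8]) == b"WRITE_OK"
```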
Referring back to fig. 1, in step S130, the received data operation request may be processed through SMB in response to the request being neither a read-type nor a write-type data operation request. It should be noted that although steps S120 and S130 are numbered sequentially, they need not be performed in that order, nor must both be performed: depending on the actual application scenario, either step alone, or both steps in sequence, in reverse order, or simultaneously, may be executed.
According to an embodiment of the present disclosure, processing the received data operation request through SMB in response to the request being neither a read-type nor a write-type data operation request may include: obtaining, through SMB and according to the data operation request, third data content based on storage; and storing the third data content in the first cache region.
As an example, the data operation request may be a data operation request of a creation type, such as a data operation request of a create file type with a file name of "staff management", which is not a data operation request of a read type, and may be transmitted to the SAMBA service process implemented through SMB, and then the SAMBA service process may obtain, for example, a third data content for successful creation after communicating data with the storage, and finally the third data content for successful creation may be stored in the first buffer area (i.e., the sending area).
According to the embodiment of the present disclosure, after the third data content is stored in the first buffer, the third data content may be sent to an external device through remote direct memory access.
After the third data content indicating that the "staff management" file was successfully created is stored in the send area, it is sent through RDMA to an external device, for example a mobile or fixed terminal implementing a SAMBA client.
According to an embodiment of the present disclosure, in the method 100 shown in fig. 1, processing the received data operation request through RDMA in response to the request being a read-type or write-type data operation request may include: in response to the data operation request being a plurality of read-type or write-type data operation requests, processing the received requests in parallel through remote direct memory access.
As an example, the data operation request may be multiple read-type requests, multiple write-type requests, or a mixture of read-type and write-type requests. In these cases, the received requests may be processed concurrently through RDMA; that is, they may be processed concurrently by multiple threads.
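The concurrent handling of multiple read/write requests can be sketched with a thread pool. The worker function and pool size below are illustrative; in the disclosure, the workers would execute the RDMA read/write paths described above rather than this placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_rw(request_id: int) -> str:
    # Placeholder for the RDMA read/write path: read from storage into
    # the send area, or write from the receive area into storage.
    return f"done-{request_id}"

# Several read/write requests processed concurrently by worker threads,
# instead of the single-process model of traditional SAMBA.
with ThreadPoolExecutor(max_workers=4) as tpool:
    results = list(tpool.map(handle_rw, range(8)))

assert results == [f"done-{i}" for i in range(8)]  # map preserves order
```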
As can be seen from the foregoing, the method provided by the present disclosure processes received data operation requests in different manners according to their type: it classifies requests, processes read-type and write-type data operation requests directly through RDMA, and processes other requests through SMB, so the method can reduce memory copy operations, lower latency, and reduce consumption of system resources. In addition, because the method can process multiple received data operation requests concurrently, it further lowers latency, further reduces consumption of system resources, and improves throughput, thereby greatly improving performance.
Further, as can be seen from the above, the method provided by the present disclosure receives all SAMBA requests and sends all SAMBA replies over RDMA. The application can directly access data in the network card's memory, reducing memory copies and the consumption of system resources. Among the received requests, those other than reads and writes are forwarded to the original SAMBA process and handled according to the original SAMBA flow, while read and write requests are processed directly in the RDMA process using multiple threads, which improves concurrency, increases throughput, and accelerates SAMBA reads and writes. The method is therefore well suited to scenarios in which SAMBA is used mainly for reading and writing files, where it can greatly improve read/write performance.
In order to better understand the above method for data processing provided by the present disclosure, the method provided by the present disclosure will be further described in an exemplary manner with reference to fig. 3A to 5.
Fig. 3A illustrates a structural schematic diagram of a conventional SMB-based implementation of a SAMBA application, according to an embodiment of the present disclosure. Fig. 3B illustrates a request processing diagram for a conventional SMB-based implementation of a SAMBA application, according to an embodiment of the present disclosure. FIG. 4A shows a simplified schematic of the RDMA service model. Fig. 4B shows a schematic structural diagram of an improved SMB-based implementation of a SAMBA application employing the methods provided by the present disclosure, in accordance with an embodiment of the present disclosure. Fig. 4C shows a request processing schematic diagram of an improved SAMBA application based on an SMB implementation employing the methods provided by the present disclosure, according to an embodiment of the present disclosure. FIG. 5 is a schematic diagram illustrating a processing flow of a data operation request according to an embodiment of the disclosure.
In a conventional SMB-based scheme, referring to fig. 3A, a SAMBA client exchanges data with a SAMBA service process running on a server over a TCP/IP network, and the SAMBA service process may exchange data with a back-end storage device (e.g., memory, a storage server, a hard drive, etc.) over various wired or wireless connections. For example, various types of data operation requests from the client, including read-type data operation requests, write-type data operation requests (hereinafter, read-type and/or write-type data operation requests are referred to as read-write requests for brevity), and other types of data operation requests, are transmitted to the SAMBA service process through the TCP/IP network, as shown in fig. 3B. The SAMBA service process handles the read-write request or other request, exchanges data with the back-end storage device, and then transmits the response content (such as a write-success response or the content of the read data) to the client through the TCP/IP network.
Before illustrating the improved SMB-based scheme of the present disclosure, the underlying RDMA service model is briefly described. For RDMA, a mature set of development interfaces and development models already exists. An RDMA process may be used to listen on a network card that supports the RDMA function. A block of shared memory may be registered as receive and send memory for RDMA-related operations. This block of memory may be partitioned into multiple small blocks so that multiple requests and responses can be received and sent concurrently. That is, when a write-type request is processed, data received from the network is written directly into the block of memory and then written directly from the block of memory to the back-end storage; data that needs to be sent using RDMA, such as when handling a read-type request, is likewise written directly into the block of memory and sent directly from it to the external device. To use this RDMA model for receiving and/or sending SMB-based requests and/or the corresponding feedback data, an RDMA-specific request header (header) needs to be added to the SMB-based request; a corresponding SAMBA RDMA request header is already available in existing SMB-based SAMBA implementations. FIG. 4A is a simplified diagram of the RDMA service model, in which a client exchanges data with an RDMA service process on a server over an RDMA network. The circular portion in fig. 4A represents a block of shared memory (i.e., a buffer, buf). The block of shared memory may be partitioned into multiple blocks for receiving and sending data. Both data received from the RDMA network and data to be sent out via the RDMA network are stored in this shared memory. A multithreaded processing queue in the RDMA service process reads read and write requests from the shared memory and processes them.
If the request is a write request, the data in the request is written to the back-end storage device; if the request is a read request, data is read from the back-end storage device and written into the shared memory. The multithreaded processing queue enables concurrent processing of multiple requests.
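The RDMA service model described above can be sketched as follows. This is an illustrative stand-in, not the actual implementation: the names (`shared_memory`, `backend`, `BLOCK_SIZE`, the request tuples) and the use of an in-process dictionary for the back-end storage are all assumptions made for the sketch; real RDMA transfers would move data between the registered memory and the network card without these copies.

```python
import queue
import threading

# Sketch of the RDMA service model: a registered shared memory is partitioned
# into fixed-size blocks, and a multithreaded processing queue consumes
# read/write requests concurrently. All names here are illustrative.
BLOCK_SIZE = 4096
shared_memory = [bytearray(BLOCK_SIZE) for _ in range(8)]  # partitioned buffer
backend = {}          # stands in for the back-end storage device
backend_lock = threading.Lock()
requests = queue.Queue()

def worker():
    while True:
        req = requests.get()
        if req is None:          # sentinel: stop the thread
            break
        op, key, block_idx = req
        block = shared_memory[block_idx]
        if op == "write":        # data was already received into the block
            with backend_lock:
                backend[key] = bytes(block).rstrip(b"\x00")
        elif op == "read":       # fill the block; RDMA would send it directly
            with backend_lock:
                data = backend.get(key, b"")
            block[:len(data)] = data
        requests.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# A write request followed by a read request for the same file.
shared_memory[0][:5] = b"hello"
requests.put(("write", "file-a", 0))
requests.join()
requests.put(("read", "file-a", 1))
requests.join()
for _ in threads:
    requests.put(None)
for t in threads:
    t.join()
```

The key property mirrored here is that request data lives in one block of the shared memory for its entire lifetime, so the workers never copy it into intermediate buffers.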
In the improved SMB-based scheme of the present disclosure, the client exchanges data with the RDMA service process on the server over the RDMA network, as shown in fig. 4B. The RDMA service in the RDMA service process may determine the type of the received request. If the request is of the read-write type, the RDMA service may process it through the multithreaded processing queue, exchange data with the back-end storage device, and then transmit the content of the response to the request (e.g., a write-success response or the read data) to the client over the RDMA network. If the request is of a non-read-write type, the RDMA service will transfer the request to the SAMBA service process via a network link with the SAMBA service process (e.g., via port 445 if SAMBA is configured on port 445); the SAMBA service process exchanges data with the back-end device, and the response content to the request is then transferred to the client via the RDMA network.
As can be seen in FIG. 4B, the present disclosure adds an RDMA service process in addition to the original SAMBA service process. The newly added RDMA service process can monitor a network card supporting the RDMA function; receiving and sending all requests based on SAMBA; classifying the requests, forwarding the requests except for reading and writing to the original SAMBA service process for processing, and directly processing the reading and writing requests in the RDMA service process; the multithreaded processing queue is used to process read and write requests, and in the case of multiple read and write requests, the multiple read and write requests may be processed concurrently.
That is, referring to fig. 4C, a request from the outside is transmitted over the network to the RDMA service process on the server for processing. If the RDMA service process determines that the received request is a read-write request, it processes the request directly, exchanging data with the back-end storage, and then sends the feedback data to the external device over the network. If the RDMA service process determines that the received request is another type of request, it transfers the request to the SAMBA service process for processing through the network link with the SAMBA service process; the SAMBA service process processes the request, exchanging data with the back-end storage, and then transmits the feedback data to the RDMA service process over the network link and, via the RDMA service process, to the external device.
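The classification step above can be sketched as a simple dispatch function. The type names and handler functions are hypothetical placeholders; in the real scheme, forwarding would go over a socket to the SAMBA service process rather than a direct call.

```python
# Illustrative sketch of the classification mechanism: read/write requests are
# handled directly in the RDMA service process; all other requests are
# forwarded over the internal network link to the SAMBA service process.
READ, WRITE = "read", "write"

def handle_in_rdma(req):
    # stands in for the multithreaded read/write processing path
    return f"rdma:{req['type']}"

def forward_to_samba(req):
    # stands in for forwarding over the link to the SAMBA process
    return f"samba:{req['type']}"

def dispatch(req):
    if req["type"] in (READ, WRITE):
        return handle_in_rdma(req)
    return forward_to_samba(req)
```

For example, `dispatch({"type": "read"})` takes the direct RDMA path, while `dispatch({"type": "create"})` is forwarded to SAMBA.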
Referring to fig. 5, a processing flow of a data operation request according to an embodiment of the present disclosure, which may be performed by a server, is explained below, taking an SMB-based SAMBA application as an example.
SAMBA applications are used primarily with Windows operating systems. Windows, as a SAMBA client, can already support RDMA network communication. SAMBA currently supports two types of network communication, TCP/IP and RDMA, and RDMA network devices are also compatible with TCP/IP communication. On this premise, when the SAMBA client establishes a link with the SAMBA server, a TCP/IP link is established first, and a request is then sent to the SAMBA server over this link to query the information of the network card. When the SAMBA client finds that the network card of the SAMBA server supports the RDMA function, the SAMBA client disconnects the established TCP/IP link and reinitiates a request to establish an RDMA link with the SAMBA server.
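The negotiation sequence above amounts to a transport fallback. The sketch below captures only that decision; the `server` dictionary and its `nic_supports_rdma` field are hypothetical stand-ins for the network-card information queried over the initial TCP/IP link.

```python
# Minimal sketch of the link-negotiation sequence: the client first
# establishes a TCP/IP link, queries the server's network card over it,
# and re-connects over RDMA only if the card supports the RDMA function.
def negotiate_transport(server) -> str:
    link = "tcp/ip"                      # initial TCP/IP link is always made
    if server.get("nic_supports_rdma"):  # NIC-info query over that link
        link = "rdma"                    # drop TCP/IP, re-establish over RDMA
    return link
```

If the network card does not support RDMA, the client simply keeps the TCP/IP link and falls back to the conventional SMB path.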
When establishing an RDMA link with the SAMBA client, the SAMBA server needs to register a block of memory with RDMA; this block of memory serves as the buffer through which RDMA receives and transmits network requests, such as the pairs of receive and transmit zones shown in fig. 2 above.
With continued reference to FIG. 5, at step S510, the data operation request from the client is buffered in the RDMA memory (e.g., the buffer/receive area described above), and the RDMA service process may obtain the request from the RDMA memory.
In step S520, the RDMA service process may parse the header information of the request data, and then, in step S530, obtain the type of the request and the data information included in the request (e.g., the file to be read or written, the file location, contents, etc.). For example, a command field exists in the request header and generally occupies 16 bits. The value of this field may indicate the type of the request: for example, 16 bits of all 0s may indicate the read type (i.e., a read request), 16 bits of all 1s may indicate the write type (i.e., a write request), the 16 bits 0000 0100 1000 0010 may indicate the above-mentioned CREATE type, and so on.
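Parsing the 16-bit command field described above can be sketched as follows. The example bit patterns (all zeros for read, all ones for write, 0000 0100 1000 0010 for CREATE) are taken from the text; the header layout assumed here (command field in the first two bytes, big-endian) is an illustration, not the actual SMB wire format.

```python
import struct

# Illustrative parsing of the 16-bit command field in the request header.
READ_CMD  = 0x0000            # 16 bits all 0 -> read type
WRITE_CMD = 0xFFFF            # 16 bits all 1 -> write type

def request_type(header: bytes) -> str:
    # assumption: the command field occupies the first two bytes, big-endian
    (command,) = struct.unpack_from(">H", header, 0)
    if command == READ_CMD:
        return "read"
    if command == WRITE_CMD:
        return "write"
    return "other"            # e.g. 0x0482 = 0000 0100 1000 0010 (CREATE)
```

The returned type is what step S540 uses to decide between the read queue, the write queue, and forwarding to SAMBA.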
In step S540, the RDMA service process may determine the request type.
When the request is a read-type request, the process proceeds to step S550, and the RDMA service process puts the request into a read request queue, which may be a first-in-first-out queue, a queue ordered according to request priority, or the like. The flow then proceeds to step S551, where the RDMA service process wakes up (which may also be referred to as activating) a read request processing thread, because the thread is idle when no requests are being processed. After the read request processing thread is awakened, in step S552 the single processing thread starts to obtain the read request from the read request queue; then, in step S553, it sends the read request to the back-end storage device for processing, and in step S554 the back-end storage device stores the data content to be read (i.e., the first data content) directly into the RDMA send area (i.e., the first buffer area). After the data content is stored in the RDMA send area, the single processing thread triggers an RDMA send in step S555, and the RDMA service process may then send the contents of the RDMA send area to the client in step S556. After the current request is processed, the next request may be processed in step S580 by continuing to fetch requests from the RDMA memory in step S510. Furthermore, when a plurality of read requests need to be processed simultaneously, there may be a plurality of read request processing threads that read requests from the read request queue and process them simultaneously, that is, the read request processing threads concurrently perform the operations of steps S552 to S555. After the plurality of read requests are processed concurrently, in step S556 the RDMA service process may send the content of the RDMA send area to the client.
When the request is a write type request, the process proceeds to step S560, and the RDMA service process puts the request into a write request queue, which may be a first-in-first-out queue, a queue that may be arranged according to the priority of the request, or the like. The flow then proceeds to step S561, where the RDMA service process wakes up (which may also be referred to as activating) the write request processing thread because the thread is in an idle state when no requests are being processed. After the write request processing thread is woken up, in step S562, a single processing thread starts to acquire a write request from the write request queue; then, in step S563, the single processing thread sends the write request to the back-end storage device for processing, and then, in step S564, the single processing thread writes the data content to be written (i.e., the second data content) directly from the RDMA receiving area (i.e., the second buffer area) into the back-end storage device; after the data content is written to the back-end storage device, the corresponding response, e.g., write success, is stored in the RDMA send area, and then in step S565 the single processing thread triggers RDMA send so that in step S566 the RDMA service process can send the content of the RDMA send area to the client. After the current request is processed, the next request may be processed in step S580 to continue fetching requests from RDMA memory in step S510. Further, when a plurality of write requests need to be processed simultaneously, there may be a plurality of write request processing threads that read and process the write requests from the write request queue simultaneously, i.e., that concurrently perform the operations of steps S562 to S565. After the plurality of write requests have been processed concurrently, in step S566, the RDMA service process may send the content of the RDMA send area to the client.
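The "wake up an idle processing thread" step in the read and write flows above (steps S551/S561) can be sketched with a condition variable: a worker sleeps while its queue is empty and is activated when a request is enqueued. All names are assumptions for this sketch, and `processed` merely stands in for steps S552 to S555 / S562 to S565.

```python
import collections
import threading

# Illustrative sketch of waking an idle request-processing thread when a
# request is put into its queue (the same pattern applies to the read
# request queue and the write request queue).
class RequestQueue:
    def __init__(self):
        self.items = collections.deque()
        self.cond = threading.Condition()
        self.processed = []

    def put(self, req):
        with self.cond:
            self.items.append(req)
            self.cond.notify()          # wake up an idle processing thread

    def worker(self):
        while True:
            with self.cond:
                while not self.items:   # idle: sleep until woken
                    self.cond.wait()
                req = self.items.popleft()
            if req is None:             # sentinel: stop the thread
                break
            self.processed.append(req)  # stands in for the actual processing
```

Several worker threads may run `worker()` against the same queue, which is what allows multiple read (or write) requests to be processed concurrently.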
When the request is another, non-read-write type of request, the flow proceeds to step S570, and the RDMA service process may forward the request over the network link with the SAMBA service process to the SAMBA service process for processing. The SAMBA service process may receive the request at step S571 and process it at step S572. While processing the request, the SAMBA service process may exchange data with the back-end storage device, and after processing the request the SAMBA service process may, in step S573, send the feedback content (i.e., the third data content described above) to the RDMA service process over the network link, as previously described. Next, at step S574, the RDMA service process may store the received feedback content in the RDMA send area. After the RDMA send area stores the feedback content, the RDMA service process may trigger an RDMA send in step S575, so that in step S576 the RDMA service process sends the content of the RDMA send area to the client. After the current request is processed, the next request may be processed in step S580 by continuing to fetch requests from the RDMA memory in step S510.
It should be noted that, in the case where there are both read requests and write requests, the processing operation for the read request and the processing operation for the write request can be executed concurrently. The processing procedure is consistent with the above, and is not described again here.
The present disclosure provides, in addition to the above-described method for data processing, a corresponding apparatus, device and medium, which will be described next with reference to fig. 6 to 8.
Fig. 6 shows a block diagram of an apparatus 600 for data processing according to an embodiment of the present disclosure. The above description for the method 100 for data processing applies equally to the apparatus 600 unless explicitly stated otherwise. As an example, the apparatus 600 may be a server side.
Referring to fig. 6, an apparatus 600 for data processing may include a receiving module 610 and a processing module 620.
According to an embodiment of the present disclosure, the receiving module 610 may be configured to receive a data operation request based on a server information block.
As an example, in a SAMBA application implemented based on SMB, a user may initiate a data operation request for a data operation at a SAMBA client; the data operation request can be transmitted to the SAMBA server through TCP/IP; the SAMBA server side can receive the SMB-based data operation request.
As an example, the data operation request may be any type of data operation request. For example, the data operation request may be a read-type data operation request or a write-type data operation request. As another example, the data operation request may be a request for another operation on a file's metadata, such as a CREATE file request.
According to an embodiment of the present disclosure, the processing module 620 may be configured to process the received data operation request through remote direct memory access in response to the data operation request being a read type data operation request or a write type data operation request; in response to the data operation request being a non-read type data operation request and a non-write type data operation request, processing the received data operation request by the server information block.
According to an embodiment of the present disclosure, the processing, in response to the data operation request being a read-type data operation request or a write-type data operation request, the received data operation request through remote direct memory access may include: reading the requested read first data content from a memory in the case that the data operation request is a read-type data operation request; and storing the read first data content into the first cache region through remote direct memory access.
By way of example, the storage may be an actual back-end storage device, such as a storage server, a memory, a hard disk, and so forth. The first buffer may be a buffer that registers a part of the memory to RDMA in advance, that is, the buffer is mapped to the network card as a buffer for transmitting data to the external device, and in this case, the first buffer may be referred to as a transmission area.
As an example, when the first data content of the file named "registration management", concerning the user registration management part, needs to be read from the memory, the read first data content may be stored in the above-mentioned send area by RDMA.
According to an embodiment of the present disclosure, the processing, in response to the data operation request being a read-type data operation request or a write-type data operation request, the received data operation request through remote direct memory access may include: and under the condition that the data operation request is a write-in type data operation request, writing second data content needing to be written in the memory from the second cache region through remote direct memory access.
By way of example, the storage may be an actual back-end storage device, such as a storage server, a memory, a hard disk, and so forth. The second buffer may be a buffer that registers a part of the memory to the RDMA in advance, that is, the buffer is mapped to the network card as a buffer for receiving all external requests and corresponding data, and at this time, the second buffer may be referred to as a receiving area. The receive zone may be registered for RDMA in pairs with the send zone. As shown in fig. 2, a plurality of pairs of receiving areas and transmitting areas are registered in advance. Each pair of receiving and transmitting zones may receive or process one of the data operation requests. Each time a request sent, for example, from a SAMBA client is processed, the above-mentioned pair of receiving area, which may be used to store the request sent by the client, and sending area, which may be used to store a reply (e.g., a response or read data content, etc.) to be sent to the client, is occupied. When a request is processed, the pair of receiving and transmitting areas are released to wait for the processing of the next request. That is, each pair of the receiving area and the transmitting area can be recycled.
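The paired, recycled receive/send zones described above can be sketched as a small pool. The class name, pair count, and buffer size are illustrative assumptions; the point is only that each pair serves one in-flight request and is returned to a free list afterwards for reuse.

```python
import collections

# Sketch of the paired receive/send buffer zones: each pair serves one
# in-flight request (receive zone holds the request, send zone holds the
# reply) and is recycled once the request has been processed.
class BufferPairPool:
    def __init__(self, pairs: int, size: int = 1024):
        self.free = collections.deque(
            (bytearray(size), bytearray(size)) for _ in range(pairs)
        )

    def acquire(self):
        # occupy one receive/send pair for an incoming request
        return self.free.popleft()

    def release(self, pair):
        # request finished: recycle the pair for the next request
        self.free.append(pair)
```

A pool sized to the number of pre-registered pairs bounds the number of requests that can be in flight at once.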
As an example, upon receiving a write-type data operation request, the second data content that needs to be written (e.g., the data content written to the administrator registration management part with a file name "registration management") may be written from the second cache area via RDMA, for example, to a hard disk.
According to the embodiment of the present disclosure, the apparatus 600 further includes a first sending module and a second sending module.
According to an embodiment of the present disclosure, the first sending module may be configured to send the read first data content to an external device through remote direct memory access after the first data content is stored in the first buffer through remote direct memory access.
As an example, after storing the read first data content with respect to the user registration management part in the above-mentioned sending area by RDMA, the read first data content with respect to the user registration management part stored in the above-mentioned sending area is sent to an external device, for example, an external device such as a mobile terminal, a fixed terminal, etc. implementing a SAMBA client, by RDMA.
According to the embodiment of the disclosure, the second sending module may be configured to store a response to the data operation request of the written type into the first cache region after writing the second data content to be written into the storage from the second cache region through remote direct memory access, and send the response to the external device through remote direct memory access.
As an example, after the second data content that needs to be written, e.g., the content of the above file named "registration management" regarding the administrator registration management part, is successfully written into, for example, a hard disk, the fed-back write-success response is stored in the above-mentioned sending area, and the write-success response is then transmitted by RDMA to an external device, for example, a mobile terminal, a fixed terminal, or other device implementing a SAMBA client.
Note that the first transmitting module and the second transmitting module may be the same module configured to implement the functions of the first transmitting module and the second transmitting module.
According to an embodiment of the present disclosure, the processing, by the server information block, the received data operation request in response to the data operation request being a data operation request of a non-read type and a data operation request of a non-write type may include: obtaining, by the server information block, third data content based on the memory according to the data operation request; and storing the third data content into the first buffer area.
As an example, the data operation request may be a creation-type data operation request, such as a request to create a file with the file name "staff management", which is neither a read-type nor a write-type data operation request. Such a request may be transmitted to the SAMBA service process implemented through SMB; after exchanging data with the storage, the SAMBA service process may obtain, for example, third data content indicating successful creation, and finally this third data content may be stored in the first buffer area (i.e., the sending area).
According to an embodiment of the present disclosure, the apparatus 600 may further include a third sending module configured to send the third data content to an external device through remote direct memory access after storing the third data content in the first buffer. It should be noted that the first sending module, the second sending module, and the third sending module may be the same module, and the same module is configured to implement the functions of the first sending module, the second sending module, and the third sending module.
As an example, after the third data content indicating that the above-mentioned "staff management" file was successfully created is stored in the above-mentioned sending area, the third data content is sent by RDMA to an external device, for example, a mobile terminal, a fixed terminal, or other device implementing a SAMBA client.
According to an embodiment of the present disclosure, the processing, by RDMA, the received data operation request in response to the data operation request being a read-type data operation request or a write-type data operation request may include: in response to the data operation request being a plurality of read-type data operation requests or a plurality of write-type data operation requests, processing the received plurality of read-type data operation requests or plurality of write-type data operation requests in parallel through remote direct memory access.
As an example, the data operation request may be a plurality of read type data operation requests, a plurality of write type data operation requests, or a plurality of data operation requests including a read type data operation request and a write type data operation request. In the case of the multiple data operation requests described above, the received multiple data operation requests may be processed concurrently via RDMA. That is, the multiple received data operation requests may be concurrently processed through multiple threads.
As can be seen from the foregoing apparatus for data processing provided by the present disclosure, the apparatus processes received data operation requests in different manners depending on their type: it provides a mechanism for classifying requests, processes read-type and write-type data operation requests directly through RDMA, and processes other requests through SMB. The apparatus can therefore reduce memory copy operations, latency, and consumption of system resources. In addition, the apparatus can process a plurality of received data operation requests concurrently, which further reduces latency and consumption of system resources, improves throughput, and thus greatly improves performance.
Fig. 7 shows a block diagram of an apparatus 700 for data processing according to an embodiment of the disclosure.
Referring to fig. 7, an apparatus 700 for data processing may include a processor 701 and a memory 702. The processor 701 and the memory 702 may both be connected by a bus 703.
The processor 701 may perform various actions and processes according to programs stored in the memory 702. In particular, the processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logical blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like, of either the X86 architecture or the ARM architecture.
The memory 702 stores computer instructions that, when executed by the processor 701, implement the method for data processing described above. The memory 702 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Fig. 8 illustrates a schematic diagram of a computer-readable recording medium 800 according to an embodiment of the present disclosure.
As shown in fig. 8, the computer-readable recording medium 800 has stored thereon computer-executable instructions 810. The computer-executable instructions 810, when executed by a processor, may perform the methods according to the embodiments of the present disclosure described with reference to the above figures. The computer-readable recording medium in the embodiments of the present disclosure may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It is to be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises at least one executable instruction for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various example embodiments of this disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of embodiments of the disclosure have been illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The exemplary embodiments of the present disclosure described in detail above are merely illustrative, and not restrictive. It will be appreciated by those skilled in the art that various modifications and combinations of these embodiments or the features thereof are possible without departing from the spirit and scope of the disclosure, and that such modifications are intended to be within the scope of the disclosure.

Claims (14)

1. A method performed by a server for data processing, the method comprising:
receiving a Server Message Block (SMB)-based data operation request from a client over a network based on remote direct memory access;
in response to the data operation request being a read-type data operation request or a write-type data operation request, processing the received data operation request through remote direct memory access; and
in response to the data operation request being neither a read-type data operation request nor a write-type data operation request, processing the received data operation request through the Server Message Block protocol.
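The dispatch recited in claim 1 can be sketched in a few lines. This is a minimal illustration only, not the patented implementation; all names here (`DataOperationRequest`, `handle_via_rdma`, `handle_via_smb`) are hypothetical stand-ins for whatever the server actually uses:

```python
# Hypothetical sketch of the claim-1 dispatch: read/write requests take the
# RDMA fast path, everything else stays on the ordinary SMB path.
from dataclasses import dataclass

READ, WRITE = "read", "write"

@dataclass
class DataOperationRequest:
    op_type: str        # e.g. "read", "write", or another SMB operation type
    payload: bytes = b""

def handle_via_rdma(request: DataOperationRequest) -> str:
    # Read/write payloads move through RDMA-registered buffers (claims 2-3).
    return f"rdma:{request.op_type}"

def handle_via_smb(request: DataOperationRequest) -> str:
    # Non-read/non-write operations are served by the SMB layer (claim 4).
    return f"smb:{request.op_type}"

def dispatch(request: DataOperationRequest) -> str:
    if request.op_type in (READ, WRITE):
        return handle_via_rdma(request)
    return handle_via_smb(request)
```

The point of the split is that bulk data transfers bypass the protocol stack via RDMA, while metadata-style operations keep the full SMB semantics.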
2. The method of claim 1, wherein processing the received data operation request through remote direct memory access in response to the data operation request being a read-type data operation request or a write-type data operation request comprises:
in the case where the data operation request is a read-type data operation request,
reading the requested first data content from storage, and
storing the read first data content into a first cache region through remote direct memory access; and
in the case where the data operation request is a write-type data operation request,
writing second data content to be written from a second cache region into storage through remote direct memory access.
3. The method of claim 2, wherein the method further comprises:
after the read first data content is stored into the first cache region through remote direct memory access, sending the first data content to an external device through remote direct memory access; and
after the second data content to be written is written from the second cache region into storage through remote direct memory access, storing a response to the write-type data operation request into the first cache region and sending the response to the external device through remote direct memory access.
4. The method of claim 1, wherein processing the received data operation request through the Server Message Block protocol in response to the data operation request being neither a read-type data operation request nor a write-type data operation request comprises:
obtaining third data content from storage through the Server Message Block protocol according to the data operation request; and
storing the third data content into the first cache region.
5. The method of claim 4, wherein the method further comprises:
after the third data content is stored into the first cache region, sending the third data content to an external device through remote direct memory access.
6. The method of claim 1, wherein processing the received data operation request through remote direct memory access in response to the data operation request being a read-type data operation request or a write-type data operation request comprises:
in response to the data operation request comprising a plurality of read-type data operation requests or a plurality of write-type data operation requests, processing the received plurality of read-type data operation requests or plurality of write-type data operation requests in parallel through remote direct memory access.
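The parallel handling in claim 6 can be illustrated under the assumption that each RDMA transfer is independent. This is a hedged sketch: `ThreadPoolExecutor` merely stands in for whatever worker or queue-pair mechanism a real RDMA stack would use, and `process_via_rdma` is a hypothetical placeholder:

```python
# Hypothetical sketch of claim 6: a batch of independent read/write requests
# is processed concurrently rather than one after another.
from concurrent.futures import ThreadPoolExecutor

def process_via_rdma(request_id: int) -> str:
    # Placeholder for a single RDMA read or write transfer.
    return f"done:{request_id}"

def process_in_parallel(request_ids: list[int]) -> list[str]:
    with ThreadPoolExecutor(max_workers=4) as pool:
        # map() overlaps execution but still returns results in input order.
        return list(pool.map(process_via_rdma, request_ids))
```

Because RDMA transfers complete without per-byte CPU involvement, overlapping many of them is what lets the server keep the network pipe full.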
7. A server for data processing, the server comprising:
a receiving module configured to receive a Server Message Block (SMB)-based data operation request from a client over a network based on remote direct memory access; and
a processing module configured to process the received data operation request through remote direct memory access in response to the data operation request being a read-type data operation request or a write-type data operation request, and to process the received data operation request through the Server Message Block protocol in response to the data operation request being neither a read-type data operation request nor a write-type data operation request.
8. The server of claim 7, wherein processing the received data operation request through remote direct memory access in response to the data operation request being a read-type data operation request or a write-type data operation request comprises:
in the case where the data operation request is a read-type data operation request,
reading the requested first data content from storage, and
storing the read first data content into a first cache region through remote direct memory access; and
in the case where the data operation request is a write-type data operation request,
writing second data content to be written from a second cache region into storage through remote direct memory access.
9. The server of claim 8, wherein the server further comprises:
a first sending module configured to send the read first data content to an external device through remote direct memory access after the first data content is stored into the first cache region through remote direct memory access; and
a second sending module configured to, after the second data content to be written is written from the second cache region into storage through remote direct memory access, store a response to the write-type data operation request into the first cache region and send the response to the external device through remote direct memory access.
10. The server of claim 7, wherein processing the received data operation request through the Server Message Block protocol in response to the data operation request being neither a read-type data operation request nor a write-type data operation request comprises:
obtaining third data content from storage through the Server Message Block protocol according to the data operation request; and
storing the third data content into the first cache region.
11. The server of claim 10, wherein the server further comprises:
a third sending module configured to send the third data content to an external device through remote direct memory access after the third data content is stored into the first cache region.
12. The server of claim 7, wherein processing the received data operation request through remote direct memory access in response to the data operation request being a read-type data operation request or a write-type data operation request comprises:
in response to the data operation request comprising a plurality of read-type data operation requests or a plurality of write-type data operation requests, processing the received plurality of read-type data operation requests or plurality of write-type data operation requests in parallel through remote direct memory access.
13. An apparatus for data processing, comprising:
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1-6.
14. A computer-readable recording medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, cause the processor to perform the method of any one of claims 1-6.
CN202211256303.0A 2022-10-14 2022-10-14 Method, apparatus, device and medium for data processing Active CN115328408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256303.0A CN115328408B (en) 2022-10-14 2022-10-14 Method, apparatus, device and medium for data processing

Publications (2)

Publication Number Publication Date
CN115328408A CN115328408A (en) 2022-11-11
CN115328408B true CN115328408B (en) 2023-01-03

Family

ID=83914177

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459676B (en) * 2008-12-31 2012-01-04 中国科学院计算技术研究所 Message transmission frame and method based on high-speed network oriented to file system
US8806030B2 (en) * 2010-12-06 2014-08-12 Microsoft Corporation Multichannel connections in file system sessions
US9331955B2 (en) * 2011-06-29 2016-05-03 Microsoft Technology Licensing, Llc Transporting operations of arbitrary size over remote direct memory access
CN103226598B (en) * 2013-04-22 2016-06-22 华为技术有限公司 Access method and apparatus and the data base management system of data base
US10404520B2 (en) * 2013-05-29 2019-09-03 Microsoft Technology Licensing, Llc Efficient programmatic memory access over network file access protocols
US10257273B2 (en) * 2015-07-31 2019-04-09 Netapp, Inc. Systems, methods and devices for RDMA read/write operations
US10732893B2 (en) * 2017-05-25 2020-08-04 Western Digital Technologies, Inc. Non-volatile memory over fabric controller with memory bypass
US10911547B2 (en) * 2017-12-28 2021-02-02 Dell Products L.P. Systems and methods for SMB monitor dialect
CN111404931B (en) * 2020-03-13 2021-03-30 清华大学 Remote data transmission method based on persistent memory
US11573736B2 (en) * 2020-11-30 2023-02-07 EMC IP Holding Company LLC Managing host connectivity to a data storage system
CN114721995A (en) * 2022-04-01 2022-07-08 上海上讯信息技术股份有限公司 Data transmission method applied to virtual database and RDMA-based database virtualization method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40078331
Country of ref document: HK