CN116069262A - Distributed storage offloading method and device, electronic equipment and storage medium - Google Patents

Distributed storage offloading method and device, electronic equipment and storage medium

Info

Publication number: CN116069262A (application publication); CN116069262B (granted publication)
Application number: CN202310202864.0A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: opposite, memory, index, local, target
Inventor: 马怀旭 (Ma Huaixu)
Applicant / assignee: Suzhou Inspur Intelligent Technology Co., Ltd.
Priority: CN202310202864.0A
Legal status: Active (application granted)

Classifications

    • G06F 3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F 3/064 - Management of blocks
    • G06F 3/0644 - Management of space entities, e.g. partitions, extents, pools
    • G06F 3/0656 - Data buffering arrangements
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

The invention discloses a distributed storage offloading method and device, electronic equipment and a storage medium, applied to the technical field of distributed storage. The method comprises: establishing a connection with a peer node; exchanging memory index information with the peer node to obtain the opposite-end memory indexes; determining a target opposite-end memory index from the opposite-end memory indexes; and writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index through the local intelligent network card (DPU) in a remote direct memory access (RDMA) unilateral operation mode. With the invention, data writing is achieved with a single transmission in the unilateral operation, the unilateral-operation performance of RDMA is better exploited, and the data processing efficiency and the overall performance of distributed storage are improved.

Description

Distributed storage offloading method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of distributed storage, and in particular to a distributed storage offloading method and device, electronic equipment and a computer-readable storage medium.
Background
With the rapid development of computer technology, storage hardware has evolved from traditional tape to physical hard disks and, at the present stage, to SSDs (Solid State Drives), NVMe (Non-Volatile Memory Express) disks, PMem (Persistent Memory) and DRAM, and hardware performance keeps improving. As a result, in an HCI (Hyper-Converged Infrastructure) scenario the network has become the performance bottleneck of distributed storage, and how to fully exploit this hardware has become a key problem in the distributed storage field. Network transmission has likewise moved from the traditional TCP (Transmission Control Protocol) network to today's RDMA (Remote Direct Memory Access) network, and the network can be completely offloaded to a DPU (intelligent network card) to exploit the performance of the whole network. RDMA operations are divided into unilateral (one-sided) operations and bilateral (two-sided) operations: the RDMA multi-channel send/recv (i.e. send/receive) interaction requires CPU participation on both sides (specifically as shown in fig. 1), while the traditional RDMA unilateral operation is shown in fig. 2, where the bold solid lines mark the processing involved and two transmissions are required for one message. Therefore, the distributed storage field needs a better offloading scheme to exploit the performance of the network card.
Disclosure of Invention
The embodiment of the invention aims to provide a distributed storage offloading method and device, electronic equipment and a computer-readable storage medium, which can better exploit the performance of RDMA unilateral operation in use and improve the data processing efficiency and overall performance of distributed storage.
In order to solve the above technical problems, an embodiment of the present invention provides a distributed storage offloading method, including:
establishing connection with a peer node;
exchanging memory index information with the opposite terminal nodes to obtain memory indexes of the opposite terminals;
determining a target opposite-end memory index from the opposite-end memory indexes;
and writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index by adopting a remote direct memory access (RDMA) unilateral operation mode through the local intelligent network card (DPU).
Optionally, before the connection is established with the opposite end node, the method further includes:
establishing a local memory pool for each node;
and establishing local memory indexes respectively corresponding to the local memory addresses of the local memory pool.
Optionally, the exchanging memory index information with the peer node to obtain each peer memory index includes:
Transmitting a plurality of the local memory indexes to the opposite node;
and receiving a plurality of opposite-end memory indexes sent by the opposite-end node, wherein the opposite-end memory indexes correspond to corresponding opposite-end memory addresses of an opposite-end memory pool.
Optionally, after the establishing the local memory indexes corresponding to the local memory addresses of the local memory pool, the method further includes:
respectively distributing a plurality of local memory indexes for other nodes;
the sending the plurality of local memory indexes to the peer node includes:
and sending a plurality of local memory indexes corresponding to the opposite end node.
Optionally, after the local memory pool is built, the method further includes:
registering each local memory pool into the local DPU.
Optionally, the determining the target peer memory index from the peer memory indexes includes:
and carrying out polling on each opposite-end memory index in the index information to determine the target opposite-end memory index.
Optionally, the determining the target opposite-end memory index includes:
polling each opposite-end memory index in the index information, and finding an opposite-end memory index whose state identifier is the idle state;
And determining the opposite-end memory index with the state marked as the idle state as a target opposite-end memory index.
Optionally, after writing the information to be sent to the opposite-end memory address corresponding to the target opposite-end memory index, the method further includes:
and changing the state identifier of the target opposite-end memory index stored locally and the original opposite-end memory index corresponding to the opposite-end node into a written state.
Optionally, the method further comprises:
traversing each local memory index to determine a target local memory index with a written state identification;
and carrying out IO processing on the information in the local memory address corresponding to the target local memory index.
Optionally, the performing IO processing on information in the local memory address corresponding to the target local memory index includes:
and directly transmitting the data in the local memory address corresponding to the target local memory index to a physical disk.
Optionally, after performing IO processing on the information in the local memory address corresponding to the target local memory index, the method further includes:
and changing the state identification of the target local memory index into an idle state.
Optionally, after performing IO processing on the information in the local memory address corresponding to the target local memory index, the method further includes:
Generating local return information, wherein the local return information comprises an index head of the target local memory index;
determining a current target opposite-end memory index from the opposite-end memory indexes;
and writing the local return information into the opposite-end memory address corresponding to the current target opposite-end memory index by adopting an RDMA unilateral operation mode through the local DPU.
Optionally, the method further comprises:
when the information in the local memory address corresponding to the target local memory index is opposite-end return information, changing the state identification of the corresponding opposite-end memory index into an idle state based on the index head in the opposite-end return information.
Optionally, the local memory index includes a local memory address, a key and data header information corresponding to the local memory block;
the opposite terminal memory index comprises an opposite terminal memory address, a secret key and data head information corresponding to the opposite terminal memory block.
Optionally, the method further comprises:
when the target opposite-end memory index is not determined from the opposite-end memory indexes, sending a re-reading index information request to the opposite-end node;
and receiving the latest memory indexes of each opposite terminal sent by the opposite terminal node.
Optionally, the method further comprises:
and sending an index expansion request to the opposite terminal node under the condition that the target opposite terminal memory index is not determined based on the latest opposite terminal memory indexes.
Optionally, the determining the target peer memory index from the peer memory indexes includes:
determining a plurality of target opposite-end memory indexes from the opposite-end memory indexes based on the information quantity to be transmitted;
writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index, including:
writing the information to be sent into opposite terminal memory addresses respectively corresponding to the target opposite terminal indexes.
The embodiment of the invention also provides a distributed storage offloading device, which comprises:
the connection module is used for establishing connection with the opposite end node;
the interaction module is used for exchanging memory index information with the opposite terminal nodes and obtaining memory indexes of the opposite terminals;
the determining module is used for determining a target opposite-end memory index from the opposite-end memory indexes;
and the operation module is used for writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index by adopting a remote direct memory access (RDMA) unilateral operation mode through the local intelligent network card (DPU).
Optionally, the method further comprises:
the first building module is used for building a local memory pool for each node;
and the second building module is used for building local memory indexes respectively corresponding to the local memory addresses of the local memory pool.
Optionally, the interaction module includes:
the first sending module is used for sending the local memory indexes to the opposite end node;
and the first receiving module is used for receiving a plurality of opposite-end memory indexes sent by the opposite-end node, and the opposite-end memory indexes correspond to corresponding opposite-end memory addresses of the opposite-end memory pool.
Optionally, the method further comprises:
the distribution module is used for respectively distributing a plurality of local memory indexes to other nodes;
and the first sending module is configured to send a plurality of local memory indexes corresponding to the peer node.
Optionally, the method further comprises:
and the registration module is used for registering each local memory pool into the local DPU.
Optionally, the determining module is configured to determine the target opposite-end memory index by polling each opposite-end memory index in the index information.
Optionally, the determining module includes:
the polling unit is used for polling each opposite-end memory index in the index information and finding an opposite-end memory index whose state identifier is the idle state;
And the determining unit is used for determining the opposite-end memory index with the state marked as the idle state as a target opposite-end memory index.
Optionally, the method further comprises:
the first changing module is used for changing the locally stored state identification of the target opposite-end memory index and the original opposite-end memory index corresponding to the opposite-end node into a written state.
Optionally, the method further comprises:
the traversing module is used for traversing each local memory index and determining a target local memory index with a written state as a state identifier;
and the processing module is used for carrying out IO processing on the information in the local memory address corresponding to the target local memory index.
Optionally, the processing module is configured to directly issue data in a local memory address corresponding to the target local memory index to a physical disk.
Optionally, the method further comprises:
and the second changing module is used for changing the state identification of the target local memory index into an idle state.
Optionally, the method further comprises:
the generation module is used for generating local return information, wherein the local return information comprises an index head of the target local memory index;
the determining module is further configured to determine a current target opposite-end memory index from the opposite-end memory indexes;
The operation module is further configured to write, by using an RDMA single-side operation manner by using the local DPU, the local return information into an opposite-end memory address corresponding to the current target opposite-end memory index.
Optionally, the method further comprises:
and the third changing module is used for changing the state identification of the corresponding opposite-end memory index based on the index head in the opposite-end return information into an idle state when the information in the local memory address corresponding to the target local memory index is the opposite-end return information.
Optionally, the method further comprises:
the second sending module is used for sending a re-reading index information request to the opposite terminal node when the target opposite terminal memory index is not determined from the opposite terminal memory indexes;
and the second receiving module is used for receiving the latest memory indexes of each opposite end sent by the opposite end node.
Optionally, the method further comprises:
and the third sending module is used for sending an index expansion request to the opposite terminal node under the condition that the target opposite terminal memory index is not determined based on the latest opposite terminal memory indexes.
Optionally, the determining module is configured to determine, based on the amount of information to be sent, a plurality of target peer memory indexes from the peer memory indexes;
And the operation module is used for writing the information to be sent into the opposite-end memory addresses respectively corresponding to the target opposite-end indexes.
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing a computer program;
a processor for implementing the steps of the distributed storage offloading method as described above when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the distributed storage offloading method described above.
The embodiment of the invention provides a distributed storage offloading method and device, electronic equipment and a computer-readable storage medium, wherein the method comprises the following steps: establishing connection with a peer node; exchanging memory index information with the peer node to obtain each opposite-end memory index; determining a target opposite-end memory index from the opposite-end memory indexes; and writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index by adopting a remote direct memory access (RDMA) unilateral operation mode through the local intelligent network card (DPU).
It can be seen that, in the embodiment of the present invention, after the connection is established between the opposite end nodes, each opposite end memory index is obtained by exchanging memory index information with the opposite end node, then when information interaction is performed with the opposite end, a target opposite end memory index can be determined from the obtained opposite end memory indexes, and then the information to be sent is written into the opposite end memory address corresponding to the target opposite end memory index in the opposite end node by adopting RDMA unilateral operation through the local DPU; according to the embodiment of the invention, the data writing can be realized by only one-time transmission, the performance of RDMA single-side operation can be better exerted, and the data processing efficiency and the overall performance of distributed storage are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the prior art and the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of the conventional RDMA multi-channel send/recv approach;
FIG. 2 is a schematic diagram of a prior art RDMA communication;
FIG. 3 is a schematic flow chart of a distributed storage offloading method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another distributed storage offloading method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a multi-node memory index mapping scheme according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a single RDMA transmission with whole-course unilateral operation according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a distributed storage offloading device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a computer readable storage medium according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a distributed storage offloading method and device, electronic equipment and a computer-readable storage medium, which can better exploit the performance of RDMA unilateral operation in use and improve the data processing efficiency and overall performance of distributed storage.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 3, fig. 3 is a flow chart of a distributed storage offloading method according to an embodiment of the invention. The method comprises the following steps:
s110: establishing connection with a peer node;
it should be noted that, in the distributed system, a plurality of memory addresses may be allocated in advance for each node, where each memory address corresponds to one memory index. When the local node needs to perform memory information interaction with a certain peer node, it may establish a connection with that peer node.
S120: exchanging memory index information with the opposite terminal nodes to obtain memory indexes of the opposite terminals;
specifically, after the connection with the peer node is established, memory index information is exchanged with the peer node: the local node sends its local memory indexes to the peer node so that the peer node can receive and store them, and the peer node likewise sends its opposite-end memory indexes to the local node, which receives and stores them, so that the local node obtains read/write access to the peer node's memory.
S130: determining a target opposite-end memory index from the opposite-end memory indexes;
specifically, after the memory indexes have been exchanged, when the local node needs to send information to the peer, the target opposite-end memory index can be determined from the stored opposite-end memory indexes: an available opposite-end memory index may be chosen at random from the plurality of opposite-end memory indexes as the target, or the opposite-end memory indexes may be polled one by one in a preset order to find an available one as the target opposite-end memory index.
S140: and writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index by adopting a remote direct data access (RDMA) unilateral operation mode through the local intelligent network card DPU.
It can be understood that after the target opposite-end memory index is determined, the information to be sent can be sent to the opposite end through the local DPU in an RDMA unilateral operation mode, the corresponding opposite-end memory address is determined from all the opposite-end memory addresses of the opposite end according to the target opposite-end memory index, and then the information to be sent is written into the opposite-end memory address, so that writing of information is realized, and network unloading is completed.
That is, in the embodiment of the present invention, all IOs (Input/Output) can reduce ack messages in message transmission through unilateral operation of a reas/write mode, reduce participation of CPUs (Central Processing Unit/processors, central processing units) on the other side, make the CPUs do independent things, do not participate in message transmission, accelerate IO processing speed, and accelerate message transmission.
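To make the unilateral write concrete, the following is a minimal sketch of how such an RDMA WRITE could be posted with the libibverbs API. It is an illustration under stated assumptions, not the patent's implementation: the helper name rdma_write_to_peer, and the pre-existing queue pair, registered memory region and peer address/rkey (taken from the target opposite-end memory index) are assumptions.

```c
/* Minimal sketch of posting an RDMA WRITE (unilateral operation) with libibverbs.
 * Assumes the queue pair (qp), a registered local buffer (mr) and the peer
 * address/rkey from the chosen opposite-end memory index already exist. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int rdma_write_to_peer(struct ibv_qp *qp, struct ibv_mr *mr,
                       void *local_buf, size_t len,
                       uint64_t peer_addr, uint32_t peer_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local payload, already registered */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr;
    struct ibv_send_wr *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: peer CPU is not involved */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a local completion */
    wr.wr.rdma.remote_addr = peer_addr;          /* from the target opposite-end memory index */
    wr.wr.rdma.rkey        = peer_rkey;

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}
```

Because the WRITE needs no matching receive operation on the peer, one transmission is enough to place the data, which is exactly the property the embodiment relies on.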
It can be seen that, in the embodiment of the present invention, after the connection is established between the opposite end nodes, each opposite end memory index is obtained by exchanging memory index information with the opposite end node, then when information interaction is performed with the opposite end, a target opposite end memory index can be determined from the obtained opposite end memory indexes, and then the information to be sent is written into the opposite end memory address corresponding to the target opposite end memory index in the opposite end node by adopting RDMA unilateral operation through the local DPU; according to the embodiment of the invention, the data writing can be realized by only one-time transmission, the performance of RDMA single-side operation can be better exerted, and the data processing efficiency and the overall performance of distributed storage are improved.
On the basis of the above embodiment, the technical solution is further optimized and described in this embodiment, which is specifically as follows:
referring to fig. 4, the distributed storage offloading method provided in the embodiment of the present invention includes:
s210: establishing a local memory pool for each node;
specifically, in practical application, a local memory pool may be established in advance for each node in the distributed system; in particular, each local node may apply for its local memory pool when it is initialized.
S220: registering each local memory pool into a local DPU;
specifically, in the embodiment of the present invention, for each node, after the local memory pool is established, it is registered into the local DPU of that node, so that network offloading can be performed through the DPU.
S230: establishing local memory indexes respectively corresponding to all local memory addresses of a local memory pool;
it may be appreciated that, taking one node as an example, after the local memory pool is established, a corresponding local memory index may be established for each local memory address in the pool, that is, one local memory address corresponds to one local memory index. The local memory index includes the local memory address, a key, and the data header information hdr of the corresponding local memory block, so that the local memory address and the data header of the corresponding local memory block can be accurately located from the local memory index.
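As an illustration only, one entry of such an index table might look like the sketch below; the field names and the explicit state field are assumptions, since the text only states that an index carries the memory address, the key and the data header hdr of its memory block.

```c
/* Hypothetical layout of one memory index entry; names are illustrative. */
#include <stdint.h>

enum idx_state { IDX_FREE = 0, IDX_WRITTEN = 1 };   /* state identifier */

struct mem_index {
    uint64_t addr;           /* start address of the memory block */
    uint32_t rkey;           /* key needed to access the block via RDMA */
    uint32_t state;          /* IDX_FREE or IDX_WRITTEN */
    struct {
        uint32_t data_len;   /* length of the payload placed in the block */
        uint32_t msg_type;   /* e.g. normal IO data vs. return information */
        uint32_t index_head; /* which index was used, for the return path */
    } hdr;                   /* data header information hdr */
};
```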
It should be noted that, S210 to S230 may be performed once in advance when the node is initialized, and the subsequent local node may not repeatedly perform the process when the peer is set up to perform the memory interaction.
S240: establishing connection with a peer node;
after a local memory pool and a local memory index corresponding to each local memory address are established for each node, when the local node needs to perform memory information interaction with a certain opposite node, the local node may establish a connection corresponding to the opposite node.
S250: transmitting a plurality of local memory indexes to the opposite node;
after the connection with the peer node is established, memory index information is exchanged with it: the local node may send a plurality of local memory indexes to the peer node, and the peer node stores each local memory index after receiving them.
Specifically, the plurality of local memory indexes can be sent to the peer node actively once the connection is established, or sent when a request from the peer node to read the local memory indexes is received.
In addition, by reading all the opposite-end memory indexes of the peer node at once, the local node can distinguish and use the indexes directly, without multiple interactions with the peer node to obtain memory information carried in messages; the index information (local memory address, key and data header information hdr) is loaded entirely into the CPU cache in the form of memory indexes, which facilitates the memory judgement in the subsequent polling process and improves overall performance.
It should be further noted that, in the embodiment of the present invention, all message memory interactions use huge-page memory. Using huge pages reduces page-fault interrupts and kernel context switches, which reduces the latency of the whole IO process and improves IO processing speed. In addition, the memory used is registered into the DPU and all of it is zero-copy: when a message needs to be sent, it is offloaded directly through the DPU, which is responsible for forwarding the whole message. A part of the huge-page memory is used as the index area.
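A minimal sketch of this step, assuming a Linux huge-page mmap and a libibverbs registration stand in for "registering the memory into the DPU"; the function name and access flags are illustrative.

```c
/* Sketch: allocate a huge-page backed message pool and register it so the
 * NIC/DPU can access it with zero copy. Huge-page flags are assumptions. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <infiniband/verbs.h>

struct ibv_mr *alloc_and_register_pool(struct ibv_pd *pd, size_t pool_size)
{
    void *pool = mmap(NULL, pool_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (pool == MAP_FAILED)
        return NULL;

    /* Registration pins the pool and makes it addressable by the NIC/DPU;
     * REMOTE_WRITE lets peers target it with unilateral RDMA WRITEs. */
    return ibv_reg_mr(pd, pool, pool_size,
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
}
```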
Further, after establishing the local memory indexes corresponding to the local memory addresses of the local memory pool, the method further includes:
respectively distributing a plurality of local memory indexes for other nodes;
the sending the plurality of local memory indexes to the peer node includes:
And sending the plurality of local memory indexes corresponding to the opposite end node.
In practical application, in order to prevent multiple nodes from writing the same block of memory at the same time and thereby corrupting memory data, in the embodiment of the present invention, after the local memory indexes corresponding to the local memory addresses of the local memory pool have been established on each node, a number of local memory indexes may be allocated to each of the other nodes in advance. Specifically, as shown in the multi-node memory index mapping diagram of fig. 5, within the memory indexes of each node's memory pool, a plurality of memory indexes are allocated to every other node. After the local node establishes a connection with the peer node, the local memory indexes allocated to that peer node are sent to it.
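One simple way to realize such a mapping is to hand each peer a disjoint slice of the local index table, as in the sketch below; the even split and the function name are assumptions, the text only requires that the indexes given to different nodes never overlap.

```c
/* Sketch: statically partition the local memory indexes among peer nodes so
 * that no two peers can ever be handed the same index (and memory block). */
#include <stddef.h>

struct index_range { size_t first; size_t count; };

/* total_indexes local indexes, node_count peers, node_id in [0, node_count) */
struct index_range indexes_for_peer(size_t total_indexes,
                                    size_t node_count, size_t node_id)
{
    size_t per_node = total_indexes / node_count;   /* even split, assumed */
    struct index_range r = { node_id * per_node, per_node };
    return r;
}
```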
S260: receiving a plurality of opposite-end memory indexes sent by an opposite-end node, wherein the opposite-end memory indexes correspond to corresponding opposite-end memory addresses of an opposite-end memory pool;
of course, in practical application, the peer node may also send a plurality of peer memory indexes to the local node, and specifically may send a plurality of peer memory indexes corresponding to the local node, where the local node stores the plurality of peer memory indexes after receiving the plurality of peer memory indexes.
S270: determining a target opposite-end memory index from the opposite-end memory indexes;
specifically, after the memory indexes have been exchanged, when the local node needs to send information to the peer, the target opposite-end memory index can be determined from the stored opposite-end memory indexes; specifically, the opposite-end memory indexes in the opposite-end index information may be polled to determine the target opposite-end memory index. That is, by polling the plurality of opposite-end memory indexes, one usable opposite-end memory index is found and used as the target opposite-end memory index.
Specifically, for each node, a corresponding state identifier may be set for each local memory index of that node: when the memory index is not selected, its state identifier is the idle state, and when the memory index is selected, its state identifier is the written state.
Therefore, the process of determining the target opposite-end memory index by polling each opposite-end memory index in the opposite-end memory index information may specifically include:
polling each opposite-end memory index in the index information, and finding an opposite-end memory index whose state identifier is the idle state;
And determining the opposite-end memory index whose state identifier is the idle state as the target opposite-end memory index.
It can be understood that, in practical application, when the opposite-end memory indexes are polled, one opposite-end memory index whose state identifier is the idle state can be found among them and determined as the target opposite-end memory index.
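A sketch of this polling step, reusing the hypothetical mem_index structure from the earlier sketch:

```c
/* Scan the locally cached copies of the peer's memory indexes and return the
 * position of the first one whose state identifier is idle, or -1 if none is. */
int find_free_peer_index(const struct mem_index *peer_indexes, int count)
{
    for (int i = 0; i < count; i++) {
        if (peer_indexes[i].state == IDX_FREE)
            return i;   /* target opposite-end memory index */
    }
    return -1;          /* caller may re-read or request more indexes (see below) */
}
```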
S280: and writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index by adopting an RDMA unilateral operation mode through the local DPU.
Specifically, after the target opposite-end memory index is determined, the information to be sent can be sent to the peer through the local DPU in an RDMA unilateral operation mode: the corresponding opposite-end memory address is determined from the peer's memory addresses according to the target opposite-end memory index, and the information to be sent is then written into that opposite-end memory address, so that the information is written and the network offloading is completed. A schematic diagram of a single RDMA transmission with whole-course unilateral operation in the embodiment of the invention is shown in FIG. 6.
When writing the information to be transmitted into the opposite-end memory address corresponding to the target opposite-end memory index, the data segment is written into the corresponding opposite-end memory address first, and the data header hdr information is finally written into the corresponding opposite-end memory index, so that the peer node can use the memory information directly through the corresponding memory index; the ordering guarantee of RDMA messages keeps data and hdr in order, thereby achieving memory consistency.
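This ordering could be realized, for example, by posting two WRITEs on the same reliable connection, payload first and hdr second, as sketched below; the helper reuses rdma_write_to_peer from the earlier sketch and assumes that the in-order execution of successive WRITEs on one queue pair provides the order keeping referred to above.

```c
/* Sketch: write the payload into the peer memory block first, then write the
 * hdr into the peer's index entry. Both WRITEs go to the same queue pair, so
 * once the peer sees a valid hdr the payload is assumed to be in place. */
int send_payload_then_hdr(struct ibv_qp *qp, struct ibv_mr *mr,
                          void *payload, size_t payload_len,
                          void *hdr, size_t hdr_len,
                          uint64_t peer_block_addr, uint64_t peer_index_addr,
                          uint32_t peer_rkey)
{
    int rc = rdma_write_to_peer(qp, mr, payload, payload_len,
                                peer_block_addr, peer_rkey);
    if (rc)
        return rc;
    /* hdr last: the hdr landing in the index is what marks the block as written */
    return rdma_write_to_peer(qp, mr, hdr, hdr_len,
                              peer_index_addr, peer_rkey);
}
```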
Further, the process of determining the target opposite-end memory index from the opposite-end memory indexes in S270 may specifically include:
determining a plurality of target opposite-end memory indexes from the opposite-end memory indexes based on the information quantity to be transmitted;
the process of writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index in S280 may specifically include:
and writing the information to be transmitted into opposite terminal memory addresses respectively corresponding to the target opposite terminal indexes.
It should be noted that, for any local node, according to the amount of information to be transmitted, multiple target opposite-end memory indexes can be determined from the stored opposite-end memory indexes; that is, multiple target opposite-end memory indexes in the idle state are obtained at one time and the unilateral message transmission is performed directly, with each piece of information to be transmitted written into the opposite-end memory address corresponding to one target opposite-end memory index. In this way, multiple IO interactions follow a single read of the target opposite-end memory indexes, which reduces message communication during the interaction and improves IO efficiency and overall performance.
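A sketch of acquiring several free target indexes in one pass, again over the hypothetical mem_index table; names are illustrative.

```c
/* Collect up to `wanted` free opposite-end indexes for a batch of messages.
 * Returns how many were actually found and stored in out[]. */
int pick_free_peer_indexes(const struct mem_index *peer_indexes, int count,
                           int wanted, int *out)
{
    int got = 0;
    for (int i = 0; i < count && got < wanted; i++) {
        if (peer_indexes[i].state == IDX_FREE)
            out[got++] = i;
    }
    return got;
}
```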
Further, after writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index, the method may further include:
and changing the state identifier of the original opposite-end memory index corresponding to the target opposite-end memory index stored locally into a written state.
That is, after the corresponding data has been written into the opposite-end memory address corresponding to the target opposite-end memory index on the peer node, the target opposite-end memory index has been used and data has been written into its opposite-end memory address. At this time, the state identifier of the locally stored target opposite-end memory index is changed to the written state, and the state identifier of the corresponding original opposite-end memory index on the peer node may also be changed to the written state. Thus, in the embodiment of the invention, after a successful write the state identifiers of the corresponding target opposite-end memory index and original opposite-end memory index are changed to the written state directly, so no interactive operation is needed and message transmission in this process is reduced.
Further, the method may further include:
traversing each local memory index to determine a target local memory index with a written state identification;
And carrying out IO processing on the information in the local memory address corresponding to the target local memory index.
It should be noted that, after the peer node writes data into a certain local memory address of the local node, the state identifier of the corresponding local memory index is changed to the written state. The local node may therefore find the target local memory indexes whose state identifier is the written state by traversing the local memory indexes, and then perform IO processing on the information in the local memory addresses corresponding to those target local memory indexes.
It can be understood that if multiple target local memory indexes are found during the traversal, the information in the memory addresses corresponding to all of them can be obtained at once based on the multiple target memory indexes, without multiple reads, which improves IO speed and overall performance.
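On the receiving side, the traversal described above might look like the following sketch; handle_io() stands for the IO processing of the next step, and the immediate release of the index is a simplification of the state handling described further below.

```c
/* Sketch of the receive-side loop over the node's own index table. */
void handle_io(struct mem_index *idx);   /* placeholder: process data at idx->addr */

void poll_local_indexes(struct mem_index *local_indexes, int count)
{
    for (int i = 0; i < count; i++) {
        if (local_indexes[i].state == IDX_WRITTEN) {   /* peer wrote into this block */
            handle_io(&local_indexes[i]);
            local_indexes[i].state = IDX_FREE;         /* release the index afterwards */
        }
    }
}
```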
Further, the process of performing IO processing on information in the local memory address corresponding to the target local memory index may specifically include:
and directly issuing the data in the local memory address corresponding to the target local memory index to the physical disk.
It should be noted that the data in the corresponding local memory address may be issued directly to the physical disk; for example, a request composed of the index memory and the data memory can be issued directly to the physical disk, so that the memory is handed to the data disk without copying, which accelerates IO processing and thus increases the IO speed of the whole HCI environment.
Further, after performing the IO processing on the information in the local memory address corresponding to the target local memory index, the method may further include:
and changing the state identification of the target local memory index into an idle state.
Specifically, after the data in the local memory address corresponding to the target local memory index has been processed, the state identifier of the target local memory index can be changed to the idle state, that is, the index is released, so that it can again be chosen from the idle local memory indexes when the peer node sends data subsequently.
Further, after performing the IO processing on the information in the local memory address corresponding to the target local memory index, the method may further include:
generating local return information, wherein the local return information comprises an index head of a target local memory index;
determining a current target opposite-end memory index from the opposite-end memory indexes;
and writing the local return information into the opposite-end memory address corresponding to the current target opposite-end memory index by adopting an RDMA unilateral operation mode through the local DPU.
It should be noted that, in practical application, after the IO processing of the information in the local memory address corresponding to the target local memory index, local return information containing the index head of the target local memory index may be generated and used as the information to be sent. A target opposite-end memory index is then determined from the opposite-end memory indexes, and the local return information is written into the opposite-end memory address corresponding to that target opposite-end memory index through the local DPU in the RDMA unilateral operation mode, so that after the peer node processes the information in that opposite-end memory address, it changes the state identifier of the memory index corresponding to the index head it stores to the idle state.
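A sketch of this acknowledgement path, reusing the helpers from the earlier sketches; the return_info layout is an assumption, and the record is assumed to live inside the registered message pool so that it can be carried by a unilateral WRITE.

```c
/* Hypothetical return-information record written back to the peer. */
struct return_info {
    uint32_t msg_type;     /* marks this message as return information */
    uint32_t index_head;   /* index head of the local index that was processed */
};

/* ri must point into the registered message pool (mr). */
int send_return_info(struct ibv_qp *qp, struct ibv_mr *mr, struct return_info *ri,
                     struct mem_index *peer_indexes, int count)
{
    int slot = find_free_peer_index(peer_indexes, count);  /* earlier sketch */
    if (slot < 0)
        return -1;                                         /* no free peer index */
    return rdma_write_to_peer(qp, mr, ri, sizeof(*ri),
                              peer_indexes[slot].addr, peer_indexes[slot].rkey);
}
```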
Still further, the method may further comprise:
when the information in the local memory address corresponding to the target local memory index is opposite-end return information, changing the state identification of the corresponding opposite-end memory index into an idle state based on the index head in the opposite-end return information.
When IO processing is performed on the information in the local memory address corresponding to the target local memory index and that information is determined to be return information from the peer node, this indicates that the data the local node previously wrote into the opposite-end memory address corresponding to a target opposite-end memory index has been processed. At this time, the corresponding opposite-end memory index can be found among the opposite-end memory indexes stored by the local node according to the index head in the peer return information, and its state identifier is then changed to the idle state, so that the index can be used again in subsequent data interaction.
It should be noted that, in practical application, the local memory index in the embodiment of the present invention includes a local memory address, a key, and data header information corresponding to the local memory block; the opposite terminal memory index comprises an opposite terminal memory address, a secret key and data head information corresponding to the opposite terminal memory block.
Further, the method may further include:
when the target opposite-end memory index is not determined from the opposite-end memory indexes, sending a re-reading index information request to the opposite-end node;
and receiving the latest memory indexes of each opposite terminal sent by the opposite terminal node.
It should be noted that, when the local node has no stored opposite-end memory index whose state identifier is the idle state, a request to re-read the index information may be sent to the peer node so that the peer node returns its current latest memory indexes to the local node. The index information is thereby refreshed in a single update, reducing interactions; after receiving the latest opposite-end memory indexes sent by the peer node, the local node determines an opposite-end memory index whose state identifier is the idle state from them as the target opposite-end memory index, so as to write the data.
Still further, the method may further comprise:
and sending an index expansion request to the opposite terminal node under the condition that the target opposite terminal memory index is not determined based on the latest opposite terminal memory indexes.
It should be noted that, if the local node still cannot find an opposite-end memory index whose state identifier is the idle state after obtaining the peer node's latest indexes, the memory indexes allocated by the peer node to the local node are insufficient. An index expansion request can therefore be sent to the peer node to ask it to expand the allocated memory indexes, thereby avoiding the problem of insufficient memory under high load.
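Putting the two fallbacks together, index acquisition could be organised as in the sketch below; peer_conn, request_index_refresh() and request_index_expand() are hypothetical names standing for the re-read and expansion requests described above.

```c
/* Hypothetical per-peer bookkeeping and fallback requests. */
struct peer_conn {
    struct mem_index *indexes;   /* cached copies of the peer's memory indexes */
    int index_count;
};
void request_index_refresh(struct peer_conn *peer);  /* peer re-sends its latest indexes */
void request_index_expand(struct peer_conn *peer);   /* peer allocates more indexes for us */

int acquire_peer_index(struct peer_conn *peer)
{
    int slot = find_free_peer_index(peer->indexes, peer->index_count);
    if (slot >= 0)
        return slot;

    request_index_refresh(peer);                      /* first fallback: re-read */
    slot = find_free_peer_index(peer->indexes, peer->index_count);
    if (slot >= 0)
        return slot;

    request_index_expand(peer);                       /* second fallback: expand */
    return find_free_peer_index(peer->indexes, peer->index_count);
}
```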
It can be understood that, in the embodiment of the invention, by introducing memory polling the polling bandwidth is increased and polling latency is reduced, kernel event notification in message communication is reduced and messages can be processed directly; meanwhile, all requests stay in user space and need no kernel switching, which reduces the CPU overhead of the whole message and IO process, speeds up data processing, lowers data-processing latency, improves the IO speed of the whole HCI environment, lowers IO latency, exploits the bandwidth of the whole DPU and reduces bandwidth waste.
In other words, in a distributed scenario, because there are multiple copies, a single piece of data is sent to multiple nodes. In the traditional communication model, transmitting data with two copies requires 10 TCP message transmissions and two event notifications, whereas this scheme requires only 4 message transmissions and no event notification. Network message interactions in the copy-forwarding scenario are greatly reduced, network transmission efficiency is improved, event notifications are reduced, and event handling becomes more timely; total network TPS consumption is reduced by 60%, communication performance is improved by more than 50%, the read/write IOPS of the cluster is increased, the latency of a single IO is reduced, and the competitiveness of the whole distributed storage is improved.
Based on the foregoing embodiments, the embodiment of the present invention further provides a distributed storage offloading device; please refer to fig. 7. The device comprises:
a connection module 11, configured to establish a connection with a peer node;
the interaction module 12 is configured to exchange memory index information with the peer nodes, and obtain memory indexes of the peer nodes;
the determining module 13 is configured to determine a target opposite-end memory index from the opposite-end memory indexes;
the operation module 14 is configured to write, through the local intelligent network card (DPU) in a remote direct memory access (RDMA) unilateral operation mode, the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index.
Further, the method further comprises the following steps:
the first building module is used for building a local memory pool for each node;
and the second building module is used for building local memory indexes respectively corresponding to the local memory addresses of the local memory pool.
Further, the interaction module 12 includes:
the first sending module is used for sending the plurality of local memory indexes to the opposite terminal node;
the first receiving module is used for receiving a plurality of opposite-end memory indexes sent by the opposite-end node, and the opposite-end memory indexes correspond to corresponding opposite-end memory addresses of the opposite-end memory pool.
Further, the method further comprises the following steps:
the distribution module is used for respectively distributing a plurality of local memory indexes to other nodes;
and the first sending module is used for sending the plurality of local memory indexes corresponding to the opposite terminal node.
Further, the method further comprises the following steps:
and the registration module is used for registering each local memory pool into the local DPU.
Further, the determining module 13 is configured to determine the target opposite-end memory index by polling each opposite-end memory index in the index information.
Further, the determining module 13 includes:
the polling unit is used for polling each opposite-end memory index in the index information and finding an opposite-end memory index whose state identifier is the idle state;
and the determining unit is used for determining the opposite-end memory index with the state marked as the idle state as a target opposite-end memory index.
Further, the method further comprises the following steps:
the first changing module is used for changing the locally stored target opposite-end memory index and the corresponding state identification of the original opposite-end memory index of the opposite-end node into a written state.
Further, the method further comprises the following steps:
the traversing module is used for traversing each local memory index and determining a target local memory index with a written state as a state identifier;
And the processing module is used for carrying out IO processing on the information in the local memory address corresponding to the target local memory index.
Further, the processing module is configured to directly issue data in the local memory address corresponding to the target local memory index to the physical disk.
Further, the method further comprises the following steps:
and the second changing module is used for changing the state identification of the target local memory index into an idle state.
Further, the method further comprises the following steps:
the generation module is used for generating local return information, wherein the local return information comprises an index head of a target local memory index;
the determining module 13 is further configured to determine a current target opposite-end memory index from the opposite-end memory indexes;
the operation module 14 is further configured to write, by using the local DPU in an RDMA single-side operation manner, the local return information into the opposite-end memory address corresponding to the current target opposite-end memory index.
Further, the method further comprises the following steps:
and the third changing module is used for changing the state identification of the corresponding opposite-end memory index based on the index head in the opposite-end return information into an idle state when the information in the local memory address corresponding to the target local memory index is the opposite-end return information.
Further, the method further comprises the following steps:
the second sending module is used for sending a re-reading index information request to the opposite terminal node when the target opposite terminal memory index is not determined from the opposite terminal memory indexes;
and the second receiving module is used for receiving the latest memory indexes of all the opposite ends sent by the opposite end node.
Further, the method further comprises the following steps:
and the third sending module is used for sending an index expansion request to the opposite terminal node under the condition that the target opposite terminal memory index is not determined based on the latest opposite terminal memory indexes.
Further, the determining module 13 is configured to determine, based on the amount of information to be sent, a plurality of target peer memory indexes from the peer memory indexes;
the operation module 14 is configured to write each information to be sent into the corresponding opposite-end memory address corresponding to each target opposite-end index.
It should be noted that the distributed storage offloading apparatus provided in the embodiment of the present invention has the same advantages as the distributed storage offloading method provided in the above embodiments; for the specific description of the distributed storage offloading method involved in the embodiment of the present invention, reference is made to the above embodiments, which are not repeated herein.
As shown in fig. 8, on the basis of the foregoing embodiment, an embodiment of the present invention further provides an electronic device, including:
a memory 20 for storing a computer program;
a processor 21 for implementing the steps of the distributed storage offloading method as described above when executing the computer program.
The electronic device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
The processor 21 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor is the processor for processing data in the awake state, also called the CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 20 may include one or more computer-readable storage media, which may be non-transitory. The memory 20 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 20 is at least used for storing a computer program 201 which, when loaded and executed by the processor 21, is capable of implementing the relevant steps of the distributed storage offloading method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may further include an operating system 202, data 203, and the like, and the storage manner may be transient storage or permanent storage. The operating system 202 may include Windows, Unix, Linux, and the like. The data 203 may include, but is not limited to, a configured offset and the like.
In some embodiments, the electronic device may further include a display 22, an input-output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting of the electronic device and may include more or fewer components than shown.
It will be appreciated that, if the distributed storage offloading method of the above embodiments is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product which is stored in a storage medium and performs all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, a magnetic disk, or an optical disk.
Based on this, as shown in fig. 9, an embodiment of the present invention further provides a computer-readable storage medium 30, on which a computer program 31 is stored; the computer program 31, when executed by a processor, implements the steps of the distributed storage offloading method as described above.
It should be noted that the electronic device and the computer-readable storage medium provided in the embodiments of the present invention have the same beneficial effects as the distributed storage offloading method provided in the foregoing embodiments; for a specific description of the distributed storage offloading method involved in the embodiments of the present invention, reference is made to the foregoing embodiments, and details are not repeated herein.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for identical or similar parts between the embodiments, reference may be made to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and for relevant points reference is made to the description of the method section.
It should also be noted that, in this specification, relational terms such as first and second are used solely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Those skilled in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1. A distributed storage offloading method, comprising:
establishing a connection with an opposite-end node;
exchanging memory index information with the opposite-end node to obtain each opposite-end memory index;
determining a target opposite-end memory index from the opposite-end memory indexes;
and writing information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index through the local intelligent network card (DPU) in a remote direct memory access (RDMA) single-side operation manner.
2. The distributed storage offloading method of claim 1, further comprising, prior to the establishing a connection with the opposite-end node:
establishing a local memory pool for each node;
and establishing local memory indexes respectively corresponding to the local memory addresses of the local memory pool.
3. The distributed storage offloading method of claim 2, wherein the exchanging memory index information with the opposite-end node to obtain each opposite-end memory index comprises:
sending a plurality of the local memory indexes to the opposite-end node;
and receiving a plurality of opposite-end memory indexes sent by the opposite-end node, wherein each opposite-end memory index corresponds to a corresponding opposite-end memory address of an opposite-end memory pool.
4. The distributed storage offloading method of claim 3, further comprising, after the establishing the local memory indexes respectively corresponding to the local memory addresses of the local memory pool:
allocating a plurality of local memory indexes to each of the other nodes respectively;
wherein the sending a plurality of the local memory indexes to the opposite-end node comprises:
sending the plurality of local memory indexes corresponding to the opposite-end node to the opposite-end node.
5. The distributed storage offloading method of claim 2, further comprising, after the establishing the local memory pool:
registering each local memory pool into the local DPU.
6. The distributed storage offloading method of claim 2, wherein the determining a target opposite-end memory index from the opposite-end memory indexes comprises:
polling each opposite-end memory index in the index information to determine the target opposite-end memory index.
7. The distributed storage offloading method of claim 6, wherein the polling each opposite-end memory index in the index information to determine the target opposite-end memory index comprises:
polling each opposite-end memory index in the index information, and searching for an opposite-end memory index whose state identifier is the idle state;
and determining the opposite-end memory index whose state identifier is the idle state as the target opposite-end memory index.
8. The distributed storage offloading method of claim 2, wherein after the writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index, the method further comprises:
and changing the state identifiers of the locally stored target opposite-end memory index and of the original memory index corresponding to it on the opposite-end node to the written state.
9. The distributed storage offloading method of claim 2, further comprising:
traversing each local memory index to determine a target local memory index whose state identifier is the written state;
and carrying out IO processing on the information in the local memory address corresponding to the target local memory index.
10. The distributed storage offloading method of claim 9, wherein IO processing the information in the local memory address corresponding to the target local memory index comprises:
and directly issuing the data in the local memory address corresponding to the target local memory index to a physical disk.
11. The distributed storage offloading method of claim 9, wherein after IO processing the information in the local memory address corresponding to the target local memory index, further comprising:
and changing the state identifier of the target local memory index to the idle state.
12. The distributed storage offloading method of claim 9, wherein after IO processing the information in the local memory address corresponding to the target local memory index, further comprising:
generating local return information, wherein the local return information comprises an index header of the target local memory index;
determining a current target opposite-end memory index from the opposite-end memory indexes;
and writing the local return information into the opposite-end memory address corresponding to the current target opposite-end memory index by adopting an RDMA unilateral operation mode through the local DPU.
13. The distributed storage offloading method of claim 9, further comprising:
when the information in the local memory address corresponding to the target local memory index is opposite-end return information, changing, based on the index header in the opposite-end return information, the state identifier of the corresponding opposite-end memory index to the idle state.
14. The distributed storage offloading method of claim 2, wherein the local memory index comprises a local memory address, a key, and data header information corresponding to a local memory block;
and the opposite-end memory index comprises an opposite-end memory address, a key, and data header information corresponding to an opposite-end memory block.
15. The distributed storage offloading method of any one of claims 1 to 14, further comprising:
when the target opposite-end memory index is not determined from the opposite-end memory indexes, sending a re-reading index information request to the opposite-end node;
and receiving each latest opposite-end memory index sent by the opposite-end node.
16. The distributed storage offloading method of claim 15, further comprising:
and sending an index expansion request to the opposite-end node under the condition that the target opposite-end memory index is not determined based on the latest opposite-end memory indexes.
17. The distributed storage offloading method of any one of claims 1 to 14, wherein the determining a target opposite-end memory index from the opposite-end memory indexes comprises:
determining a plurality of target opposite-end memory indexes from the opposite-end memory indexes based on the amount of information to be sent;
and the writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index comprises:
writing each piece of information to be sent into the opposite-end memory addresses respectively corresponding to the target opposite-end memory indexes.
18. A distributed storage offloading apparatus, comprising:
the connection module is used for establishing connection with the opposite end node;
the interaction module is used for exchanging memory index information with the opposite-end node to obtain each opposite-end memory index;
the determining module is used for determining a target opposite-end memory index from the opposite-end memory indexes;
and the operation module is used for writing the information to be sent into the opposite-end memory address corresponding to the target opposite-end memory index through the local intelligent network card (DPU) in a remote direct memory access (RDMA) single-side operation manner.
19. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the distributed storage offloading method of any one of claims 1 to 17 when executing the computer program.
20. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the distributed storage offloading method of any one of claims 1 to 17.
CN202310202864.0A 2023-03-06 2023-03-06 Distributed storage unloading method and device, electronic equipment and storage medium Active CN116069262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310202864.0A CN116069262B (en) 2023-03-06 2023-03-06 Distributed storage unloading method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116069262A true CN116069262A (en) 2023-05-05
CN116069262B CN116069262B (en) 2023-07-14

Family

ID=86183762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310202864.0A Active CN116069262B (en) 2023-03-06 2023-03-06 Distributed storage unloading method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116069262B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117762572A (en) * 2024-01-03 2024-03-26 北京火山引擎科技有限公司 Unloading method and equipment for host and virtual machine shared directory file system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111125049A (en) * 2019-12-24 2020-05-08 上海交通大学 RDMA (remote direct memory Access) -and-nonvolatile-memory-based distributed file data block reading and writing method and system
CN113220693A (en) * 2021-06-02 2021-08-06 北京字节跳动网络技术有限公司 Computing storage separation system, data access method, medium and electronic device thereof
CN114461593A (en) * 2022-04-13 2022-05-10 云和恩墨(北京)信息技术有限公司 Log writing method and device, electronic equipment and storage medium
CN115168022A (en) * 2022-05-12 2022-10-11 阿里巴巴(中国)有限公司 Object processing method
CN115509435A (en) * 2021-06-23 2022-12-23 深信服科技股份有限公司 Data reading and writing method, device, equipment and medium

Also Published As

Publication number Publication date
CN116069262B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN110647480B (en) Data processing method, remote direct access network card and equipment
US9385976B1 (en) Systems and methods for storing message data
CN112948318B (en) RDMA-based data transmission method and device under Linux operating system
JP5280135B2 (en) Data transfer device
CN106453444B (en) Method and equipment for sharing cache data
CN116069262B (en) Distributed storage unloading method and device, electronic equipment and storage medium
CN111338806B (en) Service control method and device
WO2022017475A1 (en) Data access method and related device
CN115964319A (en) Data processing method for remote direct memory access and related product
CN111404931A (en) Remote data transmission method based on persistent memory
CN114461593B (en) Log writing method and device, electronic device and storage medium
CN117312229B (en) Data transmission device, data processing equipment, system, method and medium
CN117370046A (en) Inter-process communication method, system, device and storage medium
CN113238856A (en) RDMA (remote direct memory Access) -based memory management method and device
CN112052104A (en) Message queue management method based on multi-computer-room realization and electronic equipment
CN111400213B (en) Method, device and system for transmitting data
WO2023246236A1 (en) Node configuration method, transaction log synchronization method and node for distributed database
CN109976686B (en) Distributed display system and method
CN113641604B (en) Data transmission method and system
CN112954068B (en) RDMA (remote direct memory Access) -based data transmission method and device
CN113596085A (en) Data processing method, system and device
CN114691382A (en) RDMA-based communication method, node, system and medium
CN116601616A (en) Data processing device, method and related equipment
CN113608686B (en) Remote memory direct access method and related device
CN110519242A (en) Data transmission method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant