CN114489465A - Method for processing data by using network card, network equipment and computer system - Google Patents

Method for processing data by using network card, network equipment and computer system

Info

Publication number
CN114489465A
CN114489465A (application number CN202011268999.XA)
Authority
CN
China
Prior art keywords
write
address
write request
request
address space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011268999.XA
Other languages
Chinese (zh)
Inventor
陈晓雨
游俊
余博伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011268999.XA
Publication of CN114489465A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/12Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L1/16Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L1/1607Details of the supervisory signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Abstract

An embodiment of the present application provides a computer system comprising a compute node and a storage node, where the compute node is connected to the storage node through the storage node's network card. The compute node is configured to send a first write request, where the first write request includes a first write address, i.e., the address at which the data to be written is to be placed in the address space accessed by the first write request. The network card is configured to receive the first write request and write the data to be written into the storage node according to the first write address, and, upon determining that all write requests accessing the address space with write addresses preceding the first write address have completed, to notify the compute node that the first write request has completed. In the technical scheme provided by the embodiments of the present application, order-preserving writing of the address space is controlled by the storage node's network card, which reduces the load on the compute node.

Description

Method for processing data by using network card, network equipment and computer system
Technical Field
The present application relates to the field of storage, and in particular, to a method, a network device, and a computer system for processing data using a network card.
Background
A computer system used primarily for storing data typically includes multiple compute nodes and multiple storage nodes. When a compute node writes data into a segment of address space in a storage node, it must write to that address space in order, so that no scattered unused gaps (i.e., holes) appear in the storage space of the storage node. Specifically, after receiving a request from an upper-layer application (APP) to write data into the address space, the compute node first allocates a write address for the data within that space and records the write request according to the address order of the storage space, then sends the write request to the storage node's network card. On receiving the write request, the network card writes the data into the storage node using Remote Direct Memory Access (RDMA) and returns a completion message for the write request. After receiving the completion message, the compute node returns completion messages to the upper-layer application in the recorded address order of the write requests, thereby achieving order-preserving writes to the address space. In this technique, order-preserving writing of the address space is controlled mainly by the compute node, which increases the compute node's load.
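As a rough illustration, the compute-node-side ordering described above can be modeled as follows. This is a minimal sketch of the logic, not the actual implementation; all names are hypothetical:

```python
from collections import deque

class ComputeNodeOrderer:
    """Background scheme (sketch): the compute node allocates write
    addresses in order, records requests in address order, and releases
    completion messages to the application strictly in that order."""

    def __init__(self):
        self.offset = 0          # start of the unallocated address space
        self.queue = deque()     # write addresses, in allocation (address) order
        self.done = set()        # addresses whose NIC completion has arrived

    def submit(self, length):
        """Allocate a write address and record the request before sending."""
        addr = self.offset
        self.offset += length
        self.queue.append(addr)
        return addr

    def on_nic_completion(self, addr):
        """Handle a NIC completion; return the requests that may now be
        reported to the upper-layer application, in address order."""
        self.done.add(addr)
        completed = []
        while self.queue and self.queue[0] in self.done:
            completed.append(self.queue.popleft())
        return completed
```

Note that a completion arriving out of order (here, the 100-byte write finishing first) is held back until every earlier write has also completed.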
Disclosure of Invention
The embodiments of the present application aim to provide a data processing scheme for order-preserving writes to the storage space of a storage node. The scheme controls order-preserving writing of an address space through a network card, thereby reducing the load on the compute node.
To achieve the above objective, one aspect of the present application provides a computer system including a compute node and a storage node, where the compute node is connected to the storage node through the storage node's network card. The compute node is configured to send a write request accessing a first address space to the network card, where the first address space corresponds to a second address space in the storage node. The network card is configured to write the data to be written corresponding to the write request into the second address space in the storage node, and to return a completion message of the write request to the compute node according to the address order, within the first address space, of the write address corresponding to the write request.
In this scheme, the network card returns completion messages of write requests according to the address order of the write requests in the address space; that is, order-preserving writing of the address space is controlled by the network card. The compute node can therefore return a write request's completion message directly to the application upon receiving it, without performing ordering control, which reduces the compute node's load.
In an embodiment, when returning a completion message of the write request according to the address order of the corresponding write address in the first address space, the network card is specifically configured to: return the completion message of the write request to the compute node upon determining that all write requests accessing the first address space with write addresses before the write address corresponding to the write request have completed.
In an embodiment, the computer system includes a plurality of storage nodes, and the compute node is configured to send the write request to the network card of each of the plurality of storage nodes and to determine that the write request is completed after receiving the completion messages of the write request sent by the plurality of storage nodes.

In this implementation, the compute node need only determine that the completion messages returned by all of the storage nodes have been received; it does not need to perform ordering control, which reduces the compute node's load.
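A minimal sketch of this multi-node completion rule (names are illustrative; the real system tracks this per write request in the compute node):

```python
class ReplicaTracker:
    """Sketch: a write request sent to N storage nodes counts as completed
    only after completion messages from all N network cards have arrived."""

    def __init__(self, num_nodes):
        self.num_nodes = num_nodes
        self.acks = {}            # request id -> completion messages received

    def on_completion(self, request_id):
        """Record one storage node's completion message; return True once
        every storage node has acknowledged the request."""
        self.acks[request_id] = self.acks.get(request_id, 0) + 1
        return self.acks[request_id] == self.num_nodes
```

Because each network card already returns its completion message in order, the compute node only counts acknowledgments and performs no ordering of its own.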
In an implementation, the network card is further configured to record information of the write request, where the information indicates the address order, within the address space, of the write request's write address, and to return the completion message of the write request to the compute node according to the address order determined from this recorded information.

Because the recorded information indicates the address order of the write request's write address in the address space, the network card can return completion messages of write requests in an order-preserving manner based on it.
In an embodiment, the information of the write request is the request number of the write request, which is assigned by the compute node based on the order in which write requests are generated. When determining that all write requests accessing the first address space with write addresses before the write address corresponding to the write request have completed, the network card is specifically configured to: determine that the number of the write request is the smallest among the numbers, recorded in the network card, of the outstanding write requests accessing the first address space, which establishes that all write requests accessing the first address space with earlier write addresses have completed.
By having the network card record request numbers, and because the order of a write request's number corresponds to the order of its write address, i.e., the number indicates the address order of the write address in the address space, the network card can determine whether a write request may be acknowledged in order by checking whether its number is the smallest among the numbers of the outstanding write requests accessing the first address space, and can thereby return completion messages in an order-preserving manner.
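The minimum-number check might be modeled as follows (a sketch; names are hypothetical, and it works for any monotonically increasing numbering, including a fixed step greater than 1):

```python
class NicOrderKeeper:
    """NIC-side ordering by request number (sketch). A request is
    acknowledged only when it holds the smallest number among the
    outstanding requests accessing the address space."""

    def __init__(self):
        self.pending = set()     # numbers of received, not-yet-acknowledged requests
        self.written = set()     # numbers whose data has been written to storage

    def on_receive(self, number):
        self.pending.add(number)

    def on_write_done(self, number):
        """Data for `number` is now on the storage node; return the
        requests whose completion messages may now be sent, in order."""
        self.written.add(number)
        acked = []
        while self.pending and min(self.pending) in self.written:
            n = min(self.pending)
            self.pending.discard(n)
            self.written.discard(n)
            acked.append(n)
        return acked
```

A write that finishes out of order is simply held until every smaller-numbered request has finished, at which point all of them are acknowledged together.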
In an embodiment, the information of the write request is the write address corresponding to the write request. When determining that all write requests accessing the first address space with write addresses before the write address corresponding to the write request have completed, the network card is specifically configured to: determine that the write address corresponding to the write request is the smallest among the write addresses included in the outstanding write requests, recorded in the network card, that access the first address space.
By having the network card record write addresses, it can determine whether a write request may be acknowledged in order by checking whether its write address is the smallest among the write addresses included in the outstanding write requests accessing the first address space, and can thereby return completion messages in an order-preserving manner.
In one embodiment, the network card is further configured to record information of the write request to an outstanding write request queue in the network card when it is determined that all write requests accessing the first address space and having write addresses before a write address corresponding to the write request are not completed.
In an implementation, the network card records the information of the write request that is next to be acknowledged; when it determines that the information of a write request matches this recorded information, the network card determines that all write requests accessing the first address space with write addresses before that write request's write address have completed.
Another aspect of the present application provides a data processing method, where the method is performed by a network card of a storage node, and the storage node is connected to a computing node through the network card, where the method includes: receiving a first write request sent by a computing node, wherein the first write request comprises a first write address, and the first write address is an address for writing data to be written in an address space accessed by the first write request; writing the data to be written into the storage node according to the first write address; upon determining that write requests accessing the address space and having write addresses preceding the first write address have all completed, notifying the compute node that the first write request has completed.
In one embodiment, after writing the data to be written to the storage node, the method further comprises: recording information of the first write request, the information indicating an address order of the first write address in the address space, wherein determining that write requests accessing the address space and having write addresses before the first write address have all completed comprises determining, based on the recorded information of the first write request, that write requests accessing the address space and having write addresses before the first write address have all completed.
In one embodiment, the information of the first write request is a request number of the first write request, the request number of the first write request being assigned by the computing node based on a generation order of the first write request, wherein determining, based on the recorded information of the first write request, that all write requests accessing the address space and having write addresses before the first write address have been completed comprises: determining that the request number of the first write request is the smallest number of the recorded numbers of the outstanding write requests accessing the address space.
In one embodiment, the information of the first write request is the first write address, wherein determining that all write requests accessing the address space and having write addresses before the first write address have been completed based on the recorded information of the first write request comprises: determining the first write address as a minimum write address of write addresses included in the recorded outstanding write requests accessing the address space.
In one embodiment, determining that all write requests accessing the address space with write addresses preceding the first write address have completed comprises determining that the information of the first write request equals the value of a first variable, where the first variable records the information of the outstanding write request accessing the address space that has the smallest write address; and after notifying the compute node that the first write request has completed, the method further comprises: modifying the value of the first variable so that it indicates the information of the write request following the first write request.
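A sketch of this first-variable mechanism, assuming request numbers serve as the recorded information and advance by a fixed step (all names are hypothetical):

```python
class ExpectedTracker:
    """Sketch of the first-variable scheme: `expected` records the
    information (here, the number) of the smallest outstanding write
    request; it is advanced after each acknowledgment."""

    def __init__(self, start=0, step=1):
        self.expected = start    # the "first variable"
        self.step = step
        self.held = set()        # finished writes waiting for earlier ones

    def on_write_done(self, number):
        """Return the requests that may now be acknowledged, in order."""
        if number != self.expected:
            self.held.add(number)      # out of order: record and wait
            return []
        acked = []
        while True:
            acked.append(self.expected)
            self.expected += self.step  # point at the next write request
            if self.expected in self.held:
                self.held.discard(self.expected)
            else:
                return acked
```

Compared with scanning a set for the minimum, a single comparison against the first variable suffices to decide whether a finished write is next in line.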
In one embodiment, the storage node includes a storage space corresponding to the address space, and writing the data to be written of the first write request based on the first write address comprises: determining a second write address in the storage space corresponding to the first write address, and writing the data to be written at the second write address.
Another aspect of the present application provides a data processing apparatus, where the apparatus is deployed in a network card of a storage node, and the storage node is connected to a computing node through the network card, where the apparatus includes: a receiving unit, configured to receive a first write request sent by a computing node, where the first write request includes a first write address, and the first write address is an address where data to be written is written in an address space accessed by the first write request; the writing unit is used for writing the data to be written into the storage node according to the first writing address; a notification unit, configured to notify the computing node that the first write request has been completed when it is determined that all write requests accessing the address space and having write addresses before the first write address have been completed.
In one embodiment, the apparatus further includes a recording unit, configured to record information of the first write request after the data to be written is written into the storage node, where the information indicates an address order of the first write addresses in the address space, and the notification unit is further configured to determine, based on the recorded information of the first write request, that all write requests accessing the address space and having write addresses before the first write address have been completed.
In one embodiment, the information of the first write request is a request number of the first write request, and the request number of the first write request is assigned by the computing node based on a generation order of the first write request, wherein the notification unit is further configured to: determining that the request number of the first write request is the smallest number of the recorded numbers of the outstanding write requests accessing the address space.
In one embodiment, the information of the first write request is the first write address, and the notification unit is further configured to: determining the first write address as a minimum write address of write addresses included in the recorded outstanding write requests accessing the address space.
In one embodiment, the notification unit is further configured to determine that the information of the first write request equals the value of a first variable, where the first variable records the information of the outstanding write request accessing the address space that has the smallest write address; and the apparatus further includes a modification unit configured to modify the value of the first variable after the compute node is notified that the first write request has completed, so that the value of the first variable indicates the information of the write request following the first write request.
Another aspect of the present application provides a network device, including a processing unit and a storage unit, where the storage unit stores executable code, and when the processing unit executes the executable code, the processing unit implements any one of the above methods.
Another aspect of the present application provides a network device, including: the communication interface is used for carrying out data transmission with the storage node and the computing node; and the processing unit is used for processing the data received by the communication interface so as to execute any one of the methods.
Another aspect of the present application provides a storage medium, where executable instructions are stored in the storage medium, and a network card of a storage node executes the executable instructions in the storage medium to implement any one of the above methods.
Another aspect of the present application provides a program product, and a network card of a storage node executes the program product to perform any one of the above methods.
Drawings
The embodiments of the present application are described below with reference to the accompanying drawings:
FIG. 1 is a block diagram of a computer system to which embodiments of the present application are applied;
FIG. 2 is a flowchart of a method for writing data in a computer system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a process for a compute node to perform multiple copy writes to two storage nodes based on PLOG 1;
FIG. 4 is a schematic diagram of a process for a compute node to perform multi-segment writes to three storage nodes based on PLOG 1;
fig. 5 is an architecture diagram of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 8 is an architecture diagram of a cloud service system applied in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
FIG. 1 is a block diagram of a computer system 100 to which embodiments of the present application are applied. As shown in fig. 1, the computer system 100 includes a computation layer 12 and a storage layer 13. The computation layer 12 comprises a plurality of compute nodes, two of which are schematically shown in fig. 1; the storage layer 13 includes a plurality of storage nodes, three of which are shown schematically in fig. 1. The compute nodes and the storage nodes may be physical servers, or virtual entities, such as virtual machines or containers, abstracted from shared hardware resources. The compute nodes may, for example, serve as application servers for multiple APPs to provide business services to users of user terminals, while the storage nodes may store the business data of those APPs. The compute nodes and storage nodes are provided with software and hardware architectures that implement RDMA, such as the InfiniBand, RoCE, or iWARP architectures, so that data can be transferred between compute nodes and storage nodes via RDMA.
A compute node accesses a storage space in a storage node through a segment of logical address space corresponding to that storage space. The logical address space is, for example, a persistence log (PLOG) space. A PLOG is identified by a PLOG ID, its unique identifier, and data on a PLOG is stored in append-only fashion: already stored data is never modified by overwriting; instead, the modification is appended at a new address. Generally, a PLOG corresponds to a segment of contiguous physical storage space in media such as storage class memory (SCM) or solid state disks (SSD) in a storage node. It should be understood that the embodiments of the present application are not limited in this regard; a PLOG may also correspond to other physical storage space or logical storage space in the storage node.
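The append-only addressing described above can be illustrated with a toy model (hypothetical; a real PLOG is backed by persistent media and addressed through its metadata):

```python
class AppendOnlyPlog:
    """Toy model of a PLOG's append-only semantics: stored bytes are
    never overwritten; every write, including a modification of earlier
    data, is appended at a new address."""

    def __init__(self):
        self.buf = bytearray()

    def append(self, data: bytes) -> int:
        """Append data and return its write address, expressed as the
        offset from the PLOG's start address (address 0)."""
        addr = len(self.buf)
        self.buf += data
        return addr
```

This is why allocating a write address reduces to advancing a single offset variable, as described later for the variable offset in the PLOG metadata.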
In the related art, the metadata of a PLOG is stored in a compute node to manage the PLOG. The metadata includes information such as the PLOG ID, the storage nodes corresponding to the PLOG, and the start address of the PLOG's unallocated space. The start address of the unallocated address space is recorded in the metadata by, for example, a variable offset; addresses in the PLOG are expressed as offsets from the PLOG's start address (i.e., address 0), so the initial value of the variable offset is 0. After a compute node receives a request from an upper-layer application to write data to a PLOG, in order to implement order-preserving writes to the PLOG's address space, the compute node first allocates a write address for the data based on the start address of unallocated space recorded in the PLOG's metadata, and updates the value of the variable offset based on the length of the data. Having obtained the allocated address, the compute node generates a write request for writing the data to the PLOG, the write request including the allocated address. The compute node then records the write request in an order-preserving queue corresponding to the PLOG according to the order of the write addresses, so that completions can later be returned to the upper-layer application in order. Next, the compute node sends the write request to the network card of the storage node recorded in the PLOG metadata, and the storage node's network card writes the data at the storage address corresponding to the write request's write address using an RDMA one-sided write. After finishing the write, the storage node's network card returns a completion message for the write request to the compute node.
After receiving the completion message returned by the storage node, the compute node marks the write request as completed in the order-preserving queue. When the PLOG corresponds to multiple storage nodes, the compute node sends the write request to each of them, and marks the write request as completed in the order-preserving queue only after all storage nodes have returned completion messages. Meanwhile, the compute node scans the order-preserving queue and returns each completed write request to the upper-layer application in the order in which the requests were recorded in the queue. Moreover, an upper-layer application of a compute node may issue highly concurrent writes to a PLOG, so the order-preserving queue may be scanned in parallel by multiple threads in the compute node.
This technical implementation controls order-preserving writes to the PLOG address space mainly through the compute node, which increases the compute node's load.
In the embodiments of the present application, after receiving a write request sent by a compute node, the network card of the storage node performs the write in the storage node and returns completion messages according to the address order of the write requests in the PLOG address space. That is, order-preserving writing of the storage space corresponding to the PLOG is controlled by the network card, so that upon receiving a completion message the compute node can return it directly to the application without performing ordering control itself, which reduces the load on the compute node.
The data processing method provided by the embodiment of the present application will be described in detail below.
Fig. 2 is a flowchart of a method for writing data in a computer system according to an embodiment of the present application. The method is executed jointly by a compute node and the network card of a storage node, where the storage node is one recorded in the PLOG's metadata as corresponding to the PLOG, and each storage node corresponding to the PLOG contains a storage space corresponding to the PLOG's address space. The compute node writes into the storage space corresponding to the PLOG in a storage node by sending a write request for the PLOG to that storage node's network card. Fig. 2 shows, as an illustration, only the steps executed by one storage node's network card; in the embodiments of the present application, a PLOG is not limited to corresponding to a single storage node. For example, to ensure the reliability of written data, a compute node may implement multi-copy writes or multi-segment writes across multiple storage nodes by writing to the PLOG, in which case the PLOG's metadata records multiple corresponding storage nodes. In the multi-copy storage mode, the compute node generates a write request to the PLOG for each copy, taking the data to be written as the write data, so that the data is written into the corresponding storage space of each of the storage nodes. In the multi-segment storage mode, the compute node first divides the data to be written into a plurality of segments (including data segments and check segments) based on a Redundant Array of Independent Disks (RAID) or Erasure Coding (EC) algorithm, and then generates a write request to the PLOG for each segment, so that the segments are written into the corresponding storage spaces of the respective storage nodes.
In both cases, the compute node sends write requests for the PLOG to the network cards of the multiple storage nodes, and the network card of each of those storage nodes executes the steps attributed to the storage node's network card in fig. 2 after receiving its write request.
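As a toy stand-in for the segmentation step, the following sketch splits data into equal data segments plus a single XOR check segment. Real systems use proper RAID or erasure codes (e.g., Reed–Solomon); this only illustrates the data-plus-check structure:

```python
def split_with_parity(data: bytes, n_data: int) -> list[bytes]:
    """Split data into n_data equal-size data segments plus one XOR
    check segment (toy code; assumes len(data) divides evenly)."""
    assert len(data) % n_data == 0
    size = len(data) // n_data
    frags = [data[i * size:(i + 1) * size] for i in range(n_data)]
    parity = bytearray(size)
    for frag in frags:
        for i, b in enumerate(frag):
            parity[i] ^= b           # each parity byte = XOR of the data bytes
    return frags + [bytes(parity)]
```

With one XOR check segment, any single lost data segment can be rebuilt from the remaining segments, which is why each segment can be written to a different storage node.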
FIG. 3 is a schematic diagram of a process for a compute node to perform multiple copy writes to two storage nodes based on a PLOG 1. The computing node in fig. 3 may be any one of a plurality of computing nodes in a computer system, and the storage node Sa and the storage node Sb in fig. 3 are two storage nodes corresponding to the PLOG1 to which the computing node will write. The method shown in fig. 2 will be described below in connection with fig. 3.
As shown in FIG. 2, first, at step S202, the compute node generates a write request i to PLOG 1.
Here, i is the number of the write request. Assume that write requests are generated in the compute node sequentially from the write instructions received from the upper-layer application and are numbered sequentially; the numbers i of write requests to PLOG1 may then increase from 0, e.g., write requests to PLOG1 are, in order, write request 0, write request 1, write request 2, and so on. It should be understood that write request numbers are not limited to being determined in this way; they may also be generated sequentially according to a predetermined rule. For example, the numbers i of write requests to PLOG1 may increase from 0 in steps of 2, so that write requests to PLOG1 are, in order, write request 0, write request 2, write request 4, and so on.
An upper-layer application in the compute node may initiate write instructions to PLOG1 concurrently from multiple threads. Assume the compute node CPU first receives an instruction to write data A of length 10 bytes to PLOG1, and then an instruction to write data B of length 100 bytes to PLOG1. Following the order in which the write instructions are received, the CPU first generates write request 1 for the instruction to write data A. Specifically, as shown in the computing node in fig. 3, the CPU first determines from the existing write request numbers that the number of the write request for writing data A is 1, i.e., that it is write request 1 in fig. 3. The CPU then looks up the metadata of PLOG1 cached in memory based on the PLOG identifier (i.e., PLOG1), reads the value of the variable offset therein (the value read is denoted offset1 below) as the start address of address 1 assigned to data A, and mutually exclusively modifies the value of the variable offset in the metadata to offset1 + 10. As shown in fig. 3, assuming offset1 is 10, the CPU takes 10 as the start address assigned to write request 1 and modifies the value of the variable offset to 10 + 10 = 20; that is, the CPU assigns the address space from address 10 to address 20 in PLOG1 (hereinafter referred to as address 1, the address space labeled a in PLOG1 in fig. 3) to write request 1. The mutually exclusive modification of the variable offset by the CPU may be achieved by locking the variable offset, or by an atomic fetch-and-add (FAA) operation, which guarantees that a series of operations is not interrupted and is thus mutually exclusive with respect to other threads. The compute node CPU can thus generate write request 1, which includes the request number "1", the data A to be written, and the write address of data A (address 1).
Address 1 may be represented by its start address and its length; thus, offset1 may be included in write request 1 to indicate address 1.
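The address allocation described above — reading the variable offset as the start address of the new write request and atomically advancing it by the data length — can be sketched as follows. This is an illustrative model only (the class and field names are hypothetical), with a lock standing in for the FAA atomic operation:

```python
import threading

class PlogAllocator:
    """Sketch of the compute-node CPU allocating write addresses in a PLOG.

    The lock emulates the mutually exclusive update (e.g. an atomic
    fetch-and-add, FAA) described in the text; all names are illustrative.
    """
    def __init__(self, initial_offset=0, first_number=1):
        self.offset = initial_offset   # the variable `offset` in the PLOG metadata
        self.next_number = first_number
        self._lock = threading.Lock()

    def allocate(self, data: bytes) -> dict:
        with self._lock:               # fetch-and-add: read old offset, then bump it
            start = self.offset
            self.offset += len(data)
            number = self.next_number
            self.next_number += 1
        # The write request carries its number, the data, and the write address,
        # where the address is represented by its start address and length.
        return {"number": number, "data": data, "start": start, "length": len(data)}

# Matches the example in the text: offset1 == 10 before data A is written.
plog1 = PlogAllocator(initial_offset=10)
req1 = plog1.allocate(b"A" * 10)    # data A, 10 bytes -> address space 10..20
req2 = plog1.allocate(b"B" * 100)   # data B, 100 bytes -> address space 20..120
```

Because the read-and-advance of offset is a single mutually exclusive step, two threads allocating concurrently can never be assigned overlapping address spaces in the PLOG.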
After generating write request 1, the CPU in the computing node may generate write request 2 for writing data B in the same manner. Since write request 2 follows write request 1, the CPU allocates to it address 2, 100 bytes in length and starting at 20 (the address space labeled b in PLOG1 in fig. 3). The CPU thus generates write request 2, which includes the number "2", data B, and address 2.
In step S204, the computing node sends a write request i to the storage node network card.
In this embodiment of the present application, the computing node sends write request i through its network card (NIC) to the network card of each storage node corresponding to PLOG1 using RDMA. As shown in fig. 3, to implement RDMA transmission, an upper-layer application in the computing node establishes, through the computing node's network card and the network card of each storage node, a connection channel with the application in that storage node; the connection channel includes a Send Queue (SQ) in the NIC of the computing node and a Receive Queue (RQ) in the NIC of the storage node. Specifically, as shown in fig. 3, the NIC of the computing node includes an SQa queue established with respect to storage node Sa and an SQb queue established with respect to storage node Sb. The NICa of storage node Sa includes an RQa queue established with respect to the compute node, and the NICb of storage node Sb includes an RQb queue established with respect to the compute node.
After the computing node generates write request 1 and write request 2 as described above, in order to transmit them to storage node Sa and storage node Sb corresponding to PLOG1, the CPU in the computing node sends write request 1 and write request 2 to the NIC in parallel from multiple threads, and in the NIC, the recording of write request 1 in the SQa queue, of write request 2 in the SQa queue, of write request 2 in the SQb queue, and of write request 1 in the SQb queue likewise proceeds in parallel. As a result, write request 1 and write request 2 may be recorded in that order in the SQa queue while write request 2 and write request 1 are recorded in that order in the SQb queue; in fig. 3, write request 1 is represented by the box labeled 1 and write request 2 by the box labeled 2 in each queue.
The NIC of the computing node sends the write requests in each SQ queue, in their recorded order, to the NIC of the storage node corresponding to that queue, and the NIC of each storage node records the received write requests in its RQ queue in the order of reception. Specifically, the NIC of the computing node sends write request 1 and then write request 2 from the SQa queue to the NICa of storage node Sa, so that write request 1 and write request 2 are recorded in that order in the RQa queue of the NICa; it sends write request 2 and then write request 1 from the SQb queue to the NICb of storage node Sb, so that write request 2 and write request 1 are recorded in that order in the RQb queue of the NICb.
In step S206, the NIC of the storage node writes data to the storage node according to the write request i.
Referring to fig. 3, the NICa of storage node Sa performs writes according to the order in which the write requests were received in the RQa queue. Specifically, the NICa stores the start address of the storage space in storage node Sa corresponding to PLOG1. After receiving write request 1 and write request 2 in the RQa queue in sequence, the NICa first determines the write address of write request 1 in storage node Sa from address 1 in write request 1 and the start address of the corresponding storage space. For example, as described above, the start address of address 1 is offset1 and the length of data A is 10; since offset1 is an offset relative to address 0 of PLOG1, the start address of the write address of write request 1 in storage node Sa is the start address of the storage space plus offset1, and the length of the write address is the length of data A (i.e., 10 bytes). After determining the write address of write request 1 in the storage node, the NICa writes data A in write request 1 to that address in storage node Sa by an RDMA one-sided write. After writing data A to storage node Sa according to write request 1, the NICa records information of write request 1 in a Write Complete (WC) queue WCa in the NICa (shown as the circle labeled 1 in fig. 3); this information indicates the position of the address in write request 1 within the PLOG, and thereby enables write requests accessing PLOG1 to be returned to the computing node in order. The WCa queue may be stored in a storage unit of the NICa or held in a hardware queue of the NICa; that is, the WCa queue may be implemented in software or in hardware.
In one embodiment, the information of write request 1 recorded in the WCa queue is the number of write request 1; since the computing node assigns numbers and write addresses to write requests based on the order in which the write instructions are received, the number of write request 1 represents the position of the write address of write request 1 in PLOG1. In another embodiment, the information of write request 1 recorded in the WCa queue is address information of write request 1, for example the above-mentioned offset1, which likewise indicates the position of the write address of write request 1 in PLOG1.
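The address computation performed by the storage-node NIC above amounts to adding the PLOG offset carried in the write request to the stored start address of the corresponding storage space. A minimal sketch, in which the base-address value is purely illustrative:

```python
def node_write_address(plog_base: int, plog_offset: int, length: int) -> tuple:
    """Map a write address inside the PLOG (an offset relative to PLOG
    address 0) to the address range in the storage node's storage space.

    `plog_base` is the start address of the storage space corresponding to
    the PLOG in this storage node, a value the NIC stores; the concrete
    number used below is a hypothetical example.
    """
    start = plog_base + plog_offset   # start = storage-space start + offset1
    return (start, length)            # write `length` bytes starting at `start`

# Matches the text: offset1 = 10, data A is 10 bytes long.
start, length = node_write_address(plog_base=0x1000, plog_offset=10, length=10)
```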
Similarly, after writing data A to storage node Sa according to write request 1, the NICa writes data B to storage node Sa according to write request 2 and records information of write request 2 (e.g., the number of write request 2) in the WCa queue. Likewise, following the order of write requests in the RQb queue, the NICb of storage node Sb first writes data B to storage node Sb according to write request 2 and records information of write request 2 in the WCb queue, then writes data A to storage node Sb according to write request 1 and records information of write request 1 in the WCb queue.
In step S208, the NIC of the storage node determines that all write requests to PLOG1 whose write addresses precede that of write request i have been completed, and in step S210, the NIC of the storage node returns to the compute node the information that write request i is complete.
In the embodiment of the application, after the network card of a storage node corresponding to the PLOG writes into the storage node according to a write request, it does not immediately return the write request to the computing node. Instead, it determines whether the write is an order-preserving write, that is, whether all write requests to the PLOG whose write addresses precede that of the write request have been completed: if they have all been completed, the write of the write request is an order-preserving write; otherwise it is not.
In one embodiment, an order-preserving variable is set in the network card of the storage node to record the write request with the smallest number, or the smallest write address, among the outstanding write requests. Assume the order-preserving variable corresponds to write request numbers and that the numbers of write requests to the PLOG increase sequentially from 0; then the initial value of the order-preserving variable is 0, and each time a write request is completed in the network card of the storage node, the value of the order-preserving variable is incremented by 1, so that it indicates the smallest-numbered outstanding write request. The embodiment of the present application is not limited to incrementing the order-preserving variable by 1 after each completed write request as described above; for example, if the write request numbers change according to a predetermined rule rather than increasing sequentially, the value of the order-preserving variable may be modified according to that rule after each completed write request. In another embodiment, the order-preserving variable corresponds to the start address of the write address in a write request. Write requests are assigned addresses starting from address 0 in the PLOG, so the initial value of the order-preserving variable is 0, and each time a write request is completed in the network card of the storage node, the value of the order-preserving variable is increased by the length of the write data of that request; the updated value then indicates the start address of the write address of the next write request.
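The number-based variant of the order-preserving variable can be sketched as follows. This is an illustrative model (class and method names are hypothetical), not the patent's implementation: a finished write is held in the NIC and only returned to the compute node once every smaller-numbered write request has been returned.

```python
class OrderPreserver:
    """Sketch of the number-based order-preserving variable in a storage-node NIC.

    `expected` plays the role of the order-preserving variable: it holds the
    smallest number among outstanding write requests (starting at 1 here, as
    in the document's example, though the text also describes starting at 0).
    """
    def __init__(self, first_number=1):
        self.expected = first_number   # the order-preserving variable
        self.finished = set()          # written to storage, not yet returned

    def complete(self, number: int) -> list:
        """Record that write request `number` has been written, and return the
        numbers whose completion messages may now be sent back, in address order."""
        self.finished.add(number)
        ready = []
        while self.expected in self.finished:
            self.finished.remove(self.expected)
            ready.append(self.expected)
            self.expected += 1         # modified after each returned request
        return ready

# Storage node Sb in fig. 3: write request 2 finishes before write request 1.
sb = OrderPreserver(first_number=1)
held = sb.complete(2)   # not order-preserving yet: completion of 2 is held back
sent = sb.complete(1)   # now requests 1 and 2 are both returned, in order
```

Here `held` is empty because request 2's write address is not yet order-preserving, and `sent` contains both 1 and 2 once request 1 completes, mirroring the NICb behavior described for fig. 3.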
As shown in fig. 3, in storage node Sa, after the NICa has recorded the information of write request 1 and write request 2 in the WCa queue in sequence, the NICa determines from the order-preserving variable whether each write request in the WCa queue is an order-preserving write. Assume the value of the order-preserving variable is 1 when the information of write request 1 is examined; this means that write request 1 is the smallest-numbered outstanding write request, and since, as described above, the number of a write request indicates the position of its address in the PLOG, it can be determined that all write requests with write addresses before that of write request 1 have been completed. Based on this determination, the NICa may return to the compute node that write request 1 is complete. A Completion Queue (CQ) CQa corresponding to storage node Sa is provided in the NIC of the computing node to receive write request completion information from the NICa of storage node Sa; the completion information includes, for example, the number or the write address of the write request. As shown in fig. 3, after receiving the completion information of write request 1 from the NICa of storage node Sa, the NIC of the computing node records the information of write request 1 (its number, write address, or the like) in the CQa queue, shown by the circle labeled 1 in the CQa queue in fig. 3. After returning the completion information of write request 1 to the compute node, the NICa deletes the information of write request 1 from the WCa queue (the dashed circle labeled 1 in WCa in fig. 3 indicates that this information has been deleted) and increments the order-preserving variable by 1, i.e., modifies it to 2.
Thereafter, the NICa similarly returns write request 2 completion information to the compute node and modifies the value of the order-preserving variable to 3.
In storage node Sb, after the NICb has recorded the information of write request 2 and write request 1 in the WCb queue in sequence, the NICb first determines, following the recording order in WCb, whether write request 2 is an order-preserving write. Since write request 1 has not yet been returned to the compute node at this point, the value of the order-preserving variable is at most 1; assume it is determined to be 1. The NICb therefore does not return write request 2 to the compute node, but continues to check whether the subsequent write request recorded in WCb is an order-preserving write, i.e., it makes the determination for write request 1 as shown in fig. 3. Since the value of the order-preserving variable is 1 at this time, the write of write request 1 is an order-preserving write; the NICb therefore returns to the compute node the information that write request 1 is complete, deletes the information of write request 1 from WCb, and modifies the order-preserving variable to 2. The NIC of the computing node is likewise provided with a CQ queue corresponding to storage node Sb (the CQb queue); after receiving the information that write request 1 is complete from storage node Sb, the NIC of the computing node records the information of write request 1 (its number, write address, or the like) in the CQb queue.
In step S212, the computing node determines that the write request i is complete.
If the PLOG corresponds to a single storage node, then after receiving the completion information of the write request from that storage node, the computing node can determine that the write of the write request is an order-preserving write, conclude that the write request is complete, and return the write request to the upper-layer application.
If the PLOG corresponds to at least two storage nodes, as shown in fig. 3, the NIC in the computing node polls the CQa queue and the CQb queue for information of write requests. For example, once the NIC has polled the information of write request 1 in both the CQa queue and the CQb queue and notified the CPU of the computing node, the CPU can determine that write request 1 was written in an order-preserving manner in both storage node Sa and storage node Sb corresponding to PLOG1, conclude that write request 1 is complete, and return write request 1 to the upper-layer application. Meanwhile, the NIC in the compute node has polled the information of write request 2 only in the CQa queue and notified the CPU, but has not yet polled it in the CQb queue, so the CPU cannot yet determine that write request 2 is complete. In one embodiment, after the computing node's NIC polls the information of write request 2 returned by one storage node and notifies the CPU, the CPU records the reply count of write request 2 as 1; after the NIC polls the information of write request 2 returned by the other storage node and notifies the CPU, the CPU modifies the reply count of write request 2 to 2 and can determine, based on the reply count, that write request 2 is complete.
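The reply-count bookkeeping described above can be sketched as follows; the function name and data structure are hypothetical, illustrating only that a write request completes when every storage node backing the PLOG has reported it:

```python
def all_replicas_done(reply_counts: dict, request_number: int, num_nodes: int) -> bool:
    """Sketch of the compute node's reply count: increment the count for
    `request_number` when one storage node's CQ queue reports it, and report
    completion once all `num_nodes` storage nodes have replied."""
    reply_counts[request_number] = reply_counts.get(request_number, 0) + 1
    return reply_counts[request_number] == num_nodes

# Write request 2 against a PLOG backed by two storage nodes (fig. 3).
counts = {}
done_after_sa = all_replicas_done(counts, 2, num_nodes=2)  # CQa has reported
done_after_sb = all_replicas_done(counts, 2, num_nodes=2)  # CQb has reported
```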
In the above-described embodiment, whether a write request is an order-preserving write is determined based on an order-preserving variable in the NIC of the storage node, but the embodiment of the present application is not limited thereto; it includes any implementation for determining whether a write request is an order-preserving write.
In the foregoing, the data writing method provided in the embodiment of the present application has been described mainly by taking the multi-copy storage mode as the example for guaranteeing data reliability. The embodiment can also be applied to scenarios in which data reliability is guaranteed by slicing the data for storage. Fig. 4 is a schematic diagram of a process in which a computing node performs multi-slice writing on three storage nodes based on PLOG1.
As shown in fig. 4, after receiving from an upper-layer application an instruction to write data A of length 10 bytes to PLOG1, the CPU of the computing node assigns the write request for data A the number "1", and may divide data A into a plurality of data slices according to a preset EC algorithm or RAID algorithm and compute the check slices of those data slices. Referring to fig. 4, the CPU may divide data A equally into two data slices D1 and D2 of the same size (shown as small white rectangles in fig. 4), each 5 bytes long, and then compute one check slice C1 of the two data slices (shown as a small gray rectangle in fig. 4) based on the EC or RAID algorithm, thereby obtaining the three slices D1, D2, and C1 of data A; according to the slicing algorithm, the check slice C1 is also 5 bytes long, i.e., the three slices are all of identical length. The three slices of data A take the form of 2 data slices and 1 check slice, so that when the three slices are stored in three storage nodes respectively, any one of the three storage nodes may be allowed to fail. Specifically, data A can be spliced together directly as D1 + D2, and if the storage node storing D1 fails, data A can be recovered from D2 + C1 based on the EC or RAID algorithm.
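As a concrete stand-in for the EC/RAID computation (the patent does not fix a particular algorithm), a 2 + 1 scheme with an XOR parity slice behaves exactly as described: data A splices together from D1 + D2, and D1 can be recovered from D2 and C1 if its storage node fails. A minimal sketch under that assumption:

```python
def split_with_parity(data: bytes) -> tuple:
    """Split `data` into two equal data slices and one XOR check slice -- a
    minimal stand-in for the EC/RAID slicing described in the text."""
    assert len(data) % 2 == 0
    half = len(data) // 2
    d1, d2 = data[:half], data[half:]
    c1 = bytes(a ^ b for a, b in zip(d1, d2))   # check slice, same length
    return d1, d2, c1

def recover_d1(d2: bytes, c1: bytes) -> bytes:
    """If the node holding D1 fails: D1 = D2 XOR C1, so A = D1 + D2."""
    return bytes(a ^ b for a, b in zip(d2, c1))

data_a = b"0123456789"                  # data A, 10 bytes
d1, d2, c1 = split_with_parity(data_a)  # three 5-byte slices
assert d1 + d2 == data_a                # direct splice
assert recover_d1(d2, c1) + d2 == data_a  # recovery after losing D1
```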
After obtaining the slices of data A, the CPU allocates an address in PLOG1 for write request 1. Unlike the multi-copy writing described above, in this embodiment write request 1 is assigned an address in PLOG1 based on the size of an individual slice. As shown in fig. 4, when data A is divided into two data slices and one check slice, the size of each slice is half the size of data A (shown schematically as A/2 in fig. 4), i.e., 5 bytes. Assuming the value of offset recorded before the address is allocated to write request 1 is offset1, the CPU allocates to write request 1 an address in PLOG1 of length 5 bytes with offset1 as its start address.
After assigning the address in PLOG1 for write request 1, the CPU generates the write request 1 to be sent to each storage node. Unlike the multi-copy writing described above, since the three slices into which data A is divided differ from one another, the write request 1 generated for each storage node corresponding to PLOG1 carries different write data. For example, if the three storage nodes are Sa, Sb, and Sc (not shown in fig. 4), the write request 1 generated for storage node Sa (hereinafter write request 1a) includes the number of write request 1, the address assigned to write request 1, and the data slice D1; the write request 1 generated for storage node Sb (hereinafter write request 1b) includes the same request number and write address as write request 1a together with the data slice D2; and the write request 1 generated for storage node Sc (hereinafter write request 1c) includes the request number and write address of write request 1 together with the check slice C1. As shown schematically in fig. 4, after generating write request 1a including data D1, write request 1b including data D2, and write request 1c including data C1, the CPU transmits them to the NIC of the computing node, so that the write requests 1 are recorded in the SQa, SQb, and SQc queues of the NIC for transmission to storage nodes Sa, Sb, and Sc respectively. As above, the NIC of the computing node is further provided with CQa, CQb, and CQc queues, so that the CPU can determine that write request 1 is complete once it learns that the CQa, CQb, and CQc queues all include the information of write request 1.
In the multi-slice writing mode, the method steps executed in each storage node are those described above with reference to fig. 2 and fig. 3 and are not repeated here. As in the multi-copy writing mode, a storage node returns a write request to the computing node only after determining that its write is an order-preserving write, which saves CPU resources on the computing node, reduces the latency of returning write requests to the upper-layer application, and improves the efficiency of the computer system.
Fig. 5 is an architecture diagram of a data processing apparatus 500 provided in an embodiment of the present application, where the apparatus 500 is deployed in a network card of a storage node, the storage node is connected to a computing node through the network card, and the apparatus is configured to implement the data processing methods shown in fig. 2 to fig. 4, and the apparatus 500 includes:
a receiving unit 51, configured to receive a first write request sent by a computing node, where the first write request includes a first write address, and the first write address is an address where data to be written is written in an address space accessed by the first write request;
a writing unit 52, configured to write the data to be written into the storage node according to the first writing address;
a notifying unit 53, configured to notify the computing node that the first write request has been completed when it is determined that all write requests accessing the address space and having write addresses before the first write address have been completed.
In one embodiment, the apparatus 500 further includes a recording unit 54, configured to record information of the first write request after writing the data to be written into the storage node, where the information indicates an address order of the first write addresses in the address space, and the notification unit 53 is further configured to determine, based on the recorded information of the first write request, that all write requests accessing the address space and having write addresses before the first write address have been completed.
In one embodiment, the information of the first write request is a request number of the first write request, and the request number of the first write request is assigned by the computing node based on a generation order of the first write request, where the notification unit 53 is further configured to: determining that the request number of the first write request is the smallest number of the recorded numbers of the outstanding write requests accessing the address space.
In an embodiment, the information of the first write request is the first write address, wherein the notification unit 53 is further configured to: determining the first write address as a minimum write address of write addresses included in the recorded outstanding write requests accessing the address space.
Fig. 6 is a schematic structural diagram of a network device 600 provided in an embodiment of the present application, where the network device 600 is, for example, a network card, and includes a processing unit 61 and a storage unit 62, where the storage unit stores executable codes, and when the processing unit executes the executable codes, the data processing method shown in fig. 2 to 4 is implemented.
Fig. 7 is a schematic structural diagram of a network device 700 according to an embodiment of the present application. As shown in fig. 7, the network device 700 includes: a communication interface 71 for data transmission with the storage node and the computing node; a processing unit 72, configured to process data received by the communication interface, so as to implement the data processing method shown in fig. 2 to 4, where the processing unit 72 is, for example, in the form of a logic circuit.
It is understood that the computing nodes and the storage nodes may be physical servers or cloud servers (e.g., virtual servers). Fig. 8 is a schematic diagram of a cloud service system 800 according to an embodiment of the present application. Referring to fig. 8, the system 800 includes: a computing device 801 and a storage device 802. The computing device 801 includes a hardware layer 8016, a Virtual Machine Monitor (VMM) 8011 running on top of the hardware layer 8016, and a plurality of virtual machines (VMi) 8012. Any of the virtual machines 8012 may serve as a virtual compute node of the cloud service system 800. The storage device 802, similarly to the computing device 801, includes a hardware layer, a Virtual Machine Monitor (VMM) running above the hardware layer, and a plurality of virtual machines (VMj), any of which may serve as a virtual storage node of the cloud service system 800. The composition of computing device 801 is described in detail below as an example.
Specifically, the virtual machine 8012 is a virtual computer (server) that is simulated on a common hardware resource by virtual machine software, and an operating system and an application program can be installed on the virtual machine, and the virtual machine can also access network resources. For applications running in a virtual machine, the virtual machine operates as if it were a real computer.
The hardware layer 8016 is the hardware platform on which the virtualized environment runs, abstracted from the hardware resources of one or more physical hosts. The hardware layer may include various hardware; for example, the hardware layer 8016 includes a processor 8014 (e.g., a CPU) and a memory 8015, and may further include a network card (i.e., NIC) 8013, high-speed/low-speed Input/Output (I/O) devices, and other devices with specific processing functions. The memory 8015 may be a volatile memory, such as a random-access memory (RAM) or a dynamic random-access memory (DRAM); the memory 8015 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or a storage class memory (SCM); the memory 8015 may also comprise a combination of the above kinds of memory. The virtual machine 8012 runs an executable program based on the VMM 8011 and the hardware resources provided by the hardware layer 8016 to implement the method steps performed by the compute node in the above embodiments. For brevity, no further description is provided herein.
According to the technical solution provided by the embodiment of the application, the network card of the storage node controls order-preserving writing into the storage space corresponding to the PLOG, so that after receiving the completion message of a write request, the computing node can return it directly to the application without performing order-preserving control itself, which reduces the load on the computing node.
It is to be understood that the terms "first," "second," and the like, herein are used for descriptive purposes only and not for purposes of limitation, to distinguish between similar concepts.
It will further be appreciated by those of ordinary skill in the art that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

1. A computer system comprising a compute node and a storage node, said compute node being connected to said storage node by a network card of said storage node,
the computing node is used for sending a write request for accessing a first address space to the network card, wherein the first address space corresponds to a second address space in the storage node;
the network card is used for writing the data to be written corresponding to the write request into the second address space in the storage node, and returning a completion message of the write request to the computing node according to the address sequence of the write address corresponding to the write request in the first address space.
2. The computer system according to claim 1, wherein the network card, when configured to return the completion message of the write request according to the address order of the write address corresponding to the write request in the first address space, is specifically configured to: when determining that all write requests accessing the first address space and having write addresses before the write address corresponding to the write request are completed, returning a completion message of the write request to the computing node.
3. The computer system according to claim 1 or 2, wherein the storage node comprises a plurality of storage nodes, and the computing node is configured to send the write request to a network card of each of the plurality of storage nodes, and determine that the write request is completed after receiving a completion message of the write request sent by the plurality of storage nodes.
4. The computer system according to any one of claims 1 to 3, wherein the network card is further configured to record information of the write request, the information indicating the address order of the write address of the write request in the first address space, and to return the completion message for the write request to the compute node according to the address order determined from the recorded information of the write request.
5. The computer system according to claim 4, wherein the information of the write request is a request number of the write request, the request number being assigned by the compute node according to the order in which write requests are generated, and wherein, in determining that all write requests that access the first address space and whose write addresses precede the write address corresponding to the write request have completed, the network card is specifically configured to: determine that the number of the write request is the smallest among the numbers, recorded in the network card, of the outstanding write requests that access the first address space.
6. The computer system according to claim 4, wherein the information of the write request is the write address corresponding to the write request, and wherein, in determining that all write requests that access the first address space and whose write addresses precede the write address corresponding to the write request have completed, the network card is specifically configured to:
determine that the write address corresponding to the write request is the smallest among the write addresses of the outstanding write requests, recorded in the network card, that access the first address space.
7. The computer system according to any one of claims 1 to 6, wherein the network card is further configured to record the information of the write request in an outstanding-write-request queue in the network card upon determining that not all write requests that access the first address space and whose write addresses precede the write address corresponding to the write request have completed.
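The request-number variant of the ordering rule (claims 2, 5 and 7) can be sketched as follows. This is a minimal illustration under the assumption that the compute node numbers write requests consecutively; the class and method names are invented for the sketch and do not appear in the patent:

```python
import heapq

class CompletionTracker:
    """Per-address-space tracker on the network card: completion
    messages are released only when every write request with a
    smaller request number has already finished writing its data."""

    def __init__(self):
        self.outstanding = []  # min-heap of outstanding request numbers
        self.done = set()      # numbers whose data write has finished

    def submit(self, req_no):
        """Record an incoming write request (the outstanding queue of claim 7)."""
        heapq.heappush(self.outstanding, req_no)

    def complete(self, req_no):
        """Mark req_no's data as written; return the request numbers whose
        completion messages may now be sent back to the compute node."""
        self.done.add(req_no)
        released = []
        # Release completions strictly in request-number order: stop at
        # the first outstanding request whose data is not yet written.
        while self.outstanding and self.outstanding[0] in self.done:
            n = heapq.heappop(self.outstanding)
            self.done.remove(n)
            released.append(n)
        return released
```

For example, if requests 1, 2 and 3 are submitted and request 2 finishes first, its completion message is held back until request 1 also finishes, at which point both are released together.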
8. A data processing method performed by a network card of a storage node, the storage node being connected to a compute node through the network card, the method comprising:
receiving a first write request sent by the compute node, the first write request comprising a first write address, the first write address being the address, within the address space accessed by the first write request, at which data to be written is to be written;
writing the data to be written into the storage node according to the first write address; and
notifying the compute node that the first write request has completed upon determining that all write requests that access the address space and whose write addresses precede the first write address have completed.
9. The method according to claim 8, further comprising, after writing the data to be written into the storage node: recording information of the first write request, the information indicating the address order of the first write address in the address space, wherein determining that all write requests that access the address space and whose write addresses precede the first write address have completed comprises making that determination based on the recorded information of the first write request.
10. The method according to claim 9, wherein the information of the first write request is a request number of the first write request, the request number being assigned by the compute node according to the order in which write requests are generated, and wherein determining, based on the recorded information of the first write request, that all write requests that access the address space and whose write addresses precede the first write address have completed comprises:
determining that the request number of the first write request is the smallest among the recorded numbers of the outstanding write requests that access the address space.
11. The method according to claim 9, wherein the information of the first write request is the first write address, and wherein determining, based on the recorded information of the first write request, that all write requests that access the address space and whose write addresses precede the first write address have completed comprises:
determining that the first write address is the smallest among the write addresses of the recorded outstanding write requests that access the address space.
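The write-address variant of the method (claims 8 and 11) can likewise be sketched. This is an illustrative model only, with invented names, and it assumes each outstanding write in the address space has a distinct write address:

```python
class AddressOrderTracker:
    """Sketch of claims 8 and 11: the network card records the write
    address of each outstanding request for an address space and
    notifies the compute node of a request's completion only once its
    address is the smallest recorded outstanding write address."""

    def __init__(self):
        self.outstanding = {}  # write_addr -> data written? (bool)

    def receive(self, write_addr):
        """Record an incoming write request by its write address."""
        self.outstanding[write_addr] = False

    def data_written(self, write_addr):
        """Called after the data at write_addr has been written; returns
        the addresses whose completion may now be reported, in address order."""
        self.outstanding[write_addr] = True
        notified = []
        # Notify in ascending address order, stopping at the first
        # outstanding address whose data is not yet written.
        while self.outstanding and self.outstanding[min(self.outstanding)]:
            a = min(self.outstanding)
            del self.outstanding[a]
            notified.append(a)
        return notified
```

So a write at the highest outstanding address that finishes first is held until every lower-addressed outstanding write in the same address space has also finished.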
12. A data processing apparatus deployed in a network card of a storage node, the storage node being connected to a compute node through the network card, the apparatus comprising:
a receiving unit configured to receive a first write request sent by the compute node, the first write request comprising a first write address, the first write address being the address, within the address space accessed by the first write request, at which data to be written is to be written;
a writing unit configured to write the data to be written into the storage node according to the first write address; and
a notification unit configured to notify the compute node that the first write request has completed upon determining that all write requests that access the address space and whose write addresses precede the first write address have completed.
13. The apparatus according to claim 12, further comprising a recording unit configured to record information of the first write request after the data to be written is written into the storage node, the information indicating the address order of the first write address in the address space, wherein the notification unit is further configured to determine, based on the recorded information of the first write request, that all write requests that access the address space and whose write addresses precede the first write address have completed.
14. The apparatus according to claim 13, wherein the information of the first write request is a request number of the first write request, the request number being assigned by the compute node according to the order in which write requests are generated, and wherein the notification unit is further configured to:
determine that the request number of the first write request is the smallest among the recorded numbers of the outstanding write requests that access the address space.
15. The apparatus according to claim 13, wherein the information of the first write request is the first write address, and wherein the notification unit is further configured to:
determine that the first write address is the smallest among the write addresses of the recorded outstanding write requests that access the address space.
16. A network device comprising a processing unit and a storage unit, the storage unit storing executable code which, when executed by the processing unit, implements the method of any one of claims 8 to 11.
17. A network device, comprising:
a communication interface configured to transmit data to and from a storage node and a compute node; and
a processing unit configured to process the data received through the communication interface so as to perform the method of any one of claims 8 to 11.
CN202011268999.XA 2020-11-13 2020-11-13 Method for processing data by using network card, network equipment and computer system Pending CN114489465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011268999.XA CN114489465A (en) 2020-11-13 2020-11-13 Method for processing data by using network card, network equipment and computer system


Publications (1)

Publication Number Publication Date
CN114489465A true CN114489465A (en) 2022-05-13

Family

ID=81490187


Country Status (1)

Country Link
CN (1) CN114489465A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination