CN112783698A - Method and device for managing metadata in storage system - Google Patents


Info

Publication number
CN112783698A
CN112783698A (application number CN202010021351.6A)
Authority
CN
China
Prior art keywords
metadata
data
storage
storage unit
written
Prior art date
Legal status
Pending
Application number
CN202010021351.6A
Other languages
Chinese (zh)
Inventor
Wang Chen (王晨)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2020/119929 priority Critical patent/WO2021088586A1/en
Publication of CN112783698A publication Critical patent/CN112783698A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2043 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share a common memory address space
    • G06F11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094 Redundant storage or storage space
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

In the method, after the storage system generates metadata corresponding to data to be written, it determines, from the plurality of storage units included in the storage system, a storage unit for storing the metadata, and stores the metadata in the at least two storage devices corresponding to the determined storage unit. Because each storage unit is mapped to physical storage space corresponding to at least two storage devices, when one of the storage devices corresponding to a storage unit fails, the metadata can be recovered from the remaining storage devices corresponding to that storage unit, thereby providing redundancy protection for the metadata.

Description

Method and device for managing metadata in storage system
The present application claims priority from the Chinese patent application with application number 201911072812.6, entitled "a hard disk", filed on November 5, 2019, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of storage technologies, and in particular, to a method and an apparatus for managing metadata in a storage system.
Background
In a storage system, in order to ensure the reliability of data and metadata stored in the storage system, redundancy protection is generally required for the data and the metadata.
Taking redundancy protection of metadata as an example, one approach is to create multiple metadata instances in physical storage space and store one copy of the metadata in each instance. When one metadata instance fails, the metadata of the data stored in the physical storage space can be obtained through the other metadata instances, thereby providing redundancy protection for that metadata. A metadata instance can be understood as program code that implements a value-added service based on the metadata, such as a snapshot service or a clone service for the metadata.
In this solution, redundancy protection of the metadata can be achieved only by creating multiple metadata instances, which makes the implementation complex.
Disclosure of Invention
The present application provides a method and an apparatus for managing metadata in a storage system, so as to simplify redundancy protection of the metadata.
In a first aspect, a method for managing metadata in a storage system is provided. The storage system includes a plurality of storage units, and each storage unit is mapped to physical storage space corresponding to at least two storage devices included in the storage system; that is, the storage unit is a logical storage unit. The method includes: generating metadata corresponding to data to be written; determining, from the plurality of storage units, a storage unit for storing the metadata; and storing the metadata into the at least two storage devices corresponding to the determined storage unit.
In the above technical solution, each storage unit is mapped to a physical storage space corresponding to at least two storage devices, so that when one storage device of the plurality of storage devices corresponding to a certain storage unit fails, the metadata can be recovered from the remaining storage devices corresponding to the storage unit, thereby implementing redundancy protection on the metadata. Therefore, in the embodiment of the application, a simpler method for performing redundancy protection on metadata is provided without creating a plurality of metadata instances storing the same metadata.
In one possible design, the storage unit may store the metadata in an append write manner.
Append writing improves the write efficiency of the metadata. Moreover, after new data is appended in the storage system, the old data (that is, the previously stored data) can simply be marked as invalid; because previously stored data is laid out consecutively, a run of consecutive old data that has all become invalid corresponds to a run of consecutive storage units that all require garbage collection, which reduces the overhead of garbage collection.
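The append-write behaviour described above can be sketched with a toy key-value metadata log (the `AppendLog` class and its method names are invented for illustration): an overwritten key invalidates its earlier entry rather than updating it in place, and consecutive invalid entries form cheap garbage-collection candidates.

```python
# Hypothetical sketch: new metadata is always appended at the tail, and an
# overwritten key invalidates its earlier entry instead of rewriting it.
class AppendLog:
    def __init__(self):
        self.entries = []   # (key, value) pairs in write order; None = invalid
        self.live = {}      # key -> index of the latest (valid) entry

    def append(self, key, value):
        if key in self.live:                  # old entry becomes invalid
            self.entries[self.live[key]] = None
        self.live[key] = len(self.entries)
        self.entries.append((key, value))

    def invalid_runs(self):
        """Lengths of runs of consecutive invalid entries (GC candidates)."""
        runs, run = [], 0
        for e in self.entries:
            if e is None:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
        return runs

log = AppendLog()
for k in ("a", "b", "c"):
    log.append(k, 1)
for k in ("a", "b", "c"):
    log.append(k, 2)   # the three old entries now form one consecutive invalid run
```

Rewriting all three keys leaves one run of three consecutive invalid entries, which mirrors how consecutive invalidated storage units can be collected together.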
In one possible design, before determining the storage unit for storing the metadata, a data write request for writing the data to be written into the storage system may be received, and a record item corresponding to the metadata may be generated according to the data write request and the metadata, where the record item includes a write data operation corresponding to the data write request and metadata updated after the write data operation is performed.
In this way, when a storage unit storing the metadata fails, the metadata that existed before the failure can be restored from the contents of the record items, which improves the stability of the storage system.
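A minimal sketch of this recovery idea, assuming each record item pairs the write operation with the metadata as updated after that operation (all field names here are invented): replaying the record items in order rebuilds the metadata that existed before a failure.

```python
# Hedged sketch: rebuild the metadata map by replaying record items in order.
def replay(record_items):
    metadata = {}
    for item in record_items:
        # Each record item carries the metadata updated after its write op.
        metadata.update(item["updated_metadata"])
    return metadata

record_items = [
    {"write_op": {"addr": 0, "len": 8}, "updated_metadata": {0: ("unit0", 0)}},
    {"write_op": {"addr": 8, "len": 8}, "updated_metadata": {8: ("unit0", 1)}},
]
recovered = replay(record_items)
```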
In one possible design, the metadata includes: the correspondence between the logical address and the physical address of each fragment of the data to be written, and the correspondence between the logical address of the data to be written and the logical address of each fragment contained in the data to be written, where the logical address of each fragment is the logical address corresponding to the storage unit occupied by that fragment.
Alternatively, the metadata includes: the correspondence between the logical address and the physical address of each copy of the data to be written, and the correspondence between the logical address of the data to be written and the logical address of each copy included in the data to be written, where the logical address of each copy is the logical address corresponding to the storage unit occupied by that copy.
The set of the logical addresses of the fragments included in the data to be written, or of the copies included in the data to be written, constitutes the logical address of the data to be written.
In this solution, the metadata can record different contents according to actual usage requirements, which improves the flexibility and applicability of the storage system.
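The two metadata layouts above share the same shape, which can be illustrated as follows (all field names and addresses are assumptions made for this sketch): each fragment or copy records a logical-address-to-physical-address pair, and the logical address of the data is the set of its pieces' logical addresses.

```python
# Illustrative sketch of the metadata content described above.
from dataclasses import dataclass

@dataclass
class PieceMetadata:
    logical_addr: tuple   # (storage unit id, offset) occupied by the piece
    physical_addr: tuple  # (storage node, storage device, block) backing it

# Two fragments of one piece of data to be written (hypothetical values).
fragments = [
    PieceMetadata(("unit0", 0), ("node1", "devA", 17)),
    PieceMetadata(("unit0", 1), ("node2", "devA", 4)),
]

# The logical address of the data is the set of its fragments' logical addresses.
data_logical_addr = {f.logical_addr for f in fragments}
```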
In one possible design, the storage system may also create a first metadata instance for business operations on metadata in a preset storage unit.
In the above technical solution, the metadata instance does not perform a business operation on the metadata in the preset physical storage space any more, but performs an operation on the metadata in the preset storage unit, thereby providing a new metadata instance creation mode.
In one possible design, after the failure of the first metadata instance, a second metadata instance may be created, and the second metadata instance may access the metadata stored in the preset storage unit.
In the technical scheme, after the new metadata instance is created, the new metadata instance can directly use the metadata in the shared storage unit, so that the processes of copying and transmitting the metadata to the new metadata instance are reduced, the time delay of creating the new metadata instance can be reduced, and the efficiency is improved. Further, since metadata is not transmitted among a plurality of metadata instances, transmission resources can be saved.
In a second aspect, a management apparatus for metadata in a storage system is provided, where the management apparatus may be a management node or a management server, or an apparatus in the management node or the management server. The management apparatus comprises a processor for implementing the method described in the first aspect above. The management device may also include a memory for storing program instructions and data. The memory is coupled to the processor, and the processor can call and execute the program instructions stored in the memory for implementing any one of the methods described in the first aspect above.
In one possible design, the processor of the management device of the metadata executes program instructions in the memory to implement the following functions:
generating metadata corresponding to data to be written;
determining a storage unit for storing the metadata, wherein the storage system comprises a plurality of storage units, and each storage unit is mapped to a physical storage space corresponding to at least two storage devices included in the storage system;
and storing the metadata into at least two storage devices corresponding to the storage unit.
In one possible design, the storage unit stores the metadata in an append write manner.
In one possible design, the processor executes program instructions stored in the memory to implement the following functions:
receiving a data writing request, wherein the data writing request is used for writing the data to be written into the storage system;
generating a record item corresponding to the metadata according to the data writing request and the metadata; the record item comprises a data writing operation corresponding to the data writing request and metadata updated after the data writing operation is executed.
In a possible design, the description of the metadata is similar to that in the first aspect, and is not repeated here.
In one possible design, the processor executes program instructions stored in the memory to implement the following functions:
creating a first metadata instance, wherein the first metadata instance is used for performing business operation on metadata in a preset storage unit.
In one possible design, the processor executes program instructions stored in the memory to implement the following functions:
and after the first metadata instance fails, creating a second metadata instance, wherein the second metadata instance can access the metadata stored in the preset storage unit.
In a third aspect, a management apparatus for metadata in a storage system is provided, where the management apparatus may be a management node or a management server, or an apparatus in the management node or the management server. The management device may include a generating unit, a determining unit, and an executing unit, where the generating unit, the determining unit, and the executing unit may execute corresponding functions performed in any one of the design examples of the first aspect, specifically:
the generating unit is used for generating metadata corresponding to the data to be written;
a determining unit, configured to determine a storage unit for storing the metadata, where the storage system includes multiple storage units, and each storage unit is mapped to a physical storage space corresponding to at least two storage devices included in the storage system;
and the execution unit is used for storing the metadata into at least two storage devices corresponding to the storage unit.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of any one of the first aspects.
In a fifth aspect, the present application provides a computer program product, which stores a computer program, the computer program comprising program instructions, which, when executed by a computer, cause the computer to perform the method of any one of the first aspect.
In a sixth aspect, the present application provides a chip system, which includes a processor and may further include a memory, and is configured to implement the method of the first aspect. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
In a seventh aspect, an embodiment of the present application provides a storage system, where the storage system includes the management apparatus for metadata of the storage system described in any one of the second aspect and the second aspect, or the storage system includes the management apparatus for metadata of the storage system described in any one of the third aspect and the third aspect.
For the advantageous effects of the second to seventh aspects and their implementations, reference may be made to the description of the advantageous effects of the method of the first aspect and its implementations.
Drawings
FIG. 1 is a diagram illustrating an example of an application scenario in accordance with an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an example of a storage unit provided in an embodiment of the present application;
FIG. 3 is a flow chart of a process of storing data in an embodiment of the present application;
FIG. 4 is a schematic diagram of an example of a plurality of stripes included in a storage unit in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an example of a mapping relationship between a storage unit and a storage device in an embodiment of the present application;
FIG. 6 is a flowchart of a process of storing metadata in an embodiment of the present application;
FIG. 7 is a diagram illustrating an example of writing metadata to a storage unit in an embodiment of the present application;
FIG. 8 is a diagram showing an example of a metadata structure in an embodiment of the present application;
FIG. 9 is a flowchart of a garbage collection process for metadata in an embodiment of the present application;
FIG. 10 is a flowchart of a process for managing metadata instances in an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an example of a management apparatus for metadata of a storage system provided in an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another example of a management apparatus for metadata of a storage system provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In the embodiments of the present application, "a plurality" means two or more; in view of this, "a plurality" may also be understood as "at least two". "At least one" is to be understood as one or more, for example one, two, or more. For example, "including at least one" means including one, two, or more, without limiting which ones are included: if at least one of A, B, and C is included, then A, B, C, A and B, A and C, B and C, or A and B and C may be included. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists. In addition, unless otherwise specified, the character "/" generally indicates an "or" relationship between the associated objects.
Unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the sequence, timing, priority, or importance of the plurality of objects.
The metadata management method provided in the embodiments of the present application may be applied to various storage systems, for example, a centralized storage system, a distributed storage system, or a cloud storage system such as a public cloud or a private cloud, which is not limited herein. For ease of description, the following takes application of the metadata management method to a distributed storage system as an example.
Please refer to FIG. 1, which is a schematic diagram of an example of an application scenario provided in an embodiment of the present application. FIG. 1 includes a client server 100 and a storage system 110, and the client server 100 communicates with the storage system 110. The storage system 110 includes a management module 111 and at least one storage node 112 (FIG. 1 takes 3 storage nodes 112 as an example, namely storage node 1 to storage node 3). The management module 111 is configured to write data into the storage nodes 112 and to read data from at least one storage node 112.
The storage node 112 in FIG. 1 may be a stand-alone server, or may be a storage array including at least one storage device, where the storage device may be a Hard Disk Drive (HDD) device, a Solid State Drive (SSD) device, a Serial Advanced Technology Attachment (SATA) disk device, a Small Computer System Interface (SCSI) disk device, a Serial Attached SCSI (SAS) disk device, or a Fibre Channel (FC) disk device.
The management module 111 and the at least one storage node 112 in fig. 1 may be independent devices, for example, the management module 111 is an independent server; alternatively, the management module 111 may also be a software module, and is deployed on one of the storage nodes 112, for example, the management module 111 and one of the storage nodes 112 run on the same server, and the specific forms of the management module 111 and the storage node 112 are not limited herein.
In this embodiment, each storage node includes at least one storage unit, and the storage unit is a segment of logical space obtained by mapping physical spaces of storage devices included in the storage node, that is, an actual physical space still comes from a plurality of storage nodes.
FIG. 2 is a schematic structural diagram of an example of a storage unit provided in this embodiment. In FIG. 2, a storage unit is a set including a plurality of logical blocks. A logical block is a logical-space concept obtained by dividing the space of the storage devices; the size of one logical block may be, for example, 4KB or 8KB, and is not limited herein. Each logical block corresponds to a physical storage space of the same size in a storage device. It should be noted that the plurality of logical blocks included in one storage unit come from a plurality of storage devices, and the plurality of storage devices may be from different storage nodes or from the same storage node, which is not limited herein.
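The structure in FIG. 2 can be sketched as follows (the helper name and device names are hypothetical): a storage unit is a set of fixed-size logical blocks, each backed by an equally sized span of physical space on some storage device, with blocks drawn from several devices.

```python
# Minimal sketch of a storage unit as a set of logical blocks.
BLOCK_SIZE = 4 * 1024  # 4 KB, one of the block sizes mentioned above

def make_storage_unit(devices, blocks_per_unit):
    """Draw one logical block from each device in round-robin order."""
    unit = []
    for i in range(blocks_per_unit):
        dev = devices[i % len(devices)]
        unit.append({"logical_block": i, "device": dev, "size": BLOCK_SIZE})
    return unit

# A 12-block storage unit whose blocks come from 4 storage devices.
unit0 = make_storage_unit(["devA", "devB", "devC", "devD"], 12)
```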
Taking as an example the case in which the plurality of logical blocks included in one storage unit come from the same storage node, the storage node 112 may organize the logical blocks in the logical block set of the storage unit according to a set Redundant Array of Independent Disks (RAID) type: some logical blocks form the data storage units of a stripe and are used to store data fragments; check data fragments are generated from the data fragments stored in those logical blocks and are stored in the check storage units; the data storage units and the check storage units together form one stripe. A storage unit contains one or more stripes. A data storage unit includes at least two logical blocks, and a check storage unit includes at least one logical block. For example, the storage node 112 extracts one logical block from each of 4 storage devices, e.g., storage device A to storage device D, to form the data storage units of a stripe, and then extracts one logical block from each of 2 other storage devices to form the check storage units. Thus, when any two logical blocks in the stripe fail (the two failed logical blocks may belong to two data storage units, to two check storage units, or to one data storage unit and one check storage unit), the data in the failed logical blocks can be reconstructed from the data in the remaining logical blocks.
As another example, the storage node 112 may divide the plurality of logical blocks in the logical block set of the storage unit into copy units according to a set multi-copy type. Each copy unit includes at least one logical block in which data is stored, and the data stored in the copy units is identical. For example, if one copy unit includes 2 logical blocks, the storage node 112 extracts one logical block from each of 2 storage devices to form one copy unit; if the multi-copy type is 3-copy, that is, one piece of data needs to be stored in 3 copies, the storage node 112 may extract one logical block from each of 4 other storage devices and form every two logical blocks into one copy unit, obtaining the other 2 copy units. The 3 copy units store the same data. Thus, when any one copy unit fails, the data can be obtained from the other two copy units.
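The copy-unit failover just described can be sketched as follows (function and key names are assumptions): with a 3-copy type, the same data is stored in 3 copy units, and a read simply falls back to any surviving copy unit.

```python
# Sketch of the multi-copy layout: store 3 copies, read from any healthy one.
def store_with_copies(data, copy_units, copy_count=3):
    for unit in copy_units[:copy_count]:
        unit["data"] = data

def read_with_failover(copy_units):
    for unit in copy_units:
        if not unit.get("failed") and "data" in unit:
            return unit["data"]
    raise IOError("all copy units failed")

units = [{"blocks": 2} for _ in range(3)]   # each copy unit has 2 logical blocks
store_with_copies(b"payload", units)
units[0]["failed"] = True                   # one copy unit fails
```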
Next, a metadata management method provided in an embodiment of the present application will be described by taking an application scenario shown in fig. 1 as an example. For ease of understanding, the technical solutions of the embodiments of the present application will be described in the following four aspects. In the following description, the steps performed by the storage system 110 may each be performed by the management module 111 of the storage system 110.
In a first aspect, a process for storing data.
Please refer to fig. 3, which is a flowchart illustrating a data storage process according to an embodiment of the present application, where the flowchart illustrates the following:
s31, the client server 100 sends a data write request to the storage system 110.
The data write request includes the data to be written and a virtual storage address of the data to be written. The virtual storage address refers to the identifier and offset of the Logical Unit (LU) into which the data is to be written, and is an address visible to the client server 100. The data write request may be generated by the client server 100 in response to a user operation, or may be generated according to a system requirement during operation.
S32, the storage system 110 determines a storage location for storing the data to be written.
After receiving the data write request, the management module 111 of the storage system 110 determines the storage unit of the data to be written according to the use condition of the storage unit in the storage system 110 and the size of the data to be written carried in the data write request.
As an example, assuming that the size of the data to be written is 1MB and the size of each storage unit is 1MB, the storage system 110 determines that the data to be written needs to occupy 1 storage unit. If the storage system 110 has stored no data before receiving the data write request, it determines that the storage unit occupied by the data to be written is storage unit 0. In this example, the initial storage unit is taken as storage unit 0; in other embodiments, the initial storage unit may also be storage unit 1, which is not limited herein.
As another example, a storage unit may contain multiple stripes, that is, a striped data storage unit includes a part of the logical blocks in the logical block set corresponding to the storage unit. Referring to fig. 4, one memory unit includes 3 stripes, and if the size of data stored in one stripe is 32KB, the size of one memory unit is 96 KB. If the size of the data to be written is smaller than the size of one storage unit, it may be determined that the data to be written is stored in a partial logic block included in one storage unit, for example, in a logic block corresponding to at least one stripe. For example, 12 logical blocks are included in one storage unit, and each 4 logical blocks corresponds to one stripe, that is, each 4 logical blocks can store data with a data size of 32KB, and if the size of the data to be written is 32KB, the storage system 110 determines that data is already stored in the first 4 logical blocks (i.e., logical block 0 to logical block 3) of the storage unit 0 before receiving the data write request, and may determine that the data to be written is stored in logical block 4 to logical block 7 of the storage unit 0.
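The sizing walk-through above can be checked with a few lines of arithmetic (the `placement` helper is hypothetical): 3 stripes of 32 KB make a 96 KB storage unit, and a 32 KB write arriving when the first stripe (logical blocks 0 to 3) is full lands in the second stripe (logical blocks 4 to 7).

```python
# Quick check of the sizing arithmetic in the example above.
STRIPE_DATA_KB = 32
STRIPES_PER_UNIT = 3
UNIT_KB = STRIPE_DATA_KB * STRIPES_PER_UNIT     # size of one storage unit

def placement(used_stripes, write_kb, blocks_per_stripe=4):
    """Range of data logical blocks a write occupies (hypothetical helper)."""
    stripes_needed = -(-write_kb // STRIPE_DATA_KB)   # ceiling division
    first_block = used_stripes * blocks_per_stripe
    last_block = first_block + stripes_needed * blocks_per_stripe - 1
    return first_block, last_block

# One stripe already full, 32 KB write -> logical blocks 4 to 7.
blocks = placement(used_stripes=1, write_kb=32)
```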
It should be noted that, in an actual use process, more than 3 stripes may correspond to one storage unit, for example, tens or hundreds of stripes may correspond to the storage unit, and the number of the stripes shown in fig. 4 is only an example and should not be construed as a limitation to the storage unit.
In other embodiments, each storage device may provide a segment of a logical address to a storage unit, rather than in the form of a logical block, in which case the storage unit is a collection of multiple segments of logical addresses.
S33, the storage system 110 stores the data to be written according to the determined storage unit for storing the data to be written.
The management module 111 of the storage system 110 stores mapping relationships between the storage units and the storage devices of the storage nodes in advance, and writes the data to be written into the corresponding storage nodes according to the mapping relationships after determining the storage units for storing the data to be written.
As an example, the management module 111 of the storage system 110 stores data written to the storage unit according to a preset RAID type. With continued reference to fig. 4, the storage unit 0 includes 12 logical blocks, where each 4 logical blocks corresponds to one stripe, and the 4 logical blocks are used for storing data slices. For example, logic block 0 to logic block 3 are logic blocks for storing data slices in the first stripe, logic block 4 to logic block 7 are logic blocks for storing data slices in the second stripe, logic block 8 to logic block 11 are logic blocks for storing data slices in the third stripe, and each stripe further includes logic blocks for storing check data slices, for example, the first stripe further includes logic block P0 and logic block Q0, the second stripe further includes logic block P1 and logic block Q1, and the third stripe further includes logic block P2 and logic block Q2.
The storage system 110 presets a mapping relationship between the logical blocks included in each stripe and the storage devices of the storage nodes. For example, the mapping relationship is: the 4 logical blocks used for storing data fragments in each stripe correspond in sequence to storage device A in storage nodes 1 to 4, and the logical blocks used for storing check data fragments in each stripe correspond in sequence to storage device A in storage nodes 5 and 6. In FIG. 4, among the multiple stripes corresponding to one storage unit, the logical blocks at the same position come from the same storage node. For example, the storage unit shown in FIG. 4 includes 3 stripes, where the first stripe includes logical block 0 to logical block 3, logical block P0, and logical block Q0, and the second stripe includes logical block 4 to logical block 7, logical block P1, and logical block Q1; then logical block 0 and logical block 4 are at the same position, logical block 1 and logical block 5 are at the same position, and so on.
After receiving the data to be written, the management module 111 may divide the data to be written into a plurality of data fragments according to a preset RAID type, calculate to obtain a check fragment, and store the data fragment and the check fragment in the storage device corresponding to each logical block. For example, the size of the data to be written is 32KB, and it is determined that the data to be written is stored in logical blocks 4 to 7, the management module 111 divides the data to be written into 4 data fragments, where the size of each data fragment is 8KB, and then calculates and obtains 2 check data fragments according to the 4 data fragments, where the size of each check fragment is also 8 KB. Then, the management module 111 sends each data fragment and the check data fragment to the corresponding storage node for persistent storage. As described above in terms of mapping relationship, the management module 111 sends 4 data fragments to the storage nodes 1 to 4, and sends 2 check data fragments to the storage nodes 5 and 6, respectively, and each storage node stores corresponding data in a preset storage device.
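The write path just described can be sketched as follows. The splitting and routing mirror the example (32 KB split into 4 data fragments of 8 KB, plus 2 check fragments routed to nodes 5 and 6), but the check fragments here are placeholders rather than real RAID parity, and all node names are assumptions.

```python
# Hedged sketch of the fragment-and-route write path described above.
def split_into_fragments(data, n=4):
    size = len(data) // n
    return [data[i * size:(i + 1) * size] for i in range(n)]

def route(fragments, check_fragments):
    # Per the mapping above: data fragments -> nodes 1-4, checks -> nodes 5-6.
    plan = {f"node{i + 1}": frag for i, frag in enumerate(fragments)}
    plan.update({f"node{5 + i}": c for i, c in enumerate(check_fragments)})
    return plan

data = bytes(32 * 1024)                           # a 32 KB write (all zeros)
frags = split_into_fragments(data)                # 4 fragments of 8 KB each
checks = [b"P" * (8 * 1024), b"Q" * (8 * 1024)]   # placeholder check fragments
plan = route(frags, checks)                       # node -> fragment to persist
```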
As another example, the management module 111 of the storage system 110 stores the data written to the storage unit according to a preset multi-copy type. Referring to fig. 5, the storage unit 0 includes 12 logical blocks, each of which is used for storing data. A mapping relationship between each logical block and the storage devices of the storage nodes is preset in the storage system 110. For example, if the multi-copy type is 2 copies, each logical block may correspond to 2 different storage devices on one storage node, and the mapping relationship is: the logical blocks 0 to 3 correspond in sequence to the storage device A and the storage device B on the storage nodes 1 to 4, and the mapping relationship between the other logical blocks and the storage devices is similar to that of the logical blocks 0 to 3, which is not described herein again.
After receiving the data to be written, the management module 111 may copy the data to be written into a plurality of copies according to the preset multi-copy type, and store the data to be written and the copies in the storage devices corresponding to the respective logical blocks. For example, the size of the data to be written is 32 KB, the size of each logical block is 8 KB, and it is determined that the data to be written is to be written into logical blocks 0 to 3. The management module 111 divides the data to be written into 4 parts of 8 KB each, then copies the 4 parts to obtain 8 parts of data, and sends the 8 parts of data to the corresponding storage nodes for persistent storage. According to the mapping relationship described above, the management module 111 sends the two identical parts among the 8 parts of data to the storage nodes 1 to 4, respectively, and each storage node stores the corresponding data in the preset storage devices.
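The 2-copy placement above can be sketched as follows. The node numbering and device names ("A", "B") follow the example mapping; the dictionary representation is an assumption for illustration, not the patent's data structure.

```python
# Minimal sketch of 2-copy replication: each part of the data is placed on
# two different devices of one storage node, per the example mapping where
# logical blocks 0-3 map to devices A and B on nodes 1-4.
def replicate(parts: list[bytes], copies: int = 2) -> dict:
    """Return {(node, device): data} placements for each part and its copies."""
    devices = ["A", "B"]
    placement = {}
    for node, part in enumerate(parts, start=1):
        for c in range(copies):
            placement[(node, devices[c])] = part  # identical copy per device
    return placement
```

Four 8 KB parts thus become 8 stored parts, two identical copies per node.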
Logically, the data to be written is written into the storage units of the storage system 110; physically, the data is still ultimately stored in multiple storage nodes. For each fragment, the identity of the storage unit in which it is located, together with its location inside the storage unit, constitutes the logical address of the fragment, while the actual address of the fragment in the storage node is the physical address of the fragment.
In a second aspect, a process for storing metadata.
After the data to be written is stored in the storage devices, the storage system 110 further needs to store description information of the data, namely metadata, so that the data can subsequently be found or read. When a storage node receives a data read request, it usually finds the metadata of the data to be read according to information (e.g., a data name or a virtual address) carried in the data read request, and then obtains the data to be read according to the metadata. The metadata includes, but is not limited to: the mapping relationship between the logical address and the physical address of each fragment, the mapping relationship between the logical address of the data and the logical addresses of the fragments contained in the data, the mapping relationship between the logical address and the physical address of each copy, and the mapping relationship between the logical address of the data and the logical addresses of the copies of the data. The set of the logical addresses of the fragments or copies contained in the data is the logical address of the data.
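The address mappings just listed can be expressed with simple data structures. The field names below are assumptions for the sketch, not the patent's own record format; they only mirror the definitions from the first aspect (logical address = storage unit identity + location inside it; physical address = actual address in a storage node).

```python
# Illustrative data structures for the metadata mappings described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalAddress:
    storage_unit: int   # identity of the storage unit holding the fragment
    offset: int         # location of the fragment inside the storage unit

@dataclass(frozen=True)
class PhysicalAddress:
    node: int           # storage node holding the fragment
    device: str         # storage device within the node
    offset: int         # actual address on that device

@dataclass
class FragmentMetadata:
    logical: LogicalAddress    # logical-to-physical mapping of one fragment
    physical: PhysicalAddress

def data_logical_address(fragments: list) -> list:
    """The set of fragment logical addresses is the data's logical address."""
    return [f.logical for f in fragments]
```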
Referring to fig. 6, a flowchart of a storage process of metadata in an embodiment of the present application is described as follows:
S61, the storage system 110 generates metadata.
After the data to be written is stored in the storage system 110, the management module 111 of the storage system 110 generates metadata of the data to be written. For example, in the embodiment shown in fig. 3, the management module 111 stores the data to be written into the logical blocks 0 to 4 of the storage unit, and then the management module 111 generates the metadata of the data to be written according to the information such as the size and the storage address of the data to be written. The content specifically included in the metadata is not limited herein.
S62, the storage system 110 determines a storage location for storing the metadata.
In the embodiment of the present application, the physical storage space used by the storage system 110 for storing data and the physical storage space used for storing metadata are separated. For example, if each storage node includes 4 storage devices, then, since the metadata of data generally occupies less storage space than the data itself, the storage devices A to C in each storage node of the storage system 110 may be configured to store data, and the storage device D in each storage node may be configured to store metadata. Alternatively, if the storage system 110 includes 4 storage nodes, all storage devices in the storage nodes 1 to 3 may be configured to store data, and all storage devices in the storage node 4 may be configured to store metadata. In the embodiment of the present application, a storage unit for storing data and a storage unit for storing metadata are the same in nature; they differ only in the contents they store, or in that they come from different storage devices.
As an example, after generating the metadata, the management module 111 may determine the storage unit for storing the metadata according to the usage of the storage units for storing metadata in the storage system 110. For example, referring to fig. 7, a storage unit for storing metadata includes 6 logical blocks, and every 2 logical blocks correspond to one stripe. If the management module 111 determines that data is already stored in the first 2 logical blocks (i.e., logical block 0 and logical block 1) of the storage unit 0 for storing metadata before the metadata is generated, the management module 111 may determine to store the generated metadata in logical block 2 and logical block 3 of the storage unit 0. In this way, it can be understood that the metadata is stored in the storage unit in an append-write manner.
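The append-write placement just described can be sketched as a usage counter over the unit's logical blocks. The block counts follow the fig. 7 example (6 logical blocks); the class shape is an illustrative assumption.

```python
# Sketch of append-write placement: the next free logical blocks of the
# metadata storage unit are chosen based on current usage, never rewriting
# blocks that already hold metadata.
class MetadataStorageUnit:
    def __init__(self, total_blocks: int = 6):
        self.total_blocks = total_blocks
        self.next_free = 0  # blocks [0, next_free) already hold metadata

    def allocate(self, blocks_needed: int):
        """Return the logical blocks for an appended write, or None if full."""
        if self.next_free + blocks_needed > self.total_blocks:
            return None  # unit is full; a new storage unit would be chosen
        blocks = list(range(self.next_free, self.next_free + blocks_needed))
        self.next_free += blocks_needed
        return blocks
```

In the fig. 7 example, two blocks are already in use, so the next allocation returns logical blocks 2 and 3.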
In other embodiments, step S63 may also be performed before step S62.
S63, the storage system 110 generates a record item corresponding to the metadata.
After generating the metadata, the management module 111 may obtain a write-ahead log (WAL) entry corresponding to the metadata according to the metadata and the operation corresponding to the metadata; the WAL entries form a WAL log after being stored in the corresponding storage space.
For example, if the metadata is generated according to a data write request sent by the client server 100, the operation corresponding to the metadata is a data write operation. The management module 111 then stores the record item in a memory, which may be understood as the memory of the server or node where the management module 111 is located. When the WAL entries recorded in the memory satisfy a preset condition, for example, when the number of WAL entries recorded in the memory reaches a threshold value, it is determined to write the metadata in the plurality of WAL entries recorded in the memory into storage units, and step S62 is performed to determine the storage unit corresponding to the metadata in each WAL entry. The manner of determining the storage unit corresponding to the metadata in each WAL entry may be similar to step S62, that is, the storage units for storing the metadata in the WAL entries are determined in sequence according to the usage of the storage units for storing metadata, and details are not repeated here.
Since the metadata and the corresponding operation are recorded in the WAL entry, in this manner, when a storage unit for storing the metadata fails, the metadata from before the failure can be restored from the contents of the WAL entries, which improves the reliability of the storage system 110.
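The WAL path above can be sketched as an in-memory buffer that is flushed once its entry count reaches a threshold. The threshold value and entry shape are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the WAL flush path: entries (operation + metadata) are
# buffered in memory and written to storage units in batches once the
# preset condition (here, an entry-count threshold) is satisfied.
class WalBuffer:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.entries = []   # WAL entries still in memory
        self.flushed = []   # batches already written to storage units

    def append(self, operation: str, metadata: dict):
        self.entries.append({"op": operation, "meta": metadata})
        if len(self.entries) >= self.threshold:  # preset condition met
            self.flushed.append(self.entries)    # flush batch to a storage unit
            self.entries = []

    def recover(self) -> list:
        """After a failure, metadata can be rebuilt from recorded entries."""
        return ([e["meta"] for batch in self.flushed for e in batch]
                + [e["meta"] for e in self.entries])
```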
S64, the storage system 110 writes the metadata to the determined storage unit.
Step S64 is similar to step S33, and is explained below as a specific example.
With continued reference to fig. 7, the storage unit 0 for storing metadata includes 6 logical blocks, and every 2 logical blocks correspond to one stripe; that is, the logical block 0 and the logical block 1 correspond to a first stripe, the logical block 2 and the logical block 3 correspond to a second stripe, and the logical block 4 and the logical block 5 correspond to a third stripe, where these logical blocks are the logical blocks used for storing metadata fragments in each stripe. Each stripe also includes a logical block for storing parity metadata; for example, the first stripe includes logical block P0, the second stripe includes logical block P1, and the third stripe includes logical block P2.
When the management module 111 determines to store the generated metadata in the logical block 2 and the logical block 3 of the storage unit 0, it may divide the metadata to be written into a plurality of metadata fragments according to the preset RAID type, calculate parity fragments from them, and store the metadata fragments and the parity fragments in the storage devices corresponding to the respective logical blocks.
Alternatively, the management module 111 copies the metadata according to the preset multi-copy type, and then stores the metadata and its copies in the respective storage devices. This is similar to step S33 and is not described further here.
As can be seen from the above description, after the management module 111 generates the metadata, either steps S62 and S64 or steps S62 to S64 may be performed to store the metadata in the corresponding storage devices; that is, the management module 111 may store the metadata in either of two ways. The management module 111 may select which of the two ways to use according to a preset determination condition. As an example, the preset determination condition may be whether the metadata is metadata of new data or metadata for updating old data: if the metadata is metadata of new data, it can be understood that no metadata needs to be updated in place, and steps S62 and S64 may be performed; if the metadata is metadata for updating old data, it can be understood that metadata needs to be updated in place, and steps S62 to S64 may be performed. The preset determination condition may also be other contents, which are not limited here.
S65, the storage system 110 updates the metadata structure.
After the management module 111 writes the metadata into the corresponding storage devices, it needs to update the metadata structure of the storage system 110. In the embodiment of the present application, the metadata structure may be a binary tree (Btree), a log-structured merge tree (LSM tree), or another metadata structure that can be stored in an append-write manner; the metadata structure is not limited here.
For example, referring to fig. 8(a), which shows the Btree corresponding to the metadata already stored in the storage system 110, after the management module 111 stores the metadata in the corresponding storage devices, the Btree may be updated according to the content of the metadata of the data to be written. For example, if fig. 8(a) includes metadata h, metadata e, metadata s, metadata a, metadata f, and metadata q, the name of the metadata corresponding to the data to be written is metadata z, and the metadata s includes the metadata z, then the metadata z is added as a child node of the metadata s, resulting in the Btree shown in fig. 8(b).
For another example, if the name of the metadata corresponding to the data to be written is metadata h', and the metadata h' includes the metadata e and the metadata s, then the metadata h' is added as a parent node of the metadata e and the metadata s, resulting in the Btree shown in fig. 8(c).
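The two tree updates in fig. 8 can be sketched as follows. This is a simplified containment tree, not a full Btree implementation; because the structure is stored append-only, linking in a new parent does not rewrite the old parent (metadata h), which later becomes garbage.

```python
# Simplified sketch of the metadata-tree updates in FIG. 8: new metadata is
# linked in either as a child node or as a new parent node.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.children = []

def add_child(parent: Node, name: str) -> Node:
    """FIG. 8(b): metadata z becomes a child node of metadata s."""
    child = Node(name)
    parent.children.append(child)
    return child

def add_parent(children: list, name: str) -> Node:
    """FIG. 8(c): metadata h' becomes the parent node of metadata e and s."""
    parent = Node(name)
    parent.children = list(children)
    return parent
```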
Step S65 is an optional step, and is shown in dashed lines in fig. 6.
In a third aspect, a process for garbage collection of metadata.
To make reasonable use of the storage space in the metadata partition, garbage collection may be initiated when the amount of garbage metadata in the storage system 110 becomes large. Please refer to fig. 9, which is a flowchart illustrating a garbage collection process for metadata according to an embodiment of the present application; the process is described as follows:
S91, the storage system 110 determines a storage unit for garbage collection.
In this embodiment, garbage collection is performed in units of storage units. The storage unit selected for garbage collection may be a storage unit whose garbage metadata reaches a first set threshold, the storage unit containing the most garbage metadata among the plurality of storage units, a storage unit whose valid metadata is below a second set threshold, or the storage unit containing the least valid metadata among the plurality of storage units. For example, in the Btree shown in fig. 8(c), metadata h and metadata h' are both parent nodes of metadata e and metadata s, and metadata h' was stored after metadata h, so the management module 111 may determine that metadata h is garbage metadata. The logical blocks occupied by the metadata h are logical block 1 and logical block 2 of the storage unit 0, so it is determined that the storage unit 0 includes 2 garbage logical blocks. When the number of garbage logical blocks in a storage unit reaches a preset threshold, for example, 2, that storage unit is determined to be a storage unit for garbage collection. For convenience of description, the storage unit for garbage collection is hereinafter referred to as the storage unit 0.
S92, the storage system 110 migrates the valid metadata in the storage unit for garbage collection to other storage units.
When the storage unit 0 is determined to be a storage unit for garbage collection, the valid metadata in the storage unit 0 is migrated to other storage units. For example, with continued reference to fig. 7, if garbage metadata is stored in logical block 0 through logical block 3 of the storage unit 0 and valid metadata is stored in logical block 4 and logical block 5, then the management module 111 migrates the valid metadata stored in logical block 4 and logical block 5 to a new storage unit, such as the storage unit 2.
S93, the storage system 110 releases the storage space occupied by the storage unit for garbage collection.
Specifically, the management module 111 may send a deletion instruction to the storage nodes corresponding to the storage unit 0 to delete the metadata fragments and the parity metadata fragments corresponding to the storage unit 0.
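Steps S91 to S93 together can be sketched as one small routine. The dictionary representation of units and the threshold value are illustrative assumptions for the sketch, not the patent's data structures.

```python
# End-to-end sketch of the garbage-collection flow:
#   S91: pick a storage unit whose garbage-block count reaches a threshold;
#   S92: migrate its valid metadata to another storage unit;
#   S93: release the space occupied by the collected unit.
def collect_garbage(units: dict, threshold: int = 2):
    """units: {unit_id: {"garbage": [blocks], "valid": [metadata]}}."""
    # S91: choose a storage unit for garbage collection
    victim = next((uid for uid, u in units.items()
                   if len(u["garbage"]) >= threshold), None)
    if victim is None:
        return None  # nothing to collect yet
    # S92: migrate valid metadata to another storage unit
    target = next(uid for uid in units if uid != victim)
    units[target]["valid"].extend(units[victim]["valid"])
    # S93: release the storage space of the collected unit
    del units[victim]
    return victim
```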
In a fourth aspect, a process for managing metadata instances.
The storage system 110 may implement various value-added services by creating different metadata instances, such as a service for taking snapshots of metadata or a service for cloning metadata. A metadata instance may be understood as program code for implementing a certain value-added service. Please refer to fig. 10, which is a flowchart illustrating a process of managing metadata instances according to an embodiment of the present application; the process is described as follows:
S101, the storage system 110 creates a first metadata instance.
The first metadata instance is used for performing a business operation on the metadata stored in a preset storage unit. As an example, the business operation is a snapshot operation; that is, the first metadata instance is an instance that takes snapshots of the metadata in the preset storage unit. The preset storage unit may be some or all of the storage units used for storing metadata in the storage system 110; for example, if the storage units for storing metadata in the storage system 110 include storage units 0 to 4, the preset storage units may be the storage unit 0 and the storage unit 1, set according to the actual use situation. When the management module 111 runs the program code corresponding to the first metadata instance, the first metadata instance is created.
It should be noted that, in the related art, redundancy protection of metadata is implemented by creating multiple metadata instances. For example, if the metadata of the data stored in the storage space corresponding to physical addresses 1 to 20 in the storage system 110 needs to be protected, the management module 111 of the storage system 110 creates at least two metadata instances for the storage space, such as a metadata instance 1 and a metadata instance 2. The management module 111 allocates a storage space for storing metadata to each metadata instance; for example, the storage space configured for the metadata instance 1 is the storage space corresponding to physical addresses 50 to 55, and the storage space configured for the metadata instance 2 is the storage space corresponding to physical addresses 60 to 65. After data is stored in physical addresses 1 to 20, the metadata instance 1 stores the metadata of the data in its configured storage space; for example, if the metadata of the data is metadata 1, the metadata instance 1 stores the metadata 1 in a storage space whose starting address is physical address 50. Then, the management module 111 of the storage system 110 copies the metadata stored by the metadata instance 1 and stores the copy in the storage space configured for the metadata instance 2; for example, the management module 111 copies the metadata 1 and stores the copied metadata 1 in another segment of storage space whose starting address is physical address 60. It can be seen that in the related art, multiple metadata instances need to be created, which is complex.
In the embodiment of the present application, since the metadata in the storage system 110 is already redundantly protected by the preset RAID type or multi-copy type when it is stored in the storage devices, there is no need to create multiple metadata instances that store the same metadata, which provides a simpler method for redundantly protecting the metadata.
In addition, when the preset RAID type is used to store the metadata, multiple copies of the same metadata do not need to be stored, so the storage space occupied by the metadata can be reduced and the utilization of the storage space can be improved.
S102, the storage system 110 determines that the first metadata instance fails, and then creates a second metadata instance.
When the management module 111 determines that the first metadata instance fails, it may create a second metadata instance for taking snapshots of the metadata, and set the storage units accessible to the second metadata instance to be the same as those of the first metadata instance. For example, if the storage units accessible to the first metadata instance are the storage unit 0 and the storage unit 1, the storage units accessible to the second metadata instance are also the storage unit 0 and the storage unit 1. In this way, the storage units accessible to multiple metadata instances are shared, so that after a new metadata instance is created, it can directly use the metadata in the shared storage units. This eliminates the process of copying and transmitting the metadata to the new metadata instance, reduces the time delay of creating the new metadata instance, and improves efficiency. Further, since no metadata is transmitted among the metadata instances, transmission resources can be saved.
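The failover with shared storage units can be sketched as follows. The class and field names are assumptions for illustration; the point is that the replacement instance references the same storage units rather than receiving a copy of the metadata.

```python
# Sketch of metadata-instance failover with shared storage units: the second
# instance is pointed at the first instance's storage units, so no metadata
# is copied or transmitted when it is created.
class MetadataInstance:
    def __init__(self, name: str, storage_units: list):
        self.name = name
        self.storage_units = storage_units  # shared reference, not a copy
        self.failed = False

def failover(first: MetadataInstance) -> MetadataInstance:
    """Create a second instance that directly reuses the first's units."""
    first.failed = True
    return MetadataInstance("second", first.storage_units)
```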
It should be noted that the above management of metadata instances takes, as an example, a metadata instance whose purpose is to manage the metadata in a preset storage unit; of course, the manner of creating and managing metadata instances is not limited to this.
In the embodiments provided in the present application, in order to implement the functions of the methods provided in the embodiments of the present application, the storage system may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a given function is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends on the particular application and the design constraints of the technical solution.
Fig. 11 is a schematic structural diagram of a management apparatus 1100 for metadata of a storage system. The management apparatus 1100 for metadata of the storage system may be the device where the management module 111 is located in the embodiments shown in fig. 3, 6, 9, or 10, or may be located in that device, and may be configured to implement the functions of the management module 111. The management apparatus 1100 for metadata of the storage system may be a hardware structure or a hardware structure plus a software module.
The management apparatus 1100 for metadata of the storage system includes at least one memory for storing program instructions and/or data, and at least one processor coupled to the memory; the at least one processor is capable of executing the program instructions stored in the memory.
The management apparatus 1100 of metadata of a storage system may include a generation unit 1101, a determination unit 1102, and an execution unit 1103.
The generation unit 1101 may invoke a processor to execute program instructions stored in a memory to perform step S61 in the embodiment shown in fig. 6, and/or other processes for supporting the techniques described herein.
The determination unit 1102 may invoke a processor to execute program instructions stored in memory to perform step S32 in the embodiment shown in fig. 3, or to perform step S62 in the embodiment shown in fig. 6, or to perform step S91 in the embodiment shown in fig. 9, and/or other processes for supporting the techniques described herein.
The execution unit 1103 may call the processor to execute program instructions stored in the memory to perform step S33 in the embodiment shown in fig. 3, or to perform steps S63-S65 in the embodiment shown in fig. 6, or to perform steps S92-S93 in the embodiment shown in fig. 9, or to perform steps S101-S102 in the embodiment shown in fig. 10, and/or other processes for supporting the techniques described herein.
In one possible design, the management apparatus 1100 for metadata of the storage system may further include a receiving unit 1104, and the receiving unit 1104 may invoke the processor to execute program instructions stored in the memory to perform step S31 in the embodiment shown in fig. 3 and/or other processes for supporting the techniques described herein. The receiving unit 1104 is used for communication between the management apparatus 1100 for metadata of the storage system and other modules, and may be a circuit, a device, an interface, a bus, a software module, a transceiver, or any other apparatus capable of realizing communication. The receiving unit 1104 is not essential, and is therefore indicated by a dotted line in fig. 11.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The division of the modules in the embodiment shown in fig. 11 is schematic and is merely a division of logical functions; in actual implementation, there may be other division manners. In addition, the functional modules in the embodiments of the present application may be integrated in one processor, may exist alone physically, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Fig. 12 shows a management apparatus 1200 for metadata of a storage system according to an embodiment of the present application, where the management apparatus 1200 for metadata of a storage system may be a device where the management module 111 is located in the embodiment shown in fig. 3, or fig. 6, or fig. 9, or fig. 10, or may be located in the device where the management module 111 is located, and may be configured to implement a function of the management module 111.
The management apparatus 1200 for metadata of the storage system includes at least one processor 1220, which is configured to implement or support implementing the functions of the management module 111 in the methods provided by the embodiments of the present application. For example, the processor 1220 may determine a storage unit for storing metadata, as described in detail in the method examples, which is not repeated here.
The management apparatus 1200 for metadata of the storage system may further include at least one memory 1230 for storing program instructions and/or data. The memory 1230 is coupled to the processor 1220. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between apparatuses, units, or modules, which may be electrical, mechanical, or in another form, and is used for information interaction between the apparatuses, units, or modules. The processor 1220 may cooperate with the memory 1230 and execute the program instructions stored in it. At least one of the at least one memory may be integrated in the processor.
The management apparatus 1200 of metadata of a storage system may further include a communication interface 1210 for communicating with other devices through a transmission medium so that the management apparatus 1200 of metadata of a storage system may communicate with other devices. Illustratively, the other device may be a client or a storage device. The processor 1220 may transceive data using the communication interface 1210.
The specific connection medium among the communication interface 1210, the processor 1220, and the memory 1230 is not limited in the embodiments of the present application. In fig. 12, the memory 1230, the processor 1220, and the communication interface 1210 are connected by a bus 1250, which is represented by a thick line; the connection manner between the other components is merely illustrative and is not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 12, but this does not mean that there is only one bus or one type of bus.
In the embodiments of the present application, the processor 1220 may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In this embodiment, the memory 1230 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other apparatus capable of implementing a storage function, for storing program instructions and/or data.
Also provided in an embodiment of the present application is a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method performed by the management module 111 in the embodiments shown in fig. 3, 6, 9, or 10.
Also provided in an embodiment of the present application is a computer program product including instructions that, when executed on a computer, cause the computer to perform the method performed by the management module 111 in the embodiments shown in fig. 3, 6, 9, or 10.
The embodiment of the present application provides a storage system, which includes the management module 111 in the embodiment shown in fig. 3, 6, 9, or 10.
The methods provided by the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network appliance, a user device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another over a wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) network. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, or magnetic tapes), optical media (e.g., digital video discs (DVDs)), or semiconductor media (e.g., SSDs).

Claims (15)

1. A method for managing metadata in a storage system, comprising:
generating metadata corresponding to data to be written;
determining a storage unit for storing the metadata, wherein the storage system comprises a plurality of storage units, and each storage unit is mapped to a physical storage space corresponding to at least two storage devices included in the storage system;
and storing the metadata into at least two storage devices corresponding to the storage unit.
2. The method of claim 1, wherein the storage unit stores the metadata in an append write manner.
3. The method of claim 1 or 2, wherein prior to determining the storage unit for storing the metadata, the method further comprises:
receiving a data writing request, wherein the data writing request is used for writing the data to be written into the storage system;
generating a record item corresponding to the metadata according to the data writing request and the metadata; the record item comprises a data writing operation corresponding to the data writing request and metadata updated after the data writing operation is executed.
4. The method according to any one of claims 1 to 3,
the metadata includes:
the logical address and the physical address of each fragment of the data to be written correspond to each other, the logical address of the storage unit occupied by the data to be written corresponds to the logical address of each fragment contained in the data to be written, and the logical address of each fragment is the logical address corresponding to the storage unit occupied by the fragment; or, alternatively,
the metadata includes:
the logical address of each copy of the data to be written corresponds to the physical address, the logical address of the data to be written corresponds to the logical address of each copy included in the data to be written, and the logical address of each copy is a logical address corresponding to a storage unit occupied by the copy;
the set of the logical addresses of the respective segments included in the data to be written or the logical address of each copy included in the data to be written, that is, the logical address of the data to be written.
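The segment variant of claim 4 can be sketched as a logical-to-physical mapping whose key set is the logical address of the data as a whole. The tuple-shaped addresses below are an assumption made for illustration:

```python
def build_segment_metadata(segments):
    """segments: iterable of (unit_id, offset_in_unit, physical_addr).

    Returns the per-segment logical->physical mapping and the logical
    address of the data, i.e. the set of its segments' logical addresses.
    """
    mapping = {}
    for unit_id, offset_in_unit, physical_addr in segments:
        # A segment's logical address is the address corresponding to
        # the storage unit the segment occupies.
        mapping[(unit_id, offset_in_unit)] = physical_addr
    data_logical_address = set(mapping)
    return mapping, data_logical_address
```

The copy variant has the same shape, with one mapping entry per replica instead of per segment.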
5. The method according to any one of claims 1-4, further comprising:
creating a first metadata instance, wherein the first metadata instance is used for performing business operations on metadata in a preset storage unit.
6. The method of claim 5, further comprising:
after the first metadata instance fails, creating a second metadata instance, wherein the second metadata instance can access the metadata stored in the preset storage unit.
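Claims 5-6 rely on the metadata living in the durable preset storage unit rather than in the instance itself, so a second instance created after a failure can read back the same state. A hypothetical sketch, with illustrative names throughout:

```python
class MetadataInstance:
    """Serves operations against metadata in a preset storage unit."""

    def __init__(self, preset_unit_log):
        # The instance holds no private state: all metadata lives in
        # the shared, durable log of the preset storage unit.
        self.log = preset_unit_log

    def lookup(self, key):
        # Example business operation: replay the append-only log and
        # return the latest value recorded for `key`.
        value = None
        for entry_key, entry_value in self.log:
            if entry_key == key:
                value = entry_value
        return value


def failover(preset_unit_log):
    # Claim 6: after the first instance fails, create a second instance
    # that can access the metadata stored in the preset storage unit.
    return MetadataInstance(preset_unit_log)
```

Since the log is already replicated across at least two devices (claim 1), failover needs no separate metadata copy step.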
7. An apparatus for managing metadata in a storage system, comprising:
a generating unit, configured to generate metadata corresponding to data to be written;
a determining unit, configured to determine a storage unit for storing the metadata, wherein the storage system comprises a plurality of storage units, and each storage unit is mapped to a physical storage space corresponding to at least two storage devices included in the storage system;
and an execution unit, configured to store the metadata in the at least two storage devices corresponding to the storage unit.
8. The apparatus of claim 7, wherein the storage unit stores the metadata in an append write manner.
9. The apparatus of claim 7 or 8, further comprising:
a receiving unit, configured to receive a data write request, where the data write request is used to write the data to be written into the storage system;
the generating unit is further configured to: generate, according to the data writing request and the metadata, a record item corresponding to the metadata, wherein the record item comprises the data writing operation corresponding to the data writing request and the metadata as updated after the data writing operation is executed.
10. The apparatus according to any one of claims 7-9, wherein
the metadata comprises:
a correspondence between the logical address and the physical address of each segment of the data to be written, wherein the logical address of the data to be written corresponds to the logical addresses of the segments contained in the data to be written, and the logical address of each segment is a logical address corresponding to the storage unit occupied by that segment; or
the metadata comprises:
a correspondence between the logical address and the physical address of each copy of the data to be written, wherein the logical address of the data to be written corresponds to the logical addresses of the copies included in the data to be written, and the logical address of each copy is a logical address corresponding to the storage unit occupied by that copy;
wherein the set of the logical addresses of the segments, or of the copies, included in the data to be written constitutes the logical address of the data to be written.
11. The apparatus according to any one of claims 7-10, wherein the execution unit is further configured to:
create a first metadata instance, wherein the first metadata instance is used for performing business operations on metadata in a preset storage unit.
12. The apparatus of claim 11, wherein the execution unit is further configured to:
after the first metadata instance fails, create a second metadata instance, wherein the second metadata instance can access the metadata stored in the preset storage unit.
13. An apparatus for managing metadata in a storage system, comprising a processor and a memory, wherein the memory stores computer-executable instructions that, when invoked by the processor, cause the processor to perform the method of any one of claims 1-6.
14. A computer storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-6.
15. A computer program product having stored thereon instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1-6.
CN202010021351.6A 2019-11-05 2020-01-09 Method and device for managing metadata in storage system Pending CN112783698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/119929 WO2021088586A1 (en) 2019-11-05 2020-10-09 Method and apparatus for managing metadata in storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019110728126 2019-11-05
CN201911072812 2019-11-05

Publications (1)

Publication Number Publication Date
CN112783698A true CN112783698A (en) 2021-05-11

Family

ID=75749970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010021351.6A Pending CN112783698A (en) 2019-11-05 2020-01-09 Method and device for managing metadata in storage system

Country Status (2)

Country Link
CN (1) CN112783698A (en)
WO (1) WO2021088586A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776675A (en) * 2004-11-17 2006-05-24 国际商业机器公司 Method, system for storing and using metadata in multiple storage locations
US20110219205A1 (en) * 2010-03-05 2011-09-08 Solidfire, Inc. Distributed Data Storage System Providing De-duplication of Data Using Block Identifiers
CN108108308A (en) * 2016-11-24 2018-06-01 爱思开海力士有限公司 Storage system and its operating method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102529696B1 (en) * 2016-07-14 2023-05-10 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US10592408B2 (en) * 2017-09-13 2020-03-17 Intel Corporation Apparatus, computer program product, system, and method for managing multiple regions of a memory device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342751A (en) * 2021-07-30 2021-09-03 联想凌拓科技有限公司 Metadata processing method, device, equipment and readable storage medium
CN113867642A (en) * 2021-09-29 2021-12-31 杭州海康存储科技有限公司 Data processing method and device and storage equipment
CN113867642B (en) * 2021-09-29 2023-08-04 杭州海康存储科技有限公司 Data processing method, device and storage equipment

Also Published As

Publication number Publication date
WO2021088586A1 (en) 2021-05-14

Similar Documents

Publication Publication Date Title
US10977124B2 (en) Distributed storage system, data storage method, and software program
WO2018040591A1 (en) Remote data replication method and system
US10402283B1 (en) Online system checkpoint recovery orchestration
US9946655B2 (en) Storage system and storage control method
US9411685B2 (en) Parity chunk operating method and data server apparatus for supporting the same in distributed raid system
US11074129B2 (en) Erasure coded data shards containing multiple data objects
US11907410B2 (en) Method and device for managing storage system
CN109690494B (en) Hierarchical fault tolerance in system storage
US20080282047A1 (en) Methods and apparatus to backup and restore data for virtualized storage area
US20170177224A1 (en) Dynamic storage transitions employing tiered range volumes
US20180373429A1 (en) Computer system, control method for computer system, and recording medium
CN109582213B (en) Data reconstruction method and device and data storage system
US11449400B2 (en) Method, device and program product for managing data of storage device
US10620843B2 (en) Methods for managing distributed snapshot for low latency storage and devices thereof
US10503620B1 (en) Parity log with delta bitmap
US11320988B2 (en) Method, apparatus and computer program product for managing disk array
US11010301B2 (en) Method, apparatus, and computer program product for providing cache service
CN104750428A (en) Block storage access and gateway module, storage system and method, and content delivery apparatus
US11003554B2 (en) RAID schema for providing metadata protection in a data storage system
WO2021017782A1 (en) Method for accessing distributed storage system, client, and computer program product
CN111949210A (en) Metadata storage method, system and storage medium in distributed storage system
WO2021088586A1 (en) Method and apparatus for managing metadata in storage system
US10860224B2 (en) Method and system for delivering message in storage system
US11775194B2 (en) Data storage method and apparatus in distributed storage system, and computer program product
CN109840051B (en) Data storage method and device of storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210511