US20220179743A1 - Method, device and computer program product for storage management - Google Patents
Definitions
- Embodiments of the present disclosure relate to the field of computers, and more particularly, to a storage management method, a device, and a computer program product.
- the embodiments of the present disclosure provide a solution for storage management.
- a storage management method includes: receiving from a requesting node a write request for writing target data into a first target storage space in a redundant array of independent disks (RAID), wherein the RAID is associated with a plurality of nodes and includes a stripe, the stripe includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to the plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe; if a storage device associated with the first target storage space does not fail, acquiring first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in the parity storage space; determining a target parity value based on the target data, the first data, and the first parity value; and updating the stripe with the target data and the target parity value.
- an electronic device includes: at least one processing unit; and at least one memory, wherein the at least one memory is coupled to the at least one processing unit and stores instructions for execution by the at least one processing unit.
- When executed by the at least one processing unit, the instructions cause the device to perform actions, and the actions include: receiving from a requesting node a write request for writing target data into a first target storage space in a redundant array of independent disks (RAID), wherein the RAID is associated with a plurality of nodes and includes a stripe, the stripe includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to the plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe; if a storage device associated with the first target storage space does not fail, acquiring first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in the parity storage space; determining a target parity value based on the target data, the first data, and the first parity value; and updating the stripe with the target data and the target parity value.
- a computer program product is provided.
- the computer program product is tangibly stored in a non-transitory computer storage medium and includes machine-executable instructions, wherein when run in a device, the machine-executable instructions cause the device to perform any step of the method described according to the first aspect of the present disclosure.
- FIG. 1 illustrates a schematic diagram of an example environment in which the embodiments of the present disclosure may be implemented
- FIG. 2 illustrates a flowchart of a process for storage management according to an embodiment of the present disclosure
- FIG. 3 illustrates a schematic diagram of storage management according to an embodiment of the present disclosure
- FIG. 4 illustrates a schematic diagram of error handling according to an embodiment of the present disclosure
- FIG. 5 illustrates a schematic diagram of storage management according to another embodiment of the present disclosure
- FIG. 6 illustrates a schematic diagram of storage management according to yet another embodiment of the present disclosure.
- FIG. 7 illustrates a schematic block diagram of an example device that may be configured to implement the embodiments of the present disclosure.
- the term “including” and variations thereof mean open-ended inclusion, that is, “including but not limited to.” Unless specifically stated, the term “or” means “and/or.” The term “based on” means “based at least in part on.” The terms “one example embodiment” and “one embodiment” mean “at least one example embodiment.” The term “another embodiment” means “at least one further embodiment.” The terms “first,” “second,” and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
- one RAID may include storage blocks from a plurality of storage disks, and the plurality of storage disks may also constitute a plurality of independent RAIDs.
- other storage blocks in the same RAID may be used to recover data of target storage blocks.
- when a stripe in a RAID is written by a node, the node not only needs to modify the written data part, but also needs to modify the corresponding parity value.
- in a conventional RAID, one stripe usually has only one parity value. Therefore, the node needs to lock the entire stripe during the writing process to prevent other nodes from accessing data of the stripe.
- however, many writes target only part of the data in the stripe, and such locking degrades the performance of the RAID.
- a storage management solution is provided.
- when a write request for writing target data to a first target storage space in the RAID is received from a requesting node, it is determined whether a storage device associated with the first target storage space has failed. If the storage device associated with the first target storage space has not failed, first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in a parity storage space are acquired. Then, a target parity value is determined based on the target data, the first data, and the first parity value, and the target data and the target parity value are used to update the stripe.
- the embodiments of the present disclosure may assign corresponding parity values to different nodes. This can eliminate the need for a single node to lock other data spaces and other parity values when performing partial writes. Furthermore, the embodiments of the present disclosure can allow other nodes to execute write or read requests for other data spaces in parallel, thereby improving the efficiency of a storage system.
- FIG. 1 illustrates example environment 100 in which the embodiments of the present disclosure may be implemented.
- environment 100 includes storage management device 120 which is configured to manage RAID 130 coupled thereto.
- storage management device 120 may also be coupled with one or more nodes 110-1, 110-2 to 110-N (individually or collectively referred to as node 110) to receive an access request for RAID 130 from node 110.
- RAID 130 may be organized into multiple stripes, and one stripe may span multiple storage devices.
- one stripe may be associated with five different storage devices to store data in four storage devices and store parity values in one storage device.
- stripe 140 in RAID 130 may span six different storage devices: four of them (storage devices 154, 156, 158, and 160) are used for storing data, and two (storage devices 152 and 162) are used for storing the parity values.
- the parity values may correspond one-to-one to the multiple nodes 110.
- for example, a parity value PA may correspond to node 110-1,
- and a parity value PB may correspond to node 110-2.
- the specific RAID type and the number of the parity values shown in FIG. 1 are only illustrative. Those skilled in the art can understand that a corresponding number of parity values may be set for any appropriate RAID type based on the number of nodes.
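The stripe layout described above, data blocks plus one parity slot per node, can be sketched in Python as follows. This is a minimal toy model, not the patented implementation; the names `Stripe`, `node_parity`, and `BLOCK` are invented for this illustration:

```python
from dataclasses import dataclass, field

BLOCK = 4  # bytes per storage block in this toy model

@dataclass
class Stripe:
    """A stripe holding data blocks plus one parity block per node."""
    data: list                                       # one entry per data storage device
    node_parity: dict = field(default_factory=dict)  # node id -> parity block

    @classmethod
    def empty(cls, n_data: int, nodes: list) -> "Stripe":
        # All data and parity spaces start at an initial value of 0.
        return cls(data=[bytes(BLOCK)] * n_data,
                   node_parity={n: bytes(BLOCK) for n in nodes})

# A stripe as in FIG. 1: four data devices and one parity slot per node.
stripe = Stripe.empty(n_data=4, nodes=["node-110-1", "node-110-2"])
```

Because each node owns its own parity slot, a writer only ever reads and replaces its own entry in `node_parity`, which is what makes lock-free parallel partial writes possible.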
- the embodiments of the present disclosure can allow parallel access to the same stripe.
- the following will describe the access process of the RAID structure based on the multiple parity values in conjunction with FIGS. 2 to 5 .
- FIG. 2 illustrates a flowchart of process 200 for storage management according to some embodiments of the present disclosure.
- Process 200 may be implemented, for example, by storage management device 120 shown in FIG. 1 .
- storage management device 120 receives a write request from a requesting node to write target data to a first target storage space in RAID 130, where RAID 130 is associated with multiple nodes 110 and includes stripe 140; stripe 140 includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to the plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe.
- FIG. 3 illustrates schematic diagram 300 of storage management according to the embodiments of the present disclosure.
- storage management device 120 may, for example, receive a write request from a requesting node (for example, node 110-1 in FIG. 1).
- the write request may be used, for example, to write target data “D1′” and “D2′” into corresponding target storage spaces 320-2 and 320-3.
- stripe 140 includes four data storage spaces 320-1, 320-2, 320-3, and 320-4 for storing data.
- stripe 140 also includes parity storage space 310-1 for storing a parity value PB and parity storage space 310-2 for storing a parity value PA.
- the parity value “PA” may be associated with node 110-1,
- and the parity value “PB” may be associated with node 110-2, for example.
- all data storage spaces and parity storage spaces in stripe 140 may be set to initial values (for example, 0).
- storage management device 120 determines whether a storage device associated with the first target storage space has failed. If the storage device has not failed, process 200 proceeds to block 206. In block 206, storage management device 120 acquires the first data stored in the first target storage space and the first parity value corresponding to the requesting node and stored in the parity storage space.
- Storage management device 120 may acquire operating information of a storage disk corresponding to RAID 130 to determine whether the corresponding storage device fails.
- storage management device 120 may determine that storage device 156 corresponding to storage space 320-2 and storage device 158 corresponding to storage space 320-3 have not failed. Subsequently, storage management device 120 may acquire the first data “D1” and “D2” stored in storage spaces 320-2 and 320-3, and acquire the first parity value “PA” corresponding to node 110-1 and stored in parity storage space 310-2.
- storage management device 120 determines a first target parity value based on the target data, the first data, and the first parity value.
- storage management device 120 may determine the target parity value based on an exclusive OR operation on the target data, the first data, and the first parity value.
- storage management device 120 may, for example, determine the first target parity value “PA1” based on the exclusive OR operation on the target data “D1′” and “D2′,” the first data “D1” and “D2,” and the first parity value “PA.” The first target parity value “PA1” may, for example, be determined according to formula (1):
- PA1 = D1′⊕D2′⊕D1⊕D2⊕PA  (1)
- where ⊕ denotes the exclusive OR operation.
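Formula (1) is a standard XOR read-modify-write: the old data is folded out of the parity and the new data folded in. A minimal Python sketch of this step (the `xor` helper, the function name, and the single-byte block values are illustrative assumptions, not the patented implementation):

```python
def xor(*blocks: bytes) -> bytes:
    """Bytewise exclusive OR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def partial_write_parity(new_data, old_data, old_parity):
    """Formula (1): PA1 = D1' ^ D2' ^ D1 ^ D2 ^ PA.
    Only the requesting node's own parity slot is read and replaced;
    the other nodes' parity slots are never touched."""
    return xor(*new_data, *old_data, old_parity)

# Toy single-byte blocks: writing D1', D2' over D1, D2 under parity PA.
D1, D2 = bytes([0x11]), bytes([0x22])
D1n, D2n = bytes([0x33]), bytes([0x44])
PA = bytes([0x0F])
PA1 = partial_write_parity([D1n, D2n], [D1, D2], PA)  # -> bytes([0x4B])
```

Note that the computation needs only the two blocks being overwritten and one parity value, so the untouched data storage spaces never have to be read or locked.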
- storage management device 120 uses the target data and the target parity value to update the stripe. Specifically, storage management device 120 may write the target data into the first target storage space and replace the first parity value with the first target parity value.
- storage management device 120 may write the target data “D1′” and “D2′” into corresponding storage spaces 320-2 and 320-3, and write the determined first target parity value “PA1” into parity storage space 310-2 to replace the first parity value “PA.”
- storage management device 120 does not need to lock unaccessed data storage spaces and parity storage spaces for other nodes.
- other nodes 110 may still perform reading or writing to data storage spaces 320-1 and/or 320-4.
- another node 110-2 may write to data storage space 320-1 in parallel, and update the parity value “PB” corresponding to node 110-2 based on the same method as process 200.
- the embodiments of the present disclosure can allow different nodes to initiate access requests to different data storage spaces of the same stripe in parallel without causing conflict. In this way, the performance of the RAID can be improved.
- an error may occur when storage management device 120 writes the target data to the target storage space.
- FIG. 4 illustrates schematic diagram 400 of error handling according to an embodiment of the present disclosure.
- target storage spaces 320-2 and 320-3 have been partially updated, but the writing of the target data fails.
- storage management device 120 may re-determine a second target parity value PA2.
- storage management device 120 may acquire second data from other data storage spaces than the first target storage space in the stripe. Specifically, as shown in FIG. 4, storage management device 120 may acquire the data “D0” in data storage space 320-1 and the data “D3” in data storage space 320-4.
- storage management device 120 may determine the second target parity value based on the second data and the target data. As shown in FIG. 4, storage management device 120 may determine the second target parity value “PA2” based on an exclusive OR operation on the target data “D1′” and “D2′,” the data “D0” in data storage space 320-1, and the data “D3” in data storage space 320-4.
- the second target parity value “PA2” may, for example, be determined according to formula (2):
- PA2 = D1′⊕D2′⊕D0⊕D3  (2)
- storage management device 120 may use the target data and the second target parity value to update stripe 140 again. Taking FIG. 4 as an example, storage management device 120 may continue to write the target data “D1′” and “D2′” into target storage spaces 320-2 and 320-3, and write the re-determined second target parity value “PA2” into parity storage space 310-2.
- storage management device 120 may also set parity values associated with other nodes among the plurality of parity values to the initial value. For example, storage management device 120 may set the parity value “PB” stored in parity storage space 310-1 to 0.
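The recovery path of formula (2), rebuilding the requesting node's parity from the intended target data plus every untouched data block and then resetting other nodes' parity values to the initial value, might look like the sketch below. All names and block values are illustrative assumptions; the bytewise `xor` helper is defined inline so the example is self-contained:

```python
def xor(*blocks: bytes) -> bytes:
    """Bytewise exclusive OR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def recompute_after_failed_write(target_data, other_data):
    """Formula (2): PA2 = D1' ^ D2' ^ D0 ^ D3.
    The old parity PA is unusable because the target spaces may be
    partially updated, so parity is rebuilt from scratch."""
    return xor(*target_data, *other_data)

D0, D3 = bytes([0x10]), bytes([0x40])    # untouched data blocks
D1n, D2n = bytes([0x33]), bytes([0x44])  # intended target data
PA2 = recompute_after_failed_write([D1n, D2n], [D0, D3])
PB = bytes(1)  # other nodes' parity values are reset to the initial value 0
```

Resetting the other parity values to 0 keeps the stripe-wide invariant consistent after the rebuild, since PA2 alone now accounts for all data blocks.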
- in the event of a failure, storage management device 120 needs to lock the stripe to prevent other nodes from modifying other parity data or the second data.
- storage management device 120 may record the target data and the first parity value in a log before completing updating of stripe 140 . In this way, even if an additional storage device fails, storage management device 120 can recover the data from the log information, thereby ensuring data security.
- FIG. 5 illustrates schematic diagram 500 of storage management according to yet another embodiment of the present disclosure.
- storage management device 120 may, for example, receive a request from node 110-1 to write the target data “D1′” into target storage space 520-2.
- storage management device 120 may determine that a storage device associated with target storage space 520-2 has failed, and may acquire the data “D0” in data storage space 520-1, the data “D2” in data storage space 520-3, and the data “D3” in data storage space 520-4.
- storage management device 120 may determine a third target parity value based on the second data and the target data. Specifically, storage management device 120 may determine the third target parity value based on an exclusive OR operation on the second data and the target data. Continuing with the example of FIG. 5, storage management device 120 may determine the third target parity value “PA3” based on the data “D0” in data storage space 520-1, the data “D2” in data storage space 520-3, the data “D3” in data storage space 520-4, and the target data “D1′.”
- the third target parity value “PA3” may, for example, be determined according to formula (3):
- PA3 = D1′⊕D0⊕D2⊕D3  (3)
- storage management device 120 may update the stripe with the target data and the third target parity value. For example, in the example of FIG. 5, storage management device 120 may write the target data “D1′” to the storage space that has not failed, and write the third target parity value to parity storage space 510-2; thus, updating of stripe 140 is completed.
- storage management device 120 may also set parity values associated with other nodes among the plurality of parity values to the initial value. For example, after writing the target data and the third target parity value, storage management device 120 may set the parity value “PB” stored in parity storage space 510-1 to 0.
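The degraded-write path of formula (3) can be sketched the same way: the new block's contents are folded into the parity even though its storage device has failed and cannot be written. As before, function names and block values are illustrative assumptions, with a self-contained bytewise `xor` helper:

```python
def xor(*blocks: bytes) -> bytes:
    """Bytewise exclusive OR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def degraded_write_parity(target_data, surviving_data):
    """Formula (3): PA3 = D1' ^ D0 ^ D2 ^ D3.
    The new block enters the parity so it can later be reconstructed,
    even though the device holding it has failed."""
    return xor(target_data, *surviving_data)

D0, D2, D3 = bytes([0x10]), bytes([0x22]), bytes([0x40])  # surviving data
D1n = bytes([0x33])                                       # data for the failed device
PA3 = degraded_write_parity(D1n, [D0, D2, D3])
```

A later read of the failed space can then recover D1′ by XORing the surviving data blocks with this parity.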
- storage management device 120 may also record the target data and the first parity value in the log before completing updating of stripe 140 .
- storage management device 120 may also respond to a read request.
- FIG. 6 illustrates schematic diagram 600 of storage management according to yet another embodiment of the present disclosure.
- storage management device 120 may receive a read request to read data from the second target storage space of the stripe. For example, storage management device 120 may receive the read request from node 110-1 to read data from data storage space 620-2 in stripe 140.
- storage management device 120 may acquire fourth data from other data storage spaces than the second target storage space in stripe 140. Taking FIG. 6 as an example, storage management device 120 may acquire the data “D0” in data storage space 620-1, the data “D2” in 620-3, and the data “D3” in 620-4, as well as the multiple parity values “PA” and “PB.”
- storage management device 120 may then restore the data in the second target storage space based on the fourth data and the multiple parity values, and provide the restored data as a response to the read request.
- storage management device 120 may acquire the data “D0” in data storage space 620-1, the data “D2” in 620-3, and the data “D3” in 620-4, as well as the multiple parity values “PA” and “PB,” to restore the data “D1” stored in data storage space 620-2.
- the data “D1” may, for example, be determined according to formula (4):
- D1 = D0⊕D2⊕D3⊕PA⊕PB  (4)
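Assuming, as formula (4) suggests, that the per-node parity values XOR together to the overall stripe parity (each starts at 0, and each write preserves the invariant), reconstruction is a single XOR chain over the surviving data and all parity values. A hedged sketch with invented names and toy values:

```python
def xor(*blocks: bytes) -> bytes:
    """Bytewise exclusive OR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def reconstruct(surviving_data, parities):
    """Formula (4): D1 = D0 ^ D2 ^ D3 ^ PA ^ PB."""
    return xor(*surviving_data, *parities)

# Build a consistent toy stripe: PA carries the full parity, PB is 0
# (as if only node 110-1 had ever written to the stripe).
D0, D1, D2, D3 = bytes([0x10]), bytes([0x99]), bytes([0x22]), bytes([0x40])
PA = xor(D0, D1, D2, D3)
PB = bytes(1)
restored = reconstruct([D0, D2, D3], [PA, PB])
assert restored == D1  # the unreadable block is recovered
```

The same chain works regardless of how the combined parity is split between PA and PB, which is why every parity value must participate in the restore.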
- the data reconstruction process is similar to the data recovery process, which can use data in other data storage spaces and multiple parity values to perform data reconstruction, and the specific process will not be described in detail here.
- FIG. 7 illustrates a schematic block diagram of example device 700 that can be configured to implement an embodiment of the present disclosure.
- storage management device 120 may be implemented by device 700 .
- device 700 includes central processing unit (CPU) 701, which may perform various appropriate actions and processing according to computer program instructions stored in read only memory (ROM) 702 or computer program instructions loaded into random access memory (RAM) 703 from storage unit 708.
- Various programs and data required for operations of device 700 may also be stored in RAM 703 .
- CPU 701, ROM 702, and RAM 703 are connected to each other through bus 704.
- Input/output (I/O) interface 705 is also connected to bus 704.
- a plurality of components in device 700 are connected to I/O interface 705, including: input unit 706, such as a keyboard and a mouse; output unit 707, such as various types of displays and speakers; storage unit 708, such as a magnetic disk and an optical disk; and communication unit 709, such as a network card, a modem, and a wireless communication transceiver.
- Communication unit 709 allows device 700 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
- process 200 may be implemented as a computer software program that is tangibly included in a machine-readable medium, for example, storage unit 708 .
- part or all of the computer program may be loaded and/or installed on device 700 via ROM 702 and/or communication unit 709 .
- the computer program is loaded into RAM 703 and executed by CPU 701 , one or more actions of process 200 described above may be implemented.
- the present disclosure may be a method, an apparatus, a system, and/or a computer program product.
- the computer program product may include a computer-readable storage medium on which computer-readable program instructions for performing various aspects of the present disclosure are loaded.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device.
- the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any appropriate combination of the above.
- Computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device (for example, a punch card or a raised structure in a groove with instructions stored thereon), and any suitable combination of the foregoing.
- Computer-readable storage media used herein are not to be interpreted as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber optic cables), or electrical signals transmitted via electrical wires.
- the computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
- Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, wherein the programming languages include object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the “C” language or similar programming languages.
- the computer-readable program instructions may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or a server.
- the remote computer may be connected to a user computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
- an electronic circuit, for example, a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, wherein the electronic circuit may execute the computer-readable program instructions so as to implement various aspects of the present disclosure.
- These computer-readable program instructions can be provided to a processing unit of a general-purpose computer, a special-purpose computer, or a further programmable data processing apparatus, thereby producing a machine, such that these instructions, when executed by the processing unit of the computer or the further programmable data processing apparatus, produce means (e.g., specialized circuitry) for implementing functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner; and thus the computer-readable medium having the instructions stored thereon includes an article of manufacture that includes instructions that implement various aspects of the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
- the computer-readable program instructions may also be loaded to a computer, a further programmable data processing apparatus, or a further device, so that a series of operating steps may be performed on the computer, the further programmable data processing apparatus, or the further device to produce a computer-implemented process, such that the instructions executed on the computer, the further programmable data processing apparatus, or the further device may implement the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, a program segment, or part of an instruction, the module, program segment, or part of an instruction including one or more executable instructions for implementing specified logical functions.
- functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two successive blocks may actually be executed in parallel substantially, or they may be executed in an opposite order sometimes, depending on the functions involved.
- each block in the block diagrams and/or flowcharts and a combination of blocks in the block diagrams and/or flowcharts may be implemented using a dedicated hardware-based system for executing specified functions or actions, or may be implemented using a combination of dedicated hardware and computer instructions.
Description
- This application claims priority to Chinese Patent Application No. CN202011409033.3, on file at the China National Intellectual Property Administration (CNIPA), having a filing date of Dec. 4, 2020, and having “METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR STORAGE MANAGEMENT” as a title, the contents and teachings of which are herein incorporated by reference in their entirety.
- Embodiments of the present disclosure relate to the field of computers, and more particularly, to a storage management method, a device, and a computer program product.
- With the development of data storage technologies, various data storage devices have been able to provide users with increasingly high data storage capabilities, and the data access speed has also been greatly improved. While data storage capabilities are improved, users also have increasingly high demands for data reliability and storage system response time.
- At present, more and more storage systems use redundant arrays of independent disks (RAID) to provide storage with data redundancy. In the traditional solution, when a node writes to a stripe in the RAID, it needs to lock the stripe to prevent access conflicts with other nodes. However, such locking degrades RAID performance.
- The embodiments of the present disclosure provide a solution for storage management.
- According to a first aspect of the present disclosure, a storage management method is provided. The method includes: receiving from a requesting node a write request for writing target data into a first target storage space in a redundant array of independent disks (RAID), wherein the RAID is associated with a plurality of nodes and includes a stripe, the stripe includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to the plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe; if a storage device associated with the first target storage space does not fail, acquiring first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in the parity storage space; determining a target parity value based on the target data, the first data, and the first parity value; and updating the stripe with the target data and the target parity value.
- According to a second aspect of the present disclosure, an electronic device is provided. The device includes: at least one processing unit; and at least one memory, wherein the at least one memory is coupled to the at least one processing unit and stores instructions for execution by the at least one processing unit. When executed by the at least one processing unit, the instructions cause the device to perform actions, and the actions include: receiving from a requesting node a write request for writing target data into a first target storage space in a redundant array of independent disks (RAID), wherein the RAID is associated with a plurality of nodes and includes a stripe, the stripe includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to the plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe; if a storage device associated with the first target storage space does not fail, acquiring first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in the parity storage space; determining a target parity value based on the target data, the first data, and the first parity value; and updating the stripe with the target data and the target parity value.
- In a third aspect of the present disclosure, a computer program product is provided. The computer program product is tangibly stored in a non-transitory computer storage medium and includes machine-executable instructions, wherein when run in a device, the machine-executable instructions cause the device to perform any step of the method described according to the first aspect of the present disclosure.
- The Summary of the Invention section is provided to introduce a selection of concepts in a simplified form, which will be further described in the following Detailed Description. The Summary of the Invention section is not intended to identify key features or essential features of the present disclosure, nor is it intended to limit the scope of the present disclosure.
- The above and other objectives, features, and advantages of the present disclosure will become more apparent by describing example embodiments of the present disclosure in detail with reference to the accompanying drawings, and in the example embodiments of the present disclosure, the same reference numerals generally represent the same components.
- FIG. 1 illustrates a schematic diagram of an example environment in which the embodiments of the present disclosure may be implemented;
- FIG. 2 illustrates a flowchart of a process for storage management according to an embodiment of the present disclosure;
- FIG. 3 illustrates a schematic diagram of storage management according to an embodiment of the present disclosure;
- FIG. 4 illustrates a schematic diagram of error handling according to an embodiment of the present disclosure;
- FIG. 5 illustrates a schematic diagram of storage management according to another embodiment of the present disclosure;
- FIG. 6 illustrates a schematic diagram of storage management according to yet another embodiment of the present disclosure; and
- FIG. 7 illustrates a schematic block diagram of an example device that may be configured to implement the embodiments of the present disclosure.
- Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the preferred embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be more thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art.
- As used herein, the term “including” and variations thereof mean open-ended inclusion, that is, “including but not limited to.” Unless specifically stated, the term “or” means “and/or.” The term “based on” means “based at least in part on.” The terms “one example embodiment” and “one embodiment” mean “at least one example embodiment.” The term “another embodiment” means “at least one further embodiment.” The terms “first,” “second,” and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
- In a RAID-based storage system, one RAID may include storage blocks from a plurality of storage disks, and the plurality of storage disks may also constitute a plurality of independent RAIDs. In the process of RAID-based data recovery, other storage blocks in the same RAID may be used to recover data of target storage blocks.
- As discussed above, according to a conventional solution, when a stripe in a RAID is written by a node, the node not only needs to modify the written data part, but also needs to modify the corresponding parity value. In the conventional solution, one stripe usually has only one parity value, so the node needs to lock the entire stripe during the writing process to prevent other nodes from accessing data of the stripe. However, writes often target only part of the data in the stripe, and such locking degrades the performance of the RAID.
- According to the embodiments of the present disclosure, a storage management solution is provided. In this solution, when a write request for writing target data to a first target storage space in the RAID is received from a requesting node, it is determined whether a storage device associated with the first target storage space has not failed. If the storage device associated with the first target storage space does not fail, first data stored in the first target storage space and a first parity value corresponding to the requesting node and stored in a parity storage space are acquired. Then, a target parity value is determined based on the target data, the first data, and the first parity value, and the target data and the target parity value are used to update the stripe.
- In this way, the embodiments of the present disclosure may assign corresponding parity values to different nodes. This can eliminate the need for a single node to lock other data spaces and other parity values when performing partial writes. Furthermore, the embodiments of the present disclosure can allow other nodes to execute write or read requests for other data spaces in parallel, thereby improving the efficiency of a storage system.
- The solution of the present disclosure will be described below with reference to the accompanying drawings.
- FIG. 1 illustrates example environment 100 in which the embodiments of the present disclosure may be implemented. As shown in FIG. 1, environment 100 includes storage management device 120, which is configured to manage RAID 130 coupled thereto. In addition, storage management device 120 may also be coupled with one or more nodes 110-1, 110-2 to 110-N (individually or collectively referred to as node 110) to receive an access request for RAID 130 from node 110.
- Conventionally, RAID 130 may be organized into multiple stripes, and one stripe may span multiple storage devices. For example, in a conventional 4+1 RAID 5, one stripe may be associated with five different storage devices to store data in four storage devices and store parity values in one storage device.
- As shown in FIG. 1, unlike the conventional 4+1 RAID 5, stripe 140 in RAID 130 may span six different storage devices: four of them are used for storing data, and the remaining two (storage devices 152 and 162) are used for storing the parity values.
- In some implementations, the parity values may correspond to multiple nodes 110 one to one. For example, a parity value PA may correspond to node 110-1, and a parity value PB may correspond to node 110-2. It should be understood that the specific RAID type and the number of parity values shown in FIG. 1 are only illustrative. Those skilled in the art can understand that a corresponding number of parity values may be set for any appropriate RAID type based on the number of nodes.
- By setting multiple parity values in one stripe, the embodiments of the present disclosure can allow parallel access to the same stripe. The following will describe the access process of the RAID structure based on the multiple parity values in conjunction with FIGS. 2 to 5.
- FIG. 2 illustrates a flowchart of process 200 for storage management according to some embodiments of the present disclosure. Process 200 may be implemented, for example, by storage management device 120 shown in FIG. 1.
- As shown in FIG. 2, in block 202, storage management device 120 receives a write request from a requesting node to write target data to a first target storage space in RAID 130, where RAID 130 is associated with multiple nodes 110 and includes stripe 140, stripe 140 includes a data storage space for storing data and a parity storage space for storing a plurality of parity values corresponding to a plurality of nodes, and the first target storage space is at least a part of the data storage space of the stripe.
- The process of block 202 will be described below in combination with FIG. 3. FIG. 3 illustrates schematic diagram 300 of storage management according to the embodiments of the present disclosure. As shown in FIG. 3, storage management device 120 may, for example, receive a write request from a requesting node (for example, node 110-1 in FIG. 1). The write request may be used, for example, to write target data “D1′” and “D2′” into corresponding target storage spaces 320-2 and 320-3.
- As shown in FIG. 3, stripe 140 includes four data storage spaces 320-1, 320-2, 320-3, and 320-4 for storing data. In addition, stripe 140 also includes parity storage space 310-1 for storing a parity value PB and parity storage space 310-2 for storing a parity value PA.
- Additionally, the parity value “PA” may be associated with node 110-1, and the parity value “PB” may be associated with node 110-2, for example. When stripe 140 is initialized, all data storage spaces and parity storage spaces in stripe 140 may be set to initial values (for example, 0).
- In block 204, storage management device 120 determines whether a storage device associated with the first target storage space has failed. If the storage device has not failed, process 200 proceeds to block 206. In block 206, storage management device 120 acquires the first data stored in the first target storage space and the first parity value corresponding to the requesting node and stored in the parity storage space.
- It should be understood that the storage device having not failed described here may mean that an entire storage disk corresponding to the storage space has not failed, or that a physical storage block corresponding to the storage space has not failed.
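A minimal sketch of the stripe layout just described, with four data storage spaces and one parity value per node, all set to the initial value 0 (the dictionary layout and the 8-byte block size are invented for the illustration and are not part of the patent):

```python
BLOCK = 8  # bytes per storage space, an arbitrary size for the sketch

def xor(*blocks: bytes) -> bytes:
    """Byte-wise exclusive OR of equally sized blocks."""
    out = bytearray(BLOCK)
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Stripe 140: data storage spaces 320-1..320-4, plus parity storage
# space 310-1 (PB, node 110-2) and 310-2 (PA, node 110-1), all zeroed.
stripe = {name: bytes(BLOCK) for name in
          ("320-1", "320-2", "320-3", "320-4", "310-1", "310-2")}

# With every space zeroed, the XOR of all data and all parity values is 0.
assert xor(*stripe.values()) == bytes(BLOCK)
```

With everything initialized to 0, the exclusive OR over the whole stripe is 0, which is the invariant the write paths described later maintain.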
Storage management device 120 may acquire operating information of a storage disk corresponding to RAID 130 to determine whether the corresponding storage device fails.
- Continuing with the example of FIG. 3, storage management device 120 may determine that storage device 156 corresponding to storage space 320-2 and storage device 158 corresponding to storage space 320-3 have not failed. Subsequently, storage management device 120 may acquire first data “D1” and “D2” stored in storage spaces 320-2 and 320-3, and acquire the first parity value “PA” corresponding to node 110-1 and stored in parity storage space 310-2.
- Continuing to refer to FIG. 2, in block 208, storage management device 120 determines a first target parity value based on the target data, the first data, and the first parity value. In some implementations, storage management device 120 may determine the target parity value based on an exclusive OR operation on the target data, the first data, and the first parity value.
- In the example of FIG. 3, storage management device 120 may, for example, determine the first target parity value “PA1” based on the exclusive OR operation on the target data “D1′” and “D2′,” the first data “D1” and “D2,” and the first parity value “PA.” The first target parity value “PA1” may, for example, be determined according to formula (1):
- PA1=D1′⊕D2′⊕D1⊕D2⊕PA (1)
- where ⊕ denotes the exclusive OR operation.
- Continuing to refer to FIG. 2, in block 210, storage management device 120 uses the target data and the target parity value to update the stripe. Specifically, storage management device 120 may write the target data into the first target storage space and replace the first parity value with the first target parity value.
- Continuing with the example of FIG. 3, storage management device 120 may write the target data “D1′” and “D2′” into corresponding storage spaces 320-2 and 320-3, and write the determined first target parity value “PA1” into parity storage space 310-2 to replace the first parity value “PA.”
- In the above process, storage management device 120 does not need to lock unaccessed data storage spaces and parity storage spaces for other nodes. For example, in the example of FIG. 3, other nodes 110 may still perform reading or writing to data storage spaces 320-1 and/or 320-4.
- As an example, another node 110-2 may write to data storage space 320-1 in parallel, and update the parity value “PB” corresponding to node 110-2 based on the same method as process 200.
- Based on the method discussed above, by setting multiple parity values associated with different nodes, the embodiments of the present disclosure can allow different nodes to initiate access requests to different data storage spaces of the same stripe in parallel without causing conflict. In this way, the performance of the RAID can be improved.
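The parity arithmetic just walked through can be sketched as follows: update_parity follows formula (1), reading and rewriting only the requesting node's own parity, while recompute_parity mirrors the full recomputation used later in formulas (2) and (3). The helper names and the 8-byte block size are assumptions for the illustration, not the patent's implementation:

```python
import os

BLOCK = 8  # illustrative block size in bytes

def xor(*blocks: bytes) -> bytes:
    """Byte-wise exclusive OR of equally sized blocks."""
    out = bytearray(BLOCK)
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def update_parity(new_data, old_data, old_parity):
    # Formula (1): PA1 = D1' xor D2' xor D1 xor D2 xor PA.
    # Only the requesting node's own parity is read and rewritten.
    return xor(*new_data, *old_data, old_parity)

def recompute_parity(target_data, other_data):
    # Formulas (2)/(3): rebuild the parity from the target data and the
    # data in the storage spaces that are not being written.
    return xor(*target_data, *other_data)

# A stripe whose exclusive OR over data and both parities is 0:
d0, d1, d2, d3 = (os.urandom(BLOCK) for _ in range(4))
pb = bytes(BLOCK)             # node 110-2's parity, still at its initial value
pa = xor(d0, d1, d2, d3, pb)  # node 110-1's parity

# Node 110-1 overwrites D1 and D2 with D1' and D2':
d1n, d2n = os.urandom(BLOCK), os.urandom(BLOCK)
pa1 = update_parity([d1n, d2n], [d1, d2], pa)
assert xor(d0, d1n, d2n, d3, pa1, pb) == bytes(BLOCK)  # invariant preserved

# The same value obtained the formula (2) way (PB is still 0 here):
assert recompute_parity([d1n, d2n], [d0, d3]) == pa1
```

Note that update_parity touches only the two written data spaces and one parity space, which is what lets other nodes access the rest of the stripe in parallel.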
- In some implementations, an error may occur when
- In some implementations, an error may occur when storage management device 120 writes the target data to the target storage space. FIG. 4 illustrates schematic diagram 400 of error handling according to an embodiment of the present disclosure.
- As shown in FIG. 4, target storage space 320-2 and target storage space 320-3 have been partially updated, but the writing of the target data fails. In this case, storage management device 120 may re-determine a second target parity value PA2.
- In some implementations, storage management device 120 may acquire second data from other data storage spaces than the first target storage space in the stripe. Specifically, as shown in FIG. 4, storage management device 120 may acquire data “D0” in data storage space 320-1 and data “D3” in data storage space 320-4.
- Additionally, storage management device 120 may determine the second target parity value based on the second data and the target data. As shown in FIG. 4, storage management device 120 may determine the second target parity value “PA2” based on an exclusive OR operation on the target data “D1′” and “D2′,” the data “D0” in data storage space 320-1, and the data “D3” in data storage space 320-4. The second target parity value “PA2” may, for example, be determined according to formula (2):
- PA2=D1′⊕D2′⊕D0⊕D3 (2)
- In some implementations, storage management device 120 may use the target data and the second target parity value to update stripe 140 again. Taking FIG. 4 as an example, storage management device 120 may continue to write the target data “D1′” and “D2′” into target storage spaces 320-2 and 320-3, and write the re-determined second target parity value “PA2” into parity storage space 310-2.
- In some implementations, in order to ensure that the exclusive OR value of all stored data and all parity data is 0, storage management device 120 may also set parity values associated with other nodes among the plurality of parity values to the initial value. For example, storage management device 120 may set the parity value “PB” stored in parity storage space 310-1 to 0.
- In some implementations, in the event of a failure, storage management device 120 needs to lock the stripe to prevent other nodes from modifying other parity data or the second data.
- In some implementations, if a storage device associated with other data storage spaces than the first target storage space in stripe 140 fails, storage management device 120 may record the target data and the first parity value in a log before completing the updating of stripe 140. In this way, even if an additional storage device fails, storage management device 120 can recover the data from the log information, thereby ensuring data security.
- Continuing to refer to FIG. 2, if it is determined in block 204 that the storage device associated with the first target storage space fails, process 200 proceeds to block 212. In block 212, storage management device 120 may acquire third data from other data storage spaces than the first target storage space in the stripe. The process of block 212 will be described below in combination with FIG. 5. FIG. 5 illustrates schematic diagram 500 of storage management according to yet another embodiment of the present disclosure.
- As shown in FIG. 5, storage management device 120 may, for example, receive a request from node 110-1 to write the target data “D1′” into target storage space 520-2. For example, storage management device 120 may determine that a storage device associated with target storage space 520-2 has failed, and may acquire the data “D0” in data storage space 520-1, the data “D2” in data storage space 520-3, and the data “D3” in data storage space 520-4.
- In block 214, storage management device 120 may determine a third target parity value based on the third data and the target data. Specifically, storage management device 120 may determine the third target parity value based on an exclusive OR operation on the third data and the target data. Continuing with the example of FIG. 5, storage management device 120 may determine the third target parity value “PA3” based on the data “D0” in data storage space 520-1, the data “D2” in data storage space 520-3, the data “D3” in data storage space 520-4, and the target data “D1′.” The third target parity value “PA3” may, for example, be determined according to formula (3):
- PA3=D1′⊕D0⊕D2⊕D3 (3)
- In block 216, storage management device 120 may update the stripe with the target data and the third target parity value. For example, in the example of FIG. 5, storage management device 120 may write the target data “D1′” to the storage space that has not failed, and write the third target parity value to parity storage space 510-2; thus, the updating of stripe 140 is completed.
- In some implementations, in order to ensure that the exclusive OR value of all stored data and all parity data is 0, storage management device 120 may also set parity values associated with other nodes among the plurality of parity values to the initial value. For example, after writing the target data and the third target parity value, storage management device 120 may set the parity value “PB” stored in parity storage space 310-1 to 0.
- In some implementations, in order to prevent other storage devices from causing data loss due to failure, storage management device 120 may also record the target data and the first parity value in the log before completing the updating of stripe 140.
- In some implementations, storage management device 120 may also respond to a read request. FIG. 6 illustrates schematic diagram 600 of storage management according to yet another embodiment of the present disclosure.
- As shown in FIG. 6, storage management device 120 may receive a read request to read data from the second target storage space of the stripe. For example, storage management device 120 may receive the read request from node 110-1 to read data from data storage space 620-2 in stripe 140.
- In some implementations, if the storage device associated with the second target storage space fails, storage management device 120 may acquire fourth data from other data storage spaces than the second target storage space in stripe 140. Taking FIG. 6 as an example, storage management device 120 may acquire the data “D0” in data storage space 620-1, the data “D2” in data storage space 620-3, and the data “D3” in data storage space 620-4, as well as the multiple parity values “PA” and “PB.”
- In some implementations, storage management device 120 may then restore the data in the second target storage space based on the fourth data and the multiple parity values, and provide the restored data as a response to the read request. Continuing with the example of FIG. 6, storage management device 120 may use the data “D0” in data storage space 620-1, the data “D2” in data storage space 620-3, and the data “D3” in data storage space 620-4, as well as the multiple parity values “PA” and “PB,” to restore the data “D1” stored in data storage space 620-2. The data “D1” may, for example, be determined according to formula (4):
- D1=PA⊕PB⊕D0⊕D2⊕D3 (4)
- It should be understood that the data reconstruction process is similar to the data recovery process: data in other data storage spaces and the multiple parity values may be used to perform data reconstruction, and the specific process will not be described in detail here.
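Formula (4) amounts to XOR-ing the surviving data with both parity values. A sketch under the same kind of illustrative assumptions (8-byte blocks, invented helper name; not the patent's implementation):

```python
import os

BLOCK = 8  # illustrative block size in bytes

def xor(*blocks: bytes) -> bytes:
    """Byte-wise exclusive OR of equally sized blocks."""
    out = bytearray(BLOCK)
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe in which both nodes have written, so PA and PB are both
# non-trivial; PA is chosen so the XOR over the whole stripe is 0.
d0, d1, d2, d3 = (os.urandom(BLOCK) for _ in range(4))
pb = os.urandom(BLOCK)
pa = xor(d0, d1, d2, d3, pb)

# Formula (4): with the device holding D1 failed, D1 is restored from
# the surviving data (D0, D2, D3) and the multiple parity values.
restored = xor(pa, pb, d0, d2, d3)
assert restored == d1
```

Because the whole-stripe XOR is kept at 0, both per-node parity values must be folded in; using only one of them would leave the other node's contribution in the result.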
-
FIG. 7 illustrates a schematic block diagram of example device 700 that can be configured to implement an embodiment of the present disclosure. For example, storage management device 120 according to the embodiments of the present disclosure may be implemented by device 700. As shown in the figure, device 700 includes central processing unit (CPU) 701, which may perform various appropriate actions and processing according to computer program instructions stored in read-only memory (ROM) 702 or computer program instructions loaded into random access memory (RAM) 703 from storage unit 708. Various programs and data required for the operations of device 700 may also be stored in RAM 703. CPU 701, ROM 702, and RAM 703 are connected to each other through bus 704. Input/output (I/O) interface 705 is also connected to bus 704.
- A plurality of components in device 700 are connected to I/O interface 705, including: input unit 706, such as a keyboard and a mouse; output unit 707, such as various types of displays and speakers; storage unit 708, such as a magnetic disk and an optical disk; and communication unit 709, such as a network card, a modem, and a wireless communication transceiver. Communication unit 709 allows device 700 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
- The various processes and processing described above, for example, process 200, may be performed by processing unit 701. For example, in some embodiments, process 200 may be implemented as a computer software program that is tangibly included in a machine-readable medium, for example, storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed on device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into RAM 703 and executed by CPU 701, one or more actions of process 200 described above may be implemented.
- The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. For example, the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any appropriate combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device (for example, a punch card or a raised structure in a groove with instructions stored thereon), and any suitable combination of the foregoing. Computer-readable storage media used herein are not to be interpreted as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber optic cables), or electrical signals transmitted via electrical wires.
- The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
- Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, wherein the programming languages include object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the “C” language or similar programming languages. The computer-readable program instructions may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or a server. In a case where a remote computer is involved, the remote computer may be connected to a user computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider). In some embodiments, an electronic circuit, for example, a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, wherein the electronic circuit may execute the computer-readable program instructions so as to implement various aspects of the present disclosure.
- Various aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of the method, the apparatus (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that each block in the flowcharts and/or block diagrams as well as a combination of blocks in the flowcharts and/or block diagrams may be implemented using computer-readable program instructions.
- These computer-readable program instructions can be provided to a processing unit of a general-purpose computer, a special-purpose computer, or a further programmable data processing apparatus, thereby producing a machine, such that these instructions, when executed by the processing unit of the computer or the further programmable data processing apparatus, produce means (e.g., specialized circuitry) for implementing functions/actions specified in one or more blocks in the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner; and thus the computer-readable medium having instructions stored includes an article of manufacture that includes instructions that implement various aspects of the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
- The computer-readable program instructions may also be loaded to a computer, a further programmable data processing apparatus, or a further device, so that a series of operating steps may be performed on the computer, the further programmable data processing apparatus, or the further device to produce a computer-implemented process, such that the instructions executed on the computer, the further programmable data processing apparatus, or the further device may implement the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
- The flowcharts and block diagrams in the drawings illustrate the architectures, functions, and operations of possible implementations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which includes one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two successive blocks may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and any combination of blocks in the block diagrams and/or flowcharts, may be implemented using a dedicated hardware-based system that executes the specified functions or actions, or using a combination of dedicated hardware and computer instructions.
- Various implementations of the present disclosure have been described above. The foregoing description is illustrative rather than exhaustive and is not limited to the disclosed implementations. Numerous modifications and changes will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated implementations. The terms used herein were chosen to best explain the principles and practical applications of the various implementations, or the technical improvements over technologies found on the market, or to enable other persons of ordinary skill in the art to understand the implementations disclosed herein.
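The claims and cited prior art concern stripes, target data, and parity values in a storage array. As a purely illustrative sketch (not the patented method itself), the following assumes simple XOR parity as used in RAID-5-style stripes; the function names `compute_parity` and `rebuild_block` are hypothetical helpers, not identifiers from this disclosure.

```python
def compute_parity(blocks):
    """XOR all data blocks of a stripe to obtain the parity block.

    All blocks are assumed to have equal length (one stripe unit).
    """
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_block(surviving_blocks, parity):
    """Recover a single lost data block.

    Because XOR is its own inverse, the missing block equals the XOR
    of the parity block with all surviving data blocks.
    """
    return compute_parity(list(surviving_blocks) + [parity])

# Example: a three-block stripe; losing any one block is recoverable.
stripe = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = compute_parity(stripe)
assert rebuild_block(stripe[1:], parity) == stripe[0]
```

This illustrates only the redundancy invariant that parity-based storage management maintains per stripe; the disclosure's actual claims (e.g., how a target parity value is journaled or a target storage space is selected) are defined by the claim language, not by this sketch.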
Claims (23)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011409033.3 | 2020-12-04 | ||
CN202011409033.3A CN114594900A (en) | 2020-12-04 | 2020-12-04 | Method, apparatus and computer program product for storage management |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220179743A1 true US20220179743A1 (en) | 2022-06-09 |
US11366719B1 US11366719B1 (en) | 2022-06-21 |
Family
ID=81812565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/204,216 Active US11366719B1 (en) | 2020-12-04 | 2021-03-17 | Method, device and computer program product for storage management |
Country Status (2)
Country | Link |
---|---|
US (1) | US11366719B1 (en) |
CN (1) | CN114594900A (en) |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU653670B2 (en) * | 1992-03-10 | 1994-10-06 | Data General Corporation | Improvements for high availability disk arrays |
US5548711A (en) * | 1993-08-26 | 1996-08-20 | Emc Corporation | Method and apparatus for fault tolerant fast writes through buffer dumping |
US6446237B1 (en) * | 1998-08-04 | 2002-09-03 | International Business Machines Corporation | Updating and reading data and parity blocks in a shared disk system |
JP2001043031A (en) * | 1999-07-30 | 2001-02-16 | Toshiba Corp | Disk array controller provided with distributed parity generating function |
US7308599B2 (en) * | 2003-06-09 | 2007-12-11 | Hewlett-Packard Development Company, L.P. | Method and apparatus for data reconstruction after failure of a storage device in a storage array |
US7210019B2 (en) * | 2004-03-05 | 2007-04-24 | Intel Corporation | Exclusive access for logical blocks |
US7519629B2 (en) * | 2004-09-30 | 2009-04-14 | International Business Machines Corporation | System and method for tolerating multiple storage device failures in a storage system with constrained parity in-degree |
JP2006285889A (en) * | 2005-04-05 | 2006-10-19 | Sony Corp | Data storage device, reconstruction control device, reconstruction control method, program and storage medium |
EP1770492B1 (en) * | 2005-08-01 | 2016-11-02 | Infortrend Technology, Inc. | A method for improving writing data efficiency and storage subsystem and system implementing the same |
US7546302B1 (en) * | 2006-11-30 | 2009-06-09 | Netapp, Inc. | Method and system for improved resource giveback |
JP2008225616A (en) * | 2007-03-09 | 2008-09-25 | Hitachi Ltd | Storage system, remote copy system and data restoration method |
US8825949B2 (en) * | 2009-01-15 | 2014-09-02 | Lsi Corporation | Locking in raid storage systems |
US8799705B2 (en) | 2012-01-04 | 2014-08-05 | Emc Corporation | Data protection in a random access disk array |
US8862818B1 (en) | 2012-09-27 | 2014-10-14 | Emc Corporation | Handling partial stripe writes in log-structured storage |
US10209904B2 (en) | 2013-04-09 | 2019-02-19 | EMC IP Holding Company LLC | Multiprocessor system with independent direct access to bulk solid state memory resources |
US10437691B1 (en) * | 2017-03-29 | 2019-10-08 | Veritas Technologies Llc | Systems and methods for caching in an erasure-coded system |
US10691354B1 (en) | 2018-01-31 | 2020-06-23 | EMC IP Holding Company LLC | Method and system of disk access pattern selection for content based storage RAID system |
CN110413205B (en) * | 2018-04-28 | 2023-07-07 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer readable storage medium for writing to disk array |
CN111124738B (en) * | 2018-10-31 | 2023-08-18 | 伊姆西Ip控股有限责任公司 | Data management method, apparatus and computer program product for redundant array of independent disks |
- 2020-12-04: CN application CN202011409033.3A (publication CN114594900A), status: Pending
- 2021-03-17: US application US17/204,216 (patent US11366719B1), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN114594900A (en) | 2022-06-07 |
US11366719B1 (en) | 2022-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11163472B2 (en) | Method and system for managing storage system | |
US11003556B2 (en) | Method, device and computer program product for managing storage system | |
CN108733314B (en) | Method, apparatus, and computer-readable storage medium for Redundant Array of Independent (RAID) reconstruction | |
US10824361B2 (en) | Changing data reliability type within a storage system | |
US11074146B2 (en) | Method, device and computer program product for managing redundant arrays of independent drives | |
US11449400B2 (en) | Method, device and program product for managing data of storage device | |
US10922201B2 (en) | Method and device of data rebuilding in storage system | |
US11314594B2 (en) | Method, device and computer program product for recovering data | |
JP2021522577A (en) | Host-aware update write method, system, and computer program | |
CN110121694B (en) | Log management method, server and database system | |
US11385805B2 (en) | Method, electronic device and computer program product for managing storage unit | |
US11422909B2 (en) | Method, device, and storage medium for managing stripe in storage system | |
US11347418B2 (en) | Method, device and computer program product for data processing | |
US11579975B2 (en) | Method, device, and computer readable storage medium for managing redundant array of independent disks | |
US11366719B1 (en) | Method, device and computer program product for storage management | |
US10664346B2 (en) | Parity log with by-pass | |
US11747990B2 (en) | Methods and apparatuses for management of raid | |
US11620080B2 (en) | Data storage method, device and computer program product | |
US11163642B2 (en) | Methods, devices and computer readable medium for managing a redundant array of independent disks | |
US20200341911A1 (en) | Method, device, and computer program product for managing storage system | |
US20130110789A1 (en) | Method of, and apparatus for, recovering data on a storage system | |
US20230333929A1 (en) | Method, electronic device, and computer program product for accessing data of raid | |
US11023158B2 (en) | Constraining placement of replica segment pairs among device pairs based on coding segment count | |
US11269530B2 (en) | Method for storage management, electronic device and computer program product | |
US11429287B2 (en) | Method, electronic device, and computer program product for managing storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MA, CHUN;HAN, GENG;ZHUO, BAOTE;AND OTHERS;REEL/FRAME:055848/0437 Effective date: 20210301 |
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:056250/0541 Effective date: 20210514 |
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE MISSING PATENTS THAT WERE ON THE ORIGINAL SCHEDULED SUBMITTED BUT NOT ENTERED PREVIOUSLY RECORDED AT REEL: 056250 FRAME: 0541. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:056311/0781 Effective date: 20210514 |
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:056295/0280 Effective date: 20210513 |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:056295/0124 Effective date: 20210513 |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:056295/0001 Effective date: 20210513 |
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058297/0332 Effective date: 20211101 |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058297/0332 Effective date: 20211101 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062021/0844 Effective date: 20220329 |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062021/0844 Effective date: 20220329 |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0124);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0012 Effective date: 20220329 |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0124);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0012 Effective date: 20220329 |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0280);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0255 Effective date: 20220329 |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (056295/0280);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0255 Effective date: 20220329 |