CN114415979A - Storage device processing method, computer equipment and storage device

Info

Publication number: CN114415979A
Application number: CN202210318342.2A
Authority: CN (China)
Prior art keywords: disk, disk array, sub-disk, spare, array
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN114415979B (granted publication)
Inventors: 滕开恩, 魏齐良, 周健, 马东星
Current and original assignee: Zhejiang Dahua Technology Co Ltd (the listed assignees may be inaccurate)
Application filed by Zhejiang Dahua Technology Co Ltd; priority to CN202210318342.2A; published as CN114415979A; granted and published as CN114415979B.

Classifications

    • G — Physics; G06 — Computing, calculating or counting; G06F — Electric digital data processing
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 — Interfaces specially adapted for storage systems
    • G06F3/0608 — Interfaces specifically adapted to achieve a particular effect: saving storage space on storage systems
    • G06F3/0629 — Interfaces making use of a particular technique: configuration or reconfiguration of storage systems
    • G06F3/0689 — Interfaces adopting a particular infrastructure (in-line storage system, plurality of storage devices): disk arrays, e.g. RAID, JBOD

Abstract

The application discloses a processing method for a storage device, a computer device, and a storage device. The method includes the following steps: receiving a processing request for a to-be-processed disk array in the storage device; determining a spare disk array for the to-be-processed disk array based on the grades of the other disk arrays in the storage device, where a grade represents a disk array's fault-tolerance capability; and replacing a sub-disk of the to-be-processed disk array with a sub-disk of the spare disk array. This scheme can improve the storage reliability of the storage device.

Description

Storage device processing method, computer equipment and storage device
Technical Field
The present application relates to the field of storage technologies, and in particular, to a processing method for a storage device, a computer device, and a storage device.
Background
With the rapid development of information technologies such as the mobile internet, cloud computing, and the internet of things, the demand for data transmission and storage keeps growing. Because of their large storage capacity, disk arrays are widely used for data transmission and storage.
A Redundant Array of Independent Disks (RAID), also called a disk array or simply an array, is a disk system that combines multiple independent disks into a disk group in different ways, providing better storage performance and higher reliability than a single disk. RAID technology enhances data security while providing better storage service, and offers good data-recovery capability.
At present, after a disk in a disk array fails, the storage performance of the disk array is reduced; other disks may then be added manually to replace the failed disk. However, if the failed disk cannot be replaced in time, data in the disk array may be lost.
Disclosure of Invention
The main technical problem addressed by this application is to provide a processing method for a storage device, a computer device, and a storage device that can improve the storage reliability of the storage device.
To solve the above problem, a first aspect of the present application provides a processing method for a storage device, the method including: receiving a processing request for a to-be-processed disk array in the storage device; determining a spare disk array for the to-be-processed disk array based on the grades of the other disk arrays in the storage device, where a grade represents a disk array's fault-tolerance capability; and replacing a sub-disk of the to-be-processed disk array with a sub-disk of the spare disk array.
To solve the above problem, a second aspect of the present application provides a computer device, which includes a memory and a processor coupled to each other, where the memory stores program data and the processor is configured to execute the program data to implement any step of the processing method of the storage device.
To solve the above problem, a third aspect of the present application provides a storage device storing program data executable by a processor, the program data being used to implement any step of the processing method of the storage device.
In the above scheme, a processing request for the to-be-processed disk array in the storage device is received; a spare disk array is determined for the to-be-processed disk array based on the grades of the other disk arrays in the storage device, that is, based on their fault-tolerance capability; and a sub-disk of the spare disk array is used to replace a sub-disk of the to-be-processed disk array. The sub-disk of the to-be-processed disk array can thus be replaced without increasing the number of sub-disks in the storage device.
In addition, when a sub-disk of another disk array is not serving as a spare sub-disk, it still works as an ordinary sub-disk of that disk array. This improves the utilization of the storage space in the storage device, reduces wasted storage space, and improves the storage reliability of the disk arrays in the storage device.
Drawings
To illustrate the technical solutions in the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort. Wherein:
FIG. 1 is a schematic flowchart of a first embodiment of the processing method of a storage device according to the present application;
FIG. 2 is a schematic structural diagram of a first embodiment of a storage device according to the present application;
FIG. 3 is a schematic flowchart of a second embodiment of the processing method of a storage device according to the present application;
FIG. 4 is a schematic flowchart of a third embodiment of the processing method of a storage device according to the present application;
FIG. 5 is a schematic flowchart of an embodiment of step S12 in FIG. 1;
FIG. 6 is a schematic structural diagram of a second embodiment of the storage device of the present application;
FIG. 7 is a schematic block diagram of an embodiment of a computer device of the present application;
FIG. 8 is a schematic structural diagram of a third embodiment of the storage device of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The inventor of the present application found that a small number of dedicated spare disks can be provided for a disk array, so that when a disk of the disk array fails, the failed disk can be replaced by a dedicated spare disk. However, when the number of failed sub-disks of the disk array exceeds the number of spare disks, data in the disk array may be lost if new disks cannot be supplied in time. In addition, before a dedicated spare disk is used to replace a failed sub-disk of the disk array, it sits idle, so the utilization of the disk array's storage space is low and storage space is wasted.
In order to solve the above problems, the present application provides the following embodiments, each of which is specifically described below.
Referring to fig. 1, fig. 1 is a schematic flowchart of a first embodiment of the processing method of a storage device according to the present application. The method may include the following steps:
S11: Receive a processing request for a to-be-processed disk array in a storage device.
The storage device contains disk arrays of at least two different grades, where a grade represents the fault-tolerance capability of a disk array. For example, RAID levels include RAID0, RAID1, RAID5, RAID6, RAID01, RAID10, RAID50, RAID60, and so on. Disk arrays of different levels provide different performance, security, fault tolerance, and so on.
After an independent sub-disk of a fault-tolerant disk array fails or goes missing, the data on the lost disk can still be recovered from the redundant or parity data on the remaining sub-disks, preserving the integrity and reliability of the RAID data. RAID levels with different fault-tolerance capabilities tolerate different numbers of failed or missing sub-disks while keeping the data intact: RAID5, for example, allows at most one sub-disk to be missing without data loss, while RAID6 allows at most two.
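As an illustrative sketch (not part of the patented method), the relation between RAID level and tolerated sub-disk loss described above can be written as a simple lookup; the values for RAID5 and RAID6 follow the examples in the text, and the other entries are standard RAID properties:

```python
# Maximum number of sub-disks that may fail or go missing without data loss,
# per RAID level. RAID5/RAID6 values follow the text; others are standard.
MAX_TOLERATED_FAILURES = {
    "RAID0": 0,  # striping only, no redundancy
    "RAID1": 1,  # mirroring
    "RAID5": 1,  # single distributed parity
    "RAID6": 2,  # double distributed parity
}

def data_intact(level: str, lost_subdisks: int) -> bool:
    """True if an array of `level` keeps its data after losing `lost_subdisks`."""
    return lost_subdisks <= MAX_TOLERATED_FAILURES[level]
```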
In some embodiments, after an independent sub-disk of a RAID fails or goes missing, that RAID may be treated as the to-be-processed disk array; alternatively, a RAID that needs to apply for sub-disk expansion or the like may be treated as the to-be-processed disk array.
A processing request for the to-be-processed disk array in the storage device is received. The request may be a request for a spare sub-disk and may further include the number of spare sub-disks requested, capacity requirements, and so on.
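A minimal sketch of such a processing request, assuming hypothetical field names (the patent text does not specify a concrete format):

```python
from dataclasses import dataclass

@dataclass
class SpareRequest:
    """Request for spare sub-disks from a to-be-processed disk array."""
    array_id: str          # identifier of the requesting disk array (hypothetical)
    count: int             # number of spare sub-disks requested
    min_capacity_gb: int   # capacity requirement per spare sub-disk

req = SpareRequest(array_id="raid5_0", count=1, min_capacity_gb=4000)
```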
S12: and determining a spare disk array for the disk array to be processed based on the grades of other disk arrays in the storage device, wherein the grades represent the fault-tolerant capability of other disk arrays.
When the pending disk array needs to request a spare sub-disk, the sub-disks of other disk arrays in the storage device may be used as spare sub-disks. And selecting the corresponding disk array as a spare disk array for the disk to be processed from other disk arrays.
When the spare disk array is selected, the spare disk array may be determined for the pending disk array based on the rank of other disk arrays in the storage device, i.e., based on the fault tolerance of the other disk arrays.
In some embodiments, the disk array with the highest rank among the other disk arrays may be used as the spare disk array.
In some embodiments, the disk array that currently has the largest number of assignable sub-disks among the other disk arrays may be used as the spare disk array, where the number of assignable sub-disks is related to the grade of the disk array.
The number of assignable sub-disks is the number of sub-disks of the disk array that may still fail or go missing. For example, if the maximum number of sub-disks allowed to fail or go missing in the disk array is a, where a is an integer and a is greater than or equal to 2, the number of assignable sub-disks is a; if one sub-disk of the disk array has already failed or gone missing, the number of assignable sub-disks is a-1.
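The bookkeeping in this example reduces to a one-line computation; a sketch under the assumptions above, where a is the maximum number of failures the array's grade tolerates:

```python
def assignable_subdisks(max_allowed_failures: int, failed: int) -> int:
    """Sub-disks the array can still lend as spares: the maximum number of
    failures its grade tolerates (the `a` in the text) minus the failures
    already present in the array."""
    if not 0 <= failed <= max_allowed_failures:
        raise ValueError("failed count out of range for this grade")
    return max_allowed_failures - failed
```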
S13: and replacing the sub-disk of the disk array to be processed by using the sub-disk of the spare disk array.
A sub-disk of the spare disk array can be used to replace a sub-disk of the to-be-processed disk array.
In some embodiments, a sub-disk in the spare disk array may serve as the spare sub-disk for a failed or missing sub-disk of the to-be-processed disk array. Data reconstruction is then performed with the spare sub-disk acting as a sub-disk of the to-be-processed disk array, so that the spare sub-disk replaces the failed or missing sub-disk.
After the sub-disk of the spare disk array is taken as the spare sub-disk of the to-be-processed disk array, a data-reconstruction process can be started for the to-be-processed disk array: the data of the failed sub-disk is computed from the data stored on the other sub-disks of the to-be-processed disk array and written to the spare sub-disk. Once data reconstruction is complete, the spare sub-disk automatically takes over the work of the failed sub-disk.
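For a parity-based array such as RAID5, the reconstruction described above amounts to XOR-ing the surviving sub-disks with the parity; a minimal illustration of the arithmetic, not the patent's implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (RAID5-style parity math)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe across three data sub-disks; the parity block is their XOR.
data = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = xor_blocks(data)

# If the second sub-disk fails, its contents are recomputed from the
# surviving sub-disks plus parity and written to the spare sub-disk.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```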
In this embodiment, a processing request for the to-be-processed disk array in the storage device is received; the spare disk array is determined for the to-be-processed disk array based on the grades of the other disk arrays in the storage device, that is, based on their fault-tolerance capability; and a sub-disk of the spare disk array replaces a sub-disk of the to-be-processed disk array. The sub-disk of the to-be-processed disk array can therefore be replaced without increasing the number of sub-disks in the storage device.
In some embodiments, the storage device has no disks dedicated solely to hot-spare use: the spare disk array whose sub-disks replace those of the to-be-processed disk array is also used to store data. That is, the spare disk array determined for the to-be-processed disk array may be another disk array currently storing data, another disk array that has spare sub-disks, or a disk array whose sub-disks provide fault tolerance; this application does not limit it.
In addition, when a sub-disk of another disk array is not serving as a spare sub-disk, it works as an ordinary sub-disk of that disk array; in other words, no fixed, single-purpose spare sub-disks are provided. This improves the utilization of the storage space in the storage device, reduces wasted storage space, and improves the storage reliability of the disk arrays in the storage device.
As an example, referring to fig. 2, the storage apparatus 100 may contain disk arrays of at least two different levels, taking a disk array 101 (e.g., RAID5) and a disk array 102 (e.g., RAID6), both with hot-spare capability, as an example; no sub-disk in the storage apparatus 100 exists independently solely for spare use. RAID5 allows at most 1 sub-disk to fail or go missing, and RAID6 allows at most 2. If a sub-disk in RAID5 fails, a sub-disk in RAID6 may serve as a global hot-spare disk for RAID5, whose data fault tolerance is lower; that is, a spare sub-disk is provided for the lower-grade RAID5. Compared with providing independent spare sub-disks, this scheme gives the storage apparatus 100 higher data reliability with the same total number of sub-disks and equal sub-disk space utilization.
In some embodiments, before step S11 or S12, for example before determining a spare disk array for the to-be-processed disk array based on the grades of the other disk arrays in the storage device, disk arrays of at least two different grades may be set up in the storage device, where at least one of the other disk arrays has a grade higher than that of the to-be-processed disk array. For example, at least one of the disk arrays of the different grades may allow a maximum of two or more sub-disks to fail or go missing simultaneously. Reference may be made to the following embodiments.
Referring to fig. 3, fig. 3 is a schematic flowchart of a second embodiment of the processing method of a storage device according to the present application. The method may include the following steps:
S21: Receive a setting instruction in which the user sets the number and grade of disk arrays.
A way of setting the hot-spare capability of disk arrays during one-key disk array creation may be provided; in other words, hot-spare capability can be configured for the disk arrays that will serve as spare disk arrays. In this case, a setting instruction in which the user sets the number and grade of the disk arrays with hot-spare capability may be received.
In some embodiments, a setting instruction in which the user sets the number and grade of ordinary disk arrays may also be received, where an ordinary disk array is a disk array that is not a spare disk array.
In some embodiments, an ordinary disk array may be a disk array with low data reliability or low fault tolerance. Alternatively, the user may specify that certain disk arrays are not to be used for sparing, i.e., set them to have no hot-spare capability.
S22: Receive a setting instruction in which the user checks (selects) disk arrays.
A way of setting hot-spare capability when a single RAID is created may also be provided. In this case, a setting instruction for the disk arrays checked by the user may be received; that is, the user may select, from all the disk arrays or disks (hard disks) of the storage device, those that are to have hot-spare capability, and may choose whether the RAID to be created needs hot-spare capability.
S23: Receive a setting instruction in which the user adds a disk array.
After a RAID with hot-spare capability has been created, further disk arrays with hot-spare capability may be added. A setting instruction in which the user adds a disk array can be received, so that hot-spare capability can be added through manual configuration.
S24: Determine the grades of the disk arrays based on the setting instruction.
After step S21, that is, after receiving the setting instruction in which the user sets the number and grade of the disk arrays (or of the ordinary disk arrays), all the disk arrays or disks (hard disks) in the storage device may be obtained, and the grade of each disk array may be determined according to the setting instruction, thereby determining the fault tolerance corresponding to that grade. For example, according to the one-key RAID rule (the grade and number of disk arrays), several RAIDs with hot-spare capability and ordinary RAIDs are created.
After step S22, that is, after receiving the setting instruction for the disk arrays checked by the user, hot-spare capability is set for the checked disk arrays based on the setting instruction.
After step S23, that is, after receiving the setting instruction in which the user adds a disk array, a disk array with hot-spare capability is added based on the setting instruction.
In some embodiments, after steps S21 to S24 have been performed and disk arrays of at least two different grades have been set up, the following step S25 may be performed.
S25: Add the disk arrays with hot-spare capability to a disk management linked list.
The RAIDs with hot-spare capability may be managed through a disk management linked list in the storage device.
When disk arrays with hot-spare capability are added to the disk management linked list, disk arrays of different grades are sorted from high grade to low grade, and disk arrays of the same grade are sorted in the order in which they were added to the list.
In this way, the disk arrays are ordered by fault tolerance, so that RAIDs with higher fault tolerance are placed earlier in the list. This also makes it convenient to manage the disk arrays with hot-spare capability in a unified way: no fixed, single-purpose spare disk array is provided, and a disk array with hot-spare capability can serve both as a global spare and as an ordinary disk array in the storage device, reducing wasted storage space.
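The ordering rule for the disk management linked list can be sketched with an ordinary sorted list (the data structure and field names here are illustrative, not from the patent):

```python
import itertools

_counter = itertools.count()  # stand-in for the time an array is added
disk_list = []                # stand-in for the disk management linked list

def add_to_management_list(name: str, grade: int) -> None:
    """Insert an array, keeping grade descending, then insertion order ascending."""
    disk_list.append({"name": name, "grade": grade, "added": next(_counter)})
    disk_list.sort(key=lambda a: (-a["grade"], a["added"]))

add_to_management_list("raid5_a", grade=1)
add_to_management_list("raid6_a", grade=2)
add_to_management_list("raid5_b", grade=1)
order = [a["name"] for a in disk_list]  # higher fault tolerance comes first
```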
Referring to fig. 4, fig. 4 is a schematic flowchart of a third embodiment of the processing method of a storage device according to the present application. The method may include the following steps:
S31: Perform state detection on the sub-disks of all disk arrays in the storage device at a preset period to obtain the storage state of each disk array, where the storage states include a normal state, a degraded reconstruction state, and a degraded disk-missing state.
In some embodiments, before the processing request for the to-be-processed disk array is received in step S11, steps S31 to S33 of this embodiment may be executed.
During operation of the disk arrays of the storage device, state detection may be performed on the sub-disks of all the disk arrays at a preset period (for example, once every m seconds), detecting whether a sub-disk of a disk array has failed, is missing, is abnormal, or is valid, whether data reconstruction is in progress, and so on.
In some embodiments, if there is no failed sub-disk in a disk array and the data of every valid sub-disk is synchronized, the storage state of the disk array is the normal state; that is, the disk array has no missing or failed sub-disk. For example, if the disk array has b sub-disks, where b is an integer greater than or equal to 2, and it is detected that no sub-disk has failed, the number of valid sub-disks is b, and the data of every valid sub-disk is synchronized, then the storage state of the disk array is the normal state.
In some embodiments, if a failed sub-disk is detected in a disk array, the disk array is degraded; that is, its storage state is the degraded disk-missing state.
In some embodiments, if a disk array has failed sub-disks and the number of failed sub-disks is not greater than the array's failed-sub-disk threshold, where the failed-sub-disk threshold is the number of sub-disks the disk array can assign when in the normal state, the storage state of the disk array is determined to be the degraded disk-missing state. For example, suppose the maximum number of failed or missing sub-disks allowed by the grade of the disk array is a, where a is an integer greater than or equal to 2 (that is, the failed-sub-disk threshold is a), the number of sub-disks is b, and c sub-disk failures are detected, with c an integer not greater than a, so that the number of valid sub-disks is b-c. The disk array then has missing or failed sub-disks and can be degraded, so its storage state is the degraded disk-missing state.
In some embodiments, if no sub-disk in a disk array has failed but a spare sub-disk is undergoing data reconstruction, the storage state of the disk array is determined to be the degraded reconstruction state. For example, suppose the maximum number of failed or missing sub-disks allowed by the grade of the disk array is a, where a is an integer greater than or equal to 2, and the number of sub-disks is b. If it is detected that no sub-disk has failed but the number of valid sub-disks is less than b, or that a spare sub-disk is undergoing data reconstruction, the disk array is degraded and has a spare sub-disk under reconstruction, so its storage state is the degraded reconstruction state.
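The three storage states above can be summarized by a small classifier; a sketch assuming the quantities named in the text (total sub-disks b, the number of valid sub-disks, the failure count, and a rebuilding flag):

```python
NORMAL = "normal"
DEGRADED_REBUILD = "degraded reconstruction"
DEGRADED_MISSING = "degraded disk-missing"

def storage_state(total: int, valid: int, failed: int, rebuilding: bool) -> str:
    """Classify a disk array's storage state per the periodic check above."""
    if failed > 0:
        return DEGRADED_MISSING   # a failed or missing sub-disk is present
    if valid < total or rebuilding:
        return DEGRADED_REBUILD   # no failure, but a spare is being rebuilt
    return NORMAL                 # all sub-disks valid and synchronized
```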
In some embodiments, the failed sub-disk referred to above may also be a sub-disk that is abnormal, defective, or invalid; more generally, any sub-disk in a disk array that cannot be used normally may be regarded as a failed sub-disk. The present application is not limited in this respect.
S32: If a disk array is in the degraded disk-missing state, determine it as the to-be-processed disk array.
If a disk array is in the degraded disk-missing state, that is, it is degraded and has a missing or failed sub-disk, it is determined as the to-be-processed disk array, and it may initiate a request for a spare sub-disk to replace the failed sub-disk.
In this way, abnormal, missing, or failed sub-disks of a disk array can be discovered in time, so that spare sub-disks can be applied for promptly to replace them.
In some embodiments, referring to fig. 5, step S12 of determining a spare disk array for the to-be-processed disk array based on the grades of the other disk arrays in the storage device may include the following steps:
S121: Obtain the grades of the disk arrays other than the to-be-processed disk array from the disk management linked list.
When determining the spare disk array for the to-be-processed disk array, the grades of the disk arrays other than the to-be-processed disk array can be obtained from the disk management linked list.
In some embodiments, the disk arrays with hot-spare capability are sorted from high grade to low grade in the disk management linked list, and the grades of the other disk arrays can be obtained from the list in order: the RAID level, that is, the data fault tolerance of each RAID, may be read sequentially starting from the first disk array node at the head of the linked list.
S122: Judge whether the grade of the other disk array is higher than the grade of the to-be-processed disk array.
Whether the grade of the other disk array is higher than that of the to-be-processed disk array is judged; that is, whether the data fault tolerance of the RAID at the current node is higher than that of the RAID requesting the hot-spare disk.
If its grade is higher than the grade of the to-be-processed disk array, step S123 is executed.
If its grade is not higher than the grade of the to-be-processed disk array, step S121 is executed again to obtain the next disk array from the disk management linked list, and all the steps of this embodiment are performed on that next disk array.
In some embodiments, at least one of the at least two different levels of disk arrays includes at least two assignable subdivisions. The fault tolerance levels of each level of RAID are different and the number of assignable sub-disks is also different. For example, the number of all the sub-disks of the disk array is m, and the number of the sub-disks belonging to the maximum allowable failure or missing level is n, wherein m and n are integers greater than or equal to 2, and m is greater than n. The maximum number of assignable sub-disks of the disk array may be the maximum number n of sub-disks allowed to fail or be missing.
In some embodiments, after determining that the rank of the disk array to be processed is higher than the rank of the disk array to be processed in step S122, it may be determined whether an assignable sub-disk exists in other disk arrays; if the assignable sub-disks exist, determining the disk array with the grade higher than that of the disk array to be processed, and replacing the sub-disks with the sub-disks of the disk array to be processed. Step S123 is executed to determine the other disk array as the spare disk array.
S123: determine the other disk array as the spare disk array.
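Steps S121 to S123 amount to a single guarded traversal of the disk management linked list, which might be sketched as follows. The minimal `Node` class and the `has_assignable` predicate are illustrative assumptions; the patent only requires that the grade comparison and the assignable-sub-disk check be made.

```python
class Node:
    """Minimal linked-list node; grade encodes the RAID's fault tolerance."""
    def __init__(self, grade, next_node=None):
        self.grade = grade
        self.next = next_node

def find_spare_array(pending_grade, head, has_assignable):
    """Return the first node whose grade is strictly higher than the
    pending array's grade and that still has an assignable sub-disk,
    or None when the traversal completes without a match."""
    node = head
    while node is not None:                      # S121: take the next node
        if node.grade > pending_grade and has_assignable(node):
            return node                          # S122/S123: grade is higher, use it
        node = node.next
    return None
```

Because the list is sorted by grade, the first match is also the highest-grade candidate still holding an assignable sub-disk.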
In some embodiments, if the storage state of the spare disk array is normal, that is, the array is not degraded and has no failed sub-disk, a sub-disk is selected from the spare disk array as the spare sub-disk for the failed sub-disk in the disk array to be processed. Any sub-disk, or a preset number of sub-disks (the number requested by the disk array to be processed), may be selected from the spare disk array to perform sub-disk replacement for the disk array to be processed.
In some embodiments, if the storage state of the spare disk array is the degraded reconstruction state, that is, the array is degraded but has no failed sub-disk, the sub-disk in the spare disk array that is currently undergoing data reconstruction is selected as the spare sub-disk for the failed sub-disk in the disk array to be processed. Selecting the sub-disk that is still being reconstructed does not further affect the spare disk array.
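The state-dependent choice of which sub-disk to lend could look like the following sketch; the state names and dictionary keys are assumptions made for illustration.

```python
NORMAL = "normal"
DEGRADED_REBUILD = "degraded_rebuild"

def pick_spare_subdisk(spare_array):
    """Pick which sub-disk to lend from the spare array.

    - Normal state: any sub-disk may be lent, since all data is intact.
    - Degraded-rebuild state: lend the sub-disk still being rebuilt;
      its contents are not yet relied on, so removing it does not
      further degrade the spare array.
    `spare_array` is a dict with illustrative keys."""
    if spare_array["state"] == NORMAL:
        return spare_array["subdisks"][0]       # any sub-disk works
    if spare_array["state"] == DEGRADED_REBUILD:
        return spare_array["rebuilding_subdisk"]
    return None                                 # otherwise: cannot lend
```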
In some embodiments, if there are multiple failed sub-disks in the disk array to be processed, one other disk array may be determined as the spare disk array and multiple sub-disks selected from it, so that the selected sub-disks serve as the spare sub-disks for the multiple failed sub-disks in the disk array to be processed.
In some embodiments, if there are multiple failed sub-disks in the disk array to be processed, multiple other disk arrays may be determined as spare disk arrays and multiple sub-disks selected from them, so that the selected sub-disks serve as the spare sub-disks for the multiple failed sub-disks in the disk array to be processed.
In some embodiments, if there are multiple failed sub-disks in the disk array to be processed, multiple sub-disks may be selected from multiple spare disk arrays as spare sub-disks, with one or more sub-disks taken from each spare disk array; the manner of selecting the spare sub-disks is not limited in this application.
In some embodiments, after the other disk array is determined as the spare disk array, it may further be determined whether the capacity of the spare sub-disk meets the capacity requirement of the failed sub-disk, for example, whether the capacity of the spare sub-disk is greater than or equal to the minimum sub-disk capacity requested by the disk array to be processed. If the capacity requirement is met, the metadata information on the spare sub-disk is cleared in the spare disk array to indicate that the spare sub-disk is no longer a sub-disk of the spare disk array. The spare sub-disk can then be written with the metadata information of the disk array to be processed, so that it becomes a sub-disk of that array.
If the capacity requirement of the failed sub-disk is not met, that is, the RAID of this linked-list node cannot provide the hot spare disk, step S121 is executed again to continue traversing the disk management linked list, until a RAID that can provide the hot spare disk is found among the disk arrays with hot-standby capability or the traversal completes. If no such RAID is found after the whole disk management linked list has been traversed, the request of the disk array to be processed returns a failure, and no hot spare disk (spare sub-disk) can be provided for it.
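The capacity check and metadata hand-off described above might be sketched as follows. The dictionary keys and the name `lend_subdisk` are assumptions of this illustration, not the patent's API.

```python
def lend_subdisk(spare_array, subdisk, required_capacity, pending_metadata):
    """Validate capacity, then hand the sub-disk over (illustrative).

    Returns False when the sub-disk is too small, so the caller keeps
    traversing the linked list; returns True after rewriting the
    sub-disk's metadata on success."""
    if subdisk["capacity"] < required_capacity:
        return False
    # Clear the old membership record so the sub-disk no longer counts
    # as part of the spare array ...
    subdisk["metadata"] = None
    spare_array["subdisks"].remove(subdisk)
    # ... then stamp it with the pending array's metadata.
    subdisk["metadata"] = dict(pending_metadata)
    return True
```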
In this embodiment, the grades of all the other disk arrays except the disk array to be processed are obtained from the disk management linked list, and it is judged whether each grade is higher than that of the disk array to be processed. If so, that other disk array is determined as the spare disk array. In this way, a higher-grade disk array in the disk management linked list is preferentially used as the spare disk array to provide a spare sub-disk for the disk array to be processed, which improves the hot-standby capability of the disk arrays as a whole and improves the storage reliability of the entire storage device without increasing the number of sub-disks in the storage device.
In some embodiments, after step S12 or step S13, or after no spare sub-disk can be provided for the pending disk array, a new sub-disk may be added to the corresponding disk array to serve as a spare sub-disk of the spare disk array and/or of the pending disk array. After the new sub-disk has been supplemented, the spare disk array and/or the pending disk array may perform data reconstruction using it.
Specifically, after the spare disk array provides a spare sub-disk for the disk array to be processed, the spare disk array itself is in a degraded state lacking one or more sub-disks and temporarily loses the capability of providing a hot spare disk for a pending RAID, so sub-disks can be supplemented for it. In this process, the spare disk array to be replenished and the number of sub-disks it needs are first determined; a sub-disk is then selected for replenishment, either adding a new sub-disk to the spare disk array or replacing the failed sub-disk with the new one, so that after replenishment the RAID regains the capability of providing global hot spare disks for other RAIDs. The sub-disk used for replenishment may be a hard disk inserted into the storage device manually, or an unused sub-disk already in the storage device (for example, a standby sub-disk), which is not limited in the present application.
In addition, when none of the other disk arrays in the storage device can provide a spare sub-disk for the disk array to be processed, a sub-disk can be supplemented for the disk array to be processed directly; the specific implementation of this supplement can refer to that of supplementing a new sub-disk for the spare disk array, and is not described herein again.
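The replenishment accounting described above, determining how many sub-disks a degraded array needs before it can again offer global hot spares, could be sketched as follows (the function name and dictionary keys are illustrative assumptions):

```python
def subdisks_to_replenish(expected_count, present_subdisks):
    """Number of new sub-disks a degraded array needs to regain its
    ability to offer global hot spares: the difference between the
    array's configured width and the healthy sub-disks it still holds."""
    healthy = [d for d in present_subdisks if not d.get("failed")]
    return max(0, expected_count - len(healthy))
```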
For the above embodiments, the present application further provides a storage device. Please refer to fig. 6, which is a schematic structural diagram of a second embodiment of the storage device of the present application. The storage device 40 comprises a receiving module 41, a management module 42 and a processing module 43, which are connected to one another.
The receiving module 41 is used for receiving a processing request for a pending disk array in the storage device.
The management module 42 is configured to determine a spare disk array for the pending disk array based on a rank of other disk arrays in the storage device, where the rank indicates a fault tolerance of the other disk arrays. The spare disk array is ranked higher than the pending disk array.
The processing module 43 is configured to replace the sub-disk of the disk array to be processed with a sub-disk of the spare disk array.
The specific implementation of this embodiment can refer to the implementation process of the above embodiment, and is not described herein again.
With reference to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer device according to the present application. The computer device 50 comprises a memory 51 and a processor 52, wherein the memory 51 and the processor 52 are coupled to each other, the memory 51 stores program data, and the processor 52 is configured to execute the program data to implement the steps in any of the embodiments of the processing method of the storage apparatus.
In the present embodiment, the processor 52 may also be referred to as a CPU (Central Processing Unit). Processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 52 may be any conventional processor or the like.
The specific implementation of this embodiment can refer to the implementation process of the above embodiment, and is not described herein again.
For the method of the above embodiment, it can be implemented in the form of a computer program, so that the present application provides a storage device, please refer to fig. 8, where fig. 8 is a schematic structural diagram of a third embodiment of the storage device of the present application. The storage device 60 has stored therein program data 61 executable by a processor, the program data 61 being executable by the processor to implement the steps of any one of the embodiments of the processing method of the storage device described above.
The specific implementation of this embodiment can refer to the implementation process of the above embodiment, and is not described herein again.
The storage device 60 of the present embodiment may be a medium that can store the program data 61, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or may be a server that stores the program data 61; the server may transmit the stored program data 61 to another device for execution, or may execute the stored program data 61 itself.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a storage device, which is a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented as program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, or fabricated separately as individual integrated circuit modules, or fabricated by combining multiple modules or steps into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (16)

1. A method of processing a storage device, the method comprising:
receiving a processing request for a disk array to be processed in the storage device;
determining a spare disk array for the disk array to be processed based on the grades of other disk arrays in the storage device, wherein the grades represent the fault-tolerant capability of the other disk arrays;
and replacing the sub-disk of the disk array to be processed by using the sub-disk of the spare disk array.
2. The method of claim 1, wherein prior to determining a spare disk array for the pending disk array based on the rank of other disk arrays in the storage device, the method further comprises:
in the storage device, at least two different levels of disk arrays are arranged;
and the grade of at least one disk array in the other disk arrays is higher than that of the disk array to be processed.
3. The method of claim 2, wherein the setting at least two different levels of disk arrays comprises:
receiving a setting instruction for setting the number and the level of the disk arrays by a user; and/or receiving a setting instruction for setting the disk array selected by a user; and/or receiving a setting instruction of adding the disk array by a user;
and determining the grade of the disk array based on the setting instruction.
4. The method of claim 2, wherein after the setting of the at least two different levels of disk arrays, the method further comprises:
adding the disk array with hot standby capability into a disk management linked list;
in the disk management linked list, the disk arrays with different grades are sorted from high to low according to the grades; and the disk arrays with the same grade are sorted according to the time sequence of adding the disk management linked list.
5. The method of claim 1, wherein determining a spare disk array for the pending disk array based on the rank of other disk arrays in the storage device comprises:
acquiring the grades of other disk arrays except the disk array to be processed from a disk management linked list;
judging whether the grade of the other disk arrays is higher than that of the disk array to be processed;
and if the grade of the other disk arrays is judged to be higher than that of the disk array to be processed, determining the other disk arrays as the spare disk array.
6. The method of claim 5,
in at least two different grades of disk arrays, at least one grade of disk array comprises at least two distributable sub-disks;
after the determination is that the rank is higher than the rank of the disk array to be processed, the method further includes:
judging whether the other disk arrays have distributable sub disks or not;
and if judging that the assignable sub-disks exist, determining the other disk arrays as the standby disk array.
7. The method of claim 5 or 6, wherein the determining the other disk array as a spare disk array comprises:
if the storage state of the spare disk array is a normal state, selecting a sub disk in the spare disk array as a spare sub disk of a failed sub disk in the disk array to be processed; or,
and if the storage state of the spare disk array is a degraded reconstruction state, selecting a sub disk in which data reconstruction is being performed in the spare disk array as a spare sub disk of a failed sub disk in the disk array to be processed.
8. The method according to claim 7, wherein the replacing the sub-disk of the pending disk array with the sub-disk of the spare disk array comprises:
and performing data reconstruction by taking the standby sub-disk as a sub-disk of the disk array to be processed so as to enable the standby sub-disk to replace the failed sub-disk.
9. The method of claim 7, wherein prior to receiving a processing request for a pending disk array in the storage device, the method further comprises:
performing state detection on all the sub-disks of the disk array in the storage device according to a preset period to acquire the storage state of the disk array, wherein the storage state comprises the normal state, the degraded reconstruction state and the degraded disk-missing state;
if the disk array does not have the failed sub-disk and the data of each valid sub-disk is synchronized, the storage state of the disk array is the normal state;
if the disk array has failed sub disks and the number of the failed sub disks is not greater than the threshold of the failed sub disks of the disk array, determining the storage state of the disk array as the degraded disk-missing state; wherein the failure subdisc threshold is the number of subdiscs of the disc array which can be allocated in the normal state;
and if the disk array does not have a failed sub disk and has a spare sub disk which is performing data reconstruction, determining the storage state of the disk array as the degraded reconstruction state.
10. The method according to claim 9, wherein after performing state detection on all the subdisc of the disk array in the storage device according to a preset period to obtain the storage state of the disk array, the method further comprises:
and if the disk array is in the degraded disk missing state, determining the disk array as the disk array to be processed.
11. The method of claim 5, wherein determining the other disk array as a spare disk array further comprises:
if the number of the failed sub-disks in the disk array to be processed is multiple, determining one of the other disk arrays as a spare disk array, and selecting multiple sub-disks in one of the spare disk arrays, or determining multiple other disk arrays as multiple spare disk arrays, and selecting multiple sub-disks in the multiple spare disk arrays, so as to use the selected multiple sub-disks as multiple spare sub-disks of the multiple failed sub-disks in the disk array to be processed.
12. The method of claim 5, wherein after determining the other disk array as a spare disk array, further comprising:
judging whether the capacity of the standby sub-disk meets the capacity requirement of the failed sub-disk or not;
and if the capacity requirement is met, clearing the metadata information on the spare subdisc in the spare disc array.
13. The method of claim 1, wherein after determining a spare disk array for the pending disk array based on the rank of other disk arrays in the storage device, the method further comprises:
determining the spare disk array and/or the to-be-processed disk array of a to-be-supplemented disk;
and selecting a sub-disk for disk replenishment, and replenishing a new sub-disk for the spare disk array and/or the disk array to be processed to serve as the spare sub-disk of the spare disk array and/or the disk array to be processed.
14. The method of claim 1,
and the spare disk array for replacing the sub disks of the disk array to be processed is used for storing data.
15. A computer device comprising a memory and a processor coupled to each other, the memory having stored therein program data, the processor being configured to execute the program data to implement the steps of the method of any one of claims 1 to 14.
16. A storage device, characterized in that the storage device stores program data that can be executed by a processor for implementing the steps of the method according to any one of claims 1 to 14.
CN202210318342.2A 2022-03-29 2022-03-29 Storage device processing method, computer equipment and storage device Active CN114415979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210318342.2A CN114415979B (en) 2022-03-29 2022-03-29 Storage device processing method, computer equipment and storage device


Publications (2)

Publication Number Publication Date
CN114415979A true CN114415979A (en) 2022-04-29
CN114415979B CN114415979B (en) 2022-07-15

Family

ID=81264055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210318342.2A Active CN114415979B (en) 2022-03-29 2022-03-29 Storage device processing method, computer equipment and storage device

Country Status (1)

Country Link
CN (1) CN114415979B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293547A (en) * 2004-03-11 2005-10-20 Hitachi Ltd Storage device
US20090177918A1 (en) * 2008-01-04 2009-07-09 Bulent Abali Storage redundant array of independent drives
CN101504594A (en) * 2009-03-13 2009-08-12 杭州华三通信技术有限公司 Data storage method and apparatus
CN102902602A (en) * 2012-09-19 2013-01-30 华为技术有限公司 Method and device for data hot backup as well as storage system
CN103136075A (en) * 2011-12-05 2013-06-05 巴法络股份有限公司 Disk system, data retaining device, and disk device
CN103793292A (en) * 2012-11-03 2014-05-14 上海欧朋软件有限公司 Disaster recovery method for disk array
CN105353977A (en) * 2015-10-22 2016-02-24 捷鼎国际股份有限公司 Data storage system and method
CN105353984A (en) * 2015-11-05 2016-02-24 北京飞杰信息技术有限公司 Floppy disk array-based high-availability cluster controller and control method and system
CN107229423A (en) * 2017-05-31 2017-10-03 郑州云海信息技术有限公司 Data processing method, device and system
US20180210797A1 (en) * 2017-01-23 2018-07-26 Wipro Limited Methods and systems for improving fault tolerance in storage area network


Non-Patent Citations (2)

Title
A. Di Marco et al.: "Using a Gigabit Ethernet cluster as a distributed disk array with multiple fault tolerance", IEEE Xplore *
Wang Shengzhu et al.: "A method for solving disk RAID failures on IBM X3650 M2 servers", Computer Programming Skills & Maintenance *

Also Published As

Publication number Publication date
CN114415979B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US10459814B2 (en) Drive extent based end of life detection and proactive copying in a mapped RAID (redundant array of independent disks) data storage system
KR101758544B1 (en) Synchronous mirroring in non-volatile memory systems
EP3230870B1 (en) Elastic metadata and multiple tray allocation
JP2021099814A (en) Storage cluster
US20180217888A1 (en) Dynamically adjusting an amount of log data generated for a storage system
CN109725831B (en) Method, system and computer readable medium for managing storage system
CN103534688B (en) Data reconstruction method, memory device and storage system
US10366004B2 (en) Storage system with elective garbage collection to reduce flash contention
CN103942112A (en) Magnetic disk fault-tolerance method, device and system
US20230127166A1 (en) Methods and systems for power failure resistance for a distributed storage system
US11809295B2 (en) Node mode adjustment method for when storage cluster BBU fails and related component
US20140281316A1 (en) Data management device and method for copying data
US9256490B2 (en) Storage apparatus, storage system, and data management method
CN113552998B (en) Method, apparatus and program product for managing stripes in a storage system
CN110674539B (en) Hard disk protection device, method and system
CN114415979B (en) Storage device processing method, computer equipment and storage device
CN111045853A (en) Method and device for improving erasure code recovery speed and background server
WO2021043246A1 (en) Data reading method and apparatus
CN108932176B (en) Data degradation storage method and device
CN113391937A (en) Method, electronic device and computer program product for storage management
US20220398156A1 (en) Distributed multi-level protection in a hyper-converged infrastructure
US10725879B2 (en) Resource management apparatus, resource management method, and nonvolatile recording medium
CN115408345A (en) Data storage method and device applied to parallel file system
CN117519585A (en) Hard disk management method, RAID card and server
CN115809011A (en) Data reconstruction method and device in storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant