CN110058788B - Method for allocating storage, electronic device, storage system and computer program product - Google Patents


Info

Publication number
CN110058788B
Authority
CN
China
Prior art keywords: hard disk, wear, blocks, determining, degree
Prior art date
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201810049315.3A
Other languages
Chinese (zh)
Other versions
CN110058788A (en)
Inventor
徐鑫磊
高健
贾瑞勇
李雄成
刘友生
高宏坡
Current Assignee (the listed assignees may be inaccurate)
EMC Corp
Original Assignee
EMC IP Holding Co LLC
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by EMC IP Holding Co LLC
Priority to CN202210638491.7A (published as CN115061624A)
Priority to CN201810049315.3A (published as CN110058788B)
Priority to US16/177,736 (published as US10628071B2)
Publication of CN110058788A
Priority to US16/811,530 (published as US11106376B2)
Application granted
Publication of CN110058788B

Classifications

All classifications fall under Section G (Physics), G06 (Computing; Calculating or Counting), G06F (Electric digital data processing):

    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/3442 Recording or statistical evaluation of computer activity for planning or managing the needed capacity
    • G06F3/0613 Improving I/O performance in relation to throughput
    • G06F3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems

Abstract

Embodiments of the present disclosure relate to a method of allocating storage, an electronic device, a storage system and a computer program product. The method of allocating storage comprises: obtaining the wear degree of each of a plurality of hard disks associated with a Redundant Array of Independent Disks (RAID); determining, based on the obtained wear degrees, respective spare blocks among the hard disk blocks of the plurality of hard disks such that the number of spare blocks in a hard disk is positively correlated with the wear degree of that hard disk; and selecting, from the hard disk blocks other than the spare blocks, a predetermined number of hard disk blocks for creating a RAID block for the RAID, the predetermined number of hard disk blocks coming from different ones of the plurality of hard disks. Through the embodiments of the present disclosure, the life cycles of the hard disks are extended, frequent replacement of worn-out hard disks with new ones is avoided, and data loss is reduced.

Description

Method for allocating storage, electronic device, storage system and computer program product
Technical Field
Embodiments of the present disclosure relate to the field of data storage, and more particularly, to a method of allocating storage, an electronic device, a storage system, and a computer program product.
Background
Redundant Array of Independent Disks (RAID) substantially increases the data throughput of a storage system by storing and reading data on multiple hard disks simultaneously. Using RAID, rates several, tens or even hundreds of times that of a single hard disk drive can be achieved. Mapped RAID is a new RAID technology. Unlike conventional RAID, mapped RAID is established on top of a hard disk pool rather than on several specific hard disks. The hard disks in the pool are divided into a series of fixed-size, non-overlapping segments, which may be referred to as hard disk blocks. The logical space of a mapped RAID is divided into a set of contiguous, non-overlapping segments referred to as RAID blocks. Each RAID block is composed of a plurality of hard disk blocks selected from different hard disks according to a RAID policy. Mapped RAID has several advantages over conventional RAID, such as faster rebuilds, support for single-drive expansion, and support for drives of mixed sizes in one hard disk pool.
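As a concrete illustration of the mapped-RAID layout just described, the sketch below models a hard disk block and a 4D+1P RAID block in Python; the `DiskExtent` type and its field names are illustrative inventions, not anything defined by the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiskExtent:
    """One fixed-size, non-overlapping segment of a hard disk."""
    disk: int   # index of the hard disk in the pool
    block: int  # index of the segment within that hard disk

# A 4D+1P (R5) RAID block consists of five hard disk blocks,
# each taken from a different hard disk in the pool.
raid_block = [DiskExtent(disk=d, block=0) for d in range(5)]
assert len({extent.disk for extent in raid_block}) == 5
```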
At present, a neighborhood matrix algorithm is generally used to create a mapped RAID on a hard disk pool, so that the RAID blocks of the created RAID are distributed as uniformly as possible over the pool. However, after servicing input/output (IO) requests for a period of time, a RAID created by the neighborhood matrix algorithm may bring some hard disks in the pool to the end of their life cycles, thereby causing data loss.
Disclosure of Invention
Embodiments of the present disclosure provide a method, an electronic device, a storage system, and a computer program product for allocating storage for a mapped RAID.
In a first aspect of the disclosure, a method of allocating storage is provided. The method includes obtaining respective wear levels of a plurality of hard disks associated with a Redundant Array of Independent Disks (RAID). The method also includes determining respective spare blocks among the hard disk blocks of the plurality of hard disks based on the wear level such that the number of spare blocks in one hard disk is positively correlated with the wear level of the hard disk. The method also includes selecting a predetermined number of hard disk blocks from the hard disk blocks other than the spare blocks for use in creating RAID blocks for the RAID, the predetermined number of hard disk blocks being from different ones of the plurality of hard disks.
In a second aspect of the disclosure, an electronic device is provided. The electronic device includes: at least one processor; and at least one memory including computer program instructions. The at least one memory and the computer program instructions are configured to, with the at least one processor, cause the electronic device to: acquiring respective wear degrees of a plurality of hard disks related to a Redundant Array of Independent Disks (RAID); determining respective spare blocks among the hard disk blocks of the plurality of hard disks based on the degree of wear such that the number of spare blocks in one hard disk is positively correlated with the degree of wear of that hard disk; and selecting a predetermined number of hard disk blocks from the hard disk blocks except the spare block for creating a RAID block for the RAID, the predetermined number of hard disk blocks being from different ones of the plurality of hard disks.
In a third aspect of the disclosure, a computer program product is provided. The computer program product is tangibly stored on a non-transitory computer-readable medium and includes machine-executable instructions. The machine executable instructions, when executed, cause the machine to perform any of the steps of the method described in accordance with the first aspect of the disclosure.
In a fourth aspect of the present disclosure, a storage system is provided. The storage system comprises a plurality of hard disks associated with a Redundant Array of Independent Disks (RAID) and an electronic device according to the second aspect of the present disclosure.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 illustrates a schematic diagram of a conventional mapped RAID system;
FIG. 2a is a schematic diagram visually illustrating, in two dimensions, a neighborhood matrix corresponding to a uniformly distributed RAID;
FIG. 2b is a schematic diagram visually illustrating, in three dimensions, a neighborhood matrix corresponding to a uniformly distributed RAID;
FIG. 3 illustrates a schematic diagram of a mapping RAID system according to an embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a method of allocating storage according to an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart for determining the number of spare blocks for a given hard disk in accordance with an embodiment of the disclosure;
FIG. 6 shows a graph of the number of spare blocks for a given hard disk as a function of wear level according to an embodiment of the disclosure;
FIG. 7 illustrates a block diagram of an example device that can be used to implement embodiments of the present disclosure.
Detailed Description
The principles of the present disclosure will be described below with reference to a number of example embodiments shown in the drawings. While the preferred embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that these embodiments are described merely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
As described above, mapping RAID is a new RAID technology. Unlike conventional RAID, mapping RAID is established on top of a hard disk pool, rather than on several specific hard disks. This has several advantages over conventional RAID, such as being able to be rebuilt more quickly, supporting single drive expansion, and supporting mixed size drives in one hard disk pool.
FIG. 1 shows a schematic diagram of a conventional mapped RAID system 100. The system 100 includes a hard disk pool 120 and a RAID 110 established on the hard disk pool 120. In the system 100 shown in FIG. 1, the hard disk pool 120 comprises N hard disks 120-1, 120-2, 120-3, 120-4, ..., 120-N, where N depends on the RAID policy employed. For example, when the 4D+1P R5 policy is adopted, N is an integer greater than or equal to 5, and when the 4D+2P R6 policy is adopted, N is an integer greater than or equal to 6. Each of the hard disks 120-1, 120-2, ..., 120-N is divided into a series of fixed-size, non-overlapping blocks, which may be referred to as hard disk blocks. In practical implementations, hard disk blocks may be set to different sizes depending on storage limitations.
The logical space of RAID 110 is divided into a set of contiguous, non-overlapping blocks referred to as RAID blocks. Each RAID block 116 is composed of a predetermined number of hard disk blocks selected from different ones of the N hard disks 120-1, 120-2, ..., 120-N. The predetermined number depends on the selected RAID policy. For example, when the 4D+1P R5 policy is employed, the predetermined number is 5, as shown in FIG. 1. Those skilled in the art will appreciate that the value of 5 is merely an example and that the predetermined number may differ depending on the RAID policy. In the system 100 shown in FIG. 1, the RAID block 116 is composed of hard disk block 116-1 of hard disk 120-4, hard disk block 116-2 of hard disk 120-3, hard disk block 116-3 of hard disk 120-2, hard disk block 116-4 of hard disk 120-1, and hard disk block 116-5 of hard disk 120-N. A plurality of RAID blocks 116 form a RAID block group 112. RAID 110 also includes a mapping table 114 for recording which hard disk blocks from which hard disks make up each RAID block 116 in the RAID block group 112.
At present, when creating the RAID block group 112, a so-called neighborhood matrix algorithm is used so that the hard disk blocks included in each RAID block of the RAID block group 112 are distributed as uniformly as possible over the N hard disks 120-1, 120-2, ..., 120-N. The neighborhood matrix M is an N x N square matrix, where N is the number of hard disks in the hard disk pool and, as described above, depends on the RAID policy employed. Each element M(i, j) of the matrix represents the number of times hard disk i is adjacent to hard disk j in RAID 110: if a hard disk block of hard disk i and a hard disk block of hard disk j appear in the same RAID block, hard disk i and hard disk j are defined as being adjacent once. For example, in the system 100 shown in FIG. 1, the RAID block 116 contains hard disk blocks from hard disks 120-4, 120-3, 120-2, 120-1 and 120-N. Accordingly, hard disk 120-4 is adjacent to each of hard disks 120-3, 120-2, 120-1 and 120-N; hard disk 120-3 is adjacent to each of hard disks 120-4, 120-2, 120-1 and 120-N; and so on. As can be seen from this definition, the neighborhood matrix is a symmetric matrix.
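The adjacency counting just described is simple to express in code. The following is a minimal Python sketch (function and variable names are hypothetical, not from the patent) of building the neighborhood matrix M from a mapping table:

```python
from itertools import combinations

def neighborhood_matrix(mapping_table, n_disks):
    """Build the N x N neighborhood matrix M, where M[i][j] counts how
    many times hard disks i and j appear in the same RAID block."""
    m = [[0] * n_disks for _ in range(n_disks)]
    for raid_block in mapping_table:
        for i, j in combinations(raid_block, 2):
            m[i][j] += 1  # disks i and j are adjacent once more
            m[j][i] += 1  # M is symmetric by definition
    return m

# Two 4D+1P RAID blocks, each listed as the 5 disks it draws blocks from.
table = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]]
m = neighborhood_matrix(table, 6)
# Disks 1 and 2 co-occur in both RAID blocks, so m[1][2] == 2.
```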
If the hard disk blocks contained in the RAID block group 112 of RAID 110 are evenly distributed over the hard disk pool 120, the elements of the neighborhood matrix M should be close to one another. The objective of the neighborhood matrix algorithm is therefore to make the elements of the neighborhood matrix M substantially the same once RAID block allocation is complete. FIGS. 2a and 2b visually illustrate, in two and three dimensions respectively, a neighborhood matrix whose elements are substantially the same. In the example of FIGS. 2a and 2b, the number N of hard disks in the pool 120 is 20. As shown, after the neighborhood matrix algorithm is applied, the values of the elements in the matrix are substantially the same; that is, the RAID corresponding to the neighborhood matrices of FIGS. 2a and 2b is evenly distributed across the hard disk pool. When such a RAID, uniformly distributed over the hard disks by the neighborhood matrix algorithm, is used, the IO pressure on each hard disk is approximately the same. The problem is that if the mapped RAID is created from hard disks (e.g., solid state disks) that do not have the same wear degree, the algorithm will cause the life cycles of some hard disks to end sooner than others, and data loss may occur. This problem is explained below taking Table 1 as an example.
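The patent does not spell out the allocation steps of the neighborhood matrix algorithm, but a common greedy strategy consistent with its stated goal (keeping all M(i, j) as equal as possible) is to fill each slot of a new RAID block with the disk that has the smallest accumulated adjacency to the disks already chosen. A sketch under that assumption, with hypothetical names:

```python
def create_raid_block(m, candidates, width):
    """Greedily pick `width` distinct disks for one RAID block,
    preferring disks least adjacent to those already chosen,
    then record the new adjacencies in the neighborhood matrix."""
    chosen = []
    remaining = list(candidates)
    while len(chosen) < width:
        best = min(remaining, key=lambda d: sum(m[d][c] for c in chosen))
        chosen.append(best)
        remaining.remove(best)
    for i in chosen:          # update M: every chosen pair is now
        for j in chosen:      # adjacent once more
            if i != j:
                m[i][j] += 1
    return chosen

n = 6
m = [[0] * n for _ in range(n)]
first = create_raid_block(m, range(n), 5)
second = create_raid_block(m, range(n), 5)
# The disk left out of the first block is favored for the second one.
```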
[Table 1 (rendered as an image in the source): the wear degree of each of the 20 hard disks used to establish the RAID. Per the surrounding text, hard disk 6 is at 80% wear, hard disks 9 and 13 are at 60%, and many of the remaining hard disks are at 10% or 20%.]

TABLE 1
Table 1 shows the wear degrees of the 20 hard disks used to establish the RAID; for example, the wear degree of hard disk 6 is 80%, and the wear degrees of hard disk 9 and hard disk 13 are 60%. It should be noted that the number of hard disks and the wear degrees shown in Table 1 are merely exemplary; various wear degrees may occur in each hard disk during actual use. In Table 1, when the wear degree of a hard disk reaches 100%, the life cycle of that hard disk ends. Suppose the current neighborhood matrix algorithm is used to build a RAID on the hard disk pool composed of the 20 hard disks shown in Table 1, and the neighborhood matrix when the RAID block groups are uniformly distributed over the pool is the same as that shown in FIGS. 2a and 2b. When the RAID built in this way is subsequently used, each hard disk bears essentially the same IO pressure and therefore wears at essentially the same rate. After the hard disk pool has served data reads and writes for a period of time, the life cycle of hard disk 6 ends first, and a new hard disk is swapped in. After a further period of time, the life cycles of hard disk 9 and hard disk 13 may end in quick succession. In that case, if the mapped RAID has not finished copying data to the newly swapped-in hard disk, data loss may result.
In addition, existing solutions inform the user whether a new hard disk needs to be swapped in by reporting the highest wear degree among the hard disks. In the example shown in Table 1, hard disk 6 in the pool is worn to 80%, while many hard disks with only 10% or 20% wear could still be used for years. Yet the user sees a RAID that is 80% worn, concludes that its life cycle is about to end, and prepares to back up data. This misleads the user into making the wrong decision.
In order to solve the above problems, embodiments of the present disclosure provide a neighborhood matrix algorithm based on the wear degrees of the hard disks for establishing RAID blocks, so as to reduce the risk of data loss. In the embodiments of the present disclosure, when establishing the RAID, respective spare blocks are allocated among the hard disk blocks of the plurality of hard disks based on the wear degree of each hard disk in the pool, such that the number of spare blocks in a hard disk is positively correlated with its wear degree; the RAID is then established from the hard disk blocks in the pool other than the spare blocks. Thus, with each hard disk holding the same number of hard disk blocks, a more worn hard disk contributes fewer blocks to establishing the RAID, wears more slowly in subsequent use, and has its life cycle extended; frequent replacement of end-of-life hard disks with new ones is avoided, and the occurrence of data loss is reduced.
Embodiments of the present disclosure are described below in conjunction with fig. 3. FIG. 3 shows a schematic diagram of a mapping RAID system 300 according to an embodiment of the present disclosure. It should be understood that some of the components shown in fig. 3 may be omitted and that in other embodiments system 300 may include other components not shown here. That is, the schematic diagram depicted in fig. 3 is merely for purposes of describing embodiments of the present disclosure in one example environment, so as to assist those of ordinary skill in the art in understanding the mechanisms and principles described herein, and is not intended to limit the scope of the present disclosure in any way.
System 300 includes a hard disk pool 320, a RAID 310 established on top of the hard disk pool 320, and a controller 330. The hard disk pool 320 comprises N hard disks 320-1, 320-2, 320-3, 320-4, ..., 320-N, where N depends on the RAID policy employed. For example, when the 4D+1P R5 policy is adopted, N is an integer greater than or equal to 5, and when the 4D+2P R6 policy is adopted, N is an integer greater than or equal to 6. Each of the hard disks 320-1, 320-2, ..., 320-N is divided into a series of fixed-size, non-overlapping hard disk blocks. In the embodiments of the present disclosure, the hard disks 320-1, 320-2, ..., 320-N have the same capacity, and each hard disk has the same number of hard disk blocks. In the embodiments of the present disclosure, the size of each hard disk block is 10 GB to 50 GB, but this is merely exemplary, and any other value is also possible. Those skilled in the art will appreciate that embodiments of the present disclosure are equally applicable to multiple hard disks of different capacities.
The logical space of RAID 310 is divided into a set of contiguous, non-overlapping RAID blocks. Each RAID block 316 is composed of a predetermined number of hard disk blocks selected from different ones of the N hard disks 320-1, 320-2, ..., 320-N. The predetermined number depends on the RAID policy employed; for example, when the 4D+1P R5 policy is employed, the predetermined number is 5. Those skilled in the art will appreciate that the value of 5 is merely an example and that the predetermined number may differ depending on the RAID policy. In the example shown in FIG. 3, the RAID block 316 is composed of hard disk block 316-1 of hard disk 320-4, hard disk block 316-2 of hard disk 320-3, hard disk block 316-3 of hard disk 320-2, hard disk block 316-4 of hard disk 320-1, and hard disk block 316-5 of hard disk 320-N. A plurality of RAID blocks 316 form a RAID block group 312. RAID 310 also includes a mapping table 314 for recording which hard disk blocks from which hard disks make up each RAID block in the RAID block group 312.
Unlike the conventional storage system 100 shown in FIG. 1, the controller 330, in response to receiving a request to create the RAID block group 312 on the hard disk pool 320, determines the number of spare blocks SP in each hard disk based on the wear degrees of the hard disks 320-1, 320-2, ..., 320-N, so that a hard disk with a higher wear degree has more spare blocks SP. For example, in the example shown in FIG. 3, hard disk 320-3 has more spare blocks SP because of its higher wear degree. In the embodiments of the present disclosure, for a given hard disk among the N hard disks 320-1, 320-2, ..., 320-N, the controller 330 determines the number of spare blocks SP for that hard disk based on its wear degree, the average wear degree of the N hard disks, and the average number of spare blocks.
The controller 330 builds RAID 310 using a conventional neighborhood matrix algorithm from the hard disk blocks in the hard disk pool 320 other than the spare blocks SP. Because the number of spare blocks SP is determined based on the wear degree, the number of hard disk blocks that the hard disks 320-1, 320-2, ..., 320-N contribute to creating the RAID block group 312 differs with their wear degrees. A hard disk with a higher wear degree contributes fewer hard disk blocks to the RAID block group 312 and therefore wears relatively slowly while the created RAID block group 312 is in use, thereby extending the life cycle of the more worn hard disk.
FIG. 4 illustrates a flow diagram of a method 400 of allocating storage in accordance with an embodiment of the present disclosure. The method 400 may be implemented by the controller 330 in FIG. 3. The controller 330 may perform the method 400 in response to receiving a request to establish the RAID block group 312 on the hard disk pool 320. At 402, the controller 330 obtains the respective wear degrees W1, W2, ..., WN of the plurality of hard disks 320-1, 320-2, ..., 320-N associated with RAID 310, where N is the number of hard disks in the hard disk pool 320 and, as described above, depends on the RAID policy employed.
At 404, the controller 330 determines respective spare blocks SP on the plurality of hard disks 320-1, 320-2, ..., 320-N based on the obtained wear degrees W1, W2, ..., WN, such that the number of spare blocks in a hard disk is positively correlated with the wear degree of that hard disk. In the embodiments of the present disclosure, a hard disk with a higher wear degree has more spare blocks SP, and the spare blocks SP do not participate in establishing the RAID. The process of determining spare blocks for a given hard disk based on wear degree is described in more detail below in connection with FIG. 5.
At 406, the controller 330 selects, from the hard disk blocks of the plurality of hard disks 320-1, 320-2, ..., 320-N other than the spare blocks SP, a predetermined number of hard disk blocks from different hard disks for use in creating a RAID block 316 of RAID 310. As described above, the predetermined number depends on the RAID policy employed; for example, when the 4D+1P R5 policy is employed, the predetermined number is 5. Those skilled in the art will appreciate that the R5 policy is merely an example; in practice different RAID policies may be employed, and the predetermined number may differ accordingly. Because a hard disk with a higher wear degree has more spare blocks, it contributes fewer hard disk blocks to creating RAID 310, wears more slowly during the subsequent use of RAID 310, and has its life cycle extended; frequent replacement of end-of-life hard disks with new ones is avoided, and the occurrence of data loss is reduced.
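Steps 402 to 406 can be summarized in a highly simplified Python sketch. The spare budget (one tenth of a disk) and the linear spare-scaling rule here are illustrative assumptions only; the patent's own formulas are discussed in connection with FIG. 5.

```python
def pick_raid_disks(wear, blocks_per_disk, width):
    """Sketch of steps 402-406: reserve spares in proportion to wear,
    then choose `width` different disks from the non-spare blocks.

    wear: per-disk wear degree in [0, 1] (step 402 input).
    """
    n = len(wear)
    avg_wear = sum(wear) / n
    p_avg = blocks_per_disk // 10                  # assumed spare budget
    # Step 404: spares positively correlated with the wear degree.
    spares = [round(p_avg * w / avg_wear) for w in wear]
    usable = [blocks_per_disk - s for s in spares]
    # Step 406: take the `width` disks with the most non-spare blocks.
    order = sorted(range(n), key=lambda d: usable[d], reverse=True)
    return order[:width]

# The most-worn disk (index 2, at 80%) is left out of this RAID block.
disks = pick_raid_disks([0.1, 0.2, 0.8, 0.1, 0.1, 0.2], 100, 5)
```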
In one embodiment of the present disclosure, for each of the plurality of hard disks 320-1, 320-2, ..., 320-N, the controller 330 may compare its wear degree with a predetermined threshold, which can be set according to actual needs. In response to the wear degree of a given hard disk being greater than the predetermined threshold, the controller 330 removes the given hard disk from the hard disk pool 320 and adds to the pool a new hard disk whose wear degree is less than the threshold for use in creating RAID 310. Hard disks whose wear exceeds the threshold are thus excluded from creating RAID 310, which avoids allocating too many spare blocks to them and making the storage allocation inefficient.
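The threshold check in this embodiment amounts to a simple filter over the pool; a sketch (the function name and the 90% threshold value are illustrative assumptions):

```python
def usable_disks(wear, threshold=0.9):
    """Return indices of disks whose wear does not exceed the threshold.
    Disks worn beyond it are dropped from the pool and, per this
    embodiment, replaced with new, less-worn disks before the RAID
    is created."""
    return [d for d, w in enumerate(wear) if w <= threshold]

# Disk 1 (95% worn) is excluded from RAID creation.
pool = usable_disks([0.5, 0.95, 0.2])
```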
In one embodiment of the present disclosure, while the created RAID 310 is in use, the average wear degree of the plurality of hard disks 320-1, 320-2, ..., 320-N may be monitored in real time and reported to the user, prompting the user as to whether a new hard disk needs to be swapped in so that appropriate measures can be taken.
FIG. 5 illustrates a flow chart of a method 500 of determining a number of spare blocks for a given hard disk according to an embodiment of the present disclosure. Method 500 is one specific implementation of 404 shown in fig. 4. Those skilled in the art will appreciate that method 500 is merely one exemplary method of determining the number of spare blocks for a given hard disk and embodiments of the present disclosure are not limited thereto.
At 502, controller 330 determines the average wear level of the multiple hard disks 320_1, 320_2, 320_3, 320_4, …, 320_N. In one embodiment of the present disclosure, controller 330 determines the average wear level based on the following equation:

W_a = (W_1 + W_2 + … + W_N) / N    (1)

where W_1, W_2, …, W_N respectively represent the acquired wear levels of the multiple hard disks 320_1, 320_2, 320_3, 320_4, …, 320_N, N represents the number of hard disks, and W_a represents the average wear level of the multiple hard disks.
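Equation (1) is a plain arithmetic mean, e.g.:

```python
def average_wear(wear_levels):
    """Equation (1): arithmetic mean of the per-disk wear levels W_1..W_N."""
    return sum(wear_levels) / len(wear_levels)

wa = average_wear([0.2, 0.4, 0.6])  # approximately 0.4, up to floating point
```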
At 504, controller 330 determines the average number of spare blocks of the multiple hard disks 320_1, 320_2, 320_3, 320_4, …, 320_N. In one embodiment of the present disclosure, controller 330 determines the spare capacity based on the difference between the total capacity of the hard disk pool 320 and the required capacity of the RAID 310 to be established, and then determines the average number of spare blocks per hard disk based on the determined spare capacity. In an exemplary embodiment of the present disclosure, the controller 330 may determine the average number of spare blocks based on the following equation:

P_avg = (N × S_d − S_req) / (N × S_DE)    (2)

where P_avg represents the average number of spare blocks of the multiple hard disks 320_1, 320_2, 320_3, 320_4, …, 320_N, S_req represents the required capacity of RAID 310, S_d represents the capacity of each hard disk in the hard disk pool 320, N represents the number of hard disks in the hard disk pool 320, and S_DE represents the capacity of each hard disk block. Equation (2) is merely an example, and those skilled in the art will appreciate that the average number of spare blocks may also be determined in different ways.

At 506, for a given hard disk, the controller 330 determines the number of spare blocks for the given hard disk based on the average wear level, the average number of spare blocks, and the wear level of the given hard disk. In one embodiment of the present disclosure, in response to the wear level of the given hard disk being equal to the average wear level, the controller 330 determines the number of spare blocks of the given hard disk to be equal to the average number of spare blocks. In response to the wear level of the given hard disk being greater than the average wear level, the controller 330 determines the number of spare blocks for the given hard disk to be greater than the average number of spare blocks. In response to the wear level of the given hard disk being less than the average wear level, the controller 330 determines the number of spare blocks for the given hard disk to be less than the average number of spare blocks. In embodiments of the present disclosure, there may be various methods of determining the number of spare blocks for a given hard disk based on the average wear level, the average number of spare blocks, and the wear level of that hard disk. The controller 330 may determine the number of spare blocks P_i for a given hard disk i based on:
P_i = f(P_avg, W_a, W_i)    (3)

where the function f is an increasing function of the wear level W_i of the given hard disk i, and when W_i is equal to W_a the function value is the average number of spare blocks P_avg. Those skilled in the art will appreciate that there are a variety of functions f that satisfy this requirement. By way of example, and not limitation, equation (4) below is one function that satisfies the requirement:

P_i = P_avg × W_i / W_a    (4)
Equation (4) is only one example function; in practice there are many functions that satisfy the above requirements. For example, FIG. 6 shows the curve of another function that meets the requirements. Without departing from the general concept of the embodiments of the present disclosure, those skilled in the art can conceive of various functions for determining the number of spare blocks of a given hard disk based on the wear level of the given hard disk, the average wear level, and the average number of spare blocks, as long as each function is an increasing function of the wear level of the given hard disk and takes the average number of spare blocks as its value at the average wear level; such functions are not limited to equation (4).
In an embodiment of the present disclosure, when two given hard disks have the same wear level, the value determined according to equation (3) may be rounded up for one of them and rounded down for the other, so that the numbers of spare blocks of the two hard disks with the same wear level differ slightly, thereby ensuring that the total number of spare blocks across the hard disks meets the system requirements.
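Putting 502 through 506 together, the allocation can be sketched as below. For illustration this sketch assumes the linear choice f = P_avg × W_i / W_a, which is one admissible function under the stated constraints (increasing in W_i and equal to P_avg at W_i = W_a), and alternates the rounding direction for disks of equal wear.

```python
import math

def spare_blocks_per_disk(wear, raid_required, disk_capacity, block_capacity):
    """Allocate spare blocks per disk: more wear -> more spare blocks.

    Assumes the linear f(P_avg, W_a, W_i) = P_avg * W_i / W_a, one admissible
    choice (increasing in W_i, equal to P_avg at W_i = W_a). Disks with equal
    wear alternate between rounding up and down so the totals stay balanced.
    """
    n = len(wear)
    w_a = sum(wear) / n                                        # equation (1)
    p_avg = (n * disk_capacity - raid_required) / (n * block_capacity)  # equation (2)
    round_up = {}  # per wear level: alternate ceil/floor
    result = []
    for w in wear:
        raw = p_avg * w / w_a                                  # equation (3)/(4)
        up = round_up.get(w, True)
        result.append(math.ceil(raw) if up else math.floor(raw))
        round_up[w] = not up
    return result

# Four 100 GB disks with 10 GB blocks; RAID needs 320 GB -> P_avg = 2 blocks.
alloc = spare_blocks_per_disk([0.3, 0.3, 0.1, 0.5], 320, 100, 10)
```

In this run the two disks at wear 0.3 land exactly on P_avg, the lightly worn disk reserves a single spare block, and the most worn disk reserves the most, matching the three-way rule at 506.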
According to method 500, the average wear level of the hard disks and the average number of spare blocks per hard disk are both taken into account when determining the number of spare blocks for a given hard disk, thereby ensuring that the hard disk blocks participating in establishing the RAID can meet the capacity required by the RAID while hard disks with higher wear levels retain more spare blocks.
FIG. 7 illustrates a schematic block diagram of an example device 700 that may be used to implement embodiments of the present disclosure. The device 700 may be used to implement the controller 330 of FIG. 3. As shown, device 700 includes a central processing unit (CPU) 701 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 702 or computer program instructions loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processing unit 701 performs the various methods and processes described above, such as the method 400 and/or the method 500. For example, in some embodiments, method 400 and/or method 500 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into RAM 703 and executed by CPU 701, one or more steps of method 400 and/or method 500 described above may be performed. Alternatively, in other embodiments, CPU 701 may be configured to perform method 400 and/or method 500 in any other suitable manner (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A method of allocating storage, comprising:
acquiring respective degrees of wear of a plurality of hard disks associated with a redundant array of independent disks (RAID);
determining respective spare blocks among the hard disk blocks of the plurality of hard disks based on the degree of wear such that the number of spare blocks in one hard disk is positively correlated with the degree of wear of that hard disk; and
selecting a predetermined number of hard disk blocks from among the hard disk blocks other than the spare block for use in creating RAID blocks for the RAID, the predetermined number of hard disk blocks being from different hard disks of the plurality of hard disks.
2. The method of claim 1, wherein determining respective spare blocks based on the degree of wear comprises, for a given hard disk of the plurality of hard disks:
determining an average degree of wear of the plurality of hard disks;
determining an average spare block number of the plurality of hard disks; and
determining a number of spare blocks for the given hard disk based on the average wear level, the average number of spare blocks, and the wear level of the given hard disk.
3. The method of claim 2, wherein determining the number of spare blocks for the given hard disk comprises:
determining the number to be equal to the average number of spare blocks in response to the degree of wear of the given hard disk being equal to the average degree of wear;
in response to the degree of wear of the given hard disk being greater than the average degree of wear, determining the number to be greater than the average number of spare blocks; and
in response to the level of wear of the given hard disk being less than the average level of wear, determining the number to be less than the average number of spare blocks.
4. The method of claim 2, wherein determining the average number of spare blocks comprises:
determining a total capacity of the spare blocks based on a difference between a total capacity of the plurality of hard disks and a required capacity of the RAID;
determining a total number of the spare blocks based on a ratio of a total capacity of the spare blocks to a capacity of a single hard disk block; and
determining the average number of spare blocks based on a ratio of the total number of spare blocks to the total number of the plurality of hard disks.
5. The method of claim 1, wherein the plurality of hard disks are included in a hard disk pool used to create the RAID, the method further comprising, for a given hard disk of the plurality of hard disks:
comparing the degree of wear of the given hard disk to a predetermined threshold;
in response to the degree of wear of the given hard disk being greater than the predetermined threshold,
removing the given hard disk from the pool of hard disks, and
adding a new hard disk to the hard disk pool, wherein the degree of wear of the new hard disk is less than the predetermined threshold.
6. An electronic device, comprising:
at least one processor; and
at least one memory including computer program instructions, the at least one memory and the computer program instructions configured to, with the at least one processor, cause the electronic device to perform acts comprising:
acquiring respective degrees of wear of a plurality of hard disks associated with a redundant array of independent disks (RAID);
determining respective spare blocks among the hard disk blocks of the plurality of hard disks based on the wear level such that the number of spare blocks in one hard disk is positively correlated with the wear level of that hard disk; and
selecting a predetermined number of hard disk blocks from among the hard disk blocks other than the spare block for use in creating RAID blocks for the RAID, the predetermined number of hard disk blocks being from different hard disks of the plurality of hard disks.
7. The electronic device of claim 6, wherein the actions further comprise, for a given hard disk of the plurality of hard disks:
determining an average degree of wear of the plurality of hard disks;
determining the average number of spare blocks of the plurality of hard disks; and
determining a number of spare blocks for the given hard disk based on the average wear level, the average number of spare blocks, and the wear level of the given hard disk.
8. The electronic device of claim 7, wherein the actions further comprise:
determining the number to be equal to the average number of spare blocks in response to the degree of wear of the given hard disk being equal to the average degree of wear;
in response to the degree of wear of the given hard disk being greater than the average degree of wear, determining the number to be greater than the average number of spare blocks; and
in response to the level of wear of the given hard disk being less than the average level of wear, determining the number to be less than the average number of spare blocks.
9. The electronic device of claim 7, wherein the actions further comprise:
determining a total capacity of the spare blocks based on a difference between a total capacity of the plurality of hard disks and a required capacity of the RAID;
determining a total number of the spare blocks based on a ratio of a total capacity of the spare blocks to a capacity of a single hard disk block; and
determining the average number of spare blocks based on a ratio of the total number of spare blocks to the total number of the plurality of hard disks.
10. The electronic device of claim 6, wherein the plurality of hard disks are included in a pool of hard disks used to create the RAID, the acts further comprising:
comparing the degree of wear of a given hard disk to a predetermined threshold;
in response to the degree of wear of the given hard disk being greater than the predetermined threshold,
removing the given hard disk from the pool of hard disks, and
adding a new hard disk to the hard disk pool, wherein the degree of wear of the new hard disk is less than the predetermined threshold.
11. A computer-readable storage medium having stored thereon machine-executable instructions that, when executed, cause a machine to perform a method for a storage system, the method comprising:
acquiring respective degrees of wear of a plurality of hard disks associated with a redundant array of independent disks (RAID);
determining respective spare blocks among the hard disk blocks of the plurality of hard disks based on the degree of wear such that the number of spare blocks in one hard disk is positively correlated with the degree of wear of that hard disk; and
selecting a predetermined number of hard disk blocks from among the hard disk blocks other than the spare block for use in creating a RAID block for the RAID, the predetermined number of hard disk blocks being from different hard disks of the plurality of hard disks.
12. The computer-readable storage medium of claim 11, wherein determining the respective spare block based on the degree of wear comprises, for a given hard disk of the plurality of hard disks:
determining an average degree of wear of the plurality of hard disks;
determining an average spare block number of the plurality of hard disks; and
determining a number of spare blocks for the given hard disk based on the average wear level, the average number of spare blocks, and the wear level of the given hard disk.
13. The computer-readable storage medium of claim 12, wherein determining the number of spare blocks for the given hard disk comprises:
determining the number to be equal to the average number of spare blocks in response to the degree of wear of the given hard disk being equal to the average degree of wear;
in response to the degree of wear of the given hard disk being greater than the average degree of wear, determining the number to be greater than the average number of spare blocks; and
in response to the level of wear of the given hard disk being less than the average level of wear, determining the number to be less than the average number of spare blocks.
14. The computer-readable storage medium of claim 12, wherein determining the average spare block number comprises:
determining a total capacity of the spare blocks based on a difference between a total capacity of the plurality of hard disks and a required capacity of the RAID;
determining a total number of the spare blocks based on a ratio of a total capacity of the spare blocks to a capacity of a single hard disk block; and
determining the average number of spare blocks based on a ratio of the total number of spare blocks to the total number of the plurality of hard disks.
15. The computer-readable storage medium of claim 11, wherein the plurality of hard disks are included in a pool of hard disks used to create the RAID, the method further comprising, for a given hard disk of the plurality of hard disks:
comparing the degree of wear of the given hard disk to a predetermined threshold;
in response to the degree of wear of the given hard disk being greater than the predetermined threshold,
removing the given hard disk from the pool of hard disks, and
adding a new hard disk to the hard disk pool, wherein the degree of wear of the new hard disk is less than the predetermined threshold.
16. A storage system comprising a plurality of hard disks associated with a redundant array of independent disks (RAID) and an electronic device according to any one of claims 6 to 10.
CN201810049315.3A 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product Active CN110058788B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202210638491.7A CN115061624A (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product
CN201810049315.3A CN110058788B (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product
US16/177,736 US10628071B2 (en) 2018-01-18 2018-11-01 Method of storage allocation, electronic device, storage system and computer program product
US16/811,530 US11106376B2 (en) 2018-01-18 2020-03-06 Method of storage allocation, electronic device, storage system and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810049315.3A CN110058788B (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210638491.7A Division CN115061624A (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product

Publications (2)

Publication Number Publication Date
CN110058788A CN110058788A (en) 2019-07-26
CN110058788B true CN110058788B (en) 2022-06-14

Family

ID=67213948

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210638491.7A Pending CN115061624A (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product
CN201810049315.3A Active CN110058788B (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210638491.7A Pending CN115061624A (en) 2018-01-18 2018-01-18 Method for allocating storage, electronic device, storage system and computer program product

Country Status (2)

Country Link
US (2) US10628071B2 (en)
CN (2) CN115061624A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124260B (en) * 2018-10-31 2023-09-08 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for managing redundant array of independent disks
CN111124271B (en) * 2018-10-31 2023-09-08 伊姆西Ip控股有限责任公司 Method, apparatus and medium for performing resource reallocation for disk systems
JP2021135760A (en) * 2020-02-27 2021-09-13 キオクシア株式会社 Memory system and memory control method
CN113391758A (en) * 2020-03-13 2021-09-14 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing stripes in a storage system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101925884A (en) * 2007-11-28 2010-12-22 三德动力有限公司 Increasing spare space in memory to extend lifetime of memory
WO2013145024A1 (en) * 2012-03-30 2013-10-03 Hitachi, Ltd. Storage system with flash memory, and storage control method
CN103678144A (en) * 2012-09-05 2014-03-26 慧荣科技股份有限公司 Data storage device and flash memory control method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8473779B2 (en) * 2008-02-29 2013-06-25 Assurance Software And Hardware Solutions, Llc Systems and methods for error correction and detection, isolation, and recovery of faults in a fail-in-place storage array
US20100235605A1 (en) * 2009-02-13 2010-09-16 Nir Perry Enhancement of storage life expectancy by bad block management
US8639877B2 (en) * 2009-06-30 2014-01-28 International Business Machines Corporation Wear leveling of solid state disks distributed in a plurality of redundant array of independent disk ranks
US8370659B2 (en) * 2009-09-21 2013-02-05 Dell Products L.P. Systems and methods for time-based management of backup battery life in memory controller systems
JP6331773B2 (en) * 2014-06-30 2018-05-30 富士通株式会社 Storage control device and storage control program
US10082965B1 (en) * 2016-06-30 2018-09-25 EMC IP Holding Company LLC Intelligent sparing of flash drives in data storage systems
US11269562B2 (en) * 2019-01-29 2022-03-08 EMC IP Holding Company, LLC System and method for content aware disk extent movement in raid
RU2019102665A (en) * 2019-01-31 2020-07-31 ИЭмСи АйПи ХОЛДИНГ КОМПАНИ, ЛЛС SYSTEM AND METHOD FOR ACCELERATED RAID RECOVERY THROUGH ISSUE KNOWLEDGE


Also Published As

Publication number Publication date
US11106376B2 (en) 2021-08-31
US20200210089A1 (en) 2020-07-02
US10628071B2 (en) 2020-04-21
CN110058788A (en) 2019-07-26
CN115061624A (en) 2022-09-16
US20190220212A1 (en) 2019-07-18

Similar Documents

Publication Publication Date Title
CN110058788B (en) Method for allocating storage, electronic device, storage system and computer program product
US9733844B2 (en) Data migration method, data migration apparatus, and storage device
CN110058789B (en) Method for managing storage system, storage system and computer program product
CN103688248B (en) A kind of management method of storage array, device and controller
CN110737401B (en) Method, apparatus and computer program product for managing redundant array of independent disks
CN109002259B (en) Hard disk allocation method, system, device and storage medium of homing group
US11150949B2 (en) Resource release method, resource allocation method, devices, and computer program products
CN111857554B (en) Adaptive change of RAID redundancy level
US10922201B2 (en) Method and device of data rebuilding in storage system
US11474919B2 (en) Method for managing multiple disks, electronic device and computer program product
CN111858130A (en) Method, apparatus and computer program product for splitting a disk set
CN108170366A (en) Storage medium management method, device and storage device in storage device
CN112732168B (en) Method, apparatus and computer program product for managing a storage system
US20200285510A1 (en) High precision load distribution among processors
CN107346350B (en) Distribution method, device and cluster system for integrated circuit layout data processing tasks
CN109725835A (en) For managing the method, equipment and computer program product of disk array
CN109725827A (en) Manage the method, system and computer program product of storage system
WO2020094134A1 (en) Disk allocation method and apparatus, and readable storage medium
CN110286848A (en) Data processing method and device
CN115293335A (en) Image identification method and device based on implicit universal matrix multiplication
CN111124260B (en) Method, electronic device and computer program product for managing redundant array of independent disks
CN107577439B (en) Method, apparatus, device and computer readable storage medium for allocating processing resources
CN114168064A (en) Method, apparatus and computer program product for rebuilding a storage system
JP6524945B2 (en) Control device, storage device, storage control method and computer program
CN112748864B (en) Method, electronic device and computer program product for allocating storage discs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant