CN1670682A - A data reintegration method - Google Patents

A data reintegration method

Info

Publication number
CN1670682A
CN1670682A · CN 200410008942A
Authority
CN
China
Prior art keywords
data
recombination
address
raid
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200410008942
Other languages
Chinese (zh)
Other versions
CN100381999C (en)
Inventor
张巍
黄玉环
张国彬
张粤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB2004100089420A priority Critical patent/CN100381999C/en
Publication of CN1670682A publication Critical patent/CN1670682A/en
Application granted granted Critical
Publication of CN100381999C publication Critical patent/CN100381999C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This invention discloses a data recombination method. A block of disk space at the high-address end of the original redundant array of independent disks (RAID) system is reserved as a recombination area, and the following steps are repeated: a. determine the low address of the data currently to be recombined, and recombine the data between this low address and the current high address into data of the new RAID type; b. starting from the write start address and proceeding toward lower addresses, write the recombined data into the new RAID system; c. take the address adjacent to the low address of the data just recombined as the high address of the data to be recombined next, and the address adjacent to the lowest address just written as the next write start address.

Description

A data recombination method
Technical field
The present invention relates to a data processing method, and in particular to a data recombination method for a redundant array of independent disks (RAID).
Background technology
With the rapid development of science and technology and the widespread application of computer technology, ever higher performance is demanded of storage devices, and RAID, as a mature technology, is widely used in disk arrays.
Briefly, RAID is a technology in which multiple independent physical hard disks are combined in different ways to form a logical disk group, thereby providing higher storage performance than a single hard disk as well as data redundancy. Data redundancy means that once user data is damaged, the redundant information can be used to recover the damaged data, thereby ensuring the safety of user data. Disk arrays form different RAID levels according to how they are composed; at present RAID comprises seven basic levels, from RAID0 to RAID6. In addition, there are array configurations built from these basic levels, such as RAID0+1, which is a combination of RAID0 and RAID1, and RAID0+5, which is a combination of RAID0 and RAID5. Different RAID levels represent different storage performance, data security and storage cost. For example:
RAID0 divides data into many blocks and writes them in parallel to the hard disks in the disk array; when reading, the RAID controller reads the data from each hard disk, restores it to its original order, and passes it to the host. The advantage of this approach is that striping and parallel transfer improve the host read/write speed, and no storage space in the disk array is spent on redundancy. However, it does nothing to improve system reliability: when any hard disk medium fails, the system cannot recover the data.

To ensure the security of the data recombination process, data recombination nowadays generally adopts a cooperating dual-controller arrangement: one controller acts as the recombination controller and controls the recombination process and the host read/write requests, while the other controller is a backup of the recombination controller and immediately takes over its work if the recombination controller errs or fails.
RAID1 divides the hard disks in the disk array into two identical groups that mirror each other; when any disk medium fails, the data on its mirror can be used for recovery, which improves system reliability. Data is still striped and transferred in parallel, so read/write speed improves and system reliability is strengthened, but hard disk utilization is low: the redundancy is 50%.
Like RAID0, RAID3 also stripes data and transfers it in parallel; the difference is that after striping it computes parity for the striped data and writes both the striped data and the parity information to the hard disk array. This approach improves both access speed and data reliability, since damaged data can be reconstructed from the data on the undamaged disks and on the parity disk. The hard disk utilization of RAID3 is higher than that of RAID1; for example, in an array of 5 hard disks the redundancy is only 20%. However, because RAID3 stores the parity information on one fixed hard disk, that disk carries a heavier load and becomes a new bottleneck.
RAID5 is similar to RAID3 in how it processes data; the difference is that it interleaves the parity information across all hard disks in the array, thereby overcoming the bottleneck of RAID3.
Thus, a RAID distributes the read/write requests coming from the host across the member disks that make it up in order to improve host read/write performance, and uses the parity data or mirror data in the RAID to improve system redundancy, so that after one or more member disks of the RAID fail, the data on the failed member disks can be recovered from the parity or mirror data. At the same time, the read/write performance and effective storage capacity of a RAID also depend on its RAID type and on the number of member disks that make it up. For example, the write performance of RAID5 is worse than that of RAID0+1, because when RAID5 writes data it must regenerate parity with XOR operations, whereas RAID0+1 only needs to write mirror data. As another example, consider RAID 3D+1P and RAID 7D+1P, where D denotes a disk, the number before D is the number of disks, P denotes parity data, and the number before P is the amount of parity per stripe. Because the latter spreads read/write requests over more RAID member disks and thus improves read/write concurrency, RAID 7D+1P has better read/write performance than RAID 3D+1P. However, as RAID performance improves, its cost rises as well: RAID 7D+1P performs better than RAID 3D+1P, but its member disk count grows from 4 to 8. Which RAID type to deploy therefore depends on the user's application scenario, performance and cost requirements. In use, users often dynamically adjust the original RAID type as their needs change. For example, as the amount of stored information grows, an original RAID 3D+1P can no longer meet the user's needs, and the user can add one or more disks to the original 3D+1P system, so that 3D+1P becomes (3+r)D+1P, where r is the number of newly added disks. In this way, the capacity of the RAID system is increased dynamically and the performance of the system is improved at the same time. This operation changes the RAID level, and the data on the original disk array must be rearranged across the new disk array as a whole; this process of rearranging the data is called data recombination.
Generally speaking, two issues must be considered when dynamically expanding or changing the RAID type: one is how to efficiently recombine the data of the old RAID type into the new RAID type; the other is how to guarantee that the data remain consistent and secure throughout the recombination process.
In the prior art, many patents address the data recombination problem that arises when dynamically expanding a RAID; a representative one is patent EP0654736A2 of Hitachi, Ltd., shown in Fig. 1. Its recombination proceeds region by region from low addresses to high addresses. During recombination, the data of the old type is first read into the cache memory (Cache) according to the old layout, new parity data is then regenerated according to the expanded layout, and finally the recombined data is written to the disk array in the new layout. Specifically, the method divides all data by storage address into three regions and two points: region 6040 is the not-yet-recombined region, which is still of the old RAID type and must be accessed according to the old RAID type; region 6020 is the already-recombined region, which is of the new RAID type and must be accessed according to the new RAID type; region 6030 is the region currently being recombined, whose RAID type is in flux and which must be accessed through the Cache.
The two points are 6011 and 6012: 6011 is the start of the not-yet-recombined data, i.e. the logical start address of region 6040; 6012 is the end of the already-recombined data, i.e. the logical end address of region 6020. During recombination these two points are kept in the Cache and updated dynamically.
Thus, when the host needs to read or write the array, the already-recombined part 6020 and the not-yet-recombined part 6040 are accessed in the post-expansion and pre-expansion layouts respectively, while read/write requests to the part 6030 currently being recombined are completed in the Cache.
This method recombines efficiently and can also recover the data of a failed disk by reconstruction, solving the problem of disk failure during recombination. However, for the region whose recombination is in progress, if the RAID controller suddenly fails while data is being recombined into the new RAID type, the data of the old RAID type in that region has already been overwritten while the recombination into the new RAID type has not yet completed, so the data in that region loses its original integrity and consistency.
Another equally representative data recombination technique is patent WO98/15895 of MYLEX. To prevent the old RAID type data from being overwritten before the new RAID type data has been fully written, which would make the data unrecoverable, the data of the old RAID type that needs recombination is first copied to free space in the RAID system, and only then is recombination started. Fig. 2 illustrates this approach: D1 to D3 are the disks of the old RAID type array, D4 is the newly added disk, and Figs. 2a to 2d show the different storage states during the recombination process.
First, a region generally called the Destructive Zone is determined in the old-RAID-type disks, with size N × M = 2 × 3 = 6, i.e. blocks 0 to 5 of the old RAID type, as shown in Fig. 2a.
Then, according to the size of the Destructive Zone, the data in the last 6 stripe data blocks (DB) DB1 to DB6 of D3 are copied, from bottom to top, into the corresponding positions of the newly added disk D4, so as to vacate in D3 a space of the same size as the Destructive Zone, as shown in Fig. 2b.
Next, the data 0 to 5 of Destructive Zone size that need recombination are copied, from bottom to top, into the last 6 data blocks vacated in D3, and at the same time are copied again into the region of D4 immediately adjacent to DB6. The purpose of this double copy is mainly to avoid data loss if a disk fails during recombination, as shown in Fig. 2c.
Finally, the old RAID type data in the Destructive Zone is recombined into new RAID type data, as shown in Fig. 2d.
The above operations constitute one complete recombination of a certain amount of data. All remaining recombinations repeat this process: each time, the data to be recombined is first copied twice, into the last 6 data blocks of D3 and into a data space of the same size adjacent to DB6 on D4, and the old-type data is then recombined; this cycle continues until recombination is finished.
This data recombination method solves the data inconsistency that may occur during recombination and effectively improves the security of data recombination. However, because this method backs up the old-RAID-type data elsewhere and then recombines it in place, the data to be recombined must go through several read and write operations before it can be recombined, so the efficiency of recombination is greatly reduced.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a data recombination method for a redundant array of independent disks that simplifies the data recombination process while guaranteeing high security and improving recombination efficiency.
To achieve the above purpose, the technical solution of the present invention is realized as follows:
The invention discloses a data recombination method for a redundant array of independent disks: a block of disk space at the tail of the high-address end of the original RAID system is reserved as a recombination area, the high address of this reserved recombination area is taken as the start address for writing data, and the highest address holding data in the original RAID system is taken as the initial high address of the data to be recombined. The data recombination method further comprises the following steps:
a. determining the low address of the data currently to be recombined, and recombining the data between the high address and the low address of the data currently to be recombined into data of the new RAID type;
b. writing the recombined data into the new RAID system sequentially, starting from the write start address and proceeding toward lower addresses;
c. judging whether there is still data to be recombined; if so, taking the address adjacent to and below the low address of the data currently recombined as the high address of the data to be recombined next, and the address adjacent to and below the low address of the data just written as the next start address for writing data, and returning to step a; otherwise, ending the current recombination flow.
The reserved recombination area comprises at least one region whose size is the ratio of the number of member disks of the new RAID system to the number of member disks of the original RAID system, rounded up.
In the above scheme, the size of the data area recombined each time is the same as the size of the reserved recombination area.
Determining the low address of the data to be recombined in step a means: subtracting the size of the reserved recombination area from the current high address of the data to be recombined and then adding 1, which gives the current low address of the data to be recombined. When the resulting low address is less than 0, the current low address of the data to be recombined is set to 0.
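The low-address rule above reduces to one line of arithmetic. The following Python fragment is only an illustrative sketch (the function name and the stripe-granular addressing are assumptions, not part of the claims):

```python
def current_low_address(high_address: int, reserved_size: int) -> int:
    """Low address of the data to recombine in this pass (sketch).

    high_address  -- current high address of the data to be recombined
    reserved_size -- size of the reserved recombination area (K stripes)
    """
    low = high_address - reserved_size + 1
    return max(low, 0)  # set to 0 when fewer than K stripes remain
```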
Step a further comprises: copying the data between the high address and the low address of the data currently to be recombined into the cache memory (Cache), and recombining it in the Cache into data of the new RAID type.
In the above scheme, the address in the original RAID of the data copied into the Cache is marked with a pointer each time, and this pointer is mirrored to the backup controller. The address in the new RAID at which the recombined data is written is likewise marked with a pointer each time.
Compared with the prior art, the data recombination method provided by the present invention changes the order of recombination to proceed from high addresses to low addresses, and recombines a copy of the old-type data and writes it into a separate, specific region. This greatly improves the speed of data recombination, avoids the loss of data consistency and integrity that could occur if a controller fails during recombination, and guarantees the security of the data recombination process.
Description of drawings
Fig. 1 is a schematic diagram of the data recombination regions of prior-art patent EP0654736A2 of Hitachi, Ltd.;
Fig. 2 is a schematic diagram of the data recombination method of prior-art patent WO98/15895 of MYLEX;
Fig. 3 is a schematic diagram of the data recombination method of the present invention;
Fig. 4 is a flowchart of the data recombination process of the present invention.
Embodiment
To make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described below in conjunction with an embodiment.
As shown in Fig. 3, in the present invention a region of at least K stripes must be reserved at the tail of the high-address end of the original RAID system to be recombined, as the reserved recombination area; it is used to hold the data recombined into the new type in the first pass. The data recombined in the second pass is then written immediately below the data recombined in the first pass, i.e. at addresses lower than the address written in the previous pass.
The size of the reserved recombination area is determined as follows: suppose the number of member disks of the original RAID system is M and the number of member disks of the new RAID system is N; then the reserved recombination area is K = ⌈N/M⌉ stripes, where the symbol ⌈ ⌉ denotes rounding up. For example, if the original RAID type is 3D+1P and the new RAID type is 7D+1P, then K = ⌈8/4⌉ = 2. This region is initially a data-free region, i.e. no user data is present in it. For a common RAID system the number of data member disks is at most 7 and at least 2, so K is at most 4; and the maximum stripe unit size is generally 64K, so the largest space a user could possibly have to reserve is 4 × 7 × 64K = 1792K. This space is entirely acceptable to the user, and because RAID has redundancy, keeping the reserved recombination area of K stripes free of data is not a problem.
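As a reading aid, the sizing rule and the worst-case figure quoted above can be checked with the short Python sketch below; the disk counts and the 64K stripe-unit size are taken from the example in the text, and the helper name is an assumption.

```python
import math

def reserved_stripes(old_member_disks: int, new_member_disks: int) -> int:
    """Size K of the reserved recombination area, in stripes: K = ceil(N / M)."""
    return math.ceil(new_member_disks / old_member_disks)

# Example from the description: old RAID 3D+1P (M = 4 member disks),
# new RAID 7D+1P (N = 8 member disks).
K = reserved_stripes(4, 8)              # K = 2 stripes

# Worst case quoted in the text: K = 4 stripes, 7 data units per stripe,
# 64K per stripe unit -> 1792K of reserved space.
worst_case_reserved_kib = 4 * 7 * 64    # 1792
```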
After the reserved recombination area has been determined, recombination can begin. However, so that the data being recombined has a definite direction and position, the cooperation of a pointer X is also needed: if the stripes currently being recombined are numbered y to y+K-1, then the value y is assigned to X, so that X points to the stripes currently being recombined. During recombination, the region below X is the region not yet recombined, and the region at or above X+K is the region that has already finished recombination.
During data recombination, the recombination direction is always from high addresses to low addresses, i.e. recombination starts from the last stripe of the old RAID type that is in use and works back toward lower addresses. K stripes of data are recombined in each pass, until stripe 0 of the old RAID type data has been recombined, at which point recombination ends.
The detailed process is as follows. When recombination begins, data equal in size to the recombination area is first copied from the old RAID type into the Cache, recombined according to the new RAID type, and new parity data is regenerated; the recombined new-type data is then written into the reserved recombination area and the newly added disk space, that is, into the data space of K stripes extending from the highest address of the new RAID toward lower addresses. In the second pass, the data of recombination-area size adjacent to and below the data copied in the previous pass is first copied into the Cache, recombined according to the new RAID type with new parity regenerated, and the recombined new-type data is then written immediately below the data written in the previous pass, i.e. into the space of recombination-area size adjacent to and below the address written last time. Each subsequent pass recombines K stripes of data, copying from and writing to the addresses adjacent to and below those copied and written in the previous pass, respectively, until stripe 0 of the old RAID type data has been recombined, at which point recombination ends.
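The loop described in this paragraph can be condensed into the following hypothetical Python sketch; read_stripes, recombine_to_new_type and write_stripes stand in for controller internals that the patent does not spell out, and stripe-granular addressing is assumed.

```python
def recombine_high_to_low(old_raid, new_raid, cache, K: int, last_used_stripe: int):
    """One full recombination run, K old-type stripes per pass (sketch)."""
    read_high = last_used_stripe        # highest old-type stripe holding data
    write_high = last_used_stripe + K   # top of the reserved recombination area

    while read_high >= 0:
        read_low = max(read_high - K + 1, 0)

        # copy one recombination-area-sized chunk of old-type data into the Cache
        cache.load(old_raid.read_stripes(read_low, read_high))

        # rebuild the chunk with the new stripe geometry and regenerate parity
        new_stripes = cache.recombine_to_new_type(new_raid.layout)

        # write downward, ending at the current write pointer
        new_raid.write_stripes(ending_at=write_high, data=new_stripes)

        # both windows move down by K; the next write window only covers data
        # that was already copied into the Cache during this pass
        write_high -= K
        read_high = read_low - 1
```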
As recombination proceeds, when the positions of the new-type data being written move beyond the reserved recombination area and the newly added disk space and collide with old-type data in the space in use, the old-type data there is overwritten by the recombined new-type data; at that moment, however, the old-type data being overwritten at that position has already been recombined into new-type data, so the security of the data recombination is not affected.
Host requests, i.e. real-time read/write requests for the RAID data, may arrive during recombination. For the region not yet recombined, requests are processed according to the old RAID type; for the region already recombined, they are processed according to the new RAID type and the new stripe positions; for the region being recombined, a read request is processed according to the old RAID type, while a write request must wait until the recombination of the requested data is finished and is then carried out according to the new RAID type and the new stripe positions. During recombination, if data in this region is being written back or flushed, recombination can only continue after the write-back or flush has finished.
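The region-by-region handling of host requests might be dispatched as in the sketch below; the pointer X and the region boundaries follow the description above, while the handler and helper names are placeholders rather than a disclosed interface.

```python
def handle_host_request(request, X: int, K: int):
    """Route a host read/write by region while recombination is running (sketch).

    X is the first stripe of the region being recombined; stripes below X are
    still old-type, stripes at or above X + K have already been recombined.
    """
    if request.stripe < X:                       # not yet recombined
        return process_as_old_raid_type(request)
    if request.stripe >= X + K:                  # already recombined
        return process_as_new_raid_type(request)
    # the request falls inside the region currently being recombined
    if request.is_read:
        return process_as_old_raid_type(request)
    wait_until_chunk_recombined(request.stripe)  # writes wait for this chunk
    return process_as_new_raid_type(request)
```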
In addition, to prevent the pointer X from losing the address it indicates if the recombination controller fails during recombination, X is mirrored to the other, backup controller each time it is updated, so that after the local RAID controller fails, the RAID controller at the mirror end can take over the operation and continue to complete the data recombination.
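A minimal sketch of this pointer mirroring, assuming a peer-controller interface that the patent does not detail:

```python
class RecombinationPointer:
    """Pointer X kept locally and mirrored to the backup controller (sketch)."""

    def __init__(self, backup_controller):
        self.x = None
        self.backup = backup_controller          # hypothetical peer interface

    def update(self, new_x: int) -> None:
        self.x = new_x
        self.backup.mirror_pointer_x(new_x)      # mirror after every update

    def take_over(self, mirrored_x: int) -> None:
        # on failover, resume recombination from the mirrored value of X
        self.x = mirrored_x
```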
To make the data recombination process easier to understand, it is described below in the form of a flowchart, as shown in Fig. 4, in which steps 401 to 403 are the preparatory work for data recombination. The specific flow of data recombination comprises the following steps:
Step 401: first determine, in the old-type disks, the size of the reserved recombination area;
Step 402: determine the high address of the data to be recombined in the old-type disks, and then, according to the size of the reserved recombination area, further determine the low address of the data to be recombined in the old-type disks;
Step 403: determine the highest address at which the data recombined in the first pass will be written; at this moment this highest address is simply the highest address of the disk array.
Steps 404 to 406: after the start address of the data to be recombined and the write address have been determined, the address of the data currently to be recombined is examined. In general, the amount of data recombined in each pass is K stripes, and the address of the data to be recombined is decremented by K after every pass; but because the amount of data to be recombined in the old-type disks is not necessarily an integer multiple of K, whether the current low address of the data to be recombined is less than 0 must be checked before every pass. If the current low address of the data to be recombined is not less than 0, the remaining amount of data to be recombined is not less than K stripes and recombination can continue, so step 405 is entered to query the write-back and flush state of the data to be recombined; at this moment the address range of the data to be recombined is y to (y+K-1). If the current low address of the data to be recombined is less than 0, step 406 is entered to judge whether the current high address of the data to be recombined is less than 0; if the current high address is less than 0, the data has already been fully recombined and this recombination flow ends; otherwise step 405 is entered to query the write-back and flush state of the data to be recombined, and at this moment the address range of the data to be recombined is 0 to (y+K-1).
Step 407: judge whether any data in the region to be recombined is being written back or flushed; if so, return to step 405 and query again; if no data is being written back or flushed, enter step 408.
Step 408: forbid write-back and flush requests for the data in the region to be recombined, which provides a safe data environment for the subsequent data recombination operations. The purpose of steps 404 to 408 is in fact to forbid write-back and flush requests for the data to be recombined in two different situations: in one, no data is found to be being written back or flushed, and write-back and flush of the data are forbidden immediately; in the other, data is found to be being written back or flushed, and write-back and flush requests for the data can only be forbidden after the write-back or flush in progress has finished.
Step 409: recombine the data that has been copied into the Cache according to the new RAID type, and regenerate the parity data.
Step 410: after the data recombination in the Cache is finished, write the recombined data into the disk space determined in step 403, i.e. into the data space of K stripes extending from the highest address of the new-type disks toward lower addresses; afterwards, subtract K from the pointer address for writing recombined data and assign the newly obtained address to that pointer, to serve as the address at which recombined data will be written next time.
At this point, one pass of data recombination has been completed, and the prohibition on write-back and flush of the data can be lifted, i.e. step 411 is entered.
Step 412: subtract K from the pointer value of the region of data to be recombined, so that the pointer points to the K stripes of old-type data to be recombined next time, save this new pointer value to the backup controller, and return to step 404.
At this point, a complete data recombination operation has been finished. If old-type data remains to be recombined, the next data recombination operation again enters step 404 to judge whether the current low address of the data to be recombined is less than 0; if it is not less than 0, the subsequent operations are the same as in the previous data recombination operation. If the current low address of the data to be recombined is less than 0, step 406 is entered to judge whether the current high address of the data to be recombined is less than 0; if it is not less than 0, step 405 is entered to query the write-back and flush state of the data to be recombined, then step 407 is entered to judge whether any data is being written back or flushed, and the operations thereafter are the same as in every other data recombination pass. If the judgment in step 406 is that the current high address of the data to be recombined is less than 0, no old-type data remains to be recombined and the recombination operation ends.
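For readers who prefer code to a flowchart, the following sketch walks through steps 404 to 412 under the same assumptions as the earlier sketches; the write-back/flush helpers are hypothetical placeholders for the cache machinery the flowchart refers to.

```python
def recombination_flow(old_raid, new_raid, cache, backup, K: int, y: int, w: int):
    """Steps 404-412: y is the low stripe of the next chunk to recombine,
    w the highest stripe it will be written to (sketch)."""
    while True:
        if y < 0:                                # step 404
            if y + K - 1 < 0:                    # step 406: nothing left to do
                return
            low, high = 0, y + K - 1             # partial final chunk
        else:
            low, high = y, y + K - 1

        # steps 405 and 407: poll until no stripe in the chunk is being
        # written back or flushed, then forbid such requests (step 408)
        while cache.writeback_or_flush_in_progress(low, high):
            pass                                 # query again
        cache.forbid_writeback_and_flush(low, high)

        # step 409: recombine in the Cache and regenerate parity
        cache.load(old_raid.read_stripes(low, high))
        new_stripes = cache.recombine_to_new_type(new_raid.layout)

        # step 410: write downward from the write pointer, then move it down by K
        new_raid.write_stripes(ending_at=w, data=new_stripes)
        w -= K

        cache.allow_writeback_and_flush(low, high)   # step 411
        y -= K                                       # step 412
        backup.mirror_pointer_x(y)                   # save the new pointer value
```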
In the above data recombination flow, the order of step 410 and step 411 may be exchanged; the pointer X also need not point to y, provided that at each data recombination the pointer X can identify the address range of the data to be recombined in the old-type disks.
As can be seen from the method of the present invention, the technical solution provided by the present invention keeps data intact, consistent and secure during recombination. The above are merely embodiments of the process and method of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A data recombination method for a redundant array of independent disks, characterized in that a block of disk space at the tail of the high-address end of the original RAID system is reserved as a recombination area, the high address of this reserved recombination area is taken as the start address for writing data, and the highest address holding data in the original RAID system is taken as the initial high address of the data to be recombined, the data recombination method further comprising the following steps:
a. determining the low address of the data currently to be recombined, and recombining the data between the high address and the low address of the data currently to be recombined into data of the new RAID type;
b. writing the recombined data into the new RAID system sequentially, starting from the write start address and proceeding toward lower addresses;
c. judging whether there is still data to be recombined; if so, taking the address adjacent to and below the low address of the data currently recombined as the high address of the data to be recombined next, and the address adjacent to and below the low address of the data just written as the next start address for writing data, and returning to step a; otherwise, ending the current recombination flow.
2. The data recombination method according to claim 1, characterized in that the reserved recombination area comprises at least one region whose size is the ratio of the number of member disks of the new RAID system to the number of member disks of the original RAID system, rounded up.
3. The data recombination method according to claim 1, characterized in that the size of the data area recombined each time is the same as the size of the reserved recombination area.
4. The data recombination method according to claim 1, 2 or 3, characterized in that determining the low address of the data to be recombined in step a comprises: subtracting the size of the reserved recombination area from the current high address of the data to be recombined and then adding 1, to obtain the current low address of the data to be recombined.
5. The data recombination method according to claim 4, characterized in that when the resulting low address of the data to be recombined is less than 0, the current low address of the data to be recombined is set to 0.
6. The data recombination method according to claim 1, 2 or 3, characterized in that step a further comprises: copying the data between the high address and the low address of the data currently to be recombined into the cache memory (Cache), and recombining it in the Cache into data of the new RAID type.
7. The data recombination method according to claim 1, characterized in that the method further comprises: marking with a pointer, each time, the address in the original RAID of the data copied into the Cache.
8. The data recombination method according to claim 7, characterized in that the method further comprises: before each data recombination, mirroring the pointer marking the address in the original RAID of the data to be recombined to the backup controller.
9. The data recombination method according to claim 1, characterized in that the method further comprises: marking with a pointer, each time, the address in the new RAID at which the recombined data is written.
CNB2004100089420A 2004-03-15 2004-03-15 A data reintegration method Expired - Fee Related CN100381999C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100089420A CN100381999C (en) 2004-03-15 2004-03-15 A data reintegration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100089420A CN100381999C (en) 2004-03-15 2004-03-15 A data reintegration method

Publications (2)

Publication Number Publication Date
CN1670682A true CN1670682A (en) 2005-09-21
CN100381999C CN100381999C (en) 2008-04-16

Family

ID=35041958

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100089420A Expired - Fee Related CN100381999C (en) 2004-03-15 2004-03-15 A data reintegration method

Country Status (1)

Country Link
CN (1) CN100381999C (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955673A (en) * 2011-08-29 2013-03-06 厦门市美亚柏科信息股份有限公司 RAID5 (Redundant Array of Inexpensive Disc level 5) intelligent regrouping method and device
CN105487825A (en) * 2015-12-08 2016-04-13 浙江宇视科技有限公司 RAID array reconstruction method and device
CN108228382A (en) * 2018-01-11 2018-06-29 成都信息工程大学 A kind of data reconstruction method for EVENODD code single-deck failures
CN111158589A (en) * 2019-12-16 2020-05-15 绿晶半导体科技(北京)有限公司 Dynamic management method and device for storage array

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2603757B2 (en) * 1990-11-30 1997-04-23 富士通株式会社 Method of controlling array disk device
JP3249868B2 (en) * 1993-11-19 2002-01-21 株式会社日立製作所 Array type storage system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955673A (en) * 2011-08-29 2013-03-06 厦门市美亚柏科信息股份有限公司 RAID5 (Redundant Array of Inexpensive Disc level 5) intelligent regrouping method and device
CN102955673B (en) * 2011-08-29 2015-08-12 厦门市美亚柏科信息股份有限公司 RAID5 intelligence recombination method and device
CN105487825A (en) * 2015-12-08 2016-04-13 浙江宇视科技有限公司 RAID array reconstruction method and device
CN105487825B (en) * 2015-12-08 2019-04-30 浙江宇视科技有限公司 RAID array method for reconstructing and device
CN108228382A (en) * 2018-01-11 2018-06-29 成都信息工程大学 A kind of data reconstruction method for EVENODD code single-deck failures
CN108228382B (en) * 2018-01-11 2021-08-10 成都信息工程大学 Data recovery method for single-disk fault of EVENODD code
CN111158589A (en) * 2019-12-16 2020-05-15 绿晶半导体科技(北京)有限公司 Dynamic management method and device for storage array
CN111158589B (en) * 2019-12-16 2023-10-20 绿晶半导体科技(北京)有限公司 Dynamic management method and device for storage array

Also Published As

Publication number Publication date
CN100381999C (en) 2008-04-16

Similar Documents

Publication Publication Date Title
US7213166B2 (en) In-place data transformation for fault-tolerant disk storage systems
US10152254B1 (en) Distributing mapped raid disk extents when proactively copying from an EOL disk
US7370145B2 (en) Write back method for RAID apparatus
JP3753461B2 (en) Data writing method and data storage system by redundancy parity method
KR102533389B1 (en) Data storage device for increasing lifetime of device and raid system including the same
JP2654346B2 (en) Disk array system, storage method, and control device
US8392678B2 (en) Storage system and data management method
US20100306466A1 (en) Method for improving disk availability and disk array controller
CN101236482B (en) Method for processing data under degrading state and independent redundancy magnetic disc array system
US20020194428A1 (en) Method and apparatus for distributing raid processing over a network link
CN103049222A (en) RAID5 (redundant array of independent disk 5) write IO optimization processing method
US20040064641A1 (en) Storage device with I/O counter for partial data reallocation
CN100498678C (en) Method and system for read-write operation to cheap magnetic disk redundant array
WO2009058189A1 (en) Improved system and method for efficient updates of sequential block storage
CN101526882A (en) Method and device for reconstructing logic unit in redundant array subsystem of independent disk
CN100337224C (en) Method of local data migration
US7062605B2 (en) Methods and structure for rapid background initialization of a RAID logical unit
KR20110093035A (en) Apparatus for flash address translation apparatus and method thereof
CN107430494A (en) Remote Direct Memory accesses
US8402213B2 (en) Data redundancy using two distributed mirror sets
US7051156B2 (en) Raid-5 disk having cache memory
CN1253791C (en) Read-write operation method in multi-disc failure in five-grade independent redundant disc array
CN1670682A (en) A data reintegration method
US20080104445A1 (en) Raid array
JP2006252165A (en) Disk array device and computer system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080416

Termination date: 20180315