CN102521058A - Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group - Google Patents
- Publication number
- CN102521058A CN102521058A CN2011103940053A CN201110394005A CN102521058A CN 102521058 A CN102521058 A CN 102521058A CN 2011103940053 A CN2011103940053 A CN 2011103940053A CN 201110394005 A CN201110394005 A CN 201110394005A CN 102521058 A CN102521058 A CN 102521058A
- Authority
- CN
- China
- Prior art keywords
- data
- disk
- source disk
- written
- raid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a disk data pre-migration method for an RAID (Redundant Array of Independent Disks) group, and relates to the technical field of data storage. The method comprises the following steps: 1, monitoring disk error information of the RAID group, wherein the disk error information comprises software detection information and SMART alarm information; 2, if the amount of monitored disk error information reaches a preset threshold, migrating the data of the source disk having the disk error information to a target disk, wherein that data comprises both the data already existing on the source disk and the data pending to be written to the source disk; and 3, replacing the source disk with the target disk. According to the invention, disk early-warning technology predicts that a disk may fail, and the data is migrated to a spare disk in advance, before the failure occurs. Compared with the traditional method of recovering data only after a disk has failed, the method better reduces the possibility of data loss when the RAID group fails.
Description
Technical field
The present invention relates to the technical field of data storage, and specifically to a disk data pre-migration method for a RAID group.
Background technology
RAID is the abbreviation of Redundant Array of Independent Disks, sometimes also simply called a disk array (Disk Array).
In short, RAID is a technology that combines multiple independent hard disks (physical disks) in different ways into a single disk group (logical disk), providing higher storage performance than a single disk together with data backup. The different ways of organizing the array are called RAID levels (RAID Levels). The backup function means that once user data is damaged, the backup information can be used to recover it, safeguarding the safety of user data. To the user, the disk group looks like a single hard disk that can be partitioned, formatted, and so on; in short, operating a disk array is the same as operating a single hard disk. The differences are that a disk array offers much higher storage speed than a single disk and can provide automatic data backup.
Through continuous development, RAID technology now has seven basic levels, RAID 0 through RAID 6. In addition, there are combined configurations built from the basic levels, such as RAID 10 (a combination of RAID 0 and RAID 1) and RAID 50 (a combination of RAID 0 and RAID 5). Different RAID levels represent different trade-offs among storage performance, data security, and storage cost.
RAID 0: RAID 0 is not a true RAID structure and has no data redundancy. RAID 0 stripes data continuously across multiple disks and reads/writes them in parallel, so it offers a very high data transfer rate. But while improving performance, RAID 0 provides no data reliability: the failure of one disk affects all data. RAID 0 is therefore unsuitable for critical applications requiring high data availability.
RAID 1: RAID 1 achieves data redundancy through mirroring, producing mutually backed-up data on two separate disks. RAID 1 can improve read performance: when the original data is busy, data can be read directly from the mirror copy. RAID 1 has the highest overhead among the RAID levels, but it provides the highest data availability. When one disk fails, the system automatically switches to the mirror disk without needing to reconstruct the failed data.
RAID 2: Conceptually, RAID 2 is similar to RAID 3; both stripe data across different disks, with a stripe unit of a bit or a byte. However, RAID 2 uses a coding technique called the Hamming error-correcting code to provide error checking and recovery. This code requires several disks to store the check and recovery information, making RAID 2 more complex to implement. RAID 2 is therefore rarely used in commercial environments.
RAID 3: Unlike RAID 2, RAID 3 uses a single disk to store parity information. If one data disk fails, the parity disk together with the remaining data disks can regenerate the data; if the parity disk fails, data use is unaffected. RAID 3 provides a good transfer rate for large amounts of sequential data, but for random data the parity disk becomes the bottleneck of write operations.
RAID 4: Like RAID 2 and RAID 3, RAID 4 and RAID 5 also stripe data across different disks, but the stripe unit is a block or a record. RAID 4 uses one disk as the parity disk; every write operation must access the parity disk, which becomes the write bottleneck. RAID 4 is rarely used in commercial applications.
RAID 5: RAID 5 has no separately designated parity disk; instead, data and parity information are striped across all disks. On RAID 5, reads and writes can address multiple array devices simultaneously, providing higher data throughput. RAID 5 is better suited to small-block, random read/write workloads. An important difference from RAID 3 is that every data transfer in RAID 3 involves all array disks, whereas in RAID 5 most transfers operate on only a single disk, so they can proceed in parallel. RAID 5 suffers from a "write penalty": each write operation requires two reads (the old data and the old parity) and two writes (the new data and the new parity), i.e., four actual I/O operations.
RAID 6: Compared with RAID 5, RAID 6 adds a second independent block of parity information. The two independent parity systems use different algorithms, giving very high data reliability: even if two disks fail simultaneously, the data remains usable. However, RAID 6 must allocate more disk space to parity information and incurs an even larger "write penalty" than RAID 5; its write performance is quite poor.
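The single-parity levels above (RAID 3/4/5) all rest on XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be recomputed from the survivors, and an updated parity can be derived from the old data, new data, and old parity (the source of RAID 5's four-I/O "write penalty"). A minimal sketch, not specific to the patent:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A stripe with three data blocks plus one parity block.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_blocks([d0, d1, d2])

# If d1 is lost, XOR-ing the surviving blocks with the parity recovers it.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1

# RAID 5 "write penalty": updating d0 needs the old data and old parity
# (2 reads) plus the new data and new parity (2 writes): P' = P ^ Dold ^ Dnew.
new_d0 = b"\xaa\xbb"
new_parity = xor_blocks([parity, d0, new_d0])
assert new_parity == xor_blocks([new_d0, d1, d2])
```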
The general RAID disk error-handling mechanism is as follows: when an I/O error occurs on a disk while reading or writing the RAID group, the system removes the faulty disk from the RAID group, adds a hot-spare disk to the array, and then starts a data recovery operation that uses the data on the remaining disks to rebuild the data onto the newly added disk.
There are two methods for replacing a disk in the array with a new one. The first does not stop I/O; it works the same way as the faulty-disk replacement described above. The second first stops I/O, copies the entire contents of the source disk to be replaced onto the new disk, substitutes the new disk for the source disk, and then resumes I/O.
The error-handling mechanism above has two defects. First, a new disk is substituted for a faulty one only after an I/O error has occurred. In practice, an I/O error often means the disk has developed bad blocks and data has already been corrupted, so the disk can no longer be used normally; for RAID 0, a disk failure simply means loss of data. This mechanism lacks foresight: it waits passively and only remedies errors after they have occurred. Second, for a disk array holding a large amount of data, rebuilding a disk from the other disks in the array requires a great deal of computation. This recovery operation consumes substantial data-bus and CPU resources, which also means the recovery takes a long time; it reduces the I/O capability of the RAID group itself, lengthening I/O latency and lowering I/O throughput.
In addition, one of the replacement methods must first stop the RAID group's I/O, so the replacement cannot be transparent to the user. A storage system that stops I/O during use is unacceptable to the user.
Summary of the invention
(1) Technical problems to be solved
The technical problems to be solved by the present invention are: first, how to avoid the RAID group failure caused when a disk fails; second, how to replace a disk in the RAID group without stopping I/O operations; and third, how to reduce the performance loss incurred during the disk replacement process.
(2) Technical solution
To solve the above technical problems, the present invention provides a disk data pre-migration method for a RAID group, comprising the following steps:
S1: monitoring disk error information of the RAID group, the disk error information comprising software detection information and SMART alarm information;
S2: if the amount of monitored disk error information reaches a preset threshold, migrating the data of the source disk having the disk error information to a target disk, the data of the source disk comprising both the data already existing on the source disk and the data pending to be written to the source disk;
S3: replacing the source disk with the target disk.
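The trigger in steps S1/S2 can be sketched as a simple threshold check. The counter names and the threshold value below are hypothetical: the patent lists the error categories but only says "preset threshold" without fixing a number.

```python
# Hypothetical error counters for one disk; the key names and threshold value
# are illustrative -- the patent does not specify concrete numbers.
ERROR_KEYS = ("bad_block_remaps", "smart_faults", "read_errors", "unrecoverable_sectors")
THRESHOLD = 10  # assumed value

def should_premigrate(counters: dict) -> bool:
    """Return True when the monitored error count reaches the threshold,
    i.e. the condition that starts pre-migration in step S2."""
    total = sum(counters.get(k, 0) for k in ERROR_KEYS)
    return total >= THRESHOLD

assert not should_premigrate({"read_errors": 3})
assert should_premigrate({"bad_block_remaps": 4, "smart_faults": 2, "read_errors": 5})
```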
Preferably, step S2 specifically comprises: S20, creating two data caches: a migration cache for temporarily holding data to be written to the target disk, and a mirror cache for temporarily holding mirror-write data; S21, performing a mirror-write step on the data to be written to the source disk; and S22, performing a data migration step on the data to be written to the target disk.
Step S21 specifically comprises:
S211: if the amount of monitored disk error information reaches the preset threshold, intercepting write operations to the source disk before the data to be written to the source disk is written;
S212: if the written position has not yet begun migrating, not migrating the data to the target disk; if the written position has already completed migration, cloning the write operation and writing the data to the corresponding position on the target disk; if the written position is currently being migrated, first writing the mirror-write data to the mirror cache, waiting for the migration of that position to complete, and then writing the data to the target disk.
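The three-way dispatch in step S212 can be sketched as below. The dict-based disks, the `migrated_upto`/`migrating_at` markers, and the assumption that migration advances linearly through offsets are all illustrative, not taken from the patent text:

```python
def handle_write(offset, data, migrated_upto, migrating_at, source, target, mirror_cache):
    """Dispatch an intercepted write (S212) against the migration front.

    - Not yet migrated: write the source only; migration copies it later.
    - Already migrated: clone the write to the target immediately.
    - Currently migrating: park the data in the mirror cache; it is flushed
      to the target after that position finishes migrating (see S223).
    """
    source[offset] = data  # the source disk is always kept current
    if offset < migrated_upto:       # already migrated: mirror straight to target
        target[offset] = data
    elif offset == migrating_at:     # in flight: defer via the mirror cache
        mirror_cache[offset] = data
    # else: not yet migrated -- the migration task will pick it up from source

src, tgt, cache = {}, {}, {}
handle_write(5, b"A", migrated_upto=10, migrating_at=10,
             source=src, target=tgt, mirror_cache=cache)
assert tgt[5] == b"A" and 5 not in cache     # migrated region: cloned write
handle_write(10, b"B", migrated_upto=10, migrating_at=10,
             source=src, target=tgt, mirror_cache=cache)
assert cache[10] == b"B" and 10 not in tgt   # in-flight region: parked in cache
```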
Step S22 specifically comprises:
S221: reading data from the source disk and holding it temporarily in the migration cache;
S222: writing the data in the migration cache to the target disk;
S223: checking whether the mirror cache contains data that needs to be written to the target disk, and if so, writing the corresponding data in the mirror cache to the target disk;
S224: checking whether the migration is complete; if not, repeating steps S221 to S223; if complete, exiting the migration flow.
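Steps S221 to S224 amount to a copy loop interleaved with mirror-cache flushes. The sketch below uses dict-based disks and block-granularity copying as illustrative assumptions:

```python
def migrate(source, target, mirror_cache, nblocks):
    """Copy the source to the target block by block (S221-S222), flushing any
    mirror-cache entry for the block just copied (S223), until done (S224)."""
    for block in range(nblocks):
        staging = source.get(block)      # S221: read source into the migration cache
        if staging is not None:
            target[block] = staging      # S222: write the staged data to the target
        if block in mirror_cache:        # S223: newer data arrived during the copy
            target[block] = mirror_cache.pop(block)

src = {0: b"a", 1: b"b", 2: b"c"}
tgt, cache = {}, {1: b"B"}               # block 1 was overwritten mid-migration
migrate(src, tgt, cache, nblocks=3)
assert tgt == {0: b"a", 1: b"B", 2: b"c"} and not cache
```

Note how the parked write for block 1 overrides the stale copy read from the source, which is what makes the mirror cache a lock-free substitute for locking the in-flight region.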
Preferably, the software detection information comprises a bad-block remapping count, and the SMART alarm information comprises a SMART fault count, a disk read-error count, and a disk unrecoverable-sector count.
Preferably, the step of replacing the source disk with the target disk is specifically: after the data of the source disk having the disk error information has been migrated to the target disk, replacing the source disk with the target disk, transferring the source disk's pending I/O operations to the target disk, and finally removing the source disk from the RAID group.
(3) Beneficial effects
Through disk early-warning technology, the present invention predicts that a disk may fail and migrates its data to a spare disk in advance, before the failure occurs, thereby avoiding the failure. Compared with the traditional method of recovering data only after a disk has failed, it better reduces the possibility of data loss when the RAID group fails. Specifically, the disk replacement process is transparent to the user, does not affect the user's normal reading and writing of the disk array, and minimizes the system overhead incurred during replacement. The online copy-and-replace method does not need to stop the RAID group's I/O during the replacement. The mirror-write mode of online migration achieves seamless replacement of a disk in the RAID group. The mirror-write of the online migration uses a lock-free design, avoiding both the system overhead of locking and the impact of locking mechanisms on I/O operations. In addition, the migration operation only migrates the data of the faulty disk to the target disk, avoiding the large system overhead that a data recovery operation would require.
Description of drawings
Fig. 1 is the method flow diagram of the embodiment of the invention.
Embodiment
The disk data pre-migration method for a RAID group proposed by the present invention is described in detail below with reference to the accompanying drawing and an embodiment.
As shown in Fig. 1, the embodiment of the present invention provides a disk data pre-migration method for a RAID group, comprising the following steps:
S1: monitoring disk error information of the RAID group, the disk error information comprising software detection information and SMART alarm information;
S2: if the amount of monitored disk error information reaches a preset threshold, migrating the data of the source disk having the disk error information (i.e., the faulty disk) to a target disk (i.e., a backup disk), the data of the source disk comprising two kinds of data: the data already existing on the source disk and the data pending to be written to the source disk.
S3: replacing the source disk with the target disk, transferring the source disk's pending I/O to the target disk, and removing the source disk from the RAID group.
Step S2 specifically comprises: S20, creating two data caches: a migration cache for temporarily holding data to be written to the target disk, and a mirror cache for temporarily holding mirror-write data; S21, performing a mirror-write step on the data to be written to the source disk; and S22, performing a data migration step on the data to be written to the target disk, i.e., first reading data from the source disk into the migration cache, then writing the data in the migration cache to the target disk;
Step S21 specifically comprises:
S211: if the amount of monitored disk error information reaches the preset threshold, intercepting write operations to the source disk before the data to be written to the source disk is written;
S212: if the written position has not yet begun migrating, not migrating the data to the target disk; if the written position has already completed migration, cloning the write operation and writing the data to the corresponding position on the target disk; if the written position is currently being migrated, first writing the mirror-write data to the mirror cache, waiting for the migration of that position to complete, and then writing the data to the target disk.
Step S22 specifically comprises:
S221: reading data from the source disk and holding it temporarily in the migration cache;
S222: writing the data in the migration cache to the target disk;
S223: checking whether the mirror cache contains data that needs to be written to the target disk, and if so, writing the corresponding data in the mirror cache to the target disk;
S224: checking whether the migration is complete; if not, repeating steps S221 to S223; if complete, exiting the migration flow.
The software detection information comprises a bad-block remapping count; the SMART (Self-Monitoring, Analysis and Reporting Technology) alarm information comprises a SMART fault count, a disk read-error count, and a disk unrecoverable-sector count.
In step S3, after the data of the source disk having the disk error information has been migrated to the target disk, the target disk replaces the source disk, the source disk's pending I/O operations are transferred to the target disk, and finally the source disk is removed from the RAID group.
If the mirror-write task and the data migration task operate on the same block of disk space at the same time, a conflict arises. The common approach in the prior art is to guard the potential conflict with locks: the migration task locks the data being migrated and unlocks that space after the migration completes; likewise, the mirror-write task locks the space when a write begins and unlocks it when the write completes. The task that acquires the lock first has the right to operate on that space; the other task must wait until the lock is released before continuing. This locking approach forces a task to wait for the lock holder whenever a conflict occurs, which increases I/O latency during conflicts and reduces I/O performance. The present invention, by contrast, handles a conflict by first holding the data temporarily in the mirror cache and, once migration of that position completes, writing the cached data to the target disk (see step S212). This lock-free design involves no waiting and incurs no lock/unlock system overhead.
As can be seen from the above embodiment, the present invention predicts through disk early-warning technology that a disk may fail and migrates its data to a spare disk in advance, before the failure occurs; compared with the traditional method of recovering data only after a disk has failed, it better reduces the possibility of data loss when the RAID group fails. Specifically, the disk replacement process is transparent to the user, does not affect the user's normal reading and writing of the disk array, and minimizes the system overhead incurred during replacement. The online copy-and-replace method does not need to stop the RAID group's I/O during the replacement. The mirror-write mode of online migration achieves seamless replacement of a disk in the RAID group. The mirror-write of the online migration uses a lock-free design, avoiding both the system overhead of locking and the impact of locking mechanisms on I/O operations. In addition, the migration operation only migrates the data of the faulty disk to the target disk, avoiding the large system overhead that a data recovery operation would require.
The above embodiment is intended only to illustrate the present invention, not to limit it. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention; all equivalent technical solutions therefore also fall within the scope of the present invention, and the patent protection scope of the present invention shall be defined by the claims.
Claims (6)
1. A disk data pre-migration method for a RAID group, characterized in that it comprises the following steps:
S1: monitoring disk error information of the RAID group, the disk error information comprising software detection information and SMART alarm information;
S2: if the amount of monitored disk error information reaches a preset threshold, migrating the data of the source disk having the disk error information to a target disk, the data of the source disk comprising both the data already existing on the source disk and the data pending to be written to the source disk, these being the data to be written to the target disk;
S3: replacing the source disk with the target disk.
2. the method for claim 1 is characterized in that, step S2 specifically comprises: the step of S20, two data buffer memorys of establishment: the migration buffer memory of creating the data that are used for temporary destination disk to be written; With the establishment mirror cache, said mirror cache is used for temporary mirror-write data; S21, the data of source tray to be written are carried out the mirror-write step; With S22, the data of destination disk to be written are carried out the data migration step.
3. method as claimed in claim 2 is characterized in that step S21 specifically comprises:
S211, if the quantity of the said disk error information that monitors reaches preset threshold value, before the data of source tray to be written write source tray, intercept and capture the write operation of source tray;
If the position that S212 writes does not also begin migration, the data migtation with source tray to be written does not arrive destination disk; If the position of writing has been accomplished migration, clone said write operation, and the data of said source tray to be written are write the corresponding position of destination disk; If move the position of writing, be written to mirror cache to the mirror-write data earlier, wait the data migtation of this position of writing to accomplish, write destination disk to the data of writing again.
4. The method of claim 2, characterized in that step S22 specifically comprises:
S221: reading data from the source disk and holding it temporarily in the migration cache;
S222: writing the data in the migration cache to the target disk;
S223: checking whether the mirror cache contains data that needs to be written to the target disk, and if so, writing the corresponding data in the mirror cache to the target disk;
S224: checking whether the migration is complete; if not, repeating steps S221 to S223; if complete, exiting the migration flow.
5. the method for claim 1 is characterized in that, said software detection information comprises that bad piece is redirected number, and said SMART warning information comprises the SMART number of faults, and disk read error number and disk can not be repaired sector number.
6. like each described method in the claim 1~5; It is characterized in that; The step of said use destination disk replacement source tray is specially: the data migtation that will have the source tray of said disk error information arrives destination disk; Use said destination disk to replace said source tray then, again that said source tray is relevant I/O transition of operation removes said source tray to said destination disk at last from said RAID group.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103940053A CN102521058A (en) | 2011-12-01 | 2011-12-01 | Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103940053A CN102521058A (en) | 2011-12-01 | 2011-12-01 | Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102521058A true CN102521058A (en) | 2012-06-27 |
Family
ID=46291993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103940053A Pending CN102521058A (en) | 2011-12-01 | 2011-12-01 | Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102521058A (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103513942A (en) * | 2013-10-21 | 2014-01-15 | 华为技术有限公司 | Method and device for reconstructing independent redundancy array of inexpensive disks |
CN103577111A (en) * | 2012-07-23 | 2014-02-12 | 上海宝存信息科技有限公司 | Nonvolatile memory based dynamic redundant array of independent disks (RAID) storage system and method |
CN103677733A (en) * | 2013-12-16 | 2014-03-26 | 华为技术有限公司 | Method and device for changing RAID attributes |
CN103713969A (en) * | 2013-12-30 | 2014-04-09 | 华为技术有限公司 | Method and device for improving reliability of solid state disk |
WO2014075586A1 (en) * | 2012-11-13 | 2014-05-22 | 浙江宇视科技有限公司 | Method and device for automatically recovering storage of jbod array |
CN104375953A (en) * | 2013-08-15 | 2015-02-25 | 联想(北京)有限公司 | Equipment control method and electronic equipment |
CN104407821A (en) * | 2014-12-12 | 2015-03-11 | 浪潮(北京)电子信息产业有限公司 | Method and device for achieving RAID reconstitution |
CN104461771A (en) * | 2014-11-03 | 2015-03-25 | 北京百度网讯科技有限公司 | Data backup processing method and device |
WO2015176455A1 (en) * | 2014-05-22 | 2015-11-26 | 中兴通讯股份有限公司 | Hadoop-based hard disk damage handling method and device |
CN105224888A (en) * | 2015-09-29 | 2016-01-06 | 上海爱数软件有限公司 | A kind of data of magnetic disk array protection system based on safe early warning technology |
CN106201834A (en) * | 2016-07-06 | 2016-12-07 | 乐视控股(北京)有限公司 | A kind for the treatment of method and apparatus of disk failures |
CN106610788A (en) * | 2015-10-26 | 2017-05-03 | 华为技术有限公司 | Hard disk array control method and device |
CN107391042A (en) * | 2017-07-28 | 2017-11-24 | 郑州云海信息技术有限公司 | The design method and system of a kind of disk array |
CN107612719A (en) * | 2017-08-29 | 2018-01-19 | 深圳市盛路物联通讯技术有限公司 | The data back up method and device of Internet of Things access point |
CN107733916A (en) * | 2017-11-09 | 2018-02-23 | 新华三云计算技术有限公司 | The distributed lock resources control authority moving method and device of a kind of image file |
CN108205424A (en) * | 2017-12-29 | 2018-06-26 | 北京奇虎科技有限公司 | Data migration method, device and electronic equipment based on disk |
WO2019071431A1 (en) * | 2017-10-10 | 2019-04-18 | 华为技术有限公司 | I/o request processing method and device and host |
US10389342B2 (en) | 2017-06-28 | 2019-08-20 | Hewlett Packard Enterprise Development Lp | Comparator |
US10402113B2 (en) | 2014-07-31 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Live migration of data |
US10402261B2 (en) | 2015-03-31 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Preventing data corruption and single point of failure in fault-tolerant memory fabrics |
US10402287B2 (en) | 2015-01-30 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Preventing data corruption and single point of failure in a fault-tolerant memory |
US10409681B2 (en) | 2015-01-30 | 2019-09-10 | Hewlett Packard Enterprise Development Lp | Non-idempotent primitives in fault-tolerant memory |
CN110445803A (en) * | 2019-08-21 | 2019-11-12 | 之江实验室 | A kind of traffic smoothing moving method of isomery cloud platform |
CN110545268A (en) * | 2019-08-21 | 2019-12-06 | 之江实验室 | multidimensional mimicry voting method based on process elements |
US10540109B2 (en) | 2014-09-02 | 2020-01-21 | Hewlett Packard Enterprise Development Lp | Serializing access to fault tolerant memory |
US10594442B2 (en) | 2014-10-24 | 2020-03-17 | Hewlett Packard Enterprise Development Lp | End-to-end negative acknowledgment |
US10664369B2 (en) | 2015-01-30 | 2020-05-26 | Hewlett Packard Enterprise Development Lp | Determine failed components in fault-tolerant memory |
CN111324304A (en) * | 2020-02-14 | 2020-06-23 | 西安奥卡云数据科技有限公司 | Data protection method and device based on SSD hard disk life prediction |
CN112084061A (en) * | 2019-06-15 | 2020-12-15 | 国际商业机器公司 | Reducing data loss events in RAID arrays of the same RAID level |
CN113311990A (en) * | 2020-02-26 | 2021-08-27 | 杭州海康威视数字技术股份有限公司 | Data storage method, device and storage medium |
CN115061641A (en) * | 2022-08-16 | 2022-09-16 | 新华三信息技术有限公司 | Disk fault processing method, device, equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1501364A (en) * | 2002-11-18 | 2004-06-02 | 华为技术有限公司 | A hot backup data migration method |
CN1519726A (en) * | 2003-01-24 | 2004-08-11 | 华为技术有限公司 | Online method for reorganizing magnetic disk |
US20070174720A1 (en) * | 2006-01-23 | 2007-07-26 | Kubo Robert A | Apparatus, system, and method for predicting storage device failure |
CN101866271A (en) * | 2010-06-08 | 2010-10-20 | 华中科技大学 | Security early warning system and method based on RAID |
CN101923501A (en) * | 2010-07-30 | 2010-12-22 | 华中科技大学 | Disk array multi-level fault tolerance method |
- 2011-12-01: CN application CN2011103940053A filed, published as CN102521058A; status: Pending
Non-Patent Citations (2)
Title |
---|
Hu Wei et al., "Research on Key Technologies of Highly Reliable Disk Arrays Based on Intelligent Early Warning and Self-Repair", China Master's Theses Full-text Database, 30 November 2010 (2010-11-30) *
Hu Wei et al., "Research on Self-Recovering Storage Systems Based on Intelligent Early Warning", Journal of Computer Research and Development, 3 December 2010 (2010-12-03) *
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577111A (en) * | 2012-07-23 | 2014-02-12 | Shanghai Baocun Information Technology Co., Ltd. | Nonvolatile memory based dynamic redundant array of independent disks (RAID) storage system and method |
CN103577111B (en) * | 2012-07-23 | 2017-05-31 | Shanghai Baocun Information Technology Co., Ltd. | Dynamic redundant array of independent disks storage system and method based on nonvolatile memory |
WO2014075586A1 (en) * | 2012-11-13 | 2014-05-22 | Zhejiang Uniview Technologies Co., Ltd. | Method and device for automatically recovering storage of JBOD array |
US9697078B2 (en) | 2012-11-13 | 2017-07-04 | Zhejiang Uniview Technologies Co., Ltd | Method and device for auto recovery storage of JBOD array |
CN104375953A (en) * | 2013-08-15 | 2015-02-25 | Lenovo (Beijing) Co., Ltd. | Equipment control method and electronic equipment |
CN103513942B (en) * | 2013-10-21 | 2016-06-29 | Huawei Technologies Co., Ltd. | Method and device for reconstructing a redundant array of independent disks |
CN103513942A (en) * | 2013-10-21 | 2014-01-15 | Huawei Technologies Co., Ltd. | Method and device for reconstructing a redundant array of independent disks |
WO2015058542A1 (en) * | 2013-10-21 | 2015-04-30 | Huawei Technologies Co., Ltd. | Reconstruction method and device for redundant array of independent disks |
CN103677733A (en) * | 2013-12-16 | 2014-03-26 | Huawei Technologies Co., Ltd. | Method and device for changing RAID attributes |
CN103677733B (en) * | 2013-12-16 | 2017-04-12 | Huawei Technologies Co., Ltd. | Method and device for changing RAID attributes |
CN103713969A (en) * | 2013-12-30 | 2014-04-09 | Huawei Technologies Co., Ltd. | Method and device for improving reliability of solid state disk |
WO2015176455A1 (en) * | 2014-05-22 | 2015-11-26 | ZTE Corporation | Hadoop-based hard disk damage handling method and device |
US10402113B2 (en) | 2014-07-31 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Live migration of data |
US10540109B2 (en) | 2014-09-02 | 2020-01-21 | Hewlett Packard Enterprise Development Lp | Serializing access to fault tolerant memory |
US11016683B2 (en) | 2014-09-02 | 2021-05-25 | Hewlett Packard Enterprise Development Lp | Serializing access to fault tolerant memory |
US10594442B2 (en) | 2014-10-24 | 2020-03-17 | Hewlett Packard Enterprise Development Lp | End-to-end negative acknowledgment |
CN104461771A (en) * | 2014-11-03 | 2015-03-25 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Data backup processing method and device |
CN104407821B (en) * | 2014-12-12 | 2018-02-06 | Inspur (Beijing) Electronic Information Industry Co., Ltd. | Method and device for realizing RAID reconstruction |
CN104407821A (en) * | 2014-12-12 | 2015-03-11 | Inspur (Beijing) Electronic Information Industry Co., Ltd. | Method and device for realizing RAID reconstruction |
US10409681B2 (en) | 2015-01-30 | 2019-09-10 | Hewlett Packard Enterprise Development Lp | Non-idempotent primitives in fault-tolerant memory |
US10402287B2 (en) | 2015-01-30 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Preventing data corruption and single point of failure in a fault-tolerant memory |
US10664369B2 (en) | 2015-01-30 | 2020-05-26 | Hewlett Packard Enterprise Development Lp | Determine failed components in fault-tolerant memory |
US10402261B2 (en) | 2015-03-31 | 2019-09-03 | Hewlett Packard Enterprise Development Lp | Preventing data corruption and single point of failure in fault-tolerant memory fabrics |
CN105224888A (en) * | 2015-09-29 | 2016-01-06 | Shanghai Eisoo Software Co., Ltd. | Disk array data protection system based on safety early warning technology |
CN106610788A (en) * | 2015-10-26 | 2017-05-03 | Huawei Technologies Co., Ltd. | Hard disk array control method and device |
CN106201834A (en) * | 2016-07-06 | 2016-12-07 | Le Holdings (Beijing) Co., Ltd. | Method and apparatus for handling disk failures |
US10389342B2 (en) | 2017-06-28 | 2019-08-20 | Hewlett Packard Enterprise Development Lp | Comparator |
CN107391042A (en) * | 2017-07-28 | 2017-11-24 | Zhengzhou Yunhai Information Technology Co., Ltd. | Disk array design method and system |
CN107612719B (en) * | 2017-08-29 | 2021-03-19 | Shenzhen Shenglu IoT Communication Technology Co., Ltd. | Data backup method and device for Internet of Things access point |
CN107612719A (en) * | 2017-08-29 | 2018-01-19 | Shenzhen Shenglu IoT Communication Technology Co., Ltd. | Data backup method and device for Internet of Things access point |
CN109906438B (en) * | 2017-10-10 | 2021-02-09 | Huawei Technologies Co., Ltd. | Method for processing I/O request, storage array and host |
WO2019071431A1 (en) * | 2017-10-10 | 2019-04-18 | Huawei Technologies Co., Ltd. | I/O request processing method and device, and host |
US11762555B2 (en) | 2017-10-10 | 2023-09-19 | Huawei Technologies Co., Ltd. | I/O request processing method, storage array, and host |
CN109906438A (en) * | 2017-10-10 | 2019-06-18 | Huawei Technologies Co., Ltd. | Method for processing I/O requests, storage array, and host |
WO2019071699A1 (en) * | 2017-10-10 | 2019-04-18 | Huawei Technologies Co., Ltd. | Method for processing I/O request, storage array, and host |
EP4030296A1 (en) * | 2017-10-10 | 2022-07-20 | Huawei Technologies Co., Ltd. | I/o request processing method, storage array, and host |
US11209983B2 (en) | 2017-10-10 | 2021-12-28 | Huawei Technologies Co., Ltd. | I/O request processing method, storage array, and host |
CN107733916A (en) * | 2017-11-09 | 2018-02-23 | New H3C Cloud Computing Technologies Co., Ltd. | Method and device for migrating distributed lock resource control authority of an image file |
CN108205424A (en) * | 2017-12-29 | 2018-06-26 | Beijing Qihoo Technology Co., Ltd. | Disk-based data migration method and device, and electronic equipment |
CN112084061A (en) * | 2019-06-15 | 2020-12-15 | International Business Machines Corporation | Reducing data loss events in RAID arrays of the same RAID level |
CN110545268A (en) * | 2019-08-21 | 2019-12-06 | Zhejiang Lab | Multidimensional mimicry voting method based on process elements |
CN110445803A (en) * | 2019-08-21 | 2019-11-12 | Zhejiang Lab | Smooth traffic migration method for a heterogeneous cloud platform |
CN111324304A (en) * | 2020-02-14 | 2020-06-23 | Xi'an Okayun Data Technology Co., Ltd. | Data protection method and device based on SSD hard disk life prediction |
CN113311990A (en) * | 2020-02-26 | 2021-08-27 | Hangzhou Hikvision Digital Technology Co., Ltd. | Data storage method, device and storage medium |
WO2021170048A1 (en) * | 2020-02-26 | 2021-09-02 | Hangzhou Hikvision Digital Technology Co., Ltd. | Data storage method and apparatus, and storage medium |
CN115061641A (en) * | 2022-08-16 | 2022-09-16 | New H3C Information Technologies Co., Ltd. | Disk fault processing method, device, equipment and storage medium |
CN115061641B (en) * | 2022-08-16 | 2022-11-25 | New H3C Information Technologies Co., Ltd. | Disk fault processing method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102521058A (en) | Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group | |
US8117496B2 (en) | Detecting and recovering from silent data errors in application cloning systems | |
US9189311B2 (en) | Rebuilding a storage array | |
US7640452B2 (en) | Method for reconstructing data in case of two disk drives of RAID failure and system therefor | |
US8554734B1 (en) | Continuous data protection journaling in data storage systems | |
US7565573B2 (en) | Data-duplication control apparatus | |
US6892276B2 (en) | Increased data availability in raid arrays using smart drives | |
CN100426247C (en) | Data recovery method | |
US20110264949A1 (en) | Disk array | |
US7975171B2 (en) | Automated file recovery based on subsystem error detection results | |
CN102110154B (en) | File redundancy storage method in cluster file system | |
CN102929750A (en) | Nonvolatile media dirty region tracking | |
US11347600B2 (en) | Database transaction log migration | |
KR20060043873A (en) | System and method for drive recovery following a drive failure | |
US10503620B1 (en) | Parity log with delta bitmap | |
US11403176B2 (en) | Database read cache optimization | |
US20120243395A1 (en) | Method and System for Data Replication | |
US8135928B2 (en) | Self-adjusting change tracking for fast resynchronization | |
CN110058965A (en) | Data rebuilding method and device in a storage system |
US11093339B2 (en) | Storage utilizing a distributed cache chain and a checkpoint drive in response to a data drive corruption | |
CN113377569A (en) | Method, apparatus and computer program product for recovering data | |
US7529776B2 (en) | Multiple copy track stage recovery in a data storage system | |
CN106933707B (en) | Data recovery method and system of data storage device based on raid technology | |
CN106527983B (en) | Data storage method and disk array | |
US10664346B2 (en) | Parity log with by-pass |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C12 | Rejection of a patent application after its publication | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2012-06-27 |