CN103761058A - RAID1 and RAID4 hybrid structure network storage system and method - Google Patents

RAID1 and RAID4 hybrid structure network storage system and method

Info

Publication number
CN103761058A
CN103761058A (application CN201410033455.3A)
Authority
CN
China
Prior art keywords
data, check, node, data node, write operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410033455.3A
Other languages
Chinese (zh)
Other versions
CN103761058B (en)
Inventor
许鲁 (Xu Lu)
郭明阳 (Guo Mingyang)
杨琳 (Yang Lin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIANJIN BRANCH BLUE WHALE INFORMATION TECHNOLOGY Co Ltd
Original Assignee
TIANJIN BRANCH BLUE WHALE INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIANJIN BRANCH BLUE WHALE INFORMATION TECHNOLOGY Co Ltd
Priority to CN201410033455.3A
Publication of CN103761058A
Application granted
Publication of CN103761058B
Legal status: Expired - Fee Related (current); anticipated expiration noted.

Abstract

The invention discloses a RAID1 and RAID4 hybrid structure network storage system and method. The system comprises a plurality of logically independent redundancy groups. Each redundancy group has a RAID1 and RAID4 hybrid structure and comprises a plurality of data nodes and one check node. A data node caches write-operation data blocks and sends them to the check node in RAID1 mode to form a RAID1 mirror. The check node caches the write-operation data blocks, forms a RAID4 redundancy stripe together with the data blocks at the same logical address on the other data nodes of the redundancy group, performs asynchronous redundancy computation, stores the check information, and then sends a data-migration command. The data node, on receiving the data-migration command, migrates the write-operation data blocks to the RAID4 data set. The RAID1 and RAID4 hybrid structure network storage system and method effectively solve the problem that, under centralized redundancy management, the control node becomes a performance bottleneck.

Description

RAID1 and RAID4 hybrid structure network storage system and method
Technical field
The present invention relates to the field of computer storage, and in particular to a RAID1 and RAID4 hybrid structure network storage system and method.
Background technology
Redundant Arrays of Independent Disks (RAID) technology is a storage technique that provides enhanced redundancy, capacity, and performance, with strong manageability, reliability, and availability. Through redundancy computation, RAID can reduce the capacity overhead of a system while still meeting its reliability requirements. Research on applying RAID to network storage systems has evolved from centralized to distributed control, and from a single RAID mode to hybrid RAID modes. In large-scale network storage systems, the impact of a complex network environment on performance must be taken into account, so some storage systems adopt hybrid RAID modes.
Among such systems, AutoRAID places the most recently used data on high-performance disks stored as RAID1 and places less frequently used data on economical disks stored as RAID5. Because the RAID1 and RAID5 data are stored separately, migrating bulk data from RAID1 to RAID5 in a network environment consumes enormous network bandwidth and disk bandwidth, so AutoRAID is unsuitable for distributed environments. DPGADR adopts a structure similar to AutoRAID and improves network storage performance by replicating data and delaying the generation of check blocks; however, because its RAID5 portion is stored in degraded mode, the read/write performance of cold data is poor. Other network storage systems centralize redundancy at the back end: since the redundancy-management node itself stores no data, both data and check information reside on the storage nodes, so reading and writing them consumes enormous network bandwidth and increases the difficulty of managing data and check information on the storage nodes; moreover, if the network between the application server and a node fails, data read and write operations cannot proceed normally.
Summary of the invention
In view of this, it is necessary to provide a RAID1 and RAID4 hybrid structure network storage system and method that address the performance bottleneck of centralized redundancy management, as well as the problem that data cannot be accessed normally when a node fails.
A RAID1 and RAID4 hybrid structure network storage system, connected to an application server to carry out data write or read operations, comprises a plurality of logically independent redundancy groups, wherein:
The structure of each redundancy group is a hybrid of RAID1 and RAID4;
Each redundancy group comprises a plurality of data nodes and one check node:
The data node is configured, when the redundancy group receives a write request from the application server, to cache the data block of the write operation, and to send the data block to the check node in RAID1 mode, forming a RAID1 mirror;
The check node is configured to cache the write-operation data block, to form a RAID4 redundancy stripe together with the data blocks at the same logical address on the other data nodes of the redundancy group, to perform asynchronous redundancy computation, to store the check information after the computation completes, and, once the asynchronous redundancy computation and the check-information update have been stored, to send a data-migration command to the data node;
The data node is further configured, after receiving the data-migration command sent by the check node, to migrate the corresponding write-operation data block to the RAID4 data set.
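The write path above (cache at the data node, RAID1 mirror to the check node, asynchronous RAID4 parity at the check node, then migration into the RAID4 data set) can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; all class and method names, and the use of XOR over whole byte blocks, are assumptions.

```python
# Minimal sketch of the write path: RAID1 mirror, asynchronous RAID4
# parity, then migration. All names are illustrative assumptions.
from functools import reduce

class CheckNode:
    def __init__(self):
        self.cache = {}      # logical address -> mirrored block
        self.parity = {}     # logical address -> stored check information

    def mirror_write(self, addr, block):
        """RAID1 step: cache the mirrored copy of a data node's block."""
        self.cache[addr] = block

    def async_redundancy(self, addr, peer_blocks):
        """RAID4 step: XOR the mirrored block with the blocks at the same
        logical address on the other data nodes, store the check
        information, then signal migration back to the data node."""
        stripe = [self.cache[addr]] + peer_blocks
        self.parity[addr] = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripe)
        return "MIGRATE"     # the data-migration command

class DataNode:
    def __init__(self, check_node):
        self.check_node = check_node
        self.cache = {}      # write cache (first cache unit)
        self.raid4_set = {}  # RAID4 data set (first logical volume)

    def write(self, addr, block):
        self.cache[addr] = block                   # cache locally
        self.check_node.mirror_write(addr, block)  # RAID1 mirror
        return "ACK"                               # confirm to the server

    def on_migrate(self, addr):
        # Move the write-operation data block into the RAID4 data set.
        self.raid4_set[addr] = self.cache.pop(addr)
```

Because the acknowledgement is returned as soon as the block is cached and mirrored, the parity computation and the migration are decoupled from the foreground write, which is the point of the asynchronous design.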
In one embodiment, when the redundancy group receives a read request from the application server, the data block of the read operation is read from the data node and sent to the application server.
Preferably, the redundancy group is a RAID1 and RAID4 hybrid structure with hierarchical storage and multiple caches;
The data node comprises a first data transceiver module, a first cache unit, and a first logical volume, wherein:
The first data transceiver module receives the write or read requests of the application server; on receiving a write request, it writes the data block of the write operation to the first cache unit and, in RAID1 mode, writes the data block to the check node; on receiving a read request, it sends the data block of the read operation to the application server;
The first cache unit caches the data blocks of write or read operations and, before the check node performs the asynchronous redundancy computation, independently flushes the cached write-operation data blocks down to the first logical volume according to the cache policy;
The first logical volume stores the write-operation data blocks flushed down independently from the first cache unit and, after receiving the data-migration command sent by the check node, migrates the corresponding write-operation data blocks to the RAID4 data set;
The check node comprises a second cache unit, a second redundancy computation unit, and a second check volume, wherein:
The second cache unit caches the write-operation data blocks sent by the data nodes;
The second redundancy computation unit forms a RAID4 redundancy stripe from the write-operation data blocks sent to the second cache unit together with the data blocks at the same logical address on the other data nodes of the redundancy group, performs the asynchronous redundancy computation, writes the check information into the second check volume after the computation completes, and sends the data-migration command to the first logical volume;
The second check volume stores the check information.
More preferably, the system also comprises a degradation processing module and a path conversion module, wherein:
The degradation processing module, when a data node or the check node of a redundancy group fails, controls the redundancy group to enter a degraded state and to perform degraded write or read operations;
The path conversion module, when a data node fails, switches the data access path between the application server and the redundancy group from the data node to the check node, so that write or read requests are issued to the check node; and, after the data node rejoins and its data reconstruction completes, switches the data access path back from the check node to the data node.
In one embodiment, the degradation processing module comprises a first degradation processing unit, wherein:
The first degradation processing unit, while a data node has failed or its data reconstruction is incomplete, switches the data access path from the data node to the check node and controls the check node to perform the degraded write or read operations;
The check node also comprises a second data transceiver module, wherein:
The second data transceiver module, while a data node has failed or its data reconstruction is incomplete, receives the write or read requests of the application server; on receiving a write request, it writes the data block of the write operation to the second cache unit in degraded RAID1 mode; on receiving a read request, it sends the data block of the read operation to the application server;
The second redundancy computation unit, while a data node has failed or its data reconstruction is incomplete, follows the RAID4 degraded processing mode: it forms a RAID4 redundancy stripe from the write-operation data block in the second cache unit together with the data blocks at the same logical address on the other data nodes of the redundancy group, performs the asynchronous redundancy computation, and writes the check information into the second check volume of the check node after the computation completes;
The degradation processing module also comprises a second degradation processing unit and a second degraded-redundancy processing unit, wherein:
The second degradation processing unit, while the check node has failed or its check-information reconstruction is incomplete, controls the data nodes to perform RAID1 degraded write or read operations;
The second degraded-redundancy processing unit, while the check node has failed or its check-information reconstruction is incomplete, writes the write-operation data blocks sent by the application server into the first cache unit and then returns write-completion information.
Accordingly, based on the principle of the above RAID1 and RAID4 hybrid structure network storage system, the present invention also provides a RAID1 and RAID4 hybrid structure network storage method, comprising the following steps:
During a data write or read operation with a connected application server, when a data node of a redundancy group receives a write request from the application server, it caches the data block of the write operation; and, in RAID1 mode, sends a backup copy of the data block to the check node of the redundancy group, forming a RAID1 mirror;
The check node caches the write-operation data block and, together with the data blocks at the same logical address on the other data nodes of the redundancy group, forms a RAID4 redundancy stripe, performs asynchronous redundancy computation, stores the check information after the computation completes, and, once the asynchronous redundancy computation and the check-information update have been stored, sends a data-migration command to the data node;
After receiving the data-migration command sent by the check node, the data node migrates the corresponding write-operation data block to the RAID4 data set;
Here, the redundancy groups are a plurality of logically independent redundancy groups of RAID1 and RAID4 hybrid structure, each with a plurality of data nodes and one check node.
In one embodiment, the method also comprises the following step:
When the redundancy group receives a read request from the application server, the data block of the read operation is read from the data node and sent to the application server.
In one embodiment, the method also comprises a write operation and a read operation in the RAID normal state, wherein:
The write operation in the RAID normal state comprises the following steps:
The data node receives the write request of the application server;
The data block of the write operation is stored in the first cache unit while being forwarded to the check node;
The data node sends completion confirmation to the application server;
The read operation in the RAID normal state comprises the following steps:
The data node receives the read request of the application server;
According to the read request, the first cache unit is checked for the data block of the read operation;
If the data block of the read operation is present in the first cache unit, it is sent to the application server; if it is not present, it is read from the first logical volume of the data node and then returned to the application server;
Forwarding the data block of the write operation to the check node comprises the following steps:
The data block of the write operation is stored in the second cache unit;
The second cache unit merges the write-operation data blocks in the spatial dimension;
According to the cache policy, the write-operation data blocks that meet the condition for asynchronous redundancy computation are written back from the cache.
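The forwarding steps above (cache in the second cache unit, merge in the spatial dimension, write back once the asynchronous-redundancy condition is met) can be sketched as follows. The class name, the stripe-keyed layout, and the "all stripe members present" condition are illustrative assumptions, since the patent does not fix a concrete cache policy.

```python
# Hypothetical sketch of the second cache unit's behaviour: writes are
# merged by stripe and data-node slot (the "spatial dimension"), and a
# stripe is written back for asynchronous redundancy computation only
# once every member of the stripe has arrived.
class SecondCacheUnit:
    def __init__(self, stripe_width=3):
        self.stripe_width = stripe_width
        self.stripes = {}   # stripe id -> {data-node id: latest block}

    def cache_write(self, stripe_id, node_id, block):
        # Spatial merging: a newer write to the same (stripe, node) slot
        # simply overwrites the older cached block.
        self.stripes.setdefault(stripe_id, {})[node_id] = block

    def ready_for_redundancy(self, stripe_id):
        # Assumed cache policy: the stripe qualifies for asynchronous
        # redundancy computation once all data nodes have contributed.
        return len(self.stripes.get(stripe_id, {})) == self.stripe_width

    def write_back(self, stripe_id):
        # Hand the merged stripe to the redundancy computation unit.
        assert self.ready_for_redundancy(stripe_id)
        return self.stripes.pop(stripe_id)
```

Merging repeated writes before the parity pass is what lets the check node avoid recomputing the stripe for every foreground write.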
Preferably, the method also comprises a write operation and a read operation in the RAID degraded state, wherein:
The write operation in the RAID degraded state comprises the following steps:
When a data node of the redundancy group has failed or its data reconstruction is incomplete, the check node receives the write request issued by the application server; after writing the data block of the write operation into its second cache unit, the check node returns confirmation to the application server;
When the check node has failed or its check-information reconstruction is incomplete, the data node receives the write request issued by the application server; after writing the data block of the write operation into its first cache unit, the data node returns write-completion information to the application server;
The read operation in the RAID degraded state comprises the following steps:
When a data node has failed or its data reconstruction is incomplete, the check node receives the read request issued by the application server and checks its second cache unit; if the data block of the read operation is present in the second cache unit, the check node returns it to the application server;
When the check node has failed or its check-information reconstruction is incomplete, the data node receives the read request issued by the application server and returns the data block of the read operation to the application server;
If the data block of the read operation is not present in the second cache unit, the check node sends requests to the other data nodes of the redundancy group to read the data blocks at the same logical address;
The check node reads the check information corresponding to the data block of the read operation from the second check volume and returns read-check-completion information;
Each data node reads the data block of the read operation from its first logical volume and returns it to the check node;
Once the data blocks from all the other data nodes of the redundancy group have been returned to the check node, the check node reconstructs the data block of the read operation on the failed data node according to the RAID4 recovery algorithm.
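The final step above relies on the standard RAID4 recovery property: the missing block is the XOR of the surviving data nodes' blocks at the same logical address with the stored check information. A minimal sketch, with an illustrative function name:

```python
# RAID4 degraded-read recovery: re-derive the failed data node's block
# from the surviving blocks and the check information (parity).
def raid4_recover(surviving_blocks, check_block):
    missing = check_block
    for blk in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, blk))
    return missing
```

The round trip works because XOR is associative, commutative, and its own inverse, so folding the surviving blocks back into the parity cancels them out and leaves the lost block.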
More preferably, the method also comprises a data-reconstruction step for the data node;
The data-reconstruction step of the data node comprises a RAID1 data-recovery step and a first RAID4 synchronous-recovery step, wherein:
The RAID1 data-recovery step comprises:
The check node collects all data sets related to the failed data and, according to those data sets, sets a RAID1 first data bitmap on the check node;
All data covered by the RAID1 first data bitmap are recovered;
The data node stores the write-operation data blocks from the check node into the first cache unit and returns write-completion information;
The first RAID4 synchronous-recovery step comprises:
User access to the check node is blocked, and a second data bitmap is obtained from the check node, recording the failed data for which the asynchronous redundancy computation has not yet been performed;
The data node migrates the data covered by the second data bitmap to the RAID4 data set;
User access to the check node is restored, and the check node rebuilds the RAID4;
The second data bitmap is set in the RAID4;
The RAID4 synchronizes the check information according to the second data bitmap until the check-information synchronization of the RAID4 completes;
And,
The method also comprises a check-information reconstruction step for the check node;
The check-information reconstruction step of the check node comprises a data-migration step for all data nodes of the redundancy group and a second RAID4 synchronous-recovery step, wherein:
The data-migration step of all data nodes of the redundancy group comprises:
The RAID1 first data bitmaps on all data nodes of the redundancy group are marked clean;
All data nodes of the redundancy group rejoin the devices they export to the check node into the RAID1 in re-add mode;
User access to all data nodes of the redundancy group is blocked;
All data nodes of the redundancy group migrate the data for which the asynchronous redundancy computation has not been performed to the RAID4 data set, after which user access to all data nodes of the redundancy group is restored;
The second RAID4 synchronous-recovery step comprises:
The data information about blocks that have undergone the asynchronous redundancy computation is collected from all data nodes of the redundancy group and merged to form a union;
According to the union, the RAID4 reconstruction bitmap on the check node is set;
The check node sends requests to all data nodes of the redundancy group to read the data blocks that have undergone the asynchronous redundancy computation;
After receiving the request, each data node returns those data blocks to the check node;
According to the returned data blocks, the check node performs the RAID4 redundancy computation and stores the resulting check information in the second check volume;
All check data covered by the RAID4 reconstruction bitmap are marked reconstructed.
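The second RAID4 synchronous-recovery step can be sketched as follows: the per-node reports of stripes that already underwent asynchronous redundancy computation are merged into a union, the union drives the reconstruction bitmap, and the check information is recomputed stripe by stripe from blocks fetched back from the data nodes. The names and the callback-based fetch are illustrative assumptions.

```python
# Sketch of check-information reconstruction: union the per-node
# bitmaps, then recompute parity for every stripe in the union.
from functools import reduce

def rebuild_check_info(node_bitmaps, read_stripe):
    """node_bitmaps: per-node sets of stripe ids whose blocks already
    underwent asynchronous redundancy computation.
    read_stripe(stripe_id): returns that stripe's data blocks,
    as fetched back from the data nodes."""
    union = set().union(*node_bitmaps)        # merged data information
    rebuilt = {}
    for stripe_id in union:                   # reconstruction bitmap
        blocks = read_stripe(stripe_id)
        rebuilt[stripe_id] = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
    return rebuilt                            # goes to the check volume
```

Driving the rebuild from the union rather than the whole address space means only stripes whose check information could be stale are recomputed.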
The RAID1 and RAID4 hybrid structure network storage system and method provided by the present invention comprise a plurality of logically independent redundancy groups, each based on a hybrid structure of RAID1 and RAID4, and carry out data write or read operations with the application server through the plurality of data nodes and the check node of each redundancy group. When a redundancy group receives a write request from the application server, the data node caches the data block of the write operation and, in RAID1 mode, sends it to the check node to form a RAID1 mirror. After caching the data block sent over by the data node, the check node performs the asynchronous RAID4 redundancy computation on that data block together with the data blocks at the same logical address on the other data nodes of the redundancy group, and stores the computed check information. When a data node fails or its data reconstruction is incomplete, the data block of a write operation can be reconstructed from this check information, which effectively solves the problem that data cannot be accessed normally when a node fails.
Brief description of the drawings
Fig. 1 is a structural diagram of one embodiment of the RAID1 and RAID4 hybrid structure network storage system;
Fig. 2 is a structural diagram of a redundancy group in one embodiment of the RAID1 and RAID4 hybrid structure network storage system;
Fig. 3 is a diagram of another embodiment of the RAID1 and RAID4 hybrid structure network storage system;
Fig. 4 is a flow chart of one embodiment of the RAID1 and RAID4 hybrid structure network storage method;
Fig. 5 is a diagram of one embodiment of the write operation in the RAID normal state of the RAID1 and RAID4 hybrid structure network storage method;
Fig. 6 is a diagram of one embodiment of the read operation in the RAID normal state of the method;
Fig. 7 is a diagram of one embodiment of the write operation in the RAID degraded state of the method;
Fig. 8 is a diagram of another embodiment of the write operation in the RAID degraded state of the method;
Fig. 9 is a diagram of one embodiment of the read operation in the RAID degraded state of the method;
Fig. 10 is a diagram of another embodiment of the read operation in the RAID degraded state of the method;
Fig. 11 is a diagram of one embodiment of the data reconstruction of a data node in the method;
Fig. 12 is a diagram of one embodiment of the check-information reconstruction of the check node in the method.
Detailed description of the embodiments
To make the technical solution of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1 to Fig. 3, a RAID1 and RAID4 hybrid structure network storage system 100, connected to an application server 200 to carry out data write or read operations, comprises a plurality of logically independent redundancy groups 110, wherein:
The structure of each redundancy group 110 is a hybrid of RAID1 and RAID4;
Each redundancy group 110 comprises a plurality of data nodes 111 and one check node 112:
The data node 111 is configured, when the redundancy group 110 receives a write request from the application server 200, to cache the data block of the write operation, and to send the data block to the check node 112 in RAID1 mode, forming a RAID1 mirror;
The check node 112 is configured to cache the write-operation data block, to form a RAID4 redundancy stripe together with the data blocks at the same logical address on the other data nodes 111 of the redundancy group 110, to perform asynchronous redundancy computation, to store the check information after the computation completes, and, once the asynchronous redundancy computation and the check-information update have been stored, to send a data-migration command to the data node 111;
The data node 111 is further configured, after receiving the data-migration command sent by the check node 112, to migrate the data block of the corresponding write request to the RAID4 data set.
In the RAID1 and RAID4 hybrid structure network storage system 100 provided by the present invention, a plurality of logically independent redundancy groups 110 are set up; the structure of each redundancy group 110 is a hybrid of RAID1 and RAID4, comprising a plurality of data nodes and one check node. When any redundancy group 110 in the system 100 receives a write request sent by the application server 200, the data node 111 corresponding to the write operation caches the data block of the write operation and, in RAID1 mode, sends the data block to the check node 112 of the redundancy group 110, forming a RAID1 mirror. The check node 112 forms a RAID4 redundancy stripe from the mirrored write-operation data block and the data blocks at the same logical address on the other data nodes of the redundancy group 110, performs the asynchronous redundancy computation, and stores the computed check information. As a result, when a data read operation must be performed while a data node 111 has failed, the data block of the read operation can be reconstructed from the computed check information, which effectively solves the problem that data cannot be accessed normally when a node fails.
Here, RAID1 is known as disk mirroring: its principle is to mirror the data of one disk onto another disk, so that while data is being written to one disk, an image is generated on another, idle disk; this guarantees the reliability and recoverability of the data to the greatest extent without affecting disk performance. By adopting RAID1 mirroring, the system 100 holds two copies of each write-operation data block, so that even if one node fails, the remaining copy still guarantees a correct data block. RAID4 stripes data into blocks distributed over different disks, storing the check information in a smaller redundant space. By adopting RAID4 redundancy, the lost data blocks of any single failed node can still be recovered from the data blocks and check information of the other nodes, guaranteeing the reliability of the RAID1 and RAID4 hybrid structure network storage system 100.
It should be noted here that the asynchronous redundancy computation of RAID4 specifically refers to performing an XOR computation over the mirror data of all data nodes 111 within a redundancy group 110 (that is, the write-operation data blocks cached on all data nodes 111); this computation is asynchronous with respect to the data access flow.
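A minimal sketch of this XOR computation follows (the function name is illustrative). Because XOR is its own inverse, the same operation both produces the check information and re-derives any single missing block from the parity and the remaining blocks.

```python
# XOR of the cached mirror blocks of all data nodes in a redundancy
# group: this is the asynchronous redundancy computation of RAID4.
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
```

Running it asynchronously, after the write has already been acknowledged, keeps the parity cost off the foreground data-access path.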
In one embodiment, when the redundancy group 110 receives a read request from the application server 200, the data block of the read operation is read from the data node 111 and sent to the application server 200, realizing the read operation of data between the system 100 and the application server 200.
Preferably, as a specific embodiment of the RAID1 and RAID4 hybrid structure network storage system 100 of the present invention, the redundancy group 110 is a RAID1 and RAID4 hybrid structure with hierarchical storage and multiple caches;
The data node 111 comprises a first data transceiver module 1110, a first cache unit 1111, and a first logical volume 1112, wherein:
The first data transceiver module 1110 receives the write or read requests of the application server 200; on receiving a write request, it writes the data block of the write operation to the first cache unit 1111 and, in RAID1 mode, writes the data block to the check node 112; on receiving a read request, it sends the data block of the read operation to the application server 200;
The first cache unit 1111 caches the data blocks of write or read operations and, before the check node 112 performs the asynchronous redundancy computation, independently flushes the cached write-operation data blocks down to the first logical volume 1112 according to the cache policy;
The first logical volume 1112 stores the write-operation data blocks flushed down independently from the first cache unit 1111 and, after receiving the data-migration command sent by the check node 112, migrates the corresponding write-operation data blocks to the RAID4 data set;
The check node 112 comprises a second cache unit 1120, a second redundancy computation unit 1121, and a second check volume 1122, wherein:
The second cache unit 1120 caches the write-operation data blocks sent by the data nodes 111;
The second redundancy computation unit 1121 forms a RAID4 redundancy stripe from the write-operation data blocks sent to the second cache unit 1120 together with the data blocks at the same logical address on the other data nodes 111 of the redundancy group 110, performs the asynchronous redundancy computation, writes the check information into the second check volume 1122 after the computation completes, and sends the data-migration command to the first logical volume 1112;
The second check volume 1122 stores the check information.
It should be noted that, the first buffer unit 1111 and the second buffer unit 1120 are the multilayer buffer structure of memory cache and disk buffering composition, and such multilayer buffer structure can reduce the Disk bandwidth consumption that half band write operation brings; Meanwhile, by RAID1 and the RAID4 mixed structure of many buffer memorys that redundancy group 110 is multilayered memory are set, data are carried out to layering storage, by RAID1 and RAID4 cooperation, to the data of different accumulation layers, provide Reliability Assurance.
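The two-level cache structure described above (a small memory cache backed by a larger disk cache) can be sketched as follows. The class, its capacity parameter, and the LRU demotion policy are illustrative assumptions; the patent does not specify the eviction policy.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Memory cache over a disk-cache layer; least recently used
    memory blocks are demoted to the disk layer when full."""

    def __init__(self, mem_capacity):
        self.mem = OrderedDict()   # small, fast memory cache (LRU order)
        self.disk = {}             # larger, slower disk-cache layer
        self.mem_capacity = mem_capacity

    def write(self, addr, block):
        self.mem[addr] = block
        self.mem.move_to_end(addr)
        if len(self.mem) > self.mem_capacity:
            # demote the least recently used block to the disk layer
            old_addr, old_block = self.mem.popitem(last=False)
            self.disk[old_addr] = old_block

    def read(self, addr):
        if addr in self.mem:
            self.mem.move_to_end(addr)
            return self.mem[addr]
        return self.disk.get(addr)  # fall through to the disk layer

cache = TwoLevelCache(mem_capacity=2)
cache.write(0, b"a"); cache.write(1, b"b"); cache.write(2, b"c")
assert cache.read(0) == b"a"   # block 0 was demoted to the disk layer
```

Buffering half-stripe writes in such a structure lets the check node accumulate blocks until a full stripe is available, which is how the disk bandwidth cost of half-stripe writes is reduced.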
More preferably, the system further comprises a degradation processing module 120 and a path switching module 130, wherein:
The degradation processing module 120 is configured to control the redundancy group 110 to enter a degraded state and perform degraded write or read operations when a data node 111 or the check node 112 of the redundancy group 110 fails;
The path switching module 130 is configured to, upon failure of a data node 111, switch the data access path between the application server 200 and the redundancy group 110 from the data node 111 to the check node 112, so that write or read operation requests are issued to the check node 112; and, after the data node 111 rejoins and its data reconstruction completes, to switch the data access path back from the check node 112 to the data node 111. This multipath access mode ensures that when any data node 111 or check node 112 in the RAID1 and RAID4 hybrid structure network storage system fails, or a network failure occurs, data can still be accessed normally, guaranteeing the continuity of data access.
As an embodiment, the degradation processing module 120 comprises a first degradation processing unit 121, wherein:
The first degradation processing unit 121 is configured to, when a data node 111 has failed or its data reconstruction is incomplete and the path switching module 130 has switched the access path to the check node 112, control the check node 112 to perform degraded write or read operations;
The check node 112 further comprises a second data transceiver module 1123, wherein:
The second data transceiver module 1123 is configured to receive write or read operation requests from the application server 200 when a data node 111 has failed or its data reconstruction is incomplete; upon receiving write-operation data, it writes the data block to the second cache unit 1120 in degraded RAID1 mode; upon receiving a read operation request, it sends the requested data block to the application server 200;
The second redundancy computation unit 1121 is further configured to, when a data node 111 has failed or its data reconstruction is incomplete, combine, according to the RAID4 degraded processing mode, the write-operation data blocks in the second cache unit 1120 with the data blocks of the same logical address on the other data nodes 111 of the corresponding redundancy group 110 to form a RAID4 redundancy stripe, perform the asynchronous redundancy computation, and after the computation completes, write the check information into the second check volume 1122 of the check node 112.
It is worth explaining that the degradation processing module 120 further comprises a second degradation processing unit 122 and a second degradation redundancy processing unit 123, wherein:
The second degradation processing unit 122 is configured to control the data nodes 111 to perform degraded RAID1 write or read operations when the check node 112 has failed or its check information reconstruction is incomplete;
The second degradation redundancy processing unit 123 is configured to, when the check node 112 has failed or its check information reconstruction is incomplete, write the write-operation data blocks sent by the application server 200 into the first cache unit 1111 and then return write-request completion information.
Accordingly, based on the principle of the above RAID1 and RAID4 hybrid structure network storage system 100, the present invention also provides a RAID1 and RAID4 hybrid structure network storage method. Since the design principle of this method is essentially identical to that of the RAID1 and RAID4 hybrid structure network storage system 100, repeated parts are not described again.
Referring to Fig. 4, a RAID1 and RAID4 hybrid structure network storage method comprises the following steps:
S100: during a write or read operation of data performed in connection with the application server, when a data node of the redundancy group receives a write operation request from the application server, it caches the data block of the write operation and, in RAID1 mode, sends the data block to the check node of the corresponding redundancy group to form a RAID1 mirror;
S200: the check node caches the write-operation data block and combines it with the data blocks of the same logical address on the other data nodes of the corresponding redundancy group to form a RAID4 redundancy stripe, performs the asynchronous redundancy computation, stores the check information after the computation completes, and, after the asynchronous redundancy computation and the check information update are stored, sends a data migration command to the data node;
S300: after receiving the data migration command sent by the check node, the data node migrates the data block of the corresponding write operation request to the RAID4 data set;
Here, the redundancy groups are a plurality of logically independent redundancy groups of RAID1 and RAID4 hybrid structure, each having a plurality of data nodes and one check node.
It is worth explaining that the method further comprises the following step:
S400: when the redundancy group receives a read operation request from the application server, it reads the requested data block from a data node and sends it to the application server. Thus, while write operations of data between the redundancy group and the application server are guaranteed, read operations of data are also realized.
Referring to Fig. 5 and Fig. 6, as a specific embodiment of the RAID1 and RAID4 hybrid structure network storage method of the present invention, the method further comprises a write operation in the RAID normal state and a read operation in the RAID normal state, wherein:
Referring to Fig. 5, the write operation in the RAID normal state comprises the following steps:
S210: a data node receives a write operation request from the application server;
S230: the data block of the write operation is stored in the first cache unit and simultaneously forwarded to the check node;
S250: the data node sends a completion confirmation to the application server;
It is worth explaining that storing the write-operation data block in the first cache unit while forwarding it to the check node specifically comprises: S231, storing the write-operation data in the second cache unit of the check node; S232, the second cache unit merging the write-operation data blocks in the spatial dimension; and S233, performing cache write-back, according to the cache policy, on those write-operation data blocks that satisfy the asynchronous redundancy computation condition. The RAID4 redundancy computation is performed during cache write-back, and the redundancy computation executes asynchronously with respect to the application write requests. It should be noted that cache write-back includes full-stripe writes and half-stripe writes, where a full-stripe write refers to writing all the data blocks required for one RAID4 redundancy computation. S2330: when the written-back data blocks form a full stripe, the new check information is computed directly from the full-stripe data blocks by the RAID4 redundancy computation; when the write-back is a half-stripe write, the relevant old data blocks are read, and the RAID4 redundancy computation is performed on the old data blocks together with the new data blocks to obtain the new check information. S2331: the newly computed check information is stored in the second check volume of the check node, and write-request completion is returned. Thereafter, S2332: the check node sends a data migration command to the data node; S2333: upon receiving this command, the data node migrates the write-operation data to the RAID4 data set and sends a confirmation to the check node. After the check node receives the confirmation, the update of the redundancy computation and check information is complete.
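The full-stripe versus half-stripe distinction in step S2330 can be sketched as follows. This is an illustrative model only; the function names, the stripe representation as a column-indexed dict, and the `read_old_block` callback standing in for reads from the other data nodes are all assumptions.

```python
def xor_blocks(blocks):
    """XOR equal-sized byte blocks into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def new_parity(stripe_width, new_blocks, read_old_block):
    """new_blocks: {column: bytes} of written-back data.
    read_old_block(column) fetches the old block for a column
    not covered by the write (the half-stripe case)."""
    if len(new_blocks) == stripe_width:
        # full stripe: compute parity directly from the new blocks
        return xor_blocks(list(new_blocks.values()))
    # half stripe: read the old blocks of the untouched columns first
    missing = [read_old_block(c) for c in range(stripe_width)
               if c not in new_blocks]
    return xor_blocks(list(new_blocks.values()) + missing)

old = {0: b"\x01", 1: b"\x02", 2: b"\x04"}
# half-stripe write touching only column 0:
p = new_parity(3, {0: b"\x08"}, lambda c: old[c])
assert p == bytes([0x08 ^ 0x02 ^ 0x04])
```

The full-stripe path avoids the extra reads entirely, which is why the multi-level caches try to accumulate writes into full stripes before write-back.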
Referring to Fig. 6, the read operation in the RAID normal state comprises the following steps:
S220: a data node receives a read operation request from the application server;
S240: according to the read operation request, it is checked whether the requested data block exists in the first cache unit;
S260: if the requested data block exists in the first cache unit, it is sent to the application server; if it does not, it is read from the first logical volume of the data node and then returned to the application server.
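The cache-first read path of steps S240 and S260 can be sketched as follows; the dict-based stand-ins for the first cache unit and the first logical volume are illustrative assumptions.

```python
def read_block(addr, first_cache, first_logical_volume):
    """Serve a read from the first cache unit when possible,
    otherwise fall back to the first logical volume."""
    if addr in first_cache:               # cache hit (S260, first branch)
        return first_cache[addr]
    return first_logical_volume[addr]     # cache miss: read the volume

cache_unit = {7: b"hot"}
volume = {7: b"stale", 9: b"cold"}
assert read_block(7, cache_unit, volume) == b"hot"
assert read_block(9, cache_unit, volume) == b"cold"
```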
Referring to Fig. 7 to Fig. 10, as a specific embodiment of the RAID1 and RAID4 hybrid structure network storage method of the present invention, the method further comprises a write operation in the RAID degraded state and a read operation in the RAID degraded state, wherein:
Referring to Fig. 7 and Fig. 8, the write operation in the RAID degraded state comprises the following steps:
Referring to Fig. 7, when a data node of the redundancy group has failed or its data reconstruction is incomplete: S310, the check node receives the write operation request issued by the application server; S330, the check node writes the data block of the write operation into its second cache unit; S350, a confirmation is returned to the application server;
That is, when any data node in the redundancy group has failed or its data reconstruction is incomplete, the data access path between the application server and the redundancy group is switched from the data node to the check node; after the check node receives the write operation request from the application server, it stores the write-operation data block in the second cache unit. It should be noted that the cache write-back steps of the write operation in the RAID degraded state are the same as those in the RAID normal state, with one difference: after the newly computed check information is stored in the second check volume of the check node, the check node no longer sends a data migration command to the data node, because the failed or still-reconstructing data node cannot receive it. In this way, when a data node has failed or its data reconstruction is incomplete, the data access path is switched to the check node, which performs the degraded RAID write operation and completes the user-level write request, effectively solving the problem that data cannot be accessed normally when a data node fails.
Similarly, referring to Fig. 8, when the check node has failed or its check information reconstruction is incomplete: S320, a data node receives the write operation request issued by the application server; S340, the data node writes the data block of the write operation into its first cache unit; S360, write-request completion information is returned to the application server. That is, when the check node has failed or its check information reconstruction is incomplete, the data node receives the write operation request from the application server and stores the write-operation data block in its first cache unit, thereby completing the write operation of the data; since the check node has failed, the asynchronous redundancy computation is no longer performed.
Referring to Fig. 9 and Fig. 10, the read operation in the RAID degraded state comprises the following steps:
Referring to Fig. 9, when a data node has failed or its data reconstruction is incomplete: S410, the check node receives the read operation request issued by the application server; S430, the check node checks its second cache unit; S450, if the requested data block exists in the second cache unit, the check node returns it to the application server;
Likewise, when a data node has failed or its data reconstruction is incomplete, the data access path of the application server is switched to the check node, again effectively solving the problem that data cannot be accessed normally upon node failure.
It is worth explaining that when the check node finds that the requested data block does not exist in the second cache unit, the check node reconstructs the data block on the failed data node according to the RAID4 recovery algorithm and returns it to the application server, completing the user-level read request. The specific steps by which the check node reconstructs the data block are: S451, when the requested data block does not exist in the second cache unit, the check node sends a request to the other data nodes of the redundancy group to read the data blocks of the same logical address; S452, the check node reads the check information corresponding to the requested data block from the second check volume and returns read-check completion information; S453, each data node reads the requested data block from its first logical volume and returns it to the check node; S454, after the data blocks from all the other data nodes of the redundancy group have been returned, the check node reconstructs the requested data block of the failed data node according to the RAID4 recovery algorithm. In this way, the check node reconstructs the data block on the failed data node, so that the application server can still access the data normally when a data node fails.
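The RAID4 recovery algorithm of step S454 can be sketched as follows: the failed node's block is the XOR of the stored check information with the same-address blocks returned by the surviving data nodes. Function and variable names are illustrative assumptions.

```python
def xor_blocks(blocks):
    """XOR equal-sized byte blocks into one block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(check_info, surviving_blocks):
    """Rebuild the failed data node's block for one stripe from the
    check information and the surviving nodes' blocks (S451-S454)."""
    return xor_blocks([check_info] + surviving_blocks)

d0, d1, d2 = b"\x11", b"\x22", b"\x44"
parity = xor_blocks([d0, d1, d2])    # check information in the check volume
# the data node holding d1 has failed; rebuild its block:
assert reconstruct(parity, [d0, d2]) == d1
```

This works because XOR is its own inverse: XORing the parity with every surviving block cancels their contributions and leaves exactly the missing block.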
Referring to Fig. 10, when the check node has failed or its check information reconstruction is incomplete: S420, a data node receives the read operation request issued by the application server; S440, the data node reads the requested data block from its first cache unit; S460, the data node returns the data block to the application server.
Preferably, referring to Fig. 11, as a specific embodiment of the RAID1 and RAID4 hybrid structure network storage method of the present invention, the method further comprises a data reconstruction step for a data node, which comprises a RAID1 data recovery step and a first RAID4 synchronous recovery step, wherein:
The RAID1 data recovery step comprises:
S500: the check node collects all data sets related to the failed data, sets a first RAID1 data bitmap in the check node according to the data sets, and controls the recovery of all data blocks recorded in the first RAID1 data bitmap;
S510: the check node sends the write-operation data blocks to the data node;
S520: the data node stores the write-operation data blocks from the check node in its first cache unit and returns write-request completion information;
After all RAID1 data blocks have been recovered according to the above steps, the first RAID4 synchronous recovery step is performed, which specifically comprises:
S530: user access to the check node is blocked; S540: a second data bitmap is obtained, recording the data on the check node for which the asynchronous redundancy computation corresponding to the failed data has not yet been performed;
S550: the data node migrates the data recorded in the second data bitmap to the RAID4 data set;
S560: user access to the check node is resumed; S570: the check node rebuilds the RAID4;
S580: the second data bitmap is set in the RAID4, and the RAID4 synchronizes the check information according to the second data bitmap until the check information of the RAID4 is fully synchronized.
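The bitmap-driven synchronization of step S580 can be sketched as follows: only the stripes flagged in the second data bitmap have their check information recomputed, so a recovery does not re-scan the whole array. The stripe/bitmap representation and function names here are assumptions for illustration.

```python
def xor_blocks(blocks):
    """XOR equal-sized byte blocks into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def sync_check_info(dirty_bitmap, stripes, check_volume):
    """stripes: {stripe_no: [data blocks]}. Recompute parity only for
    the stripe numbers flagged in dirty_bitmap; others stay untouched."""
    for stripe_no in dirty_bitmap:
        check_volume[stripe_no] = xor_blocks(stripes[stripe_no])
    return check_volume

stripes = {0: [b"\x01", b"\x02"], 1: [b"\x04", b"\x08"]}
check_volume = {0: b"\x03", 1: b"\x00"}   # stripe 1 parity is stale
sync_check_info({1}, stripes, check_volume)
assert check_volume == {0: b"\x03", 1: b"\x0c"}   # only stripe 1 resynced
```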
Referring to Fig. 12, as an embodiment of the RAID1 and RAID4 hybrid structure network storage method of the present invention, the method further comprises a check information reconstruction step for the check node, which comprises a data migration step for all data nodes of the redundancy group and a second RAID4 synchronous recovery step, wherein:
The data migration step for all data nodes of the redundancy group comprises:
S600: the first RAID1 data bitmaps in all data nodes of the redundancy group are controlled to be clean;
S610: the devices that all data nodes of the redundancy group export to the check node are rejoined to the RAID1 in re-add mode;
S620: user access to all data nodes of the redundancy group is blocked;
S630: all data nodes of the redundancy group migrate the data for which the asynchronous redundancy computation has not been performed to the RAID4 data set; S640: user access to all data nodes of the redundancy group is resumed;
After the data migration of all data nodes of the redundancy group completes, the second RAID4 synchronous recovery step is performed, which specifically comprises:
S650: the data information for which the asynchronous redundancy computation has been performed is collected from all data nodes of the redundancy group and merged to form a union;
S650': a RAID4 reconstruction bitmap is set in the check node according to the union;
S660: the check node sends to all data nodes of the redundancy group a request to read the data blocks on which the asynchronous redundancy computation has been performed;
S670: upon receiving the request, all data nodes of the redundancy group return those data blocks to the check node;
S680: the check node performs the RAID4 redundancy computation on the returned data blocks;
S690: the computed check information is stored in the second check volume of the check node;
According to the above steps, all check data recorded in the RAID4 reconstruction bitmap is reconstructed.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present invention. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

1. A RAID1 and RAID4 hybrid structure network storage system for performing write or read operations of data in connection with an application server, characterized in that it comprises a plurality of logically independent redundancy groups:
the structure of each said redundancy group is a hybrid structure of RAID1 and RAID4;
each said redundancy group comprises a plurality of data nodes and one check node:
said data node is configured to cache the data block of a write operation when said redundancy group receives a write operation request from said application server, and to send the data block of said write operation to said check node in RAID1 mode to form a RAID1 mirror;
said check node is configured to cache the data block of said write operation and combine it with the data blocks of the same logical address on the other data nodes of the corresponding said redundancy group to form a RAID4 redundancy stripe, to perform an asynchronous redundancy computation, to store check information after the computation completes, and, after said asynchronous redundancy computation and said check information update are stored, to send a data migration command to said data node;
said data node is further configured to migrate the data block of the corresponding said write operation to a RAID4 data set after receiving said data migration command sent by said check node.
2. The RAID1 and RAID4 hybrid structure network storage system according to claim 1, characterized in that, when said redundancy group receives a read operation request from said application server, the data block of the read operation is read from said data node and sent to said application server.
3. The RAID1 and RAID4 hybrid structure network storage system according to claim 2, characterized in that said redundancy group is a multi-cache, tiered-storage RAID1 and RAID4 hybrid structure;
said data node comprises a first data transceiver module, a first cache unit and a first logical volume, wherein:
said first data transceiver module is configured to receive said write operation request or said read operation request from said application server; upon receiving said write operation request, to write the data block of said write operation to said first cache unit and, in RAID1 mode, to write the data block of said write operation to said check node; and upon receiving said read operation request, to send the data block of said read operation to said application server;
said first cache unit is configured to cache the data block of said write operation or the data block of said read operation and, before said check node performs said asynchronous redundancy computation, to independently flush the cached data blocks of said write operation to said first logical volume according to a cache policy;
said first logical volume is configured to store the data blocks of said write operation independently flushed from said first cache unit and, upon receiving said data migration command sent by said check node, to migrate the data block of the corresponding said write operation to said RAID4 data set;
said check node comprises a second cache unit, a second redundancy computation unit and a second check volume, wherein:
said second cache unit is configured to cache the data blocks of said write operation sent by said data nodes;
said second redundancy computation unit is configured to combine the data blocks of said write operation sent to said second cache unit with the data blocks of the same logical address on the other data nodes of the corresponding said redundancy group to form a RAID4 redundancy stripe and perform said asynchronous redundancy computation; and, after the computation completes, to write said check information into said second check volume and send said data migration command to said first logical volume;
said second check volume is configured to store said check information.
4. The RAID1 and RAID4 hybrid structure network storage system according to claim 3, characterized in that it further comprises a degradation processing module and a path switching module, wherein:
said degradation processing module is configured to control said redundancy group to enter a degraded state and perform degraded write or read operations when one of said data nodes or said check node of said redundancy group fails;
said path switching module is configured to, upon failure of said data node, switch a data access path between said application server and said redundancy group from said data node to said check node, so that said write operation request or said read operation request is issued to said check node; and, after said data node rejoins and its data reconstruction completes, to switch said data access path back from said check node to said data node.
5. The RAID1 and RAID4 hybrid structure network storage system according to claim 4, characterized in that said degradation processing module comprises a first degradation processing unit, wherein:
said first degradation processing unit is configured to, when said data node has failed or its data reconstruction is incomplete, switch said data access path from said data node to said check node and control said check node to perform said degraded write or read operation;
said check node further comprises a second data transceiver module, wherein:
said second data transceiver module is configured to receive said write operation request or said read operation request from said application server when said data node has failed or its data reconstruction is incomplete; upon receiving said write operation request, to write the data block of said write operation to said second cache unit in degraded RAID1 mode; and upon receiving said read operation request, to send the data block of said read operation to said application server;
said second redundancy computation unit is further configured to, when said data node has failed or its data reconstruction is incomplete, combine, according to a RAID4 degraded processing mode, the data blocks of said write operation in said second cache unit with the data blocks of the same logical address on the other data nodes of the corresponding said redundancy group to form a RAID4 redundancy stripe, perform said asynchronous redundancy computation, and after the computation completes, write said check information into said second check volume of said check node;
said degradation processing module further comprises a second degradation processing unit and a second degradation redundancy processing unit, wherein:
said second degradation processing unit is configured to control said data node to perform a degraded RAID1 write or read operation when said check node has failed or its check information reconstruction is incomplete;
said second degradation redundancy processing unit is configured to, when said check node has failed or its check information reconstruction is incomplete, write the data block of said write operation sent by said application server into said first cache unit and then return write-request completion information.
6. A RAID1 and RAID4 hybrid structure network storage method, characterized in that it comprises the following steps:
during a write or read operation of data performed in connection with an application server, when a data node of a redundancy group receives a write operation request from said application server, caching the data block of the write operation and, in RAID1 mode, sending the data block of said write operation to a check node of the corresponding said redundancy group to form a RAID1 mirror;
said check node caching the data block of said write operation and combining it with the data blocks of the same logical address on the other data nodes of the corresponding said redundancy group to form a RAID4 redundancy stripe, performing an asynchronous redundancy computation, storing check information after the computation completes, and, after said asynchronous redundancy computation and said check information update are stored, sending a data migration command to said data node;
said data node, after receiving said data migration command sent by said check node, migrating the data block of the corresponding said write operation to a RAID4 data set;
wherein said redundancy groups are a plurality of logically independent redundancy groups of RAID1 and RAID4 hybrid structure, each having a plurality of said data nodes and one said check node.
7. The RAID1 and RAID4 hybrid structure network storage method according to claim 6, characterized in that it further comprises the following step:
when said redundancy group receives a read operation request from said application server, reading the data block of the read operation from said data node and sending the data block of said read operation to said application server.
8. The RAID1 and RAID4 hybrid structure network storage method according to claim 7, characterized in that it further comprises a write operation in a RAID normal state and a read operation in said RAID normal state, wherein:
the write operation in said RAID normal state comprises the following steps:
said data node receiving said write operation request from said application server;
storing the data block of said write operation in said first cache unit while forwarding the data block of said write operation to said check node;
said data node sending a completion confirmation to said application server;
the read operation in said RAID normal state comprises the following steps:
said data node receiving the read operation request from said application server;
checking, according to said read operation request, whether the data block of said read operation exists in said first cache unit;
if the data block of said read operation exists in said first cache unit, sending it to said application server; if it does not exist in said first cache unit, reading it from the first logical volume of said data node and then returning it to said application server;
said forwarding of the data block of said write operation to said check node comprises the following steps:
storing the data block of said write operation in a second cache unit;
said second cache unit merging the data blocks of said write operation in the spatial dimension;
performing cache write-back, according to a cache policy, on those data blocks of said write operation that satisfy an asynchronous redundancy computation condition.
9. The RAID1 and RAID4 hybrid structure network storage method according to claim 8, characterized in that it further comprises a write operation in a RAID degraded state and a read operation in said RAID degraded state, wherein:
The write operation in said RAID degraded state comprises the following steps:
When a data node of said redundancy group fails or its data reconstruction is not complete, said check node receives the write operation request issued by said application server; after writing the data block of said write operation into the second buffer unit of said check node, said check node returns a confirmation to said application server;
When said check node fails or its check information reconstruction is not complete, said data node receives the write operation request issued by said application server; after writing the data block of said write operation into the first buffer unit of said data node, said data node returns write operation completion information to said application server;
The read operation in said RAID degraded state comprises the following steps:
When said data node fails or its data reconstruction is not complete, said check node receives the read operation request issued by said application server; said check node checks said second buffer unit; when the data block of said read operation is present in said second buffer unit, said check node returns the data block of said read operation to said application server;
When said check node fails or its check information reconstruction is not complete, said data node receives the read operation request issued by said application server and returns the data block of said read operation to said application server;
When the data block of said read operation is not present in said second buffer unit, said check node sends a request to the other data nodes of said redundancy group to read the data blocks at the same logical address;
After reading the check information corresponding to the data block of said read operation from the second check volume, said check node returns check-read completion information;
Said data node reads the data block of said read operation from said first logical volume and returns it to said check node;
When the data blocks of said read operation from all other data nodes of said redundancy group have been returned to said check node, said check node reconstructs the data block of said read operation on the failed data node according to the RAID4 recovery algorithm.
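The RAID4 recovery algorithm invoked in the final step is standard parity reconstruction: the missing data block equals the XOR of the check block with the surviving data blocks of the same stripe. A minimal sketch (function names are illustrative, not from the patent):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct_block(surviving_data_blocks, parity_block):
    """RAID4 recovery: the missing data block in a stripe is the XOR
    of the parity block with all surviving data blocks."""
    return xor_blocks(surviving_data_blocks + [parity_block])
```

This works because parity is itself the XOR of all data blocks in the stripe, so XORing it with every block but one cancels everything except the missing block.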
10. The RAID1 and RAID4 hybrid structure network storage method according to claim 9, characterized in that it further comprises a data reconstruction step for said data node;
The data reconstruction step of said data node comprises a RAID1 data recovery step and a first RAID4 synchronous recovery step, wherein:
Said RAID1 data recovery step comprises:
Said check node collects all data sets related to the faulty data, and sets a RAID1 first data bitmap in said check node according to said data sets;
Recovering all the data blocks recorded in said RAID1 first data bitmap;
Said data node stores the data blocks of said write operation from said check node into said first buffer unit, and returns write operation completion information;
The first RAID4 synchronous recovery step comprises:
Blocking user access to said check node, and obtaining, on said check node, a second data bitmap of the data corresponding to said faulty data for which said asynchronous redundancy computation has not yet been performed;
Said data node migrates the data marked in said second data bitmap to said RAID4 data set;
Restoring user access to said check node; said check node rebuilds said RAID4;
Setting said second data bitmap in said RAID4;
Said RAID4 synchronizes the check information according to said second data bitmap, until the check information of said RAID4 is fully synchronized;
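The bitmap-driven check-information synchronization can be sketched as follows: only stripes flagged in the second data bitmap get their parity recomputed, and each flag is cleared once the check information is stored. Here `read_stripe` and `write_parity` are hypothetical callbacks standing in for the node I/O paths, not interfaces named in the patent.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (RAID4 parity)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def resync_parity(dirty_bitmap, read_stripe, write_parity):
    """Recompute check information only for stripes whose bit is set,
    clearing each bit after the parity is safely stored."""
    for stripe_no, dirty in enumerate(dirty_bitmap):
        if dirty:
            # read_stripe returns the data blocks of all data nodes
            # for this stripe; their XOR is the new check block.
            write_parity(stripe_no, xor_blocks(read_stripe(stripe_no)))
            dirty_bitmap[stripe_no] = False
```

Scanning only set bits is what keeps the resynchronization incremental: clean stripes, whose parity was already computed before the fault, are never re-read.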
And,
The method further comprises a check information reconstruction step for said check node;
The check information reconstruction step of said check node comprises a data migration step for all data nodes of said redundancy group and a second RAID4 synchronous recovery step, wherein:
The data migration step for all data nodes of said redundancy group comprises:
Marking the RAID1 first data bitmap in all data nodes of said redundancy group as clean;
All data nodes of said redundancy group rejoining the devices they export to said check node into said RAID1 in re-add mode;
Blocking user access to all data nodes of said redundancy group;
All data nodes of said redundancy group migrating the data for which said asynchronous redundancy computation has not been performed to said RAID4 data set, and then restoring user access to all data nodes of said redundancy group;
The second RAID4 synchronous recovery step comprises:
Collecting, from all data nodes of said redundancy group, the information on the data for which said asynchronous redundancy computation has been performed, and merging said information to form a union;
Setting the RAID4 reconstruction bitmap in said check node according to said union;
Said check node sending, to all data nodes of said redundancy group, a request to read the data blocks for which said asynchronous redundancy computation has been performed;
After receiving said request, all data nodes of said redundancy group returning the data blocks for which said asynchronous redundancy computation has been performed to said check node;
Said check node performing the RAID4 redundancy computation on said data blocks, and storing the computed check information in said second check volume;
Marking all the check data in said RAID4 reconstruction bitmap as reconstructed.
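The union step above can be sketched as a per-stripe OR of the bitmaps reported by the data nodes: the resulting reconstruction bitmap tells the new check node exactly which stripes need their check information recomputed. This assumes each node reports one bit per stripe, which is an illustrative simplification rather than the patent's on-disk format.

```python
def rebuild_parity_bitmap(per_node_bitmaps):
    """OR together the per-data-node bitmaps of stripes that already
    went through asynchronous redundancy computation; the union marks
    every stripe whose check information must be rebuilt."""
    return [any(bits) for bits in zip(*per_node_bitmaps)]
```

A stripe needs its parity rebuilt if any data node had migrated data into it, which is why a union (rather than an intersection) of the per-node bitmaps is taken.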
CN201410033455.3A 2014-01-23 2014-01-23 RAID1 and RAID4 hybrid structure network storage system and method Expired - Fee Related CN103761058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410033455.3A CN103761058B (en) 2014-01-23 2014-01-23 RAID1 and RAID4 hybrid structure network storage system and method


Publications (2)

Publication Number Publication Date
CN103761058A true CN103761058A (en) 2014-04-30
CN103761058B CN103761058B (en) 2016-08-17

Family

ID=50528303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410033455.3A Expired - Fee Related CN103761058B (en) 2014-01-23 2014-01-23 RAID1 and RAID4 mixed structure network store system and method

Country Status (1)

Country Link
CN (1) CN103761058B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019893A (en) * 2012-11-16 2013-04-03 华中科技大学 Multi-disk fault-tolerant two-dimensional hybrid disk RAID4 system architecture and read-write method thereof
CN103186437A (en) * 2013-04-02 2013-07-03 浪潮电子信息产业股份有限公司 Method for upgrading hybrid hard disk array system
CN103488432A (en) * 2013-09-16 2014-01-01 哈尔滨工程大学 Hybrid disk array, deferred write verification method for hybrid disk array, and data recovery method for hybrid disk array


Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104142979A (en) * 2014-07-15 2014-11-12 武汉理工大学 Index method for realizing RFID (Radio Frequency Identification Devices) tag storage management
CN104142979B (en) * 2014-07-15 2017-07-25 武汉理工大学 A kind of indexing means for realizing RFID tag storage management
CN104317678A (en) * 2014-10-30 2015-01-28 浙江宇视科技有限公司 Method and device for repairing RAID (redundant array of independent disks) without interrupting data storage service
CN105808154A (en) * 2014-12-31 2016-07-27 北京神州云科数据技术有限公司 Bit map based dual-controller cache memory write-back method and apparatus
CN105808154B (en) * 2014-12-31 2019-05-24 深圳神州数码云科数据技术有限公司 The cache memory write-back method and device of dual controller based on bitmap
CN104714758A (en) * 2015-01-19 2015-06-17 华中科技大学 Method for building array by adding mirror image structure to check-based RAID and read-write system
CN104714758B (en) * 2015-01-19 2017-07-07 华中科技大学 A kind of array construction method and read-write system based on verification RAID addition mirror-image structures
CN105094696B (en) * 2015-07-06 2018-02-06 中国科学院计算技术研究所 Based on RAID1 and RAID4 mixed structures transfer process data reliability ensuring method and device
CN105094696A (en) * 2015-07-06 2015-11-25 中国科学院计算技术研究所 Method and apparatus for ensuring data reliability during conversion process based on RAID 1 and RAID 4 mixture structure
CN106027638B (en) * 2016-05-18 2019-04-12 华中科技大学 A kind of hadoop data distributing method based on hybrid coding
CN106027638A (en) * 2016-05-18 2016-10-12 华中科技大学 Hadoop data distribution method based on hybrid coding
CN106227464B (en) A double-layer redundant storage system and data writing, reading and recovery methods therefor
CN106227464A (en) * 2016-07-14 2016-12-14 中国科学院计算技术研究所 A kind of double-deck redundant storage system and data write, reading and restoration methods
CN109074227B (en) * 2016-11-25 2020-06-16 华为技术有限公司 Data verification method and storage system
CN109074227A (en) * 2016-11-25 2018-12-21 华为技术有限公司 A kind of method and storage system of data check
US10303374B2 (en) 2016-11-25 2019-05-28 Huawei Technologies Co.,Ltd. Data check method and storage system
WO2018094704A1 (en) * 2016-11-25 2018-05-31 华为技术有限公司 Data check method and storage system
CN111666043A (en) * 2017-11-03 2020-09-15 华为技术有限公司 Data storage method and equipment
CN109783000A (en) * 2017-11-10 2019-05-21 成都华为技术有限公司 A kind of data processing method and equipment
CN109032513B (en) * 2018-07-16 2020-08-25 山东大学 RAID (redundant array of independent disks) architecture based on SSD (solid State disk) and HDD (hard disk drive) and backup and reconstruction methods thereof
CN109032513A (en) * 2018-07-16 2018-12-18 山东大学 Based on the RAID framework of SSD and HDD and its backup, method for reconstructing
CN111416753A (en) * 2020-03-11 2020-07-14 上海爱数信息技术股份有限公司 High-availability method of two-node Ceph cluster
CN111416753B (en) * 2020-03-11 2021-12-03 上海爱数信息技术股份有限公司 High-availability method of two-node Ceph cluster
WO2022143677A1 (en) * 2020-12-28 2022-07-07 华为技术有限公司 Method for using intermediate device to process data, computer system, and intermediate device
CN113420341A (en) * 2021-06-11 2021-09-21 联芸科技(杭州)有限公司 Data protection method, data protection equipment and computer system
CN113420341B (en) * 2021-06-11 2023-08-25 联芸科技(杭州)股份有限公司 Data protection method, data protection equipment and computer system
CN115309591A (en) * 2022-10-10 2022-11-08 浪潮电子信息产业股份有限公司 Recovery method and related device of full flash memory system
CN115309591B (en) * 2022-10-10 2023-03-24 浪潮电子信息产业股份有限公司 Recovery method and related device of full flash memory system

Also Published As

Publication number Publication date
CN103761058B (en) 2016-08-17

Similar Documents

Publication Publication Date Title
CN103761058B (en) RAID1 and RAID4 hybrid structure network storage system and method
US8060772B2 (en) Storage redundant array of independent drives
CN101291347B (en) Network storage system
JP4815825B2 (en) Disk array device and method for reconstructing the same
KR100701563B1 (en) Storage control apparatus and method
AU654482B2 (en) A disk memory system
JP5887757B2 (en) Storage system, storage control device, and storage control method
US5566316A (en) Method and apparatus for hierarchical management of data storage elements in an array storage device
US7975168B2 (en) Storage system executing parallel correction write
CN100489796C (en) Methods and system for implementing shared disk array management functions
US9823876B2 (en) Nondisruptive device replacement using progressive background copyback operation
JP3742494B2 (en) Mass storage device
CN101093434B (en) Method of improving input and output performance of raid system using matrix stripe cache
CN101755257B (en) Managing the copying of writes from primary storages to secondary storages across different networks
US20040037120A1 (en) Storage system using fast storage devices for storing redundant data
CN102053802B (en) Network RAID (redundant array of independent disk) system
CN103513942B (en) The reconstructing method of raid-array and device
CN103793182A (en) Scalable storage protection
CN106227464B (en) A double-layer redundant storage system and data writing, reading and recovery methods therefor
CN103246478A (en) Disk array system supporting grouping-free overall situation hot standby disks based on flexible redundant array of independent disks (RAID)
CN102110154A (en) File redundancy storage method in cluster file system
CN105094696A (en) Method and apparatus for ensuring data reliability during conversion process based on RAID 1 and RAID 4 mixture structure
CN116204137B (en) Distributed storage system, control method, device and equipment based on DPU
US20040216012A1 (en) Methods and structure for improved fault tolerance during initialization of a RAID logical unit
JP4794357B2 (en) RAID level conversion method and RAID apparatus in RAID apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160817

Termination date: 20210123

CF01 Termination of patent right due to non-payment of annual fee