CN112328182B - RAID data management method, device and computer readable storage medium

RAID data management method, device and computer readable storage medium

Info

Publication number
CN112328182B
Authority
CN
China
Prior art keywords
data
raid
disk
metadata
disks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011352853.3A
Other languages
Chinese (zh)
Other versions
CN112328182A (en)
Inventor
吴文政
邹博
齐季
应志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Jingjia Microelectronics Co ltd
Original Assignee
Changsha Jingjia Microelectronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Jingjia Microelectronics Co ltd filed Critical Changsha Jingjia Microelectronics Co ltd
Priority to CN202011352853.3A
Publication of CN112328182A
Application granted
Publication of CN112328182B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Abstract

The invention discloses a RAID data management method, where the RAID is composed of data disks distributed across different physical disks. The method comprises the following steps: detecting the state of each data disk; and if at least one data disk has failed, writing the data to be stored into the RAID stripes of the data disks in the normal or recovered-normal state. The invention also provides a RAID data management device and a computer-readable storage medium. With this technical scheme, as long as one data disk remains valid, the device can continue to store the service data required by the user even under extremely harsh conditions, without affecting previously stored data; after the device's data disks return to a normal environment, the data stored both before and during the disk failures can be accessed normally, and data storage services can continue without interruption.

Description

RAID data management method, device and computer readable storage medium
Technical Field
The invention belongs to the technical field of data storage management, and in particular relates to a RAID data management method, device, and computer-readable storage medium.
Background
RAID technology originated in 1987, introduced by the University of California, Berkeley. RAID is an abbreviation of "Redundant Array of Independent Disks". In brief, RAID combines multiple independent physical hard disks in different ways into a hard disk group (a logical hard disk), providing higher storage performance than a single hard disk as well as data redundancy. Redundant disk arrays were originally developed to combine small, inexpensive disks in place of large, expensive ones and thus reduce the cost of mass data storage, while using redundant information so that the failure of a disk would not make the data inaccessible, thereby achieving a degree of data protection and a moderate increase in data transfer speed. With the explosive growth of storage demand and the development of storage technology, RAID is now widely used in data computing and storage devices. According to the user's requirements for storage performance and security, RAID levels range from RAID0 to RAID7; commonly used levels include RAID0, RAID1/10, RAID5/50 and RAID6/60.
For the same number of disks, the storage space utilization of the different RAID levels, from high to low, is: RAID0 > RAID5/50 > RAID6/60 > RAID1; read/write performance, from high to low, follows the same order: RAID0 > RAID5/50 > RAID6/60 > RAID1; data security ranks in exactly the opposite order: RAID0 < RAID5/50 < RAID6/60 < RAID1. Users therefore usually adopt a compromise between data security and performance, such as RAID5 or RAID6: each RAID5 group can keep operating with one bad disk, and RAID6 can operate normally with two bad disks.
RAID5 and RAID6 handle the data security and access problems caused by damaged storage media well under normal operating conditions, but they struggle when storage media fail temporarily because of harsh operating conditions. For data storage devices operating in such conditions, users want the device to store as much data as possible even in extremely hostile environments (beyond the normal operating range), so that an incident can later be verified and analyzed from the complete or partial data. Under such extremes, the number of failing data disks and the failure times are random. If RAID5/6 continues to be used, the whole RAID stops working once the number of failed disks exceeds 1-2; even when the number of failed disks stays within the range RAID5/6 allows, rebuilding RAID5/RAID6 significantly degrades read/write performance, and the rebuild time and cost grow with the amount of stored data. If the serial numbers of the failed data disks change frequently, RAID5/6 cannot keep working at all, so under these conditions the storage device records no data and the user cannot obtain the data generated in the extreme situation, which hinders subsequent analysis and event verification. On some large-capacity storage devices the number of tolerable bad disks can be increased with RAID60 to alleviate the problem, but RAID60 requires a relatively large number of disks, which many small storage devices cannot provide, and its space utilization and performance losses are relatively large; moreover, RAID60 still cannot solve the RAID rebuild problem caused by random data disk failures.
Disclosure of Invention
To solve this problem, the invention provides a highly reliable RAID data management method and device for extremely harsh conditions. It exploits the advantages of RAID0 (no constraint on the number of disks, high space utilization, and high read/write performance) and addresses the uncertainty in both the number and the serial numbers of failed data disks under extreme environmental conditions by dynamically synchronizing and distributing the RAID metadata across the data disks.
To this end, the invention adopts the following technical scheme:
The invention provides a RAID data management method, where the RAID is composed of data disks distributed across different physical disks, comprising the following steps:
detecting the state of each data disk;
and if at least one data disk has failed, writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state.
Preferably, if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period.
Preferably, the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the data disk's failure period.
Preferably, the method further comprises RAID initialization, which specifically includes:
reading the RAID information of each data disk and loading it into a RAID metadata buffer;
striping the RAID, and dividing each data disk of the RAID into a RAID metadata area and a RAID data area;
performing bitmap processing on the RAID stripes to obtain the bitmap information corresponding to each RAID stripe;
loading the RAID stripe bitmap information into the RAID metadata buffer;
and synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID.
Preferably, synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID includes:
reading the metadata in the RAID metadata buffer, where the metadata comprises the RAID information and RAID stripe bitmap information of each data disk;
verifying the correctness of each data disk's metadata according to the metadata's verification information;
comparing the update times of the metadata on each data disk, and loading and running the RAID with the most recently updated metadata;
and synchronizing the RAID metadata on all the data disks of the RAID.
Preferably, when a data disk is recovered, the following steps are executed:
reading the RAID information corresponding to the data disk to be recovered, where the RAID information comprises the RAID metadata of the data disk to be recovered in its failed state;
synchronizing the content of the RAID metadata buffer corresponding to the RAID information to the metadata area of the data disk to be recovered;
and adding the synchronized data disk to the corresponding RAID group to update the RAID configuration, and synchronizing the updated configuration to each data disk of the RAID.
The invention also provides a RAID data management device, where the RAID is composed of data disks distributed across different physical disks, comprising:
a detection module for detecting the state of each data disk;
and a writing module for writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state if at least one data disk has failed.
Preferably, if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period.
Preferably, the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the data disk's failure period.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the RAID data management method.
The invention detects the state of each data disk; if at least one data disk has failed, the data to be stored is written into the RAID stripes of the data disks in the normal state and the recovered-normal state; and if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on it during its failure period. Thus, as long as one data disk remains valid, the device can continue to store the service data required by the user even under extremely harsh conditions, without affecting previously stored data; after the device's data disks return to a normal environment, the data stored before and during the failures can be accessed normally while data storage continues. The method and device effectively meet the requirement of preserving as much service data as possible under extremely harsh environmental conditions, and improve the reliability and storage efficiency of data storage equipment.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow chart of a RAID data management method according to the present invention;
FIG. 2 is a schematic diagram of a RAID architecture;
FIG. 3 is a schematic diagram of data storage under a normal RAID state;
FIG. 4 is a schematic view of data storage in the event of a data disk failure;
FIG. 5 is a schematic diagram of data storage in the event of multiple data disk failures;
FIG. 6 is a schematic diagram of data storage after recovery of a data disk;
FIG. 7 is a schematic diagram of data storage when writing continues after recovery of a data disk;
FIG. 8 is a block diagram of a RAID data management apparatus according to the present invention.
Detailed Description
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
In extremely harsh environments, conventional RAID5 and RAID6 cannot cope with the unstable data disk states of data storage devices and cannot resume normal operation. Users require that data can still be stored when only one disk of the device remains working, without affecting previously stored data; after the environment recovers, the RAID should load normally provided the storage media are undamaged, and the data stored in both the normal environment and the extremely harsh working environment should be readable. Realizing these functions requires solving the following problems:
1) in an extremely harsh environment, the RAID must adapt quickly to changes in data disk state and promptly synchronize the RAID metadata to the data disks that remain in the normal state;
2) the RAID must dynamically support arbitrary changes in the number of data disks, and must read and write normally as long as any data disk is normal;
3) in a normal working environment, both the data stored under normal conditions and the data stored while data disks were failed must be readable; the RAID must write data normally after a data disk returns from the abnormal state to the normal state, without overwriting the data stored while the disk was failed.
Example 1:
As shown in FIG. 1, the invention provides a RAID data management method, where the RAID is composed of data disks distributed across different physical disks. The method includes the following steps:
detecting the state of each data disk, and synchronizing the RAID metadata of each data disk into a RAID metadata buffer;
and if at least one data disk has failed, writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state.
Further, if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period; the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the disk's failure period. A minimal sketch of this write rule follows.
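To make the write rule concrete, the following minimal Python sketch illustrates it under stated assumptions; it is not the patent's implementation, and the names DiskState, Disk, Raid and write_blocks are invented here (later sketches in this embodiment reuse them). Data is written only to disks that are currently online, and new data is placed only on stripes whose bitmap bit is still clear, so the no-data areas left on a failed disk are never overwritten after it recovers.

# Illustrative sketch only; names and structure are assumptions, not the patent's implementation.
from enum import Enum, auto

class DiskState(Enum):
    NORMAL = auto()     # healthy since RAID creation
    FAILED = auto()     # currently invalid
    RECOVERED = auto()  # failed earlier, now back online

class Disk:
    def __init__(self, disk_id):
        self.disk_id = disk_id
        self.state = DiskState.NORMAL
        self.units = {}  # stripe index -> data block held by this disk

class Raid:
    def __init__(self, disks, num_stripes):
        self.disks = disks
        # Stripe bitmap: 0 = invalid (never written), 1 = holds valid data.
        self.stripe_bitmap = [0] * num_stripes

    def online_disks(self):
        return [d for d in self.disks if d.state != DiskState.FAILED]

    def write_blocks(self, blocks):
        """Write one stripe's worth of blocks, one block per online disk."""
        online = self.online_disks()
        if not online:
            raise IOError("no valid data disk: RAID cannot accept writes")
        # New data goes only to an unwritten stripe; stripes already marked
        # in the bitmap (including those written while a disk was failed)
        # are never reused until the bitmap bit is cleared, so the no-data
        # areas on a recovered disk stay untouched.
        try:
            idx = self.stripe_bitmap.index(0)
        except ValueError:
            raise IOError("RAID is full")
        for disk, block in zip(online, blocks):
            disk.units[idx] = block
        self.stripe_bitmap[idx] = 1  # mark the stripe as written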
With this technical scheme, if at least one data disk has failed, the data to be stored is written into the RAID stripes of the data disks in the normal state and the recovered-normal state. Thus, as long as one data disk remains valid, the device can continue to store the service data required by the user even under extremely harsh conditions without affecting previously stored data, and after the device's data disks return to a normal environment, the data stored before and during the failures can be accessed normally while data storage continues.
Further, the method includes RAID initialization, which, as shown in FIG. 2, specifically comprises:
reading the RAID information of each data disk and loading it into the RAID metadata buffer;
striping the RAID, and dividing each data disk of the RAID into a RAID metadata area and a RAID data area;
performing bitmap processing on the RAID stripes to obtain the bitmap information corresponding to each RAID stripe (the stripe bitmap for short), where the initial value of each bitmap bit is the invalid value ('0'), and when data to be stored is written to a RAID stripe, the corresponding bitmap bit is modified (marked) to the valid value ('1');
loading the RAID stripe bitmap information into the RAID metadata buffer;
and synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID.
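A minimal sketch of this initialization flow follows, reusing the Disk and Raid classes from the sketch above; the reserved metadata size, the stripe unit size, and the init_raid name are assumptions made only for illustration.

METADATA_UNITS = 16  # stripe units reserved per disk for the RAID metadata area (assumed size)

def init_raid(disks, disk_units, stripe_unit_size=64 * 1024):
    """Divide each disk into a metadata area and a data area, stripe the
    data area, and build the initial all-zero stripe bitmap."""
    data_units = disk_units - METADATA_UNITS
    raid = Raid(disks, num_stripes=data_units)
    metadata_cache = {
        "raid_info": {
            "disks": [d.disk_id for d in disks],
            "stripe_unit_size": stripe_unit_size,
        },
        # One bit per stripe, initially the invalid value '0'.
        "stripe_bitmap": raid.stripe_bitmap,
    }
    # Synchronize the in-memory metadata cache to every disk's metadata area
    # (modeled here as a per-disk attribute; shallow copies share the bitmap).
    for d in disks:
        d.metadata = dict(metadata_cache)
    return raid, metadata_cache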
Synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID includes:
reading the metadata in the RAID metadata buffer, where the metadata comprises the RAID information and RAID stripe bitmap information of each data disk;
verifying the correctness of each data disk's metadata according to the metadata's verification information;
comparing the update times of the metadata on each data disk, and loading and running the RAID with the most recently updated metadata;
and synchronizing the RAID metadata on all the data disks of the RAID.
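The load-verify-synchronize sequence might look as follows. The on-disk record format (the payload, checksum and update_time fields, and the metadata_record attribute) is an assumed structure for illustration, not the patent's actual metadata layout.

import hashlib

def checksum(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def load_and_sync_metadata(disks):
    """Read every disk's metadata copy, drop corrupt copies by checksum,
    run the RAID from the most recently updated copy, then push that copy
    back to every disk so that all replicas agree."""
    valid = []
    for d in disks:
        record = d.metadata_record  # assumed: {"payload", "checksum", "update_time"}
        if checksum(record["payload"]) == record["checksum"]:
            valid.append(record)
    if not valid:
        raise IOError("no valid RAID metadata copy found")
    latest = max(valid, key=lambda r: r["update_time"])
    for d in disks:
        d.metadata_record = dict(latest)  # re-synchronize every copy
    return latest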
The workflow after a RAID data disk fails is as follows:
In the RAID of the present application, the number of data disks is not limited. Taking a RAID of 5 disks as an example, and assuming that file system writes are sequential, data files are stored across all data disks under normal conditions, as shown schematically in FIG. 3.
When the device is working normally, file data is distributed evenly over the stripes of the RAID. If disks fail while a data file is being written, the RAID state changes, and the corresponding RAID state, including the state information of the data disks, is stored in the RAID metadata buffer; when the file system continues writing, it does so in the original manner. FIG. 4 shows how data files continue to be stored on the remaining disks of the RAID after a data disk fails.
As the data storage diagrams in FIG. 3 and FIG. 4 show, whether data is stored while the data disks are normal or while a data disk has failed, the bitmap bit corresponding to a RAID stripe is marked when that stripe's data is written. In the case of a data disk failure, once a stripe's bitmap bit has been marked as written, the corresponding no-data areas of that stripe (the areas marked "null" in FIG. 5) cannot be read or written again after the failed disk recovers, until the corresponding bitmap bit is cleared. FIG. 5 shows data storage after multiple disk failures.
As the data storage diagram in FIG. 5 shows, the RAID technology designed in the invention supports data storage after any number of data disks have failed (as long as the number of failed disks is less than the total number of data disks in the RAID); data written after a disk failure does not affect data written before it, and both can be read normally once the state of the data disks is restored.
The workflow of RAID data disk recovery is as follows:
When a data disk is recovered, the following steps are executed:
reading the RAID information corresponding to the data disk to be recovered, where the RAID information comprises the RAID metadata of the data disk to be recovered in its failed state;
synchronizing the content of the RAID metadata buffer corresponding to the RAID information to the metadata area of the data disk to be recovered;
and adding the synchronized data disk to the corresponding RAID group to update the RAID configuration, and synchronizing the updated configuration to each data disk of the RAID.
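Continuing the sketches above, the recovery flow can be pictured as follows; recover_disk and the per-disk metadata attribute are illustrative assumptions, not the patent's actual interfaces.

def recover_disk(raid, metadata_cache, disk):
    """Bring a previously failed data disk back into the RAID group."""
    # The metadata cache already reflects the RAID state that evolved while
    # the disk was failed, including the stripe bitmap; push it to the
    # recovered disk's metadata area first.
    disk.metadata = dict(metadata_cache)
    # Rejoin the RAID group and record the updated configuration everywhere.
    disk.state = DiskState.RECOVERED
    for d in raid.online_disks():
        d.metadata = dict(metadata_cache)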
After a data disk has been successfully recovered, data can be read and written. For a read, the file system layer issues a read request to the RAID layer according to the file's LBA position in the RAID; the RAID layer receives the request, redirects the LBAs to the corresponding data disks according to the RAID state in which the file was written, and reads the data according to the RAID stripe size. Assume a file is stored on the data disks as shown in FIG. 6.
The "no-data area" in FIG. 6 is an area left empty while data was being written during the data disk's failure; after the disk recovers, these areas cannot be written again until the corresponding file information on the stripe is cleared. If the data content of file A in FIG. 6 is to be read after the disk recovers, the file system reads at its normal request size and sends the RAID layer a read request covering the LBA range B1-B5. On receiving the request, the RAID layer determines from the RAID state at write time that data disk 2 holds a no-data area, so it sends the read commands for blocks B1-B4 to data disks 1/3/4/5 respectively, and the read command for block B5 to data disk 1, as reproduced in the sketch below.
After a data disk's state is restored and reads and writes resume, data files are created according to the latest RAID state, and data is written to the data disks currently online in the RAID. Assuming the state of data disk 4 is restored while the RAID is working, FIG. 7 shows the file data layout when storage continues. A short end-to-end usage example follows.
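Putting the earlier sketches together, a short usage example (illustrative only) reproduces the scenario of FIG. 7: disk 4 fails, stripes are written across the remaining four disks, and after recovery new stripes span all five disks again.

disks = [Disk(i) for i in range(1, 6)]
raid, cache = init_raid(disks, disk_units=1024)

raid.write_blocks([b"A1", b"A2", b"A3", b"A4", b"A5"])  # all five disks online
disks[3].state = DiskState.FAILED                       # disk 4 fails
raid.write_blocks([b"B1", b"B2", b"B3", b"B4"])         # stripe spans disks 1/2/3/5
recover_disk(raid, cache, disks[3])                     # disk 4 comes back online
raid.write_blocks([b"C1", b"C2", b"C3", b"C4", b"C5"])  # new stripe uses all five disks
# Disk 4's unit in the middle stripe remains a no-data area and is never reused.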
Example 2:
As shown in FIG. 8, the invention further provides a RAID data management device, where the RAID is composed of data disks distributed across different physical disks, comprising:
a detection module for detecting the state of each data disk;
and a writing module for writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state if at least one data disk has failed.
Further, if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period; the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the disk's failure period.
With this technical scheme, if at least one data disk has failed, the data to be stored is written into the RAID stripes of the data disks in the normal state and the recovered-normal state. Thus, as long as one data disk remains valid, the device can continue to store the service data required by the user even under extremely harsh conditions without affecting previously stored data, and after the device's data disks return to a normal environment, the data stored before and during the failures can be accessed normally while data storage continues.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the RAID data management method.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions which, when loaded and executed by a computer, cause the computer to perform, wholly or partially, the procedures or functions described in the embodiments of the application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)).
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a computer-readable storage medium, where the storage medium is a non-transitory medium such as a random access memory, a read-only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape, a floppy disk, an optical disk, or any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A RAID data management method, the RAID being composed of data disks distributed across different physical disks, characterized by comprising the following steps:
detecting the state of each data disk;
if at least one data disk has failed, writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state;
and if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period.
2. The RAID data management method of claim 1, wherein the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the data disk's failure period.
3. The RAID data management method according to claim 1 or 2, further comprising RAID initialization, which specifically comprises:
reading the RAID information of each data disk and loading it into a RAID metadata buffer;
striping the RAID, and dividing each data disk of the RAID into a RAID metadata area and a RAID data area;
performing bitmap processing on the RAID stripes to obtain the bitmap information corresponding to each RAID stripe;
loading the RAID stripe bitmap information into the RAID metadata buffer;
and synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID.
4. The RAID data management method of claim 3, wherein synchronizing the RAID information and RAID stripe bitmap information of each data disk in the RAID metadata buffer to the RAID metadata area of each data disk of the RAID comprises:
reading the metadata in the RAID metadata buffer, where the metadata comprises the RAID information and RAID stripe bitmap information of each data disk;
verifying the correctness of each data disk's metadata according to the metadata's verification information;
comparing the update times of the metadata on each data disk, and loading and running the RAID with the most recently updated metadata;
and synchronizing the RAID metadata on all the data disks of the RAID.
5. The RAID data management method according to claim 3, wherein upon recovery of a data disk the following steps are executed:
reading the RAID information corresponding to the data disk to be recovered, where the RAID information comprises the RAID metadata of the data disk to be recovered in its failed state;
synchronizing the content of the RAID metadata buffer corresponding to the RAID information to the metadata area of the data disk to be recovered;
and adding the synchronized data disk to the corresponding RAID group to update the RAID configuration, and synchronizing the updated configuration to each data disk of the RAID.
6. A RAID data management device, the RAID being composed of data disks distributed across different physical disks, the device comprising:
a detection module for detecting the state of each data disk;
a writing module for writing the data to be stored into the RAID stripes of the data disks in the normal state and the recovered-normal state if at least one data disk has failed;
wherein if a data disk is in the recovered-normal state, no new data is written into the no-data areas left on that disk while the data to be stored was being written to the RAID during its failure period.
7. The RAID data management device of claim 6, wherein the no-data areas are determined according to whether the bitmap information corresponding to a RAID stripe changed while the data to be stored was being written to the RAID during the data disk's failure period.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 5.
CN202011352853.3A 2020-11-27 2020-11-27 RAID data management method, device and computer readable storage medium Active CN112328182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011352853.3A CN112328182B (en) 2020-11-27 2020-11-27 RAID data management method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011352853.3A CN112328182B (en) 2020-11-27 2020-11-27 RAID data management method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112328182A CN112328182A (en) 2021-02-05
CN112328182B true CN112328182B (en) 2022-04-22

Family

ID=74309141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011352853.3A Active CN112328182B (en) 2020-11-27 2020-11-27 RAID data management method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112328182B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115129267B (en) * 2022-09-01 2023-02-03 苏州浪潮智能科技有限公司 Domain address changing method, device and equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411516A (en) * 2011-12-13 2012-04-11 云海创想信息技术(天津)有限公司 RAID5 (Redundant Array of Independent Disk 5) data reconstruction method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4426262B2 (en) * 2003-11-26 2010-03-03 株式会社日立製作所 Disk array device and failure avoiding method for disk array device
US8074019B2 (en) * 2007-11-13 2011-12-06 Network Appliance, Inc. Preventing data loss in a storage system
CN102226892B (en) * 2011-05-20 2013-10-23 杭州华三通信技术有限公司 Disk fault tolerance processing method and device thereof
CN102662609B (en) * 2012-04-16 2016-03-30 华为软件技术有限公司 The method of video access and device
CN103729268A (en) * 2014-01-15 2014-04-16 浪潮电子信息产业股份有限公司 Data recovery method for RAID5 with two disks lost
CN105094704B (en) * 2015-07-24 2019-01-15 浙江宇视科技有限公司 A kind of method and apparatus of high scalability RAID array storage video monitoring data
CN105630417B (en) * 2015-12-24 2018-07-20 创新科软件技术(深圳)有限公司 A kind of RAID5 systems and in the subsequent method for continuing data of RAID5 thrashings
CN108733314B (en) * 2017-04-17 2021-06-29 伊姆西Ip控股有限责任公司 Method, apparatus, and computer-readable storage medium for Redundant Array of Independent (RAID) reconstruction
CN109725822B (en) * 2017-10-27 2022-03-11 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing a storage system
CN111857552A (en) * 2019-04-30 2020-10-30 伊姆西Ip控股有限责任公司 Storage management method, electronic device and computer program product

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411516A (en) * 2011-12-13 2012-04-11 云海创想信息技术(天津)有限公司 RAID5 (Redundant Array of Independent Disk 5) data reconstruction method and device

Also Published As

Publication number Publication date
CN112328182A (en) 2021-02-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant