CN116302673A - Method for improving data recovery rate of Ceph storage system - Google Patents
Method for improving data recovery rate of Ceph storage system
- Publication number
- CN116302673A (application CN202310602070.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- written
- osd
- writing
- recovered
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1004—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's to protect a block of data words, e.g. CRC or checksum
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/142—Reconfiguring to eliminate the error
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a method for improving the data recovery rate of a Ceph storage system, comprising the following steps: when the hard disk corresponding to OSD(n) in a PG fails, an object tracking module finds all PGs that contain OSD(n), enumerates the objects in each PG, and marks them with a to-be-recovered label; while the hard disk is failed, all data newly written to the affected PGs is written in degraded mode, and each write is handled according to the type of object being written; once the OSD state returns to normal, the object tracking module selects a set number of PGs and starts reconstructing the previously written object data, removing the to-be-recovered label from each object as it is restored; these steps are executed for every PG until all marked objects have been reconstructed, completing data recovery. The invention shortens data recovery time and reduces the impact on services.
Description
Technical Field
The invention relates to the field of data storage, in particular to a method for improving the data recovery rate of a Ceph storage system.
Background
Ceph is a distributed storage system offering high availability, high scalability, and high performance, built on commodity hardware. Commodity hardware has lower performance and reliability than dedicated hardware, but Ceph achieves high performance through clustering and provides high availability and scalability through software design.
Component failures in commodity hardware are routine during later maintenance, and the situation is further exacerbated in large clusters (a Ceph storage system can scale to hundreds or thousands of nodes), where data recovery, data migration, and related operations pose a series of challenging problems.
The first challenge is recovering data after component failures (of hard disks or nodes): Ceph provides data redundancy through replication and erasure coding, allowing a certain number of components to fail simultaneously without data loss.
The second challenge is the speed of data recovery. Recovery that is too slow degrades service performance for a long period and increases the risk of further simultaneous component failures; if the number of simultaneous failures exceeds what the redundancy scheme allows, data is permanently lost.
Existing techniques for increasing data recovery speed mainly raise recovery concurrency along two dimensions. The first uses storage virtualization to spread the data to be recovered across many hard disks, so that after a disk fails as many disks as possible participate in the recovery process, improving recovery performance. The second increases the number of recovery processing threads, so that more CPU resources participate in recovery, raising recovery speed.
These existing techniques can greatly increase recovery speed, but they have two drawbacks. First, if recovery runs too fast it degrades service performance, and manual intervention is required to throttle the recovery rate. Second, during recovery, newly written data is still written in degraded mode; in particular, when data is written faster than it is recovered, the backlog of data awaiting recovery keeps growing and recovery slows down dramatically.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for improving the data recovery rate of a Ceph storage system, comprising the following steps:
step one, when the hard disk corresponding to OSD(n) in a PG fails, the object tracking module finds all PGs that contain OSD(n), enumerates the objects contained in each PG, marks them with a to-be-recovered label, and records the marked objects in the object tracking module;
step two, while the hard disk corresponding to OSD(n) is failed, all data newly written to the affected PGs is written in degraded mode; after a new disk is added or the system allocates a new OSD, newly written data is handled as in step four, and once the new OSD's state is normal, recovery proceeds as in step three;
step three, the object tracking module selects a set number of PGs and starts reconstructing the previously written object data: the missing data fragments are computed with the EC algorithm and written to the newly allocated OSD, and the to-be-recovered label is removed from each object once its data is restored;
step four, the written data is classified and handled according to the type of object being written, until the write completes;
step five, the above steps are executed for every PG until all objects marked for recovery have been reconstructed, completing data recovery.
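The five steps above revolve around the object tracking module maintaining to-be-recovered labels per PG. The following is a minimal illustrative Python sketch of that bookkeeping; all names (`ObjectTracker`, `PG`, `mark_failed_osd`) are hypothetical stand-ins and do not correspond to actual Ceph code.

```python
from dataclasses import dataclass, field

@dataclass
class PG:
    pg_id: int
    osds: list                      # OSD ids in this placement group
    objects: set = field(default_factory=set)

class ObjectTracker:
    """Tracks objects tagged with a to-be-recovered label."""
    def __init__(self):
        self.to_recover = {}        # pg_id -> set of object ids

    def mark_failed_osd(self, pgs, failed_osd):
        # Step one: find every PG containing the failed OSD and tag
        # each of its objects as needing recovery.
        for pg in pgs:
            if failed_osd in pg.osds:
                self.to_recover[pg.pg_id] = set(pg.objects)

    def recover_object(self, pg_id, obj):
        # Step three: after the object's missing fragment is rebuilt
        # and written to the new OSD, remove its label.
        self.to_recover[pg_id].discard(obj)

    def done(self):
        # Step five: recovery is complete when no labels remain.
        return all(not objs for objs in self.to_recover.values())

pgs = [PG(1, [1, 2, 3, 4, 5, 6], {"obj-a", "obj-b"}),
       PG(2, [2, 3, 4, 5, 6, 7], {"obj-c"})]
tracker = ObjectTracker()
tracker.mark_failed_osd(pgs, failed_osd=4)        # OSD4's disk fails
for pg in pgs:
    for obj in list(tracker.to_recover.get(pg.pg_id, set())):
        tracker.recover_object(pg.pg_id, obj)     # rebuild, then unmark
print(tracker.done())                             # True
```

The key property this sketch shows is that recovery terminates exactly when every label placed in step one has been removed, either by reconstruction (step three) or by a fresh overwrite (step four).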
Further, classifying and handling the written data according to the type of object being written, until the write completes, includes:
if the data overwrites an old object and that object is not currently being recovered, the old object is overwritten directly: the fragment belonging to OSD(n) is written as a fresh write, the two parity fragments are recalculated and updated, and once the object is written successfully its to-be-recovered label is removed;
if the data writes a new object, it is written normally regardless of whether the target PG is being recovered, and the write is no longer degraded;
if the object being written is currently being recovered, the write is suspended and applied after the object's recovery completes.
Further, the object tracking module selects a set number of PGs, including:
in a data recovery process, the number of PGs participating in recovery is configurable; a suitable number is set by weighing recovery speed against the impact on services, and the recovery process then starts that number of PGs recovering in parallel.
Further, degraded writing is as follows: the EC scheme computes M parity fragments from N data blocks to provide data redundancy; as long as the number of failed disks or nodes does not exceed M, data can continue to be written without loss, i.e., degraded writing.
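The degraded-write condition above reduces to a simple failure-count rule. A minimal sketch of just that rule (it models only the count check, not the actual write path):

```python
def can_write_degraded(n_data: int, m_parity: int, failed: int) -> bool:
    """For an EC(N+M) pool, writes may continue (in degraded mode)
    as long as no more than M disks or nodes have failed."""
    return 0 <= failed <= m_parity

assert can_write_degraded(4, 2, 2)       # EC4+2 tolerates two failures
assert not can_write_degraded(4, 2, 3)   # a third failure means data loss
```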
Further, the OSD state being normal means: in the storage system, an operating-system process manages a corresponding hard disk device; when both the process and its hard disk device are functioning, the OSD state is normal.
The beneficial effect of the invention is that, by tracking newly written data and excluding it from recovery, the amount of data to be recovered is reduced, recovery time is shortened, and the impact on services is lessened.
Drawings
FIG. 1 is a flow chart of the method for improving the data recovery rate of a Ceph storage system;
FIG. 2 is a schematic diagram of a normal EC write flow in a Ceph storage system;
FIG. 3 is a schematic diagram of a degraded write of object data (EC4+2);
FIG. 4 is a schematic diagram of reconstruction recovery of object data (EC4+2).
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the following description.
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
As shown in fig. 1, a method for improving a data recovery rate of a Ceph storage system includes the following steps:
step one, when the hard disk corresponding to OSD(n) in a PG fails, the object tracking module finds all PGs that contain OSD(n), enumerates the objects contained in each PG, marks them with a to-be-recovered label, and records the marked objects in the object tracking module;
step two, while the hard disk corresponding to OSD(n) is failed, all data newly written to the affected PGs is written in degraded mode; after a new disk is added or the system allocates a new OSD, newly written data is handled as in step four, and once the new OSD's state is normal, recovery proceeds as in step three;
step three, the object tracking module selects a set number of PGs and starts reconstructing the previously written object data: the missing data fragments are computed with the EC algorithm and written to the newly allocated OSD, and the to-be-recovered label is removed from each object once its data is restored;
step four, the written data is classified and handled according to the type of object being written, until the write completes;
step five, the above steps are executed for every PG until all objects marked for recovery have been reconstructed, completing data recovery.
Classifying and handling the written data according to the type of object being written, until the write completes, includes the following cases:
if the data overwrites an old object and that object is not currently being recovered, the old object is overwritten directly: the fragment belonging to OSD(n) is written as a fresh write, the two parity fragments are recalculated and updated, and once the object is written successfully its to-be-recovered label is removed;
if the data writes a new object, it is written normally regardless of whether the target PG is being recovered, and the write is no longer degraded;
if the object being written is currently being recovered, the write is suspended and applied after the object's recovery completes.
The object tracking module selects a set number of PGs, including:
in a data recovery process, the number of PGs participating in recovery is configurable; a suitable number is set by weighing recovery speed against the impact on services, and the recovery process then starts that number of PGs recovering in parallel.
Degraded writing is as follows: the EC scheme computes M parity fragments from N data blocks to provide data redundancy; as long as the number of failed disks or nodes does not exceed M, data can continue to be written without loss, i.e., degraded writing.
The OSD state being normal means: in the storage system, an operating-system process manages a corresponding hard disk device; when both the process and its hard disk device are functioning, the OSD state is normal.
Specifically, the Ceph storage system is internally an object-based storage system. After business data is written into Ceph, it is divided into fixed-size objects, and each object is assigned a system-wide unique object ID. The object ID is then hashed, taken modulo the number of PGs, and mapped to a unique PG. Finally, a pseudo-random algorithm assigns the PG a set of storage units, OSDs (object storage devices), spread over several nodes; usually one OSD corresponds to one hard disk, and the number of OSDs in a PG equals the stripe width of the erasure code. For example, with erasure coding (EC) configured as EC4+2 (4 data fragments and 2 parity fragments, a configuration that allows two hard disks to fail simultaneously), the stripe width is 6, so the PG contains 6 OSDs distributed over six nodes. After OSDs are assigned to the PG, each object is cut and encoded according to the EC configuration and written to the OSDs, i.e., the hard disks.
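The placement path above (object ID, hash modulo, PG, EC striping) can be sketched as follows. Note this is an illustration only: Ceph actually uses its own hash functions and the CRUSH algorithm for placement, so the MD5 hash and naive slicing here are simplified stand-ins.

```python
import hashlib

def object_to_pg(object_id: str, pg_count: int) -> int:
    # Hash the object ID and take it modulo the PG count to pick
    # a unique PG (stand-in for Ceph's actual hash).
    digest = hashlib.md5(object_id.encode()).digest()
    return int.from_bytes(digest[:4], "little") % pg_count

def stripe(data: bytes, k: int = 4) -> list:
    # Cut the object into k equal data fragments. For EC4+2, two
    # parity fragments would then be computed from these k fragments.
    unit = (len(data) + k - 1) // k
    return [data[i*unit:(i+1)*unit].ljust(unit, b"\0") for i in range(k)]

pg = object_to_pg("obj-12345", pg_count=128)
fragments = stripe(b"ABCDEFGH")      # as in fig. 2: AB, CD, EF, GH
print(pg, fragments)
```

Each of the four data fragments plus the two parity fragments would then be written to one of the six OSDs assigned to the PG.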
A normal EC write flow in the Ceph storage system is shown in fig. 2, where ABCDEFGH is the write data; AB, CD, EF, GH, ... are the data fragments; and OSD1 through OSD6 are the storage units allocated to the fragments.
When a hard disk fails, if the OSD corresponding to the hard disk is OSD4, the solution of the present invention includes the following steps:
step one: when the hard disk corresponding to OSD4 fails, the object tracking module intervenes: it finds all PGs that contain OSD4, enumerates the objects in each PG, marks them with a to-be-recovered label, and records the marked objects in the object tracking module;
step two: while the hard disk corresponding to OSD4 is failed, all data newly written to the affected PGs is written in degraded mode, as shown in fig. 3, where "X" indicates a lost data fragment: the fragment "MN" that should have been written to OSD4 is lost, while the other fragments are written successfully.
Step three: after a new disk is added or the system allocates a new OSD, and once the OSD state is normal, the object tracking module selects a suitable number of PGs and starts reconstructing the previously written object data: the missing data fragments are computed with the EC algorithm and written to the newly allocated OSD, and the to-be-recovered label is removed from each object once its data is restored, as shown in fig. 4, where GHIJKLMN is the written data and GH, IJ, ..., MN are the fragment data.
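Reconstruction of the lost fragment can be sketched as follows. This is a deliberately simplified model: a real EC4+2 pool uses Reed-Solomon coding with two independent parity fragments, whereas here a single XOR parity stands in to show how a lost fragment is recomputed from the surviving fragments and then written to the newly allocated OSD.

```python
def xor_parity(fragments):
    # XOR all fragments together byte by byte. With single-parity
    # XOR coding, this both computes the parity and (given the
    # survivors plus parity) reconstructs a missing fragment.
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

data = [b"GH", b"IJ", b"KL", b"MN"]   # the stripe from fig. 4
parity = xor_parity(data)

# OSD4's fragment "MN" is lost; rebuild it from the survivors + parity,
# then it would be written to the new OSD and the label removed.
survivors = [data[0], data[1], data[2], parity]
rebuilt = xor_parity(survivors)
print(rebuilt)   # b'MN'
```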
Step four: after a new disk is added or the system allocates a new OSD, data writes fall into several cases:
1, if the data overwrites an old object and that object is not being recovered, the original object is overwritten directly: the fragment belonging to OSD4 is written as a fresh write, the two parity fragments are recalculated and updated, and once the object is written successfully its to-be-recovered label is removed;
2, if the data writes a new object, it is written normally regardless of whether the target PG is being recovered, and the write is no longer degraded;
3, if the object being written is currently being recovered, the write is suspended and applied after the object's recovery completes.
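The three write cases in step four amount to a small decision function. A hedged sketch (the function and return values are illustrative labels, not Ceph internals):

```python
def classify_write(obj_id: str, exists: bool, recovering: set) -> str:
    """Decide how a write is handled once the new OSD is in place.
    `recovering` is the set of object ids still labeled for recovery."""
    if exists and obj_id not in recovering:
        # Case 1: overwrite of an old object not under recovery:
        # write all fragments (including the new OSD's) plus updated
        # parity, then drop the object's recovery label.
        return "overwrite-and-unmark"
    if not exists:
        # Case 2: brand-new object: written normally, never degraded,
        # regardless of whether its PG is still recovering.
        return "normal-write"
    # Case 3: the object is being rebuilt right now: hold the write
    # until its recovery completes, then apply it.
    return "defer-until-recovered"

recovering = {"obj-a"}
print(classify_write("obj-b", True, recovering))   # overwrite-and-unmark
print(classify_write("obj-x", False, recovering))  # normal-write
print(classify_write("obj-a", True, recovering))   # defer-until-recovered
```

Cases 1 and 2 are what shrink the recovery backlog: each such write either removes a label outright or avoids creating degraded data in the first place.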
Step five: the above steps are executed for each PG until all objects marked for recovery have been reconstructed.
Erasure Coding: EC(N+M) computes M parity fragments (M is typically 2, 3, or 4) from N data blocks (N is even) to provide data redundancy.
Degraded writing: degraded writing is a special state of EC-mode storage. For EC(N+M), as long as the number of failed disks or nodes does not exceed M, data can continue to be written without loss; this write state under failure is commonly called degraded writing.
OSD state is normal: OSD is an abbreviation of object storage device. In a storage system it is embodied as an operating-system process, which typically manages one corresponding hard disk device; when both the process and its hard disk device are functioning, the OSD state is normal.
In a data recovery process, the number of PGs participating in recovery is generally configurable. A user can weigh recovery speed against the impact on services to configure a suitable number of PGs, and the recovery process then starts that number of PGs recovering in parallel according to the configuration.
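The configurable concurrency can be sketched with a bounded worker pool: the operator's setting caps how many PGs recover at once, trading recovery speed against client I/O impact. The thread pool below is a stand-in for Ceph's internal recovery scheduling (real Ceph exposes related knobs such as `osd_recovery_max_active`, which this sketch does not use).

```python
from concurrent.futures import ThreadPoolExecutor

def recover_pg(pg_id: int) -> int:
    # Placeholder for "rebuild every labeled object in this PG".
    return pg_id

def run_recovery(pg_ids, max_parallel_pgs: int):
    # At most `max_parallel_pgs` PGs recover concurrently; the rest
    # queue until a slot frees up.
    with ThreadPoolExecutor(max_workers=max_parallel_pgs) as pool:
        return sorted(pool.map(recover_pg, pg_ids))

print(run_recovery(range(8), max_parallel_pgs=2))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

A small `max_parallel_pgs` limits the recovery load on client I/O; a large one finishes recovery sooner at the cost of service latency.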
The foregoing is merely a preferred embodiment of the invention. The invention is not limited to the form disclosed herein, and the disclosure is not to be construed as excluding other embodiments; numerous other combinations, modifications, and environments are possible, and changes can be made within the scope of the inventive concept described herein, whether guided by the above teachings or by the skill or knowledge of the relevant art. All modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.
Claims (5)
1. A method for improving the data recovery rate of a Ceph storage system, comprising the following steps:
step one, when the hard disk corresponding to OSD(n) in a PG fails, the object tracking module finds all PGs that contain OSD(n), enumerates the objects contained in each PG, marks them with a to-be-recovered label, and records the marked objects in the object tracking module;
step two, while the hard disk corresponding to OSD(n) is failed, all data newly written to the affected PGs is written in degraded mode; after a new disk is added or the system allocates a new OSD, newly written data is handled as in step four, and once the new OSD's state is normal, recovery proceeds as in step three;
step three, the object tracking module selects a set number of PGs and starts reconstructing the previously written object data: the missing data fragments are computed with the EC algorithm and written to the newly allocated OSD, and the to-be-recovered label is removed from each object once its data is restored;
step four, the written data is classified and handled according to the type of object being written, until the write completes;
step five, the above steps are executed for every PG until all objects marked for recovery have been reconstructed, completing data recovery.
2. The method for improving the data recovery rate of a Ceph storage system according to claim 1, wherein classifying and handling the written data according to the type of object being written, until the write completes, comprises:
if the data overwrites an old object and that object is not currently being recovered, the old object is overwritten directly: the fragment belonging to OSD(n) is written as a fresh write, the two parity fragments are recalculated and updated, and once the object is written successfully its to-be-recovered label is removed;
if the data writes a new object, it is written normally regardless of whether the target PG is being recovered, and the write is no longer degraded;
if the object being written is currently being recovered, the write is suspended and applied after the object's recovery completes.
3. The method for improving the data recovery rate of a Ceph storage system according to claim 1, wherein the object tracking module selecting a set number of PGs comprises:
in a data recovery process, the number of PGs participating in recovery is configurable; a suitable number is set by weighing recovery speed against the impact on services, and the recovery process then starts that number of PGs recovering in parallel.
4. The method for improving the data recovery rate of a Ceph storage system according to claim 1, wherein degraded writing is: the EC scheme computes M parity fragments from N data blocks to provide data redundancy; as long as the number of failed disks or nodes does not exceed M, data can continue to be written without loss, i.e., degraded writing.
5. The method for improving the data recovery rate of a Ceph storage system according to claim 1, wherein the OSD state being normal is: in the storage system, an operating-system process manages a corresponding hard disk device; when both the process and its hard disk device are functioning, the OSD state is normal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310602070.3A CN116302673B (en) | 2023-05-26 | 2023-05-26 | Method for improving data recovery rate of Ceph storage system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310602070.3A CN116302673B (en) | 2023-05-26 | 2023-05-26 | Method for improving data recovery rate of Ceph storage system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116302673A true CN116302673A (en) | 2023-06-23 |
CN116302673B CN116302673B (en) | 2023-08-22 |
Family
ID=86820813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310602070.3A Active CN116302673B (en) | 2023-05-26 | 2023-05-26 | Method for improving data recovery rate of Ceph storage system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116302673B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117608500A (en) * | 2024-01-23 | 2024-02-27 | 四川省华存智谷科技有限责任公司 | Method for rescuing effective data of storage system when data redundancy is insufficient |
CN117851132A (en) * | 2024-03-07 | 2024-04-09 | 四川省华存智谷科技有限责任公司 | Data recovery optimization method for distributed object storage |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9021296B1 (en) * | 2013-10-18 | 2015-04-28 | Hitachi Data Systems Engineering UK Limited | Independent data integrity and redundancy recovery in a storage system |
US20170075761A1 (en) * | 2014-12-09 | 2017-03-16 | Hitachi Data Systems Corporation | A system and method for providing thin-provisioned block storage with multiple data protection classes |
US20180039444A1 (en) * | 2015-04-09 | 2018-02-08 | Hitachi, Ltd. | Storage system and data control method |
CN109710456A (en) * | 2018-12-10 | 2019-05-03 | 新华三技术有限公司 | A kind of data reconstruction method and device |
US20200050373A1 (en) * | 2018-08-09 | 2020-02-13 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Method for fast recovering of data on a failed storage device |
CN111309524A (en) * | 2020-02-14 | 2020-06-19 | 苏州浪潮智能科技有限公司 | Distributed storage system fault recovery method, device, terminal and storage medium |
US20210373796A1 (en) * | 2020-05-31 | 2021-12-02 | EMC IP Holding Company LLC | Balancing resiliency and performance by selective use of degraded writes and spare capacity in storage systems |
CN114077517A (en) * | 2020-08-13 | 2022-02-22 | 华为技术有限公司 | Data processing method, equipment and system |
- 2023-05-26: application CN202310602070.3A granted as patent CN116302673B (active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9021296B1 (en) * | 2013-10-18 | 2015-04-28 | Hitachi Data Systems Engineering UK Limited | Independent data integrity and redundancy recovery in a storage system |
US20170075761A1 (en) * | 2014-12-09 | 2017-03-16 | Hitachi Data Systems Corporation | A system and method for providing thin-provisioned block storage with multiple data protection classes |
US20180039444A1 (en) * | 2015-04-09 | 2018-02-08 | Hitachi, Ltd. | Storage system and data control method |
US20200050373A1 (en) * | 2018-08-09 | 2020-02-13 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Method for fast recovering of data on a failed storage device |
CN109710456A (en) * | 2018-12-10 | 2019-05-03 | 新华三技术有限公司 | A kind of data reconstruction method and device |
CN111309524A (en) * | 2020-02-14 | 2020-06-19 | 苏州浪潮智能科技有限公司 | Distributed storage system fault recovery method, device, terminal and storage medium |
US20210373796A1 (en) * | 2020-05-31 | 2021-12-02 | EMC IP Holding Company LLC | Balancing resiliency and performance by selective use of degraded writes and spare capacity in storage systems |
CN114077517A (en) * | 2020-08-13 | 2022-02-22 | 华为技术有限公司 | Data processing method, equipment and system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117608500A (en) * | 2024-01-23 | 2024-02-27 | 四川省华存智谷科技有限责任公司 | Method for rescuing effective data of storage system when data redundancy is insufficient |
CN117608500B (en) * | 2024-01-23 | 2024-03-29 | 四川省华存智谷科技有限责任公司 | Method for rescuing effective data of storage system when data redundancy is insufficient |
CN117851132A (en) * | 2024-03-07 | 2024-04-09 | 四川省华存智谷科技有限责任公司 | Data recovery optimization method for distributed object storage |
CN117851132B (en) * | 2024-03-07 | 2024-05-07 | 四川省华存智谷科技有限责任公司 | Data recovery optimization method for distributed object storage |
Also Published As
Publication number | Publication date |
---|---|
CN116302673B (en) | 2023-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116302673B (en) | Method for improving data recovery rate of Ceph storage system | |
US10795789B2 (en) | Efficient recovery of erasure coded data | |
EP2672387B1 (en) | A distributed object storage system | |
CN109213618B (en) | Method, apparatus and computer program product for managing a storage system | |
US8205139B1 (en) | Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network | |
CN106776130B (en) | Log recovery method, storage device and storage node | |
US7681104B1 (en) | Method for erasure coding data across a plurality of data stores in a network | |
US7231493B2 (en) | System and method for updating firmware of a storage drive in a storage network | |
US20190129815A1 (en) | Drive extent based end of life detection and proactive copying in a mapped raid (redundant array of independent disks) data storage system | |
US9170888B2 (en) | Methods and apparatus for virtual machine recovery | |
US7694171B2 (en) | Raid5 error recovery logic | |
CN110389858B (en) | Method and device for recovering faults of storage device | |
US9529674B2 (en) | Storage device management of unrecoverable logical block addresses for RAID data regeneration | |
US7308532B1 (en) | Method for dynamically implementing N+K redundancy in a storage subsystem | |
CN107357689B (en) | Fault processing method of storage node and distributed storage system | |
GB2414592A (en) | Decreasing failed disk reconstruction time in a RAID data storage system | |
CN111124264B (en) | Method, apparatus and computer program product for reconstructing data | |
EP3262500A1 (en) | Data stripping, allocation and reconstruction | |
US20170083244A1 (en) | Mitigating the impact of a single point of failure in an object store | |
CN112148204A (en) | Method, apparatus and computer program product for managing independent redundant disk arrays | |
CN110874194A (en) | Persistent storage device management | |
WO2018166526A1 (en) | Data storage, distribution, reconstruction and recovery methods and devices, and data processing system | |
CN109726600B (en) | System and method for providing data protection for super fusion infrastructure | |
CN107885615B (en) | Distributed storage data recovery method and system | |
CN112612412B (en) | Method for reconstructing data in distributed storage system and storage node equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||