CN112965859A - Data disaster recovery method and equipment based on IPFS cluster - Google Patents

Data disaster recovery method and equipment based on IPFS cluster

Info

Publication number
CN112965859A
CN112965859A
Authority
CN
China
Prior art keywords
disaster recovery
backup
data
node
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110256708.3A
Other languages
Chinese (zh)
Inventor
李峰
石涛声
李昕
李涛
郭本信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kunyao Network Technology Co ltd
Original Assignee
Shanghai Kunyao Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Kunyao Network Technology Co ltd filed Critical Shanghai Kunyao Network Technology Co ltd
Priority to CN202110256708.3A priority Critical patent/CN112965859A/en
Publication of CN112965859A publication Critical patent/CN112965859A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2246Trees, e.g. B+trees
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2255Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application aims to provide a data disaster recovery scheme based on an IPFS cluster. In the scheme, a plurality of IPFS nodes are started as disaster recovery backup nodes, and the plurality of disaster recovery backup nodes are organized into a distributed disaster recovery backup cluster; a backup file is then generated based on an original file and synchronized to the disaster recovery backup nodes, a corresponding CID is generated for each backup file through IPFS, and the CID of the backup file is added to a pinset database through a pin operation; the other disaster recovery nodes in the distributed disaster recovery cluster then synchronize the updated data of the pinset database to their persistent storage through a consensus algorithm. The scheme realizes data disaster recovery efficiently and flexibly, and solves a series of problems in prior art schemes, such as heavy dependence on same-city or remote disaster recovery centers, high bandwidth and storage capacity requirements, and poor resilience to single-point failures and DDoS attacks.

Description

Data disaster recovery method and equipment based on IPFS cluster
Technical Field
The application relates to the technical field of information, in particular to a data disaster recovery technology based on an IPFS cluster.
Background
Data disaster recovery, also called data disaster backup, refers to the process of copying all or part of a system's data sets from the hard disk or array of an application host to other storage media, in order to prevent data loss caused by operational mistakes or system failures. The current practice is to remotely copy local backup data to a same-city or remote disaster recovery center; when the local backup data is lost or fails, the data is recovered from the disaster recovery center.
Whether for same-city or remote disaster recovery, the essential problem is how to handle node failures in a distributed storage system. To solve this problem, existing technical schemes mainly comprise multi-copy (replication) schemes and erasure code schemes. However, the prior art schemes have the following disadvantages: (1) in multi-copy schemes, the replication granularity is too coarse; (2) in erasure code schemes, the overhead of copy, read, write, and update operations is too high; (3) in large-scale storage systems, fault detection time and recovery time are too long; (4) disaster recovery is not intelligent and flexible enough.
Disclosure of Invention
An object of the present application is to provide a data disaster recovery method and device based on an IPFS cluster.
According to an aspect of the present application, there is provided an IPFS cluster-based data disaster recovery method, wherein the method includes:
starting a plurality of IPFS nodes as disaster recovery backup nodes, and organizing the plurality of disaster recovery backup nodes into a distributed disaster recovery backup cluster;
generating backup files based on original files, synchronizing the backup files to the disaster recovery backup node, generating a corresponding CID for each backup file through IPFS, and adding the CID of the backup files into a pinset database through pin operation;
and other disaster recovery nodes in the distributed disaster recovery cluster synchronize the updated data of the pinset database to the persistent storage of the node through a consensus algorithm.
According to another aspect of the present application, there is also provided a computing device, wherein the device comprises a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the IPFS cluster-based data disaster recovery method.
According to yet another aspect of the present application, there is also provided a computer readable medium having stored thereon computer program instructions executable by a processor to implement the IPFS cluster-based data disaster recovery method.
In the scheme provided by the application, a plurality of IPFS nodes are started as disaster recovery backup nodes, and the plurality of disaster recovery backup nodes are organized into a distributed disaster recovery cluster; a backup file is then generated based on an original file and synchronized to the disaster recovery backup nodes, a corresponding CID is generated for each backup file through IPFS, and the CID of the backup file is added to a pinset database through a pin operation; the other disaster recovery nodes in the distributed disaster recovery cluster then synchronize the updated data of the pinset database to their persistent storage through a consensus algorithm. The scheme realizes data disaster recovery efficiently and flexibly, and solves a series of problems in prior art schemes, such as heavy dependence on same-city or remote disaster recovery centers, high bandwidth and storage capacity requirements, and poor resilience to single-point failures and DDoS attacks.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flowchart of a data disaster recovery method based on IPFS cluster according to an embodiment of the present application;
FIGS. 2(a) - (c) are schematic diagrams of a chained data storage method according to an embodiment of the present application;
fig. 3(a) - (c) are schematic diagrams of implementing data distribution organization by using a distributed hash table according to an embodiment of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, which include both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The embodiment of the application provides a data disaster recovery method based on an IPFS (InterPlanetary File System) Cluster, which can efficiently and flexibly realize data disaster recovery and remotely copy local backup data to a same-city or remote disaster recovery center. The embodiment of the application solves a series of problems in prior art schemes, such as heavy dependence on same-city or remote disaster recovery centers, high bandwidth and storage capacity requirements, and poor resilience to single-point failures and DDoS attacks.
IPFS Cluster is a distributed application that can serve as an auxiliary management tool for IPFS peer nodes: it maintains a cluster-wide global pinset and intelligently allocates its items to the IPFS peer nodes. IPLD is a component of IPFS whose core is the Merkle DAG, a structure based on a directed acyclic graph (DAG). A Merkle DAG is composed of nodes and links: the nodes store data and the subordinate link relations of that data, and the links store hash values of the data.
In a practical scenario, the device performing the method may be a user equipment, a network device, or a device formed by integrating the user equipment and the network device through a network. The user equipment includes, but is not limited to, a terminal device such as a smartphone, a tablet computer, a Personal Computer (PC), and the like, and the network device includes, but is not limited to, a network host, a single network server, multiple network server sets, or a cloud computing-based computer set. Here, the Cloud is made up of a large number of hosts or web servers based on Cloud Computing (Cloud Computing), which is a type of distributed Computing, one virtual computer consisting of a collection of loosely coupled computers.
Fig. 1 is a flowchart of a data disaster recovery method based on an IPFS cluster according to an embodiment of the present application, where the method includes step S101, step S102, and step S103.
Step S101, starting a plurality of IPFS nodes as disaster recovery backup nodes, and organizing the plurality of disaster recovery backup nodes into a distributed disaster recovery backup cluster.
The plurality of disaster recovery backup nodes comprise a local disaster recovery backup node and a plurality of disaster recovery backup nodes located in the same city, in remote locations, or in the cloud. Here, the plurality of disaster recovery backup nodes may be organized into a distributed disaster recovery cluster using IPFS Cluster; the cluster has no centralized server, and all disaster recovery backup nodes are peers. Any disaster recovery backup node can initiate a pin operation to add or delete a designated backup file.
In some embodiments, as shown in fig. 2(a) - (c), the method further comprises: and storing all files in a chained data storage mode.
For example, chained data may be "chained" by way of a directory. A large file can be divided into small files; during the division, if a file is still too large, an intermediate layer is introduced, namely a multi-level directory. Here, the chained data storage mode is the common underlying mode: all data (including original data and backup data) is stored in the chained data storage mode.
Based on the chained data storage mode, each file is divided into fragments for storage; when the merkle tree is constructed, the leaf nodes are the fragments, so the maximum file size is determined by the number of leaf nodes and the fragment size. Oversized files are divided further, and files that need to be divided are called large files. For example, a fragment size of 128M, a leaf node count of 128, and a merkle tree of 8 levels may be defined, which determines a file threshold size of 128 × 128M. Files exceeding 128 × 128M are stored in a multi-level directory as shown in fig. 2(c). The levels of the merkle tree are computed as 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1, 8 layers in total.
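The threshold arithmetic above can be checked with a short sketch; the 128M fragment size and 128 leaf nodes are the example values from this paragraph, not fixed parameters of the method:

```python
import math

FRAGMENT_SIZE = 128 * 2**20   # 128M fragment size (example value from the text)
LEAF_COUNT = 128              # leaf nodes per merkle tree (example value)

# Any file larger than this threshold must be split across a multi-level directory.
threshold = LEAF_COUNT * FRAGMENT_SIZE        # 128 x 128M

# Level count for a binary merkle tree: 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1
levels = int(math.log2(LEAF_COUNT)) + 1

print(levels)                                 # 8 layers in total
print(threshold == 128 * 128 * 2**20)         # True
```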
In some embodiments, the chained data storage manner includes: the file is divided into a plurality of fragments, a corresponding hash value is generated for each fragment, the hash values of the corresponding fragments are stored through a plurality of IPLD objects, and the hash values of the IPLD objects are stored in a root ID.
For example, fig. 2(a) shows chained data storage for a single file: the file is first divided into fragments, a hash value is generated for each fragment, an IPLD object then stores the fragment hash values, and the hash value of the IPLD object itself (the fragment object ID) is stored in a root ID (an IPLD object). Fig. 2(b) shows chained data storage in which a large file is divided into a plurality of small files; the storage principle of each small file is similar to fig. 2(a). Fig. 2(c) shows chained data storage for a multi-level directory, which is similar to the multi-file case of fig. 2(b), because a directory can also be regarded as a file (an IPLD object).
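The single-file case of fig. 2(a) can be sketched as follows. This is a minimal illustration, not the exact IPFS encoding: SHA-256 stands in for the IPFS hash, a JSON document stands in for an IPLD object, and the fragment size is shrunk for demonstration:

```python
import hashlib
import json

FRAGMENT_SIZE = 4  # tiny fragment size, for illustration only

def fragment_hashes(data: bytes, size: int = FRAGMENT_SIZE):
    """Divide the file into fragments and generate a hash value for each."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def root_id(data: bytes) -> str:
    """Store the fragment hashes in an IPLD-like object, then hash that
    object itself to obtain the root ID."""
    obj = json.dumps({"links": fragment_hashes(data)}, sort_keys=True)
    return hashlib.sha256(obj.encode()).hexdigest()

# Content addressing: identical content always yields the identical root ID.
print(root_id(b"same bytes") == root_id(b"same bytes"))   # True
print(root_id(b"same bytes") == root_id(b"other bytes"))  # False
```

The same idea extends to fig. 2(b) and 2(c): a directory is just another object whose links are the root IDs of its children.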
Step S102, generating backup files based on original files, synchronizing the backup files to the disaster recovery node, generating corresponding CIDs for each backup file through IPFS, and adding the CIDs of the backup files into a pinset database through pin operation.
For example, any disaster recovery node in the distributed disaster recovery cluster may initiate a pin operation to add or delete a designated backup file. The target of a pin operation is a CID (Content Identifier); different backup files are given corresponding CIDs by IPFS. If the contents of two backup files are the same, their CIDs are also the same: this is the content-addressed storage mode of IPFS. Based on content addressing, data is exchanged in segmented data blocks, which effectively reduces the bandwidth and storage capacity required.
In some embodiments, the pinset database includes metadata information and disaster tolerance factor information.
For example, the metadata information includes: IPLD object name, hash ID, object size, creation time, modification time, permission information, ownership information, storage server information, disk information, IP address information, encryption information, and the like. The disaster tolerance factor information includes: the redundant copies of each IPLD object and of the data object itself (e.g., 3 copies or 5 copies), as well as the storage location, lifecycle, etc. of each copy. Here, different file objects have different disaster tolerance factors, and the disaster tolerance factor of a directory object is greater than that of a file object. A pinset is formed by placing the copies on machines of the distributed ring hash table as a disaster recovery set; the redundancy information of each copy is called a pin.
As shown in figs. 3(a) - (c), since the IPFS cluster serves as the underlying infrastructure, both file objects and directory objects are stored on the nodes of the IPFS cluster by pinning, and the nodes are organized in the form of a distributed hash table. The mapping relation between data objects and nodes is stored on the metadata server. As shown in fig. 3(a), an IPLD object is mapped onto a virtual distributed ring hash table; note that a directory IPLD object, a file IPLD object, and the file data itself are all the same file abstraction, and all need to be mapped onto the distributed ring hash table. The metadata information and replication factors of these objects are stored in a disaster recovery database (e.g., the pinset database).
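The placement in fig. 3(a) can be sketched as a consistent-hash ring. The node names, the replica count of 3, the example CID, and the use of SHA-256 as the ring hash are all assumptions for illustration, not details fixed by the scheme:

```python
import bisect
import hashlib

def _h(s: str) -> int:
    """Hash a string to a position on the ring."""
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)

class HashRing:
    """Maps objects (by CID) onto nodes arranged on a distributed ring."""
    def __init__(self, nodes):
        self._ring = sorted((_h(n), n) for n in nodes)
        self._keys = [k for k, _ in self._ring]

    def nodes_for(self, cid: str, replicas: int = 3):
        """Walk clockwise from the object's ring position and pick the
        next `replicas` nodes to hold its copies."""
        start = bisect.bisect(self._keys, _h(cid))
        return [self._ring[(start + j) % len(self._ring)][1]
                for j in range(replicas)]

ring = HashRing(["node-a", "node-b", "node-c", "node-d", "node-e"])
copies = ring.nodes_for("QmExampleCid")  # hypothetical CID
print(len(copies))  # 3
```

The mapping is deterministic: every node that knows the ring membership computes the same placement for the same CID, which is what lets the metadata server record a stable object-to-node mapping.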
And step S103, synchronizing the updated data of the pinset database to the persistent storage of the node by other disaster recovery nodes in the distributed disaster recovery cluster through a consensus algorithm.
For example, after the data backup is generated, the backup is synchronized to the local disaster recovery node. After receiving the backup data, the local disaster recovery backup node generates a CID of the backup data, and then adds the CID of the backup data to a global pinset (such as the pinset database) through a pin operation of the IPFS Cluster, and other disaster recovery backup nodes in the distributed disaster recovery Cluster synchronize the data newly added to the global pinset to persistent storage of the local node through a consensus algorithm.
Here, after the data backup stage is completed, the backup function cannot yet be enabled, because the global pinset does not yet contain the backup (copy) information. The copy information needs to be reported through a pin operation so that its registration is completed; only then can the copy be discovered and used by the disaster recovery mechanism. Under the consensus algorithm, a plurality of nodes may report the copy information, and nodes may also fail while reporting it.
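The synchronization in step S103 can be sketched as follows. A plain function stands in for the consensus round (IPFS Cluster itself ships Raft- and CRDT-based consensus components; this sketch shows only the effect of a committed pin, and the CID and node names are illustrative):

```python
class DisasterRecoveryNode:
    """A disaster recovery backup node with its own persistent storage."""
    def __init__(self, name: str):
        self.name = name
        self.persistent_store = {}  # stands in for the node's persistent storage

    def apply(self, cid: str, meta: dict):
        # Every node applies the same committed pinset entry in the same order.
        self.persistent_store[cid] = meta

def commit_pin(nodes, cid: str, meta: dict):
    """Stand-in for the consensus round: once the pin operation is agreed
    upon, every disaster recovery node syncs it to its persistent storage."""
    for node in nodes:
        node.apply(cid, meta)

cluster = [DisasterRecoveryNode(f"node-{i}") for i in range(3)]
commit_pin(cluster, "QmBackupCid", {"copies": 3})  # hypothetical CID
print(all("QmBackupCid" in n.persistent_store for n in cluster))  # True
```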
In some embodiments, the method further comprises: when localized fault detection is performed, the disaster recovery nodes in the distributed disaster recovery cluster carry out fault detection based on the distributed hash table, with each disaster recovery node probing its successor node and its predecessor nodes. This detection mechanism realizes localized fault detection, which improves detection sensitivity and greatly reduces the fault detection overhead of the cluster.
Here, according to the self-healing (disaster recovery) mechanism, after a failure event is detected, the recovery mechanism of the local node for the data objects stored on it is triggered; the recovery follows the data distribution and placement principle, and the new storage information is updated to the metadata database.
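The localized probing above can be sketched with a ring ordering of nodes. The choice of exactly one successor and two predecessors per node is an assumption for illustration; the point is that each node checks only a constant number of neighbours rather than the whole cluster:

```python
def probe_targets(nodes, i):
    """Neighbours that node `i` is responsible for probing on the ring:
    its successor and its predecessors. Localized: O(1) probes per node,
    regardless of cluster size."""
    n = len(nodes)
    return [nodes[(i + 1) % n],   # successor
            nodes[(i - 1) % n],   # predecessor
            nodes[(i - 2) % n]]   # predecessor's predecessor (assumed)

ring = ["n0", "n1", "n2", "n3", "n4"]
print(probe_targets(ring, 0))  # ['n1', 'n4', 'n3']
```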
In some embodiments, the method further comprises: and when the backup data needs to be restored, searching and downloading the corresponding backup file from the ping database.
For example, data may be recovered from the local disaster recovery node when data is corrupted or lost due to an operational mistake or system failure. Specifically, on the local disaster recovery node, a suitable backup (copy) can be looked up in the global pinset (such as the pinset database) and downloaded, through the command line tool and API interface provided by IPFS Cluster, to recover the damaged or lost data.
In the embodiment of the present application, by implementing the mapping storage between nodes and data objects, and then through the fault detection and disaster recovery mechanisms, the state and location of any data can be found from the pinset database (including the metadata information and the disaster tolerance factor information).
In some embodiments, the corresponding backup file may be queried and downloaded using a command line tool and an API interface. Specifically, a target backup file (for example, a copy whose state meets the requirement) and the server IP address corresponding to the target backup file may first be looked up in the pinset database; a file download is then requested by directly accessing that server IP address, and after the server's response is obtained, the target backup file is downloaded.
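The lookup step can be sketched over an in-memory pinset; the CID, the server IP addresses, and the state names ("pinned", "pin_error") are all hypothetical values chosen for illustration:

```python
pinset_db = {
    "QmTargetCid": {  # hypothetical CID of the target backup file
        "copies": [
            {"server_ip": "10.0.0.2", "state": "pin_error"},
            {"server_ip": "10.0.0.7", "state": "pinned"},
        ],
    },
}

def locate_backup(db: dict, cid: str):
    """Return the server IP of the first copy whose state meets the
    requirement; the caller then requests the download from that IP."""
    for copy in db.get(cid, {}).get("copies", []):
        if copy["state"] == "pinned":
            return copy["server_ip"]
    return None  # no healthy copy registered for this CID

print(locate_backup(pinset_db, "QmTargetCid"))  # 10.0.0.7
```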
In some embodiments, the method further comprises: and if all backup data are lost by the local disaster recovery backup node, newly building a disaster recovery backup node, accessing the newly built disaster recovery backup node into the distributed disaster recovery cluster, and acquiring data in the ping database from other disaster recovery backup nodes in the distributed disaster recovery cluster.
For example, if the local disaster recovery node suffers hard disk damage and loses all backup data, a new disaster recovery node can be created and joined to the distributed disaster recovery cluster based on IPFS Cluster, so as to quickly acquire the data in the global pinset (such as the pinset database) from the other nodes and thereby quickly restore the local disaster recovery node. If data needs to be restored before the local disaster recovery node has been rebuilt, the corresponding backup data can be downloaded from the other nodes in the distributed disaster recovery cluster.
In the embodiment of the application, data disaster recovery is realized by implementing the mapping storage between nodes and data objects, and then through the fault detection and disaster recovery mechanisms. When a new node joins the distributed disaster recovery cluster, the mapping rule, fault detection, and disaster recovery mechanism are triggered, the node data that needs to be recovered is automatically acquired, and the data recovery process is triggered. Here, the data placement rules and data redundancy rules automatically implement redistribution of the data.
In summary, the embodiment of the present application provides a data disaster recovery method based on IPFS Cluster, which efficiently and flexibly implements data disaster recovery through a distributed disaster recovery cluster based on IPFS Cluster, and remotely copies local backup data to a same-city or remote disaster recovery center. The embodiment of the application solves a series of problems in prior art schemes, such as heavy dependence on same-city or remote disaster recovery centers, high bandwidth and storage capacity requirements, and poor resilience to single-point failures and DDoS attacks.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. Herein, some embodiments of the present application provide a computing device comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the methods and/or aspects of the embodiments of the present application as described above.
Furthermore, some embodiments of the present application also provide a computer readable medium, on which computer program instructions are stored, the computer readable instructions being executable by a processor to implement the methods and/or aspects of the foregoing embodiments of the present application.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In some embodiments, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A data disaster recovery method based on IPFS cluster, wherein the method comprises the following steps:
starting a plurality of IPFS nodes as disaster recovery backup nodes, and organizing the plurality of disaster recovery backup nodes into a distributed disaster recovery backup cluster;
generating backup files based on original files, synchronizing the backup files to the disaster recovery backup node, generating a corresponding CID for each backup file through IPFS, and adding the CID of the backup files into a pinset database through pin operation;
and other disaster recovery nodes in the distributed disaster recovery cluster synchronize the updated data of the pinset database to the persistent storage of the node through a consensus algorithm.
2. The method of claim 1, wherein the method further comprises:
and when the backup data needs to be restored, searching and downloading the corresponding backup file from the ping database.
3. The method of claim 2, wherein looking up and downloading a corresponding backup file from the ping database comprises:
searching a target backup file and a server IP address corresponding to the target backup file in the pinset database;
and downloading the target backup file by accessing the IP address of the server.
4. The method of claim 1, wherein the method further comprises:
and if all backup data are lost by the local disaster recovery backup node, newly building a disaster recovery backup node, accessing the newly built disaster recovery backup node into the distributed disaster recovery cluster, and acquiring data in the ping database from other disaster recovery backup nodes in the distributed disaster recovery cluster.
5. The method of claim 1, wherein the method further comprises:
and storing all files in a chained data storage mode.
6. The method of claim 5, wherein the chained data storage manner comprises:
the file is divided into a plurality of fragments, a corresponding hash value is generated for each fragment, the hash values of the corresponding fragments are stored through a plurality of IPLD objects, and the hash values of the IPLD objects are stored in a root ID.
7. The method of claim 1, wherein the method further comprises:
when localized fault detection is performed, the disaster recovery nodes in the distributed disaster recovery cluster carry out fault detection based on the distributed hash table, with each disaster recovery node probing its successor node and its predecessor nodes.
8. The method according to any one of claims 1 to 7, wherein the pinset database includes metadata information and disaster tolerance factor information.
9. A computing device, wherein the device comprises a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the method of any of claims 1 to 8.
10. A computer readable medium having stored thereon computer program instructions executable by a processor to implement the method of any one of claims 1 to 8.
CN202110256708.3A 2021-03-09 2021-03-09 Data disaster recovery method and equipment based on IPFS cluster Pending CN112965859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110256708.3A CN112965859A (en) 2021-03-09 2021-03-09 Data disaster recovery method and equipment based on IPFS cluster


Publications (1)

Publication Number Publication Date
CN112965859A true CN112965859A (en) 2021-06-15

Family

ID=76276984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110256708.3A Pending CN112965859A (en) 2021-03-09 2021-03-09 Data disaster recovery method and equipment based on IPFS cluster

Country Status (1)

Country Link
CN (1) CN112965859A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753825A (en) * 2019-01-09 2019-05-14 篱笆墙网络科技有限公司 The storage of backup file, backup document down loading method and system
CN111414431A (en) * 2020-04-28 2020-07-14 武汉烽火技术服务有限公司 Network operation and maintenance data disaster recovery backup management method and system based on block chain technology
CN111552676A (en) * 2020-04-26 2020-08-18 北京众享比特科技有限公司 Block chain based evidence storing method, device, equipment and medium
CN111782722A (en) * 2020-06-02 2020-10-16 北京海泰方圆科技股份有限公司 Data management method and device, electronic equipment and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992499A (en) * 2021-11-16 2022-01-28 中国电信集团系统集成有限责任公司 Disaster recovery method, storage medium and system based on dynamic migration of services
CN113992499B (en) * 2021-11-16 2023-08-15 中电信数智科技有限公司 Disaster recovery method, storage medium and system based on service dynamic migration
CN114253933A (en) * 2021-12-22 2022-03-29 上海玄翎科技有限公司 Method and device for segmenting and restoring data set

Similar Documents

Publication Publication Date Title
US11892912B2 (en) Incremental file system backup using a pseudo-virtual disk
CN110169040B (en) Distributed data storage method and system based on multilayer consistent hash
US11755415B2 (en) Variable data replication for storage implementing data backup
JP5671615B2 (en) Map Reduce Instant Distributed File System
JP5254611B2 (en) Metadata management for fixed content distributed data storage
JP5918243B2 (en) System and method for managing integrity in a distributed database
US7139808B2 (en) Method and apparatus for bandwidth-efficient and storage-efficient backups
US11687265B2 (en) Transferring snapshot copy to object store with deduplication preservation and additional compression
US8572136B2 (en) Method and system for synchronizing a virtual file system at a computing device with a storage device
CN110096891B (en) Object signatures in object libraries
US8452731B2 (en) Remote backup and restore
JP2013544386A5 (en)
JP2013545162A (en) System and method for integrating query results in a fault tolerant database management system
JP2013545162A5 (en)
JP2009527824A (en) Mean data loss time improvement method for fixed content distributed data storage
CN112965859A (en) Data disaster recovery method and equipment based on IPFS cluster
CN113795827A (en) Garbage collection for deduplication cloud layering
US20130318086A1 (en) Distributed file hierarchy management in a clustered redirect-on-write file system
WO2009031158A2 (en) Method and apparatus for network based data recovery
US10620883B1 (en) Multi-format migration for network attached storage devices and virtual machines
Zhao et al. H2cloud: maintaining the whole filesystem in an object storage cloud
US11531644B2 (en) Fractional consistent global snapshots of a distributed namespace
Junping Analysis of key technologies of distributed file system based on big data [J]

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination