CN108710550B - Double-data-center disaster tolerance system for public security traffic management inspection and control system - Google Patents


Info

Publication number
CN108710550B
CN108710550B (application CN201810933749.XA)
Authority
CN
China
Prior art keywords
data
center
cluster
service
storage
Prior art date
Legal status
Active
Application number
CN201810933749.XA
Other languages
Chinese (zh)
Other versions
CN108710550A (en)
Inventor
张金锋
李君�
杨霄
苏雪民
张奕
赵新勇
王锐锋
孙建宏
Current Assignee
Beijing E Hualu Information Technology Co Ltd
Original Assignee
Beijing E Hualu Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing E Hualu Information Technology Co Ltd
Priority to CN201810933749.XA
Publication of CN108710550A
Application granted
Publication of CN108710550B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1464Management of the backup or restore process for networked environments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The invention discloses a double-data-center disaster recovery system for a public security traffic management inspection and control system, comprising a main center, a data access service cluster, a FastDFS cluster, a Kafka cluster, an inspection and control service system and a standby center. The beneficial effects of the invention are as follows: the Storage nodes of the FastDFS cluster store the actual data; several Storage nodes can form a group, within which redundant backup is performed, and every Storage node registers its own information with the Tracker nodes. The service data of the inspection and control service system is stored in a Redis in-memory database and an Oracle relational database; the Oracle database supports replication and backup across data centers, providing more reliable data security and data recovery capability. The standby center builds a peer FastDFS cluster; because FastDFS storage does not depend on a file index, the data of the main center's FastDFS cluster can be replicated to the peer data center in real time through storage-level replication and accessed there, improving fault tolerance and allowing data to be stored more reliably.

Description

Double-data-center disaster tolerance system for public security traffic management inspection and control system
Technical Field
The invention relates to a disaster recovery system for inspection and control, in particular to a double-data-center disaster recovery system for a public security traffic management inspection and control system, and belongs to the field of inspection and control disaster recovery.
Background
The disaster recovery scheme extends traffic management inspection and control disaster recovery from a single data center to double data centers. Across the two data centers, technologies such as active-active services, storage synchronization, service data replay and service data caching give the inspection and control system remote disaster recovery capability and a strong guarantee of data safety; within a single data center, distributed storage, distributed computing and service clustering provide high availability and fault tolerance for the services. The same traffic management inspection and control system is deployed in a data center at a different site, and real-time storage synchronization and business data synchronization between the two sites realize a cross-data-center disaster recovery scheme: when a disaster occurs in one data center, service can be switched quickly to the other data center while business data consistency is preserved as far as possible, and the main data center is afterwards recovered through storage master-standby switching and index data migration.
Existing traffic management inspection and control disaster recovery systems do exist, but they have low fault tolerance and, compared with a double-data-center system, lack good data storage capability.
Disclosure of Invention
The invention aims to solve the problems and provide a double-data-center disaster recovery system for a public security traffic management inspection and control system.
The invention achieves this purpose through the following technical scheme. A double-data-center disaster recovery system for a public security traffic management inspection and control system comprises:
the data access service cluster, which, within a single data center, provides highly available service through service clustering, provides service publish and subscribe capability through a service registry, and realizes on-demand service scaling, load balancing and high availability;
the FastDFS cluster, a storage system that stores vehicle-passing picture information;
the Kafka cluster, which provides massive, reliable message storage together with data publish and subscribe capability;
the inspection and control service system, which caches the accessed mass vehicle-passing data;
and the standby center, which synchronizes the data of the main center in real time.
The data access service cluster comprises a plurality of access ports. The FastDFS cluster contains two kinds of nodes, Storage and Tracker. The Kafka cluster is a distributed publish-subscribe messaging system. The inspection and control service system faces mass vehicle-passing data storage: the distributed storage system allows the mass data to be stored in a distributed manner on different nodes of a large-scale cluster. The standby center adopts the same configuration and system as the main center.
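The data flow just described (pictures to the FastDFS cluster, all remaining fields to the Kafka cluster) can be sketched in Python. The `FakeFastDFS` and `FakeKafka` classes below are in-memory stand-ins invented for illustration; they are not the real FastDFS or Kafka client APIs.

```python
class FakeFastDFS:
    """Stand-in for a FastDFS client: stores picture bytes, returns a file id."""
    def __init__(self):
        self.files = {}

    def upload(self, data: bytes) -> str:
        file_id = f"group1/M00/{len(self.files):08d}"  # illustrative id layout
        self.files[file_id] = data
        return file_id


class FakeKafka:
    """Stand-in for a Kafka producer: appends messages to a per-topic list."""
    def __init__(self):
        self.topics = {}

    def send(self, topic: str, message: dict) -> None:
        self.topics.setdefault(topic, []).append(message)


def ingest(record: dict, dfs: FakeFastDFS, kafka: FakeKafka) -> dict:
    """Split one vehicle-passing record: picture -> FastDFS, rest -> Kafka."""
    picture = record.pop("picture")
    file_id = dfs.upload(picture)
    # The Kafka message carries only a pointer to the stored picture.
    message = dict(record, picture_id=file_id)
    kafka.send("vehicle-passing", message)
    return message
```

A record such as `{"plate": "A1", "picture": b"...", "ts": 1}` thus ends up as one stored picture plus one lightweight metadata message, which is what lets the two clusters be replicated independently.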
Preferably, so that information is stored quickly and the redundant copies can be backed up individually, the Storage nodes of the FastDFS cluster store the actual data; several Storage nodes can form a group, within which redundant backup is performed. Every Storage node registers its own information with the Tracker nodes, the Tracker nodes can form a cluster, and clients read and write data in the Storage nodes through the Tracker.
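A minimal model of the Storage/Tracker arrangement described above: storages register with the tracker, a write is replicated to every storage in the chosen group, and a read can be served by any surviving replica. All class and method names are illustrative; this is not the FastDFS wire protocol.

```python
class Storage:
    """One storage node; belongs to exactly one redundancy group."""
    def __init__(self, group: str):
        self.group = group
        self.files = {}


class Tracker:
    """Tracks which storages exist and routes reads/writes to a group."""
    def __init__(self):
        self.groups = {}  # group name -> list of registered Storage nodes

    def register(self, storage: Storage) -> None:
        # Every Storage node must register its information with the Tracker.
        self.groups.setdefault(storage.group, []).append(storage)

    def route_write(self, group: str, file_id: str, data: bytes) -> None:
        # Redundant backup: the write lands on every storage in the group.
        for storage in self.groups[group]:
            storage.files[file_id] = data

    def route_read(self, group: str, file_id: str) -> bytes:
        # Any replica that still holds the file can serve the read.
        for storage in self.groups[group]:
            if file_id in storage.files:
                return storage.files[file_id]
        raise KeyError(file_id)
```

Losing a single storage node in a group leaves every file readable through the remaining replicas, which is the fault-tolerance property the patent relies on.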
Preferably, to provide more reliable data security and data recovery capability, the service data of the inspection and control service system is stored in a Redis in-memory database and an Oracle relational database. The Oracle database supports replication and backup across data centers, and the data in Redis is designed to be recovered after a disaster by a data playback service that reads the original data from the Oracle tables and rewrites it into Redis.
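The playback-based Redis recovery described above can be sketched as follows, with plain dicts standing in for the Oracle table and the Redis database; a real deployment would go through an Oracle driver and a Redis client instead.

```python
def replay_to_redis(oracle_rows: list, redis: dict) -> int:
    """Rebuild the Redis cache from the durable Oracle copy after a disaster.

    `oracle_rows` stands in for the rows read back from the Oracle table;
    `redis` stands in for the in-memory database being repopulated.
    """
    redis.clear()  # whatever survived in the cache is untrusted after the disaster
    for row in oracle_rows:
        # Oracle is the durable source of truth; rewrite each row into Redis.
        redis[row["key"]] = row["value"]
    return len(redis)
```

The design choice here is that Redis is treated purely as a rebuildable cache: nothing lives only in Redis, so its loss costs recovery time but never data.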
Preferably, to improve fault tolerance and store data more reliably, the standby center builds a peer FastDFS cluster. FastDFS storage does not depend on a file index, so the data of the main center's FastDFS cluster can be replicated to the peer data center in real time through storage-level replication and accessed there.
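Because FastDFS addresses files by their id rather than through a separate index, replicating to the peer center can be modelled as a one-way diff copy of the storage contents. The sketch below is a naive illustration of that idea with dicts as the two storage sets, not the actual replication mechanism.

```python
def sync_storage(main: dict, peer: dict) -> list:
    """Copy any file present in the main centre but missing from the peer.

    Since the file id is also the access path, a copied file is immediately
    readable at the peer centre with no index migration step.
    """
    copied = []
    for file_id, data in main.items():
        if file_id not in peer:
            peer[file_id] = data
            copied.append(file_id)
    return copied
```

Run periodically (or triggered per write), this keeps the peer cluster a superset-free mirror of the main cluster's files.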
A working method of the double-data-center disaster recovery system for a public security traffic management inspection and control system comprises the following steps:
step A, the data access service cluster receives data, sends the picture information to the FastDFS cluster, and sends the remaining information to the Kafka cluster;
step B, configure storage data synchronization from the standby center to the main center: the standby center's data is synchronized to the corresponding FastDFS cluster of the main center through storage synchronization, and the standby center's Oracle database is synchronized to the main center's Oracle database;
step C, once the main center's data is synchronized in real time, notify the front-end devices to stop uploading data to the standby center's data access service;
step D, take the inspection and control services that the standby center no longer needs offline, retaining the standby center's data access service and data index service;
step E, perform the storage master-standby switchover, switching the storage primary and the database configuration back from the standby center to the main center;
step F, start the main center's relevant inspection and control services and its vehicle-passing data access service, making the main center ready;
step G, if vehicle-passing data in Elasticsearch was lost during the main center's disaster, the Elasticsearch plug-in Knapsack can be used to transfer the standby center's complete vehicle-passing data to the main center for data recovery; this operation can be executed with a delay;
step H, after the main center has come online, notify the front-end devices to start pushing data to the main center's data access service;
and step I, with the main center online, disaster recovery is complete and the system returns to its pre-disaster state.
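The steps above (B through I; step A is the normal ingest path) can be summarised as an ordered failback plan. Each entry is only a placeholder for the real operation (storage sync, Oracle switchover, service start/stop); the point of the sketch is the ordering constraints the method imposes.

```python
def failback() -> list:
    """Ordered failback plan for returning service to the main center."""
    return [
        ("B", "sync standby storage and Oracle data to the main center"),
        ("C", "stop front-end uploads to the standby access service"),
        ("D", "take unused standby control services offline"),
        ("E", "switch storage and database primary back to the main center"),
        ("F", "start main-center control and access services"),
        ("G", "replay lost Elasticsearch data from the standby center"),
        ("H", "resume front-end uploads to the main access service"),
        ("I", "main center online; pre-disaster state restored"),
    ]
```

Notably, uploads must stop (C) before the primary switch (E) and resume (H) only after the main center's services are up (F), which is what keeps business data consistent during the cutover.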
The invention has the following beneficial effects. The double-data-center disaster recovery system for the public security traffic management inspection and control system is reasonably designed. The Storage nodes of the FastDFS cluster store the actual data; several Storage nodes can form a group, within which redundant backup is performed; every Storage node registers its own information with the Tracker nodes, the Tracker nodes can form a cluster, and clients read and write data in the Storage nodes through the Tracker, so information is stored quickly and the redundant copies can be backed up individually. The service data of the inspection and control service system is stored in a Redis in-memory database and an Oracle relational database; the Oracle database supports replication and backup across data centers, and the data in Redis is designed to be recovered after a disaster by a data playback service that reads the original data from the Oracle tables and rewrites it into Redis, providing more reliable data security and data recovery capability. The standby center builds a peer FastDFS cluster; FastDFS storage does not depend on a file index, so the data of the main center's FastDFS cluster can be replicated to the peer data center in real time through storage-level replication and accessed there, improving fault tolerance and allowing data to be stored more reliably.
Drawings
FIG. 1 is a schematic structural view of the present invention;
fig. 2 is a schematic diagram of a disaster recovery method for vehicle passing data access service according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, a double-data-center disaster recovery system for a public security traffic management inspection and control system comprises:
the data access service cluster, which, within a single data center, provides highly available service through service clustering, provides service publish and subscribe capability through a service registry, and realizes on-demand service scaling, load balancing and high availability;
the FastDFS cluster, a storage system that stores vehicle-passing picture information;
the Kafka cluster, which provides massive, reliable message storage together with data publish and subscribe capability;
the inspection and control service system, which caches the accessed mass vehicle-passing data;
and the standby center, which synchronizes the data of the main center in real time.
The data access service cluster comprises a plurality of access ports. The FastDFS cluster contains two kinds of nodes, Storage and Tracker. The Kafka cluster is a distributed publish-subscribe messaging system. The inspection and control service system faces mass vehicle-passing data storage: the distributed storage system allows the mass data to be stored in a distributed manner on different nodes of a large-scale cluster. The standby center adopts the same configuration and system as the main center.
The Storage nodes of the FastDFS cluster store the actual data; several Storage nodes can form a group, within which redundant backup is performed. Every Storage node registers its own information with the Tracker nodes, the Tracker nodes can form a cluster, and clients read and write data in the Storage nodes through the Tracker, so information is stored quickly and the redundant copies can be backed up individually. The service data of the inspection and control service system is stored in a Redis in-memory database and an Oracle relational database; the Oracle database supports replication and backup across data centers, and the data in Redis is designed to be recovered after a disaster by a data playback service that reads the original data from the Oracle tables and rewrites it into Redis, providing more reliable data security and data recovery capability. The standby center builds a peer FastDFS cluster; FastDFS storage does not depend on a file index, so the data of the main center's FastDFS cluster can be replicated to the peer data center in real time through storage-level replication and accessed there, improving fault tolerance and allowing data to be stored more reliably.
A working method of the double-data-center disaster recovery system for a public security traffic management inspection and control system comprises the following steps:
step A, the data access service cluster receives data, sends the picture information to the FastDFS cluster, and sends the remaining information to the Kafka cluster;
step B, configure storage data synchronization from the standby center to the main center: the standby center's data is synchronized to the corresponding FastDFS cluster of the main center through storage synchronization, and the standby center's Oracle database is synchronized to the main center's Oracle database;
step C, once the main center's data is synchronized in real time, notify the front-end devices to stop uploading data to the standby center's data access service;
step D, take the inspection and control services that the standby center no longer needs offline, retaining the standby center's data access service and data index service;
step E, perform the storage master-standby switchover, switching the storage primary and the database configuration back from the standby center to the main center;
step F, start the main center's relevant inspection and control services and its vehicle-passing data access service, making the main center ready;
step G, if vehicle-passing data in Elasticsearch was lost during the main center's disaster, the Elasticsearch plug-in Knapsack can be used to transfer the standby center's complete vehicle-passing data to the main center for data recovery; this operation can be executed with a delay;
step H, after the main center has come online, notify the front-end devices to start pushing data to the main center's data access service;
and step I, with the main center online, disaster recovery is complete and the system returns to its pre-disaster state.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (3)

1. A double-data-center disaster recovery system for a public security traffic management inspection and control system, characterized in that it comprises:
the data access service cluster, which, within a single data center, provides highly available service through service clustering, provides service publish and subscribe capability through a service registry, and realizes on-demand service scaling, load balancing and high availability;
the FastDFS cluster, a storage system that stores vehicle-passing picture information;
the Kafka cluster, which provides massive, reliable message storage together with data publish and subscribe capability;
the inspection and control service system, which caches the accessed mass vehicle-passing data;
and the standby center, which synchronizes the data of the main center in real time;
wherein the data access service cluster comprises a plurality of access ports; the FastDFS cluster contains two kinds of nodes, Storage and Tracker; the Kafka cluster is a distributed publish-subscribe messaging system; the inspection and control service system faces mass vehicle-passing data storage, the distributed storage system allowing the mass data to be stored in a distributed manner on different nodes of a large-scale cluster; and the standby center adopts the same configuration and system as the main center; the Storage nodes of the FastDFS cluster store the actual data, several Storage nodes form a group, and redundant backup is performed within the group; every Storage node registers its own information with the Tracker nodes, the Tracker nodes form a cluster, and clients read and write data in the Storage nodes through the Tracker; and the standby center builds a peer FastDFS cluster, FastDFS storage does not depend on a file index, and the data of the main center's FastDFS cluster is replicated to the peer data center in real time through storage-level replication for access.
2. The double-data-center disaster recovery system for the public security traffic management inspection and control system as claimed in claim 1, characterized in that: the service data of the inspection and control service system is stored in a Redis in-memory database and an Oracle relational database, the Oracle database supports replication and backup across data centers, and the data in Redis is designed to be recovered after a disaster by a data playback service that reads the original data from the Oracle tables and rewrites it into Redis.
3. A working method of a double-data-center disaster recovery system for a public security traffic management inspection and control system, characterized by comprising the following steps:
step A, the data access service cluster receives data, sends the picture information to the FastDFS cluster, and sends the remaining information to the Kafka cluster;
step B, configure storage data synchronization from the standby center to the main center: the standby center's data is synchronized to the corresponding FastDFS cluster of the main center through storage synchronization, and the standby center's Oracle database is synchronized to the main center's Oracle database;
step C, once the main center's data is synchronized in real time, notify the front-end devices to stop uploading data to the standby center's data access service;
step D, take the inspection and control services that the standby center no longer needs offline, retaining the standby center's data access service and data index service;
step E, perform the storage master-standby switchover, switching the storage primary and the database configuration back from the standby center to the main center;
step F, start the main center's relevant inspection and control services and its vehicle-passing data access service, making the main center ready;
step G, if vehicle-passing data in Elasticsearch is lost during the main center's disaster, use the Elasticsearch plug-in Knapsack to transfer the standby center's complete vehicle-passing data to the main center for data recovery, executing the operation with a delay;
step H, after the main center has come online, notify the front-end devices to start pushing data to the main center's data access service;
and step I, with the main center online, disaster recovery is complete and the system returns to its pre-disaster state.
CN201810933749.XA 2018-08-16 2018-08-16 Double-data-center disaster tolerance system for public security traffic management inspection and control system Active CN108710550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810933749.XA CN108710550B (en) 2018-08-16 2018-08-16 Double-data-center disaster tolerance system for public security traffic management inspection and control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810933749.XA CN108710550B (en) 2018-08-16 2018-08-16 Double-data-center disaster tolerance system for public security traffic management inspection and control system

Publications (2)

Publication Number Publication Date
CN108710550A CN108710550A (en) 2018-10-26
CN108710550B true CN108710550B (en) 2021-09-28

Family

ID=63873350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810933749.XA Active CN108710550B (en) 2018-08-16 2018-08-16 Double-data-center disaster tolerance system for public security traffic management inspection and control system

Country Status (1)

Country Link
CN (1) CN108710550B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558082B (en) * 2018-11-26 2021-12-07 深圳天源迪科信息技术股份有限公司 Distributed file system
CN110163745B (en) * 2019-05-07 2022-02-01 中国工商银行股份有限公司 Hierarchical control data checking and controlling processing system and method
CN111198883B (en) * 2019-12-27 2023-06-09 福建威盾科技集团有限公司 Real-time vehicle control information processing method, system and storage medium
CN113422840A (en) * 2021-07-13 2021-09-21 全景智联(武汉)科技有限公司 Picture processing system and method based on file transfer protocol

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101631143A (en) * 2009-08-27 2010-01-20 中兴通讯股份有限公司 Multi-server system in load-balancing environment and file transmission method thereof
CN103873501A (en) * 2012-12-12 2014-06-18 华中科技大学 Cloud backup system and data backup method thereof
CN105049524A * 2015-08-13 2015-11-11 浙江鹏信信息科技股份有限公司 Hadoop distributed file system (HDFS) based large-scale data set loading method
CN106713487A (en) * 2017-01-16 2017-05-24 腾讯科技(深圳)有限公司 Data synchronization method and device
CN107241430A (en) * 2017-07-03 2017-10-10 国家电网公司 A kind of enterprise-level disaster tolerance system and disaster tolerant control method based on distributed storage
CN107734066A (en) * 2017-11-16 2018-02-23 郑州云海信息技术有限公司 A kind of data center's total management system services administering method
CN107844571A (en) * 2017-11-03 2018-03-27 优公里(北京)网络技术有限公司 The realization device that a kind of intelligent data center is built
CN108134795A (en) * 2017-12-26 2018-06-08 郑州云海信息技术有限公司 A kind of access control management method and system of data center's total management system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7609619B2 (en) * 2005-02-25 2009-10-27 Cisco Technology, Inc. Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution


Non-Patent Citations (2)

Title
An Efficient Storage Method for Disaster Tolerant System; Li Wen et al.; 2010 The 3rd International Conference on Computational Intelligence and Industrial Application (PACIIA 2010); 2010-12-04; pp. 108-111 *
Mass small-file storage method for urban rail transit line networks (面向城轨线网的海量小文件存储方法); Liu Jing (刘靖) et al.; 《计算机及应用与软件》 (Computer Applications and Software); 2016-08-15; Vol. 33, No. 8; pp. 76-80 *

Also Published As

Publication number Publication date
CN108710550A (en) 2018-10-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant