CN112243030A - Data synchronization method, device, equipment and medium of distributed storage system - Google Patents

Info

Publication number
CN112243030A
CN112243030A (application CN202011101540.0A)
Authority
CN
China
Prior art keywords
instance
server
target
data
storage server
Prior art date
Legal status
Pending
Application number
CN202011101540.0A
Other languages
Chinese (zh)
Inventor
罗绍华
史业政
郑文琛
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202011101540.0A
Publication of CN112243030A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0654 Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0663 Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services

Abstract

The invention discloses a data synchronization method, apparatus, device, and medium for a distributed storage system. When an instance of the distributed storage system is in an offline state, the read-write access path of the target instance is switched to the standby storage server, so the distributed storage system can still read and write data normally during this period. When data are written, the proxy server writes the data to both the standby storage server and a publish-subscribe message system. Because the same data are written to both, the target data in the publish-subscribe message system can be consumed after the instance recovers from the offline state to the online state and written into the now-online instance, so that the data in the primary storage server are synchronized with the data in the standby storage server.

Description

Data synchronization method, device, equipment and medium of distributed storage system
Technical Field
The present invention relates to the field of distributed storage technologies, and in particular, to a data synchronization method, apparatus, device, and medium for a distributed storage system.
Background
To manage data efficiently, a distributed system is generally adopted. To store mass data, a distributed system generally employs a cluster for caching. To achieve high availability, clusters in the prior art generally adopt a master-slave structure; however, when the state of an instance changes, reading and writing of data are limited and the data in the instances of the cluster become inconsistent.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a data synchronization method, apparatus, device, and medium for a distributed storage system, so as to solve the technical problem that data in the cluster instances of existing distributed systems become inconsistent.
In order to achieve the above object, the present invention provides a data synchronization method for a distributed storage system, where the distributed storage system includes a server cluster and a proxy server, the server cluster includes a primary storage server and a backup storage server, and the primary storage server includes multiple instances;
the data synchronization method of the distributed storage system comprises the following steps:
monitoring a status of an instance in the primary storage server;
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
and after the target instance is monitored to be restored from the offline state to the online state, consuming the target data in the publish-subscribe message system, and writing the target data in the publish-subscribe message system into the target instance in the online state, so that the data in the main storage server and the standby storage server are synchronized.
Optionally, after the step of monitoring the instance status in the primary storage server, the method further comprises:
if the target instance in the main storage server is monitored to be in an online state, generating online state information of the target instance, so that the proxy server sends target data in the target instance to the publish-subscribe message system after reading the online state information;
the standby storage server and the publish-subscribe message system have a subscription relationship, so that the standby storage server consumes target data in the publish-subscribe message system and synchronizes data in the main storage server and the standby storage server.
Optionally, the step of monitoring the status of the instance in the primary storage server comprises:
periodically detecting the ping connection status of the instances in the primary storage server at a preset time interval;
and if an instance fails the network connection at least three consecutive times, determining that the instance is in an offline state.
Optionally, after the step of determining that the instance is in the offline state if it fails the network connection at least three consecutive times, the method further includes:
when the instance is in the offline state, if at least one successful network connection to the instance is detected, determining that the target instance has recovered from the offline state to the online state.
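The monitoring rule above (three consecutive ping failures mark an instance offline; a single successful ping marks it online again) can be sketched as a small state machine. This is an illustrative sketch only; the class and method names (`InstanceMonitor`, `report_ping`) are not from the patent.

```python
class InstanceMonitor:
    """Tracks one instance's online/offline state from periodic ping results."""

    OFFLINE_THRESHOLD = 3  # consecutive failures before declaring offline

    def __init__(self):
        self.online = True
        self.consecutive_failures = 0
        self.events = []  # state-change notifications, e.g. for the coordination server

    def report_ping(self, success: bool) -> None:
        if success:
            self.consecutive_failures = 0
            if not self.online:
                self.online = True
                self.events.append("online")   # instance recovered to online state
        else:
            self.consecutive_failures += 1
            if self.online and self.consecutive_failures >= self.OFFLINE_THRESHOLD:
                self.online = False
                self.events.append("offline")  # generate offline state information
```

In use, two failed pings leave the instance online, a third marks it offline, and one later success restores it.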
Optionally, the distributed storage system further comprises a coordination server;
if the target instance in the main storage server is monitored to be in the offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information, wherein the step comprises the following steps of:
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, and storing the offline state information in the coordination server, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information from the coordination server.
Optionally, a plurality of logical slots are configured in the proxy server, and a mapping relationship exists between the plurality of logical slots and the plurality of instances.
Optionally, the primary storage server is a Redis storage server, and the instance is a Redis instance; the standby storage server is an LMDB storage server.
In addition, in order to achieve the above object, the present invention further provides a data synchronization apparatus for a distributed storage system, where the distributed storage system includes a server cluster and a proxy server, the server cluster includes a primary storage server and a backup storage server, and the primary storage server includes multiple instances;
the data synchronization device of the distributed storage system comprises:
the state monitoring module is used for monitoring the state of the instance in the main storage server;
the offline processing module is used for generating offline state information of the target instance if the target instance in the main storage server is monitored to be in an offline state, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
and the data synchronization module is used for consuming the target data in the publish-subscribe message system and writing the target data in the publish-subscribe message system into the target instance in the online state after monitoring that the target instance is recovered from the offline state to the online state, so that the data in the main storage server and the data in the standby storage server are synchronized.
In addition, in order to achieve the above object, the present invention further provides a data synchronization device of a distributed storage system, where the data synchronization device of the distributed storage system includes a processor, a memory, and a data synchronization program of the distributed storage system stored in the memory, and when the data synchronization program of the distributed storage system is executed by the processor, the steps of the data synchronization method of the distributed storage system are implemented.
In addition, to achieve the above object, the present invention further provides a computer storage medium, where a data synchronization program of a distributed storage system is stored, and when the data synchronization program of the distributed storage system is executed by a processor, the steps of the data synchronization method of the distributed storage system are implemented.
The invention achieves the following beneficial effects.
The embodiment of the invention provides a data synchronization method, apparatus, device, and medium for a distributed storage system. The distributed storage system includes a server cluster and a proxy server; the server cluster includes a primary storage server and a standby storage server, and the primary storage server includes a plurality of instances. The data synchronization method first monitors the state of the instances in the primary storage server. If a target instance in the primary storage server is monitored to be in an offline state, offline state information of the target instance is generated, so that the proxy server, after reading the offline state information, switches the read-write access path of the target instance to the standby storage server; while the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data to both the standby storage server and a publish-subscribe message system. Then, after the target instance is monitored to have recovered from the offline state to the online state, the target data in the publish-subscribe message system are consumed and written into the now-online target instance, so that the data in the primary storage server and the standby storage server are synchronized. Thus, when an instance of the distributed storage system is offline, the read-write access path of the target instance is switched to the standby storage server; during this period the distributed storage system can read and write data normally, and when data are written, the proxy server writes the data to both the standby storage server and the publish-subscribe message system.
Drawings
FIG. 1 is a schematic structural diagram of a distributed storage system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another structure of a distributed storage system according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a data synchronization method of a distributed storage system according to an embodiment of the present invention;
fig. 4 is a block diagram of a data synchronization apparatus of a distributed storage system according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the descriptions relating to "first", "second", etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
The main solution of the embodiment of the invention is as follows: a data synchronization method of a distributed storage system is adopted, where the distributed storage system includes a server cluster and a proxy server, the server cluster includes a primary storage server and a standby storage server, and the primary storage server includes a plurality of instances. The method first monitors the state of the instances in the primary storage server. If a target instance in the primary storage server is monitored to be in an offline state, offline state information of the target instance is generated, so that the proxy server, after reading the offline state information, switches the read-write access path of the target instance to the standby storage server; during this period, the proxy server writes target data to both the standby storage server and a publish-subscribe message system. Then, after the target instance is monitored to have recovered from the offline state to the online state, the target data in the publish-subscribe message system are consumed and written into the now-online target instance, so that the data in the primary storage server and the standby storage server are synchronized.
In the prior art, a master-slave structure is generally adopted in a cluster, but due to the change of the state of an instance, the reading and writing of data are limited, and the data in the instance of the cluster are asynchronous.
The present invention provides a solution. In the data synchronization method of the distributed storage system, when an instance of the distributed storage system is in the offline state, the read-write access path of the target instance is switched to the standby storage server; during this period the distributed storage system can read and write data normally, and when data are written, the proxy server writes the data to both the standby storage server and the publish-subscribe message system. Because the same data are written to both the standby storage server and the publish-subscribe message system, consuming the target data in the publish-subscribe message system after the instance recovers from the offline state to the online state, and writing those target data into the now-online instance, synchronizes the data in the primary storage server with the data in the standby storage server.
It should be noted that, since the data synchronization method of the distributed storage system according to the embodiment of the present application is implemented based on the distributed storage system, before explaining the data synchronization method of the distributed storage system according to the present application, the distributed storage system is explained first.
Referring to fig. 1, an embodiment of the present application provides a distributed storage system, where the distributed storage system includes a server cluster and a proxy server, where the server cluster includes a primary storage server and a backup storage server, the primary storage server includes multiple instances, and the multiple instances are used for storing data; the proxy server is configured with a plurality of logical slots, and the plurality of logical slots and the plurality of instances have mapping relations.
Specifically, in this embodiment, the proxy server is a Redis proxy server; the primary storage server is a Redis server, and the backup storage server is an LMDB server, so the instance in the embodiment is a Redis instance, and the Redis instance is used for storing the target data.
The Redis proxy server is used to produce (publish) the target data in the Redis instances to a publish-subscribe message system; the LMDB instances in the LMDB server consume the target data from the publish-subscribe message system, so that the Redis server and the LMDB server store the same target data;
the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the LMDB instance based on an identifier in the data reading request.
It should be noted that the full name of Redis is Remote Dictionary Server. Redis is an open-source, network-capable, log-structured key-value database written in ANSI C; it operates in memory, can be persisted to disk, and provides APIs for multiple languages. In particular, Redis supports a relatively rich set of value types, including string, list, set, and zset (sorted set). In this embodiment, the Redis server refers to a server for storing data on which Redis software is installed and configured.
LMDB stands for Lightning Memory-Mapped Database. LMDB accesses files through memory mapping, so the addressing overhead within a file is very small and file addressing can be done with pointer operations; storing the database in a single file also reduces the overhead of copying/transmitting the data set. In this embodiment, the LMDB server is a server for storing data on which LMDB software is installed and configured, serving as a backup of the Redis server.
It can be seen that Redis, as a memory-based data structure store, implements database functionality by using memory, whereas LMDB accesses its data through memory mapping. Therefore, under the same conditions, LMDB in this embodiment places relatively lower demands on memory space than Redis; accordingly, its hardware configuration requirements are relatively lower and the cost of the deployed hardware is relatively lower.
In addition, the Redis proxy server is a server on which a Redis proxy program is installed; the Redis proxy program is the proxy component of the Redis scheme. The Redis proxy supports multiple service-statement access modes: the command mode is seamlessly compatible with Redis syntax commands, and the vector mode provides binary data/natural-statement input.
In a specific implementation, in the distributed storage system of this embodiment, the server cluster is communicatively connected with the Redis proxy servers to implement reading and writing of data. To expand data storage capacity while achieving high availability, the server cluster may include a plurality of Redis groups, each Redis group including a Redis server serving as the host and an LMDB server serving as the standby.
To avoid a single point of failure at the proxy, a plurality of Redis proxy servers may be provided and communicatively connected to the plurality of Redis groups respectively.
Referring to fig. 1, the distributed storage system includes: 3 Redis proxy servers and 3 Redis groups, each Redis group comprising one Redis server and one LMDB server. Each Redis proxy server establishes a long link (persistent connection) with each Redis group; long links reduce the time consumed by frequent connection establishment.
In an embodiment, a long-link pool mechanism may be established between the Redis proxy servers and the Redis groups. Specifically, a link-pool management program may be added to the Redis proxy server to perform the following steps:
when a cache-data read-write request is monitored, acquiring an idle link from the link pool;
sending the data read/write request to a Redis server through the idle link, so that the Redis server executes the target data read-write operation according to the request;
if the read-write operation is detected to have failed, determining that the idle link is an invalid link;
and if the idle link is detected to be an invalid link, sending an invalidity notification to the link pool manager, so that the link pool manager starts a link recovery mechanism for the idle link.
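The four steps above can be sketched as follows. This is a hypothetical, in-memory sketch: the names (`LinkPool`, `handle_request`, `notify_invalid`) are illustrative, and a failed read-write is modeled as a raised `ConnectionError`.

```python
class LinkPool:
    """Toy stand-in for the long-link pool and its pool manager."""

    def __init__(self, links):
        self.idle = list(links)
        self.recovering = []   # links handed to the recovery mechanism

    def acquire(self):
        return self.idle.pop() if self.idle else None

    def release(self, link):
        self.idle.append(link)

    def notify_invalid(self, link):
        # pool manager starts the link-recovery mechanism immediately
        self.recovering.append(link)


def handle_request(pool, execute_rw):
    link = pool.acquire()                # step 1: take an idle link
    if link is None:
        raise RuntimeError("no idle link available")
    try:
        result = execute_rw(link)        # step 2: send read/write via the link
    except ConnectionError:
        pool.notify_invalid(link)        # steps 3-4: invalid link, notify at once
        return None
    pool.release(link)                   # healthy link returns to the idle pool
    return result
```

The point of the design, as the next paragraph notes, is that invalidation is reported immediately rather than discovered by periodic sweeps.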
Compared with the prior art, in which invalid links in a link pool are recovered only periodically, the distributed storage system in this embodiment sends an invalidity notification to the link pool manager immediately upon detecting that an idle link is invalid, so that the link pool manager can start the link recovery mechanism for that link in time. This overcomes the technical defect that invalid links cannot be recovered promptly in the prior art and prevents invalid links from occupying link pool resources.
In addition, the 3 Redis proxy servers may also be connected to a user side (also referred to as a service side in this embodiment). In one scenario, the distributed storage system is used for online delivery of user-profile (portrait) data, so the user side may develop a set of software to facilitate importing the profile data and outputting the data read from the server cluster to advertisement-delivery software, i.e., a DSP (Demand-Side Platform), so that the DSP can deliver the data. It should be noted that a long link may also be established between the Redis proxy servers and the user side.
Further, the Redis server may include a plurality of Redis instances and the LMDB server may include a plurality of LMDB instances, the Redis instances being used to store the target data. The storage mode may be a cache; unlike disk storage, a cache can be read and written quickly, so the data are delivered faster.
The Redis proxy server is used for producing the target data in the Redis instance into a publish-subscribe message system; the LMDB instance in the LMDB server is used for consuming target data in the publish-subscribe message system, so that the Redis server and the LMDB server store the same target data. In this case, the Redis server serves as a host, stores the target data, and is used by the user end to read the target data through the Redis proxy server; and the LMDB server serves as a standby machine, and when the Redis instance in the host computer is offline, the user end reads the target data through the Redis proxy server.
Specifically, the publish-subscribe message system may be Kafka. The Kafka server, on which Kafka software is installed, is communicatively connected with the Redis proxy server and the LMDB server. When the Redis instances are normal, the user side reads and writes data in the host Redis instances through the Redis proxy server, and the Redis proxy server, acting as a producer, produces the write-command data to the lmdbtopic topic of Kafka; the LMDB proxy server, acting as a consumer, consumes the data in Kafka in near real time, thereby guaranteeing eventual consistency of the data between the primary and standby servers and ensuring high availability of the distributed system at a relatively low deployment cost.
In addition, when a Redis instance is offline, the Redis proxy server detects the failed instance and starts the standby scheme: read-command data for the offline instance are switched to be read from the standby LMDB proxy server, and write-command data for the offline instance are additionally produced to the failtopic topic of Kafka, so that failtopic can be consumed after the instance comes back online. Write data during the offline period are thus buffered, avoiding loss and inconsistency between the primary and the standby.
When the Redis instance returns to normal, the invoking component senses that the instance has recovered and, acting as a consumer, starts consuming the data in failtopic and writing them directly into the Redis instance; once the data in failtopic have been fully consumed, read-write requests are switched back from the standby to the host, providing normal high-performance service.
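The failover and replay flow described above can be simulated end to end with in-memory stand-ins: dicts play the Redis instance and the LMDB standby, and lists play the Kafka topics (lmdbtopic, failtopic). No real Redis or Kafka client is used; the `Proxy` class and its methods are hypothetical names for the sketch.

```python
class Proxy:
    def __init__(self):
        self.redis = {}       # host Redis instance
        self.lmdb = {}        # LMDB standby
        self.lmdbtopic = []   # normal replication stream consumed by the standby
        self.failtopic = []   # writes buffered while the instance is offline
        self.instance_online = True

    def write(self, key, value):
        if self.instance_online:
            self.redis[key] = value
            self.lmdbtopic.append((key, value))   # standby consumes this stream
        else:
            self.lmdb[key] = value                # switched path: write the standby...
            self.failtopic.append((key, value))   # ...and buffer for later replay

    def read(self, key):
        return self.redis.get(key) if self.instance_online else self.lmdb.get(key)

    def recover(self):
        # instance back online: replay failtopic into the Redis instance,
        # then switch read-write back from the standby to the host
        for key, value in self.failtopic:
            self.redis[key] = value
        self.failtopic.clear()
        self.instance_online = True
```

A write made while offline is served from the standby, and after `recover()` the same key is present in the Redis instance with failtopic drained.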
In addition, in this embodiment, the Redis proxy server may further be configured with a plurality of virtual groups and a plurality of logical slots, where the plurality of logical slots have mapping relationships with the plurality of virtual groups, and the plurality of virtual groups have mapping relationships with the Redis instances and/or the LMDB instances, so as to implement that the plurality of logical slots have mapping relationships with the plurality of Redis instances;
the distributed storage system further comprises a zookeeper server, in which a mapping relation table is stored; the mapping relation table includes the mapping relations between the plurality of logical slots and the plurality of virtual groups, and between the plurality of virtual groups and the Redis and/or LMDB instances.
In a specific implementation, the slots in this embodiment are obtained by logical partitioning and are therefore called logical slots. Logical fragmentation is performed with a cyclic redundancy check (CRC) algorithm on the identification key in the data, yielding 1024 logical slots, namely slot0, slot1, ..., slot1023.
A table-name rule is set according to the service side, namely AppName.TableName maps to instances: a plurality of logical slots are statically configured (app.table -> slot), different slots are mapped to corresponding virtual groups (slot -> group), and each virtual group corresponds to a one-master-multiple-slave set of instances; these relations are stored in the zookeeper server.
When reading or writing data on the Redis server, all slot relations are obtained through AppName.TableName;
the slotid is obtained from crc32(key) % 1024;
and the virtual group corresponding to the slotid is looked up, obtaining the instance information corresponding to that group.
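The lookup above (crc32(key) % 1024 selects a logical slot, the slot maps to a virtual group, the group maps to instance information) can be sketched as follows. The two mapping tables here are toy stand-ins for the tables the patent stores in the zookeeper server; the group layout and IP addresses are invented for illustration.

```python
import zlib

NUM_SLOTS = 1024

# hypothetical mapping tables (slot -> group, group -> instance info)
slot_to_group = {slot: slot % 3 for slot in range(NUM_SLOTS)}   # 3 virtual groups
group_to_instance = {
    0: ("10.0.0.1", 6379),
    1: ("10.0.0.2", 6379),
    2: ("10.0.0.3", 6379),
}

def route(key: str):
    """Return (slotid, (ip, port)) for a given identification key."""
    slot_id = zlib.crc32(key.encode()) % NUM_SLOTS   # logical fragmentation
    group = slot_to_group[slot_id]
    return slot_id, group_to_instance[group]

slot_id, (ip, port) = route("devicemd5")
```

Because the slot is a pure function of the key, the same key always routes to the same instance, which is what makes the static app.table -> slot -> group configuration workable.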
As an optional implementation manner, the Redis proxy server is further configured to receive a data write request, obtain a target logical slot based on an identifier in the data write request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and store target data in the data write request into a Redis instance having a mapping relationship with the target virtual group.
For example, when the service side issues a data write request for the portrait data devicemd5:1101 (key:value format), the Redis proxy server, after receiving the request, applies the CRC32 algorithm to the key and takes the result modulo 1024 to obtain a slotid, which is the ID of the corresponding logical slot; it then finds the corresponding virtual group from the slotid, obtains the Redis instance information (IP and Port) from the mapping relation table of the virtual group, and writes the data devicemd5:1101 into that Redis instance.
As another optional implementation manner, the Redis proxy server is further configured to receive a data reading request, obtain a corresponding target logical slot based on a value corresponding to an identifier in the data reading request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and read target data in a Redis instance having a mapping relationship with the target virtual group from the Redis server or read target data in an LMDB instance having a mapping relationship with the target virtual group from the LMDB server.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and those skilled in the art can set the technical solution based on the needs in practical application, and the technical solution is not limited herein.
As can be easily seen from the above description, the distributed storage system in this embodiment performs logical sharding in the Redis proxy server and stores the mapping relationship table in the zookeeper server, thereby achieving a low-coupling design with the cluster; that is, Redis software of various versions in the cluster can be matched. By contrast, in the existing codis scheme, according to the codis code of the current version, codis does not adopt logical sharding but shards at the data bottom layer, and the current codis version lags behind and cannot be compatible with higher versions of the Redis database. Therefore, the distributed system of this embodiment not only achieves low cost and high availability, but its data bottom layer, the Redis server, is also compatible with any native version of Redis and supports Redis version upgrades.
Based on the distributed storage system in the foregoing embodiment, an embodiment of the present application provides a data synchronization method for a distributed storage system. Referring to fig. 2, in the present embodiment, the distributed storage system further includes a monitoring server, a coordination server, and a publish-subscribe message system based on the foregoing embodiments.
Specifically, the monitoring server is connected, on one hand, with the main storage server to monitor the state of the instances in the main storage server; on the other hand, it is connected with the publish-subscribe message system to consume the data in the publish-subscribe message system; and meanwhile it is connected with the coordination server to store the instance state information. The proxy server is connected with the coordination server to read the instance state information stored in it; the publish-subscribe message system is also connected with the proxy server and the standby storage server, so as to receive data written by the proxy server and allow the standby storage server to consume that data.
Based on the distributed storage system of fig. 2, please refer to fig. 3, the data synchronization method of the distributed storage system of the present embodiment includes:
s10, monitoring the state of the instance in the main storage server;
s20, if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
s30, after the target instance is monitored to be restored to the online state from the offline state, consuming the target data in the publish-subscribe message system, and writing the target data in the publish-subscribe message system into the target instance in the online state, so that the data in the main storage server and the standby storage server are synchronized.
It should be noted that an instance has an online state and an offline state: when online, the instance is in a normal state and data can be read and written normally; when offline, data cannot be read or written, so the data in the instance falls out of sync. The data synchronization method of this embodiment synchronizes the data in each instance against the data inconsistency caused by such instance state changes.
In addition, the execution subject of the method in this embodiment may be the monitoring server.
First, S10 is executed to monitor the status of the instance in the primary storage server.
In the specific implementation process, the state of the instance refers to whether each instance in the main storage server is in an online state or an offline state. Since data synchronization is due to a change in the state of an instance, to perform data synchronization, the state of the instance in the primary storage server is first monitored.
In one embodiment, the step of monitoring the status of instances in the primary storage server comprises:
periodically detecting the network connection condition of the instance in the main storage server according to a preset time interval;
and if the instance fails in network connection at least three consecutive times, determining that the instance is in an offline state.
In the specific implementation process, the network connection check can be a ping command connection. ping is a command available under Windows, Unix and Linux systems; it also corresponds to a communication protocol (ICMP) that is part of the TCP/IP protocol suite, and the "ping" command can be used to check whether the network is connected, which helps to analyze and determine network faults. Thus, whether the instance is online can be detected through the ping connection; that is, if the network connection with the instance cannot be established, the instance is offline.
Specifically, since a ping connection failure may be a temporary failure caused by an accidental network drop, in this embodiment, the instance is determined to be in an offline state only if it fails the ping connection at least three consecutive times.
Further, after the step of determining that the instance is in the offline state if the instance fails in ping connection for at least three consecutive times, the method further includes:
and when the instance is in the offline state, if the successful ping connection of the instance for at least one time is detected, judging that the target instance is recovered to the online state from the offline state.
In the specific implementation process, a successful ping connection represents network connection, and at this time, the instance is necessarily in an online state.
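The state rule described above (offline after at least three consecutive ping failures, online again after a single success) can be sketched as a small state machine. This is a minimal illustration: the ping itself is abstracted away, and the class name and threshold constant are assumptions, not part of the original embodiment.

```python
# Minimal sketch of the monitoring server's state rule: an instance is
# judged offline after at least three consecutive failed ping checks, and
# judged back online after one successful check.
class InstanceMonitor:
    OFFLINE_THRESHOLD = 3  # consecutive failures before going offline

    def __init__(self):
        self.failures = 0
        self.online = True

    def record_ping(self, success: bool) -> str:
        if success:
            # One successful connection restores the online state.
            self.failures = 0
            self.online = True
        else:
            self.failures += 1
            if self.failures >= self.OFFLINE_THRESHOLD:
                self.online = False
        return "online" if self.online else "offline"
```

In this design, a single transient drop (one or two failed pings) never flips the state, which matches the embodiment's guard against accidental network drops.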
Next, S20 is executed: if it is monitored that the target instance in the primary storage server is in an offline state, offline state information of the target instance is generated, so that the proxy server switches the read-write access path of the target instance to the standby storage server after reading the offline state information; and during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data into the standby storage server and the publish-subscribe message system respectively.
It should be noted that the target instance is an instance in the primary storage server. Under normal conditions, the instances are all in an online state, and when the monitoring server monitors that the target instance in the main storage server is in an offline state, the offline state information of the target instance is generated.
Specifically, the distributed storage system may further include a coordination server; at this time, the offline state information may be stored in the coordination server, and the proxy server may read the offline state information from the coordination server, determine that the target instance is in the offline state, and at this time, switch the read-write access path to the target instance to the standby storage server. And during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data into the standby storage server and the publish-subscribe message system respectively.
The writing of the target data into the standby storage server is to ensure that the standby storage server is used as a standby machine and the data can be written normally when the distributed storage system is offline in the target instance; the target data is written into the publish-subscribe message system, so that the target data is supplemented into the target instance after the target instance is online subsequently, and synchronous data in the main storage server and the standby storage server is ensured.
Therefore, S30 is executed to consume the target data in the publish-subscribe message system and write the target data in the publish-subscribe message system into the target instance in the online state after it is monitored that the target instance is restored from the offline state to the online state, so as to synchronize the data in the primary storage server and the backup storage server.
In a specific implementation process, after monitoring that a target instance is offline, the state of the target instance is continuously monitored, and after monitoring that the target instance is recovered from the offline state to the online state, a monitoring server consumes target data in the publish-subscribe message system and writes the target data in the publish-subscribe message system into the target instance in the online state, so that data in the main storage server and the standby storage server are synchronized.
Specifically, since the target data in the publish-subscribe message system is also synchronously stored in the backup storage server, writing the target data in the publish-subscribe message system into the target instance in an online state can synchronize the data in the primary storage server and the backup storage server.
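The S20/S30 failover-and-replay flow can be sketched with in-memory stand-ins: plain dicts represent the primary and standby instances, and a deque represents the publish-subscribe message system. This is a simulation of the control flow only, not the actual servers; all names are illustrative.

```python
# While the target instance is offline, the proxy dual-writes each record
# to the standby store and to the message queue; on recovery, the queued
# target data is consumed and written into the recovered instance.
from collections import deque

primary, standby = {}, {}
message_queue = deque()  # stands in for the publish-subscribe system

def proxy_write(key, value, primary_online: bool):
    if primary_online:
        primary[key] = value
    else:
        # Failover path: dual-write to standby and to the queue.
        standby[key] = value
        message_queue.append((key, value))

def replay_on_recovery():
    """Consume queued target data and write it into the recovered instance."""
    while message_queue:
        key, value = message_queue.popleft()
        primary[key] = value

proxy_write("k1", "v1", primary_online=True)
proxy_write("k2", "v2", primary_online=False)  # instance offline
replay_on_recovery()                            # instance back online
```

After the replay, the recovered instance also holds the data written during the outage, which is the synchronization property S30 establishes.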
As an embodiment, after the step of monitoring the instance status in the primary storage server, the method further comprises:
if the target instance in the main storage server is monitored to be in an online state, generating online state information of the target instance, so that the proxy server sends target data in the target instance to the publishing and subscribing message system after reading the online state information;
the standby storage server and the publish-subscribe message system have a subscription relationship, so that the standby storage server consumes target data in the publish-subscribe message system and synchronizes data in the main storage server and the standby storage server.
In a specific implementation, the publish-subscribe messaging system may be Kafka. When the distributed storage system operates normally, data synchronization between the main storage server and the standby storage server also needs to be maintained; therefore, if it is monitored that a target instance in the main storage server is in an online state, online state information of the target instance is generated, so that the proxy server sends the target data in the target instance to the publish-subscribe message system after reading the online state information.
As an embodiment, the distributed storage system further comprises a coordination server;
if the target instance in the main storage server is monitored to be in the offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information, wherein the step comprises the following steps of:
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, and storing the offline state information in the coordination server, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information from the coordination server.
In a specific implementation process, the coordination server may be a zookeeper, and is used for storing the instance state information for the proxy server to obtain.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and those skilled in the art can set the technical solution based on the needs in practical application, and the technical solution is not limited herein.
It is not difficult to find from the above description that, when an instance of the distributed storage system is in an offline state, the read-write access path to the target instance is switched to the standby storage server; during this period, the distributed storage system can still read and write data normally. When data is written, the proxy server writes it into the standby storage server and the publish-subscribe message system respectively. Since the same data is written into both, after the instance is restored from the offline state to the online state, the target data in the publish-subscribe message system is consumed and written into the instance now in the online state, so that the data in the main storage server and the standby storage server can be synchronized.
In addition, an embodiment of the present invention further provides a computer storage medium, where a data synchronization program of a distributed storage system is stored on the computer storage medium, and when the data synchronization program of the distributed storage system is executed by a processor, the steps of the data synchronization method of the distributed storage system are implemented.
Referring to fig. 4, based on the same inventive concept as the foregoing embodiment, an embodiment of the present application further provides a data synchronization apparatus of a distributed storage system, where the distributed storage system includes a server cluster and a proxy server, the server cluster includes a primary storage server and a backup storage server, and the primary storage server includes multiple instances;
the data synchronization device of the distributed storage system comprises:
a status monitoring module 10, configured to monitor a status of an instance in the primary storage server;
the offline processing module 20 is configured to generate offline state information of the target instance if it is monitored that the target instance in the primary storage server is in an offline state, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
the data synchronization module 30 is configured to consume the target data in the publish-subscribe message system and write the target data in the publish-subscribe message system into the target instance in the online state after it is monitored that the target instance is restored from the offline state to the online state, so that the data in the primary storage server and the data in the backup storage server are synchronized.
It should be noted that the data synchronization apparatus of the distributed storage system in this embodiment corresponds to the data synchronization method of the distributed storage system in the foregoing embodiment one to one, and therefore, various embodiments thereof may also refer to the embodiments in the foregoing embodiment, and are not described herein again.
In addition, in an embodiment, a data synchronization device of a distributed storage system is further provided, where the data synchronization device of the distributed storage system includes a processor, a memory, and a data synchronization program of the distributed storage system stored in the memory, and when the data synchronization program of the distributed storage system is executed by the processor, the steps of the data synchronization method of the distributed storage system are implemented.
On the basis of the distributed storage system of the foregoing embodiment, an embodiment of the present application further provides a storage capacity adjustment method, where the storage capacity adjustment method includes:
generating a migration schedule about a target logical slot in the proxy server based on the acquired capacity adjustment requirement, so that the proxy server sets a hit instance in the migration schedule as a migration state after acquiring the migration schedule; when the hit instance is in a migration state, the proxy server transfers a read-write access path of data in the hit instance to the standby storage server;
when the hit instance is in a migration state, acquiring an identifier of target data corresponding to the target logical slot in the original instance;
migrating the target data from the original instance to the target instance and migrating the mapping relation between the target logical slot and the original instance to the target instance based on the identification of the target data, so that the target logical slot and the target instance have the mapping relation; the target instance is an instance which is newly added during capacity expansion or an instance which is left during capacity reduction of the distributed storage system and corresponds to the capacity adjustment requirement.
It should be noted that, in this embodiment, storage capacity adjustment refers to performing capacity expansion or capacity reduction on the distributed storage system of the foregoing embodiments, that is, adding or removing instances in the server cluster, which from a hardware perspective may be embodied as adding or removing primary storage servers.
In particular, the method of this embodiment may be implemented by a program process, which may be installed on a host different from the proxy server, and may establish a communication connection with the server cluster. The method execution process of the present embodiment will be described below with reference to the program process as an execution subject.
First, acquiring capacity adjustment requirements is performed.
In a specific implementation process, storage capacity adjustment refers to performing capacity expansion or capacity reduction on the distributed storage system of the above embodiment, and the capacity adjustment requirement may be obtained from the operation and maintenance system. The function of the operation and maintenance system is as follows: when business data grows or shrinks, a Redis instance expansion/contraction instruction is triggered through the tool/platform.
Next, generating a migration schedule about a target logical slot in the proxy server based on the acquired capacity adjustment requirement, so that the proxy server sets a hit instance in the migration schedule as a migration state after acquiring the migration schedule;
and when the hit instance is in a migration state, the proxy server transfers the read-write access path of the data in the hit instance to the standby storage server.
In a specific implementation process, the migration schedule refers to a migration schedule related to a logical slot, and the target logical slot refers to a logical slot corresponding to target data migration in the capacity adjustment requirement. It should be noted that, in this embodiment, a plurality of logical slots are configured in the proxy server, and a mapping relationship exists between the plurality of logical slots and the plurality of instances. Therefore, for increasing or decreasing instances, re-fragmentation is required, and the logical slot is migrated to reestablish the mapping relationship, thereby implementing data reading and writing.
In addition, the migration schedule includes hit instances, where the hit instances include the original instance mapped by the target logical slot before migration and the target instance mapped by the target logical slot after migration. To keep data readable and writable during migration, after the proxy server acquires the migration schedule, it may set the hit instances in the migration schedule to a migration state; while a hit instance is in the migration state, the proxy server transfers the read-write access path of the data in that instance to the standby storage server. Compared with the prior art, data distribution here is realized through logical slots in the proxy server rather than by sharding at the database bottom layer. Therefore, when an expansion/contraction operation is performed on the database bottom layer of the distributed storage system, read-write access can be temporarily transferred to the standby storage server without affecting data access, which solves the problem in the prior art that, because data distribution is realized through physical slots, a standby scheme cannot be implemented and data cannot be written during expansion/contraction. After the expansion/contraction operation is finished, the data temporarily read and written by the standby storage server can be synchronized back to the cluster, ensuring the data consistency of the distributed storage system.
Therefore, before data migration, a migration schedule needs to be generated for the target logical slot.
As an alternative embodiment, the step of generating a migration schedule for a target logical slot in the proxy server based on the capacity adjustment requirement includes:
and generating a migration schedule about the target logic slot in the proxy server by adopting a balance algorithm based on the capacity adjustment requirement.
In the specific implementation process, the allocation of the migrated logic slot can be more balanced by adopting a balancing algorithm, so that the read-write performance of the expanded/reduced distributed storage system is more stable.
As an optional embodiment, when the capacity adjustment requirement is a capacity expansion requirement, the target instance is an instance in a newly added primary storage server; the step of generating a migration schedule about a target logical slot in the proxy server by using a balancing algorithm based on the capacity adjustment requirement includes:
based on the capacity expansion requirement, obtaining the average number of the logic slots distributed to the capacity expanded instance;
based on the number of the logic slots of each instance before capacity expansion, performing descending order arrangement on each instance before capacity expansion;
traversing each instance after descending order arrangement before capacity expansion, distributing the logical slots exceeding the average number in each instance before capacity expansion to the target instance until the number of the logical slots in the instance before capacity expansion is equal to the average number, and generating a migration schedule of the target logical slots in the proxy server.
In a specific implementation process, when the capacity adjustment requirement is a capacity expansion requirement and the target instance is an instance in a newly added primary storage server, part of the existing logical slots needs to be allocated to the newly added instance, so the logical slots need to be re-allocated. Specifically, the sorted instances are allocated in sequence according to the number of logical slots and the average number, ensuring the balance of the allocated logical slots.
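The expansion balancing steps above can be sketched as follows. This is a hypothetical, in-memory illustration: the instance names, slot counts, and the function name are made up, and the dict of slot lists stands in for the proxy server's mapping table.

```python
# Capacity-expansion balancing: compute the post-expansion average slot
# count, traverse the existing instances from most- to least-loaded, and
# move each instance's excess slots (above the average) to the new
# instance, producing a migration schedule of (slot, source, target).
def expansion_schedule(slots_by_instance: dict, new_instance: str):
    total = sum(len(s) for s in slots_by_instance.values())
    average = total // (len(slots_by_instance) + 1)  # +1 for the new instance
    schedule = []
    # Descending order by current slot count, as described above.
    for inst in sorted(slots_by_instance, key=lambda i: -len(slots_by_instance[i])):
        slots = slots_by_instance[inst]
        while len(slots) > average:
            # Move one excess slot to the new instance (mutates the input).
            schedule.append((slots.pop(), inst, new_instance))
    return schedule
```

Instances already at or below the average are left untouched, so only the overloaded instances give up slots.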
As another optional embodiment, when the capacity adjustment requirement is a capacity reduction requirement, the target instance is an instance in an original primary storage server; the step of generating a migration schedule about a target logical slot in the proxy server by using a balancing algorithm based on the capacity adjustment requirement includes:
obtaining an average number of logical slots allocated to the scaled instances based on the scaling requirements;
based on the number of logic slots of each instance before capacity reduction, carrying out ascending order arrangement on the instances reserved for capacity reduction;
and traversing the instances of the capacity reduction reservation after the ascending arrangement, distributing the logic slots in the reduced instances into the target instances until the number of the logic slots in the instances of the capacity reduction reservation is equal to the average number, and generating a migration schedule of the target logic slots in the proxy server.
In a specific implementation process, when the capacity adjustment requirement is a capacity reduction requirement and the target instance is an instance in the original primary storage server, the logical slots in the removed instances need to be remapped into the target instances, so the logical slots need to be re-allocated. Specifically, the sorted instances are allocated in sequence according to the number of logical slots and the average number, ensuring the balance of the allocated logical slots.
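The capacity-reduction balancing steps above can be sketched in the same style. Again this is an illustrative, in-memory sketch with made-up names; the handling of the remainder left over by integer division is one possible choice, not prescribed by the embodiment.

```python
# Capacity-reduction balancing: pool the slots of the removed instances,
# traverse the retained instances from least- to most-loaded, and top
# each one up to the post-reduction average, emitting a migration
# schedule of (slot, source, target) entries.
def contraction_schedule(slots_by_instance: dict, removed: list):
    remaining = [i for i in slots_by_instance if i not in removed]
    total = sum(len(s) for s in slots_by_instance.values())
    average = total // len(remaining)
    # Pool every slot held by the instances being removed.
    pool = [(slot, inst) for inst in removed for slot in slots_by_instance[inst]]
    for inst in removed:
        slots_by_instance[inst] = []
    schedule = []
    # Ascending order by current slot count, as described above.
    for inst in sorted(remaining, key=lambda i: len(slots_by_instance[i])):
        while pool and len(slots_by_instance[inst]) < average:
            slot, src = pool.pop()
            slots_by_instance[inst].append(slot)
            schedule.append((slot, src, inst))
    # Remainder from integer division goes to the last retained instance.
    for slot, src in pool:
        slots_by_instance[remaining[-1]].append(slot)
        schedule.append((slot, src, remaining[-1]))
    return schedule
```

Every slot from a removed instance ends up mapped to some retained instance, so no logical slot is orphaned by the reduction.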
And then, when the hit instance is in a migration state, acquiring the identifier of the target data corresponding to the target logical slot in the original instance.
In a specific implementation process, before the target data is migrated, in order to subsequently generate the migration command, the instance first needs to be scanned to obtain the identifiers of the target data. During this process, the hit instance is in the migration state, and read-write availability is maintained.
Then, based on the identification of the target data, migrating the target data from the original instance to the target instance, and migrating the mapping relation between the target logical slot and the original instance to the target instance, so that the target logical slot and the target instance have a mapping relation; the target instance is an instance which is newly added during capacity expansion or an instance which is left during capacity reduction of the distributed storage system and corresponds to the capacity adjustment requirement.
In a specific implementation process, a migration command needs to be generated first based on the identification of the target data. As an optional embodiment, the migrating the target data from the original instance to the target instance and the mapping relationship between the target logical slot and the original instance to the target instance based on the identifier of the target data includes:
generating a native migration command based on the identification of the target data;
migrating the target data from the original instance to the target instance and migrating a mapping relationship of the target logical slot to the original instance to the target instance based on the native migration command.
The native migration command is a native Migrate command. In a specific implementation process, the native Migrate command can realize batch migration, improve data migration efficiency, and further improve capacity expansion/reduction efficiency.
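For reference, the batch form of the native Redis MIGRATE command (available since Redis 3.0.6) takes an empty key argument plus a `KEYS` clause listing the keys to move in one call. The sketch below only builds the command's argument list; the host/port values are placeholders, and actually sending it would require a Redis client (e.g. via `execute_command(*args)`).

```python
# Build the argument list for a batch Redis MIGRATE:
#   MIGRATE host port "" destination-db timeout [REPLACE] KEYS key [key ...]
def build_migrate(target_host, target_port, keys, db=0, timeout_ms=5000, replace=True):
    args = ["MIGRATE", target_host, str(target_port), "", str(db), str(timeout_ms)]
    if replace:
        args.append("REPLACE")  # overwrite existing keys on the target
    args += ["KEYS", *keys]
    return args

cmd = build_migrate("10.0.0.5", 6379, ["devicemd5:1101", "devicemd5:1102"])
```

Batching many keys into one MIGRATE call is what gives the efficiency gain the embodiment attributes to the native command.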
In a specific implementation process, first, target data is migrated, and as an optional embodiment, the migration command includes migration plans of the target data corresponding to a plurality of target logical slots; the step of migrating the target data from the original instance to the target instance based on the migration command includes:
creating a task queue based on the migration command, wherein the task queue comprises migration plans of target data corresponding to the target logical slots;
creating a plurality of threads with the same number as the target logic slots on the basis of the task queue;
and the multiple threads respectively obtain the migration plans in the task queue so as to migrate the target data corresponding to the multiple target logic slots from the original examples to the target examples.
It can be understood that, in a specific implementation process, when the migration command includes migration plans of target data corresponding to multiple target logical slots, a shared-memory task queue may be created, and the migration plan of the target data corresponding to each target logical slot is put into the queue as a task. To make full use of the number of CPU cores, multiple threads are created, and each thread acquires tasks from the queue, migrating the target data corresponding to the multiple target logical slots from the original instance to the target instance; this implements concurrent migration and increases the migration speed.
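The task-queue-plus-threads scheme above can be sketched as follows. This is an executable simulation, not the embodiment's actual migration code: plain dicts stand in for the original and target instances, and each migration plan is simply the list of keys belonging to one target logical slot.

```python
# Concurrent migration sketch: one plan per target logical slot is placed
# in a shared queue; a pool of worker threads (one per plan, per the text)
# drains the queue, moving that slot's entries from source to target.
import queue
import threading

def concurrent_migrate(source: dict, target: dict, plans):
    tasks = queue.Queue()
    for plan in plans:  # one plan per target logical slot
        tasks.put(plan)
    lock = threading.Lock()

    def worker():
        while True:
            try:
                keys = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            with lock:  # guard the shared dicts
                for key in keys:
                    target[key] = source.pop(key)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(len(plans))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Because the workers pull independent tasks from one queue, the slots migrate concurrently regardless of which thread picks up which plan.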
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a multimedia terminal (e.g. mobile phone, computer, television receiver, or network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A data synchronization method of a distributed storage system is characterized in that the distributed storage system comprises a server cluster and a proxy server, wherein the server cluster comprises a main storage server and a standby storage server, and the main storage server comprises a plurality of instances;
the data synchronization method of the distributed storage system comprises the following steps:
monitoring a status of an instance in the primary storage server;
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
and after monitoring that the target instance has recovered from the offline state to the online state, consuming the target data in the publish-subscribe message system and writing the target data from the publish-subscribe message system into the target instance now in the online state, so that the data in the main storage server and the standby storage server are synchronized.
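The failover-and-replay flow recited in claim 1 can be illustrated with a minimal Python sketch. Every name here (`FailoverCoordinator`, the in-memory dicts standing in for the storage servers, and the plain list standing in for the publish-subscribe message system) is a hypothetical stand-in for illustration, not part of the patent:

```python
class FailoverCoordinator:
    """Illustrative stand-in for the monitor/proxy cooperation of claim 1."""

    def __init__(self, primary, standby, message_queue):
        self.primary = primary      # dict: instance_id -> key/value store
        self.standby = standby      # standby storage server (key/value dict)
        self.queue = message_queue  # stand-in for a publish-subscribe topic
        self.offline = set()        # instances currently marked offline

    def mark_offline(self, instance_id):
        # Monitoring detected the target instance is offline: record its
        # state so reads/writes for it are redirected to the standby server.
        self.offline.add(instance_id)

    def write(self, instance_id, key, value):
        if instance_id in self.offline:
            # During failover the proxy writes the target data to BOTH the
            # standby server and the publish-subscribe message system.
            self.standby[key] = value
            self.queue.append((instance_id, key, value))
        else:
            self.primary[instance_id][key] = value

    def mark_online(self, instance_id):
        # Instance recovered: consume its buffered messages and replay them
        # into the recovered instance so primary and standby converge.
        self.offline.discard(instance_id)
        remaining = []
        for inst, key, value in self.queue:
            if inst == instance_id:
                self.primary[inst][key] = value
            else:
                remaining.append((inst, key, value))
        self.queue[:] = remaining
```

Buffering the writes in a message system, rather than copying directly from the standby server on recovery, lets the recovered instance replay missed writes at its own pace without blocking the standby server.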
2. The method for data synchronization of a distributed storage system according to claim 1, wherein after the step of monitoring the status of instances in the primary storage server, the method further comprises:
if it is monitored that the target instance in the main storage server is in an online state, generating online state information of the target instance, so that the proxy server, after reading the online state information, sends target data in the target instance to the publish-subscribe message system;
the standby storage server and the publish-subscribe message system have a subscription relationship, so that the standby storage server consumes target data in the publish-subscribe message system and synchronizes data in the main storage server and the standby storage server.
3. The method for data synchronization in a distributed storage system according to claim 1, wherein the step of monitoring the status of the instances in the primary storage server comprises:
periodically detecting the network connection condition of the instance in the main storage server according to a preset time interval;
and if the instance fails to establish a network connection at least three consecutive times, determining that the instance is in an offline state.
4. The method according to claim 3, wherein after the step of determining that the instance is in the offline state if the instance fails to connect to the network at least three consecutive times, the method further comprises:
and when the instance is in the offline state, if at least one network connection to the instance is detected to succeed, determining that the instance has recovered from the offline state to the online state.
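The health-check rule of claims 3 and 4 — declare an instance offline after at least three consecutive failed connection attempts, and online again after a single successful one — can be sketched as a small state machine. The class and method names are illustrative, not taken from the patent:

```python
class InstanceHealthMonitor:
    """Tracks one instance's state from periodic connection probes."""

    OFFLINE_THRESHOLD = 3  # consecutive failed probes before going offline

    def __init__(self):
        self.failures = 0
        self.online = True

    def record_probe(self, connected: bool):
        # Called once per probe at the preset time interval of claim 3.
        if connected:
            # A single successful probe restores the instance from the
            # offline state to the online state (claim 4).
            self.failures = 0
            self.online = True
        else:
            self.failures += 1
            if self.failures >= self.OFFLINE_THRESHOLD:
                self.online = False
```

Requiring several consecutive failures before failover avoids flapping on a single dropped probe, while a single success is enough to resume normal routing.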
5. The data synchronization method of the distributed storage system according to claim 1, wherein the distributed storage system further comprises a coordination server;
wherein the step of generating offline state information of the target instance if the target instance in the main storage server is monitored to be in the offline state, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information, comprises:
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, and storing the offline state information in the coordination server, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information from the coordination server.
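Claim 5's use of a coordination server as the channel between the monitor and the proxy might look like the following sketch, with a plain dictionary standing in for a ZooKeeper-style coordination service; the node paths and function names are assumptions for illustration:

```python
class CoordinationServer:
    """Minimal stand-in for a ZooKeeper-style coordination service."""

    def __init__(self):
        self.nodes = {}

    def set(self, path, value):
        self.nodes[path] = value

    def get(self, path, default=None):
        return self.nodes.get(path, default)


def publish_offline(coord, instance_id):
    # The monitor stores the target instance's offline state information
    # in the coordination server.
    coord.set(f"/instances/{instance_id}/state", "offline")


def proxy_route(coord, instance_id):
    # The proxy reads the state information from the coordination server
    # and switches the read-write access path accordingly.
    state = coord.get(f"/instances/{instance_id}/state", "online")
    return "standby" if state == "offline" else "primary"
```

Going through a coordination server decouples the monitor from the proxy: neither needs a direct connection to the other, and multiple proxies observe the same authoritative state.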
6. The data synchronization method of the distributed storage system according to any one of claims 1 to 5, wherein a plurality of logical slots are configured in the proxy server, and the plurality of logical slots have a mapping relationship with the plurality of instances.
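The slot-to-instance mapping of claim 6 resembles the hash-slot routing used by Codis-style Redis proxies: a key hashes to a fixed logical slot, and each slot maps to one instance. A sketch, with the slot count and the CRC32 hash chosen as assumptions (the patent specifies neither):

```python
import zlib

NUM_SLOTS = 1024  # number of logical slots in the proxy (assumed)


def slot_for_key(key: str) -> int:
    # Hash the key into a fixed logical slot.
    return zlib.crc32(key.encode()) % NUM_SLOTS


def build_slot_map(instances):
    # Spread the logical slots evenly across the main server's instances.
    return {slot: instances[slot % len(instances)] for slot in range(NUM_SLOTS)}


def route(key, slot_map):
    # The proxy resolves a key to the instance owning its slot.
    return slot_map[slot_for_key(key)]
```

The indirection through slots means that rebalancing only requires reassigning slots to instances; keys never need to be rehashed.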
7. The data synchronization method of the distributed storage system according to claim 6, wherein the primary storage server is a Redis storage server, and the instance is a Redis instance; the standby storage server is an LMDB storage server.
8. A data synchronization device of a distributed storage system, characterized in that the distributed storage system comprises a server cluster and a proxy server, the server cluster comprises a main storage server and a standby storage server, and the main storage server comprises a plurality of instances;
the data synchronization device of the distributed storage system comprises:
the state monitoring module is used for monitoring the state of the instance in the main storage server;
the offline processing module is used for generating offline state information of the target instance if it is monitored that the target instance in the main storage server is in an offline state, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; while the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data into both the standby storage server and a publish-subscribe message system;
and the data synchronization module is used for consuming the target data in the publish-subscribe message system and writing the target data in the publish-subscribe message system into the target instance in the online state after monitoring that the target instance is recovered from the offline state to the online state, so that the data in the main storage server and the data in the standby storage server are synchronized.
9. A data synchronization device of a distributed storage system, characterized in that the data synchronization device comprises a processor, a memory, and a data synchronization program of the distributed storage system stored in the memory, wherein the data synchronization program, when executed by the processor, implements the steps of the data synchronization method of the distributed storage system according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon a data synchronization program of a distributed storage system, which when executed by a processor implements the steps of the data synchronization method of the distributed storage system according to any one of claims 1 to 7.
CN202011101540.0A 2020-10-14 2020-10-14 Data synchronization method, device, equipment and medium of distributed storage system Pending CN112243030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011101540.0A CN112243030A (en) 2020-10-14 2020-10-14 Data synchronization method, device, equipment and medium of distributed storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011101540.0A CN112243030A (en) 2020-10-14 2020-10-14 Data synchronization method, device, equipment and medium of distributed storage system

Publications (1)

Publication Number Publication Date
CN112243030A (en) 2021-01-19

Family

ID=74169082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011101540.0A Pending CN112243030A (en) 2020-10-14 2020-10-14 Data synchronization method, device, equipment and medium of distributed storage system

Country Status (1)

Country Link
CN (1) CN112243030A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691599A (en) * 2021-08-16 2021-11-23 银清科技有限公司 Method and device for data synchronization between service instances
CN114785807A (en) * 2022-03-16 2022-07-22 深信服科技股份有限公司 Data processing method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US11636015B2 (en) Storage system and control software deployment method
US9965203B1 (en) Systems and methods for implementing an enterprise-class converged compute-network-storage appliance
CN110377395B (en) Pod migration method in Kubernetes cluster
WO2019154394A1 (en) Distributed database cluster system, data synchronization method and storage medium
CN102455942B (en) Method and system for dynamic migration of WAN virtual machines
US8904006B2 (en) In-flight block map for a clustered redirect-on-write filesystem
CN102594849B (en) Data backup and recovery method and device, virtual machine snapshot deleting and rollback method and device
CN102355369B (en) Virtual clustered system as well as processing method and processing device thereof
CN112230853A (en) Storage capacity adjusting method, device, equipment and storage medium
CN111124475B (en) Method for storage management, electronic device, and computer-readable storage medium
CN110247984B (en) Service processing method, device and storage medium
US20150169718A1 (en) System and method for supporting persistence partition discovery in a distributed data grid
CN112235405A (en) Distributed storage system and data delivery method
CN111274310A (en) Distributed data caching method and system
CN105069152B (en) data processing method and device
CN105493474A (en) System and method for supporting partition level journaling for synchronizing data in a distributed data grid
CN112243030A (en) Data synchronization method, device, equipment and medium of distributed storage system
CN110377664B (en) Data synchronization method, device, server and storage medium
CN111818188B (en) Load balancing availability improving method and device for Kubernetes cluster
CN113032091B (en) Method, system and medium for improving storage performance of virtual machine by adopting AEP
CN110807039A (en) Data consistency maintenance system and method in cloud computing environment
CN113254437B (en) Batch processing job processing method and device
CN113934575A (en) Big data backup system and method based on distributed copy
CN111400098A (en) Copy management method and device, electronic equipment and storage medium
CN113439258A (en) Hosting virtual machines on secondary storage systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination