CN114598711A - Data migration method, device, equipment and medium - Google Patents

Data migration method, device, equipment and medium

Info

Publication number
CN114598711A
CN114598711A
Authority
CN
China
Prior art keywords
node
database service
slave node
server
slave
Prior art date
Legal status
Granted
Application number
CN202210324045.9A
Other languages
Chinese (zh)
Other versions
CN114598711B (en)
Inventor
郭添伟
Current Assignee
Bigo Technology Pte Ltd
Original Assignee
Bigo Technology Pte Ltd
Priority date
Filing date
Publication date
Application filed by Bigo Technology Pte Ltd
Priority claimed from CN202210324045.9A
Publication of CN114598711A
Application granted; publication of CN114598711B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Hardware Redundancy (AREA)

Abstract

The application discloses a data migration method, apparatus, device, and medium. After a migration instruction for a database service is acquired, a first slave node is newly created for the database service in the second server, and the data stored in the master node corresponding to the database service is synchronized to the first slave node. If the target node is determined to be the master node corresponding to the database service, then when the first slave node has completed data synchronization, its state is updated to a writable state, and the address information of the master node corresponding to the database service in the name service is updated according to the address information of the first slave node, so that clients can write data to the first slave node via the master-node address recorded in the name service. The target node is then closed, and the first slave node is determined to continue working as the master node corresponding to the database service. This ensures the stability and availability of the database service and reduces the time during which the database service is abnormal due to data migration.

Description

Data migration method, device, equipment and medium
Technical Field
The present application relates to the field of data migration technologies, and in particular, to a data migration method, apparatus, device, and medium.
Background
Redis is an open-source, networked, in-memory, distributed, optionally persistent key-value store written in ANSI C. Owing to its simplicity, high performance, and stability, many systems choose Redis as their service database.
Under a microservice architecture, a system may consist of hundreds or thousands of microservices, each with its own independent database, and therefore uses a comparable number of Redis instances. Here, a Redis instance refers to a Redis process started on a server. To ensure high availability of a microservice while avoiding deploying too many Redis instances per microservice, each microservice is generally associated with two Redis instances, one serving as the Redis master node (Master) and the other as the Redis slave node (Replica), as a disaster-tolerance scheme. Meanwhile, to keep resources balanced, the resource manager may frequently migrate the Redis instances associated with microservices among different servers at irregular times. Under this disaster-tolerance scheme, if, while a microservice's Redis instance is being migrated, the associated Redis master node fails or the server hosting it becomes abnormal, a new master node cannot be determined automatically for the microservice, and the switch must be made manually. Until the manual switch completes, the microservice cannot provide service normally, which affects its availability. Therefore, improving the availability of the database service has become the key to improving microservice quality.
Disclosure of Invention
The embodiment of the application provides a data migration method, apparatus, device, and medium, to solve the problem of low availability of existing database services.
The embodiment of the application provides a data migration method, which comprises the following steps:
acquiring a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
newly creating a first slave node for the database service in the second server, and synchronizing data stored in the master node corresponding to the database service to the first slave node;
if the target node is the master node corresponding to the database service, updating the state of the first slave node to a writable state when it is determined that the first slave node has completed data synchronization, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node;
and closing the target node, and determining that the first slave node continues to work as the master node corresponding to the database service.
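The four claimed steps can be sketched as a single migration routine. This is a minimal illustrative model, not the patent's implementation; the `Node` class, `migrate` function, and all field names are assumptions introduced here.

```python
# Minimal sketch of the claimed flow; Node, migrate, and every field name
# are hypothetical illustrations, not identifiers from the patent.

class Node:
    def __init__(self, address, writable=False):
        self.address = address      # (host, port)
        self.writable = writable
        self.data = {}
        self.running = True

def migrate(service, target, master, new_address, name_service):
    # Step 1: newly create a first slave node on the second server.
    first_slave = Node(new_address)
    # Step 2: synchronize the data stored in the master to the first slave.
    first_slave.data = dict(master.data)
    # Step 3: if the target node is the master, promote the first slave once
    # synchronization completes: make it writable and repoint the master
    # address recorded in the name service.
    if target is master:
        first_slave.writable = True
        name_service[service] = first_slave.address
    # Step 4: close the target node; the first slave continues to work.
    target.running = False
    return first_slave
```

Because the name-service record is repointed before the target node is closed, clients resolving the master address never observe a moment with no writable master.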
The embodiment of the application provides a data migration apparatus, which includes:
an acquisition unit, configured to acquire a migration instruction of a database service, wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
a first processing unit, configured to newly create a first slave node for the database service in the second server, and synchronize data stored in the master node corresponding to the database service to the first slave node;
an updating unit, configured to, if the target node is the master node corresponding to the database service, update the state of the first slave node to a writable state when it is determined that the first slave node has completed data synchronization, and update the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node;
and a second processing unit, configured to close the target node and determine that the first slave node continues to work as the master node corresponding to the database service.
The present application provides a data migration device comprising at least a processor and a memory, the processor being configured to implement the steps of the data migration method as described above when executing a computer program stored in the memory.
The present application provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of the data migration method described above.
The present application provides a computer program product, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the steps of the data migration method described above.
In the data migration process, after the migration instruction of the database service is acquired, the first slave node can be newly created for the database service in the second server, and the data stored in the master node corresponding to the database service is synchronized to the first slave node, so no manual intervention is needed and the efficiency of data migration is improved. If the target node is determined to be the master node corresponding to the database service, then when it is determined that the first slave node has completed data synchronization, the state of the first slave node is updated to a writable state, and the address information of the master node corresponding to the database service in the name service is updated according to the address information of the first slave node, so that the client can write data to the first slave node via the master-node address recorded in the name service. The target node is then closed and the first slave node is determined to work as the master node corresponding to the database service, which ensures that the database service always has a corresponding master node, guarantees the stability and availability of the database service, and reduces the time during which the database service is abnormal due to data migration.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a data migration process according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a specific data migration process provided herein;
Fig. 3 is a schematic diagram of node switching corresponding to a specific target node according to an embodiment of the present application;
Fig. 4 is a schematic diagram of node switching corresponding to a specific target node according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a master node failover provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a data migration apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a data migration device according to an embodiment of the present application.
Detailed Description
The present application will now be described in further detail with reference to the accompanying drawings, wherein like reference numerals refer to like elements throughout. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be embodied as a system, apparatus, device, method, or computer program product. Thus, the present application may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
For convenience of understanding, some concepts related to the embodiments of the present application are explained below:
redis (Remote Dictionary Server): an open source, network-enabled, memory-based, distributed, optionally persistent key-value pair storage database written using ANSI C.
Sentinel (Sentinel): a special Redis process that provides high-availability capabilities for Redis.
Master node (Master): the master node of a Redis deployment; it serves both read and write operations.
Slave node (Replica): a slave node of a Redis deployment; it can be understood as a copy of the Master and typically serves only read operations.
Name service: a service that stores basic information about services; a service can obtain information about a peer service by querying the name service.
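As a rough illustration of the name-service concept, it can be modeled as a mapping from service names to the address of each service's master node, which clients query before connecting. The table contents and function name below are assumptions, not part of the patent.

```python
# Hypothetical name-service table: service name -> (host, port) of its master.
name_service = {
    "user-db": ("10.0.0.5", 6379),
    "order-db": ("10.0.0.6", 6379),
}

def resolve_master(service):
    """A client queries the name service for the master address before writing."""
    return name_service[service]
```

Because clients always resolve the address at write time, repointing a single record in this table is enough to redirect all subsequent writes to a new master.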
In a traditional architecture, Redis instances are generally divided along business dimensions: all services related to the same business use the same Redis instance, so a system comprising multiple businesses usually uses only dozens of Redis instances. These instances are typically deployed on designated servers and do not migrate frequently between servers. Through a master-slave Redis deployment combined with the Sentinel technique, high availability of the database service can be ensured. When data in a Redis instance needs to be migrated, maintenance personnel can perform the migration manually or by script within a scheduled window. During such a migration, any one of the Replicas can serve as the new Master, so the migration can be completed by switching via a Sentinel command and updating the routing information.
With the development of system architectures, systems employing a microservice architecture have emerged. Under such an architecture, a system may consist of hundreds or thousands of microservices, each with its own independent database, and therefore uses a comparable number of Redis instances. Here, a Redis instance refers to a Redis process started on a server. To ensure high availability of a microservice while avoiding deploying too many Redis instances per microservice, each microservice is generally associated with two Redis instances, one serving as the Redis master node (Master) and the other as the Redis slave node (Replica), as a disaster-tolerance scheme. Meanwhile, to keep resources balanced, the resource manager frequently migrates the Redis instances associated with microservices among different servers at irregular times. For example, to reduce the time a service needs to call data in a Redis instance, the instance may be deployed on the same server as the service; or, to balance server loads, the data stored in a Redis instance on a heavily loaded server is migrated to a server with a lower load. Under this disaster-tolerance scheme, if, while a microservice's Redis instance is being migrated, the associated Redis master node fails or its server becomes abnormal, a new master node cannot be determined automatically for the microservice and the switch must be made manually, so the data migration process consumes considerable manpower and neither its quality nor its efficiency can be guaranteed. Moreover, until the manual switch completes, the microservice cannot provide service normally, which affects its availability.
To solve the above problems, the present application provides a data migration method, apparatus, device, and medium. In the data migration process, after the migration instruction of the database service is acquired, the first slave node can be newly created for the database service in the second server, and the data stored in the master node corresponding to the database service is synchronized to the first slave node, so no manual intervention is needed and the efficiency of data migration is improved. If the target node is determined to be the master node corresponding to the database service, then when it is determined that the first slave node has completed data synchronization, the state of the first slave node is updated to a writable state, and the address information of the master node corresponding to the database service in the name service is updated according to the address information of the first slave node, so that the client can write data to the first slave node via the master-node address recorded in the name service. The target node is then closed and the first slave node is determined to work as the master node corresponding to the database service, which ensures that the database service always has a corresponding master node, guarantees the stability and availability of the database service, and reduces the time during which the database service is abnormal due to data migration.
It should be noted that the application scenarios mentioned in the foregoing embodiments are merely exemplary scenarios provided for convenience of description and are not intended to limit the application scenarios of the data migration method, apparatus, device, and medium provided in the embodiments of the present application. Those skilled in the art should understand that these may be applied to any scenario requiring data migration, for example, bringing a product online.
Example 1:
fig. 1 is a schematic diagram of a data migration process provided in an embodiment of the present application, where the process includes:
s101: acquiring a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server.
The data migration method provided by the embodiment of the application is applied to an electronic device (for convenience of description, referred to as a data migration device), which may be an intelligent device such as a mobile terminal or a computer, or a server such as an application server or a resource scheduler.
In this embodiment of the present application, when a certain node of a database service needs to be migrated, for example a certain Redis instance of a Redis service, that node may be determined as the target node, and from the servers other than the server where the node currently resides (denoted as the first server), the server to which the target node is to be migrated (denoted as the second server) is determined. A migration instruction for the database service is then generated from the address information of the target node, the first server, and the second server. The target node may be either the master node or a slave node corresponding to the database service. Based on the migration instruction, the data migration device can migrate the target node of the database service from the first server to the second server using the data migration method provided by the present application.
The second server may be selected from the other servers according to their respective loads; for example, the other server with the smallest load may be chosen as the second server. Alternatively, the second server may be selected according to the remaining storage space of each of the other servers. Of course, it may also be designated by an operator.
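The selection rules above can be sketched as a small helper. The function name, the dict-based server records, and the `by` parameter are illustrative assumptions introduced here, not terminology from the patent.

```python
def pick_second_server(servers, first_server, by="load"):
    """Choose the migration destination among servers other than the first
    server: the smallest load by default, or the largest remaining storage
    space when by='free_space'. Server records are hypothetical dicts."""
    candidates = [s for s in servers if s["name"] != first_server]
    if by == "load":
        return min(candidates, key=lambda s: s["load"])["name"]
    return max(candidates, key=lambda s: s["free_space"])["name"]
```

An operator-designated server would simply bypass this helper, as the text notes.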
It should be noted that the specific method for determining the load of each of the other servers belongs to the prior art; it may be set flexibly according to actual requirements in a specific implementation and is not specifically limited herein.
In one example, the determination that data migration is required may be made by at least one of:
In a first mode, while the master node corresponding to the database service works normally, alarm information indicating that the first server is abnormal is received.
In practice, the first server may suffer a hardware abnormality, undergo maintenance, and so on, in which case the target node risks being unable to provide service normally. Therefore, in the present application, when the first server becomes abnormal, the server that monitors the health status of the first server detects the abnormality and generates alarm information indicating that the first server is abnormal. If the data migration device receives this alarm information while the master node corresponding to the database service is working normally, the nodes on the first server are at risk of being unable to provide service normally, so it is determined that the database service provided on the first server needs data migration.
In a second mode, it is determined that the load of the first server reaches a preset load threshold.
To keep the load balanced among servers, in the present application, when the data migration device detects that the load of a certain server is too high, it can migrate the data stored on that server and acquire a migration instruction for the database service. For example, with a load threshold preset, the data migration device obtains the load of each server in real time and, for each server, determines whether its load has reached the preset threshold. If a server's load reaches the threshold, the load is too high and it is determined that the database service provided on that server needs data migration. If not, the load is acceptable, no migration is needed for now, and the next server is examined.
For example, after the data migration device acquires the load of the first server and determines that it reaches the preset load threshold, it determines that the database service provided on the first server needs data migration.
It should be noted that, in actual use, the two modes may be combined: when the master node corresponding to the database service is working normally, alarm information indicating that the first server where the target node resides is abnormal is received, and it is determined that the load of the first server has reached the preset load threshold; it is then determined that the target node needs data migration.
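The two trigger modes can be expressed as a simple predicate. This sketch treats the modes as alternatives (either one triggering migration), which matches the "at least one of" wording; the function and parameter names are assumptions.

```python
def needs_migration(master_ok, alarm_received, load, load_threshold):
    """Mode 1: the master works normally and an abnormality alarm for the
    first server has been received. Mode 2: the first server's load has
    reached the preset threshold. Either condition triggers migration."""
    mode1 = master_ok and alarm_received
    mode2 = load >= load_threshold
    return mode1 or mode2
```

A deployment that insists on the combined condition described above would use `mode1 and mode2` instead.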
S102: and newly establishing a first slave node for the database service in the second server, and synchronizing data stored in the master node corresponding to the database service to the first slave node.
In the present application, after the data migration device receives the migration instruction of the database service and determines the second server, a new slave node (denoted as the first slave node) may be created for the database service in the second server. For example, the data migration device may send the second server a control instruction for creating the first slave node, so that the second server, after receiving it, creates the first slave node for the database service and returns the first slave node's address information to the data migration device.
In an example, during data migration, if the master node needs migration, the data it stores may be synchronized directly into the newly created node; if a slave node needs migration, the master node corresponding to the database service may be determined and the data it stores synchronized into the new node. Therefore, in the present application, after receiving a migration instruction for a database service, the data migration device may determine the master node corresponding to that service, have the second server create the first slave node for the service, and then synchronize the data stored in that master node into the first slave node.
For example, if the type of the target node is master, that is, the target node is the master node corresponding to the database service, the data migration device determines that the data to be synchronized originates from the server where the target node is located. After creating the first slave node for the database service on the second server, the data migration device sends the address information of the first slave node and of the second server to the target node of the database service in the first server, to notify the database service on the first server to synchronize the data stored in the target node into the first slave node. The database service on the first server subsequently synchronizes the target node's data into the first slave node according to the received address information.
For another example, if the type of the target node is slave, the data migration device determines that the data to be synchronized originates from the server hosting the master node corresponding to the target node (denoted as the third server). After creating the first slave node for the database service on the second server, the data migration device sends the address information of the first slave node and of the second server to the database service in the third server, to notify it to synchronize the data stored in the master node into the first slave node. The database service on the third server subsequently synchronizes the data stored in its master node into the first slave node according to the received address information, so that the data stored in the target node is synchronized into the first slave node.
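The two cases above differ only in which server the synchronized data originates from, which can be captured in one line. The function name and string type tags are hypothetical illustrations.

```python
def sync_source_server(target_type, first_server, third_server):
    """If the target node is the master, the data to synchronize originates
    from the first server (where the target lives); if it is a slave, it
    originates from the server hosting the corresponding master (the
    'third server' in the text). Either way, master data feeds the new node."""
    return first_server if target_type == "master" else third_server
```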
In the present application, a fault tolerance mechanism is configured in advance to ensure the availability of the database service during data migration. For example, after the migration instruction of the database service is acquired, it may be determined whether the nodes corresponding to the current database service satisfy the preset fault tolerance mechanism. If they do, the data stored in the master node corresponding to the database service can be synchronized into the first slave node; if they do not, the subsequent step of synchronizing that data into the first slave node is not executed, thereby ensuring the availability and stability of the database service.
The nodes corresponding to the database service comprise the master node and the slave nodes corresponding to the database service.
In one example, whether the nodes corresponding to the database service satisfy the preset fault tolerance mechanism may be determined in at least one of the following ways:
and in the mode 1, determining that the master node corresponding to the database service and at least one slave node corresponding to the database service both work normally.
In order to ensure that a slave node disaster tolerance exists at any time in a data migration process, or to avoid data loss of a database service caused by a failure of a target node in the data migration process, in the application, if data migration is to be performed, it is required to ensure that a master node corresponding to the database service and at least one slave node corresponding to the database service both work normally to ensure that data stored in a subsequent target node is migrated, and even if the target node fails, data stored in the target node can be restored through other nodes corresponding to the database service except the target node, so that data loss stored in the target node is avoided. Therefore, whether the node corresponding to the database service meets a preset fault-tolerant mechanism or not can be determined by judging whether the master node corresponding to the database service and the at least one slave node corresponding to the database service work normally or not. If the master node corresponding to the database service and the at least one slave node corresponding to the database service both work normally and it is determined that the node corresponding to the database service meets a preset fault tolerance mechanism, a subsequent step of synchronizing the data stored in the master node corresponding to the database service to the first slave node may be performed. If the master node corresponding to the database service or all the slave nodes corresponding to the database service do not work normally, and the node corresponding to the database service is determined not to meet the preset fault tolerance mechanism, the subsequent step of synchronizing the data stored in the master node corresponding to the database service to the first slave node is not performed.
In mode 2, it is determined that the target node is the master node corresponding to the database service and that the second server differs from the server where each slave node corresponding to the database service is located.
To avoid a single-point problem that would affect the stability of the database service, the master node must not be migrated to a server that hosts any of the service's slave nodes. Whether the preset fault tolerance mechanism is satisfied can therefore be determined by judging whether the target node's type is master and whether the second server differs from the servers hosting the slave nodes. If the target node is the master and the second server differs from every slave node's server, the mechanism is satisfied and the subsequent step of synchronizing the master node's data into the first slave node may be performed. If the target node is the master but the second server is the same as some slave node's server, the mechanism is not satisfied and the synchronization step is not performed.
Mode 3: determining that the target node is a slave node corresponding to the database service, and that the second server is different from the server where the master node corresponding to the database service is located.
Similarly, to avoid a single point of failure that would affect the stability of the database service, a slave node of the database service cannot be migrated to the server where the master node corresponding to the database service is located. Therefore, in the present application, whether the node corresponding to the database service meets the preset fault-tolerance mechanism may be determined by judging whether the type of the target node is a slave node and whether the second server differs from the server where the master node corresponding to the database service is located. If the target node is a slave node corresponding to the database service and the second server differs from the server of the master node, it is determined that the preset fault-tolerance mechanism is met, and the subsequent step of synchronizing the data stored in the master node corresponding to the database service to the first slave node may be performed. If the target node is a slave node but the second server is the same as the server of the master node, the preset fault-tolerance mechanism is not met, and the subsequent synchronization step is not performed.
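The three fault-tolerance modes above can be combined into a single pre-migration check. The following Python sketch is purely illustrative; the `Node` class, its field names, and the function name are assumptions for illustration and not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class Node:
    role: str      # "master" or "slave"
    server: str    # identifier of the server hosting the node
    healthy: bool  # whether the node currently works normally

def satisfies_fault_tolerance(target: Node, second_server: str,
                              master: Node, slaves: list) -> bool:
    # Mode 1: the master and at least one slave must both work normally,
    # so the migrated data can be restored if the target node fails.
    if not (master.healthy and any(s.healthy for s in slaves)):
        return False
    # Mode 2: a master may not be migrated onto a server hosting any slave.
    if target.role == "master":
        return all(s.server != second_server for s in slaves)
    # Mode 3: a slave may not be migrated onto the master's server.
    return second_server != master.server
```

Only when this check passes does the subsequent synchronization of the master node's data to the first slave node proceed.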
S103: if the target node is the master node corresponding to the database service, when it is determined that the first slave node has completed data synchronization, updating the state of the first slave node to a writable state, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node.
The master node can perform both read and write operations, while a slave node can only perform read operations, so requests (queries) sent by clients are mainly processed by the master node. Therefore, when the type of the target node is the master node and it is determined that the first slave node has completed data synchronization, the state of the first slave node may be updated to a writable state, and the address information of the master node corresponding to the database service in the name service may be updated according to the address information of the first slave node. The client can then write data to the first slave node through the master-node address recorded in the name service, which facilitates the subsequent determination of the first slave node as the master node corresponding to the database service.
In an application scenario, in order to conveniently monitor the state of each node and ensure the stability and availability of the database service, Sentinel may be used to monitor each node, with the routing information and type of each node stored in Sentinel.
S104: and closing the target node, and determining that the first slave node works as a master node corresponding to the database service.
If the target node is the master node corresponding to the database service, the master node of the database service must be switched after the first slave node completes data synchronization. Switching the master node with a conventional Sentinel command takes a certain amount of time, so the client is disconnected and must reconnect while the switch is in progress; its requests to the database service keep failing during reconnection, and data requested before the switch completes may still be written to the old master node corresponding to the database service, which risks data loss. Therefore, in the present application, after the state of the first slave node is updated to the writable state, the target node is closed and it is determined that the first slave node continues to operate as the master node corresponding to the database service. After the target node is closed, the client queries the name service to acquire the latest master-node address, and thus switches from the target node to the first slave node; that is, the master node of the database service is switched. This guarantees that the database service always has a corresponding master node, ensures the stability and availability of the database service, and reduces the time during which the database service is abnormal due to data migration.
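The ordering of S103 and S104 can be sketched as follows. This is a minimal simulation under assumed class and field names (`FakeNode`, `FakeNameService`, etc.), not the patent's implementation; its point is the sequence: make the new node writable, repoint the name service, and only then close the old master:

```python
class FakeNode:
    def __init__(self, addr, role):
        self.addr, self.role = addr, role
        self.state, self.closed = "read-only", False

class FakeNameService:
    def __init__(self, master_addr):
        self.master_addr = master_addr

def switch_master(target, first_slave, name_service):
    # Assumes data synchronization to first_slave has already completed (S103).
    first_slave.state = "writable"               # allow writes on the new node
    name_service.master_addr = first_slave.addr  # clients now resolve to it
    target.closed = True                         # S104: close the old master
    first_slave.role = "master"                  # first slave works as master
```

Because the old master is closed only after the name service is updated, a reconnecting client resolves directly to the promoted first slave.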
In another example, after updating address information of a master node corresponding to the database service in a name service according to address information of the first slave node and before the target node is turned off, the method further includes:
if the target node is the master node corresponding to the database service and a request sent by a client is received, sending a rejection instruction to the client and determining that the first slave node works as the master node corresponding to the database service; wherein the rejection instruction is used to refuse to respond to the request sent by the client.
In the present application, after the data migration device updates the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node, if the data migration device receives a request sent by a client, it may, without closing the target node, refuse to respond to the request by sending a rejection instruction (for example, a Redis command) to the client, and determine that the first slave node operates as the master node corresponding to the database service. After receiving the rejection instruction, the client queries the name service to acquire the latest master-node address of the database service; the target node can then be actively closed, realizing the switch of the master node of the database service.
In one example, the method further comprises:
if the target node is a slave node corresponding to the database service and the first slave node completes data synchronization, closing the target node and determining that the first slave node continues to work as the slave node corresponding to the database service.
In the present application, the type of the target node may also be a slave node, that is, the target node is a slave node corresponding to the database service. Therefore, if the target node is a slave node corresponding to the database service and it is determined that the first slave node has completed data synchronization, the target node may be directly closed and it may be determined that the first slave node continues to operate as a slave node corresponding to the database service.
It should be noted that the data migration device may send the address information of the first slave node to the device storing the address information of the slave node corresponding to the database service, so that the devices may update the address information of the slave node corresponding to the database service according to the address information of the first slave node, thereby ensuring the stability of the database service and facilitating the devices to manage the database service.
For example, assume Master (M1) is deployed on server A and Replica (R1) is deployed on server B, and R1 needs to be migrated from server B to server C. First, a new Replica (R2) is created on server C and the data stored in M1 is synchronized to R2; then R1 is closed and its cached information is cleared, for example deleted from Sentinel, and it is determined that R2 continues to work as the slave node corresponding to the database service.
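The replica-migration example above (R1 on server B replaced by R2 on server C) can be sketched as a small simulation. All class and helper names here are illustrative assumptions; the Sentinel cache is modeled as a plain set of addresses:

```python
class Server:
    def __init__(self, name):
        self.name = name
    def create_replica(self, master):
        # Build a new, empty Replica on this server (e.g. R2 on server C).
        return Replica(addr=f"{self.name}:6379", data=None)

class Replica:
    def __init__(self, addr, data):
        self.addr, self.data, self.closed = addr, data, False
    def sync_from(self, master):
        self.data = dict(master.data)  # full copy of the master's data
    def shutdown(self):
        self.closed = True

def migrate_replica(master, old_replica, new_server, sentinel_cache):
    new_replica = new_server.create_replica(master)  # build R2 on server C
    new_replica.sync_from(master)                    # sync M1's data into R2
    old_replica.shutdown()                           # close R1
    sentinel_cache.discard(old_replica.addr)         # clear R1's cached info
    return new_replica                               # R2 continues as the slave
```

The old replica's cached entry is removed only after the new replica holds a full copy, so a healthy slave exists throughout the migration.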
In this application, when the type of the target node is the master node, during the migration of the target node from the first server to the second server, a slave node corresponding to the database service (denoted as the second slave node) must not be determined as the master node. The creation time of the second slave node is earlier than that of the first slave node. Therefore, after the migration instruction of the database service is obtained and before the first slave node is determined to operate as the master node corresponding to the database service, the second slave node is kept as a slave node corresponding to the database service. Illustratively, keeping the second slave node as a slave node of the database service may be achieved by modifying the priority of the second slave node.
In an example, during data migration an abnormality may occur in any node corresponding to the database service, and to ensure the stability and high availability of the database service, in the present application the nodes corresponding to the database service may be handled differently according to the situation. Illustratively, the cases in which a node corresponding to the database service fails include the following:
Case 1: when the type of the target node is the master node, if it is determined that the target node is abnormal and the first slave node has not completed data synchronization, stop synchronizing the data to the first slave node and roll back the nodes corresponding to the database service.
In this application, rolling back the nodes corresponding to the database service means deleting the first slave node if it has been created, and reverting to the master node and slave node(s) that the database service used before the data migration.
In an example, after the nodes corresponding to the database service are rolled back, considering that the target node, that is, the master node corresponding to the database service, is abnormal, the master node cannot provide service externally. Therefore, to ensure high availability of the database service, in the present application, when the master node corresponding to the database service is abnormal, an automatic failover mechanism may be adopted to actively switch a slave node corresponding to the database service to become the master node corresponding to the database service.
For example, if it is determined that the master node corresponding to the database service has a fault, a target slave node that is allowed to be switched to the master node corresponding to the database service is determined from at least one slave node corresponding to the database service;
determining that the state of the target slave node is a writable state, and updating address information of a master node corresponding to the database service in the name service according to the address information of the target slave node;
and determining that the target slave node works as a master node corresponding to the database service.
In this application, when it is determined that the master node corresponding to the database service has failed, a target slave node that is allowed to be switched to the master node may be determined from the slave nodes corresponding to the database service. For example, the slave node with the highest priority may be determined as the target slave node according to the priorities of the at least one slave node corresponding to the database service; or any slave node may be selected from the at least one slave node as the target slave node; or the target slave node may be determined by manual configuration. These methods may also be combined: for example, the highest-priority slave node is determined as the target slave node, and if all slave nodes corresponding to the database service have the same priority, any one of them may be determined as the target slave node.
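The priority-based selection strategy above can be sketched as follows. The dictionary field names are illustrative assumptions; the point is that the highest-priority healthy slave wins, and with tied priorities the first candidate is taken (covering the "any slave node" fallback):

```python
def pick_target_slave(slaves):
    # Only slaves that work normally are candidates for promotion.
    candidates = [s for s in slaves if s["healthy"]]
    if not candidates:
        return None
    # Highest priority wins; on a tie, max() keeps the first maximal element.
    return max(candidates, key=lambda s: s["priority"])
```

A manually configured choice would simply bypass this function with an operator-supplied address.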
Considering that the master node can perform read and write operations while a slave node can only perform read operations, requests (queries) sent by clients are mainly processed by the master node. Therefore, in the present application, after the target slave node is determined, its state may be updated to a writable state, and the address information of the master node corresponding to the database service in the name service may be updated according to the address information of the target slave node, so that the client can write data to the target slave node through the master-node address recorded in the name service, facilitating the subsequent determination of the target slave node as the master node corresponding to the database service. The data migration device may then determine that the target slave node operates as the master node corresponding to the database service; for example, it may send notification information to the target slave node and to the devices managing the node information of the database service, notifying them that the target slave node now operates as the master node corresponding to the database service.
For example, assume Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and M1 needs to be migrated from server A to server C. M1 first needs to synchronize its data from server A to a newly created Replica (R2) on server C. During this synchronization, if it is determined that M1 is abnormal and the synchronization to R2 has not completed, the data synchronization is stopped, M1 continues to be used as the Master and R1 as the Replica, and the automatic failover mechanism is subsequently relied on to switch R1 to become the new Master.
Case 2: when the type of the target node is the master node, if it is determined that the target node is abnormal and the first slave node has completed data synchronization, the data migration steps continue to be executed.
For example, assume Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and M1 needs to be migrated from server A to server C. M1 first needs to synchronize its data from server A to a newly created Replica (R2) on server C. During this synchronization, if it is determined that M1 is abnormal and the synchronization to R2 has completed, the data migration steps continue to be executed: for example, the state of R2 is updated to a writable state, the address information of the master node corresponding to the database service in the name service is updated according to the address information of R2, R2 is used as the new Master, R1 as a Replica, and M1 is closed.
Case 3: when the type of the target node is the master node, if it is determined that the first slave node is abnormal and the first slave node has not been determined as the master node corresponding to the database service, the nodes corresponding to the database service are rolled back.
For example, assume Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and M1 needs to be migrated from server A to server C. M1 first needs to synchronize its data from server A to a newly created Replica (R2) on server C. During this synchronization, if it is determined that R2 is abnormal and R2 has not yet been adopted as the new Master, the newly created R2 on server C is deleted, and M1 continues to be used as the Master and R1 as the Replica.
Case 4: when the type of the target node is the master node, if it is determined that the first slave node has been determined as the master node corresponding to the database service and the first slave node is abnormal, which means that the master node corresponding to the database service is abnormal, the automatic failover mechanism may be adopted to actively switch a slave node corresponding to the database service to become the master node corresponding to the database service.
It should be noted that the process of actively switching a slave node corresponding to the database service to become the master node using the automatic failover mechanism is described in Case 1 above, and repeated parts are not described again.
For example, assume Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and M1 needs to be migrated from server A to server C. M1 first needs to synchronize its data from server A to a newly created Replica (R2) on server C. After M1 has completed synchronizing its data to R2, if it is determined that R2, having been adopted as the new Master, becomes abnormal, then R1 is determined as the master node corresponding to the database service: the state of R1 is set to a writable state, and the address information of the Master in the name service is updated according to the address information of R1.
Case 5: when the type of the target node is a slave node, if it is determined that the target node is abnormal, the target node is deleted.
In one example, when the type of the target node is a slave node, if the target node is determined to be abnormal and the first slave node does not complete data synchronization, the data is continuously synchronized to the first slave node, and the target node is deleted.
For example, assuming Master (M1) is deployed at server A and Replica (R1) is deployed at server B, R1 needs to be migrated from server B to server C. First, M1 needs to synchronize data from server a to newly built Replica on server C (R2), and if it is determined that R1 is abnormal and M1 does not complete data synchronization with R2 in the process of M1 synchronizing data to R2, continue to synchronize the data to R2 and delete R1.
In another example, when the type of the target node is a slave node, if it is determined that the target node is abnormal and the first slave node completes data synchronization, the target node is directly deleted.
For example, assuming Master (M1) is deployed at server A and Replica (R1) is deployed at server B, R1 needs to be migrated from server B to server C. First, M1 needs to synchronize data from server a to newly built Replica on server C (R2), and in the process of M1 synchronizing data to R2, if it is determined that R1 is abnormal and M1 completes data synchronization with R2, R1 is directly deleted.
Case 6: when the type of the target node is a slave node, if it is determined that the first slave node is abnormal and the second slave node has not been deleted, the nodes corresponding to the database service are rolled back and the first slave node is deleted.
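The six failure cases above can be summarized as a dispatch over the target node's type and the migration state. The function signature and the returned action strings below are shorthand assumptions standing in for the operations the text describes:

```python
def handle_failure(target_role, target_failed, first_slave_failed,
                   sync_done, promoted, second_slave_deleted=False):
    if target_role == "master":
        if target_failed and not sync_done:
            return "rollback"                          # Case 1
        if target_failed and sync_done:
            return "continue-migration"                # Case 2
        if first_slave_failed and not promoted:
            return "rollback"                          # Case 3
        if first_slave_failed and promoted:
            return "auto-failover"                     # Case 4
    else:
        if target_failed:
            return "delete-target"                     # Case 5
        if first_slave_failed and not second_slave_deleted:
            return "rollback-and-delete-first-slave"   # Case 6
    return "no-op"
```

The asymmetry is deliberate: failure of a master-type target can require a rollback or failover, whereas a failed slave-type target can simply be deleted because the master still holds the authoritative data.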
In the data migration process, after the migration instruction of the database service is acquired, the first slave node can be newly created for the database service in the second server and the data stored in the master node corresponding to the database service synchronized to the first slave node, so that no manual intervention is needed and the efficiency of data migration is improved. If the target node is determined to be the master node corresponding to the database service, then when it is determined that the first slave node has completed data synchronization, the state of the first slave node is updated to a writable state and the address information of the master node corresponding to the database service in the name service is updated according to the address information of the first slave node, so that the client can write data to the first slave node through the master-node address recorded in the name service. The target node is then closed and it is determined that the first slave node works as the master node corresponding to the database service, thereby ensuring that the database service always has a corresponding master node, guaranteeing the stability and availability of the database service, and reducing the time during which the database service is abnormal due to data migration.
Example 2:
the following describes a data migration method provided by the present application through a specific embodiment, and fig. 2 is a schematic diagram of a specific data migration process provided by the present application, where the process includes:
s201: acquiring a migration instruction of a database service; wherein the migration instruction is used to migrate the target node of the database service from the first server to the second server.
Before the migration instruction of the database service is acquired, abnormality alarm information of the first server is received while the master node corresponding to the database service works normally; and/or it is determined that the load of the first server reaches a preset load threshold.
S202: and determining that the node corresponding to the database service meets a preset fault tolerance mechanism.
In one example, determining that a node corresponding to a database service satisfies a predetermined fault tolerance mechanism by at least one of:
determining that the master node corresponding to the database service and at least one slave node corresponding to the database service both work normally;
determining that the target node is a master node corresponding to the database service, and the second server is different from the server where at least one slave node corresponding to the database service is located;
and determining that the target node is a slave node corresponding to the database service, and the second server is different from the server where the master node corresponding to the database service is located.
S203: and judging whether the type of the target node is a main node or not, if so, executing S204, otherwise, executing S208.
S204: and creating a first slave node for the database service in the second server, and synchronously storing the data stored in the target node into the first slave node.
S205: when the first slave node is determined to finish data synchronization, keeping the second slave node as a slave node corresponding to the database service; wherein the creation time of the second slave node is earlier than the creation time of the first slave node.
S206: the state of the first slave node is updated to a writable state.
S207: and according to the address information of the first slave node, updating the address information of the master node corresponding to the database service in the name service, closing the target node, and determining that the first slave node continues to work as the master node corresponding to the database service.
Exemplarily, fig. 3 is a schematic diagram of node switching corresponding to a specific target node provided in an embodiment of the present application. As shown in fig. 3, assume that Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and M1 needs to be migrated from server A to server C. First, a new Replica (R2) is created on server C, the data stored by M1 is synchronized to R2, and it is determined that data synchronization is complete. Then the priority of R1 is modified so that R1 is kept as a slave node corresponding to the database service and cannot be promoted to Master, and the state of R2 is set to a writable state. Next, the address information of the Master corresponding to the database service in the name service is updated according to the address information of R2, and M1 is closed. Because M1 is closed, once it is determined that R2 has completed data synchronization, one available Replica can be selected from the two Replicas corresponding to the database service to become the new Master. Since R1 has been set so that it cannot be promoted, R2 becomes the new Master; that is, R2 is determined as M2 and R1 becomes a Replica of M2. After M1 stops serving, the client tries to query the name service to obtain the latest Master address, i.e., the address of R2; since R2 is writable, the data is written to R2, and because M1 is already closed, the data written to R2 cannot be overwritten due to synchronization problems. Finally, since Sentinel by default converts all other nodes into Replica nodes of the new Master, M1 is converted into a Replica of M2 (R3); at this point the information of R3 is removed, for example from Sentinel and the name service, and the Master migration flow is complete.
S208: and newly building a first slave node for the database service in the second server, and synchronizing the data stored by the master node corresponding to the database service into the first slave node.
S209: and if the first slave node is determined to complete data synchronization, determining that the first slave node works as a slave node corresponding to the database service.
Exemplarily, fig. 4 is a schematic diagram of node switching corresponding to a specific target node provided in an embodiment of the present application. As shown in fig. 4, assume that Master (M1) is deployed on server A, Replica (R1) is deployed on server B, and R1 needs to be migrated from server B to server C. First, a new Replica (R2) is created on server C, and the data stored by M1 is synchronized to R2. Then R1 is closed and its cached information, such as the R1 entries in the name service and in Sentinel, is cleared.
S210: and if the master node corresponding to the database service is determined to be abnormal, determining a target slave node which is allowed to be switched to the master node corresponding to the database service from at least one slave node corresponding to the database service.
S211: and determining that the state of the target slave node is a writable state, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the target slave node.
S212: and determining that the target slave node works as a master node corresponding to the database service.
Exemplarily, fig. 5 is a schematic flow diagram of master-node failover provided in an embodiment of the present application. As shown in fig. 5, assume that the state of each node is monitored by Sentinel A to Sentinel C. When it is determined that the Master corresponding to the database service is abnormal, Sentinel senses the fault and promotes a Replica corresponding to the database service to Master, then broadcasts a message containing the Replica's name and stating from which old address to which new address the Master has switched. The Sentinel Checkers subscribe to the messages broadcast by Sentinel; when Sentinel Checker 0 to Sentinel Checker 2 hear the master-slave switch message, they notify the Name Service and update the address information of the master node corresponding to the database service. After sensing, through an issued request (query), that the connection to the pre-update master node is abnormal, the client tries to reconnect: it queries the name service to acquire the latest master-node address of the database service, then connects to the new master node and continues to be served.
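The client-side behavior in fig. 5 can be sketched as a retry loop that re-resolves the master address from the name service on every attempt, so it picks up an address changed by failover. All class and function names here are illustrative assumptions:

```python
class NameService:
    def __init__(self, master_addr):
        self.master_addr = master_addr  # updated by the Sentinel Checkers

class Client:
    def __init__(self, live_addrs):
        self.live_addrs = live_addrs    # addresses currently accepting connections
        self.connected_to = None
    def connect(self, addr):
        if addr in self.live_addrs:
            self.connected_to = addr
            return True
        return False

def reconnect(client, name_service, max_retries=3):
    # Re-query the name service each attempt: during failover the master
    # address may change between attempts, and the fresh lookup catches it.
    for _ in range(max_retries):
        addr = name_service.master_addr
        if client.connect(addr):
            return addr
    return None
```

Because resolution happens inside the loop rather than once before it, the client needs no direct knowledge of Sentinel to follow a master switch.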
It will be understood by those of skill in the art that in the above method of the present embodiment, the order of writing the steps does not imply a strict order of execution and does not impose any limitations on the implementation, as the order of execution of the steps should be determined by their function and possibly inherent logic.
Example 3:
fig. 6 is a schematic structural diagram of a data migration apparatus provided in an embodiment of the present application, where the apparatus includes:
an obtaining unit 61, configured to obtain a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
a first processing unit 62, configured to create a new first slave node for the database service in the second server, and synchronize data stored in the master node corresponding to the database service to the first slave node;
an updating unit 63, configured to update the state of the first slave node to a writable state when it is determined that the first slave node completes data synchronization if the target node is a master node corresponding to the database service, and update address information of the master node corresponding to the database service in name service according to the address information of the first slave node;
and a second processing unit 64, configured to close the target node, and determine that the first slave node continues to operate as the master node corresponding to the database service.
The principle of the data migration apparatus for solving the problem provided by the embodiment of the present application is the same as the principle of the data migration method for solving the problem, and specific contents can be referred to the above method embodiments.
Example 4:
fig. 7 is a schematic structural diagram of a data migration device provided in an embodiment of the present application, and on the basis of the foregoing embodiments, an embodiment of the present application further provides a data migration device, as shown in fig. 7, including: the system comprises a processor 71, a communication interface 72, a memory 73 and a communication bus 74, wherein the processor 71, the communication interface 72 and the memory 73 are communicated with each other through the communication bus 74;
the memory 73 has stored therein a computer program which, when executed by the processor 71, causes the processor 71 to perform the steps of:
acquiring a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
newly building a first slave node for the database service in the second server, and synchronizing data stored in a master node corresponding to the database service to the first slave node;
if the target node is the master node corresponding to the database service, updating the state of the first slave node to a writable state when it is determined that the first slave node has completed data synchronization, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node;
and closing the target node, and determining that the first slave node continues to work as the master node corresponding to the database service.
Because the principle of solving the problem of the data migration device is similar to that of the data migration method, the implementation of the data migration device may refer to the implementation of the method, and repeated details are not described herein.
Example 5:
on the basis of the foregoing embodiments, the present application further provides a computer-readable storage medium, in which a computer program executable by a processor is stored, and when the program is run on the processor, the processor is caused to execute the following steps:
acquiring a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
newly building a first slave node for the database service in the second server, and synchronizing data stored in a master node corresponding to the database service to the first slave node;
if the target node is the master node corresponding to the database service, updating the state of the first slave node to a writable state when it is determined that the first slave node has completed data synchronization, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node;
and closing the target node, and determining that the first slave node continues to work as the master node corresponding to the database service.
The computer-readable medium provided in the embodiment of the present application solves the problem on the same principle as the data migration method; for specific contents, reference may be made to the above method embodiments.

Claims (13)

1. A method of data migration, the method comprising:
acquiring a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
newly building a first slave node for the database service in the second server, and synchronizing data stored in a master node corresponding to the database service to the first slave node;
if the target node is the master node corresponding to the database service, updating the state of the first slave node to a writable state when it is determined that the first slave node has completed data synchronization, and updating the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node;
and closing the target node, and determining that the first slave node continues to work as the master node corresponding to the database service.
2. The method of claim 1, wherein prior to the obtaining of the migration instruction of the database service, the method further comprises:
receiving alarm information indicating that the first server is abnormal while the master node corresponding to the database service works normally; and/or
determining that the load of the first server reaches a preset load threshold.
3. The method of claim 1, wherein after the obtaining the migration instruction of the database service and before the synchronizing the data stored in the master node corresponding to the database service into the first slave node, the method further comprises:
determining that a node corresponding to the database service meets a preset fault tolerance mechanism; the nodes comprise a master node corresponding to the database service and a slave node corresponding to the database service.
4. The method of claim 3, wherein it is determined that the node corresponding to the database service satisfies the preset fault tolerance mechanism by at least one of:
determining that a master node corresponding to the database service and at least one slave node corresponding to the database service both work normally;
determining that the target node is a master node corresponding to the database service, and the second server is different from the server where at least one slave node corresponding to the database service is located;
and determining that the target node is a slave node corresponding to the database service, and the second server is different from the server where the master node corresponding to the database service is located.
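The three alternative checks of claim 4 can be sketched as separate predicates over a toy node model; the dataclass and function names below are hypothetical, and the claim allows any one of these checks alone.

```python
# Illustrative sketch of the fault-tolerance checks in claim 4, using a toy
# node model; names are hypothetical, and any single check may suffice.
from dataclasses import dataclass

@dataclass
class Node:
    server: str           # address of the hosting server
    running: bool = True  # whether the node works normally

def nodes_work_normally(master, slaves):
    # The master and at least one slave both work normally.
    return master.running and any(s.running for s in slaves)

def master_move_is_safe(slaves, second_server):
    # Migrating the master: the second server must differ from every
    # server hosting a slave of this service.
    return all(s.server != second_server for s in slaves)

def slave_move_is_safe(master, second_server):
    # Migrating a slave: the second server must differ from the server
    # hosting the master.
    return master.server != second_server
```

Keeping the new node off the servers that host the surviving replicas is what preserves fault tolerance: a single server failure during migration can then never take out both copies of the data.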
5. The method of claim 1, wherein after the updating of the address information of the master node corresponding to the database service in the name service according to the address information of the first slave node and before the shutting down of the target node, the method further comprises:
if the target node is the master node corresponding to the database service and receives a request sent by a client, sending a rejection instruction to the client and determining that the first slave node works as the master node corresponding to the database service; wherein the rejection instruction is used for refusing to respond to the request sent by the client.
6. The method of claim 1, wherein after the obtaining the migration instruction of the database service and before the determining that the first slave node operates as the master node corresponding to the database service, the method further comprises:
keeping a second slave node as a slave node corresponding to the database service; wherein a creation time of the second slave node is earlier than a creation time of the first slave node.
7. The method of claim 1, wherein the method further comprises:
if the target node is a slave node corresponding to the database service and the first slave node completes data synchronization, closing the target node and determining that the first slave node continues to work as the slave node corresponding to the database service.
8. The method of claim 7, wherein the method further comprises:
if the master node corresponding to the database service is determined to be in fault, determining a target slave node which is allowed to be switched to the master node corresponding to the database service from at least one slave node corresponding to the database service;
determining that the state of the target slave node is a writable state, and updating address information of a master node corresponding to the database service in the name service according to the address information of the target slave node;
and determining that the target slave node works as a master node corresponding to the database service.
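The failover of claim 8 can be sketched with nodes as plain dicts and a dict standing in for the name service (all names and the candidate-selection rule below are hypothetical simplifications):

```python
# Failover sketch for claim 8: when the master fails, promote a slave that is
# allowed to switch, mark it writable, and repoint the name service at it.

def failover(slaves, name_service, service):
    # Determine a target slave allowed to switch to master. Here any running
    # slave qualifies; a real system would also check replication lag, etc.
    candidates = [s for s in slaves if s["running"]]
    if not candidates:
        raise RuntimeError("no slave is allowed to switch to master")
    target = candidates[0]

    # Set the target slave's state to writable.
    target["writable"] = True

    # Update the master's address information in the name service.
    name_service[service] = target["server"]

    # The target slave now works as the master corresponding to the service.
    return target
```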
9. The method according to claim 7 or 8, wherein after the obtaining of the migration instruction of the database service and before the determining that the first slave node operates as the master node corresponding to the database service, the method further comprises:
when the type of the target node is a master node, if it is determined that the target node is abnormal and the first slave node has not completed data synchronization, stopping the synchronization of data to the first slave node and rolling back the node corresponding to the database service;
when the type of the target node is a master node, if the target node is determined to be abnormal and the first slave node completes data synchronization, continuing to execute the step of data migration;
when the type of the target node is a master node, if the first slave node is determined to be abnormal, rolling back the node corresponding to the database service;
when the type of the target node is a slave node, if the target node is determined to be abnormal, deleting the target node;
when the type of the target node is a slave node, if the first slave node is determined to be abnormal and the second slave node is not deleted, rolling back the node corresponding to the database service and deleting the first slave node; wherein a creation time of the second slave node is earlier than a creation time of the first slave node.
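The five exception cases of claim 9 form a small decision table; the sketch below (with hypothetical boolean inputs and action strings) makes that branching explicit:

```python
# Decision table for the exception handling in claim 9. The boolean inputs
# and action strings are hypothetical simplifications for illustration.

def on_exception(target_is_master, target_abnormal, first_slave_abnormal,
                 sync_done, second_slave_deleted):
    if target_is_master:
        if target_abnormal and not sync_done:
            return "stop syncing and roll back"
        if target_abnormal and sync_done:
            return "continue the migration"
        if first_slave_abnormal:
            return "roll back"
    else:  # the target node is a slave
        if target_abnormal:
            return "delete the target node"
        if first_slave_abnormal and not second_slave_deleted:
            return "roll back and delete the first slave"
    return "no action"
```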
10. An apparatus for data migration, the apparatus comprising:
an acquisition unit, configured to acquire a migration instruction of a database service; wherein the migration instruction is to migrate a target node of the database service from a first server to a second server;
the first processing unit is used for newly building a first slave node for the database service in the second server and synchronizing data stored in a master node corresponding to the database service to the first slave node;
an updating unit, configured to update a state of the first slave node to a writable state when it is determined that the first slave node completes data synchronization if the target node is a master node corresponding to the database service, and update address information of the master node corresponding to the database service in name service according to address information of the first slave node;
and the second processing unit is used for closing the target node and determining that the first slave node continues to work as the master node corresponding to the database service.
11. A data migration device, characterized in that the data migration device comprises at least a processor and a memory, the processor being adapted to carry out the steps of the data migration method according to any one of claims 1 to 9 when executing a computer program stored in the memory.
12. A computer-readable storage medium, characterized in that it stores a computer program which, when being executed by a processor, carries out the steps of the data migration method according to any one of claims 1 to 9.
13. A computer program product, the computer program product comprising: computer program code for causing a computer to perform the steps of the data migration method as claimed in any one of the preceding claims 1 to 9 when said computer program code is run on a computer.
CN202210324045.9A 2022-03-29 2022-03-29 Data migration method, device, equipment and medium Active CN114598711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210324045.9A CN114598711B (en) 2022-03-29 2022-03-29 Data migration method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN114598711A true CN114598711A (en) 2022-06-07
CN114598711B CN114598711B (en) 2024-04-16

Family

ID=81813344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210324045.9A Active CN114598711B (en) 2022-03-29 2022-03-29 Data migration method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114598711B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150400A (en) * 2022-07-05 2022-10-04 普联技术有限公司 Service fault processing method and device, cloud service platform and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027290A (en) * 2016-05-12 2016-10-12 深圳市永兴元科技有限公司 Fault processing method and device
US20170364423A1 (en) * 2016-06-21 2017-12-21 EMC IP Holding Company LLC Method and apparatus for failover processing
WO2018019023A1 (en) * 2016-07-27 2018-02-01 腾讯科技(深圳)有限公司 Data disaster recovery method, apparatus and system
WO2020015366A1 (en) * 2018-07-18 2020-01-23 华为技术有限公司 Method and device for data migration
CN111078667A (en) * 2019-12-12 2020-04-28 腾讯科技(深圳)有限公司 Data migration method and related device
US10657154B1 (en) * 2017-08-01 2020-05-19 Amazon Technologies, Inc. Providing access to data within a migrating data partition
CN111200532A (en) * 2020-01-02 2020-05-26 广州虎牙科技有限公司 Method, device, equipment and medium for master-slave switching of database cluster node
WO2020253596A1 (en) * 2019-06-21 2020-12-24 深圳前海微众银行股份有限公司 High availability method and apparatus for redis cluster
CN112269693A (en) * 2020-10-23 2021-01-26 北京浪潮数据技术有限公司 Node self-coordination method, device and computer readable storage medium
CN112306993A (en) * 2020-11-06 2021-02-02 平安科技(深圳)有限公司 Data reading method, device and equipment based on Redis and readable storage medium
CN113127444A (en) * 2020-01-15 2021-07-16 中移(苏州)软件技术有限公司 Data migration method, device, server and storage medium
CN114116671A (en) * 2021-11-25 2022-03-01 深圳Tcl新技术有限公司 Database migration method, system, server and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NIKOLAY BAYCHENKO: "Implementing a master/slave architecture for a data synchronization service", Retrieved from the Internet <URL:https://www.theseus.fi/bitstream/handle/10024/143791/Nikolay_Baychenko.pdf?sequence=1> *
XIE Mengyi: "Design of a distributed big data asynchronous migration system under a hybrid cloud storage architecture", Electronic Design Engineering, vol. 27, no. 23, pages 45-49 *


Also Published As

Publication number Publication date
CN114598711B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN110784350B (en) Design method of real-time high-availability cluster management system
KR100575497B1 (en) Fault tolerant computer system
CN108964948A Master-slave service system, master node fault recovery method and device
CN111327467A (en) Server system, disaster recovery backup method thereof and related equipment
CN111045745A (en) Method and system for managing configuration information
US20210136145A1 (en) Method for Changing Member in Distributed System and Distributed System
JP6905161B2 (en) Management methods, systems, and devices for master and standby databases
CN107404509B (en) Distributed service configuration system and information management method
CN106874142B (en) Real-time data fault-tolerant processing method and system
US9846624B2 (en) Fast single-master failover
US20170206148A1 (en) Cross-region failover of application services
US11321199B2 (en) System and method for on-demand warm standby disaster recovery
CN115562911B (en) Virtual machine data backup method, device, system, electronic equipment and storage medium
CN112100004A (en) Management method and storage medium of Redis cluster node
CN105959078A (en) Cluster time synchronization method, cluster and time synchronization system
CN111966466A (en) Container management method, device and medium
CN114598711B (en) Data migration method, device, equipment and medium
CN111858190A (en) Method and system for improving cluster availability
CN113434340B (en) Server and cache cluster fault rapid recovery method
CN114020279A (en) Application software distributed deployment method, system, terminal and storage medium
US11327679B2 (en) Method and system for bitmap-based synchronous replication
CN112231399A (en) Method and device applied to graph database
CN104052799A (en) Method for achieving high availability storage through resource rings
CN116302691A (en) Disaster recovery method, device and system
CN114153655B (en) Disaster recovery system creation method, disaster recovery method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant