CN115292293A - Data migration method and device of distributed cache system - Google Patents


Info

Publication number
CN115292293A
CN115292293A
Authority
CN
China
Prior art keywords
information
tenant
storage
data
storage node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211005410.6A
Other languages
Chinese (zh)
Inventor
孙扬
郑宝城
王洁如
朱小珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202211005410.6A
Publication of CN115292293A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/21 Design, administration or maintenance of databases
    • G06F 16/214 Database migration support
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data migration method and apparatus for a distributed cache system, relating to the technical field of distributed storage and usable in the financial field. The method comprises: acquiring basic information of the tenant to be migrated, the basic information including the tenant name and the name of the cluster where the tenant resides; acquiring the tenant's storage information according to the basic information, the storage information including storage node information and primary-standby relationship information; determining the tenant's number of shards from the storage information, each shard corresponding to one group of storage nodes; creating a corresponding number of containers according to the number of shards; and synchronizing the configuration information and data of the old storage nodes to the corresponding containers. For a user of the migration tool the minimum migration granularity is the tenant, while internally migration proceeds at shard granularity with shards migrated in parallel, greatly shortening migration time and improving migration efficiency.

Description

Data migration method and device of distributed cache system
Technical Field
The invention relates to the technical field of distributed storage and can be used in the financial field; in particular, it relates to a data migration method and apparatus for a distributed cache system.
Background
With the rapid growth of stored data volumes, demand for fast data access keeps rising, and memory databases and distributed cache systems are in widespread use. As the underlying facility providing highly concurrent data read-write service, the memory database carries an ever-growing data volume under ever-growing read-write pressure, and business processing depends increasingly on the cache. Meanwhile, container technology and orchestration continue to develop and mature; containers are increasingly favoured by users for their light weight, elastic scaling, isolation, and ease of deployment and maintenance, and more and more applications choose to deploy on containers. As the technology advances, some existing (stock) applications are also gradually migrating to containers.
To migrate the underlying storage nodes of a distributed cache system from virtual machines to containers, existing migration schemes struggle to guarantee the integrity of the migrated data; the manual steps are cumbersome, a great deal of configuration information must be filled in and confirmed by hand before migration, and in batch migrations the manual work becomes even more onerous, so migration is complex and error-prone.
Disclosure of Invention
In view of the above, the present invention provides a data migration method and apparatus for a distributed cache system to solve at least one of the problems mentioned above.
To achieve this purpose, the invention adopts the following scheme:
According to a first aspect of the present invention, there is provided a data migration method of a distributed cache system, the method including: acquiring basic information of the tenant to be migrated, the basic information including the tenant name and the name of the cluster where the tenant resides; acquiring the tenant's storage information according to the basic information, the storage information including storage node information and primary-standby relationship information; determining the tenant's number of shards from the storage information, each shard corresponding to one group of storage nodes; creating a corresponding number of containers according to the number of shards; and synchronizing the configuration information and data of the old storage node to the corresponding container.
According to a second aspect of the present invention, there is provided a data migration apparatus of a distributed cache system, the apparatus comprising: a basic information acquiring unit, configured to acquire basic information of the tenant to be migrated, the basic information including the tenant name and the name of the cluster where the tenant resides; a storage information acquiring unit, configured to acquire the tenant's storage information according to the basic information, the storage information including storage node information and primary-standby relationship information; a shard determining unit, configured to determine the tenant's number of shards from the storage information, each shard corresponding to one group of storage nodes; a container creating unit, configured to create a corresponding number of containers according to the number of shards; and a synchronizing unit, configured to synchronize the configuration information and data of the old storage node to the corresponding container.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
According to a fourth aspect of the invention, there is provided a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the above method.
According to a fifth aspect of the invention, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the above method.
With this scheme, the minimum migration granularity seen by a user of the migration tool is the tenant, while internally the program migrates shard by shard and the shards migrate in parallel, greatly shortening migration time and improving migration efficiency.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is an overall architecture diagram of a distributed cache system in the prior art;
fig. 2 is a schematic flowchart of a data migration method of a distributed cache system according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data migration method of a distributed cache system according to another embodiment of the present application;
fig. 4 is a schematic flowchart of disconnecting a primary-standby relationship of storage nodes according to an embodiment of the present application;
fig. 5 is a data migration plan of a distributed cache system according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a data migration apparatus of a distributed cache system according to the present application;
fig. 7 is a schematic block diagram of a system configuration of an electronic device according to another embodiment of the present application.
Detailed Description
The data migration method and apparatus of a distributed cache system provided by the embodiments of the invention can be used in the financial field; it should be noted that they can equally be used in any field other than the financial field.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
Fig. 1 shows the overall architecture of a distributed cache system in the prior art. The key components of the system are first described as follows:
Proxy node: mainly parses and forwards application requests, computes the shard of each datum from its key, and forwards the data to the corresponding storage node, while also recording full-path tracing for requests.
Storage node: the system uses Redis as its storage node. In the primary-standby cluster mode, storage nodes are deployed in primary-standby pairs: the standby node only provides data backup capability, while the primary node provides data read-write capability. Redis, currently one of the most popular key-value databases, is a memory database known for its fast read-write processing; it supports multiple data types and can meet the needs of many business scenarios.
Monitoring node: monitors the state of every node in the cluster, synchronizes information about abnormal nodes to the registry, and is responsible for raising alarms for abnormal nodes.
Registry: manages the cluster state.
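The proxy node's key-to-shard routing can be sketched as follows. This is a minimal illustration only: the patent does not specify the hash function, so CRC32 modulo the shard count and the example node addresses are assumptions.

```python
import zlib

def shard_for_key(key: str, num_shards: int) -> int:
    # Illustrative only: the patent does not specify the hash the
    # proxy uses; CRC32 modulo the shard count stands in for it.
    return zlib.crc32(key.encode("utf-8")) % num_shards

# The proxy forwards the request to the primary node owning the shard.
shard_map = {0: "10.0.0.11:6379", 1: "10.0.0.12:6379", 2: "10.0.0.13:6379"}
target = shard_map[shard_for_key("user:1001", len(shard_map))]
```

Whatever hash is used, it must be deterministic so that every proxy routes the same key to the same shard.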
Fig. 2 is a schematic flowchart of a data migration method of a distributed cache system according to an embodiment of the present application. The present application concerns data migration for storage nodes in the primary-standby mode shown in Fig. 1. The method comprises the following steps:
Step S201: acquire basic information of the tenant to be migrated, the basic information including the tenant name and the name of the cluster where the tenant resides. Since the distributed cache system may contain multiple clusters, each holding multiple tenants, it must first be determined which cluster the tenant to be migrated belongs to; the tenant name and cluster name are therefore obtained first to locate the tenant.
Step S202: acquire the storage information of the tenant to be migrated according to the basic information, the storage information including storage node information and primary-standby relationship information.
The storage information obtained in this step describes which storage nodes in the cluster the tenant's data is distributed across, the primary-standby relationships among those nodes, and so on.
Step S203: determine the tenant's number of shards from the storage information, each shard corresponding to one group of storage nodes.
To satisfy the tenant's data-storage-capacity requirements, multiple groups of storage nodes are allocated to the tenant according to its data volume when the environment is built; one group is a pair of primary and standby storage nodes, and one such group is called a shard. Therefore, from the storage information obtained in step S202, the number of the tenant's shards can be determined.
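Deriving the shard count from the storage information can look like the following; the record shape and field names are hypothetical, since the patent does not define a concrete schema.

```python
# Hypothetical shape of a tenant's storage information: one entry per
# storage node, standby entries naming their primary (field names are
# assumptions; the patent does not define a concrete schema).
storage_info = [
    {"node": "10.0.0.11:6379", "role": "primary"},
    {"node": "10.0.0.21:6379", "role": "standby", "primary": "10.0.0.11:6379"},
    {"node": "10.0.0.12:6379", "role": "primary"},
    {"node": "10.0.0.22:6379", "role": "standby", "primary": "10.0.0.12:6379"},
]

def count_shards(info):
    # One shard per primary-standby group, i.e. per primary node.
    return sum(1 for entry in info if entry["role"] == "primary")

num_shards = count_shards(storage_info)  # this tenant has 2 shards
```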
Step S204: create a corresponding number of containers according to the number of shards.
In this embodiment, one shard preferably corresponds to two containers; that is, the containers correspond one-to-one to the old primary and standby storage nodes, and the two containers themselves stand in a primary-standby relationship. Of course, it will be clear to those skilled in the art that one shard may also correspond to three or more containers; when one shard corresponds to three containers, one container is the primary and the other two are its standbys. The correspondence between the number of shards and the number of containers is therefore not limited in this embodiment.
Step S205: synchronize the configuration information and data of the old storage node to the corresponding container.
In this embodiment, the storage node's configuration information is synchronized to the corresponding container first: since the tenant's storage node may have had special configuration items enabled during use, these must be carried over to the new containerized node to avoid configuration loss. After the configuration information is synchronized, the old storage node's data is synchronized to the corresponding container, completing the data migration.
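Carrying over the tenant's special configuration items can be pictured as diffing the node's configuration against the defaults; the configuration keys shown are ordinary Redis options used purely as examples.

```python
def special_config_items(node_config, default_config):
    # Keep only the items the tenant changed from the defaults; these
    # must be applied to the new containerized node to avoid loss.
    return {key: value for key, value in node_config.items()
            if default_config.get(key) != value}

# Example values: ordinary Redis options, used purely as illustration.
default_config = {"maxmemory-policy": "noeviction", "appendonly": "no"}
old_node_config = {"maxmemory-policy": "allkeys-lru", "appendonly": "no"}

to_apply = special_config_items(old_node_config, default_config)
# to_apply == {"maxmemory-policy": "allkeys-lru"}
```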
As the above shows, in the data migration method of a distributed cache system provided by the present application, the minimum migration granularity seen by a user of the migration tool is the tenant, while internally the program migrates shard by shard and the shards migrate in parallel, greatly shortening migration time and improving migration efficiency.
Fig. 3 is a schematic flow chart of a data migration method of a distributed cache system according to another embodiment of the present application, where the method includes the following steps:
step S301: acquiring basic information of a tenant needing to be migrated, wherein the basic information comprises a tenant name and a cluster name where the tenant is located.
Step S302: a blacklist is opened in the registry for tenants that need to be migrated.
During normal cluster operation, the monitoring node performs state checks and failover for all nodes of the cluster: if it detects that a primary storage node is in an abnormal state, it switches primary and standby so that the standby storage node provides read-write service externally, keeps trying to restart the original primary storage node, and, if the restart succeeds, attaches the original primary under the new primary as a standby node.
Therefore, to prevent the monitoring node from modifying the tenant's storage-node information during migration, which would cause the migration tool's updates to node information to fail, this embodiment blacklists the tenant to be migrated in the registry before the data is formally migrated, so that the monitoring node no longer checks the state of the tenant's storage nodes and no longer modifies the tenant's registration information.
Step S303: and acquiring storage information of the tenant needing to be migrated according to the basic information, wherein the storage information comprises storage node information and main/standby relation information.
Step S304: and determining the number of the fragments of the tenant according to the storage information, wherein each group of fragments corresponds to one group of storage nodes.
Step S305: and creating a corresponding number of containers according to the number of the fragments.
Step S306: generate a migration table recording the pre-migration information, stored at the granularity of tenant shard groups. The migration table includes the registry node IP and service port, the tenant name, the shard group, the shard range, the IPs and ports of the new and old storage nodes, the cluster mode, the container name, and the cluster to which the container belongs. Generating the migration table makes troubleshooting and recovery of the running environment easier when problems occur during data migration.
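A migration-table row might be modelled as follows; the field names and example values are illustrative, since the patent lists the recorded information but not a schema.

```python
from dataclasses import dataclass

@dataclass
class MigrationRecord:
    # One row per tenant shard group; field names are illustrative,
    # since the patent lists the information but not a schema.
    registry_ip: str
    registry_port: int
    tenant_name: str
    shard_group: str
    shard_range: str
    old_node: str        # old storage node IP:port
    new_node: str        # new storage node IP:port
    cluster_mode: str
    container_name: str
    container_cluster: str

record = MigrationRecord(
    registry_ip="10.0.0.1", registry_port=2181, tenant_name="tenant-a",
    shard_group="group-1", shard_range="0-4095",
    old_node="10.0.0.11:6379", new_node="10.1.0.11:6379",
    cluster_mode="primary-standby", container_name="cache-tenant-a-1",
    container_cluster="k8s-prod",
)
```

Persisting one such record per shard group before touching any node gives the rollback path the patent describes.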
Step S307: synchronize the configuration information of the old storage node to the corresponding container.
Step S308: a first new container is mounted under an old primary storage node as a backup node for the old primary storage node, and a second new container is mounted under the first new container as a backup node for the old primary storage node.
For a set of slices, comprising an old primary storage node (assumed to be node a) and an old backup storage node (assumed to be node B), a first new container (assumed to be node C) and a second new container (assumed to be node D) are created in step S305, i.e. node C is mounted under node a and then node D is mounted under node C, so that there are two duplicate links after mounting: a- > B and A- > C- > D.
Step S309: synchronize the data of the old primary storage node to the first new container and the second new container.
In this step, the data of node A is synchronized to nodes C and D along the replication chain A->C->D.
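The mounting in steps S308-S309 can be modelled as building replication links (in a live Redis deployment each mount would correspond to a REPLICAOF command on the standby); this toy model just verifies that both chains A->B and A->C->D exist after mounting.

```python
def mount(replica_links, standby, primary):
    # Record that `standby` replicates from `primary`; in a live
    # system this would issue a Redis REPLICAOF command on `standby`.
    replica_links[standby] = primary

def chain_from(replica_links, node):
    # Walk upstream from `node` to the root primary and return the
    # full chain, root first.
    chain = [node]
    while chain[-1] in replica_links:
        chain.append(replica_links[chain[-1]])
    return list(reversed(chain))

# Shard before migration: old primary A with old standby B.
links = {"B": "A"}

# Step S308: mount new container C under A, then new container D under C.
mount(links, "C", "A")
mount(links, "D", "C")

# Two replication chains now exist, so data written to A reaches
# both new containers: A->B and A->C->D.
chains = [chain_from(links, "B"), chain_from(links, "D")]
```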
Step S310: disconnect the primary-standby relationship between the old primary storage node and the first new container, and shut down the old primary storage node.
In this embodiment, to avoid dirty data appearing in the new container after data synchronization, the execution order shown in Fig. 4 is used when disconnecting the primary-standby relationship between the old primary storage node and the first new container:
Step S3101: shut down the old primary storage node.
Step S3102: disconnect the primary-standby relationship between the old primary storage node and the first new container.
Step S3103: modify the node information in the registry.
Because the old primary storage node is shut down first, while the registry's node information is still unchanged, writes to the old primary storage node may fail, but no write can land on the old primary without also reaching the new primary, so no dirty data appears in the new primary storage node. If, instead, the primary-standby relationship were disconnected first, the registry's node information would still point to the old primary, so an incoming transaction would be written directly to the old primary storage node; with replication already broken, the new primary could not synchronize that data, which would therefore be missing from the new primary, i.e. dirty data. The whole process completes within milliseconds.
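The reasoning behind the ordering in Fig. 4 can be checked with a toy model of the race: one write arrives while the registry still points at the old primary A, and C is the new primary. Only the wrong order (breaking replication before shutting down A) leaves A holding data that C is missing.

```python
def cutover(shutdown_first: bool):
    # Toy model of step S310: one write arrives while the registry
    # still points at old primary A; C is the new primary container.
    a_data, c_data = [], []
    state = {"a_down": False, "replicating": True}

    def incoming_write(value):
        if state["a_down"]:
            return False              # write fails; nothing lands anywhere
        a_data.append(value)
        if state["replicating"]:
            c_data.append(value)      # replicated A -> C
        return True

    if shutdown_first:                # order used in Fig. 4
        state["a_down"] = True
        incoming_write("x")
        state["replicating"] = False
    else:                             # wrong order: break replication first
        state["replicating"] = False
        incoming_write("x")           # lands on A but never reaches C
        state["a_down"] = True
    # "Dirty data": values A holds that the new primary C never received.
    return [v for v in a_data if v not in c_data]

dirty_safe = cutover(shutdown_first=True)   # Fig. 4 order
dirty_bad = cutover(shutdown_first=False)   # reversed order
```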
Step S311: verify whether the tenant's new transactions proceed normally, and in response to the verification passing, remove the tenant to be migrated from the blacklist in the registry.
After the tenant is removed from the blacklist, the monitoring node resumes monitoring and alarming for the tenant; at this point the entire data migration is complete and the distributed cache system returns to normal operation.
As the above shows, in the data migration method of a distributed cache system provided by the present application, the minimum migration granularity seen by a user of the migration tool is the tenant, while internally the program migrates shard by shard and the shards migrate in parallel, greatly shortening migration time and improving migration efficiency. In addition, disconnecting the primary-standby relationship only after data synchronization avoids the dirty-data problem. Finally, the whole data migration process completes automatically without manual intervention, making the method highly practical.
The method of the embodiment of the present application may be carried out by a migration tool, as shown in Fig. 5. After the migration tool is configured, it first modifies the registry's blacklist file so that the monitoring node stops monitoring the tenants to be migrated; it then mounts two new storage containers (new 1 and new 2) under the old primary storage node (primary 1), synchronizes the data of primary 1 to new 1 and new 2, shuts down primary 1, disconnects the primary-standby relationship between primary 1 and new 1, and modifies the registry's primary-standby relationship information so that new 1 serves as the new primary storage node and new 2 as the new standby storage node; finally it modifies the registry's blacklist information, removing the tenant's node pair from the blacklist and restoring monitoring of the tenant.
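The end-to-end flow of Fig. 5 can be sketched as an orchestration function; every interface here (the registry and shard helpers) is a hypothetical stand-in for the migration tool's internals, with in-memory stubs so the sketch runs.

```python
class Registry:
    """Minimal in-memory stand-in for the registration centre."""
    def __init__(self):
        self.blacklist = set()
        self.log = []

    def blacklist_add(self, tenant):
        self.blacklist.add(tenant)
        self.log.append(f"blacklist+{tenant}")

    def blacklist_remove(self, tenant):
        self.blacklist.discard(tenant)
        self.log.append(f"blacklist-{tenant}")

    def promote(self, shard):
        # Point the registry at new 1 as primary, new 2 as standby.
        self.log.append(f"promote:{shard.name}")

class Shard:
    """Records the migration actions applied to one shard group."""
    def __init__(self, name):
        self.name = name
        self.actions = []

    def do(self, action):
        self.actions.append(action)

def migrate_tenant(tenant, shards, registry):
    registry.blacklist_add(tenant)        # stop monitoring/failover first
    for shard in shards:                  # in practice, shards run in parallel
        shard.do("create_containers")     # new 1 and new 2
        shard.do("sync_config")           # carry over special config items
        shard.do("mount_and_sync_data")   # primary 1 -> new 1 -> new 2
        shard.do("shutdown_old_primary")  # before breaking replication
        shard.do("break_replication")
        registry.promote(shard)
    registry.blacklist_remove(tenant)     # after verification passes

reg = Registry()
shard_groups = [Shard("group-1"), Shard("group-2")]
migrate_tenant("tenant-a", shard_groups, reg)
```

Keeping the blacklist update first and last mirrors Fig. 5: the monitoring node must be held off for the whole window in which node information is being rewritten.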
Fig. 6 is a schematic structural diagram of the data migration apparatus of a distributed cache system provided by the present application. The apparatus comprises a basic information acquiring unit 610, a storage information acquiring unit 620, a shard determining unit 630, a container creating unit 640, and a synchronizing unit 650, which are connected in sequence.
The basic information acquiring unit 610 is configured to acquire basic information of the tenant to be migrated, the basic information including the tenant name and the name of the cluster where the tenant resides.
The storage information acquiring unit 620 is configured to acquire the tenant's storage information according to the basic information, the storage information including storage node information and primary-standby relationship information.
The shard determining unit 630 is configured to determine the tenant's number of shards from the storage information, each shard corresponding to one group of storage nodes.
The container creating unit 640 is configured to create a corresponding number of containers according to the number of shards.
The synchronizing unit 650 is configured to synchronize the configuration information and data of the old storage node to the corresponding container.
Preferably, the apparatus further includes a blacklist adding unit, configured to add the tenant to be migrated to a blacklist in the registry before the storage information acquiring unit 620 acquires the tenant's storage information according to the basic information.
Preferably, the apparatus further includes a blacklist removing unit, configured to verify, after the synchronizing unit 650 has synchronized the configuration information and data of the old storage node to the corresponding container, whether the tenant's new transactions proceed normally, and in response to the verification passing, remove the tenant to be migrated from the blacklist in the registry.
Preferably, the synchronizing unit 650 synchronizes the data of the old storage node to the corresponding container by: mounting a first new container under the old primary storage node as the old primary's standby node; mounting a second new container under the first new container; and synchronizing the data of the old primary storage node to the first new container and the second new container.
Preferably, the apparatus further includes a primary-standby disconnecting unit, configured to, after the synchronizing unit 650 has synchronized the data of the old primary storage node to the first new container and the second new container, shut down the old primary storage node, disconnect the primary-standby relationship between the old primary storage node and the first new container, and modify the node information in the registry.
Preferably, the apparatus further includes a migration table generating unit, configured to generate, after the container creating unit 640 creates the corresponding number of containers according to the number of shards, a migration table recording the pre-migration information at the granularity of tenant shard groups; the migration table includes the registry node IP and service port, the tenant name, the shard group, the shard range, the IPs and ports of the new and old storage nodes, the cluster mode, the container name, and the cluster to which the container belongs.
As the above shows, in the data migration apparatus of a distributed cache system provided by the present application, the minimum migration granularity seen by a user of the migration tool is the tenant, while internally the program migrates shard by shard and the shards migrate in parallel, greatly shortening migration time and improving migration efficiency. In addition, disconnecting the primary-standby relationship only after data synchronization avoids the dirty-data problem. Finally, the whole data migration process completes automatically without manual intervention, making the apparatus highly practical.
The embodiment of the invention also provides electronic equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the method.
Embodiments of the present invention further provide a computer program product, which includes a computer program/instruction, and the computer program/instruction implements the steps of the above method when executed by a processor.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program for executing the foregoing method is stored in the computer-readable storage medium.
As shown in fig. 7, the electronic device 600 may further include: communication module 110, input unit 120, audio processor 130, display 160, power supply 170. It is noted that the electronic device 600 does not necessarily include all of the components shown in fig. 7; in addition, the electronic device 600 may also include components not shown in fig. 7, which may be referred to in the prior art.
As shown in fig. 7, the central processor 100, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, the central processor 100 receiving input and controlling the operation of the various components of the electronic device 600.
The memory 140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. It may store information relating to failures as well as programs for processing such information, and the central processor 100 may execute the programs stored in the memory 140 to realize information storage, processing, and the like.
The input unit 120 provides input to the cpu 100. The input unit 120 is, for example, a key or a touch input device. The power supply 170 is used to provide power to the electronic device 600. The display 160 is used to display an object to be displayed, such as an image or a character. The display may be, for example, but is not limited to, an LCD display.
The memory 140 may be a solid-state memory such as a read-only memory (ROM), a random-access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered off, can be selectively erased, and can be provided with more data; an example of such a memory is sometimes called an EPROM or the like. The memory 140 may also be some other type of device. The memory 140 includes a buffer memory 141 (sometimes referred to as a buffer). The memory 140 may include an application/function storage section 142, which stores application programs and function programs or the flow by which the central processor 100 executes the operation of the electronic device 600.
The memory 140 may also include a data store 143 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. A driver storage portion 144 of the memory 140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., a messaging application, an address book application, etc.).
The communication module 110 is a transmitter/receiver that transmits and receives signals via an antenna 111. The communication module (transmitter/receiver) 110 is coupled to the central processor 100 to provide input signals and receive output signals, much as in a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 110, such as a cellular network module, a Bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 110 is also coupled to a speaker 131 and a microphone 132 via an audio processor 130 to provide audio output via the speaker 131 and receive audio input via the microphone 132, thereby implementing general telecommunications functions. The audio processor 130 may include any suitable buffers, decoders, amplifiers, and so forth. In addition, the audio processor 130 is coupled to the central processor 100 so that sound can be recorded locally through the microphone 132 and locally stored sound can be played back through the speaker 131.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and implementations of the present invention are explained herein through specific embodiments; the description of the embodiments is intended only to aid understanding of the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A data migration method of a distributed cache system, the method comprising:
acquiring basic information of a tenant needing to be migrated, wherein the basic information comprises a tenant name and a cluster name where the tenant is located;
acquiring storage information of the tenant needing to be migrated according to the basic information, wherein the storage information comprises storage node information and primary-standby relationship information;
determining the number of fragments of the tenant according to the storage information, wherein each group of fragments corresponds to one group of storage nodes;
creating containers with corresponding quantity according to the number of the fragments;
synchronizing the configuration information and data of the old storage node to the corresponding container.
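The five steps recited in claim 1 can be sketched, purely for illustration, in Python. All names here (`Tenant`, `StorageInfo`, `migrate_tenant`, and the helper callbacks) are hypothetical and not part of the patent; the key point is that the fragment count is derived from the number of storage node groups, and one container is created and synchronized per fragment.

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    # Basic information (step 1): tenant name and the cluster it lives in.
    name: str
    cluster: str

@dataclass
class StorageInfo:
    # Storage information (step 2): one (primary, standby) node group per fragment.
    node_groups: list

def migrate_tenant(tenant, storage_lookup, create_container, sync):
    """Hypothetical sketch of the claimed migration steps."""
    # Step 2: obtain storage information from the basic information.
    storage = storage_lookup(tenant)
    # Step 3: the fragment count equals the number of storage node groups.
    shard_count = len(storage.node_groups)
    # Step 4: create one new container per fragment.
    containers = [create_container(tenant, i) for i in range(shard_count)]
    # Step 5: synchronize each old node group's configuration and data.
    for group, container in zip(storage.node_groups, containers):
        sync(group, container)
    return containers
```

A caller would supply real lookup, container-creation, and synchronization routines; stubs suffice to show the control flow.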
2. The data migration method of the distributed cache system according to claim 1, wherein before obtaining the storage information of the tenant to be migrated according to the basic information, the method further comprises: opening a blacklist for the tenant needing to be migrated in the registry, so as to prevent the registration information of the storage node where the tenant is located from being modified.
3. The data migration method of the distributed cache system according to claim 2, wherein after synchronizing the configuration information and data of the old storage node to the corresponding container, the method further comprises: verifying whether new services of the tenant can proceed normally, and removing the tenant needing to be migrated from the blacklist in the registry in response to the verification passing.
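The blacklist mechanism of claims 2 and 3 is essentially a freeze on the tenant's registration info for the duration of the migration. A minimal in-memory sketch, with an entirely hypothetical `Registry` class, illustrates the intended behavior:

```python
class Registry:
    """Minimal in-memory registry sketch (hypothetical, for illustration only)."""
    def __init__(self):
        self._nodes = {}        # tenant name -> registered storage-node info
        self._blacklist = set()

    def open_blacklist(self, tenant):
        # Claim 2: freeze the tenant's registration info before migration starts.
        self._blacklist.add(tenant)

    def remove_from_blacklist(self, tenant):
        # Claim 3: lift the freeze once the post-migration verification passes.
        self._blacklist.discard(tenant)

    def register(self, tenant, node_info):
        # Any attempt to modify a frozen tenant's registration is rejected.
        if tenant in self._blacklist:
            raise PermissionError(f"tenant {tenant!r} is frozen for migration")
        self._nodes[tenant] = node_info
```

The freeze guarantees the storage topology read in the earlier steps stays valid while containers are created and data is synchronized.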
4. The data migration method of the distributed cache system according to claim 1, wherein synchronizing the data of the old storage node to the corresponding container comprises:
mounting a first new container under an old primary storage node as a standby node of the old primary storage node;
mounting a second new container under the first new container as a standby node for the old primary storage node;
synchronizing data of an old primary storage node to the first new container and the second new container.
5. The data migration method of the distributed cache system according to claim 4, wherein after synchronizing the data of the old primary storage node to the first new container and the second new container, the method further comprises:
closing the old primary storage node;
disconnecting the primary-standby relationship between the old primary storage node and the first new container;
modifying the node information in the registry.
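Claims 4 and 5 describe a chained-replication cutover: both new containers are first mounted under the old primary so data flows through the chain, and only after synchronization is the old primary closed, the replication link broken, and the registry repointed. A hypothetical sketch (the `Node` class and its methods are illustrative stand-ins, loosely in the spirit of Redis-style replication, not the patent's implementation):

```python
class Node:
    """Minimal stand-in for a storage node or container (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.upstream = None   # node this one replicates from
        self.running = True

    def replicate_from(self, other):
        self.upstream = other  # mount as a standby of `other`

    def wait_until_synced(self, replicas):
        pass                   # placeholder: block until replication lag is zero

    def shutdown(self):
        self.running = False

    def detach_from(self, other):
        if self.upstream is other:
            self.upstream = None   # break the link; this node becomes primary

def cutover(old_primary, first_new, second_new, registry_update):
    # Claim 4: chain the new containers so data flows old -> first -> second.
    first_new.replicate_from(old_primary)
    second_new.replicate_from(first_new)
    old_primary.wait_until_synced([first_new, second_new])
    # Claim 5: close the old primary, break the primary-standby relationship,
    # and modify the node information in the registry.
    old_primary.shutdown()
    first_new.detach_from(old_primary)
    registry_update(first_new, second_new)
```

After the cutover, the first new container serves as the primary and the second as its standby, preserving the original one-primary/one-standby topology per fragment.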
6. The data migration method of the distributed cache system according to claim 1, wherein creating the corresponding number of containers according to the number of fragments further comprises:
generating a migration table for recording information before migration, wherein the information in the migration table is stored at the granularity of tenant fragment groups and comprises: the registry node IP and service port, the tenant name, the fragment group, the fragment interval, the new storage node IP, the old storage node IP and port, the cluster mode, the container name, and the cluster to which the container belongs.
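The migration table of claim 6 maps naturally onto one record per tenant fragment group. The field names below are illustrative only (the claim lists the information, not a schema), shown as a Python dataclass:

```python
from dataclasses import dataclass

@dataclass
class MigrationRecord:
    """One migration-table row, keyed per tenant fragment group (claim 6).
    Field names are hypothetical; the claim enumerates the information only."""
    registry_addr: str     # registry node IP and service port
    tenant_name: str
    fragment_group: str
    fragment_interval: str # e.g. a hash-slot range served by this group
    new_node_ip: str       # new storage node IP
    old_node_addr: str     # old storage node IP and port
    cluster_mode: str
    container_name: str
    container_cluster: str # cluster to which the container belongs
```

Keeping the table at fragment-group granularity lets each group be migrated, verified, or rolled back independently of the tenant's other fragments.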
7. An apparatus for data migration in a distributed cache system, the apparatus comprising:
the basic information acquisition unit is used for acquiring basic information of a tenant needing to be migrated, wherein the basic information comprises a tenant name and a cluster name where the tenant is located;
a storage information obtaining unit, configured to obtain storage information of the tenant to be migrated according to the basic information, wherein the storage information comprises storage node information and primary-standby relationship information;
the fragment determining unit is used for determining the number of fragments of the tenant according to the storage information, wherein each group of fragments corresponds to one group of storage nodes;
the container creating unit is used for creating containers with corresponding quantity according to the number of the fragments;
and the synchronization unit is used for synchronizing the configuration information and the data of the old storage node to the corresponding container.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the data migration method of the distributed cache system of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the data migration method of the distributed cache system according to any one of claims 1 to 6.
10. A computer program product comprising computer programs/instructions for implementing the steps of the data migration method of the distributed caching system of any one of claims 1 to 6 when executed by a processor.
CN202211005410.6A 2022-08-22 2022-08-22 Data migration method and device of distributed cache system Pending CN115292293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211005410.6A CN115292293A (en) 2022-08-22 2022-08-22 Data migration method and device of distributed cache system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211005410.6A CN115292293A (en) 2022-08-22 2022-08-22 Data migration method and device of distributed cache system

Publications (1)

Publication Number Publication Date
CN115292293A true CN115292293A (en) 2022-11-04

Family

ID=83830027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211005410.6A Pending CN115292293A (en) 2022-08-22 2022-08-22 Data migration method and device of distributed cache system

Country Status (1)

Country Link
CN (1) CN115292293A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546092A (en) * 2023-07-04 2023-08-04 深圳市亲邻科技有限公司 Redis-based object model storage system
CN116546092B (en) * 2023-07-04 2023-10-13 深圳市亲邻科技有限公司 Redis-based object model storage system

Similar Documents

Publication Publication Date Title
CN113641511B (en) Message communication method and device
CN110413685B (en) Database service switching method, device, readable storage medium and computer equipment
CN109376197B (en) Data synchronization method, server and computer storage medium
CN101808127B (en) Data backup method, system and server
CN102890716B (en) The data back up method of distributed file system and distributed file system
CN103164523A (en) Inspection method, device and system of data consistency inspection
JP2021524104A (en) Master / Standby Container System Switching
US20150312340A1 (en) Method and system for data synchronization
CN107729515B (en) Data synchronization method, device and storage medium
CN112612851B (en) Multi-center data synchronization method and device
CN112291082B (en) Disaster recovery processing method, terminal and storage medium for machine room
CN112256477A (en) Virtualization fault-tolerant method and device
CN115292293A (en) Data migration method and device of distributed cache system
CN112929438B (en) Business processing method and device of double-site distributed database
CN110851528B (en) Database synchronization method and device, storage medium and computer equipment
CN110597467B (en) High-availability data zero-loss storage system and method
WO2023019953A1 (en) Data synchronization method and system, server, and storage medium
CN105511808A (en) Data operation method, system and related device
CN106502831B (en) A kind of method and device of image file duplication
CN115587141A (en) Database synchronization method and device
CN108429813B (en) Disaster recovery method, system and terminal for cloud storage service
CN102014008A (en) Data disaster-tolerant method and system
CN112910971B (en) Multi-station data synchronization method, device and system
CN110445664B (en) Multi-center server dual-network main selection system of automatic train monitoring system
CN113467717B (en) Dual-machine volume mirror image management method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination