WO2018099397A1 - Method, apparatus and storage medium for data migration in a database cluster - Google Patents
Method, apparatus and storage medium for data migration in a database cluster
- Publication number
- WO2018099397A1 (PCT Application PCT/CN2017/113563)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- migrated
- migration
- incremental
- node
- Prior art date: 2016-12-01
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/214—Database migration support
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/80—Database-specific techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method and apparatus for data migration in a database cluster. The method includes: obtaining a snapshot of a source data node, and recording incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot (310); migrating the backed-up inventory data to a target data node (330); migrating the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notifying the source data node to perform a write-lock operation on the data shard to be migrated and migrating the unmigrated incremental data to the target data node (350); and, after the incremental data migration is complete, notifying the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node (370).
Description
This application claims priority to Chinese Patent Application No. 201611090677.4, filed with the Chinese Patent Office on December 1, 2016 and entitled "Method and Apparatus for Data Migration in a Database Cluster", the entire contents of which are incorporated herein by reference.
The present application relates to the field of computer application technologies, and in particular to a method, an apparatus, and a storage medium for data migration in a database cluster.
Background
When an application serves a large volume of user traffic, providing database services from a single server inevitably degrades the user experience; multiple servers must therefore jointly provide database services, forming what is known as a database cluster.

As user traffic continues to grow, the storage capacity and processing capacity of the database cluster will in turn approach the upper limit of the cluster, making it necessary to relieve the storage pressure and load pressure of the existing servers through data migration.
Summary
The embodiments of the present invention provide a method, an apparatus, and a storage medium for data migration in a database cluster. The technical solutions adopted by the embodiments of the present invention are as follows:
The present application provides a method for data migration in a database cluster, where the database cluster is composed of at least one coordination node, a source data node, and a target data node. The method includes: obtaining a snapshot of the source data node, and recording incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot; migrating the backed-up inventory data to the target data node; migrating the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notifying the source data node to perform a write-lock operation on the data shard to be migrated and migrating the unmigrated incremental data to the target data node; and, after the incremental data migration is complete, notifying the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.
The present application provides an apparatus for data migration in a database cluster, including one or more memories and one or more processors, where the one or more memories store one or more instruction modules configured to be executed by the one or more processors. The database cluster is composed of at least one coordination node, a source data node, and a target data node, and the one or more instruction modules include: an incremental data recording module, configured to obtain a snapshot of the source data node and record incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot; an inventory data migration module, configured to migrate the backed-up inventory data to the target data node; an incremental data migration module, configured to migrate the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notify the source data node to perform a write-lock operation on the data shard to be migrated and migrate the unmigrated incremental data to the target data node; and a route switching module, configured to notify, after the incremental data migration is complete, the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.
The present application further provides a non-transitory computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method described above.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the embodiments of the present invention.
FIG. 1 is a schematic diagram of an implementation environment according to an embodiment of the present invention;

FIG. 2 is a block diagram of a server according to an exemplary embodiment;

FIG. 3 is a flowchart of a method for data migration in a database cluster according to an exemplary embodiment;

FIG. 4 is a flowchart of one embodiment of the step, in the embodiment corresponding to FIG. 3, of recording the incremental data in the data shard to be migrated according to the inventory data of the data shard backed up in the snapshot;

FIG. 5 is a flowchart of one embodiment of the step of iteratively migrating the incremental data by switching among several of the record files;

FIG. 6 is a flowchart of another embodiment of the step of iteratively migrating the incremental data by switching among several of the record files;

FIG. 7a is a schematic diagram of a specific implementation of the method for data migration in a database cluster in an application scenario;

FIG. 7b is a schematic diagram of the newly added data node involved in FIG. 7a;

FIG. 8 is a block diagram of an apparatus for data migration in a database cluster according to an exemplary embodiment;

FIG. 9 is a block diagram of one embodiment of the incremental data recording module in the corresponding embodiment;

FIG. 10 is a block diagram of one embodiment of the iterative migration unit.
The above drawings show specific embodiments of the present invention, which are described in greater detail below. These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any way, but to explain the concept of the embodiments of the present invention to those skilled in the art by reference to particular embodiments.
Exemplary embodiments are described in detail below, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the embodiments of the present invention as detailed in the appended claims.
As noted above, as user traffic grows, a database cluster needs data migration to relieve the storage pressure and load pressure of the existing servers. However, the existing data migration process must stop clients from accessing the data being migrated, that is, the database service must be stopped to guarantee the consistency of the migration. This inevitably degrades user access efficiency and results in a poor access experience.

Commonly used databases include key-value databases, PostgreSQL-based databases, and so on.

For key-value databases, a multi-replica mechanism can implement data migration, data rebalancing, and similar functions that are almost imperceptible to the user. However, because key-value databases support neither full data transactions (ACID) nor distributed transactions, applications must guarantee transactional behavior in their business logic in order to guarantee migration consistency, which is unacceptable to the vast majority of application developers. Moreover, the multi-replica mechanism only applies to data migration within a homogeneous database and cannot migrate data between heterogeneous databases.

For this reason, PostgreSQL-based databases are widely used; for example, the open-source database clusters Postgres-xc, Postgres-xl, and Postgres-x2 provide automatic data distribution and automatic data aggregation. During data migration, however, and especially when scaling the cluster out, partial data migration is impossible: all data on the original server must be exported in full and then imported in full onto the newly added server before the data can be redistributed. Such a migration scheme must stop the database service to guarantee migration consistency, and if the volume of data on the original server is large, the service outage becomes very long, severely degrading user access efficiency.

As another example, the PostgreSQL-based database middleware pg_shard and citusdata support sharding, and partial data can be migrated by relocating shards. During migration, however, the database service must still be stopped, that is, client access to the data being migrated must be suspended, to guarantee the consistency of the migration.
Therefore, to prevent the database service from being interrupted during data migration, a method for data migration in a database cluster is proposed.
FIG. 1 shows the implementation environment involved in the above method for data migration in a database cluster. The implementation environment includes a database cluster 100, a server 200, and a client 300.

The database cluster 100 is composed of several servers. It includes at least one server acting as a coordination node 101 and several servers acting as data nodes 103, where the coordination node 101 provides the client 300 with automatic data distribution and automatic data aggregation, and the data nodes 103 store the data available for access. In the present application, any data node 103 can act as either a source data node or a target data node; without loss of generality, the data node holding the data to be migrated is called the source data node, and the data node the data is about to move into is called the target data node.

The coordination node 101 is responsible for receiving write operations from the client 300 and importing the data to be written onto the data shards owned by the data nodes 103 of the database cluster 100. That is, it computes the shard number of the data to be written according to a preset rule (for example, a hash algorithm or a routing algorithm), looks up the data node 103 corresponding to that shard number in a preset route mapping table, and forwards the data to the corresponding data shard owned by that data node 103 for storage.

When a user accesses data, that is, when the client 300 queries data in the database cluster 100, the coordination node 101 computes the shard number of the queried data from the query condition, looks up the corresponding data node 103 in the preset route mapping table, and then fetches the queried data from the corresponding data shards owned by one or more data nodes 103 and returns it to the client 300.
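To make the routing step concrete, the following minimal Python sketch illustrates the rule described above: a hash of the record key yields a shard number, a preset route mapping table maps shard numbers to data nodes, and a completed migration simply repoints one entry of that table. The shard count, node names, and function names are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of hash-based shard routing (shard count, node names, and
# function names are illustrative assumptions).
import hashlib

NUM_SHARDS = 16

# Preset route mapping table: shard number -> data node identifier.
route_map = {shard: f"data_node_{shard % 4}" for shard in range(NUM_SHARDS)}

def shard_number(key):
    """Compute the shard number of a record key with a preset hash rule."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def route(key):
    """Look up the data node that owns the shard holding this key."""
    return route_map[shard_number(key)]

def switch_route(shard, target_node):
    """Performed by the coordination node once a shard's migration is done."""
    route_map[shard] = target_node

print(route("user:42"))  # writes and queries for this key go to one node
```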
When the storage capacity and processing capacity of the database cluster 100 reach the upper limit of the cluster, the server 200 interacts with the coordination node 101 and the data nodes 103 to control the database cluster 100 to perform data migration; for example, the data on the data shard numbered 0 is migrated from data node a to data node c, thereby relieving the storage pressure and load pressure of data node a.

The server 200 may be embedded in the database cluster 100 or deployed independently of it. The client 300 refers to an application client.
FIG. 2 is a block diagram of a server according to an exemplary embodiment. This hardware structure is merely one example to which the embodiments of the present invention apply; it is not to be construed as limiting the scope of use of the embodiments, nor as implying that the embodiments depend on this server 200.

The server 200 may vary considerably depending on configuration or performance. As shown in FIG. 2, the server 200 includes a power supply 210, an interface 230, at least one storage medium 250, and at least one central processing unit (CPU) 270.

The power supply 210 provides the operating voltage for the hardware devices on the server 200.

The interface 230 includes at least one wired or wireless network interface 231, at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, and the like, and is used for communicating with external devices.

The storage medium 250, as the carrier of stored resources, may be a random-access storage medium, a magnetic disk, an optical disc, or the like. The resources stored on it include an operating system 251, an application program 253, and data 255, and the storage may be transient or persistent. The operating system 251 manages and controls the hardware devices and the application program 253 on the server 200 so that the central processing unit 270 can compute and process the massive data 255; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like. The application program 253 is a computer program that performs at least one specific task on top of the operating system 251; it may include at least one module (not shown), each of which may contain a series of operation instructions for the server 200. The data 255 may be files, pictures, and the like stored on a disk.

The central processing unit 270 may include one or more processors and is configured to communicate with the storage medium 250 over a bus in order to compute and process the massive data 255 in the storage medium 250. As described above, the server 200 to which the embodiments of the present invention apply controls the database cluster to perform data migration: the central processing unit 270 reads a series of operation instructions stored in the storage medium 250 to carry out data migration in the database cluster, solving the prior-art problem that the database service must be stopped during migration.

In addition, the embodiments of the present invention can equally be implemented by hardware circuits, or by hardware circuits combined with software instructions; implementing the embodiments is therefore not limited to any specific hardware circuit, software, or combination of the two.
Referring to FIG. 3, in an exemplary embodiment, a method for data migration in a database cluster applies to the server 200 of the implementation environment shown in FIG. 1. The method may be performed by the server 200 and may include the following steps.

Step 310: Obtain a snapshot of the source data node, and record the incremental data in the data shard to be migrated according to the inventory data of the data shard backed up in the snapshot.
It should be understood that when one data node holds far more data than the others, data skew has occurred in the database cluster. To distribute the data more evenly across the data nodes, data migration can be performed, moving data from a data node under heavy load pressure (that is, with a large volume of user access) to a data node under light load pressure (that is, with a small volume of user access).

As another example, when the storage capacity and processing capacity of the database cluster are about to reach the upper limit of the cluster, the cluster capability can be increased by scaling out, that is, by adding data nodes; in this case data migration is also required, to reduce the storage pressure and load pressure of the existing data nodes.

The server can therefore determine whether the database cluster needs data migration by monitoring its running state. The running state may be represented by the load (that is, the volume of user access) of each data node in the cluster, or by the CPU usage of each data node.

For example, if the volume of user access on one data node is observed to be far greater than on the remaining nodes, data skew has occurred and the cluster is judged to need data migration. Alternatively, if the CPU usage of all data nodes exceeds a preset threshold (for example, 80%), the cluster capability is about to reach its upper limit and the cluster is likewise judged to need migration.
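As a rough illustration of this monitoring logic, the sketch below checks the two trigger conditions just described: pronounced data skew on one node, or CPU usage above the preset threshold on every node. The skew factor and the shape of the metrics are assumptions for illustration only.

```python
# Illustrative sketch of the migration trigger: data skew on one node, or
# CPU usage above the preset threshold on every node (skew factor assumed).
CPU_THRESHOLD = 0.80  # preset threshold, e.g. 80%
SKEW_FACTOR = 3.0     # "far greater than the remaining nodes" (assumed)

def needs_migration(node_access, node_cpu):
    volumes = list(node_access.values())
    for i, v in enumerate(volumes):
        others = volumes[:i] + volumes[i + 1:]
        # Data skew: one node's access volume far exceeds the others' average.
        if others and v > SKEW_FACTOR * (sum(others) / len(others)):
            return True
    # Capacity ceiling: every node's CPU usage exceeds the preset threshold.
    return all(cpu > CPU_THRESHOLD for cpu in node_cpu.values())

print(needs_migration({"a": 900, "b": 100, "c": 120},
                      {"a": 0.7, "b": 0.2, "c": 0.3}))  # True: node a is skewed
```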
During data migration the database service is not stopped, and the client continues to perform write operations on the data shards of the data nodes; such write operations include, for example, data insertion, data deletion, and data modification. Accordingly, data migration comprises the migration of inventory data and the migration of incremental data, where the inventory data is the data that existed before the migration, and the incremental data is the new data produced by write operations during the migration, or existing inventory data changed by such write operations.

A snapshot is defined as a copy of the data in a specified data set, the copy comprising an image of that data at some point in time (for example, the point at which the copy begins). In this embodiment, obtaining a snapshot of the source data node distinguishes the inventory data from the incremental data on the data shard to be migrated; accordingly, the data shard to be migrated may be located on the source data node.

Specifically, at the point in time when the data migration is about to begin, all data on the data shards owned by the source data node is copied, yielding the snapshot of the source data node. The data backed up in the snapshot thus includes the inventory data of the data shard to be migrated. Against this inventory data, every write operation performed on the data shard to be migrated during the migration is recorded, producing the incremental data of that shard. In other words, during the migration, any data that differs from the inventory data of the data shard to be migrated is regarded as incremental data of that shard.

Further, the incremental data may be recorded in several record files to facilitate the subsequent migration of the incremental data.
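The split between inventory and incremental data can be illustrated with a small sketch: the rows present at the snapshot instant form the inventory data, and every write recorded after that instant is incremental data, while the shard keeps serving writes. All class and method names here are illustrative assumptions.

```python
# Illustrative sketch of the inventory/incremental split (names assumed).
import copy
import time

class ShardToMigrate:
    def __init__(self, rows):
        self.rows = dict(rows)     # live data; the shard keeps serving writes
        self.snapshot = None       # inventory data frozen at one point in time
        self.incremental_log = []  # write operations recorded after that point

    def take_snapshot(self):
        # Copy everything currently on the shard: this is the inventory data.
        self.snapshot = (time.time(), copy.deepcopy(self.rows))

    def write(self, op, key, value=None):
        # The database service is not stopped: writes keep landing, and once
        # the snapshot exists each one is also recorded as incremental data.
        if op == "delete":
            self.rows.pop(key, None)
        else:  # "insert" or "update"
            self.rows[key] = value
        if self.snapshot is not None:
            self.incremental_log.append((op, key, value))

shard = ShardToMigrate({"k1": "v1"})
shard.take_snapshot()
shard.write("insert", "k2", "v2")
print(shard.snapshot[1], shard.incremental_log)
# {'k1': 'v1'} [('insert', 'k2', 'v2')]
```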
Step 330: Migrate the backed-up inventory data to the target data node.

Once the snapshot of the source data node has been obtained, the inventory data of the data shard to be migrated, as backed up in the snapshot, is available, and the migration of the inventory data can proceed.

The inventory data may be migrated directly from the source data node to the target data node, or it may first be imported from the source data node into a preset storage space and then exported from that storage space to the target data node.

Taking direct migration as an example, a persistent connection is established between the source data node and the target data node, and the inventory data stream is transmitted over this connection from the source data node to the target data node.
Step 350: Migrate the recorded incremental data; during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notify the source data node to perform a write-lock operation on the data shard to be migrated, and migrate the unmigrated incremental data to the target data node.

Once the inventory data has been migrated, the migration of the incremental data can begin.

It is worth noting that because the incremental data is migrated after the inventory data, the target data node already stores the inventory data when the incremental migration starts, and the incremental data consists of new data produced by write operations during the migration or of inventory data changed by such write operations. Migrating incremental data to the target data node therefore amounts to redoing, on the target data node, the write operations corresponding to that incremental data.

For example, if a piece of incremental data is new data produced by a write operation, corresponding new data is generated on the target data node; if it is inventory data changed by a write operation, the inventory data already stored on the target data node is changed accordingly, producing the changed inventory data.

Further, the incremental data may be migrated directly from the source data node to the target data node, or first imported from the source data node into a preset storage space and then exported from that storage space to the target data node.

Taking direct migration as an example, a persistent connection is established between the source data node and the target data node, and the incremental data stream is transmitted over this connection from the source data node to the target data node. Moreover, because the speed at which the source data node produces incremental data may differ from the speed at which the target node redoes it, that is, the read and write speeds of the incremental data may differ, the connection has a data-caching capability of a certain size, adapting to application scenarios with mismatched read/write speeds and thereby improving the applicability of incremental data migration.
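The buffering behavior of such a connection can be sketched with a bounded queue between a sender on the source and a redo loop on the target; the queue absorbs the mismatch between the production rate and the redo rate. This is a minimal stand-in under assumed names, not the patent's actual transport.

```python
# Bounded-queue stand-in for the buffering connection (names illustrative).
import queue
import threading

SENTINEL = object()
buffer = queue.Queue(maxsize=1000)  # "a certain size of data caching capability"

def source_sender(ops):
    for op in ops:
        buffer.put(op)   # blocks if the target temporarily falls behind
    buffer.put(SENTINEL)

def target_redoer(shard_rows):
    while True:
        op = buffer.get()
        if op is SENTINEL:
            break
        kind, key, value = op   # redo the write operation on the target copy
        if kind == "delete":
            shard_rows.pop(key, None)
        else:
            shard_rows[key] = value

target_rows = {}
t = threading.Thread(target=target_redoer, args=(target_rows,))
t.start()
source_sender([("insert", "k1", "v1"), ("update", "k1", "v2"), ("delete", "k0", None)])
t.join()
print(target_rows)  # {'k1': 'v2'}
```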
Understandably, because the database service is not stopped during the migration, the client continues to write to the data shard being migrated on the source data node. If writes to that shard are not locked on the source data node, incremental data will be produced without end, and complete migration of the incremental data cannot be guaranteed.

Therefore, during the migration, whether the unmigrated incremental data satisfies the preset write-lock condition is used to further decide whether a write-lock operation needs to be performed on the data shard to be migrated.

A write-lock operation causes the client's write operations on the data shard to be migrated to fail or block; the preset write-lock condition is therefore configured in advance so that the client does not perceive the failed or blocked writes. For example, the condition may concern the data amount of the unmigrated incremental data, or the redo time of the unmigrated incremental data. Understandably, if the amount of unmigrated incremental data is tiny, or its redo time extremely short, the client will be entirely unaware of writes failing or blocking during the migration.

If the unmigrated incremental data does not satisfy the preset write-lock condition, the incremental migration continues while the client's writes to the data shard being migrated are allowed to proceed.

Conversely, if the unmigrated incremental data satisfies the preset write-lock condition, the source data node is notified to perform a write-lock operation on the data shard to be migrated. New write operations performed by the client on that shard will then fail or block, while writes already in progress continue. On this basis, after all write operations previously issued on the shard have completed, the unmigrated incremental data is migrated to the target data node, guaranteeing the completeness of the incremental migration.
Step 370: After the incremental data migration is complete, notify the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.

Once the coordination node has switched the route corresponding to the data shard to be migrated, the client's reads and writes on that shard are redirected from the source data node to the target data node. At this point, the data migration is complete.

Through the above process, data migration imperceptible to the client is achieved, the database service is never interrupted during the migration, and user access efficiency and the user's access experience are effectively improved.

Moreover, the above method for data migration in a database cluster not only supports full data transactions, thereby guaranteeing migration consistency, but also supports data migration between heterogeneous databases, effectively widening the application scenarios of data migration.
Referring to FIG. 4, in an exemplary embodiment, step 310 may include the following steps.

Step 311: Based on the inventory data, receive several write operations performed by the client on the data shard to be migrated.

As noted above, the data backed up in the snapshot of the source data node includes the inventory data of the data shard to be migrated, and the snapshot is generated at the point in time when the migration is about to begin.

Accordingly, based on the inventory data, that is, after that point in time, all write operations performed by the client on the data shard to be migrated are recorded, so that the incremental data can later be redone on the target node from those write operations.

Step 313: Generate several record files from the several write operations, and record the incremental data of the data shard to be migrated in those record files.

It should be understood that the amount of incremental data recorded in each record file is limited. In this embodiment, all write operations are recorded across several record files, and together those files constitute the incremental data of the data shard to be migrated; the target data node can redo the incremental data from all the write operations recorded in them, carrying out the migration of the shard's incremental data and thereby guaranteeing migration consistency.

Further, a threshold is set on the amount of incremental data recorded in each record file. For example, with the threshold set to 100 incremental entries, a single write operation whose incremental data exceeds the threshold is recorded across at least two record files; conversely, when the incremental data of a single write operation stays under the threshold, the incremental data of at least two write operations is recorded in the same record file, which keeps the record files storage-efficient.
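A minimal sketch of this record-file scheme follows: entries are appended to the current record file, and the writer rolls over to a new file once the preset per-file threshold (100 entries, per the example above) is reached, so a large write operation spills across files while small ones share a file. The file naming and JSON encoding are illustrative assumptions.

```python
# Minimal sketch of rotating record files with a preset per-file threshold
# (file naming and JSON encoding are illustrative assumptions).
import json

MAX_ENTRIES_PER_FILE = 100  # e.g. 100 incremental entries per record file

class RecordFileWriter:
    def __init__(self, prefix="incr"):
        self.prefix, self.index, self.count = prefix, 0, 0
        self.fh = None
        self._roll()

    def _roll(self):
        # Close the full record file and open the next one in sequence.
        if self.fh:
            self.fh.close()
        self.index += 1
        self.count = 0
        self.fh = open(f"{self.prefix}_{self.index:06d}.log", "w")

    def record(self, entries):
        # One write operation may produce several incremental entries;
        # entries beyond the threshold spill into the next record file.
        for entry in entries:
            if self.count >= MAX_ENTRIES_PER_FILE:
                self._roll()
            self.fh.write(json.dumps(entry) + "\n")
            self.count += 1

w = RecordFileWriter()
w.record([{"op": "insert", "key": k, "value": k} for k in range(250)])  # 3 files
w.fh.close()
```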
Accordingly, the step of migrating the recorded incremental data in step 350 may include the following step: iteratively migrate the incremental data by switching among several of the record files.

As noted above, migrating the incremental data directly requires establishing a persistent connection between the source data node and the target data node, and because the rate at which the source produces incremental data may differ from the rate at which the target redoes it, the connection also needs a data cache of a certain size.

This not only forces the database cluster to maintain the connection over its whole lifetime, which is somewhat intrusive to the cluster's kernel code, but the connection's caching capability also occupies the cluster's own storage space, and over the long course of streaming it may exhaust the system's disk space and harm system stability.

For this reason, in this embodiment the incremental data is migrated iteratively.

Specifically, during the migration, all write operations performed by the client on the data shard to be migrated are recorded in several distinct record files, each file recording a portion of the shard's incremental data. The target data node can then complete one iteration of incremental migration per record file.

Further, the amounts of incremental data recorded in the individual record files differ. Preferably, the record file used by each iteration records less incremental data than the file used by the previous iteration; in other words, the record file used by the final iteration records the least incremental data.

Still further, the decrease in the amount of incremental data recorded per record file is controlled by the server; it may decrease randomly or by a preset amount.
Further, referring to FIG. 5, in an exemplary embodiment, the step of iteratively migrating the incremental data by switching among several of the record files may include the following steps.

Step 410: Take the incremental-data end position recorded during the previous iteration as the incremental-data start position of the current iteration, and switch to the corresponding record file according to the start position of the current iteration.

The incremental data recorded in each record file has a corresponding incremental-data start position and end position, and these positions correspond to the iteration round that uses the file. Understandably, because the record files are generated in sequence, the end position of the current iteration is also the start position of the next: the incremental data before the current end position is migrated in this iteration, while the incremental data after it is migrated in subsequent iterations.

On this basis, once the end position recorded by the previous iteration is known, the start position of the current iteration can be determined, and the record file corresponding to this iteration is obtained.

Step 430: Obtain the incremental data of the current iteration from the record file, and record the incremental-data end position of the current iteration.

After switching to the record file corresponding to the current start position, the incremental data recorded in it is obtained as this iteration's incremental data.

Further, because the record files record differing amounts of incremental data, that is, each file's start and end positions differ, the end position of the current iteration is also recorded when its incremental data finishes migrating, for use by subsequent iterations.

Step 450: Migrate the obtained incremental data to the target data node. In this embodiment, this migration is completed through the preset storage space: the obtained incremental data is imported from the source data node into the preset storage space and then exported from it to the target data node.

The preset storage space is provisioned independently of the database cluster, avoiding consumption of the cluster's own storage space. This relieves starvation of system disk space, improves system stability, decouples the data migration from the database cluster, and prevents the database service from being interrupted during the migration, further improving user access efficiency and the user's access experience.
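One iteration of this scheme can be sketched as follows: resume from the end position recorded by the previous iteration, stage this round's slice of the log, redo it on the target, and return the new end position; the driver shrinks the batch each round so that, in the full scheme, control passes to the write-lock path of step 510 (see below) once the batch is small enough. Modeling the incremental log as a single list is a simplification; all names are illustrative.

```python
# Minimal sketch of one iteration of steps 410-450 (illustrative names;
# the incremental log is modeled as a single growing list of entries).
def migrate_iteration(log, start, batch_size, redo):
    """Resume at `start` (the previous iteration's recorded end position),
    migrate up to `batch_size` entries, and return the new end position."""
    end = min(len(log), start + batch_size)
    staged = log[start:end]      # import this round's increment into the
    for entry in staged:         # preset storage space, then export it:
        redo(entry)              # the target redoes each write operation
    return end

# Driver: each round's record file holds less data than the previous one.
log = [("insert", i, i * i) for i in range(2500)]
target = {}

def redo(entry):
    op, key, value = entry
    target[key] = value

pos, batch = 0, 1000
while pos < len(log):
    pos = migrate_iteration(log, pos, batch, redo)
    batch = max(batch // 2, 1)   # shrinking batches; step 510 (below) decides
                                 # when to stop and take the write lock
print(pos, len(target))          # 2500 2500
```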
Referring to FIG. 6, in an exemplary embodiment, the step of iteratively migrating the incremental data by switching among several of the record files may further include the following steps.

Step 510: Judge whether the data amount of the incremental data migrated in the current iteration, or the migration time of that incremental data, is no greater than a preset threshold.

As noted above, because the database service is not stopped during the migration, the client continues to write to the data shard being migrated on the source data node; if those writes are not locked on the source data node, incremental data is produced without end and complete migration of the incremental data cannot be guaranteed.

With direct migration, the unmigrated incremental data sitting at some moment in the cache of the connection established between the source and target data nodes can be used to judge whether writes to the shard need to be locked; for example, when, at some moment, the amount of unmigrated incremental data in the cache falls below a preset threshold, it is judged that a write-lock operation needs to be performed on the data shard to be migrated.

During iterative migration, however, the recording of incremental data into record files does not stop while the current iteration runs, that is, record files keep being generated, so the server cannot know how much incremental data remains unmigrated and thus cannot judge directly from the unmigrated incremental data whether a write-lock operation on the shard is needed.

Also as noted above, if the record file used by each iteration records less incremental data than the previous iteration's, then the record file used by the final iteration records the least.

On this basis, in this embodiment the preset write-lock condition is set as: the data amount of the incremental data migrated in the current iteration is no greater than a preset threshold. In other words, the incremental data migrated in the current iteration is used to judge indirectly whether the unmigrated incremental data satisfies the preset write-lock condition, and hence whether a write-lock operation needs to be performed on the data shard to be migrated.

If the data amount of the incremental data migrated in the current iteration is no greater than the preset threshold, the data amount of the final iteration's incremental data, that is, of the unmigrated incremental data, is necessarily no greater than it either; the process then proceeds to step 530, where the unmigrated incremental data is judged to satisfy the preset write-lock condition.

Understandably, the final iteration may need to switch to only one record file, or to several.

Otherwise, the process returns to step 410 and the iterative migration of the incremental data continues.

Alternatively, the preset write-lock condition may be set as: the migration time of the incremental data migrated in the current iteration is no greater than a preset threshold, where the migration time is the time the target data node takes to redo the incremental data, obtained as the ratio of the current iteration's data amount to the speed at which the target data node redoes incremental data. For example, if a typical write-lock duration imperceptible to the client is 10 ms to 30 ms, the preset write-lock condition may be that the redo time of the current iteration's incremental data is no greater than 10 ms.

If the migration time of the incremental data migrated in the current iteration is no greater than the preset threshold, the migration time of the final iteration's, that is, of the unmigrated, incremental data is necessarily no greater than it either; the process then proceeds to step 530, where the unmigrated incremental data is judged to satisfy the preset write-lock condition. Otherwise, the process returns to step 410 and the iterative migration of the incremental data continues.
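The preset write-lock condition of step 510 can be sketched as a simple predicate over the current iteration's batch: lock when either the entry count or the estimated redo time (entry count divided by the target's redo speed) is no greater than its threshold. The concrete thresholds and the redo-speed estimate are assumptions; the 10 ms figure follows the example above.

```python
# Illustrative predicate for the preset write-lock condition of step 510
# (thresholds and the redo-speed estimate are assumptions).
MAX_ENTRIES = 50          # data-amount threshold for one iteration's batch
MAX_REDO_SECONDS = 0.010  # e.g. 10 ms, per the typical unnoticed lock time

def write_lock_condition(batch_entries, redo_speed_entries_per_s):
    # Estimated time the target data node needs to redo this batch.
    redo_time = batch_entries / redo_speed_entries_per_s
    return batch_entries <= MAX_ENTRIES or redo_time <= MAX_REDO_SECONDS

# A batch of 40 entries redone at 10,000 entries/s takes 4 ms: lock now.
print(write_lock_condition(40, 10_000.0))  # True
```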
In an exemplary embodiment, the method described above may further include the following step: when the route switch corresponding to the data shard to be migrated is complete, notify the source data node to perform an unlock operation on the data shard to be migrated, and stop recording the incremental data in the data shard to be migrated.

Performing the unlock operation on the data shard to be migrated releases the write lock placed on it, so that reads and writes on the shard resume; the client's subsequent reads and writes on the shard are redirected from the source data node to the target data node.

Further, once the redirection is complete, the source data node no longer produces incremental data for the data shard to be migrated, so it need not continue recording the shard's incremental data against the snapshot. At this point, the incremental data migration is finished.
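Putting the final steps together, here is a minimal end-to-end sketch of the cutover order, with plain dicts and lists standing in for the shard, its remaining incremental log, and the coordination node's route map; steps 610 to 615 of the scenario below follow the same order. All names are illustrative assumptions.

```python
# Minimal end-to-end sketch of the cutover order (names illustrative).
import threading

shard_write_lock = threading.Lock()   # write lock on the shard to be migrated
route_map = {3: "data_node_a"}        # coordination node's route for shard 3

def cutover(remaining_log, target_rows, shard_id):
    with shard_write_lock:            # step 610: new client writes fail or block
        # Steps 611-612: in-flight writes have drained; run the final iteration.
        for op, key, value in remaining_log:
            if op == "delete":
                target_rows.pop(key, None)
            else:
                target_rows[key] = value
        # Steps 613-615: switch the route, then reads/writes resume on the target.
        route_map[shard_id] = "data_node_d"
        remaining_log.clear()         # stop recording incremental data

target = {}
cutover([("insert", "k9", "v9")], target, 3)
print(route_map[3], target)  # data_node_d {'k9': 'v9'}
```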
FIG. 7a is a schematic diagram of a specific implementation of the method for data migration in a database cluster in an application scenario, and FIG. 7b is a schematic diagram of the newly added data node involved in FIG. 7a. Taking cluster expansion, that is, the addition of data node d, as an example, the data migration process in the database cluster according to the embodiments of the present invention is described below with reference to FIGS. 7a and 7b.

By performing step 601, the server obtains a snapshot of source data node a, and based on that snapshot begins, through step 602, to record the incremental data on data shard 3 to be migrated on source data node a. At the same time, through step 603, it begins to export the inventory data of data shard 3 from source data node a.

After these steps are completed, the data migration can begin.

First, through steps 604 to 605, the inventory data of data shard 3 is migrated from source data node a to target data node d.

Then the incremental data of data shard 3 is migrated iteratively.

Through steps 606 to 607, the current iteration of the incremental migration of data shard 3 is completed. When this iteration finishes, steps 608 to 609 judge whether to enter the final iteration.

If not, the process returns to step 606 and continues the non-final iterative migration of the incremental data.

If so, step 610 performs a write-lock operation on data shard 3, and steps 611 to 612 wait for all current write operations on the shard to complete and then perform the final iteration of the incremental migration of data shard 3.

Finally, through steps 613 to 615, the coordination node 101 is notified to switch the route corresponding to data shard 3 from source data node a to target data node d, and reads and writes on data shard 3 are resumed, so that the client's subsequent reads and writes on the shard are redirected from source data node a to target data node d.

At this point, the database cluster has completed the expansion onto target data node d, and the data has completed its migration from source data node a to target data node d.

In this application scenario, when the storage or processing capacity of the database cluster can no longer meet users' access demands, data expansion imperceptible to the client is supported: when the cluster is scaled out, the accompanying data migration need not stop the database service, which effectively improves user access efficiency and the user's access experience, while full transactions remain supported, guaranteeing the consistency of the data migration.
The following are apparatus embodiments of the present invention, which may be used to perform the method for data migration in a database cluster according to the embodiments of the present invention. For details not disclosed in the apparatus embodiments, refer to the method embodiments of data migration in a database cluster described above.

Referring to FIG. 8, in an exemplary embodiment, an apparatus 700 for data migration in a database cluster includes, but is not limited to: one or more memories; and one or more processors; where the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and the one or more instruction modules include an incremental data recording module 710, an inventory data migration module 730, an incremental data migration module 750, and a route switching module 770.

The incremental data recording module 710 is configured to obtain a snapshot of the source data node and record the incremental data in the data shard to be migrated according to the inventory data of the data shard backed up in the snapshot.

The inventory data migration module 730 is configured to migrate the backed-up inventory data to the target data node.

The incremental data migration module 750 is configured to migrate the recorded incremental data and, during the migration, when unmigrated incremental data satisfies the preset write-lock condition, notify the source data node to perform a write-lock operation on the data shard to be migrated and migrate the unmigrated incremental data to the target data node.

The route switching module 770 is configured to notify, after the incremental data migration is complete, the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.

Referring to FIG. 9, in an exemplary embodiment, the incremental data recording module 710 includes, but is not limited to: a write operation receiving unit 711 and a record file generating unit 713.

The write operation receiving unit 711 is configured to receive, based on the inventory data, several write operations performed by the client on the data shard to be migrated.

The record file generating unit 713 is configured to generate several record files from the several write operations and record the incremental data of the data shard to be migrated in those record files.

Accordingly, the incremental data migration module 750 includes an iterative migration unit, configured to iteratively migrate the incremental data by switching among several of the record files.

Referring to FIG. 10, in an exemplary embodiment, the iterative migration unit 751 includes, but is not limited to: a record file obtaining unit 7511, an incremental data obtaining unit 7513, and a migration unit 7515.

The record file obtaining unit 7511 is configured to take the incremental-data end position recorded during the previous iteration as the incremental-data start position of the current iteration, and switch to the corresponding record file according to the start position of the current iteration.

The incremental data obtaining unit 7513 is configured to obtain the incremental data of the current iteration from the record file and record the incremental-data end position of the current iteration.

The migration unit 7515 is configured to migrate the obtained incremental data to the target data node.

In an exemplary embodiment, the iterative migration unit 751 further includes, but is not limited to, a judging unit configured to judge whether the data amount of the incremental data migrated in the current iteration, or the migration time of that incremental data, is no greater than a preset threshold.

In an exemplary embodiment, the apparatus described above further includes, but is not limited to, an unlocking module configured to notify, when the route switch corresponding to the data shard to be migrated is complete, the source data node to perform an unlock operation on the data shard to be migrated and stop recording the incremental data in the data shard to be migrated.

Through the technical solutions and apparatus of the present application, during data migration, when the unmigrated incremental data satisfies the preset write-lock condition, the source data node is notified to perform a write-lock operation on the data shard to be migrated, and once all current write operations on the shard have completed, the unmigrated incremental data is migrated to the target data node. Although the client's write operations on the shard fail or block after the write lock is applied, the unmigrated incremental data satisfying the condition is minimal, so the failure or blocking lasts only an extremely short time and is imperceptible to the client; stopping the database service during the migration is avoided, effectively improving user access efficiency and the user's access experience.

It should be noted that when the apparatus for data migration in a database cluster provided by the above embodiments performs data migration, the division into the above functional modules is used only as an example; in practice, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.

In addition, the apparatus embodiments and the method embodiments of data migration in a database cluster provided above belong to the same conception; the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.

The foregoing is merely a preferred exemplary embodiment of the embodiments of the present invention and is not intended to limit their implementation; a person of ordinary skill in the art can readily make corresponding variations or modifications according to the main conception and spirit of the embodiments of the present invention, and the protection scope of the embodiments shall therefore be subject to the protection scope claimed by the claims.
Claims (12)
- A method for data migration in a database cluster, the database cluster including a coordination node, a source data node, and a target data node, wherein the method includes: obtaining a snapshot of the source data node, and recording incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot; migrating the backed-up inventory data to the target data node; migrating the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notifying the source data node to perform a write-lock operation on the data shard to be migrated and migrating the unmigrated incremental data to the target data node; and, after the incremental data migration is complete, notifying the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.
- The method according to claim 1, wherein the recording of the incremental data in the data shard to be migrated according to the inventory data of the data shard backed up in the snapshot includes: based on the inventory data, receiving several write operations performed by a client on the data shard to be migrated; and generating several record files from the several write operations and recording the incremental data of the data shard to be migrated in the several record files; and correspondingly, the migrating of the recorded incremental data includes: iteratively migrating the incremental data by switching among the several record files.
- The method according to claim 2, wherein the iteratively migrating of the incremental data by switching among the several record files includes: taking the incremental-data end position recorded during the previous iteration as the incremental-data start position of the current iteration, and switching to the corresponding record file according to the start position of the current iteration; obtaining the incremental data of the current iteration from the record file, and recording the incremental-data end position of the current iteration; and migrating the obtained incremental data to the target data node.
- The method according to claim 3, wherein the iteratively migrating of the incremental data by switching among the several record files further includes: judging whether the data amount of the incremental data migrated in the current iteration, or the migration time of that incremental data, is no greater than a preset threshold; if so, judging that the unmigrated incremental data satisfies the preset write-lock condition; otherwise, continuing the iterative migration of the incremental data.
- The method according to claim 1, wherein the method further includes: when the route switch corresponding to the data shard to be migrated is complete, notifying the source data node to perform an unlock operation on the data shard to be migrated and stopping the recording of the incremental data in the data shard to be migrated.
- A method for data migration in a database cluster, applied to a server, wherein the database cluster includes a coordination node, a source data node, and a target data node, and the method includes: obtaining a snapshot of the source data node, and recording incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot, wherein the data shard to be migrated is located on the source data node; migrating the backed-up inventory data to the target data node; migrating the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notifying the source data node to perform a write-lock operation on the data shard to be migrated and migrating the unmigrated incremental data to the target data node; and, after the incremental data migration is complete, notifying the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.
- An apparatus for data migration in a database cluster, the database cluster including a coordination node, a source data node, and a target data node, wherein the apparatus includes: one or more memories; and one or more processors; wherein the one or more memories store one or more instruction modules configured to be executed by the one or more processors, and the one or more instruction modules include: an incremental data recording module, configured to obtain a snapshot of the source data node and record incremental data in a data shard to be migrated according to the inventory data of the data shard backed up in the snapshot, wherein the data shard to be migrated is located on the source data node; an inventory data migration module, configured to migrate the backed-up inventory data to the target data node; an incremental data migration module, configured to migrate the recorded incremental data and, during the migration, when unmigrated incremental data satisfies a preset write-lock condition, notify the source data node to perform a write-lock operation on the data shard to be migrated and migrate the unmigrated incremental data to the target data node; and a route switching module, configured to notify, after the incremental data migration is complete, the coordination node to switch the route corresponding to the data shard to be migrated from the source data node to the target data node.
- The apparatus according to claim 7, wherein the incremental data recording module includes: a write operation receiving unit, configured to receive, based on the inventory data, several write operations performed by a client on the data shard to be migrated; and a record file generating unit, configured to generate several record files from the several write operations and record the incremental data of the data shard to be migrated in the several record files; and correspondingly, the incremental data migration module includes: an iterative migration unit, configured to iteratively migrate the incremental data by switching among the several record files.
- The apparatus according to claim 8, wherein the iterative migration unit includes: a record file obtaining unit, configured to take the incremental-data end position recorded during the previous iteration as the incremental-data start position of the current iteration and switch to the corresponding record file according to the start position of the current iteration; an incremental data obtaining unit, configured to obtain the incremental data of the current iteration from the record file and record the incremental-data end position of the current iteration; and a migration unit, configured to migrate the obtained incremental data to the target data node.
- The apparatus according to claim 9, wherein the iterative migration unit further includes: a judging unit, configured to judge whether the data amount of the incremental data migrated in the current iteration, or the migration time of that incremental data, is no greater than a preset threshold.
- The apparatus according to claim 7, wherein the apparatus further includes: an unlocking module, configured to notify, when the route switch corresponding to the data shard to be migrated is complete, the source data node to perform an unlock operation on the data shard to be migrated and stop recording the incremental data in the data shard to be migrated.
- A non-transitory computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/276,168 US11243922B2 (en) | 2016-12-01 | 2019-02-14 | Method, apparatus, and storage medium for migrating data node in database cluster |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611090677.4A CN108132949B (zh) | 2016-12-01 | 2016-12-01 | Method and apparatus for data migration in a database cluster |
CN201611090677.4 | 2016-12-01 | 2016-12-01 | |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/276,168 Continuation US11243922B2 (en) | 2016-12-01 | 2019-02-14 | Method, apparatus, and storage medium for migrating data node in database cluster |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018099397A1 (zh) | 2018-06-07 |
Family
ID=62241307
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/113563 WO2018099397A1 (zh) | Method, apparatus and storage medium for data migration in a database cluster | 2016-12-01 | 2017-11-29 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11243922B2 (zh) |
CN (1) | CN108132949B (zh) |
WO (1) | WO2018099397A1 (zh) |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10503714B2 (en) * | 2017-06-02 | 2019-12-10 | Facebook, Inc. | Data placement and sharding |
US11120082B2 (en) | 2018-04-18 | 2021-09-14 | Oracle International Corporation | Efficient, in-memory, relational representation for heterogeneous graphs |
CN110196880B (zh) * | 2018-06-08 | 2023-05-12 | 腾讯科技(深圳)有限公司 | 异构数据库数据同步方法和装置、存储介质及电子装置 |
CN110858194A (zh) * | 2018-08-16 | 2020-03-03 | 北京京东尚科信息技术有限公司 | 一种数据库扩容的方法和装置 |
CN109254960B (zh) * | 2018-08-24 | 2020-11-13 | 中国人民银行清算总中心 | 一种数据库海量数据的迁移方法及装置 |
CN111078121B (zh) * | 2018-10-18 | 2024-08-20 | 深信服科技股份有限公司 | 一种分布式存储系统数据迁移方法、系统、及相关组件 |
CN111475483B (zh) * | 2019-01-24 | 2023-05-05 | 阿里巴巴集团控股有限公司 | 数据库迁移方法、装置及计算设备 |
CN109819048B (zh) * | 2019-02-27 | 2022-03-15 | 北京字节跳动网络技术有限公司 | 数据同步方法、装置、终端及存储介质 |
CN109933632B (zh) * | 2019-04-04 | 2021-04-27 | 杭州数梦工场科技有限公司 | 一种数据库的数据迁移方法、装置及设备 |
CN110333824B (zh) * | 2019-06-05 | 2022-10-25 | 腾讯科技(深圳)有限公司 | 一种存储系统的扩容方法和装置 |
CN110442558B (zh) * | 2019-07-30 | 2023-12-29 | 深信服科技股份有限公司 | 数据处理方法、分片服务器、存储介质及装置 |
WO2021046750A1 (zh) * | 2019-09-11 | 2021-03-18 | 华为技术有限公司 | 数据重分布方法、装置及系统 |
CN110688370A (zh) * | 2019-10-12 | 2020-01-14 | 新奥(中国)燃气投资有限公司 | 一种数据迁移的方法及装置 |
CN111400273B (zh) * | 2019-11-19 | 2024-02-02 | 杭州海康威视系统技术有限公司 | 数据库扩容方法、装置、电子设备及机器可读存储介质 |
CN111104404B (zh) * | 2019-12-04 | 2021-10-01 | 星辰天合(北京)数据科技有限公司 | 基于分布式对象的数据存储方法及装置 |
CN111078667B (zh) * | 2019-12-12 | 2023-03-10 | 腾讯科技(深圳)有限公司 | 一种数据迁移的方法以及相关装置 |
CN111143324B (zh) * | 2019-12-20 | 2023-05-02 | 浪潮软件股份有限公司 | 一种kudu的基于大小的数据库数据均衡系统及实现方法 |
CN111339061B (zh) * | 2020-02-12 | 2023-09-26 | 杭州涂鸦信息技术有限公司 | 一种分布式数据库的数据迁移方法及系统 |
CN111324596B (zh) * | 2020-03-06 | 2021-06-11 | 腾讯科技(深圳)有限公司 | 数据库集群的数据迁移方法、装置及电子设备 |
CN111459913B (zh) * | 2020-03-31 | 2023-06-23 | 北京金山云网络技术有限公司 | 分布式数据库的容量扩展方法、装置及电子设备 |
CN111638940A (zh) * | 2020-05-19 | 2020-09-08 | 无锡江南计算技术研究所 | 一种面向申威平台的容器热迁移方法 |
CN113760858B (zh) * | 2020-06-05 | 2024-03-19 | 中国移动通信集团湖北有限公司 | 内存库数据动态迁移方法、装置、计算设备及存储设备 |
US11487703B2 (en) * | 2020-06-10 | 2022-11-01 | Wandisco Inc. | Methods, devices and systems for migrating an active filesystem |
CN111708763B (zh) * | 2020-06-18 | 2023-12-01 | 北京金山云网络技术有限公司 | 分片集群的数据迁移方法、装置和分片集群系统 |
CN112131286B (zh) * | 2020-11-26 | 2021-03-02 | 畅捷通信息技术股份有限公司 | 一种基于时间序列的数据处理方法、装置及存储介质 |
CN112527777A (zh) * | 2020-12-18 | 2021-03-19 | 福建天晴数码有限公司 | 一种基于追日志的数据库扩展的方法及其装置 |
CN113051247A (zh) * | 2021-03-18 | 2021-06-29 | 福建星瑞格软件有限公司 | 一种基于日志同步的数据库迁移方法及系统 |
CN113239011B (zh) * | 2021-05-11 | 2024-10-18 | 京东科技控股股份有限公司 | 数据库的扩容方法、装置及系统 |
CN113392067A (zh) * | 2021-06-11 | 2021-09-14 | 北京金山云网络技术有限公司 | 一种针对分布式数据库的数据处理方法、装置及系统 |
CN113596153B (zh) * | 2021-07-28 | 2024-07-05 | 新华智云科技有限公司 | 一种数据均衡方法及系统 |
CN113468148B (zh) * | 2021-08-13 | 2023-02-17 | 上海浦东发展银行股份有限公司 | 一种数据库的数据迁移方法、装置、电子设备及其存储介质 |
CN114077602B (zh) * | 2022-01-13 | 2022-05-17 | 中兴通讯股份有限公司 | 数据迁移方法和装置、电子设备、存储介质 |
CN114979153B (zh) * | 2022-04-07 | 2023-10-27 | 浙江大华技术股份有限公司 | 负载均衡方法、计算机设备及存储装置 |
CN114925020B (zh) * | 2022-07-20 | 2022-10-25 | 中电云数智科技有限公司 | 一种基于数据增量写方式的快照版本数据迁移方法 |
CN115237892B (zh) * | 2022-09-20 | 2022-12-16 | 联通智网科技股份有限公司 | 数据迁移方法和装置 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8229945B2 (en) * | 2008-03-20 | 2012-07-24 | Schooner Information Technology, Inc. | Scalable database management software on a cluster of nodes using a shared-distributed flash memory |
US20120137367A1 (en) * | 2009-11-06 | 2012-05-31 | Cataphora, Inc. | Continuous anomaly detection based on behavior modeling and heterogeneous information analysis |
US10635316B2 (en) * | 2014-03-08 | 2020-04-28 | Diamanti, Inc. | Methods and systems for data storage using solid state drives |
US10417190B1 (en) * | 2014-09-25 | 2019-09-17 | Amazon Technologies, Inc. | Log-structured file system for zone block devices with small zones |
CN105528368B (zh) * | 2014-09-30 | 2019-03-12 | 北京金山云网络技术有限公司 | 一种数据库迁移方法及装置 |
US10210115B2 (en) * | 2015-06-02 | 2019-02-19 | Box, Inc. | System for handling event messages for file collaboration |
US11829253B2 (en) * | 2015-09-25 | 2023-11-28 | Mongodb, Inc. | Systems and methods for non-blocking backups |
CN105718570B (zh) * | 2016-01-20 | 2019-12-31 | 北京京东尚科信息技术有限公司 | 用于数据库的数据迁移方法和装置 |
CN105472045A (zh) * | 2016-01-26 | 2016-04-06 | 北京百度网讯科技有限公司 | 数据库迁移的方法和装置 |
US10671496B2 (en) * | 2016-05-31 | 2020-06-02 | Mongodb, Inc. | Method and apparatus for reading and writing committed data |
US10362092B1 (en) * | 2016-10-14 | 2019-07-23 | Nutanix, Inc. | Entity management in distributed systems |
- 2016-12-01: CN application CN201611090677.4A filed (patent CN108132949B, zh), status Active
- 2017-11-29: PCT application PCT/CN2017/113563 filed (publication WO2018099397A1, zh), status Application Filing
- 2019-02-14: US application US16/276,168 filed (patent US11243922B2, en), status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101079902A (zh) * | 2007-06-29 | 2007-11-28 | 清华大学 | 海量数据分级存储方法 |
CN103067433A (zh) * | 2011-10-24 | 2013-04-24 | 阿里巴巴集团控股有限公司 | 一种分布式存储系统的数据迁移方法、设备和系统 |
CN103294675A (zh) * | 2012-02-23 | 2013-09-11 | 上海盛霄云计算技术有限公司 | 一种分布式存储系统中的数据更新方法及装置 |
WO2015014152A1 (zh) * | 2013-07-31 | 2015-02-05 | 华为技术有限公司 | 一种访问共享内存的方法和装置 |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110209654A (zh) * | 2019-06-05 | 2019-09-06 | 深圳市网心科技有限公司 | 一种文本文件数据入库方法、系统及电子设备和存储介质 |
CN110909062A (zh) * | 2019-11-29 | 2020-03-24 | 迈普通信技术股份有限公司 | 数据处理方法、装置、电子设备及可读存储介质 |
CN113126884A (zh) * | 2019-12-30 | 2021-07-16 | 阿里巴巴集团控股有限公司 | 数据迁移方法、装置、电子设备及计算机存储介质 |
CN113126884B (zh) * | 2019-12-30 | 2024-05-03 | 阿里巴巴集团控股有限公司 | 数据迁移方法、装置、电子设备及计算机存储介质 |
CN111241068A (zh) * | 2020-01-14 | 2020-06-05 | 阿里巴巴集团控股有限公司 | 信息处理方法、装置及设备、计算机可读存储介质 |
CN111241068B (zh) * | 2020-01-14 | 2023-04-07 | 阿里巴巴集团控股有限公司 | 信息处理方法、装置及设备、计算机可读存储介质 |
CN111680019B (zh) * | 2020-04-29 | 2023-11-24 | 杭州趣链科技有限公司 | 一种区块链的数据扩容方法及其装置 |
US20230087447A1 (en) * | 2020-05-29 | 2023-03-23 | Alibaba Group Holding Limited | Data migration method and device |
CN111782633A (zh) * | 2020-06-29 | 2020-10-16 | 北京百度网讯科技有限公司 | 数据处理方法、装置及电子设备 |
CN111782633B (zh) * | 2020-06-29 | 2024-04-30 | 北京百度网讯科技有限公司 | 数据处理方法、装置及电子设备 |
WO2022048622A1 (zh) * | 2020-09-04 | 2022-03-10 | 阿里云计算有限公司 | 数据迁移方法、装置、设备、分布式系统及存储介质 |
CN113360479B (zh) * | 2021-06-29 | 2023-10-20 | 深圳市天汇世纪科技有限公司 | 数据迁移方法、装置、计算机设备和存储介质 |
CN113360479A (zh) * | 2021-06-29 | 2021-09-07 | 平安普惠企业管理有限公司 | 数据迁移方法、装置、计算机设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US11243922B2 (en) | 2022-02-08 |
CN108132949B (zh) | 2021-02-12 |
CN108132949A (zh) | 2018-06-08 |
US20190179808A1 (en) | 2019-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018099397A1 (zh) | 数据库集群中数据迁移的方法、装置及存储介质 | |
US10572454B2 (en) | Storage method and apparatus for distributed file system | |
US10296494B2 (en) | Managing a global namespace for a distributed filesystem | |
US10838829B2 (en) | Method and apparatus for loading data from a mirror server and a non-transitory computer readable storage medium | |
US11693789B2 (en) | System and method for mapping objects to regions | |
US20240045598A1 (en) | Cloud object storage and versioning system | |
US10289496B1 (en) | Parallel proxy backup methodology | |
US20140006357A1 (en) | Restoring an archived file in a distributed filesystem | |
US10372547B1 (en) | Recovery-chain based retention for multi-tier data storage auto migration system | |
US9984139B1 (en) | Publish session framework for datastore operation records | |
US9571584B2 (en) | Method for resuming process and information processing system | |
US10298709B1 (en) | Performance of Hadoop distributed file system operations in a non-native operating system | |
WO2015054998A1 (zh) | 一种在线重建索引的方法和装置 | |
CN111386521B (zh) | 在数据库集群中重分布表数据 | |
TW201738781A (zh) | 資料表連接方法及裝置 | |
US12050603B2 (en) | Opportunistic cloud data platform pipeline scheduler | |
US10031777B2 (en) | Method and system for scheduling virtual machines in integrated virtual machine clusters | |
US20240134761A1 (en) | Application recovery configuration validation | |
US11442663B2 (en) | Managing configuration data | |
CN116594551A (zh) | 一种数据存储方法及装置 | |
US10255139B2 (en) | Synchronized backup and recovery of heterogeneous DBMSs using third party backup tool | |
WO2020207078A1 (zh) | 数据处理方法、装置和分布式数据库系统 | |
Krogh et al. | Configuration | |
Karyakin | Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases | |
CN117891612A (zh) | 快照克隆方法、装置及存储介质 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17876061; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 17876061; Country of ref document: EP; Kind code of ref document: A1 |