CN117493303A - Data migration method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN117493303A
Authority
CN
China
Prior art keywords
transaction data
database
incremental
data
target database
Prior art date
Legal status
Pending
Application number
CN202311325674.4A
Other languages
Chinese (zh)
Inventor
林培晖
周国晶
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202311325674.4A
Publication of CN117493303A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/214 Database migration support
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide a data migration method and device, an electronic device, and a storage medium, applicable to the technical field of data processing. The data migration method may include: receiving transaction data; writing the transaction data into both a MySQL database and a target database; and matching the incremental transaction data in the MySQL database with that in the target database, where the incremental transaction data represents the newly written transaction data. If the matching results are consistent, subsequently received transaction data is written into the target database. By applying the data migration method, device, electronic device, and storage medium provided by the embodiments of the invention, the stability of business services can be improved.

Description

Data migration method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data migration method, a data migration device, an electronic device, and a storage medium.
Background
MySQL (a relational database management system) is limited by single-machine disk space. When a large amount of data is stored in a standalone MySQL database, for example when a single table exceeds 5 million rows or its capacity exceeds 2GB, the time needed to execute a single SQL (Structured Query Language) statement grows. In high-concurrency scenarios, the more SQL statements there are to execute, the longer the waiting time becomes; in other words, the MySQL database hits a read/write performance bottleneck and can hardly provide stable service to the business side.
In this situation, data is typically migrated to another database with better performance, which then replaces the MySQL database in serving the business side. In the related art, however, the service must be suspended to complete the data migration, which interrupts the business service.
Disclosure of Invention
The embodiment of the invention aims to provide a data migration method, a data migration device, electronic equipment and a storage medium, so as to improve the stability of business services. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided a data migration method, the method comprising:
receiving transaction data;
writing the transaction data into a MySQL database and a target database;
matching the incremental transaction data in the MySQL database with that in the target database, and, if the matching results are consistent, writing subsequently received transaction data into the target database; the incremental transaction data represents newly written transaction data.
Optionally, the writing the transaction data into the MySQL database and the target database includes:
writing transaction data into a MySQL database based on a transaction data form for storing the transaction data in the MySQL database;
And writing the transaction data into a target database by taking an identification which is uniquely corresponding to the transaction data and has random attribute as a primary key.
Optionally, taking the incremental transaction data written in the MySQL database as first incremental transaction data and the incremental transaction data written in the target database as second incremental transaction data;
the matching the incremental transaction data in the MySQL database and the target database includes:
for each identifier in the identifiers which uniquely corresponds to the first incremental transaction data and has random attribute, determining second target incremental transaction data which respectively corresponds to each identifier in the second incremental transaction data, and taking the first incremental transaction data and the second target incremental transaction data which are determined by the same identifier as a group of data to be verified;
and judging whether the transaction data to be verified are matched.
Optionally, after said matching of incremental transaction data in said MySQL database and said target database, said method further comprises:
if the matching results are inconsistent, modifying all information included in the second target incremental transaction data based on all information included in the first incremental transaction data in the transaction data to be verified, which are inconsistent in the matching results, aiming at the transaction data to be verified; the second target incremental transaction data includes information that matches the information included in the first incremental transaction data after modification.
Optionally, before writing the transaction data received subsequently into the target database when the matching results are consistent, the method further includes:
acquiring historical transaction data; the historical transaction data comprises transaction data stored in the MySQL database before the transaction data is written into the MySQL database and the target database;
writing the historical transaction data into the target database.
Optionally, the writing the historical transaction data into the target database includes:
creating a transaction data form for storing transaction data in the target database; the structure of the transaction data form is the same as that of the transaction data form used for storing transaction data in the MySQL database;
deleting the index field with the incremental attribute from the transaction data form to obtain a transaction data form adapted to the target database;
based on the structure of a transaction data form adapted to the target database, writing the historical transaction data into the target database by taking an identification which uniquely corresponds to the transaction data and has a random attribute as a primary key.
Optionally, after the writing the historical transaction data to the target database, the method further comprises:
judging whether all information included in the historical transaction data with the same identifier in the MySQL database and the target database is matched or not according to each identifier which is uniquely corresponding to the historical transaction data and has random attribute;
if the matching results are consistent, writing the transaction data received subsequently into a target database when the transaction data is received subsequently, wherein the method comprises the following steps:
and if the matching result of the incremental transaction data is consistent and the matching result of the historical transaction data is consistent, writing the subsequently received transaction data into a target database when the transaction data is subsequently received.
Optionally, after said matching of incremental transaction data in said MySQL database and said target database, said method further comprises:
and if the matching results are consistent, reading the transaction data from the target database when the transaction end requests to read the transaction data.
In a second aspect of the present invention, there is provided a data migration apparatus, the apparatus comprising:
The receiving module is used for receiving transaction data;
the first writing module is used for writing the transaction data into a MySQL database and a target database;
the first matching module is used for matching the MySQL database with the incremental transaction data in the target database; the incremental transaction data representing newly written transaction data;
and the first switching module is used for writing the transaction data received subsequently into the target database when the transaction data is received subsequently if the matching results are consistent.
Optionally, the first writing module is specifically configured to write the transaction data into the MySQL database based on a transaction data table that stores the transaction data in the MySQL database; and writing the transaction data into a target database by taking an identification which is uniquely corresponding to the transaction data and has random attribute as a primary key.
Optionally, taking the incremental transaction data written in the MySQL database as first incremental transaction data and the incremental transaction data written in the target database as second incremental transaction data;
the first matching module is specifically configured to determine, for each identifier of the identifiers that uniquely corresponds to the first incremental transaction data and has a random attribute, second target incremental transaction data that respectively corresponds to each identifier in the second incremental transaction data, and use the first incremental transaction data and the second target incremental transaction data that are determined by the same identifier as a set of data to be verified; and judging whether the transaction data to be verified are matched.
Optionally, the apparatus further comprises,
the modification module is used for modifying various information included in the second target incremental transaction data based on various information included in the first incremental transaction data in the transaction data to be verified, which are inconsistent in the matching result, aiming at the transaction data to be verified, which are inconsistent in the matching result, after the incremental transaction data in the MySQL database and the target database are matched; the second target incremental transaction data includes information that matches the information included in the first incremental transaction data after modification.
Optionally, the apparatus further comprises:
the second writing module is used for acquiring historical transaction data before writing the transaction data received subsequently into the target database when the matching results are consistent and the transaction data received subsequently are received subsequently; the historical transaction data comprises transaction data stored in MySQL data before the transaction data is written into a MySQL database and a target database; writing the historical transaction data into the target database.
Optionally, the second writing module is specifically configured to create, in the target database, a transaction data table for storing transaction data; the structure of the transaction data form is the same as that of the transaction data form used for storing transaction data in the MySQL database; deleting the index field with the incremental attribute from the transaction data form to obtain a transaction data form adapted to the target database; based on the structure of a transaction data form adapted to the target database, writing the historical transaction data into the target database by taking an identification which uniquely corresponds to the transaction data and has a random attribute as a primary key.
Optionally, the apparatus further comprises,
the second matching module is used for judging whether all information included in the historical transaction data with the same identifier is matched in the MySQL database and the target database according to each identifier which is uniquely corresponding to the historical transaction data and has random attribute;
the first switching module is specifically configured to write the transaction data received subsequently into the target database when the transaction data is received subsequently if the matching result of the incremental transaction data is consistent and the matching result of the historical transaction data is consistent.
Optionally, the apparatus further comprises,
and the second switching module is used for reading the transaction data from the target database when the transaction end requests to read the transaction data if the matching results are consistent after the incremental transaction data in the MySQL database and the target database are matched.
In a third aspect of the present invention, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
The memory is used for storing a computer program;
the processor is configured to implement the data migration method described in the first aspect when executing the program stored in the memory.
In a fourth aspect of the present invention, there is provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the data migration method of the first aspect.
In a further aspect of the present invention there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the data migration method of the first aspect described above.
The data migration method provided by the embodiments of the invention includes: receiving transaction data; writing the transaction data into a MySQL database and a target database; and matching the incremental transaction data in the MySQL database with that in the target database, and, if the matching results are consistent, writing subsequently received transaction data into the target database; the incremental transaction data represents the newly written transaction data.
When transaction data is received, it is written into the MySQL database and the target database at the same time, and the incremental transaction data in the two databases is compared. When the comparison results are consistent, the data written into the target database is considered trustworthy, and the target database can then replace the MySQL database to provide service. The embodiments of the invention can thus migrate data online without suspending the service, improving the stability of the business service.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a data migration method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an application data migration method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data migration apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
As the business has grown, the volume of transaction data stored in the MySQL database has become enormous. For example, an idempotent table of membership transactions paid through iOS (the mobile operating system developed by Apple) is stored in MySQL, and its single-table data volume has already reached 200GB.
However, because single-machine capacity, i.e., disk space, is limited, the read/write performance of the MySQL database degrades once a single table exceeds 5 million rows or its capacity exceeds 2GB. For example, when a query misses the index, a full-table scan is performed, which prolongs the execution time of a single query; in high-concurrency scenarios the waiting time for SQL execution therefore grows, the database hits a performance bottleneck, and the stability of the business service is affected.
In view of the foregoing problems, as shown in fig. 1, an embodiment of the present invention provides a data migration method, where the method may include:
s101, receiving transaction data.
S102, writing transaction data into a MySQL database and a target database.
S103, matching the incremental transaction data in the MySQL database with that in the target database, and, if the matching results are consistent, writing subsequently received transaction data into the target database;
wherein the incremental transaction data represents the newly written transaction data.
According to the data migration method provided by the embodiment of the invention, when transaction data are received, the data are written into the MySQL database and the target database at the same time, incremental transaction data in the two databases are matched, and when the matching results are consistent, the data written into the target database are considered to be credible, and then the target database can be utilized to replace the MySQL database to provide service. The embodiment of the invention can realize the online data migration without suspending the service, thereby improving the stability of the service.
Referring to fig. 1, a data migration method according to an embodiment of the present invention will be described in detail.
S101, receiving transaction data.
The transaction data may include transaction data generated in real-time by the business side.
S102, writing transaction data into a MySQL database and a target database.
MySQL is a relational database management system that uses a tabular model to store data, which is organized and stored in the form of tables.
In a high-concurrency scenario with a large data volume, if the business side continues to use the MySQL database, the database generally has to be split into multiple databases and tables (sharding), scattering the data across multiple database nodes or tables so that data read/write requests change from "reading and writing a single primary database" to "reading and writing multiple sub-databases". This keeps read/write performance acceptable under high concurrency, so the business side can still provide stable service with the sharded MySQL database. However, sharding is difficult to develop and maintain and is error-prone, so developers avoid it unless it is truly necessary; yet without sharding, data read/write performance inevitably degrades and the stability of the business service suffers.
A distributed database adopts a distributed transaction model, supports distributed deployment, scales out easily, and performs load balancing automatically. It therefore achieves better performance and availability under high concurrency, along with better scalability and higher fault tolerance.
Since a distributed database shards its data by itself, in an embodiment of the present invention the target database may be a distributed database. After the data stored in the MySQL database is migrated to the distributed database, the data is dispersed across multiple database nodes or tables without manual sharding, so that data reads and writes change from targeting a single primary database to targeting multiple sub-databases. Using a distributed database thus avoids having to shard the MySQL database to maintain read/write performance under high concurrency, provides stable read/write support to the business side, and improves the stability of the business service.
In the initial state, or before data migration, the service side provides a read-write service for the transaction data by using the MySQL database, so it can be understood that the MySQL database may include a transaction data form for storing the transaction data, and further when the transaction data is written into the MySQL database, the transaction data may be directly written into the MySQL database based on the structure of the transaction data form.
When writing transaction data into the target database, in particular a distributed database, note that rows in a table of a distributed database are ordered by the byte order of their primary keys, and consecutive rows are very likely to be stored on the same machine. For example, rows with primary keys 100, 101, ..., 110 will most likely end up on the same machine.
If transaction data were written into the distributed database the same way it is written into the MySQL database, that is, with an auto-increment serial number as the primary key, the writes would be concentrated at the tail of a few tables, or even a single table, because the primary key keeps increasing. This forms a hotspot in the tail region of the table that is not spread to other machines: although the data is written into a distributed system, the write pressure is concentrated on a few nodes or even a single node rather than being distributed across all nodes, so the advantages of distributed reads and writes are not fully exploited.
Therefore, when writing transaction data into the target database, specifically, the distributed database, an identification (hereinafter simply referred to as an identification corresponding to transaction data) which uniquely corresponds to the transaction data and has a random attribute may be used as a primary key, and the transaction data may be written into the target database.
The random attribute of the identifier may mean that all of its digits are random, e.g., "3713143362849348", "5573284409728775", "4393295715559727", etc., where all 16 digits are random. It may also mean that only part of the digits are random, i.e., the identifier consists of a fixed part and a random part, e.g., "1000000044311813", "1000000076267123", "1000000099811539", etc., where the first 8 digits "10000000" are fixed and the last 8 digits are random. The embodiments of the invention do not limit the form of the identifier that uniquely corresponds to the transaction data and has a random attribute.
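As a minimal illustration only (the invention does not limit the identifier scheme), the following Python sketch generates a 16-digit identifier whose last 8 digits are random, matching the second example above; the function name and the fixed prefix are assumptions made for this example:

    import secrets

    def generate_transaction_id(prefix: str = "10000000") -> str:
        # 16-digit identifier: a fixed 8-digit prefix followed by 8 random digits.
        random_part = "".join(str(secrets.randbelow(10)) for _ in range(8))
        return prefix + random_part

    print(generate_transaction_id())  # e.g. "1000000076267123"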
In order to avoid repeated notification of the same transaction process by the service end and generation of repeated transaction data, idempotent processing can be performed on the transaction process, so that when the same operation is repeatedly executed, the generated result is the same as the result when the operation is executed for the first time, and the correctness and reliability of the generated transaction data are ensured.
After the transaction process is made idempotent, the idempotent table stores the identifier that uniquely corresponds to each transaction and has a random attribute. This identifier stored in the idempotent table can therefore serve as the primary key under which the transaction data is stored, and the transaction data can be written into the target database accordingly. The identifier may be the transaction's unique order number, which is generated randomly.
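A minimal sketch of such an idempotent write, assuming a table whose primary key is the unique, randomly generated order number, so that a repeated notification for the same order does not create a second row; the table name, column names, and connection parameters are illustrative assumptions, not taken from the invention:

    import pymysql  # any MySQL-protocol client works; TiDB speaks the same protocol

    def write_transaction_idempotently(conn, order_code, user_id, item_name, price):
        # order_code is the primary key, so INSERT IGNORE turns a repeated
        # notification for the same order into a no-op instead of a duplicate row.
        sql = ("INSERT IGNORE INTO transaction_data "
               "(order_code, user_id, item_name, price) VALUES (%s, %s, %s, %s)")
        with conn.cursor() as cur:
            cur.execute(sql, (order_code, user_id, item_name, price))
        conn.commit()

    conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", database="trade")
    write_transaction_idempotently(conn, "1000000076267123", "001", "X", 10)

INSERT ... ON DUPLICATE KEY UPDATE could be used instead if a repeated notification should refresh the existing row rather than be ignored.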
The transaction data may also contain other information with an incremental attribute, for example time-related information such as a timestamp or the update time (update_time). If such time-related information is used as an index field when writing data into the target database, the transaction data is again inserted with a monotonically increasing index, which likewise causes a region hotspot at the tail of the index and reduces the throughput of data writing.
Thus, when writing transaction data to the target database, the index field having the incremental attribute may be deleted and the transaction data may be written to the target database with the identification uniquely corresponding to the transaction data and having the random attribute as the primary key.
In one implementation, the distributed database may be a TiDB database (a distributed SQL database system), i.e., the target database may be a TiDB database. TiDB supports the MySQL wire protocol and most of MySQL's syntax, so existing applications can continue to be used after the MySQL data is migrated to TiDB. Moreover, scaling TiDB out is imperceptible to the business side: when the data volume becomes too large, TiDB nodes can simply be added, which is easy to operate.
Writing transaction data into the MySQL database and the target database can comprise writing the newly generated transaction data into the MySQL database and the TiDB database aiming at the transaction data newly generated by the service end. This process may be referred to as data double writing.
In one implementation, database double-write logic may be developed and deployed in advance to write transaction data to the MySQL database and the target database simultaneously. When data migration is executed, this double-write logic can be brought online or invoked directly, so that both databases provide the data write service at the same time, laying the groundwork for the target database to take over the write service from the MySQL database.
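A minimal sketch of such double-write logic, assuming both databases are reached over the MySQL protocol (as TiDB allows) and that a switch controls whether the write to the target database is enabled; the switch, SQL, and identifier names are assumptions for illustration:

    import logging

    TIDB_WRITE_SWITCH = True  # turn off to stop target-database writes (gray release / rollback)

    INSERT_SQL = ("INSERT IGNORE INTO transaction_data "
                  "(order_code, user_id, item_name, price) VALUES (%s, %s, %s, %s)")

    def double_write(mysql_conn, target_conn, row):
        # MySQL remains the source of truth during migration, so its write comes first.
        with mysql_conn.cursor() as cur:
            cur.execute(INSERT_SQL, row)
        mysql_conn.commit()

        # The target-database write is switch-controlled and treated as best-effort
        # while its trustworthiness is still being verified.
        if TIDB_WRITE_SWITCH:
            try:
                with target_conn.cursor() as cur:
                    cur.execute(INSERT_SQL, row)
                target_conn.commit()
            except Exception:
                logging.exception("target write failed, row=%s", row)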
S103, matching the incremental transaction data in the MySQL database with that in the target database, and, if the matching results are consistent, writing subsequently received transaction data into the target database;
wherein the incremental transaction data represents the newly written transaction data.
After the database double-write logic goes online, the transaction data newly written into the MySQL database can be regarded as the incremental transaction data of the MySQL database. Before data migration, the MySQL database already interfaces with the business side to provide read/write service, and bringing the double-write logic online does not interrupt the writing of transaction data into the MySQL database; since that write path is continuous, the incremental transaction data written into the MySQL database can be considered trustworthy, free of loss, corruption, and similar problems.
However, although writing to the MySQL database is continuous, writing to the target database starts abruptly, and the double-write logic may not work perfectly the moment it goes online. Due to factors such as program response delays and network fluctuations, writes to the target database may be unstable shortly after the double-write logic goes online, i.e., some transaction data may fail to be written to the target database. Consequently, for the transaction data newly written into the target database during this period, i.e., the target database's incremental transaction data for this period, it cannot be determined whether the data is trustworthy, so the trustworthiness of this incremental transaction data in the target database needs to be verified.
In one implementation, the incremental transaction data written in the MySQL database may be used as the first incremental transaction data, and the incremental transaction data written in the target database may be used as the second incremental transaction data; wherein the first incremental transaction data may be considered to be trusted incremental transaction data and the second incremental transaction data is to be verified for trustworthiness.
And matching the incremental transaction data in the MySQL database and the target database, namely matching the first incremental transaction data with the second incremental transaction data to determine whether the second incremental transaction data is credible. This process can also be seen as verifying the consistency of the transaction data in the two databases.
Because the first incremental transaction data is trusted, the embodiment of the invention can determine the second target incremental transaction data corresponding to each identifier in the second incremental transaction data aiming at each identifier which is uniquely corresponding to the first incremental transaction data and has random attribute, and takes the first incremental transaction data and the second target incremental transaction data determined by the same identifier as a group of data to be verified; and judging whether the second increment transaction data is credible or not by judging whether the data to be verified are matched or not.
The process of determining whether the second incremental transaction data is authentic may be performed on a piece-by-piece basis, i.e., after determining whether there is a match for one set of data to be verified, a determination is performed for the next set of data to be verified.
For a certain identifier in the identifiers corresponding to the first incremental transaction data, second target incremental transaction data corresponding to the identifier can be searched in the second incremental transaction data. In one implementation manner, searching the second incremental transaction data corresponding to the identifier in the second incremental transaction data may include determining whether the identifier exists in the second incremental transaction data, and determining the second target incremental transaction data corresponding to the identifier based on the identifier if the identifier exists; if not, the matching results are not consistent.
When the identifier exists in the second incremental transaction data, the second target incremental transaction data corresponding to the identifier is acquired in the second incremental transaction data based on the identifier, and specifically, various pieces of information included in the second target incremental transaction data, such as information of a user ID, a commodity name, a price, and the like, may be acquired. And further determining whether the first incremental transaction data is matched with the second incremental transaction data by judging whether various pieces of information included in the first incremental transaction data are matched with various pieces of information included in the second target incremental transaction data.
If the information which is not matched with the information exists in each item of information, the first incremental transaction data and the second target incremental transaction data are considered to be inconsistent, namely the matching result of the group of transaction data to be verified is inconsistent. At this time, the first incremental transaction data included in the set of transaction data to be verified, that is, the transaction data acquired from the MySQL database, may be used to modify the second target incremental transaction data included in the set of transaction data to be verified, that is, the transaction data acquired from the target database.
Specifically, the various information included in the second target incremental transaction data may be modified corresponding to the various information included in the first incremental transaction data, so that the various information included in the second target incremental transaction data matches the various information included in the first incremental transaction data after modification.
For example, for a certain identifier, the first incremental transaction data obtained from the MySQL database includes user ID: 001, commodity name: X, price: 10; while the second target incremental transaction data obtained from the target database includes user ID: 001, commodity name: X, price: 1. Since the first incremental transaction data is trusted, the price in the second target incremental transaction data is known to be wrong and the matching result is inconsistent. The price in the second target incremental transaction data is therefore modified based on the price in the first incremental transaction data, yielding modified second target incremental transaction data whose items are user ID: 001, commodity name: X, price: 10, so that after modification the second target incremental transaction data matches the first incremental transaction data.
If all the information is matched, the matching result of the group of transaction data to be verified can be considered to be consistent, and at the moment, the next group of transaction data to be verified, namely the first incremental transaction data corresponding to the next identifier and the second target incremental transaction data can be matched in consistency.
It will be appreciated that the modification process of the transaction data described above is based on modification of the second incremental transaction data in the target database by the first incremental transaction data in the MySQL database, such that the modified second incremental transaction data matches the first incremental transaction data. This process may also be referred to as a synchronization process of incremental transaction data.
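A minimal sketch of this per-identifier comparison and repair, assuming both databases are reachable over the MySQL protocol and the rows are keyed by the order number; the helper names and column list are illustrative assumptions:

    FIELDS = ("order_code", "user_id", "item_name", "price")
    SELECT_SQL = ("SELECT order_code, user_id, item_name, price "
                  "FROM transaction_data WHERE order_code = %s")

    def fetch_row(conn, order_code):
        # Fetch one transaction row as a dict, or None if it is absent.
        with conn.cursor() as cur:
            cur.execute(SELECT_SQL, (order_code,))
            row = cur.fetchone()
        return dict(zip(FIELDS, row)) if row else None

    def verify_and_repair(mysql_conn, target_conn, order_code):
        # order_code comes from the MySQL (trusted) side, so the source row exists.
        source = fetch_row(mysql_conn, order_code)
        target = fetch_row(target_conn, order_code)
        if source == target:
            return True  # consistent
        # Missing or mismatched: rewrite the target row from the trusted MySQL row.
        with target_conn.cursor() as cur:
            cur.execute("REPLACE INTO transaction_data "
                        "(order_code, user_id, item_name, price) VALUES (%s, %s, %s, %s)",
                        tuple(source[f] for f in FIELDS))
        target_conn.commit()
        return False  # inconsistency found and repaired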
In one implementation, while synchronizing the incremental transaction data, the error rate of writes to the target database, i.e., of the second incremental transaction data, can be counted, that is, the proportion of transaction data to be verified whose matching results are inconsistent. This gauges the write quality of the target database: the lower the error rate, the better the writes to the target database are generally considered to be; the higher the error rate, the worse.
In one implementation, a switch can be added for writing transaction data into the target database, i.e., the switch turns writes to the target database on and off. When the error rate of writes to the target database is high, those writes can be stopped in time, which also facilitates traffic gray release and rollback.
The identifier may also be absent from the second incremental transaction data, in which case the matching result is likewise inconsistent; modifying the second target incremental transaction data based on the first incremental transaction data then consists of writing the first incremental transaction data corresponding to that identifier into the target database with the identifier as the primary key.
When the matching results of the incremental transaction data in the MySQL database and the target database are consistent, the second incremental transaction data is considered to be credible, and normal read-write service can be provided for the service end by using the target database.
At this point, writing and reading transaction data through the MySQL database can be stopped, and the target database takes over the transaction data read/write service that the MySQL database used to provide to the business side. That is, when new transaction data is subsequently received, it is written into the target database; and when the business side requests to read transaction data, the data is read from the target database instead of the MySQL database, completing the switch of the database read/write service.
In one implementation, switching of transaction data write services may be achieved by closing write logic for MySQL database. Switching of transaction data reading services may be achieved by closing the read logic for the MySQL database.
In one implementation, the transaction data reading service may be switched first and then the transaction data writing service may be switched if the matching results are consistent.
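A minimal sketch of switch-controlled read and write paths, so that the read service can be switched before the write service; the flag names, SQL, and schema are assumptions for illustration:

    READ_FROM_TARGET = False   # flip to True to switch the read service to the target database
    DOUBLE_WRITE = True        # flip to False to stop writing to MySQL (write-service switch)

    INSERT_SQL = ("INSERT IGNORE INTO transaction_data "
                  "(order_code, user_id, item_name, price) VALUES (%s, %s, %s, %s)")
    SELECT_SQL = ("SELECT order_code, user_id, item_name, price "
                  "FROM transaction_data WHERE order_code = %s")

    def read_transaction(mysql_conn, target_conn, order_code):
        # Reads come from the target database once the read switch is flipped.
        conn = target_conn if READ_FROM_TARGET else mysql_conn
        with conn.cursor() as cur:
            cur.execute(SELECT_SQL, (order_code,))
            return cur.fetchone()

    def write_transaction(mysql_conn, target_conn, row):
        # Double-write phase writes to both databases; afterwards only the target is written.
        conns = [mysql_conn, target_conn] if DOUBLE_WRITE else [target_conn]
        for conn in conns:
            with conn.cursor() as cur:
                cur.execute(INSERT_SQL, row)
            conn.commit()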
Meanwhile, the switch of the data read service can be carried out service by service in order of importance, and during the switch the read performance and error rate of the target database can be monitored, so that the stability of the business service is better maintained throughout the switching process.
The read-write switching of the database can be performed on line, and the embodiment of the invention can realize smooth migration of transaction data from the MySQL database to the target database without suspending the read-write service of the data. Meanwhile, the online switching process does not influence online functions, such as order payment, and the service side does not have perception, so that the use experience of a user can be improved.
After the database read/write switch is completed, the read/write service can be brought fully online on the target database, after which the configuration related to the MySQL database read/write path in the development environment can be deleted, saving configuration resources.
The MySQL database may also include historical transaction data, i.e., transaction data may have been written to the MySQL database prior to writing the transaction data to both the MySQL database and the target database, which may be referred to as historical transaction data. Therefore, in the embodiment of the invention, the transaction data aimed at by the data migration method can also comprise historical transaction data besides incremental transaction data, so that the integrity of data migration is further ensured.
In one implementation, historical transaction data may be migrated prior to a database service switch, i.e., prior to writing received transaction data only to the target database, and further prior to migrating incremental transaction data. After the migration of the historical transaction data is completed, the steps S101-S103 are executed again to migrate the incremental transaction data.
The migration process for the historical transaction data may include obtaining the historical transaction data from a MySQL database and writing the obtained historical transaction data to a target database. The step of acquiring the historical transaction data from the MySQL database may specifically include acquiring the historical transaction data from a transaction data form storing the transaction data in the MySQL database.
In one implementation, a transaction data form for storing transaction data may be created in the target database prior to migrating the historical transaction data in the MySQL database to the target database. Specifically, a transaction data form for storing transaction data, which is the same in structure, can be created in the target database corresponding to the transaction data form for storing transaction data in MySQL.
For example, in the MySQL database, the header of the transaction data form used to store transaction data may include the following fields: an auto-increment serial number (i.e., an auto-increment primary key), an order number, a user ID (identity), a commodity name, a commodity price, a commodity quantity, and a creation time. A transaction data form with the same structure, i.e., with the same header fields, is then created in the target database for storing transaction data.
Because the transaction process is idempotent, in one implementation the transaction data form may be an idempotent table, which stores the association between the unique identifier of the terminal involved in a transaction and the unique order number of that transaction; specifically, the terminal may be an iOS terminal, so the idempotent table may be an idempotent table for iOS.
In the transaction data form of the MySQL database, since there is an index field with an incremental attribute, for example, a self-increasing primary key, and such an index field with an incremental attribute is irrelevant to business operations, in order to avoid a problem of regional hot spots when data is read and written in the target database, particularly, a distributed database, the index field with an incremental attribute in the transaction data form of the target database may be deleted. And aiming at the characteristics of the distributed database, carrying out adaptive cutting aiming at the distributed database on the transaction data form, and carrying out optimization adjustment on the structure of the original form to obtain the transaction data form matched with the target database, in particular the distributed database.
Therefore, in the scenario of migrating historical transaction data in advance, when the transaction data is written into the target database, the transaction data can be written into the target database based on the structure of the transaction data form adapted to the target database with the identification which uniquely corresponds to the transaction data and has the random attribute as the primary key.
In one implementation, the index field with the increasing attribute may include a monotonically increasing primary key, at which point the index field with the increasing attribute is deleted, i.e., the monotonically increasing primary key is deleted.
In another implementation, the index field with the incremental attribute may further include time-related information, at which point the index field with the incremental attribute is deleted, and deleting the time-related index field.
Meanwhile, the other index fields and header information in the transaction data form of the target database, for example indexes such as the user name and the commodity name other than the index fields with an incremental attribute, may be kept consistent with the transaction data form of the MySQL database.
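A minimal sketch of such an adapted table in the target database, assuming the example fields listed above; the auto-increment primary key and the time index are dropped and the randomly generated order number becomes the primary key (the English column names are assumptions for illustration):

    CREATE_ADAPTED_TABLE = """
    CREATE TABLE IF NOT EXISTS transaction_data (
        order_code   VARCHAR(32)    NOT NULL,   -- unique, randomly generated order number
        user_id      VARCHAR(32)    NOT NULL,
        item_name    VARCHAR(128)   NOT NULL,
        price        DECIMAL(10, 2) NOT NULL,
        quantity     INT            NOT NULL,
        create_time  DATETIME       NOT NULL,   -- kept as data, not used as an index
        PRIMARY KEY (order_code),               -- no auto-increment key, no time index
        KEY idx_user (user_id)
    )
    """

    def create_adapted_table(target_conn):
        # Executed against the target (e.g., TiDB) database over the MySQL protocol.
        with target_conn.cursor() as cur:
            cur.execute(CREATE_ADAPTED_TABLE)
        target_conn.commit()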
In one implementation, a time node may be set, and the transaction data stored in the MySQL database before that time node is treated as the historical transaction data.
In an embodiment of the present invention, the time node may include a time point when the transaction data starts to be written to the MySQL database and the target database simultaneously, or the time node may include a time point of the online database double-write logic. For example, the time node may include a current time. It will be appreciated that the transaction data written into the MySQL database and the target database after the time node is incremental transaction data.
Alternatively, the time at which synchronization of the first piece of historical transaction data begins may serve as the time node that separates historical transaction data from incremental data; the incremental data then includes the transaction data written into the MySQL database and the target database from that moment until the database double-write logic is fully in place.
To provide higher fault tolerance, in one implementation the incremental transaction data may instead cover the transaction data written into the MySQL database and the target database from a time node somewhat earlier than the start of synchronizing the first piece of historical transaction data until the database double-write logic is fully in place. This larger time buffer further ensures that all transaction data stored in the MySQL database can be migrated to the target database. The earlier time node may be, for example, one day or one week before synchronization of the historical transaction data begins; the exact lead time is not limited and may be chosen according to actual needs.
Because the transaction data form adapted to the target database has been trimmed for that database and is therefore not completely identical in structure to the transaction data form in the MySQL database, when writing the historical transaction data into the target database, it is written according to the structure of the adapted form, with the identifier that uniquely corresponds to each piece of transaction data and has a random attribute as the primary key.
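A minimal sketch of migrating the historical data in batches into the adapted table, keyed by the random order number; the batch size, cutoff handling, and column names are illustrative assumptions:

    BATCH = 1000

    def migrate_history(mysql_conn, target_conn, cutoff_time):
        # Copy rows created before the dividing time node from MySQL into the
        # adapted target table, batch by batch, paginated on order_code.
        last_code = ""
        while True:
            with mysql_conn.cursor() as cur:
                cur.execute(
                    "SELECT order_code, user_id, item_name, price, quantity, create_time "
                    "FROM transaction_data "
                    "WHERE create_time < %s AND order_code > %s "
                    "ORDER BY order_code LIMIT %s",
                    (cutoff_time, last_code, BATCH))
                rows = cur.fetchall()
            if not rows:
                break
            with target_conn.cursor() as cur:
                cur.executemany(
                    "INSERT IGNORE INTO transaction_data "
                    "(order_code, user_id, item_name, price, quantity, create_time) "
                    "VALUES (%s, %s, %s, %s, %s, %s)", rows)
            target_conn.commit()
            last_code = rows[-1][0]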
After the historical transaction data is migrated from the MySQL database to the target database, in order to ensure the consistency and the integrity of the data, the MySQL database and the historical transaction data of the target database can be matched to judge whether the historical transaction data migrated to the target database is credible or not.
In one implementation, for each of the identifiers that uniquely corresponds to the historical transaction data and has random attributes, it is determined whether each item of information included in the historical transaction data having the same identifier matches in the MySQL database and the target database. The matching process can refer to the matching process of the incremental transaction data when the incremental transaction data is migrated, and when the matching results are inconsistent, the historical transaction data in the target database is modified based on the historical transaction data in the MySQL database. The specific manner is not described in detail.
Accordingly, "the matching results are consistent" may mean that both the matching result for the incremental transaction data and the matching result for the historical transaction data are consistent. When both are consistent, the transaction data in the target database is considered trustworthy, and the target database can replace the MySQL database in providing the data read/write service: subsequently received transaction data is written into the target database, and when the business side requests to read transaction data, it is read from the target database, completing the switch of the data read/write service.
It will be appreciated that the matching of the historical transaction data and the incremental transaction data can also be seen as preparation for switching the reading of transaction data from the MySQL database to the target database.
A binlog (binary log file) records, in binary form, the SQL statements that create or potentially modify data, and it is stored on disk in binary format.
In one implementation, the transaction data migration process may also include binlog synchronization: the binlog of the MySQL database is synchronized to the target database, and the target database replays the SQL statements that change transaction data based on the binlog. The transaction data in the target database thus undergoes the same changes as in the MySQL database and matches the transaction data in the MySQL database after the changes.
MD5 (Message-Digest Algorithm) is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value that is used to ensure that the information transfer is complete and consistent. In one implementation, MD5 may be used to generate a corresponding MD5 value for each transaction data, and when matching transaction data, specifically, when matching historical transaction data and/or incremental transaction data, it may also be determined whether two transaction data match by comparing MD5 of the two transaction data acquired by the two databases respectively.
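A minimal sketch of the MD5-based comparison, assuming each record's fields are serialized in a fixed order before hashing; the serialization format and field order are assumptions for illustration:

    import hashlib

    def row_md5(row: dict) -> str:
        # Hash one transaction record: concatenate its fields in a fixed key order.
        serialized = "|".join(str(row[k]) for k in sorted(row))
        return hashlib.md5(serialized.encode("utf-8")).hexdigest()

    def rows_match(mysql_row: dict, target_row: dict) -> bool:
        # Two records match when the MD5 digests computed from each database agree.
        return row_md5(mysql_row) == row_md5(target_row)

    # Example: a price mismatch is detected because the digests differ.
    a = {"order_code": "1000000076267123", "user_id": "001", "price": 10}
    b = {"order_code": "1000000076267123", "user_id": "001", "price": 1}
    print(rows_match(a, b))  # False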
The logic script for matching the transaction data can be pre-developed and deployed, and related logic can be directly called when matching of the transaction data is required.
According to the data migration method provided by the embodiments of the invention, the target database can be a distributed database that shards data when storing it, so storing transaction data in the target database avoids splitting databases and tables manually. Deleting the index fields with an incremental attribute from the transaction data form, i.e., trimming the form for the target database, avoids the hotspot problem in which an ever-increasing index concentrates reads and writes at the tail of the table after the transaction data is migrated to the target database, which would otherwise hurt read/write performance. Data read/write performance is therefore improved without splitting databases and tables, and the scalability and availability of the system are improved. In addition, through a strict switching procedure, the read/write service for the data can be migrated smoothly without stopping the online service, improving the stability of the business service; the switch is imperceptible to the business side, which also improves the user experience.
In practical application, as shown in fig. 2, taking a target database as a TiDB database as an example, the data migration method may include:
S201, creating a TiDB table; removing the auto-increment ID field and the time index; adding the order number index.
A TiDB table is created in the TiDB database for storing data. Specifically, a transaction data table with the same structure as the transaction data table used to store transaction data in the MySQL database can first be created in the TiDB database.
Then the table is trimmed for the TiDB database so that it suits the characteristics of TiDB.
Specifically, the index fields with a self-increment attribute, for example the self-increment ID (identification) field and the time index (update_time), may be deleted, and the order number (order_code) index is used as the primary key index. This prevents hotspots from forming in the tail region of the table when data is read and written in the TiDB database and hurting read/write performance, and yields a TiDB table adapted to the TiDB database. The rest of the TiDB table is kept unchanged, i.e., consistent with the MySQL transaction data table.
S202, double writing logic and TiDB reading logic.
The double-write logic refers to database double-write logic, namely, transaction data is written by using the MySQL database and the TiDB database simultaneously. And, for the write logic of the TiDB database, a switch may be provided, by which the write logic of the TiDB database is controlled to be turned on and off.
TiDB read logic, i.e., logic that reads transaction data using a TiDB database, or otherwise obtains transaction data from a TiDB database.
The dual-write logic and the TiDB read logic can be pre-written by a technician and are deployed in equipment for executing data migration, so that when the dual-write and the TiDB read are required to be started in the data migration process, the dual-write logic and the TiDB read logic can be directly called to provide corresponding services.
S203, data comparison and verification script.
The data comparison and verification, i.e. the matching of the transaction data, may comprise a verification of the MD5 value of the transaction data and a piece-by-piece verification of the information comprised by the transaction data.
The MD5 value of the corresponding transaction data can be acquired from the MySQL database and the TiDB database respectively aiming at the identification which is uniquely corresponding to the transaction data and has random attribute, and whether the transaction data are consistent is judged by comparing whether the MD5 values are the same.
The corresponding transaction data can be acquired from the MySQL database and the TiDB database respectively according to the identification which is uniquely corresponding to the transaction data and has random attribute, and each item of information included in the transaction data is compared one by one to judge whether the transaction data are matched.
For both verification modes, when the transaction data is inconsistent, the transaction data obtained from the MySQL database can be used to modify the corresponding transaction data in the TiDB database, so that the MySQL database and the TiDB database end up storing the same corresponding transaction data.
The script for data comparison and verification can be written in advance by a technician and deployed on the device that performs the data migration, so that when the comparison and verification are needed during the migration, the script can be invoked directly to check the consistency of the data.
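A minimal sketch of such a comparison and verification script, combining the MD5 check, the item-by-item check and the repair from MySQL, might look as follows; it reuses the hypothetical trade_order table and the two connections from the sketch above.

```python
import hashlib
import json
import pymysql

COLUMNS = ("order_code", "amount", "status", "update_time")  # assumed columns
SELECT_SQL = ("SELECT order_code, amount, status, update_time "
              "FROM trade_order WHERE order_code = %s")

def row_md5(row):
    """Serialize a row deterministically and hash it for a one-value comparison."""
    return hashlib.md5(json.dumps(row, sort_keys=True, default=str).encode()).hexdigest()

def fetch_row(conn, order_code):
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute(SELECT_SQL, (order_code,))
        return cur.fetchone()

def verify_and_repair(order_codes):
    """Check MD5 first, then field by field; repair TiDB rows from MySQL on mismatch."""
    for code in order_codes:
        src = fetch_row(mysql_conn, code)   # MySQL is treated as the source of truth
        dst = fetch_row(tidb_conn, code)
        if dst is not None and row_md5(src) == row_md5(dst):
            continue                        # MD5 values match, rows are consistent
        if dst is None or any(src[c] != dst[c] for c in COLUMNS):
            with tidb_conn.cursor() as cur:
                cur.execute("REPLACE INTO trade_order (order_code, amount, status, update_time) "
                            "VALUES (%s, %s, %s, %s)",
                            tuple(src[c] for c in COLUMNS))
            tidb_conn.commit()
```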
S201-S203 may be understood as preparation prior to performing data migration, providing a technical basis for data migration, and the execution order of S201-S203 is not limited herein.
S204, synchronizing the historical data through MySQL IO.
The historical data is synchronized through MySQL IO (input output), i.e., the historical transaction data is migrated. Historical transaction data can be obtained from the MySQL database and synchronized to the TiDB database, i.e., the historical transaction data is written into the adapted TiDB table in the TiDB database.
Specifically, corresponding to the adapted TiDB table in the TiDB database, the historical transaction data obtained from the MySQL database may be written into the TiDB database with the order number index as the primary key index.
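One simple way to sketch this synchronization is to page through the MySQL table by its auto-increment id and upsert each batch into the adapted TiDB table keyed by the order number; the batch size and column names below are assumptions, and a production migration may well use a dedicated synchronization tool instead.

```python
BATCH = 1000  # hypothetical batch size

def sync_history(mysql_conn, tidb_conn):
    """Copy historical transaction data from MySQL into the adapted TiDB table."""
    last_id = 0
    while True:
        with mysql_conn.cursor() as cur:
            cur.execute("SELECT id, order_code, amount, status, update_time "
                        "FROM trade_order WHERE id > %s ORDER BY id LIMIT %s",
                        (last_id, BATCH))
            rows = cur.fetchall()
        if not rows:
            break
        with tidb_conn.cursor() as cur:
            # The auto-increment id is dropped; order_code is the TiDB primary key.
            cur.executemany("REPLACE INTO trade_order (order_code, amount, status, update_time) "
                            "VALUES (%s, %s, %s, %s)",
                            [row[1:] for row in rows])
        tidb_conn.commit()
        last_id = rows[-1][0]
```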
S205, bringing the double write online.
The database double-write logic can be brought online and called, so that newly generated transaction data is written to the MySQL database and the TiDB database simultaneously.
S206, resynchronizing the online data.
Resynchronizing the online data means migrating the online data.
While the double write is in effect, writes to the TiDB database may fail, and the ordering of updates between the double write and the data synchronization may cause problems such as data being overwritten. As a result, after the double write is brought online, the transaction data stored in the MySQL database and the TiDB database may be inconsistent, and at this point the transaction data can be re-synchronized.
In the embodiment of the invention, the online data, i.e. the corresponding incremental transaction data, can include the transaction data written simultaneously into the MySQL database and the TiDB database after the double write is brought online. Specifically, the incremental transaction data can include the transaction data generated and written into the MySQL database and the TiDB database after the historical transaction data. More specifically, the time node at which the first piece of historical transaction data is synchronized can be used as the time node dividing the historical transaction data, and the transaction data generated and written into the MySQL database and the TiDB database from the start of synchronizing the first piece of historical transaction data until the database double-write logic has been brought online can be taken as the incremental transaction data to be synchronized.
In another implementation, to provide higher fault tolerance, a larger time buffer may be used to further ensure that all transaction data stored in the MySQL database can be synchronized to the TiDB database. That is, the online data may include the transaction data generated and stored in the MySQL database from a time node before the first piece of historical transaction data began to be synchronized until the database double-write logic has been brought online, and this data is synchronized as the incremental transaction data.
In addition, while the online data is re-synchronized, the consistency of the transaction data generated and written into the MySQL database and the TiDB database after the historical transaction data can be checked. Specifically, the data comparison and verification script developed and deployed in advance in S203 can be invoked to check the consistency of the transaction data and correct it.
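A minimal sketch of such a resynchronization, selecting everything MySQL recorded from a cutoff slightly before the first piece of historical data was synchronized and upserting it into TiDB, might be as follows (the buffer size and the use of update_time as the filter column are assumptions):

```python
from datetime import timedelta

def resync_online_data(mysql_conn, tidb_conn, first_sync_started_at, buffer_minutes=10):
    """Re-synchronize incremental transaction data with an extra time buffer."""
    cutoff = first_sync_started_at - timedelta(minutes=buffer_minutes)
    with mysql_conn.cursor() as cur:
        cur.execute("SELECT order_code, amount, status, update_time FROM trade_order "
                    "WHERE update_time >= %s", (cutoff,))
        rows = cur.fetchall()
    if rows:
        with tidb_conn.cursor() as cur:
            cur.executemany("REPLACE INTO trade_order (order_code, amount, status, update_time) "
                            "VALUES (%s, %s, %s, %s)", rows)
        tidb_conn.commit()
```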
S207, switching the online read path from MySQL to TiDB.
After the verification is complete, the transaction data stored in the MySQL database and the TiDB database can be considered consistent, i.e. the transaction data stored in the TiDB database is trusted. At this point the TiDB database can replace the MySQL database in providing the data read service, i.e. the read path is switched from MySQL to TiDB. This process can be performed online without suspending the online service.
S208, taking the double write offline.
After the read switch of the database is complete, the double write can be taken offline, i.e. transaction data is no longer written to the MySQL database and the TiDB database simultaneously.
Specifically, the write logic that writes transaction data to the MySQL database may be turned off while the write logic that writes transaction data to the TiDB database is kept on, so that when transaction data is generated at the business end it is written only to the TiDB database. The write service for the data is thereby switched from the MySQL database to the TiDB database, completing the switch of the database read-write service.
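In terms of the switches in the earlier double-write sketch, the whole cut-over reduces to flipping the flags in order; this is only an illustration, and in practice the flags would be dynamic configuration items rather than module globals.

```python
# Hypothetical cut-over sequence using the flags from the double-write sketch above.

# S205: bring the double write online - new transactions now go to both databases.
TIDB_WRITE_ENABLED = True

# S206: re-synchronize and verify the online incremental data (see the scripts above).

# S207: switch the online read path from MySQL to TiDB once verification passes.
READ_FROM_TIDB = True

# S208: take the double write offline - stop writing MySQL, keep writing only TiDB.
MYSQL_WRITE_ENABLED = False
```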
According to the embodiment of the invention, transaction data can be migrated smoothly from the MySQL database to the TiDB database without suspending the read-write service of the data, so that the read-write service for the transaction data is switched smoothly from the MySQL database to the TiDB database, the online functions are not affected, the business side perceives nothing, and the user experience can be improved.
Migrating the large volume of transaction data in the single MySQL table to TiDB also avoids the rapid performance degradation that would result from the business continuing to use the MySQL database, and thus avoids economic loss to the company. For example, when the business end executes the order payment flow, degraded MySQL performance causes a large backlog of requests, so the business side waits too long and situations such as cancelled transactions occur. Moreover, TiDB supports the MySQL transport protocol and most of the MySQL syntax, so existing applications can continue to be used after the MySQL data has been migrated to TiDB, achieving high availability.
According to the data migration method provided by the embodiment of the invention, the TiDB database is a distributed database that shards data when storing it, so storing transaction data in the TiDB database avoids splitting databases and tables. Deleting the index fields with incremental attributes from the TiDB table tailors the table to the TiDB database and solves the hot-spot problem in which, after the transaction data is migrated to the TiDB database, incrementing indexes concentrate reads and writes at the tail of the table and degrade read-write performance. The read-write performance of the data is thus improved without splitting databases or tables.
As shown in fig. 3, an embodiment of the present invention provides a data migration apparatus, which may include:
a receiving module 301, configured to receive transaction data;
a first writing module 302, configured to write transaction data into the MySQL database and the target database;
the first matching module 303 is configured to match incremental transaction data in the MySQL database and the target database; the incremental transaction data represents newly written transaction data;
the first switching module 304 is configured to write the transaction data received subsequently into the target database when the matching result is consistent.
Optionally, the first writing module 302 is specifically configured to write the transaction data into the MySQL database based on a transaction data form that stores transaction data in the MySQL database; and to write the transaction data into the target database with an identifier that uniquely corresponds to the transaction data and has a random attribute as the primary key.
Optionally, taking the incremental transaction data written in the MySQL database as first incremental transaction data and the incremental transaction data written in the target database as second incremental transaction data;
the first matching module 303 is specifically configured to determine, for each identifier among the identifiers that uniquely correspond to the first incremental transaction data and have a random attribute, the second target incremental transaction data corresponding to that identifier in the second incremental transaction data, take the first incremental transaction data and the second target incremental transaction data determined by the same identifier as a group of transaction data to be verified, and judge whether the transaction data to be verified match.
Optionally, the apparatus further comprises,
the modification module is used for, after the incremental transaction data in the MySQL database and the target database are matched, and for the transaction data to be verified whose matching result is inconsistent, modifying the items of information included in the second target incremental transaction data based on the items of information included in the first incremental transaction data of that transaction data to be verified; after modification, the information included in the second target incremental transaction data matches the information included in the first incremental transaction data.
Optionally, the apparatus further comprises:
the second writing module is used for acquiring historical transaction data before the subsequently received transaction data is written into the target database if the matching results are consistent, the historical transaction data including the transaction data stored in the MySQL database before the transaction data is written into the MySQL database and the target database; and for writing the historical transaction data into the target database.
Optionally, the second writing module is specifically configured to create a transaction data form for storing transaction data in the target database, the structure of the transaction data form being the same as that of the transaction data form used for storing transaction data in the MySQL database; delete the index fields with the incremental attribute from the transaction data form to obtain a transaction data form adapted to the target database; and, based on the structure of the transaction data form adapted to the target database, write the historical transaction data into the target database with an identifier that uniquely corresponds to the transaction data and has a random attribute as the primary key.
Optionally, the apparatus further comprises,
the second matching module is used for judging, for each identifier that uniquely corresponds to the historical transaction data and has a random attribute, whether the items of information included in the historical transaction data with the same identifier in the MySQL database and the target database match;
the first switching module 304 is specifically configured to, if the matching result of the incremental transaction data is consistent and the matching result of the historical transaction data is consistent, write the subsequently received transaction data into the target database when the transaction data is subsequently received.
Optionally, the apparatus further comprises,
and the second switching module is used for, after the incremental transaction data in the MySQL database and the target database are matched and if the matching results are consistent, reading the transaction data from the target database when the transaction end requests to read the transaction data.
According to the data migration device provided by the embodiment of the invention, when transaction data are received, the data are written into the MySQL database and the target database at the same time, incremental transaction data in the two databases are matched, and when the matching results are consistent, the data written into the target database are considered to be credible, and then the target database can be utilized to replace the MySQL database to provide service. The embodiment of the invention can realize the online data migration without suspending the service, thereby improving the stability of the service.
The embodiment of the invention also provides an electronic device, as shown in fig. 4, which comprises a processor 401, a communication interface 402, a memory 403 and a communication bus 404, wherein the processor 401, the communication interface 402 and the memory 403 communicate with each other through the communication bus 404;
a memory 403 for storing a computer program;
the processor 401 is configured to implement the steps of the data migration method when executing the program stored in the memory 403.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, abbreviated as PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, abbreviated as EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, a computer readable storage medium is provided, where a computer program is stored, the computer program implementing the data migration method according to any one of the above embodiments when executed by a processor.
In yet another embodiment of the present invention, a computer program product comprising instructions that, when run on a computer, cause the computer to perform the data migration method of any of the above embodiments is also provided.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, electronic devices, computer-readable storage media, and computer program product embodiments containing instructions, the description is relatively simple as it is substantially similar to method embodiments, as relevant points are found in the partial description of method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (11)

1. A method of data migration, the method comprising:
receiving transaction data;
writing the transaction data into a MySQL database and a target database;
matching the incremental transaction data in the MySQL database and the target database, and if the matching results are consistent, writing the subsequently received transaction data into the target database when the transaction data are subsequently received; the incremental transaction data represents newly written transaction data.
2. The method of claim 1, wherein writing the transaction data to a MySQL database and a target database comprises:
writing transaction data into a MySQL database based on a transaction data form for storing the transaction data in the MySQL database;
and writing the transaction data into a target database by taking an identification which is uniquely corresponding to the transaction data and has random attribute as a primary key.
3. The method of claim 1, wherein the incremental transaction data written in the MySQL database is used as first incremental transaction data and the incremental transaction data written in the target database is used as second incremental transaction data;
the matching the incremental transaction data in the MySQL database and the target database includes:
for each identifier in the identifiers which uniquely corresponds to the first incremental transaction data and has random attribute, determining second target incremental transaction data which respectively corresponds to each identifier in the second incremental transaction data, and taking the first incremental transaction data and the second target incremental transaction data which are determined by the same identifier as a group of data to be verified;
and judging whether the transaction data to be verified are matched.
4. The method of claim 3, wherein after said matching of incremental transaction data in said MySQL database and said target database, said method further comprises:
if the matching results are inconsistent, for the transaction data to be verified whose matching result is inconsistent, modifying the items of information included in the second target incremental transaction data based on the items of information included in the first incremental transaction data in that transaction data to be verified; the second target incremental transaction data includes information that matches the information included in the first incremental transaction data after modification.
5. The method of claim 1, wherein if the matching results are consistent, then before writing the subsequently received transaction data to the target database upon subsequent receipt of the transaction data, the method further comprises:
acquiring historical transaction data; the historical transaction data comprises transaction data stored in the MySQL database before the transaction data is written into the MySQL database and the target database;
writing the historical transaction data into the target database.
6. The method of claim 5, wherein the writing the historical transaction data to the target database comprises:
creating a transaction data form for storing transaction data in the target database; the structure of the transaction data form is the same as that of the transaction data form used for storing transaction data in the MySQL database;
deleting the index field with the incremental attribute from the transaction data form to obtain a transaction data form adapted to the target database;
based on the structure of a transaction data form adapted to the target database, writing the historical transaction data into the target database by taking an identification which uniquely corresponds to the transaction data and has a random attribute as a primary key.
7. The method of claim 5, wherein after said writing said historical transaction data to said target database, said method further comprises:
judging whether all information included in the historical transaction data with the same identifier in the MySQL database and the target database is matched or not according to each identifier which is uniquely corresponding to the historical transaction data and has random attribute;
if the matching results are consistent, writing the transaction data received subsequently into a target database when the transaction data is received subsequently, wherein the method comprises the following steps:
and if the matching result of the incremental transaction data is consistent and the matching result of the historical transaction data is consistent, writing the subsequently received transaction data into a target database when the transaction data is subsequently received.
8. The method of claim 1, wherein after said matching of incremental transaction data in the MySQL database and the target database, the method further comprises:
and if the matching results are consistent, reading the transaction data from the target database when the transaction end requests to read the transaction data.
9. A data migration apparatus, the apparatus comprising:
The receiving module is used for receiving transaction data;
the first writing module is used for writing the transaction data into a MySQL database and a target database;
the first matching module is used for matching the MySQL database with the incremental transaction data in the target database; the incremental transaction data representing newly written transaction data;
and the first switching module is used for writing the transaction data received subsequently into the target database when the transaction data is received subsequently if the matching results are consistent.
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the method of any one of claims 1-8 when executing a program stored on a memory.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-8.