CN110569142A - ORACLE data increment synchronization system and method - Google Patents

ORACLE data increment synchronization system and method

Info

Publication number
CN110569142A
Authority
CN
China
Prior art keywords
data
component
oracle
message queue
protocol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910810650.5A
Other languages
Chinese (zh)
Inventor
王伟
王征
孙美君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910810650.5A priority Critical patent/CN110569142A/en
Publication of CN110569142A publication Critical patent/CN110569142A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1479 Generic software techniques for error detection or fault masking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an ORACLE data increment synchronization system and method. The system comprises four modules: a management component, a data reading component, a data writing component and a message queue component. A plurality of such systems can be deployed, each additionally containing a high-availability component. The method comprises the following steps. Step 1: acquire and parse an ORACLE log file, extract transactions from it, assemble the parsed data into protocol data, then send the protocol data to a message queue for transmission to a remote site. Step 2: the remote process obtains the cached data from the message queue, completes data mapping, processes the protocol data according to the type of the target database to optimize the data-loading mode, splices a new SQL statement for the target database according to the mapping relation, and finally writes the data to the remote database, completing the synchronization. The invention makes synchronization more flexible and more reliable, saves enterprises considerable cost, and improves data security.

Description

ORACLE data increment synchronization system and method
Technical Field
The invention relates to the technical field of real-time database data synchronization, and in particular to an ORACLE data synchronization method.
Background
With the development of enterprise business systems, simple local dual-machine hot backup and fault-tolerant switching no longer satisfy some important systems that must provide better customer service and guarantee user data security and business continuity. More and more customers demand higher system availability and require true remote, application-level disaster recovery protection. A comprehensive remote disaster recovery scheme means that, in addition to local fault recovery, data is replicated to a remote site in real time and the business system (including the database and application software) can be switched to the remote site in real time. Database products and versions are numerous today, and the local and remote databases may be homogeneous or heterogeneous, so synchronization between different databases must also be supported. Likewise, the hardware platforms of the production system and the disaster recovery system may come from different manufacturers and different models, and may run different operating systems.
Judging from developments at home and abroad, several solutions already exist for remote fault tolerance of ORACLE databases. The existing disaster recovery and synchronization means for ORACLE mainly comprise ADG, OGG, DSG and Streams replication. Each has advantages and disadvantages, but either the cost is high or the universality and performance are limited, so none is suitable for wide application. The remote disaster recovery, universality and high-performance requirements of the ORACLE database therefore still hold great development value and significance.
Disclosure of Invention
In view of the above, the present invention provides an ORACLE data increment synchronization system and method, which use a remote database synchronization solution to meet the remote disaster recovery protection requirements of the ORACLE database.
The ORACLE data increment synchronization system comprises a management component 10, a data reading component 20, a data writing component 30 and a message queue component 40. The management component is connected to the data reading component, the data writing component and the message queue component respectively; the data reading component takes ORACLE data as input, and the data writing component produces SQL as output; wherein:
The management component 10 is responsible for uniformly controlling the interaction of the data reading and data writing components, verifying the data processing flow of the read and write ends to ensure their consistency, providing a restful calling interface to the outside, and uniformly managing exception information;
The data reading component 20 is responsible for reading ORACLE log files, parsing them, filtering transactions, decomposing field types, reassembling the data for transmission and so on, then sending the processed data to a message queue; it supports breakpoint resume after a system restart and statistically analyzes the DML data;
The data writing component 30 is responsible for obtaining cached data from the message queue, completing data mapping, optimizing the data-loading mode, splicing and executing SQL, and counting the DML operations executed;
The message queue component 40 is responsible for temporarily storing the data sent by the data reading component until the data writing component processes it.
The invention also relates to an ORACLE data increment synchronization method, which specifically comprises the following steps:
Step 1: first acquire and parse an ORACLE log file, extract transactions from it, split each SQL statement within a transaction, select a data protocol for transmission, assemble the parsed data into protocol data, then send the processed data to a message queue for transmission to the remote site; this step also supports breakpoint resume after a system restart and statistically analyzes the DML data;
Step 2: the remote process obtains the cached data from the message queue, completes data mapping, transmits the data according to the specified data protocol, processes the protocol data according to the type of the target database to optimize the data-loading mode, splices a new SQL statement for the target library according to the mapping relation, counts the DML operations completed, and finally writes the data to the remote database, completing the synchronization.
Compared with the prior art, the invention can achieve the following beneficial effects:
(1) The security of the production database and the universality of the remote database are greatly improved;
(2) The cost to the enterprise of ensuring data security is reduced;
(3) Beyond the core disaster recovery function, other functions are enriched and improved, making synchronization more flexible and more reliable, with performance reaching more than half that of OGG;
(4) The method saves enterprises considerable cost, improves data security, and creates more value for them.
Drawings
FIG. 1 is a diagram of the overall architecture of an ORACLE data incremental synchronization system according to the present invention;
FIG. 2 is a flow chart of an ORACLE data increment synchronization method of the present invention;
FIG. 3 is a schematic diagram of an example ORACLE LogMiner tool interface.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
The invention uses the ORACLE LogMiner tool to periodically collect mined log files as the underlying source data and processes them sequentially, transaction by transaction. To guarantee transmission performance and tolerate heterogeneity of the remote database, the whole transaction is parsed into protocol data (a protobuf data structure) and sent to a message queue (Kafka or RabbitMQ), which transmits it over the network to the remote program. The remote process obtains the protocol data from the message queue, parses it, restores it into concrete SQL appropriate for the remote database, and executes that SQL.
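The read-side half of this pipeline can be sketched as follows. This is a minimal illustration, not the patent's implementation: JSON stands in for the protobuf structure and an in-memory list stands in for the Kafka/RabbitMQ producer, so that the sketch stays self-contained; all function and field names are assumptions.

```python
import json

def assemble_protocol_data(xid, operations):
    """Pack one mined transaction's DML operations into a single protocol
    message (JSON here; protobuf in the patent's description)."""
    return json.dumps({"xid": xid, "ops": operations})

def send_to_queue(queue, message):
    """Stand-in for a Kafka/RabbitMQ producer: appends to an in-memory list."""
    queue.append(message)

# One transaction mined from the log, serialized and queued for the remote side.
queue = []
ops = [{"type": "INSERT", "sql": "INSERT INTO t (id) VALUES (1)"}]
send_to_queue(queue, assemble_protocol_data("tx-001", ops))
print(len(queue))  # 1
```

The remote (write-side) process would consume such messages, parse them back, and rebuild SQL for the target database.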
As shown in FIG. 1, the overall architecture of the ORACLE data incremental synchronization system of the invention mainly comprises five modules: a management component 10, a data reading component 20, a data writing component 30, a message queue component 40 and a high availability component 50.
The management component 10 is responsible for uniformly controlling the interaction of the data reading and data writing components, verifying the data processing flow of the read and write ends to ensure their consistency, providing a restful calling interface to the outside, and uniformly managing exception information. The state of the system can be exposed externally through the restful interface, and the data activity of the production library can be forecast with machine learning methods (the interface reports how far the system has parsed logs and completed synchronization, together with the specific DML operations executed).
Detailed explanation of each function
Unified control of the read and write components: the read and write components interact while working; coordination and message forwarding pass through the management component.
Verification of metadata and mapping relationships: the source data information of the source and target libraries in the configuration file is verified; if verification fails, no synchronization is performed.
Exception information management: the exception information of each component is managed uniformly; if any component raises an exception, an error is reported and the system exits.
External interface calls: a restful interface is provided for third parties to call (for now it only supports queries of synchronization rate, health state, DML counts and similar information).
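A plausible shape for the payload such a status interface might return is sketched below. The patent only names the exposed quantities (synchronization rate, health state, DML counts); the field names and units here are illustrative assumptions, not the actual interface.

```python
import json

def build_status(sync_rate_rows_per_s, healthy, dml_counts):
    """Assemble the hypothetical restful status payload as JSON.
    dml_counts is assumed to be e.g. {"INSERT": n, "UPDATE": n, "DELETE": n}."""
    return json.dumps({
        "sync_rate": sync_rate_rows_per_s,        # rows synchronized per second
        "health": "ok" if healthy else "error",   # overall health state
        "dml": dml_counts,                        # per-operation counters
    })

print(build_status(1200, True, {"INSERT": 10, "UPDATE": 4, "DELETE": 1}))
```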
The data reading component 20 is responsible for reading ORACLE log files, parsing them, filtering transactions, decomposing field types, reassembling the data for transmission and so on, then sending the processed data to a message queue (RabbitMQ or Kafka); it supports breakpoint resume after a system restart and statistically analyzes the DML data.
the specific functions are explained as follows:
Reading ORACLE log files: ORACLE logs fall into several categories; the work investigates how to acquire the log files, how to control the range of log files acquired, and how to improve acquisition performance.
Parsing the log file: the ORACLE log is binary, so the work investigates how to parse it into a plain-text log file and what the various parameters of the log file mean, in order to know how to extract a transaction from it and how to guarantee ordering between transactions (here a transaction contains only DML operations; transactions containing DDL operations are not considered). It also investigates how to split each SQL statement within a transaction (the SQL statements recorded in the ORACLE log are examined first, but the read-end and write-end databases may not be homogeneous, and column, table and library names may differ, so the SQL statements are ultimately decomposed), how to parse the different ORACLE data types, and how to improve parsing performance.
Reassembling data: a data protocol for transmission is selected, and the parsed data is assembled into protocol data.
Breakpoint resume: after an abnormal stop, the system continues synchronizing from the last completed node when restarted, so no incremental data of the production library is lost; data can also be recovered manually from an earlier point.
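The breakpoint-resume idea can be sketched as persisting the last fully synchronized position (an Oracle SCN) after each transaction and reading it back on restart. The file layout and field names below are assumptions for illustration, not the patent's actual checkpoint format.

```python
import json
import os
import tempfile

def save_checkpoint(path, scn):
    """Persist the SCN of the last fully synchronized transaction."""
    with open(path, "w") as f:
        json.dump({"last_scn": scn}, f)

def load_checkpoint(path, default_scn=0):
    """On restart, resume from the stored SCN; on first run, use the default."""
    if not os.path.exists(path):
        return default_scn
    with open(path) as f:
        return json.load(f)["last_scn"]

path = os.path.join(tempfile.gettempdir(), "sync_checkpoint.json")
save_checkpoint(path, 120043)
print(load_checkpoint(path))  # 120043
```

Manual recovery from an earlier point, as the text mentions, would amount to overwriting the stored SCN with an older value before restarting.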
Sending data: the reassembled data is sent to the message queue with attention to transmission efficiency, in particular the overall throughput of producer and consumer.
Statistical information: the DML operations parsed by the reading component are counted and can be queried externally through the restful interface; the activity of the production library can be mined, analyzed and predicted with machine learning algorithms.
The data writing component 30 is responsible for obtaining cached data from the message queue, completing data mapping, optimizing the data-loading mode, splicing and executing SQL, and counting the DML operations executed.
The specific functions are explained as follows:
Obtaining cached data: the data sent by the reading component is fetched from the message queue; if a large transaction was sent in sub-packets, the packets must be merged so as to preserve the integrity of the transaction.
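Merging sub-packets of a large transaction can be sketched as buffering numbered packets per transaction and releasing the ordered payload only once every packet has arrived, so the transaction is applied atomically. The packet fields (xid, seq, total) are assumptions; the patent does not specify the packet layout.

```python
from collections import defaultdict

class TransactionAssembler:
    """Buffer sub-packets of large transactions until each is complete."""

    def __init__(self):
        self.buffers = defaultdict(dict)   # xid -> {seq: payload}

    def add_packet(self, xid, seq, total, payload):
        """Store one sub-packet; return the full ordered payload list once
        all `total` packets for transaction `xid` have arrived, else None."""
        self.buffers[xid][seq] = payload
        if len(self.buffers[xid]) == total:
            parts = self.buffers.pop(xid)
            return [parts[i] for i in sorted(parts)]
        return None  # transaction still incomplete; keep buffering

asm = TransactionAssembler()
assert asm.add_packet("tx1", 1, 2, "DELETE ...") is None  # out-of-order arrival
print(asm.add_packet("tx1", 0, 2, "INSERT ..."))
```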
Completing data mapping: the writing component can reconstruct SQL according to the configuration file and supports reconstruction at the library, table and column level. At the library level, the library names of the read and write ends may differ, but the table and column names beneath them are identical. At the table level, library and table names may differ while column names are the same, and the correspondences of table names and library names must be configured. At the column level, library, table and column names may all differ, and the correspondences of libraries, tables and columns must all be configured.
Splicing and executing SQL: a new SQL statement for the target library is spliced according to the mapping relation, with attention to execution performance.
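Column-level mapping followed by SQL splicing can be sketched as below: source column names are rewritten via the configured correspondences, then a new INSERT for the target library is built with bind-variable placeholders. The mapping dictionary format is an assumption; the patent only states that library-, table- and column-level correspondences are configurable.

```python
def remap(row, mapping):
    """Rename source columns to target columns per the configured mapping;
    columns without an entry keep their source name."""
    return {mapping["columns"].get(c, c): v for c, v in row.items()}

def splice_insert(mapping, row):
    """Splice a new INSERT statement against the target library and table,
    using '?' bind placeholders so the statement can be executed efficiently."""
    cols = list(row)
    placeholders = ", ".join(["?"] * len(cols))
    table = f'{mapping["library"]}.{mapping["table"]}'
    return f'INSERT INTO {table} ({", ".join(cols)}) VALUES ({placeholders})'

# Hypothetical column-level configuration: library, table and columns all differ.
mapping = {"library": "dr_db", "table": "orders_t",
           "columns": {"ID": "ORDER_ID", "AMT": "AMOUNT"}}
row = remap({"ID": 7, "AMT": 99}, mapping)
print(splice_insert(mapping, row))
# INSERT INTO dr_db.orders_t (ORDER_ID, AMOUNT) VALUES (?, ?)
```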
Counting DML information: the DML operations completed by the writing component are counted for query through the restful interface.
The message queue component 40 is responsible for temporarily storing the data sent by the data reading component until the data writing component processes it. It adopts a third-party Kafka system or a RabbitMQ message queue; both are high-throughput distributed publish-subscribe messaging systems that can handle action-stream data at consumer scale while keeping data processing highly available.
The high availability component 50: to ensure the availability of the whole system as far as possible, multiple sets of the system can be deployed; when the working set exits due to a problem, operation switches to another set to continue synchronization, guaranteeing overall availability.
FIG. 3 is a schematic diagram of an example ORACLE LogMiner tool interface. ORACLE LogMiner is a practical and very useful analysis tool that ORACLE Corporation has provided since release 8i. With it, the specific content of ORACLE online/archive log files can easily be obtained; in particular, it can analyze all DML and DDL statements executed against the database. After LogMiner is correctly installed, the corresponding information is obtained by creating a data dictionary, adding the log files to be analyzed, analyzing the logs, and so on. The following example illustrates the steps:
Execute the start procedure with a specified SCN range (an SCN can be understood as a sequence number of the redo log information):
EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'D:\..\practice\LOGMNR\dictionary.ora',
StartScn => 20,
EndScn => 50);
Query the dynamic performance view v$logmnr_contents to obtain the information mined by LogMiner in this run:
SELECT sql_redo FROM v$logmnr_contents;
Check the result: the sql_redo column in the figure is the requested information; it records the SQL statements of the operations that occurred in the ORACLE database and is the source of the data to be parsed and processed.
FIG. 2 is a flowchart of the ORACLE data increment synchronization method of the invention. The method specifically comprises the following steps:
Step 1: first read and parse the ORACLE log file, extract transactions from it while guaranteeing their order, split each SQL statement within a transaction, select a data protocol for transmission, assemble the parsed data into protocol data, then send the processed data to a message queue for transmission to the remote site; this step also supports breakpoint resume after a system restart and statistically analyzes the DML data;
Step 2: the remote process obtains the cached data from the message queue (if a large transaction was sent in sub-packets, the packets must be merged to preserve the integrity of the transaction), completes data mapping, processes the protocol data according to the type of the target database to optimize the data-loading mode, splices a new SQL statement for the target library according to the mapping relation, counts the DML operations completed, and finally completes the synchronization on the remote database.

Claims (4)

1. An ORACLE data increment synchronization system, characterized by comprising four modules: a management component (10), a data reading component (20), a data writing component (30) and a message queue component (40), wherein the management component is connected to the data reading component, the data writing component and the message queue component respectively, the data reading component takes ORACLE data as input, and the data writing component produces SQL as output; wherein:
The management component (10) is responsible for uniformly controlling the interaction of the data reading and data writing components, verifying the data processing flow of the read and write ends to ensure their consistency, providing a restful calling interface to the outside, and uniformly managing exception information;
The data reading component (20) is responsible for reading ORACLE log files, parsing them, filtering transactions, decomposing field types, reassembling the data for transmission and so on, then sending the processed data to a message queue; it supports breakpoint resume after a system restart and statistically analyzes the DML data;
The data writing component (30) is responsible for obtaining cached data from the message queue, completing data mapping, optimizing the data-loading mode, splicing and executing SQL (Structured Query Language), and counting the DML (Data Manipulation Language) operations executed;
The message queue component (40) is responsible for temporarily storing the data sent by the data reading component until the data writing component processes it.
2. The ORACLE data incremental synchronization system of claim 1, wherein a plurality of the ORACLE data incremental synchronization systems are deployed.
3. The ORACLE data incremental synchronization system according to claim 2, further comprising a high availability component (50) in each ORACLE data incremental synchronization system, wherein when the working system exits due to a problem, operation switches to another ORACLE data incremental synchronization system to continue synchronization, ensuring the overall availability of the system.
4. An ORACLE data increment synchronization method, specifically comprising the following steps:
Step 1: first acquire and parse an ORACLE log file, extract transactions from it, split each SQL statement within a transaction, select a data protocol for transmission, assemble the parsed data into protocol data, then send the processed data to a message queue for transmission to the remote site; this step also supports breakpoint resume after a system restart and statistically analyzes the DML data;
Step 2: the remote process obtains the cached data from the message queue, completes data mapping, transmits the data according to the specified data protocol, processes the protocol data according to the type of the target database to optimize the data-loading mode, splices a new SQL statement for the target library according to the mapping relation, counts the DML operations completed, and finally writes the data to the remote database, completing the synchronization.
CN201910810650.5A 2019-08-29 2019-08-29 ORACLE data increment synchronization system and method Pending CN110569142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910810650.5A CN110569142A (en) 2019-08-29 2019-08-29 ORACLE data increment synchronization system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910810650.5A CN110569142A (en) 2019-08-29 2019-08-29 ORACLE data increment synchronization system and method

Publications (1)

Publication Number Publication Date
CN110569142A true CN110569142A (en) 2019-12-13

Family

ID=68776832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910810650.5A Pending CN110569142A (en) 2019-08-29 2019-08-29 ORACLE data increment synchronization system and method

Country Status (1)

Country Link
CN (1) CN110569142A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414358A (en) * 2019-12-30 2020-07-14 杭州美创科技有限公司 Method for loading relational database data
CN111488243A (en) * 2020-03-19 2020-08-04 北京金山云网络技术有限公司 MongoDB database backup and recovery method and device, electronic equipment and storage medium
CN113486019A (en) * 2021-07-27 2021-10-08 中国银行股份有限公司 Method and device for automatically triggering real-time batch synchronization of remote multi-database data
CN113918657A (en) * 2021-12-14 2022-01-11 天津南大通用数据技术股份有限公司 Parallel high-performance incremental synchronization method
CN114003622A (en) * 2021-12-30 2022-02-01 天津南大通用数据技术股份有限公司 Huge transaction increment synchronization method between transaction type databases
CN115203336A (en) * 2022-09-19 2022-10-18 平安银行股份有限公司 Database data real-time synchronization method, system, computer terminal and storage medium

Citations (13)

Publication number Priority date Publication date Assignee Title
CN101183377A (en) * 2007-12-10 2008-05-21 华中科技大学 High availability data-base cluster based on message middleware
CN103067483A (en) * 2012-12-25 2013-04-24 广东邮电职业技术学院 Remote data increment synchronization method and device based on data package
CN103617176A (en) * 2013-11-04 2014-03-05 广东电子工业研究院有限公司 Method for achieving automatic synchronization of multi-source heterogeneous data resources
CN104506496A (en) * 2014-12-10 2015-04-08 山大地纬软件股份有限公司 Quasi-real-time data increment distribution middleware based on Oracle Streams technology and method
CN105320769A (en) * 2015-10-28 2016-02-10 浪潮(北京)电子信息产业有限公司 Data synchronization method and system for Oracle database
CN105868078A (en) * 2016-04-14 2016-08-17 国家电网公司 System and method for Oracle RAC (real application clusters) database SQL (structured query language) stream capture on basis of dynamic view monitoring
CN106126753A (en) * 2016-08-23 2016-11-16 易联众信息技术股份有限公司 The method of increment extractions based on big data
CN107423452A (en) * 2017-09-02 2017-12-01 国网辽宁省电力有限公司 A kind of power network heterogeneous database synchronously replicates moving method
CN108228756A (en) * 2017-12-21 2018-06-29 江苏瑞中数据股份有限公司 Data based on the PG databases of daily record analytic technique to Hadoop platform synchronize clone method
CN108804613A (en) * 2018-05-30 2018-11-13 国网山东省电力公司经济技术研究院 A kind of Various database real time fusion system and its fusion method
CN109101627A (en) * 2018-08-14 2018-12-28 交通银行股份有限公司 heterogeneous database synchronization method and device
US20190102266A1 (en) * 2017-09-29 2019-04-04 Oracle International Corporation Fault-tolerant stream processing
CN109960710A (en) * 2019-01-16 2019-07-02 平安科技(深圳)有限公司 Method of data synchronization and system between database

Patent Citations (13)

Publication number Priority date Publication date Assignee Title
CN101183377A (en) * 2007-12-10 2008-05-21 华中科技大学 High availability data-base cluster based on message middleware
CN103067483A (en) * 2012-12-25 2013-04-24 广东邮电职业技术学院 Remote data increment synchronization method and device based on data package
CN103617176A (en) * 2013-11-04 2014-03-05 广东电子工业研究院有限公司 Method for achieving automatic synchronization of multi-source heterogeneous data resources
CN104506496A (en) * 2014-12-10 2015-04-08 山大地纬软件股份有限公司 Quasi-real-time data increment distribution middleware based on Oracle Streams technology and method
CN105320769A (en) * 2015-10-28 2016-02-10 浪潮(北京)电子信息产业有限公司 Data synchronization method and system for Oracle database
CN105868078A (en) * 2016-04-14 2016-08-17 国家电网公司 System and method for Oracle RAC (real application clusters) database SQL (structured query language) stream capture on basis of dynamic view monitoring
CN106126753A (en) * 2016-08-23 2016-11-16 易联众信息技术股份有限公司 The method of increment extractions based on big data
CN107423452A (en) * 2017-09-02 2017-12-01 国网辽宁省电力有限公司 A kind of power network heterogeneous database synchronously replicates moving method
US20190102266A1 (en) * 2017-09-29 2019-04-04 Oracle International Corporation Fault-tolerant stream processing
CN108228756A (en) * 2017-12-21 2018-06-29 江苏瑞中数据股份有限公司 Data based on the PG databases of daily record analytic technique to Hadoop platform synchronize clone method
CN108804613A (en) * 2018-05-30 2018-11-13 国网山东省电力公司经济技术研究院 A kind of Various database real time fusion system and its fusion method
CN109101627A (en) * 2018-08-14 2018-12-28 交通银行股份有限公司 heterogeneous database synchronization method and device
CN109960710A (en) * 2019-01-16 2019-07-02 平安科技(深圳)有限公司 Method of data synchronization and system between database

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN111414358A (en) * 2019-12-30 2020-07-14 杭州美创科技有限公司 Method for loading relational database data
CN111488243A (en) * 2020-03-19 2020-08-04 北京金山云网络技术有限公司 MongoDB database backup and recovery method and device, electronic equipment and storage medium
CN111488243B (en) * 2020-03-19 2023-07-07 北京金山云网络技术有限公司 Backup and recovery method and device for MongoDB database, electronic equipment and storage medium
CN113486019A (en) * 2021-07-27 2021-10-08 中国银行股份有限公司 Method and device for automatically triggering real-time batch synchronization of remote multi-database data
CN113486019B (en) * 2021-07-27 2024-02-23 中国银行股份有限公司 Automatic triggering real-time batch synchronization method and device for remote multi-database data
CN113918657A (en) * 2021-12-14 2022-01-11 天津南大通用数据技术股份有限公司 Parallel high-performance incremental synchronization method
CN113918657B (en) * 2021-12-14 2022-03-15 天津南大通用数据技术股份有限公司 Parallel high-performance incremental synchronization method
CN114003622A (en) * 2021-12-30 2022-02-01 天津南大通用数据技术股份有限公司 Huge transaction increment synchronization method between transaction type databases
CN114003622B (en) * 2021-12-30 2022-04-08 天津南大通用数据技术股份有限公司 Huge transaction increment synchronization method between transaction type databases
CN115203336A (en) * 2022-09-19 2022-10-18 平安银行股份有限公司 Database data real-time synchronization method, system, computer terminal and storage medium

Similar Documents

Publication Publication Date Title
CN110569142A (en) ORACLE data increment synchronization system and method
CN110209726B (en) Distributed database cluster system, data synchronization method and storage medium
CN104809202B (en) A kind of method and apparatus of database synchronization
CN110222036B (en) Method and system for automated database migration
CN107145403B (en) Relational database data backtracking method oriented to Web development environment
CN107818431B (en) Method and system for providing order track data
US10621049B1 (en) Consistent backups based on local node clock
JP6254606B2 (en) Database streaming restore from backup system
CN101187888A (en) Method for coping database data in heterogeneous environment
WO2007036932A2 (en) Data table management system and methods useful therefor
WO2019109854A1 (en) Data processing method and device for distributed database, storage medium, and electronic device
CN109086216B (en) Automatic test system
CN110597891B (en) Device, system, method and storage medium for aggregating MySQL into PostgreSQL database
CN112163039A (en) Data resource standardization management system based on enterprise-level data middling analysis domain
CN111913933B (en) Power grid historical data management method and system based on unified support platform
CN111339118A (en) Kubernetes-based resource change history recording method and device
CN112100227A (en) Big data processing method based on multilevel heterogeneous data storage
CN101937334A (en) Calculation support method and system
CN114416868A (en) Data synchronization method, device, equipment and storage medium
CN112363873A (en) Distributed consistent backup and recovery system and backup method thereof
CN112905676A (en) Data file importing method and device
CN115905413A (en) Data synchronization platform based on Python corotation and DataX
CN116186082A (en) Data summarizing method based on distribution, first server and electronic equipment
CN108681495A (en) A kind of bad block repair method and device
CN108664503A (en) A kind of data archiving method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213