CN116860869A - Queue delivery method and system under primary key concurrency scene - Google Patents
- Publication number
- CN116860869A CN116860869A CN202310613040.2A CN202310613040A CN116860869A CN 116860869 A CN116860869 A CN 116860869A CN 202310613040 A CN202310613040 A CN 202310613040A CN 116860869 A CN116860869 A CN 116860869A
- Authority
- CN
- China
- Prior art keywords
- queue
- delivery
- primary key
- change
- statement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24578—Query processing with adaptation to user needs using ranking
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application relates to the technical field of big data processing and provides a queue delivery method and system under a primary-key concurrency scenario. The method comprises the following steps: pre-ordering WAL log change statements by unique key/primary key (UK/PK) and caching the pre-ordering result in a memory queue; releasing changes to the queues, polling for queues with free positions, and delivering the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result. This solves the problem of unbalanced concurrent queues when hotspot updates exist in the source data changes.
Description
Technical Field
The application relates to the technical field of big data processing, and in particular to a queue delivery method and system under a primary-key concurrency scenario.
Background
During synchronous data replication, DML operations generated by a source database must be synchronized to a target database over a replication link. The database's transaction log (WAL log) generally records all transaction operations serially. To deliver them to the target end efficiently, the log must be delivered concurrently: the serial log is split for concurrent delivery according to the primary key or unique key of each change record, while the change order of any given row is kept consistent with the source end so that data correctness is guaranteed.
To achieve concurrency keyed on the primary key or unique key, a consistent hashing algorithm is generally applied to the primary or unique key, mapping it to a specific queue number and guaranteeing that the same key always maps to the same queue. This yields concurrency across keys while keeping each queue internally ordered. However, when the source data changes include hotspot primary-key or unique-key updates, the concurrent queues can become unbalanced: some synchronization queues sit idle while others are busy.
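A hypothetical sketch of the conventional hash-based dispatch criticized above (the function and statement names are illustrative, not from the patent): each change is routed to queue `hash(key) % N`, so a hot primary key always lands on the same queue and that queue alone absorbs the hotspot load.

```python
def dispatch_by_hash(changes, num_queues):
    """Map each (primary_key, dml) change to a fixed queue number."""
    queues = [[] for _ in range(num_queues)]
    for key, dml in changes:
        # Same key -> same queue, which preserves per-row order but
        # concentrates all hot-key traffic on one queue.
        queues[hash(key) % num_queues].append(dml)
    return queues

# A hot key 'a' updated 100 times vs. two rarely-updated keys:
changes = [("a", f"UPDATE t SET v={i} WHERE pk='a'") for i in range(100)]
changes += [("b", "UPDATE t SET v=0 WHERE pk='b'"),
            ("c", "UPDATE t SET v=0 WHERE pk='c'")]
queues = dispatch_by_hash(changes, 3)
sizes = [len(q) for q in queues]
# One queue now holds at least the 100 hot-key changes while the
# others stay nearly idle: the imbalance the application targets.
```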
Disclosure of Invention
Aiming at the defects of the prior art, the application provides a queue delivery method and system under a primary-key concurrency scenario that solve the problem of unbalanced concurrent queues when hotspot updates exist in the source data changes.
The application solves the above technical problems with the following technical scheme:
a queue delivery method under a primary key concurrency scene comprises the following steps:
pre-sequencing WAL log change sentences according to UK/PK, and caching the pre-sequencing result into a memory queue;
and releasing the change of the queue, polling to find the queue with the spare position, and sequentially delivering the DML operation related to the WAL log change statement according to the sequence of the spare position of the queue from more to less and the sequence of the pre-sequencing result.
Optionally, the method further comprises the following steps:
stopping delivery of the current batch once the free positions of all queues have been filled, and executing the DML operations of the current batch;
after the current batch has executed the DML operation associated with the last WAL log change statement, delivering the undelivered change operations that remain in the pre-ordering result for the same primary key, giving priority to the queue with the most free positions, in descending order of free positions.
Optionally, after caching the pre-ordered result in the memory queue, the method further includes the following steps:
determining whether any statement is still pending before the first statement of the queue; if not, releasing the change to the queue and polling for a queue with free positions; otherwise, waiting for the preceding statement to finish committing.
Optionally, when delivering DML operations associated with WAL log change statements, the number delivered at one time is set to be less than or equal to the number of free positions in the queue.
Optionally, the number of free positions in a queue = queue length - number delivered + number of operations whose execution has completed.
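The formula above can be checked with a small helper (the function name and sample values are illustrative only):

```python
def free_positions(queue_length, delivered, completed):
    # free = total slots - statements delivered in + statements already executed
    return queue_length - delivered + completed

# A queue of length 10 with 2 statements delivered and 1 already
# executed has 10 - 2 + 1 = 9 free positions.
nine = free_positions(10, 2, 1)
# A full queue (10 delivered) with 1 executed frees exactly 1 slot.
one = free_positions(10, 10, 1)
```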
A queue delivery system under a primary-key concurrency scenario executes the queue delivery method under the primary-key concurrency scenario described above, and comprises a pre-ordering unit and a queue delivery unit;
the pre-ordering unit is used for pre-ordering WAL log change statements by UK/PK and caching the pre-ordering result in a memory queue;
the queue delivery unit is used for releasing changes to the queues, polling for queues with free positions, and delivering the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements any of the queue delivery methods under the primary-key concurrency scenario described above.
Compared with the prior art, the technical scheme provided by the application has the following beneficial effects:
the DML sentences are put into the queue according to the UK/PK sequence by introducing the pre-ordered queue cache, and the next sentence or N sentences of the UK/PK queue to which the sentence belongs are released after the current sentence is executed, so that the clean new delivery of each delivery is ensured, and the DML sentences can be randomly put into the most idle queue to be submitted;
at the same time, there are three advantages: the problem of hot spot queues when the PK/UK is scattered directly can be avoided, free busy is avoided, and the total throughput is improved; the problem that some frequently updated main keys or only one body fills up a certain submitting thread queue and other sentences mapped to the thread are blocked is avoided, the condition that some sentences are submitted to starvation or the delay is extremely large is avoided, and the long tail delay of data synchronization is reduced; and subsequent sentences of the same UK/PK queue are released in batches after the previous sentence is submitted, so that the overall performance is improved.
Drawings
To illustrate the embodiments of the application or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a queue delivery method under a primary key concurrency scenario according to a first embodiment;
fig. 2 is a schematic illustration of delivery in a specific case according to the present embodiment.
Detailed Description
The present application will be described in further detail with reference to the following examples, which illustrate the application but do not limit it.
Example 1
As shown in fig. 1, a queue delivery method under a primary-key concurrency scenario includes the following steps. WAL log change statements are pre-ordered by UK (unique key constraint) / PK (primary key constraint), and the pre-ordering result is cached in a memory queue (RowListMap). Specifically, a block of queue cache for pre-ordering is first allocated in memory, called the memory queue; it can consist of per-table sub-cache Maps in which the UK/PK value serves as the key and a List serves as the value, the List holding the DML statements associated with that UK/PK. Each sub-cache Map therefore holds all DML operation statements of its table, and change statements belonging to the same UK/PK are appended to the corresponding List in order, yielding the pre-ordering result that determines delivery sequence.
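The per-key cache described above can be sketched as follows. The class name `RowListMap` follows the description; the method names and statement strings are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict

class RowListMap:
    """Pre-ordering cache: UK/PK value -> ordered list of DML statements."""
    def __init__(self):
        self._rows = defaultdict(list)

    def append(self, key, dml):
        # Statements for the same key are appended in WAL order, so the
        # list itself encodes the required per-row delivery sequence.
        self._rows[key].append(dml)

    def head(self, key):
        # Only the first statement of a key's list is eligible for
        # delivery; later ones wait until their predecessor commits.
        lst = self._rows.get(key)
        return lst[0] if lst else None

    def pop_head(self, key):
        # Called once the head statement has committed, releasing the next.
        return self._rows[key].pop(0)

cache = RowListMap()
cache.append("pk=a", "UPDATE t SET v=1 WHERE pk='a'")
cache.append("pk=a", "UPDATE t SET v=2 WHERE pk='a'")
```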
After the pre-ordering result is cached in the memory queue, the method determines whether any statement is still pending before the first statement of a key's list. If not, the change is released to a to-be-committed queue and a queue with free positions is found by polling; otherwise, the method waits for the commit thread to finish committing the preceding statement.
Specifically, when a DML operation statement is the first in its list, all earlier statements for that key have been executed and the statement needs to be delivered by the delivery thread; otherwise it can be delivered only after the preceding DML operation statement has completed delivery.
Further, before delivery, to keep the queues balanced so that no queue is blocked or idle, the delivery thread, once released, polls for queues with free positions and then delivers the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result.
Specifically, each commit thread corresponds to one to-be-committed queue: the commit thread repeatedly fetches a DML operation statement from its queue, commits it, and then takes the next one. If a thread's to-be-committed queue is the shortest, or even empty, that thread is the most idle and will process DML operations delivered to its queue most quickly. The purpose of polling for queues with free positions is thus to gauge each thread's idleness from its free positions and deliver accordingly, which solves the problem of balanced delivery across queues. In other words, the fixed thread mapping of consistent hashing is replaced by freely selecting the most idle thread for delivery.
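The selection and batch-fill logic described above can be sketched as a minimal model (the function `deliver_batch` and its inputs are illustrative assumptions; the patent describes the behavior, not this code):

```python
def deliver_batch(pending, free):
    """pending: DML statements whose predecessors have committed, in
    pre-order; free: free-slot count per queue. Returns a list of
    (queue_index, dml) assignments, filling the most idle queue first
    and never exceeding any queue's free positions."""
    order = sorted(range(len(free)), key=lambda i: free[i], reverse=True)
    assignments = []
    it = iter(pending)
    for qi in order:
        for _ in range(free[qi]):
            dml = next(it, None)
            if dml is None:          # batch exhausted before queues fill
                return assignments
            assignments.append((qi, dml))
    return assignments               # all free positions filled

out = deliver_batch(["s1", "s2", "s3"], [0, 2, 1])
# s1 and s2 go to the queue with 2 free slots, s3 to the one with 1.
```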
Further, once the free positions of all queues have been filled, delivery of the current batch stops and waits for the next round while the execution threads execute the DML operations in the to-be-committed queues. After the current batch has executed the DML operation associated with the last WAL log change statement, the remaining undelivered change operations with the same primary key are taken from the pre-ordering cache queue. Because the preceding changes of any operation taken from the pre-ordering cache have already been executed, these operations can be delivered to the queue with the most free positions, in descending order of free positions.
Specifically, after all DML operations of the first batch have been executed, delivery of the next batch starts from the first DML operation to be delivered: the cache list to which the PK/UK of that DML statement belongs is located first, and if the queue is free, the next statement of the same PK/UK list is delivered. This guarantees that all statements in the cache, ordered by PK/UK, are eventually committed to the target end in sequence. Polling the length of each to-be-committed queue and preferentially selecting the most idle queue for delivery ensures delivery efficiency, avoids idle queues, and reduces wasted delivery resources.
In this way, a memory queue acting as a cache is responsible for ordering the DML operations: ReleaseDML takes a DML from the Map and puts it into a to-be-committed queue, and the next statement is triggered only after the statement for the corresponding PK/UK has finished committing. The ordering therefore does not depend on the to-be-committed queues, a DML can be placed into the most idle to-be-committed queue at will, and the hotspot-queue problem of scattering DML directly by PK/UK is avoided.
In this embodiment, the delivery process is illustrated with 3 commit threads, each corresponding to a to-be-committed queue of size 10. Each thread takes one DML operation from its own queue and, after finishing it, can take the next; every time a DML operation is taken from a queue, the queue frees one position for subsequently delivered operation logs.
As shown in fig. 2, the primary keys a, b, c, d of a table receive frequent hotspot updates: a is updated 100 times, b 2 times, c 100 times, and d 10 times. The database first records the serial DML operation log in transaction commit order, and delivery then proceeds through the optimized delivery queues.
First, the statements are ordered in the cache by a, b, c, d. In each delivery round, the 3 queues are polled to find the queue with the most free positions. When delivering the DML operations associated with the WAL log change statements, the number delivered at one time is set to be less than or equal to the number of free positions in the queue; for example, with a queue length of 10, up to 10 operations can be delivered, until the cache queue has been fully delivered.
Specifically, the delivery proceeds as follows. First, a1~a10 are delivered as a batch to queue 1, b1 and b2 to queue 2, and c1~c10 to queue 3. Delivery of the d-related DML operations then begins with finding the most idle queue. Assuming each commit thread has so far completed only 1 commit, the current free positions follow from: number of free positions = queue length - number delivered + number of completed operations. Queue 1 is free by 1 (10 positions, 10 delivered, 1 completed); queue 2 is free by 9 (10 positions, 2 delivered, 1 completed); queue 3 is free by 1 (10 positions, 10 delivered, 1 completed). Queue 2 is therefore the most idle, with 9 free positions, so d1~d9 are delivered to queue 2.
Further, after the previous batch completes its last operation, for example after a10 has executed, the list is located by primary key and batch delivery starting from a11 is triggered. The a11~an batch can still find the most idle queue to commit to. Note that this embodiment enumerates only the four primary keys a, b, c, d, so the a11~an batch would still be sent to thread 1 first; in a real scenario with more primary keys, however, if key g were also mapped to thread 1, its queue might already be filled with g1~g9 DML operations by the time a10 finishes. The a11~an batch can then be delivered to another idle queue, and if no queue has free positions, delivery waits and polls. This solves both delivery blocking and balanced allocation of queue resources.
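Under the embodiment's stated assumptions (3 queues of length 10, each commit thread having executed exactly one statement after the first round), the free-slot arithmetic of the worked example can be replayed as a short sketch (variable names are illustrative):

```python
QLEN = 10
# Round 1: a1..a10 -> queue 1, b1..b2 -> queue 2, c1..c10 -> queue 3.
delivered = [10, 2, 10]
# Assume each commit thread has executed exactly 1 statement so far.
completed = [1, 1, 1]

# free = queue length - number delivered + number completed
free = [QLEN - d + c for d, c in zip(delivered, completed)]

# The d changes go to the most idle queue, at most one per free slot.
target = free.index(max(free))
d_batch = [f"d{i}" for i in range(1, free[target] + 1)]
```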
Example two
A queue delivery system under a primary-key concurrency scenario comprises a pre-ordering unit and a queue delivery unit. The pre-ordering unit is used for pre-ordering WAL log change statements by UK/PK and caching the pre-ordering result in a memory queue. The queue delivery unit is used for releasing changes to the queues, polling for queues with free positions, and delivering the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the queue delivery method under the primary-key concurrency scenario described in the first embodiment.
More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and the division of modules, or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units, modules, or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed.
The units may or may not be physically separate, and the components shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU). The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (7)
1. A queue delivery method under a primary-key concurrency scenario, characterized by comprising the following steps:
pre-ordering WAL log change statements by UK/PK and caching the pre-ordering result in a memory queue;
releasing changes to the queues, polling for queues with free positions, and delivering the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result.
2. The queue delivery method under the primary-key concurrency scenario of claim 1, further comprising the steps of:
stopping delivery of the current batch once the free positions of all queues have been filled, and executing the DML operations of the current batch;
after the current batch has executed the DML operation associated with the last WAL log change statement, delivering the undelivered change operations that remain in the pre-ordering result for the same primary key, giving priority to the queue with the most free positions, in descending order of free positions.
3. The queue delivery method under the primary-key concurrency scenario of claim 1, further comprising the following steps after caching the pre-ordering result in the memory queue:
determining whether any statement is still pending before the first statement of the queue; if not, releasing the change to the queue and polling for a queue with free positions; otherwise, waiting for the preceding statement to finish committing.
4. The queue delivery method under the primary-key concurrency scenario of claim 1, wherein, when delivering DML operations associated with WAL log change statements, the number delivered at one time is set to be less than or equal to the number of free positions in the queue.
5. The queue delivery method under the primary-key concurrency scenario of claim 2, wherein the number of free positions in a queue = queue length - number delivered + number of operations whose execution has completed.
6. A queue delivery system under a primary-key concurrency scenario, wherein the system performs the queue delivery method under the primary-key concurrency scenario of any one of claims 1-5 and comprises a pre-ordering unit and a queue delivery unit;
the pre-ordering unit is used for pre-ordering WAL log change statements by UK/PK and caching the pre-ordering result in a memory queue;
the queue delivery unit is used for releasing changes to the queues, polling for queues with free positions, and delivering the DML operations associated with the WAL log change statements in order, from the queue with the most free positions to the one with the fewest and following the pre-ordering result.
7. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the queue delivery method under the primary-key concurrency scenario of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310613040.2A CN116860869A (en) | 2023-05-29 | 2023-05-29 | Queue delivery method and system under primary key concurrency scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310613040.2A CN116860869A (en) | 2023-05-29 | 2023-05-29 | Queue delivery method and system under primary key concurrency scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116860869A true CN116860869A (en) | 2023-10-10 |
Family
ID=88225692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310613040.2A Pending CN116860869A (en) | 2023-05-29 | 2023-05-29 | Queue delivery method and system under primary key concurrency scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116860869A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110647579A (en) * | 2019-08-16 | 2020-01-03 | 北京百度网讯科技有限公司 | Data synchronization method and device, computer equipment and readable medium |
CN111339207A (en) * | 2020-03-20 | 2020-06-26 | 宁夏菲麦森流程控制技术有限公司 | Method for synchronizing data among multi-type databases |
CN112084206A (en) * | 2020-09-15 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Database transaction request processing method, related device and storage medium |
CN112131002A (en) * | 2020-09-24 | 2020-12-25 | 腾讯科技(深圳)有限公司 | Data management method and device |
CN112800026A (en) * | 2021-01-18 | 2021-05-14 | 中国银联股份有限公司 | Data transfer node, method, system and computer readable storage medium |
CN113918657A (en) * | 2021-12-14 | 2022-01-11 | 天津南大通用数据技术股份有限公司 | Parallel high-performance incremental synchronization method |
CN114328722A (en) * | 2021-12-06 | 2022-04-12 | 深圳市六度人和科技有限公司 | Data synchronization method and device supporting multiple data sources and computer equipment |
CN114328747A (en) * | 2021-12-31 | 2022-04-12 | 北京人大金仓信息技术股份有限公司 | Data synchronization method, data synchronization device, computer equipment and medium |
CN115168434A (en) * | 2022-07-08 | 2022-10-11 | 武汉达梦数据库股份有限公司 | Data synchronization method and equipment for shared storage cluster database |
CN115794352A (en) * | 2022-12-27 | 2023-03-14 | 天翼云科技有限公司 | Method and system for online migration of S3 object storage bucket level data |
2023
- 2023-05-29 CN CN202310613040.2A patent/CN116860869A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10698833B2 (en) | Method and apparatus for supporting a plurality of load accesses of a cache in a single cycle to maintain throughput | |
US9177027B2 (en) | Database management system and method | |
US20210097043A1 (en) | Data processing method, device, and a storage medium | |
US8799583B2 (en) | Atomic execution over accesses to multiple memory locations in a multiprocessor system | |
US10296389B2 (en) | Time-bound conditional resource deallocation | |
CN107368450B (en) | Multi-chip processor and operation method thereof | |
US20130007762A1 (en) | Processing workloads using a processor hierarchy system | |
US11074203B2 (en) | Handling an input/output store instruction | |
CN112231101B (en) | Memory allocation method and device and readable storage medium | |
CN111125769B (en) | Mass data desensitization method based on ORACLE database | |
US20200371827A1 (en) | Method, Apparatus, Device and Medium for Processing Data | |
EP3267329A1 (en) | Data processing method having structure of cache index specified to transaction in mobile environment dbms | |
US8667008B2 (en) | Search request control apparatus and search request control method | |
CN116860869A (en) | Queue delivery method and system under primary key concurrency scene | |
CN105786917A (en) | Concurrent time series data loading method and device | |
US10740311B2 (en) | Asynchronous index loading for database computing system startup latency managment | |
CN111290700A (en) | Distributed data reading and writing method and system | |
CN115617859A (en) | Data query method and device based on knowledge graph cluster | |
CN114090539A (en) | Data migration method, device, computer system and storage medium | |
US20030182507A1 (en) | Methods and apparatus for control of asynchronous cache | |
CN113282619A (en) | Data rapid query method and system | |
US10747627B2 (en) | Method and technique of achieving extraordinarily high insert throughput | |
US10122643B2 (en) | Systems and methods for reorganization of messages in queuing systems | |
US11659071B2 (en) | Packet processing | |
CN111414259A (en) | Resource updating method, system, device, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||