CN117688094A - Data synchronization method, equipment and medium based on message queue - Google Patents

Data synchronization method, equipment and medium based on message queue Download PDF

Info

Publication number
CN117688094A
CN117688094A (application CN202311690899.XA)
Authority
CN
China
Prior art keywords
message
buffer
lock
buffer area
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311690899.XA
Other languages
Chinese (zh)
Inventor
李兆锐
孙立新
李文峰
胡天岳
时凯旋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur General Software Co Ltd
Original Assignee
Inspur General Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur General Software Co Ltd filed Critical Inspur General Software Co Ltd
Priority to CN202311690899.XA priority Critical patent/CN117688094A/en
Publication of CN117688094A publication Critical patent/CN117688094A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data synchronization method, device and medium based on a message queue. The method comprises: determining the concurrency of service data according to service requirements and determining the number of classification lists of the service data from the concurrency; creating all the classification lists in a message queue, adding a sending lock to each classification list, and creating a buffer area corresponding to each classification list; acquiring a change data set, packaging the change data set into a message, and adding a sequence attribute to the message; acquiring the sending lock and adding the message into the corresponding buffer area; adding a message lock to the buffer area, executing the message lock, sending the message to the corresponding classification list in the message queue, and releasing the sending lock and the message lock after the sending is completed. By setting a plurality of topics in the message queue for each type of service data, the message ordering between the sending end and the consuming end is ensured; by setting a buffer area corresponding to each topic together with the sending-lock and buffer mechanism, batch sending of messages is realized.

Description

Data synchronization method, equipment and medium based on message queue
Technical Field
The present invention relates to the field of data management, and in particular, to a method, an apparatus, and a medium for synchronizing data based on a message queue.
Background
With the development of modern informatization environments, more and more enterprises need to synchronize data of different systems or data sources, and the data sources may be generated by different platforms and databases in different areas and different time periods, and differences exist among the data sources, such as data formats, data types, data structures, and the like, so that the data needs to be synchronized in order to ensure the accuracy and consistency of the data.
Data synchronization is usually performed with change data capture (CDC): the changed data in a database is pulled at regular intervals or in near real time and the captured data is synchronized to other databases for storage. However, some databases do not provide a CDC interface and cannot synchronize data through CDC; in that case, changes can instead be intercepted at the application layer, the data changes collected, and the changed data set sent to a message queue.
Disclosure of Invention
In order to solve the above problems, the present application proposes a data synchronization method based on a message queue, including:
determining concurrency of service data according to service requirements, and determining the number of classification lists of the service data according to the concurrency;
creating all the classification lists in a message queue, adding a sending lock to each classification list, and creating a buffer area corresponding to the classification list;
acquiring a change data set, packaging the change data set into a message, and adding a sequence attribute to the message;
determining the corresponding classification list based on the sequence attribute, thereby determining the corresponding buffer area, acquiring the sending lock, and adding the message into the corresponding buffer area;
and adding a message lock to the buffer area, executing the message lock, transmitting the message from the buffer area to a corresponding classification list in a message queue, and releasing the transmission lock and the message lock after the message transmission is completed.
In another aspect, the present application further proposes a data synchronization device based on a message queue, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: the message queue based data synchronization method described in the above example.
In another aspect, the present application also proposes a non-volatile computer storage medium storing computer-executable instructions, the computer-executable instructions configured to perform: the message queue based data synchronization method described in the above example.
The data synchronization method based on the message queue has the following beneficial effects:
by setting a plurality of topics in the message queue for each type of service data, the problem of larger performance delay caused by high concurrency of the service data can be solved, the message time sequence between a sending end and a consuming end is ensured, and the performance and throughput of message sending are improved.
Under the mode of synchronously sending the messages, batch sending of the messages is realized by setting a buffer area corresponding to the topic and setting a mechanism of sending locks and the buffer area, the inconsistency of the messages caused by the occurrence of new messages during the sending of the messages is prevented, the batch size and sending time of the messages can be automatically controlled, the management by using an additional thread is not needed, and the management of the message synchronization is simplified.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a flow chart of a data synchronization method based on a message queue in an embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation flow of an embodiment of a data synchronization method based on a message queue in the embodiments of the present application;
fig. 3 is a schematic diagram of a data synchronization device based on a message queue in an embodiment of the present application.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a data synchronization method based on a message queue, including:
s101: and determining the concurrency of the service data according to the service requirement, and determining the number of the classification list of the service data according to the concurrency.
Data synchronization refers to the process of consistently updating data in multiple data sources so as to ensure the accuracy and integrity of the data. The traditional data synchronization method uses CDC to capture data changes in a database in real time or near real time, analyzes the changed data, and synchronizes it to other databases for storage, thereby maintaining data consistency.
Meanwhile, some small databases do not provide a CDC interface, so data synchronization cannot be realized through CDC. In this case, interception can be performed at the application layer: data changes are collected, the collected change data set is sent to a message queue for temporary storage within a transaction, and the consuming end obtains the change data set from the message queue and consumes it, thereby obtaining the changed data and synchronizing it to other databases to ensure data consistency.
However, with a message-queue-based data synchronization method, the time a message takes to travel from the sending end to the consuming end is longer, so large performance delays and data inconsistency easily occur. For this, the embodiment of the present application performs corresponding processing in the data synchronization process to solve these problems, and the method is mainly characterized by the following two points: under the synchronous message sending mode, batch sending of messages is realized through a data buffering and locking mechanism, the batch size and sending time are controlled automatically, and no additional thread management is needed; and, to improve sending performance and throughput, when message data is distributed across multiple classification lists, the message ordering across the classification lists is still guaranteed at the sending end and the consuming end.
Specifically, the concurrency of the service data is determined according to the service requirements, and the number of classification lists is determined from that concurrency. In a typical data synchronization scenario only one classification list needs to be created for each type of service data; in this embodiment the term topic is used for the classification list, hereinafter topic refers to the classification list, and the classification list is used to distinguish different types of data or messages. However, some enterprises or organizations have complex service relationships, so the corresponding service data has high concurrency and generates a large volume of messages; a single topic can hardly support such high concurrency and would lead to high delay. Therefore, for service data with high concurrency, multiple topics are created for temporary storage, so as to ensure the message ordering between the sending end and the consuming end.
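As an illustration only, a minimal sketch of such a sizing step is given below; the embodiment does not prescribe a concrete formula, so the peak message rate and the per-topic throughput budget are assumed parameters.

```python
import math

def topic_count(peak_msgs_per_sec, per_topic_msgs_per_sec):
    """Estimate how many classification lists (topics) one type of service data needs."""
    return max(1, math.ceil(peak_msgs_per_sec / per_topic_msgs_per_sec))

# e.g. 5000 order changes per second against an assumed 1000 msg/s budget per topic
print(topic_count(5000, 1000))   # 5
```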
S102: creating all the classification lists in a message queue, adding a sending lock to each classification list, and creating a buffer area corresponding to the classification list.
Specifically, after the number of topics is determined, all the topics are created in the message queue and named; a possible naming rule is "service name_number", which distinguishes the data in the message queue. Each topic is associated with three dictionaries, namely a buffer dictionary, a buffer-addition lock dictionary, and a buffer-execution lock dictionary, all keyed by topicId. The buffer dictionary is mainly used to create buffers, the buffer-addition lock dictionary is mainly used to control thread safety when adding to a buffer, and the buffer-execution lock dictionary is mainly used to control thread safety when a buffer is sent.
Further, a sending lock is added to each topic through the buffer-addition lock dictionary, and the ID of the corresponding buffer is determined from the ID of the topic; based on the buffer ID, the corresponding buffer is created through the buffer dictionary of the topic. The buffer is used to temporarily store a message array, and a message sent to the topic is first added to the buffer corresponding to that topic for temporary caching.
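A minimal sketch of this per-topic bookkeeping follows; the variable names, the topic count and the use of Python locks are illustrative assumptions, with the send lock standing for the buffer-addition lock and the message lock for the buffer-execution lock.

```python
import threading

SERVICE_NAME = "order"   # assumed service name for the "service name_number" naming rule
TOPIC_NUM = 3            # assumed number of topics for this type of service data

topic_ids = [f"{SERVICE_NAME}_{i}" for i in range(TOPIC_NUM)]

# Three dictionaries keyed by topicId (S102):
buffers    = {tid: [] for tid in topic_ids}                # buffer dictionary: temporary message arrays
send_locks = {tid: threading.Lock() for tid in topic_ids}  # buffer-addition lock dictionary: the sending lock
exec_locks = {tid: threading.Lock() for tid in topic_ids}  # buffer-execution lock dictionary: the message lock

print(topic_ids)   # ['order_0', 'order_1', 'order_2']
```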
S103: acquiring a change data set, packaging the change data set into a message, and adding a sequence attribute to the message.
Specifically, interception is performed at the application layer, data changes are collected, and a change data set is obtained; the obtained change data set is then packaged into a message, where the packaging can take various forms, such as a class, an object, a struct, or an array.
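For illustration, a minimal sketch of one possible message envelope is given below; the field names are assumptions and are not prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class ChangeMessage:
    """Envelope wrapping one captured change data set (S103); field names are illustrative."""
    table: str         # source table of the change
    operation: str     # e.g. "insert" / "update" / "delete"
    rows: list         # the changed rows themselves
    sequence: int = 0  # sequence attribute, filled in afterwards

msg = ChangeMessage(table="sales_order", operation="update", rows=[{"orderNo": "A-001", "qty": 2}])
print(msg)
```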
Further, a sequence attribute is added to the message array. The sequence attribute identifies the order of messages in the message queue, so as to ensure that the order of the messages in the message queue is consistent with the order in which they were sent. The sequence value corresponding to each message array is set incrementally to ensure that it is unique; meanwhile, a maximum sequence number table is created in the service database to store the sequence value corresponding to the message. The table structure is shown in Table 1 and comprises two fields, TopicId and MaxSequence.
Field name     Type     Remarks
TopicId        Varchar  Queue number
MaxSequence    Long     Current maximum sequence number
TABLE 1
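A minimal sketch of allocating such an incremental sequence value against a table shaped like Table 1 is given below; the use of SQLite and the helper name next_sequence are assumptions for illustration only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MaxSequenceTable (TopicId VARCHAR PRIMARY KEY, MaxSequence BIGINT)")

def next_sequence(conn, topic_id):
    """Read, increment and persist the current maximum sequence number (Table 1)."""
    row = conn.execute(
        "SELECT MaxSequence FROM MaxSequenceTable WHERE TopicId = ?", (topic_id,)
    ).fetchone()
    seq = (row[0] + 1) if row else 1
    conn.execute(
        "INSERT OR REPLACE INTO MaxSequenceTable (TopicId, MaxSequence) VALUES (?, ?)",
        (topic_id, seq),
    )
    return seq

print(next_sequence(conn, "order"))   # 1
print(next_sequence(conn, "order"))   # 2
```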
For example, as shown in fig. 2, sales order data is used as the service data. Multiple topics are created according to the concurrency of the sales order data and named order_0, order_1, ..., order_n, and they are put into the order Topic list of the message queue. The ID of the corresponding buffer is determined from the ID of each topic, giving the first buffer, the second buffer, ..., the n-th buffer; these to-be-sent buffers at the sending end are used to temporarily cache the order messages generated in the order threads. The order threads comprise M threads and generate multiple messages; a sequence attribute is added to each message array, with sequence values Seq11, Seq12, ..., Seq1N, and the messages are added to their corresponding buffers for temporary caching according to a preset routing rule.
That is, an order message of the sales order data destined for the order Topic list is first placed into the corresponding buffer among the to-be-sent buffers for temporary caching.
S104: and determining the corresponding classification list based on the sequence attribute, thereby determining the corresponding buffer area, acquiring the sending lock, and adding the message into the corresponding buffer area.
Specifically, after the change data set is obtained and packaged into messages, the ID of the topic to which each message should be sent is obtained through a preset routing rule according to the sequence value of the message, and from it the ID of the buffer corresponding to that topic. The preset routing rule is Formula 1: topicId = sequence % topicNum, where topicId is the ID of the corresponding topic, sequence is the sequence value of the message, and topicNum is the number of topics corresponding to the service data.
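A minimal sketch of Formula 1 as a routing helper is shown below; the topic naming follows the assumed "service name_number" rule from above.

```python
def route(sequence, topic_num, service="order"):
    """Formula 1: topicId = sequence % topicNum; the index also selects the buffer."""
    return f"{service}_{sequence % topic_num}"

print(route(7, 3))   # order_1
print(route(9, 3))   # order_0
```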
Further, the sending lock corresponding to the topic is obtained so that the buffer cannot be modified or deleted by other threads before it is read, which prevents invalid or inconsistent data from being read. After the sending lock is obtained, whether the buffer exists is judged according to the ID of the buffer corresponding to the obtained topic.
If the buffer does not exist (because the previous buffer was deleted after its message transmission completed, leaving the topic without a corresponding buffer), a new buffer corresponding to the topic is created and the message array is added to the new buffer; if the buffer does exist according to the buffer ID, the message array is added directly to it, temporarily caching the message array in the buffer.
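A minimal sketch of this add path is given below, reusing the dictionary layout sketched earlier; the helper name add_message and the message shape are assumptions.

```python
import threading

buffers = {}                                    # buffer dictionary (may lack a deleted buffer)
send_locks = {"order_0": threading.Lock()}      # sending lock per classification list

def add_message(topic_id, message):
    """Append a message to the buffer of its routed topic under the send lock (S104)."""
    with send_locks[topic_id]:
        if topic_id not in buffers:             # buffer was deleted after the previous batch
            buffers[topic_id] = []              # create a fresh buffer for this topic
        buffers[topic_id].append(message)

add_message("order_0", {"sequence": 3, "payload": {"orderNo": "A-001"}})
print(buffers["order_0"])   # [{'sequence': 3, 'payload': {'orderNo': 'A-001'}}]
```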
S105: and adding a message lock to the buffer area, executing the message lock, transmitting the message from the buffer area to a corresponding classification list in a message queue, and releasing the transmission lock and the message lock after the message transmission is completed.
Before the message is sent, a lock is added to the buffer to ensure that only one thread executes the sending of the buffer's messages; with the buffer lock held, no new messages can be added to the buffer. The messages are then sent from the buffer to the corresponding topic in the message queue. After the sending of the buffer completes, an execution result is returned and stored in a message-sending result table in the service database, whose structure is shown in Table 2.
Field name  Type     Remarks
TopicId     Varchar  Queue number
Sequence    Long     Current sequence number
TABLE 2
Further, the buffer is deleted to realize buffer switching: if a new message is generated during the sending and needs to be sent to the topic, a new buffer corresponding to the topic is created and the newly generated message is automatically added to that new buffer. After the messages are sent, the buffer is set to the completed state; if an exception occurs, the buffer is set to the sending-failed state. The sending lock corresponding to the topic is then released, so that a thread of the next batch can acquire the sending lock and send the next batch of messages, without any additional thread management.
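A minimal sketch of one reading of this sending step follows: the buffer is detached under the send lock (so new messages go to a freshly created buffer), sent under the message lock, and the result recorded in a Table 2-like list. The producer.send_batch call and the FakeProducer class are stand-ins for a real message-queue client, not part of the method.

```python
import threading

# assumed in-memory stand-ins for the structures sketched earlier
buffers = {"order_0": [{"sequence": 1}, {"sequence": 4}]}
send_locks = {"order_0": threading.Lock()}
exec_locks = {"order_0": threading.Lock()}
send_results = []   # stands in for the message-sending result table (Table 2)

def flush(topic_id, producer):
    """Send the buffered batch of one topic (S105) and switch buffers."""
    with exec_locks[topic_id]:                 # message lock: one thread sends this buffer
        with send_locks[topic_id]:             # briefly hold the send lock while detaching
            batch = buffers.pop(topic_id, [])  # deleting the buffer switches adders to a new one
        if not batch:
            return
        try:
            producer.send_batch(topic_id, batch)   # hypothetical message-queue client call
            state = "completed"
        except Exception:
            state = "send_failed"
        for msg in batch:
            send_results.append((topic_id, msg["sequence"], state))  # one Table 2 row per message

class FakeProducer:
    def send_batch(self, topic_id, batch):
        print(f"sent {len(batch)} message(s) to {topic_id}")

flush("order_0", FakeProducer())
print(send_results)   # [('order_0', 1, 'completed'), ('order_0', 4, 'completed')]
```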
As shown in fig. 2, after the sending end sends messages to the message queue, the consuming end orders by SEQ, i.e. by the sequence value of the messages: it listens to the topics in the consumption queue and acquires a message from the specified topic according to the current offset so as to consume it. Then, according to the sequence value of the message in the current topic, Formula 2: topicId = (sequence + 1) % topicNum gives the ID of the topic holding the next message, from which the next message is acquired.
If the corresponding message can be acquired in this way, the above steps are repeated; if not, the service database can be queried according to the topic ID or the sequence value to obtain the corresponding message.
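A minimal sketch of this consumer-side ordering is given below; the in-memory queues and the consume_in_order helper stand in for polling the real message queue, and the database fallback is only indicated by a comment.

```python
def next_topic(sequence, topic_num, service="order"):
    """Formula 2: topicId = (sequence + 1) % topicNum."""
    return f"{service}_{(sequence + 1) % topic_num}"

# assumed in-memory stand-in for the per-topic queues, each already ordered by SEQ
queues = {
    "order_0": [{"sequence": 3}],
    "order_1": [{"sequence": 1}, {"sequence": 4}],
    "order_2": [{"sequence": 2}],
}

def consume_in_order(start_sequence, count, topic_num=3):
    """Walk the topics with Formula 2 so messages come out in sequence order."""
    seq, out = start_sequence, []
    for _ in range(count):
        topic = next_topic(seq, topic_num)
        if not queues[topic]:
            break                         # here the real method would query the service database
        msg = queues[topic].pop(0)
        out.append(msg)
        seq = msg["sequence"]
    return out

print(consume_in_order(start_sequence=0, count=4))   # sequences 1, 2, 3, 4 in order
```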
As shown in fig. 3, the embodiment of the present application further proposes a data synchronization device based on a message queue, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: the method for synchronizing data based on a message queue according to any one of the above embodiments.
The embodiments also provide a non-volatile computer storage medium storing computer-executable instructions, the computer-executable instructions configured to perform: the method for synchronizing data based on a message queue according to any one of the above embodiments.
All embodiments in the application are described in a progressive manner, and identical and similar parts of all embodiments are mutually referred, so that each embodiment mainly describes differences from other embodiments. In particular, for the apparatus and medium embodiments, the description is relatively simple, as it is substantially similar to the method embodiments, with reference to the section of the method embodiments being relevant.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not described in detail herein.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method for synchronizing data based on a message queue, comprising:
determining concurrency of service data according to service requirements, and determining the number of classification lists of the service data according to the concurrency;
creating all the classification lists in a message queue, adding a sending lock to each classification list, and creating a buffer area corresponding to the classification list;
acquiring a change data set, packaging the change data set into a message, and adding a sequence attribute to the message;
determining the corresponding classification list based on the sequence attribute, thereby determining the corresponding buffer area, acquiring the sending lock, and adding the message into the corresponding buffer area;
and adding a message lock to the buffer area, executing the message lock, transmitting the message from the buffer area to a corresponding classification list in a message queue, and releasing the transmission lock and the message lock after the message transmission is completed.
2. The method according to claim 1, wherein adding a transmission lock to each of the classification lists and creating a buffer corresponding to the classification list specifically includes:
adding a lock dictionary through a buffer area in the classification list, and adding a sending lock to each classification list;
and determining the ID of a corresponding buffer zone based on the ID of the classification list, and creating the corresponding buffer zone through a buffer zone dictionary of the classification list based on the ID of the buffer zone so as to temporarily store the message array.
3. The method according to claim 1, wherein the obtaining the change dataset, encapsulating the change dataset as a message, and adding a sequence attribute to the message, in particular comprises:
acquiring a change data set in a service database, and packaging the change data set into a message;
and adding a sequence attribute to the message, setting a sequence value corresponding to the message in an incremental mode to ensure that the sequence value corresponding to the message is unique, and creating a maximum sequence number table in the service database to store the sequence value corresponding to the message.
4. A method according to claim 3, wherein said determining the corresponding sorted list based on the sequence attribute, thereby determining the corresponding buffer, obtaining the sending lock, adding the message to the corresponding buffer, comprises:
determining the ID of the classification list through a routing rule according to the sequence value corresponding to the message, thereby determining the ID of the corresponding buffer zone;
and acquiring the sending lock corresponding to the classification list, judging whether the buffer area exists according to the ID of the buffer area, and adding the message into the corresponding buffer area.
5. The method of claim 4, wherein the obtaining the sending lock corresponding to the classification list, determining whether the buffer exists according to the ID of the buffer, so as to add the message to the corresponding buffer, specifically includes:
acquiring the sending lock corresponding to the classification list, and judging whether the buffer area exists or not according to the ID of the buffer area;
if the buffer area does not exist, the buffer area is newly created, and the message is added into the corresponding buffer area;
if the buffer exists, the message is directly added into the corresponding buffer.
6. A method according to claim 3, wherein the message is sent from the buffer to a corresponding topic in a message queue, and wherein after the message is sent, the method further comprises, after releasing the send lock:
after the message is sent to the message queue, acquiring the ID of the corresponding classification list according to the sequence value;
and sending the message in the classification list corresponding to the message queue to a consumer, and consuming the message by the consumer.
7. The method according to claim 1, wherein the adding a message lock to the buffer is performed, the message is sent from the buffer to a corresponding classification list in a message queue, and after the message is sent, the sending lock and the message lock are released, including:
adding a message lock to the buffer and executing the message lock to prevent new messages from being added to the buffer;
transmitting the message from the buffer area to a corresponding classification list in a message queue, and releasing the transmission lock and the message lock after the message in the buffer area is transmitted, so as to transmit the message next time;
and deleting the buffer area to realize the switching of the buffer area.
8. The method of claim 7, wherein after adding a message lock to the buffer and executing the message lock to prevent new messages from being added to the buffer, the method further comprises:
if a new message is generated, a new buffer zone corresponding to the classification list is created, and the new message is automatically added into the new buffer zone so as to realize the next transmission of the new message.
9. A message queue-based data synchronization device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: a method of message queue based data synchronisation as claimed in any one of claims 1 to 8.
10. A non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions configured to perform: a method of message queue based data synchronisation as claimed in any one of claims 1 to 8.
CN202311690899.XA 2023-12-08 2023-12-08 Data synchronization method, equipment and medium based on message queue Pending CN117688094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311690899.XA CN117688094A (en) 2023-12-08 2023-12-08 Data synchronization method, equipment and medium based on message queue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311690899.XA CN117688094A (en) 2023-12-08 2023-12-08 Data synchronization method, equipment and medium based on message queue

Publications (1)

Publication Number Publication Date
CN117688094A true CN117688094A (en) 2024-03-12

Family

ID=90127830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311690899.XA Pending CN117688094A (en) 2023-12-08 2023-12-08 Data synchronization method, equipment and medium based on message queue

Country Status (1)

Country Link
CN (1) CN117688094A (en)

Similar Documents

Publication Publication Date Title
US8335769B2 (en) Executing replication requests for objects in a distributed storage system
WO2020147392A1 (en) Method and system for data synchronization between databases
EP3564835B1 (en) Data redistribution method and apparatus, and database cluster
US9875259B2 (en) Distribution of an object in volatile memory across a multi-node cluster
CN108509462B (en) Method and device for synchronizing activity transaction table
JP2018505496A (en) Method, apparatus and system for synchronizing data
JP5686034B2 (en) Cluster system, synchronization control method, server device, and synchronization control program
CN103761162A (en) Data backup method of distributed file system
US10749955B2 (en) Online cache migration in a distributed caching system using a hybrid migration process
CN106570113B (en) Mass vector slice data cloud storage method and system
CN112579692B (en) Data synchronization method, device, system, equipment and storage medium
JP2023541298A (en) Transaction processing methods, systems, devices, equipment, and programs
CN107153680B (en) Method and system for on-line node expansion of distributed memory database
CN112328702A (en) Data synchronization method and system
CN103365740B (en) A kind of data cold standby method and device
CN107493309B (en) File writing method and device in distributed system
KR20130038517A (en) System and method for managing data using distributed containers
US20150347516A1 (en) Distributed storage device, storage node, data providing method, and medium
CN116303789A (en) Parallel synchronization method and device for multi-fragment multi-copy database and readable medium
CN117688094A (en) Data synchronization method, equipment and medium based on message queue
WO2022002044A1 (en) Method and apparatus for processing distributed database, and network device and computer-readable storage medium
CN116186082A (en) Data summarizing method based on distribution, first server and electronic equipment
CN115587141A (en) Database synchronization method and device
EP3859549B1 (en) Database migration method, apparatus, and device, and computer readable medium
CN114880717A (en) Data archiving method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination