CN111625592A - Load balancing method and device for distributed database

Load balancing method and device for distributed database

Info

Publication number
CN111625592A
Authority
CN
China
Prior art keywords
synchronized
node
main node
main
task
Prior art date
Legal status
Pending
Application number
CN201910152835.1A
Other languages
Chinese (zh)
Inventor
杨全文
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910152835.1A
Publication of CN111625592A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the application disclose a load balancing method and device for a distributed database. In one embodiment, the method comprises: in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in a master node cluster, selecting, from the master node cluster, first master nodes whose amount of tasks to be synchronized exceeds a first preset threshold to generate a backlog processing queue; selecting, from the master node cluster, second master nodes whose amount of tasks to be synchronized is below a second preset threshold to generate a balanced replacement queue; and, for a first master node in the backlog processing queue, selecting a corresponding second master node from the balanced replacement queue, exchanging a data partition of the selected second master node with a data partition of the first master node, and removing the first master node from the backlog processing queue in response to determining that its amount of tasks to be synchronized has decreased. This embodiment reduces the risk that the data stored in the master node cluster and the slave node cluster become inconsistent.

Description

Load balancing method and device for distributed database
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a load balancing method and device for a distributed database.
Background
In order to ensure high reliability of cluster data, an existing distributed database generally establishes a cross-machine-room master-slave disaster recovery mode (that is, the distributed database includes a master node cluster and a slave node cluster for disaster recovery) and performs real-time data synchronization between the master and slave clusters.
At present, data synchronization between the master cluster and the slave cluster of a distributed database usually does not take the synchronization performance of individual cluster nodes into account. If a slave node synchronizes slowly, data synchronization tasks tend to back up on the corresponding master node. When the backlog becomes severe, the data in the master cluster and the slave cluster may remain inconsistent for a long time, which undermines the disaster recovery effect of the distributed database.
Disclosure of Invention
The embodiment of the application provides a load balancing method and device for a distributed database.
In a first aspect, an embodiment of the present application provides a load balancing method for a distributed database, where the distributed database includes a master node cluster and a slave node cluster for disaster recovery. The method includes: in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, selecting, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than a first preset threshold to generate a backlog processing queue; selecting, from the master node cluster, second master nodes whose amount of tasks to be synchronized is less than a second preset threshold to generate a balanced replacement queue, where the number of first master nodes is less than or equal to the number of second master nodes; and, for a first master node in the backlog processing queue, selecting a second master node corresponding to the first master node from the balanced replacement queue, exchanging a data partition of the selected second master node with a data partition of the first master node, and removing the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
In some embodiments, the method further includes: constructing, in each master node of the master node cluster, a synchronization task queue for caching tasks to be synchronized; and, for a master node in the master node cluster, in response to detecting that the number of tasks to be synchronized in the synchronization task queue of the master node keeps increasing within a preset time period, determining that the amount of tasks to be synchronized in the master node is increasing.
In some embodiments, before selecting the first master nodes to generate the backlog processing queue, the method further includes: detecting the tasks to be synchronized of the master nodes in the master node cluster; determining the time difference between the current detection and the previous detection of the tasks to be synchronized of the master nodes in the master node cluster; and, in response to determining that the time difference is greater than a third preset threshold, determining whether the amount of tasks to be synchronized is increasing in some of the master nodes of the master node cluster.
In some embodiments, selecting the first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold to generate the backlog processing queue includes: in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, determining whether the master node cluster is divided into at least two groups; in response to determining that the master node cluster is divided into at least two groups, detecting the increase in the amount of tasks to be synchronized of all master nodes in the at least two groups; determining, from the at least two groups, first groups in which only some of the master nodes have an increasing amount of tasks to be synchronized, and generating a master node list composed of the master nodes in the first groups whose amount of tasks to be synchronized is increasing; and selecting, from the master node list, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold to generate the backlog processing queue.
In some embodiments, selecting, for a first master node in the backlog processing queue, a second master node corresponding to the first master node from the balanced replacement queue and exchanging a data partition of the selected second master node with a data partition of the first master node includes: for a first master node in the backlog processing queue, in response to determining that it is the first master node with the largest backlog of tasks to be synchronized in the backlog processing queue, selecting the second master node with the smallest backlog from the balanced replacement queue; determining the selected second master node as the second master node corresponding to the first master node, determining the data partition with the largest amount of tasks to be synchronized in the first master node, and determining the data partition with the smallest amount of tasks to be synchronized in the corresponding second master node; and exchanging the determined data partition of the first master node with the determined data partition of the second master node.
In some embodiments, the method further includes: in response to determining that the backlog processing queue is empty, reselecting, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold.
In a second aspect, an embodiment of the present application provides a load balancing apparatus for a distributed database, where the distributed database includes a master node cluster and a slave node cluster for disaster recovery. The apparatus includes: a backlog processing queue generating unit, configured to, in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, select, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than a first preset threshold to generate a backlog processing queue; a balanced replacement queue generating unit, configured to select, from the master node cluster, second master nodes whose amount of tasks to be synchronized is less than a second preset threshold to generate a balanced replacement queue, where the number of first master nodes is less than or equal to the number of second master nodes; and a first master node removing unit, configured to, for a first master node in the backlog processing queue, select a second master node corresponding to the first master node from the balanced replacement queue, exchange a data partition of the selected second master node with a data partition of the first master node, and remove the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
In some embodiments, the apparatus further includes: a synchronization task queue constructing unit, configured to construct, in each master node of the master node cluster, a synchronization task queue for caching tasks to be synchronized; and a task amount increase determining unit, configured to, for a master node in the master node cluster, determine that the amount of tasks to be synchronized in the master node is increasing in response to detecting that the number of tasks to be synchronized in the synchronization task queue of the master node keeps increasing within a preset time period.
In some embodiments, the apparatus further includes: a detecting unit, configured to detect the tasks to be synchronized of the master nodes in the master node cluster; a time difference determining unit, configured to determine the time difference between the current detection and the previous detection of the tasks to be synchronized of the master nodes in the master node cluster; and a master node determining unit, configured to determine, in response to determining that the time difference is greater than a third preset threshold, whether the amount of tasks to be synchronized is increasing in some of the master nodes of the master node cluster.
In some embodiments, the backlog processing queue generating unit is further configured to: in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, determine whether the master node cluster is divided into at least two groups; in response to determining that the master node cluster is divided into at least two groups, detect the increase in the amount of tasks to be synchronized of all master nodes in the at least two groups; determine, from the at least two groups, first groups in which only some of the master nodes have an increasing amount of tasks to be synchronized, and generate a master node list composed of the master nodes in the first groups whose amount of tasks to be synchronized is increasing; and select, from the master node list, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold to generate the backlog processing queue.
In some embodiments, the first master node removing unit is further configured to: for a first master node in the backlog processing queue, in response to determining that it is the first master node with the largest backlog of tasks to be synchronized in the backlog processing queue, select the second master node with the smallest backlog from the balanced replacement queue; determine the selected second master node as the second master node corresponding to the first master node, determine the data partition with the largest amount of tasks to be synchronized in the first master node, and determine the data partition with the smallest amount of tasks to be synchronized in the corresponding second master node; and exchange the determined data partition of the first master node with the determined data partition of the second master node.
In some embodiments, the apparatus further includes: a reselecting unit, configured to reselect, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold in response to determining that the backlog processing queue is empty.
According to the load balancing method and device for a distributed database provided by the embodiments of the application, in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than a first preset threshold are selected from the master node cluster to generate a backlog processing queue; second master nodes whose amount of tasks to be synchronized is less than a second preset threshold are then selected from the master node cluster to generate a balanced replacement queue; finally, for each first master node in the backlog processing queue, a corresponding second master node is selected from the balanced replacement queue, a data partition of the selected second master node is exchanged with a data partition of the first master node, and the first master node is removed from the backlog processing queue once its amount of tasks to be synchronized is determined to have decreased. In this way, the tasks to be synchronized are balanced across the master node cluster, the risk of inconsistency between the data stored in the master node cluster and the slave node cluster of the distributed database is reduced, and the safety of data storage in the distributed database is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for load balancing of distributed databases according to the present application;
FIG. 3 is a flow diagram of another embodiment of a method of load balancing of distributed databases according to the present application;
FIG. 4 is a schematic diagram of an embodiment of a load balancing apparatus for a distributed database according to the present application;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the load balancing method of the distributed database or the load balancing apparatus of the distributed database of the present application may be applied.
As shown in fig. 1, system architecture 100 may include a cluster of master nodes 101 for a distributed database, a network 102, and a cluster of slave nodes 103 for disaster recovery in the distributed database. Network 102 is used to provide a medium for communication links between master node cluster 101 and slave node cluster 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The master node cluster 101 of the distributed database may store pre-written data and the like. The cluster of master nodes 101 of the distributed database may include n master nodes, where n may be a positive integer. The master nodes in the master node cluster 101 may be hardware or software. When the master node is a hardware device, it may be an electronic device for storing data, such as a server or a terminal device. When the master node is software, it may be installed in an electronic device such as a server or a terminal device. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
In a distributed database, a master node in a master node cluster 101 may interact with slave nodes in a slave node cluster 103 over a network 102 to receive or transmit data or the like. The slave nodes in the slave node cluster 103 may provide disaster recovery services for the distributed database, for example, backup the data stored in the master node. The cluster of slave nodes 103 of the distributed database may include m slave nodes, where m may be a positive integer.
It should be noted that the load balancing method for the distributed database provided in the embodiment of the present application is generally executed by the master node in the master node cluster 101, and accordingly, the load balancing apparatus for the distributed database is generally disposed in the master node cluster.
It should be noted that the slave nodes in the slave node cluster 103 may be hardware or software. When the slave nodes are hardware, the slave node cluster 103 may be implemented as a distributed server cluster composed of multiple servers. When the slave nodes are software, the slave node cluster 103 may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services). This is not specifically limited herein.
It should be understood that the number of master node clusters, networks, and slave node clusters in fig. 1 is merely illustrative. There may be any number of master node clusters, networks, and slave node clusters, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for load balancing of distributed databases in accordance with the present application is shown. The load balancing method of the distributed database comprises the following steps:
step 201, in response to detecting that the increase of the task amount to be synchronized exists in a part of the master nodes in the master node cluster, selecting a first master node from the master node cluster, where the task amount to be synchronized is greater than a first preset threshold, to generate a backlog processing queue.
In this embodiment, the distributed database may include a master node cluster and a slave node cluster (e.g., the master node cluster and the slave node cluster shown in fig. 1). Here, the master node cluster may include at least two master nodes, and the slave node cluster may include at least two slave nodes, where each slave node in the slave node cluster may synchronously store, over a wired or wireless connection, the tasks to be synchronized of the master nodes in the master node cluster, so as to achieve the disaster recovery effect of the distributed database. When detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, an executing entity of the load balancing method for the distributed database (for example, a master node in the master node cluster shown in fig. 1) may select, from the master nodes whose amount of tasks to be synchronized is increasing, the master nodes whose amount of tasks to be synchronized is greater than a first preset threshold. The executing entity may determine the selected master nodes as first master nodes and generate a backlog processing queue from these first master nodes, as sketched below. It can be understood that, if there are unfinished tasks to be synchronized in a master node and their number keeps growing, it may be determined that the amount of tasks to be synchronized in that master node is increasing.
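Purely as an illustration, the following minimal Python sketch shows one way the backlog processing queue described above could be assembled. The `MasterNode` class, its field names, and the early return when no node or every node is backlogged are assumptions introduced for the example; the embodiment does not prescribe a concrete data model.

```python
from collections import deque
from dataclasses import dataclass
from typing import Deque, List


@dataclass
class MasterNode:
    name: str
    pending_tasks: int        # current number of tasks waiting to be synchronized
    backlog_increasing: bool  # True if that number kept growing recently


def build_backlog_queue(cluster: List[MasterNode],
                        first_threshold: int) -> Deque[MasterNode]:
    """Select the master nodes whose amount of tasks to be synchronized is still
    growing and already exceeds the first preset threshold, and queue them up
    as the backlog processing queue."""
    growing = [n for n in cluster if n.backlog_increasing]
    # If no node, or every node, is backlogged, in-cluster rebalancing cannot
    # help, so no backlog processing queue is generated (see the optional
    # implementation in the next paragraph).
    if not growing or len(growing) == len(cluster):
        return deque()
    return deque(n for n in growing if n.pending_tasks > first_threshold)
```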
In some optional implementations of this embodiment, if the executing entity detects that the amount of tasks to be synchronized is increasing in all the master nodes in the master node cluster, there is no need to generate a backlog processing queue. It can be understood that, if the backlog is growing on every master node, the synchronization between the master node cluster and the slave node cluster has a problem that cannot be solved by the load balancing method of the present application, and therefore the scheme provided by the present application need not be executed. Likewise, if the executing entity detects that no master node in the master node cluster has an increasing amount of tasks to be synchronized, the scheme provided in the present application need not be executed either.
In some optional implementations of this embodiment, each master node in the master node cluster may construct a synchronization task queue for caching tasks to be synchronized. Then, for any master node in the master node cluster, the executing entity may detect the tasks to be synchronized in the synchronization task queue of that master node. If it is determined that the amount of tasks to be synchronized in the master node keeps increasing within a preset time period (for example, within the past hour), it may be determined that the amount of tasks to be synchronized in the master node is increasing; otherwise, it may be determined that it is not.
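A small sketch of how the "continuously increased within a preset time period" condition might be evaluated from queue-length samples. Reading "continuously increased" as "never decreases and ends higher than it started" is an assumption made for the example, as are the sampling interval and window length.

```python
from typing import Sequence


def backlog_is_increasing(samples: Sequence[int]) -> bool:
    """Given queue-length samples collected over the preset time period
    (e.g. once per minute for the past hour), decide whether the amount of
    tasks to be synchronized kept increasing in that window: no sample drops
    below its predecessor and the last sample exceeds the first."""
    if len(samples) < 2:
        return False
    non_decreasing = all(b >= a for a, b in zip(samples, samples[1:]))
    return non_decreasing and samples[-1] > samples[0]


# Lengths of a master node's synchronization task queue, sampled once per minute.
print(backlog_is_increasing([120, 120, 135, 150, 180]))  # True
print(backlog_is_increasing([120, 110, 135, 150, 180]))  # False: the queue shrank at one point
```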
In some optional implementations of this embodiment, before performing step 201, the executing entity may further perform the following steps: detecting the tasks to be synchronized of the master nodes in the master node cluster; determining the time difference between the current detection and the previous detection of the tasks to be synchronized of the master nodes in the master node cluster; and, in response to determining that the time difference is greater than a third preset threshold, determining whether the amount of tasks to be synchronized is increasing in some of the master nodes of the master node cluster. It can be understood that, if the time difference is greater than the third preset threshold, the distributed database has not executed the load balancing scheme of the present application for a relatively long time, and executing it can resolve the data inconsistency caused by slow master-slave synchronization; if the time difference is less than or equal to the third preset threshold, little time has passed since the load balancing scheme was last executed, and in order to avoid performance jitter caused by frequent adjustment, the executing entity may suspend executing the load balancing scheme of the present application.
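A sketch of the time-difference check described above. The concrete threshold value and the module-level `_last_detection` variable are assumptions introduced for illustration only.

```python
import time
from typing import Optional

THIRD_THRESHOLD_SECONDS = 600.0   # assumed value; the text only calls it "a third preset threshold"
_last_detection: Optional[float] = None


def should_run_balancing(now: Optional[float] = None) -> bool:
    """Compare the time of this detection of the tasks to be synchronized with
    the time of the previous detection, and only allow a new balancing round
    when the difference exceeds the third preset threshold, so that frequent
    adjustment does not cause performance jitter."""
    global _last_detection
    current = time.monotonic() if now is None else now
    if _last_detection is None or current - _last_detection > THIRD_THRESHOLD_SECONDS:
        _last_detection = current
        return True
    return False
```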
Step 202, selecting a second main node from the main node cluster, wherein the task quantity to be synchronized is smaller than a second preset threshold value, so as to generate a balanced replacement queue.
In this embodiment, the executing entity (for example, a master node in the master node cluster shown in fig. 1) may detect the amount of tasks to be synchronized of each master node in the master node cluster, and determine a master node as a second master node when its amount of tasks to be synchronized is smaller than the second preset threshold. A balanced replacement queue is then generated from the determined second master nodes. The second master nodes and the first master nodes are different master nodes in the master node cluster, and the number of second master nodes in the balanced replacement queue may be equal to, or greater than, the number of first master nodes in the backlog processing queue. Optionally, the executing entity may determine the second master nodes from the master nodes of the cluster other than the first master nodes, so as to ensure that the second master nodes are different from the first master nodes. It can be understood that the load on each second master node in the balanced replacement queue is typically small, so each second master node can take on more tasks to be synchronized.
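A minimal sketch of step 202, using the same kind of hypothetical `MasterNode` record as before. Sorting candidates by load and excluding nodes already chosen as first master nodes are choices made for the example; the embodiment only requires that the second master nodes differ from the first master nodes and fall below the second preset threshold.

```python
from collections import deque
from dataclasses import dataclass
from typing import Deque, Iterable, Set


@dataclass
class MasterNode:
    name: str
    pending_tasks: int


def build_replacement_queue(cluster: Iterable[MasterNode],
                            backlog_names: Set[str],
                            second_threshold: int) -> Deque[MasterNode]:
    """Select lightly loaded master nodes (amount of tasks to be synchronized
    below the second preset threshold) that are not already first master nodes,
    and queue them up as the balanced replacement queue."""
    candidates = [n for n in cluster
                  if n.pending_tasks < second_threshold and n.name not in backlog_names]
    # Lightest-loaded nodes first, so they are matched before busier candidates.
    candidates.sort(key=lambda n: n.pending_tasks)
    return deque(candidates)
```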
In step 203, for any first master node in the backlog processing queue, load balancing of the task to be synchronized in the first master node may be completed through step 2031 and step 2032.
Step 2031, selecting a second master node corresponding to the first master node from the balanced replacement queue, and exchanging the data partition of the selected second master node with the data partition of the first master node.
In this embodiment, for any first master node in the backlog processing queue, the executing entity may select a second master node corresponding to the first master node from the balanced replacement queue. As an example, the second master node corresponding to the first master node may be any second master node in the balanced replacement queue, or any second master node in the balanced replacement queue whose amount of tasks to be synchronized is smaller than a preset value. Then, the executing entity may exchange a data partition of the selected second master node with a data partition of the first master node, so that the selected second master node takes over a heavily loaded partition of the first master node while the first master node takes over a lightly loaded partition of the selected second master node. This speeds up the processing of the tasks to be synchronized in the first master node, allows the master and slave clusters to synchronize quickly, and improves the load balancing effect across the master nodes.
In some optional implementations of this embodiment, for each first master node in the backlog processing queue, the executing entity may detect the amount of tasks to be synchronized in each first master node and determine the first master node with the largest amount of tasks to be synchronized in the backlog processing queue; suppose, for example, that this is first master node A. For each second master node in the balanced replacement queue, the executing entity may detect the amount of tasks to be synchronized in each second master node and select the second master node with the smallest amount of tasks to be synchronized in the balanced replacement queue; suppose this is second master node B. The executing entity may then determine the selected second master node as the second master node corresponding to the first master node, that is, determine second master node B as the second master node corresponding to first master node A. Next, the executing entity may determine the data partition with the largest amount of tasks to be synchronized in the first master node and the data partition with the smallest amount of tasks to be synchronized in the corresponding second master node, that is, the most heavily loaded partition of first master node A and the most lightly loaded partition of second master node B. Finally, the executing entity may exchange the determined data partition of the first master node with the determined data partition of the second master node, that is, exchange the determined partition of first master node A with the determined partition of second master node B. This further improves the load balancing performance of the distributed database.
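The pairing and exchange just described, sketched in Python. `swap_partitions` merely mutates in-memory dictionaries; in a real distributed database the exchange would be a partition migration, and all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class MasterNode:
    name: str
    partitions: Dict[str, int]  # partition id -> number of tasks waiting to be synchronized


def pick_partitions_to_swap(first: MasterNode, second: MasterNode) -> Tuple[str, str]:
    """For the most backlogged first master node and the least loaded second
    master node, choose the first node's heaviest partition and the second
    node's lightest partition as the pair to exchange."""
    heaviest = max(first.partitions, key=first.partitions.get)
    lightest = min(second.partitions, key=second.partitions.get)
    return heaviest, lightest


def swap_partitions(first: MasterNode, second: MasterNode) -> None:
    """Exchange the chosen partitions between the two nodes; here this only
    updates dictionaries, whereas a real system would migrate the data."""
    heavy, light = pick_partitions_to_swap(first, second)
    first.partitions[light] = second.partitions.pop(light)
    second.partitions[heavy] = first.partitions.pop(heavy)


node_a = MasterNode("A", {"p1": 900, "p2": 40})   # first master node with the largest backlog
node_b = MasterNode("B", {"p7": 5, "p8": 30})     # second master node with the smallest backlog
swap_partitions(node_a, node_b)
print(node_a.partitions, node_b.partitions)       # A keeps p2 and takes p7; B keeps p8 and takes p1
```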
Step 2032, in response to determining that the amount of tasks to be synchronized in the first master node is reduced, removing the first master node from the backlog processing queue.
In this embodiment, after exchanging the selected data partition of the second master node with the data partition of the first master node, the executing entity may continue to monitor the amount of tasks to be synchronized in the first master node, and remove the first master node from the backlog processing queue when it determines that this amount has decreased. It can be understood that, if the amount of tasks to be synchronized in the first master node is detected to decrease, the amount of tasks to be synchronized in the first master node is no longer increasing, that is, the first master node no longer has a backlog, and at this time it may be removed from the backlog processing queue.
In some optional implementations of this embodiment, the executing entity may continuously monitor the number of first master nodes in the backlog processing queue; when the backlog processing queue becomes empty, it may end the current load balancing operation and start the next round, reselecting first master nodes from the master node cluster. Optionally, if a first master node is still in the backlog processing queue after data partition exchanges have been performed a preset number of times, the first master node may be removed from the backlog processing queue directly, forcibly ending the current load balancing operation for it.
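A sketch of how one balancing round could be driven to completion, including the forced removal after a preset number of exchanges mentioned above. The callbacks `exchange` and `backlog_reduced`, and the value of `MAX_EXCHANGES_PER_NODE`, are placeholders for whatever mechanisms a deployment actually provides.

```python
from collections import deque
from typing import Callable, Deque

MAX_EXCHANGES_PER_NODE = 3  # assumed value for the "preset number of times"


def run_balancing_round(backlog: Deque[str],
                        replacement: Deque[str],
                        exchange: Callable[[str, str], None],
                        backlog_reduced: Callable[[str], bool]) -> None:
    """Pair nodes from the two queues and exchange partitions until the backlog
    processing queue is empty (or no replacement candidates remain). A first
    master node leaves the queue once its backlog is observed to fall, and is
    removed anyway after MAX_EXCHANGES_PER_NODE attempts, forcibly ending its
    part of the current balancing round."""
    attempts = {node: 0 for node in backlog}
    while backlog and replacement:
        first = backlog.popleft()
        second = replacement.popleft()   # each lightly loaded node is used at most once here
        exchange(first, second)
        attempts[first] += 1
        if backlog_reduced(first) or attempts[first] >= MAX_EXCHANGES_PER_NODE:
            continue                     # this first master node is done and stays out of the queue
        backlog.append(first)            # still backlogged: pair it with another node later


# Toy usage: node "A" recovers after one exchange, node "B" hits the forced cut-off.
recovered = {"A": True, "B": False}
run_balancing_round(deque(["A", "B"]),
                    deque(["x", "y", "z", "w"]),
                    exchange=lambda f, s: print(f"swap partitions between {f} and {s}"),
                    backlog_reduced=lambda f: recovered[f])
```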
According to the load balancing method provided by the above embodiment of the present application, in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than a first preset threshold are selected from the master node cluster to generate a backlog processing queue; second master nodes whose amount of tasks to be synchronized is less than a second preset threshold are then selected from the master node cluster to generate a balanced replacement queue; finally, for each first master node in the backlog processing queue, a corresponding second master node is selected from the balanced replacement queue, a data partition of the selected second master node is exchanged with a data partition of the first master node, and the first master node is removed from the backlog processing queue once its amount of tasks to be synchronized is determined to have decreased. In this way, the tasks to be synchronized are balanced across the master node cluster, the risk of inconsistency between the data stored in the master node cluster and the slave node cluster of the distributed database is reduced, and the safety of data storage in the distributed database is improved.
Continuing next with reference to FIG. 3, a flow 300 of another embodiment of a method for load balancing of distributed databases is illustrated. The process 300 of the load balancing method for the distributed database includes the following steps:
step 301, in response to detecting that there is an increase in the amount of tasks to be synchronized in some master nodes in the master node cluster, determining whether the master node cluster is divided into at least two groups.
In this embodiment, the cluster of master nodes of the distributed database may be divided into different groups according to function, etc. When detecting that there is an increase in the amount of tasks to be synchronized in a part of the master nodes in the master node cluster, an executing subject of the load balancing method for the distributed database (for example, the master nodes in the master node cluster shown in fig. 1) may continue to determine whether the master node cluster is divided into at least two groups. Here, each packet may include at least one master node.
Step 302, in response to determining that the master node cluster is divided into at least two groups, detecting an increase in the number of tasks to be synchronized of all master nodes in the at least two groups.
In this embodiment, after determining that the master node cluster is divided into at least two groups, the executing entity may continue to monitor the increase of the number of tasks to be synchronized of all master nodes included in each of the at least two groups. Here, what the execution subject needs to detect is the number of tasks to be synchronized in each master node, and whether the tasks to be synchronized in each master node increase and the increase speed.
Step 303, determining, from the at least two groups, first groups in which some of the master nodes have an increasing amount of tasks to be synchronized, and generating a master node list.
In this embodiment, based on the increase in the amount of tasks to be synchronized of all master nodes in the at least two groups detected in step 302, the executing entity may determine the first groups. A first group is a group in which some, but not all, of the master nodes have an increasing amount of tasks to be synchronized. It can be understood that if the amount of tasks to be synchronized is increasing on all the master nodes of a group, the group is not a first group; likewise, if it is increasing on none of the master nodes of a group, the group is not a first group either. The executing entity may then compose the master nodes of the first groups whose amount of tasks to be synchronized is increasing into a master node list.
Step 304, selecting a first master node from the master node list, wherein the task volume to be synchronized is greater than a first preset threshold value, so as to generate a backlog processing queue.
In this embodiment, based on the master node list generated in step 303, the executing entity may select the master nodes in the master node list whose amount of tasks to be synchronized is greater than the first preset threshold and determine the selected master nodes as first master nodes. Finally, the selected first master nodes are aggregated to generate a backlog processing queue, as sketched below.
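As a rough sketch, the group-aware selection of steps 301-304 might look like the following; the dictionary-based node records and group names are invented for the example.

```python
from collections import deque
from typing import Deque, Dict, List


def build_backlog_queue_by_group(groups: Dict[str, List[dict]],
                                 first_threshold: int) -> Deque[str]:
    """Group-aware backlog queue generation (steps 301-304). Each node is a
    small record such as {"name": "n1", "pending": 500, "increasing": True}.
    Only groups in which some, but not all, nodes show an increasing backlog
    are kept as "first groups"; their increasing nodes form the master node
    list, which is then filtered by the first preset threshold."""
    master_node_list = []
    for nodes in groups.values():
        increasing = [n for n in nodes if n["increasing"]]
        if increasing and len(increasing) < len(nodes):   # a "first group"
            master_node_list.extend(increasing)
    return deque(n["name"] for n in master_node_list
                 if n["pending"] > first_threshold)


groups = {
    "g1": [{"name": "n1", "pending": 800, "increasing": True},
           {"name": "n2", "pending": 20,  "increasing": False}],
    "g2": [{"name": "n3", "pending": 900, "increasing": True},
           {"name": "n4", "pending": 700, "increasing": True}],  # whole group backlogged: skipped
}
print(build_backlog_queue_by_group(groups, first_threshold=100))  # deque(['n1'])
```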
And 305, selecting a second main node with the task quantity to be synchronized smaller than a second preset threshold value from the main node cluster to generate a balanced replacement queue.
In this embodiment, the executing entity may detect the amount of tasks to be synchronized of each master node in the master node cluster, and determine a master node as a second master node when its amount of tasks to be synchronized is smaller than the second preset threshold. A balanced replacement queue is then generated from the determined second master nodes. The second master nodes and the first master nodes are different master nodes in the master node cluster, and the number of second master nodes in the balanced replacement queue may be equal to, or greater than, the number of first master nodes in the backlog processing queue.
Step 306, for any first master node in the backlog processing queue, load balancing of the task to be synchronized in the first master node can be completed through step 3061 and step 3062.
Step 3061, select the second master node corresponding to the first master node from the equalization replacement queue, and exchange the data partition of the selected second master node with the data partition of the first master node.
In this embodiment, for any first master node in the backlog processing queue, the executing entity may select a second master node corresponding to the first master node from the balanced replacement queue. As an example, the second master node corresponding to the first master node may be any second master node in the balanced replacement queue, or any second master node in the balanced replacement queue whose amount of tasks to be synchronized is smaller than a preset value. Then, the executing entity may exchange a data partition of the selected second master node with a data partition of the first master node, so that the selected second master node takes over a heavily loaded partition of the first master node while the first master node takes over a lightly loaded partition of the selected second master node. This speeds up the processing of the tasks to be synchronized in the first master node, allows the master and slave clusters to synchronize quickly, and improves the load balancing effect across the master nodes.
Step 3062, in response to determining that the amount of tasks to be synchronized in the first master node has decreased, removing the first master node from the backlog processing queue.
In this embodiment, after exchanging the selected data partition of the second master node with the data partition of the first master node, the executing entity may continue to monitor the amount of tasks to be synchronized in the first master node, and remove the first master node from the backlog processing queue when it determines that this amount has decreased. It can be understood that, if the amount of tasks to be synchronized in the first master node is detected to decrease, the first master node no longer has a backlog, and at this time it may be removed from the backlog processing queue.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the load balancing method for a distributed database in this embodiment highlights the step of generating the backlog processing queue. Therefore, when the master node cluster is grouped, the scheme described in this embodiment can filter out, group by group, the groups in which the amount of tasks to be synchronized is increasing, thereby improving the efficiency of generating the backlog processing queue and further improving the load balancing effect.
With further reference to fig. 4, as an implementation of the methods shown in the above diagrams, the present application provides an embodiment of a load balancing apparatus for a distributed database, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 4, the load balancing apparatus 400 for a distributed database of this embodiment includes: a backlog processing queue generating unit 401, a balanced replacement queue generating unit 402, and a first master node removing unit 403. The backlog processing queue generating unit 401 is configured to, in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, select, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than a first preset threshold to generate a backlog processing queue. The balanced replacement queue generating unit 402 is configured to select, from the master node cluster, second master nodes whose amount of tasks to be synchronized is less than a second preset threshold to generate a balanced replacement queue, where the number of first master nodes is less than or equal to the number of second master nodes. The first master node removing unit 403 is configured to, for a first master node in the backlog processing queue, select a second master node corresponding to the first master node from the balanced replacement queue, exchange a data partition of the selected second master node with a data partition of the first master node, and remove the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
In some optional implementations of this embodiment, the apparatus 400 further includes: a synchronization task queue constructing unit, configured to construct, in each master node of the master node cluster, a synchronization task queue for caching tasks to be synchronized; and a task amount increase determining unit, configured to, for a master node in the master node cluster, determine that the amount of tasks to be synchronized in the master node is increasing in response to detecting that the number of tasks to be synchronized in the synchronization task queue of the master node keeps increasing within a preset time period.
In some optional implementations of this embodiment, the apparatus 400 further includes: a detecting unit, configured to detect the tasks to be synchronized of the master nodes in the master node cluster; a time difference determining unit, configured to determine the time difference between the current detection and the previous detection of the tasks to be synchronized of the master nodes in the master node cluster; and a master node determining unit, configured to determine, in response to determining that the time difference is greater than a third preset threshold, whether the amount of tasks to be synchronized is increasing in some of the master nodes of the master node cluster.
In some optional implementations of this embodiment, the backlog processing queue generating unit 401 is further configured to: in response to detecting that the amount of tasks to be synchronized is increasing in some of the master nodes in the master node cluster, determine whether the master node cluster is divided into at least two groups; in response to determining that the master node cluster is divided into at least two groups, detect the increase in the amount of tasks to be synchronized of all master nodes in the at least two groups; determine, from the at least two groups, first groups in which only some of the master nodes have an increasing amount of tasks to be synchronized, and generate a master node list composed of the master nodes in the first groups whose amount of tasks to be synchronized is increasing; and select, from the master node list, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold to generate the backlog processing queue.
In some optional implementations of this embodiment, the first master node removing unit 403 is further configured to: for a first master node in the backlog processing queue, in response to determining that it is the first master node with the largest backlog of tasks to be synchronized in the backlog processing queue, select the second master node with the smallest backlog from the balanced replacement queue; determine the selected second master node as the second master node corresponding to the first master node, determine the data partition with the largest amount of tasks to be synchronized in the first master node, and determine the data partition with the smallest amount of tasks to be synchronized in the corresponding second master node; and exchange the determined data partition of the first master node with the determined data partition of the second master node.
In some optional implementations of this embodiment, the apparatus 400 further includes: a reselecting unit, configured to reselect, from the master node cluster, first master nodes whose amount of tasks to be synchronized is greater than the first preset threshold in response to determining that the backlog processing queue is empty.
The units recited in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2 and 3. Thus, the operations and features described above for the method are equally applicable to the apparatus 400 and the units included therein, and are not described in detail here.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use in implementing an electronic device (e.g., the master node shown in FIG. 1) of an embodiment of the present application. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 501. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a backlog processing queue generating unit, a balanced replacement queue generating unit, and a first master node removing unit. For example, the backlog processing queue generating unit may also be described as "a unit that selects a first master node from the master node cluster, the number of tasks to be synchronized of which is greater than a first preset threshold, to generate a backlog processing queue in response to detecting that there is an increase in the number of tasks to be synchronized in a part of the master nodes in the master node cluster".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, select, from the master node cluster, a first master node whose amount of tasks to be synchronized is greater than a first preset threshold, to generate a backlog processing queue; select, from the master node cluster, a second master node whose amount of tasks to be synchronized is smaller than a second preset threshold, to generate a balanced replacement queue, wherein the number of first master nodes is smaller than or equal to the number of second master nodes; and, for the first master node in the backlog processing queue, select a second master node corresponding to the first master node from the balanced replacement queue, exchange a data partition of the selected second master node with a data partition of the first master node, and remove the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
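For orientation only, a minimal Python sketch of this balancing flow follows. It is an assumption-laden illustration, not the disclosed implementation: the MasterNode/Partition shapes, the pending_sync_tasks attribute, and the simple one-partition swap are all hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical data shapes used only for this sketch.
    @dataclass
    class Partition:
        name: str
        pending_sync_tasks: int = 0

    @dataclass
    class MasterNode:
        name: str
        partitions: List[Partition] = field(default_factory=list)

        @property
        def pending_sync_tasks(self) -> int:
            # Tasks still waiting to be synchronized to the slave node cluster.
            return sum(p.pending_sync_tasks for p in self.partitions)

    def swap_one_partition(first_node: MasterNode, second_node: MasterNode) -> None:
        # Placeholder exchange of a single data partition between the two nodes;
        # claim 5 below refines which partitions are chosen.
        if not first_node.partitions or not second_node.partitions:
            return
        p1, p2 = first_node.partitions.pop(), second_node.partitions.pop()
        first_node.partitions.append(p2)
        second_node.partitions.append(p1)

    def balance(cluster: List[MasterNode], first_threshold: int, second_threshold: int) -> None:
        # Backlog processing queue: first master nodes above the first threshold.
        backlog_queue = [n for n in cluster if n.pending_sync_tasks > first_threshold]
        # Balanced replacement queue: second master nodes below the second threshold.
        replacement_queue = [n for n in cluster if n.pending_sync_tasks < second_threshold]

        for first_node in list(backlog_queue):
            if not replacement_queue:
                break
            before = first_node.pending_sync_tasks
            # One simple pairing choice: the least loaded second master node.
            second_node = min(replacement_queue, key=lambda n: n.pending_sync_tasks)
            swap_one_partition(first_node, second_node)
            # Remove the first master node from the backlog processing queue once
            # its amount of tasks to be synchronized has decreased.
            if first_node.pending_sync_tasks < before:
                backlog_queue.remove(first_node)

Claims 2 to 5 below refine when this check runs, how the increase is detected, and how the node pairing and partition choice are made; the sketches accompanying those claims illustrate one possible reading of each refinement.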
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention disclosed herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but are not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A load balancing method for a distributed database, wherein the distributed database comprises a master node cluster and a slave node cluster for disaster recovery, and the method comprises:
in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, selecting, from the master node cluster, a first master node whose amount of tasks to be synchronized is greater than a first preset threshold, to generate a backlog processing queue;
selecting, from the master node cluster, a second master node whose amount of tasks to be synchronized is smaller than a second preset threshold, to generate a balanced replacement queue, wherein the number of first master nodes is smaller than or equal to the number of second master nodes;
and for the first master node in the backlog processing queue, selecting a second master node corresponding to the first master node from the balanced replacement queue, exchanging a data partition of the selected second master node with a data partition of the first master node, and removing the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
2. The method of claim 1, wherein the method further comprises:
constructing a synchronization task queue in a master node of the master node cluster, wherein the synchronization task queue is used for caching tasks to be synchronized;
and for a master node in the master node cluster, in response to detecting that the number of tasks to be synchronized in the synchronization task queue of the master node increases continuously within a preset time period, determining that the amount of tasks to be synchronized in the master node has increased.
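One possible reading of this detection step is sketched below, reusing the hypothetical MasterNode shape from the earlier sketch; the window length, sample count, and the interpretation of "continuously increased" are assumptions made for illustration.

    import time

    def backlog_is_increasing(master_node, window_seconds: float = 60.0, samples: int = 6) -> bool:
        # Sample the length of the node's synchronization task queue over a
        # preset time period and report whether it kept growing.
        readings = []
        for _ in range(samples):
            readings.append(master_node.pending_sync_tasks)
            time.sleep(window_seconds / samples)
        # "Continuously increased" is read here as non-decreasing across the
        # window and strictly higher at the end than at the start.
        return all(b >= a for a, b in zip(readings, readings[1:])) and readings[-1] > readings[0]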
3. The method of claim 1, wherein, before the selecting, in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, a first master node whose amount of tasks to be synchronized is greater than a first preset threshold from the master node cluster to generate a backlog processing queue, the method further comprises:
detecting the tasks to be synchronized of the master nodes in the master node cluster;
determining a time difference between the time of the current detection of the tasks to be synchronized of the master nodes in the master node cluster and the time of the previous detection of the tasks to be synchronized of the master nodes in the master node cluster;
and in response to determining that the time difference is greater than a third preset threshold, determining whether the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster.
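A sketch of this gating step follows; the threshold value, the use of a monotonic clock, and the behaviour on the very first detection are assumptions made for illustration only.

    import time

    _last_detection_time = None

    def should_check_for_backlog(third_threshold_seconds: float = 300.0) -> bool:
        # Only evaluate whether some master nodes have a growing backlog when
        # the time since the previous detection exceeds the third preset threshold.
        global _last_detection_time
        now = time.monotonic()
        if _last_detection_time is None:
            _last_detection_time = now
            return True  # assumption: always evaluate on the first detection
        elapsed = now - _last_detection_time
        _last_detection_time = now
        return elapsed > third_threshold_seconds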
4. The method of claim 1, wherein the selecting, in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, a first master node whose amount of tasks to be synchronized is greater than a first preset threshold from the master node cluster to generate a backlog processing queue comprises:
in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, determining whether the master node cluster is divided into at least two groups;
in response to determining that the master node cluster is divided into at least two groups, detecting the increase of the amount of tasks to be synchronized of all master nodes in the at least two groups;
determining, from the at least two groups, a first group containing master nodes whose amount of tasks to be synchronized has increased, and generating a master node list, wherein the master node list is formed by the master nodes in the first group whose amount of tasks to be synchronized has increased;
and selecting, from the master node list, a first master node whose amount of tasks to be synchronized is greater than the first preset threshold, to generate the backlog processing queue.
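Sketched below is one way this grouped selection could look; the group representation (a list of node lists) and the reuse of the backlog_is_increasing predicate from the claim 2 sketch are assumptions, not the disclosed implementation.

    from typing import Callable, List

    def build_backlog_queue_from_groups(groups: List[List],
                                        first_threshold: int,
                                        is_increasing: Callable) -> List:
        # Walk the groups, find a first group containing master nodes whose
        # amount of tasks to be synchronized has increased, build the master
        # node list from those nodes, then keep only nodes above the first
        # preset threshold as the backlog processing queue.
        for group in groups:
            node_list = [n for n in group if is_increasing(n)]
            if node_list:
                return [n for n in node_list if n.pending_sync_tasks > first_threshold]
        return []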
5. The method of claim 1, wherein the selecting, for a first master node in the backlog processing queue, a second master node corresponding to the first master node from the balanced replacement queue, and exchanging a data partition of the selected second master node with a data partition of the first master node comprises:
for the first master node in the backlog processing queue, in response to determining that the first master node is the first master node with the largest backlog of tasks to be synchronized in the backlog processing queue, selecting, from the balanced replacement queue, the second master node with the smallest backlog of tasks; determining the selected second master node as the second master node corresponding to the first master node; determining the data partition with the largest amount of tasks to be synchronized in the first master node, and determining the data partition with the smallest amount of tasks to be synchronized in the second master node corresponding to the first master node; and exchanging the determined data partition of the first master node with the determined data partition of the second master node.
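The pairing rule of this claim can be sketched as below, again reusing the hypothetical MasterNode/Partition shapes from the earlier sketch and assuming both queues and both partition lists are non-empty.

    def pair_and_swap(backlog_queue, replacement_queue):
        # Most backlogged first master node meets least backlogged second master node.
        first_node = max(backlog_queue, key=lambda n: n.pending_sync_tasks)
        second_node = min(replacement_queue, key=lambda n: n.pending_sync_tasks)

        # Busiest data partition of the first node, least busy partition of the second.
        busiest = max(first_node.partitions, key=lambda p: p.pending_sync_tasks)
        idlest = min(second_node.partitions, key=lambda p: p.pending_sync_tasks)

        # Exchange the two determined data partitions between the master nodes.
        first_node.partitions.remove(busiest)
        second_node.partitions.remove(idlest)
        first_node.partitions.append(idlest)
        second_node.partitions.append(busiest)
        return first_node, second_node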
6. The method according to any one of claims 1-5, wherein the method further comprises:
in response to determining that the backlog processing queue is empty, reselecting, from the master node cluster, a first master node whose amount of tasks to be synchronized is greater than the first preset threshold.
7. A load balancing apparatus for a distributed database, wherein the distributed database comprises a master node cluster and a slave node cluster for disaster recovery, and the apparatus comprises:
a backlog processing queue generating unit, configured to, in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, select, from the master node cluster, a first master node whose amount of tasks to be synchronized is greater than a first preset threshold, to generate a backlog processing queue;
a balanced replacement queue generating unit, configured to select, from the master node cluster, a second master node whose amount of tasks to be synchronized is smaller than a second preset threshold, to generate a balanced replacement queue, wherein the number of first master nodes is smaller than or equal to the number of second master nodes;
and a first master node removing unit, configured to, for the first master node in the backlog processing queue, select a second master node corresponding to the first master node from the balanced replacement queue, exchange a data partition of the selected second master node with a data partition of the first master node, and remove the first master node from the backlog processing queue in response to determining that the amount of tasks to be synchronized in the first master node has decreased.
8. The apparatus of claim 7, wherein the apparatus further comprises:
a synchronization task queue building unit, configured to build a synchronization task queue in a master node of the master node cluster, wherein the synchronization task queue is used for caching tasks to be synchronized;
and a to-be-synchronized task amount increase determining unit, configured to, for a master node in the master node cluster, in response to detecting that the number of tasks to be synchronized in the synchronization task queue of the master node increases continuously within a preset time period, determine that the amount of tasks to be synchronized in the master node has increased.
9. The apparatus of claim 7, wherein the apparatus further comprises:
a detection unit, configured to detect the tasks to be synchronized of the master nodes in the master node cluster;
a time difference determining unit, configured to determine a time difference between the time of the current detection of the tasks to be synchronized of the master nodes in the master node cluster and the time of the previous detection of the tasks to be synchronized of the master nodes in the master node cluster;
and a master node determining unit, configured to, in response to determining that the time difference is greater than a third preset threshold, determine whether the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster.
10. The apparatus of claim 7, wherein the backlog processing queue generating unit is further configured to:
in response to detecting that the amount of tasks to be synchronized has increased in some of the master nodes in the master node cluster, determine whether the master node cluster is divided into at least two groups;
in response to determining that the master node cluster is divided into at least two groups, detect the increase of the amount of tasks to be synchronized of all master nodes in the at least two groups;
determine, from the at least two groups, a first group containing master nodes whose amount of tasks to be synchronized has increased, and generate a master node list, wherein the master node list is formed by the master nodes in the first group whose amount of tasks to be synchronized has increased;
and select, from the master node list, a first master node whose amount of tasks to be synchronized is greater than the first preset threshold, to generate the backlog processing queue.
11. The apparatus of claim 7, wherein the first master node removing unit is further configured to:
for the first master node in the backlog processing queue, in response to determining that the first master node is the first master node with the largest backlog of tasks to be synchronized in the backlog processing queue, select, from the balanced replacement queue, the second master node with the smallest backlog of tasks; determine the selected second master node as the second master node corresponding to the first master node; determine the data partition with the largest amount of tasks to be synchronized in the first master node, and determine the data partition with the smallest amount of tasks to be synchronized in the second master node corresponding to the first master node; and exchange the determined data partition of the first master node with the determined data partition of the second master node.
12. The apparatus according to any one of claims 7-11, wherein the apparatus further comprises:
a reselection unit, configured to, in response to determining that the backlog processing queue is empty, reselect, from the master node cluster, a first master node whose amount of tasks to be synchronized is greater than the first preset threshold.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN201910152835.1A 2019-02-28 2019-02-28 Load balancing method and device for distributed database Pending CN111625592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910152835.1A CN111625592A (en) 2019-02-28 2019-02-28 Load balancing method and device for distributed database

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910152835.1A CN111625592A (en) 2019-02-28 2019-02-28 Load balancing method and device for distributed database

Publications (1)

Publication Number Publication Date
CN111625592A true CN111625592A (en) 2020-09-04

Family

ID=72259640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910152835.1A Pending CN111625592A (en) 2019-02-28 2019-02-28 Load balancing method and device for distributed database

Country Status (1)

Country Link
CN (1) CN111625592A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115123A (en) * 2020-09-21 2020-12-22 中国建设银行股份有限公司 Method and apparatus for performance optimization of distributed databases
CN112231415A (en) * 2020-12-16 2021-01-15 腾讯科技(深圳)有限公司 Data synchronization method and system of block chain network, electronic device and readable medium
CN112632075A (en) * 2020-12-25 2021-04-09 创新科技术有限公司 Storage and reading method and device of cluster metadata
CN114500547A (en) * 2022-03-22 2022-05-13 新浪网技术(中国)有限公司 Session information synchronization system, method, device, electronic equipment and storage medium
CN115391018A (en) * 2022-09-30 2022-11-25 中国建设银行股份有限公司 Task scheduling method, device and equipment

Similar Documents

Publication Publication Date Title
CN111625592A (en) Load balancing method and device for distributed database
US10209908B2 (en) Optimization of in-memory data grid placement
CN108681565B (en) Block chain data parallel processing method, device, equipment and storage medium
JP2022500775A (en) Data synchronization methods, equipment, computer programs, and electronic devices for distributed systems
JP6692000B2 (en) Risk identification method, risk identification device, cloud risk identification device and system
US10664390B2 (en) Optimizing execution order of system interval dependent test cases
US11150999B2 (en) Method, device, and computer program product for scheduling backup jobs
CN110753112A (en) Elastic expansion method and device of cloud service
CN108123851A (en) The lifetime detection method and device of main and subordinate node synchronization link in distributed system
CN111400041A (en) Server configuration file management method and device and computer readable storage medium
CN113193947A (en) Method, apparatus, medium, and program product for implementing distributed global ordering
CN113722055A (en) Data processing method and device, electronic equipment and computer readable medium
EP2829972B1 (en) Method and apparatus for allocating stream processing unit
CN113032412A (en) Data synchronization method and device, electronic equipment and computer readable medium
US20130275444A1 (en) Management of Log Data in a Networked System
US9577869B2 (en) Collaborative method and system to balance workload distribution
CN112817687A (en) Data synchronization method and device
US10997058B2 (en) Method for performance analysis in a continuous integration pipeline
CN112732979B (en) Information writing method, information writing device, electronic equipment and computer readable medium
CN114035861A (en) Cluster configuration method and device, electronic equipment and computer readable medium
CN113961641A (en) Database synchronization method, device, equipment and storage medium
CN112817701A (en) Timer processing method and device, electronic equipment and computer readable medium
CN113760469A (en) Distributed computing method and device
CN110493071B (en) Message system resource balancing device, method and equipment
US9674282B2 (en) Synchronizing SLM statuses of a plurality of appliances in a cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination