CN114124962A - Multi-machine room message load balancing processing method and device - Google Patents


Info

Publication number
CN114124962A
Authority
CN
China
Prior art keywords: consumption rate, application, load balancing, room, node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111441126.9A
Other languages
Chinese (zh)
Inventor
于锡璋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
ICBC Technology Co Ltd
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
ICBC Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Industrial and Commercial Bank of China Ltd ICBC, ICBC Technology Co Ltd filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202111441126.9A priority Critical patent/CN114124962A/en
Publication of CN114124962A publication Critical patent/CN114124962A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/215 - Flow control; Congestion control using token-bucket
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Abstract

An embodiment of the present application provides a multi-machine-room message load balancing processing method and device, wherein the method comprises the following steps: acquiring, through a master node in a preset distributed service framework, the current message queue backlog volume and the number of application nodes of each machine room, and determining a target consumption rate for each machine room according to the current backlog volume, the number of application nodes and a preset planned execution time; and sending the target consumption rate to each application node in the distributed service framework, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate. The method and device can effectively regulate the throughput of the downstream system and guarantee the stability of the system itself.

Description

Multi-machine room message load balancing processing method and device
Technical Field
The present application relates to the field of distributed data processing, can also be used in the financial field, and in particular relates to a multi-machine-room message load balancing processing method and device.
Background
In existing bank payment and settlement systems, with the evolution of technical means and the gradual maturing of distributed systems, building bank systems across multiple machine rooms has become the norm. In complex high-concurrency scenarios such as flash sales ("seckill") and commemorative-coin rush purchases, rate-limiting measures are often adopted to protect system stability. The multi-machine-room scenario poses greater challenges to keeping downstream service traffic stable.
The inventor found that in the prior art, when multiple machine rooms are deployed, downstream access is controlled through total flow control plus per-room limits. It often happens that one machine room carries traffic while another carries none, and when traffic is bursty rather than steady, the QPS received downstream is unstable. Existing solutions adjust the sending rates of the different machine rooms, which frequently requires manual adjustment and review; when traffic is low the outbound requests are unstable and instantaneous traffic spikes are easily caused. Existing solutions also require adjusting the flow control of each individual application when the system is scaled out, and cannot dynamically and automatically adjust the outbound rate.
Disclosure of Invention
Aiming at the problems in the prior art, the present application provides a multi-machine-room message load balancing processing method and device, which can effectively regulate the throughput of the downstream system and guarantee the stability of the system itself.
In order to solve at least one of the above problems, the present application provides the following technical solutions:
In a first aspect, the present application provides a multi-machine-room message load balancing processing method, comprising:
acquiring the current message queue backlog data volume and the number of application nodes of each machine room through a main node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset plan execution time;
and sending the target consumption rate to each application node in the distributed service framework so that each application node determines the corresponding message queue consumption rate according to the target consumption rate.
Further, before the obtaining of the current message queue backlog data volume and the number of application nodes of each machine room by the master node in the preset distributed service framework, the method further includes:
and determining a main node from all application nodes of the preset distributed service framework through a preset election strategy.
Further, the determining, by the application nodes, the respective corresponding message queue consumption rates according to the target consumption rate includes:
and after obtaining the target consumption rate, the application nodes determine the respective corresponding message queue consumption rate according to a preset token bucket algorithm.
Further, after the sending the target consumption rate to each application node in the distributed service framework, the method further includes:
and re-determining the target consumption rate of each machine room according to a set time period and sending the target consumption rate to each application node in the distributed service framework.
In a second aspect, the present application provides a multi-room message load balancing processing apparatus, including:
the consumption rate determining module is used for acquiring the current message queue backlog data volume and the number of application nodes of each machine room through a main node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset plan execution time;
and the consumption rate adjusting module is used for sending the target consumption rate to each application node in the distributed service framework so that each application node determines the corresponding message queue consumption rate according to the target consumption rate.
Further, the consumption rate determination module includes:
and the master node election unit is used for determining a master node from all application nodes of the preset distributed service framework through a preset election strategy.
Further, the consumption rate adjustment module includes:
and the token bucket algorithm adjusting unit is used for determining the consumption rate of the message queue corresponding to each application node according to a preset token bucket algorithm after the application nodes acquire the target consumption rate.
Further, the consumption rate adjustment module further comprises:
and the circulating calculation unit is used for re-determining the target consumption rate of each machine room according to a set time period and sending the target consumption rate to each application node in the distributed service framework.
In a third aspect, the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the multi-room message load balancing processing method when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the multi-room message load balancing processing method.
According to the above technical scheme, in the multi-machine-room message load balancing processing method and device, the master node in the distributed service framework acquires the current message queue backlog volume and the number of application nodes of each machine room, and determines the target consumption rate of each machine room according to the current backlog volume, the number of application nodes and the preset planned execution time, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate; the throughput of the downstream system can thereby be effectively regulated and the stability of the system itself guaranteed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a multi-machine-room message load balancing processing method in an embodiment of the present application;
fig. 2 is one of the structural diagrams of a multi-room message load balancing processing apparatus in the embodiment of the present application;
fig. 3 is a second block diagram of a multi-room message load balancing processing apparatus according to an embodiment of the present application;
fig. 4 is a third structural diagram of a multi-room message load balancing processing apparatus in the embodiment of the present application;
fig. 5 is a fourth structural diagram of a multi-machine-room message load balancing processing apparatus in the embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Considering that in the prior art, when multiple machine rooms are deployed, downstream access is controlled through total flow control plus per-room limits, that one machine room often carries traffic while another carries none, that the QPS received downstream is unstable when traffic is bursty rather than steady, and that existing solutions, which adjust the sending rates of the different machine rooms, frequently require manual adjustment and review, produce unstable outbound requests when traffic is low, and easily cause instantaneous traffic spikes, the present application provides a multi-machine-room message load balancing processing method and device. A master node in a distributed service framework acquires the current message queue backlog volume and the number of application nodes of each machine room and determines the target consumption rate of each machine room according to the current backlog volume, the number of application nodes and the preset planned execution time, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate, thereby effectively regulating the throughput of the downstream system and guaranteeing the stability of the system itself.
In order to effectively adjust the throughput of a downstream system and ensure the stability of the system, the present application provides an embodiment of a multi-machine-room message load balancing processing method, and referring to fig. 1, the multi-machine-room message load balancing processing method specifically includes the following contents:
step S101: the method comprises the steps of obtaining the current message queue backlog data volume and the number of application nodes of each machine room through a main node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset plan execution time.
Optionally, the preset distributed service framework may be a zookeeper framework. The backlogged message volume of the current Message Queue (MQ) of each machine room is counted through the zookeeper election mechanism, and each application registers itself in zookeeper, so that the master node in the zookeeper framework can dynamically sense the current traffic of each machine room and, combining the number of application nodes with the preset planned execution time, dynamically calculate the target consumption rate of each machine room, thereby guaranteeing a stable overall outbound rate from the machine rooms as a whole.
Specifically, the calculation process of the target consumption rate of each machine room may be:
1. and the main node acquires the number of machines registered in each machine room on the zookeeper.
2. The master node acquires the number of backlogged messages to be consumed in the MQ queue of each machine room.
3. Each machine room's target consumption rate is calculated with the formula: number of messages to be consumed / (number of machines × 60). Taking 3 machine rooms as an example, the consumption rates M1, M2 and M3 of the machine rooms are obtained. When M1 + M2 + M3 ≤ the downstream target overall rate M0, the consumption rates of the three machine rooms are M1, M2 and M3; when M1 + M2 + M3 > M0, the consumption rates of the three machine rooms are (M1 × M0)/(M1 + M2 + M3), (M2 × M0)/(M1 + M2 + M3) and (M3 × M0)/(M1 + M2 + M3) respectively.
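The calculation above can be sketched as follows; the function and variable names are assumptions, and the 60-second window stands in for the preset planned execution time:

```python
def target_rates(pending, machines, m0, window=60):
    """Per-room target consumption rates.

    pending  -- backlogged message count per machine room
    machines -- registered node count per machine room
    m0       -- downstream target overall rate (messages/second)
    window   -- planned execution time in seconds
    """
    # Raw rate per room: messages to be consumed / (machines * window)
    raw = [p / (n * window) for p, n in zip(pending, machines)]
    total = sum(raw)
    if total <= m0:
        return raw                        # already within the downstream budget
    # Otherwise scale each room proportionally so the sum equals M0
    return [r * m0 / total for r in raw]
```

For example, with backlogs of 6000, 12000 and 18000 messages on one machine each and M0 = 300, the raw rates 100, 200 and 300 exceed the budget and are scaled down to 50, 100 and 150.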
Step S102: and sending the target consumption rate to each application node in the distributed service framework so that each application node determines the corresponding message queue consumption rate according to the target consumption rate.
Optionally, the master node of the distributed service framework in the application may issue the target consumption rate of the current machine room of each application node in real time through the zookeeper, and after the application node obtains the target consumption rate, the application node may control the consumption rate of the message queue corresponding to each application node by using a token bucket algorithm, and finally, a stable issuing rate of a plurality of machine rooms to a downstream system is maintained.
Specifically, the process of the application node controlling the consumption rate of the respective corresponding message queue by using the token bucket algorithm may be as follows:
1. Each node controls its consumption rate according to the rate calculated by the master node; on each request it checks whether the rate-limit key exists. The key is initialized on first access.
2. If the key does not exist, the token bucket is initialized, the initial token count is placed, and the key expiration time is set to interval × 2. The initial token count here may generally be set to the limit threshold: for a limit of, e.g., 10 qps, the initial value may be set to 10 to handle the initial traffic. interval is the refill interval; for a limit threshold of 10 qps, interval is set to 1 s. The expiration time is the lifetime of the key in the cache, and interval × 2 prevents the key from expiring and letting traffic pass unintercepted.
3. If the key exists, the current request time is compared with the time a token was last placed for the current key. If the gap exceeds the interval, proceed to step 4; otherwise proceed to step 5.
4. The gap has exceeded 1 s, so tokens are placed directly up to the maximum number.
5. The gap does not exceed 1 s; define delta as the time difference, and place delta/(1/qps) tokens. When placing tokens, the total is guaranteed not to exceed the capacity of the bucket. At the same time, the token-placement time is reset.
6. A token is obtained from the bucket; if a token is obtained successfully the request is executed, and if obtaining a token fails, the request is rejected.
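A minimal single-process sketch of steps 1-6, with the key held in memory rather than in a shared cache; the class name and the injectable clock are assumptions:

```python
import time

class TokenBucket:
    """In-memory sketch of the rate limiter in steps 1-6 above."""

    def __init__(self, qps, clock=time.monotonic):
        self.qps = qps        # limit threshold, tokens refilled per second
        self.capacity = qps   # bucket capacity = initial token count
        self.interval = 1.0   # refill interval (1 s for a per-second limit)
        self.clock = clock
        self.key = None       # [tokens, last_put_time]; None = key absent

    def allow(self):
        now = self.clock()
        if self.key is None:
            # Step 2: first access - place the initial token count
            self.key = [self.capacity, now]
        else:
            tokens, last = self.key
            delta = now - last
            if delta > self.interval:
                # Step 4: gap exceeded the interval - refill to the maximum
                self.key = [self.capacity, now]
            else:
                # Step 5: place delta / (1/qps) tokens, capped at capacity,
                # and reset the token-placement time
                self.key = [min(self.capacity, tokens + delta * self.qps), now]
        # Step 6: take a token if one is available, else reject
        if self.key[0] >= 1:
            self.key[0] -= 1
            return True
        return False
```

A production version would keep the key, together with its interval × 2 expiration, in a shared cache so that all consumer threads of a node see the same bucket.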
As can be seen from the above description, the multi-machine-room message load balancing processing method provided in the embodiment of the present application can obtain the current message queue backlog data volume and the number of application nodes of each machine room through the master node in the distributed service framework, and determine the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes, and the preset scheduled execution time, so that each application node determines the corresponding message queue consumption rate according to the target consumption rate, thereby effectively adjusting the throughput of the downstream system and ensuring the stability of the system itself.
In order to accurately determine the master node, in an embodiment of the method for load balancing of multi-computer-room messages according to the present application, before the step S101, the following may be further included:
and determining a main node from all application nodes of the preset distributed service framework through a preset election strategy.
Optionally, the election policy may be:
the core of the election is the atomic broadcast by using zookeeper, the mechanism ensures the synchronization among all the servers, and the whole election process has the following roles:
and the leader (leader) is responsible for initiating and resolving votes and updating the system state.
Learners, including followers and observers, accept client requests, return results to the client, and participate in voting during elections. An observer can accept client connections and forward write requests to the leader, but it does not participate in the voting process and only synchronizes the leader's state; its purpose is to scale the system and improve read speed.
Each node has three states during election:
LOOKING: the current node does not yet know who the leader is and is searching for one.
LEADING: the current node is the elected leader.
FOLLOWING: a leader has been elected and the current node synchronizes with it.
To ensure the ordering consistency of transactions, an incrementing transaction id (zxid) is used to identify transactions. Every proposal has a zxid attached when it is raised. Each change in a node's state receives a distinct, globally unique zxid from zookeeper: if a node is deleted or created, the zookeeper state changes and the zxid increments.
When a leader server is elected, its state changes to LEADING, and the state of every server that is not an observer automatically changes to FOLLOWING.
The specific process is exemplified as follows:
assuming that there are currently 5 servers, each of which has no data, and their numbers are 1,2,3,4, and 5, respectively, which are started sequentially according to the numbers, the selection process is as follows:
the server 1 starts to vote for itself and then sends out voting information, and since other machines are not started yet, the other machines cannot receive feedback information, and the state of the server 1 always belongs to the Looking (election state).
The server 2 starts up to vote for itself and exchanges results with the previously started server 1, and the server 2 wins the number of the server 2, but the number of votes is not more than half of the number at this time, so that the states of the two servers are still LOOKING.
The server 3 starts up and votes for itself and exchanges information with the previously started servers 1,2, and since the number of the server 3 is the largest and the server 3 wins, the number of votes is just greater than half, the server 3 becomes the leader and the servers 1,2 become followers.
The server 4 starts up to vote for itself and exchanges information with the previously started servers 1,2,3, and although the number of the server 4 is large, the previous server 3 has won, so the server 4 can only become a follower.
The server 5 starts and the following logic becomes the follower with the server 4.
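The sequential-startup walkthrough can be simulated with a small sketch. This deliberately simplifies the real zookeeper election (all servers here hold empty data, so the largest server id seen so far always wins the vote exchange):

```python
def elect(server_ids):
    """Simulate the sequential startup above: each newly started server
    votes, the largest id among started servers is the candidate, and a
    leader is fixed once the candidate holds more than half of all votes.
    Returns the final state of every server."""
    total = len(server_ids)
    states, started, leader = {}, [], None
    for sid in server_ids:
        started.append(sid)
        if leader is not None:
            states[sid] = "FOLLOWING"      # a leader already exists
            continue
        candidate = max(started)           # largest id wins the exchange
        if len(started) > total // 2:      # votes now exceed half the ensemble
            leader = candidate
            for s in started:
                states[s] = "LEADING" if s == leader else "FOLLOWING"
        else:
            for s in started:
                states[s] = "LOOKING"      # not enough votes yet
    return states
```

For five servers started in order, `elect([1, 2, 3, 4, 5])` makes server 3 the leader and all other servers followers, matching the walkthrough above.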
In order to accurately adjust the message queue consumption rate corresponding to each application node, in an embodiment of the multi-machine-room message load balancing processing method of the present application, the step S102 may further specifically include the following contents:
and after obtaining the target consumption rate, the application nodes determine the respective corresponding message queue consumption rate according to a preset token bucket algorithm.
In order to ensure that the system is stable for a long time, in an embodiment of the multi-room message load balancing processing method of the present application, after the step S102, the following may be further included:
and re-determining the target consumption rate of each machine room according to a set time period and sending the target consumption rate to each application node in the distributed service framework.
Optionally, the master node of the present application may perform a recalculation at set time intervals (e.g., 30 seconds), and repeat the above process to keep the system Throughput (TPS) in a steady state.
In order to effectively adjust the throughput of the downstream system and ensure the stability of the system itself, the present application provides an embodiment of a multi-machine-room message load balancing processing apparatus for implementing all or part of the contents of the multi-machine-room message load balancing processing method, and referring to fig. 2, the multi-machine-room message load balancing processing apparatus specifically includes the following contents:
the consumption rate determining module 10 is configured to obtain a current message queue backlog data volume and an application node number of each machine room through a master node in a preset distributed service framework, and determine a target consumption rate of each machine room according to the current message queue backlog data volume, the application node number, and a preset scheduled execution time.
And the consumption rate adjusting module 20 is configured to send the target consumption rate to each application node in the distributed service framework, so that each application node determines a corresponding message queue consumption rate according to the target consumption rate.
As can be seen from the above description, the multi-machine-room message load balancing processing apparatus provided in the embodiment of the present application can obtain the current message queue backlog data volume and the number of application nodes of each machine room through the master node in the distributed service framework, and determine the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes, and the preset scheduled execution time, so that each application node determines the corresponding message queue consumption rate according to the target consumption rate, thereby effectively adjusting the throughput of the downstream system and ensuring the stability of the system itself.
In order to be able to accurately determine the master node, in an embodiment of the multi-room message load balancing processing apparatus of the present application, referring to fig. 3, the consumption rate determining module 10 includes:
the master node election unit 11 is configured to determine a master node from all application nodes of the preset distributed service framework through a preset election policy.
In order to accurately adjust the message queue consumption rate corresponding to each application node, in an embodiment of the multi-room message load balancing processing apparatus of the present application, referring to fig. 4, the consumption rate adjusting module 20 includes:
and the token bucket algorithm adjusting unit 21 is configured to determine, according to a preset token bucket algorithm, respective corresponding message queue consumption rates after the application nodes obtain the target consumption rate.
In order to ensure long-term stability of the system, in an embodiment of the multi-room message load balancing processing apparatus of the present application, referring to fig. 5, the consumption rate adjusting module 20 further includes:
and the cycle calculation unit 22 is configured to re-determine the target consumption rate of each machine room according to a set time period, and send the target consumption rate to each application node in the distributed service framework.
To further illustrate the present solution, the present application further provides a specific application example of implementing a multi-machine-room message load balancing processing method by using the multi-machine-room message load balancing processing apparatus, which specifically includes the following contents:
through a zookeeper election mechanism, the backlog messages of the machine rooms MQ are counted, the zookeeper is applied to register, the current traffic of the machine rooms is dynamically sensed, and the consumption rates of different machine rooms are dynamically calculated to ensure that the overall stable issuing rates of the machine rooms are externally and stably issued.
The method specifically comprises the following steps:
(1) and selecting a main node for consumption rate control through an election strategy.
(2) And the master node respectively acquires the number of the messages backlogged in the current MQ of each machine room.
(3) And the main node acquires the total number of the consumer application nodes in the current computer room through the zookeeper.
(4) And the master node dynamically calculates the consumption rate of each machine room according to the total MQ service data volume, the number of each application node and the scheduled execution time.
(5) And the main node issues the current computer room consumption rate to the application node in real time through the zookeeper.
(6) The application node obtains the current consumption rate, controls the MQ consumption rate by adopting a token bucket algorithm, and finally keeps a stable issuing rate of a plurality of machine rooms to a downstream system.
(7) The master node performs a recalculation every 30 seconds and repeats the above process to keep the TPS in a steady state.
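Steps (2)-(7) can be wired together as a periodic loop on the master node. The callables below are assumptions standing in for the MQ and zookeeper calls, and the rate formula follows the 60-second window used earlier:

```python
import itertools
import time

def master_loop(fetch_state, publish, m0, period=30, window=60, ticks=None):
    """Steps (2)-(7): every `period` seconds, re-read per-room backlogs and
    node counts, recompute the per-room target rates, and publish them to
    the application nodes. `ticks` bounds the loop for testing."""
    for _ in (itertools.count() if ticks is None else range(ticks)):
        pending, machines = fetch_state()          # steps (2) and (3)
        raw = [p / (n * window) for p, n in zip(pending, machines)]
        total = sum(raw)                           # step (4): scale if over M0
        rates = raw if total <= m0 else [r * m0 / total for r in raw]
        publish(rates)                             # step (5): via zookeeper
        time.sleep(period)                         # step (7): 30-second cycle
```

In a real deployment, `fetch_state` would query the MQ admin API and the zookeeper registration paths, and `publish` would write the rates to a zookeeper node watched by every application.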
According to the method, the downstream TPS can be automatically kept within a stable range and the rate can be dynamically adjusted to downstream conditions, ensuring the stability of the system while also protecting the downstream system.
In terms of hardware, in order to effectively adjust the throughput of the downstream system and ensure the stability of the system itself, the present application provides an embodiment of an electronic device for implementing all or part of the contents in the multi-room message load balancing processing method, where the electronic device specifically includes the following contents:
a processor, a memory, a communication interface (Communications Interface), and a bus. The processor, the memory and the communication interface communicate with each other through the bus. The communication interface is used to realize information transmission between the multi-machine-room message load balancing processing device and related equipment such as a core service system, user terminals and related databases. The logic controller may be a desktop computer, a tablet computer, a mobile terminal and the like, but this embodiment is not limited thereto. In this embodiment, the logic controller may be implemented with reference to the embodiments of the multi-machine-room message load balancing processing method and the multi-machine-room message load balancing processing device; their contents are incorporated herein and are not repeated.
It is understood that the user terminal may include a smart phone, a tablet electronic device, a network set-top box, a portable computer, a desktop computer, a Personal Digital Assistant (PDA), an in-vehicle device, a smart wearable device, and the like. Wherein, intelligence wearing equipment can include intelligent glasses, intelligent wrist-watch, intelligent bracelet etc..
In practical applications, part of the multi-room message load balancing processing method may be executed on the electronic device side as described above, or all operations may be completed in the client device. The selection may be specifically performed according to the processing capability of the client device, the limitation of the user usage scenario, and the like. This is not a limitation of the present application. The client device may further include a processor if all operations are performed in the client device.
The client device may have a communication module (i.e., a communication unit), and may be communicatively connected to a remote server to implement data transmission with the server. The server may include a server on the task scheduling center side, and in other implementation scenarios, the server may also include a server on an intermediate platform, for example, a server on a third-party server platform that is communicatively linked to the task scheduling center server. The server may include a single computer device, or may include a server cluster formed by a plurality of servers, or a server structure of a distributed apparatus.
Fig. 6 is a schematic block diagram of the system configuration of an electronic device 9600 according to an embodiment of the present application. As shown in Fig. 6, the electronic device 9600 may include a central processor 9100 and a memory 9140, with the memory 9140 coupled to the central processor 9100. Notably, Fig. 6 is exemplary; other types of structures may be used in addition to or in place of this structure to implement telecommunications or other functions.
In one embodiment, the multi-room message load balancing processing method function may be integrated into the central processor 9100. The central processor 9100 may be configured to control as follows:
step S101: the method comprises the steps of obtaining the current message queue backlog data volume and the number of application nodes of each machine room through a main node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset plan execution time.
Step S102: sending the target consumption rate to each application node in the distributed service framework, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate.
As can be seen from the above description, in the electronic device provided in this embodiment of the present application, the master node in the distributed service framework obtains the current message queue backlog data volume and the number of application nodes of each machine room, and determines the target consumption rate of each machine room according to the backlog data volume, the number of application nodes, and the preset scheduled execution time. Each application node then determines its corresponding message queue consumption rate according to the target consumption rate, thereby effectively regulating the throughput of the downstream system and ensuring the stability of the system itself.
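A minimal sketch of the master-node calculation described in steps S101 and S102 might look as follows. The patent does not specify the exact arithmetic, so the formula and all names (e.g. `target_rate_per_node`) are illustrative assumptions: the backlog is assumed to be drained evenly across a room's nodes before the scheduled execution time.

```python
def target_rate_per_node(backlog_messages: int,
                         node_count: int,
                         seconds_until_deadline: float) -> float:
    """Illustrative target consumption rate (messages/second) for one
    application node in a machine room, assuming the room's backlog must
    be drained evenly across its nodes before the scheduled time."""
    if node_count <= 0 or seconds_until_deadline <= 0:
        raise ValueError("need at least one node and a future deadline")
    # Rate the whole room must sustain to clear its backlog in time.
    room_rate = backlog_messages / seconds_until_deadline
    # Split the room-level rate evenly across the room's nodes.
    return room_rate / node_count

# Example: 90,000 backlogged messages, 3 nodes, 1 hour until the plan runs.
rate = target_rate_per_node(90_000, 3, 3_600.0)
```

The master node would run such a calculation per machine room and push the result to that room's application nodes (step S102).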
In another embodiment, the multi-machine room message load balancing processing apparatus may be configured separately from the central processor 9100; for example, it may be configured as a chip connected to the central processor 9100, with the functions of the multi-machine room message load balancing processing method implemented under the control of the central processor.
As shown in Fig. 6, the electronic device 9600 may further include a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 does not necessarily include all of the components shown in Fig. 6; furthermore, it may include components not shown in Fig. 6, for which reference may be made to the prior art.
As shown in Fig. 6, the central processor 9100, sometimes referred to as a controller or operation control unit, may include a microprocessor or other processor device and/or logic device. The central processor 9100 receives input and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or another suitable device. It may store information related to failures, as well as programs for processing such information, and the central processor 9100 may execute the programs stored in the memory 9140 to store or process information.
The input unit 9120 provides input to the central processor 9100; it is, for example, a key or touch input device. The power supply 9170 supplies power to the electronic device 9600. The display 9160 displays objects such as images and characters; it may be, for example, an LCD display, but is not limited thereto.
The memory 9140 may be a solid-state memory, e.g., a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when power is off, that can be selectively erased, and that can be rewritten with new data; such a memory is sometimes referred to as an EPROM or the like. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to simply as a buffer) and an application/function storage portion 9142, which stores the application programs and function programs executed by the central processor 9100 to operate the electronic device 9600.
The memory 9140 may also include a data store 9143 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. The driver storage portion 9144 of the memory 9140 may include various drivers used by the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., a messaging application, a contact book application, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. The communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and receive audio input from the microphone 9132, thereby implementing ordinary telecommunications functions. The audio processor 9130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100, thereby enabling recording locally through the microphone 9132 and enabling locally stored sounds to be played through the speaker 9131.
An embodiment of the present application further provides a computer-readable storage medium capable of implementing all the steps of the multi-machine room message load balancing processing method whose execution subject is the server or the client in the foregoing embodiments. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of that method; for example, when the processor executes the computer program, the following steps are implemented:
step S101: the method comprises the steps of obtaining the current message queue backlog data volume and the number of application nodes of each machine room through a main node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset plan execution time.
Step S102: sending the target consumption rate to each application node in the distributed service framework, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate.
As can be seen from the above description, with the computer-readable storage medium provided in this embodiment of the present application, the master node in the distributed service framework obtains the current message queue backlog data volume and the number of application nodes of each machine room, and determines the target consumption rate of each machine room according to the backlog data volume, the number of application nodes, and the preset scheduled execution time. Each application node then determines its corresponding message queue consumption rate according to the target consumption rate, thereby effectively regulating the throughput of the downstream system and ensuring the stability of the system itself.
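On the application-node side, the claims indicate that each node enforces its assigned rate through a preset token bucket algorithm (claim 3). A self-contained sketch of such a limiter follows; the class name, the one-second burst capacity, and the sample rates are illustrative assumptions, not the patent's specified implementation:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: tokens accrue at `rate` per
    second up to `capacity`; consuming one message costs one token."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, n: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# An application node would size its bucket from the master node's target
# consumption rate, e.g. allowing roughly a one-second burst:
bucket = TokenBucket(rate=8.33, capacity=8.33)
```

When the master node pushes a new target consumption rate (step S102), the node would simply replace or re-parameterize its bucket, which is what makes the rate centrally adjustable.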
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and implementations of the present invention are explained herein through specific embodiments; the description of the embodiments is intended only to help understand the method and core idea of the invention. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A multi-machine room message load balancing processing method, characterized by comprising:
acquiring the current message queue backlog data volume and the number of application nodes of each machine room through a master node in a preset distributed service framework, and determining the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset scheduled execution time;
sending the target consumption rate to each application node in the distributed service framework, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate.
2. The multi-machine room message load balancing processing method according to claim 1, wherein before acquiring the current message queue backlog data volume and the number of application nodes of each machine room through the master node in the preset distributed service framework, the method further comprises:
determining the master node from all application nodes of the preset distributed service framework through a preset election strategy.
3. The multi-machine room message load balancing processing method according to claim 1, wherein the determining, by the application nodes, of their respective corresponding message queue consumption rates according to the target consumption rate comprises:
after obtaining the target consumption rate, the application nodes determining their respective corresponding message queue consumption rates according to a preset token bucket algorithm.
4. The multi-machine room message load balancing processing method according to claim 1, further comprising, after sending the target consumption rate to each application node in the distributed service framework:
re-determining the target consumption rate of each machine room according to a set time period and sending it to each application node in the distributed service framework.
5. A multi-machine room message load balancing processing apparatus, characterized by comprising:
a consumption rate determining module, configured to acquire the current message queue backlog data volume and the number of application nodes of each machine room through a master node in a preset distributed service framework, and to determine the target consumption rate of each machine room according to the current message queue backlog data volume, the number of application nodes and the preset scheduled execution time;
a consumption rate adjusting module, configured to send the target consumption rate to each application node in the distributed service framework, so that each application node determines its corresponding message queue consumption rate according to the target consumption rate.
6. The multi-machine room message load balancing processing apparatus according to claim 5, wherein the consumption rate determining module comprises:
a master node election unit, configured to determine the master node from all application nodes of the preset distributed service framework through a preset election strategy.
7. The multi-machine room message load balancing processing apparatus according to claim 5, wherein the consumption rate adjusting module comprises:
a token bucket algorithm adjusting unit, configured to determine, after the application nodes obtain the target consumption rate, the message queue consumption rate corresponding to each application node according to a preset token bucket algorithm.
8. The multi-machine room message load balancing processing apparatus according to claim 5, wherein the consumption rate adjusting module further comprises:
a cyclic calculation unit, configured to re-determine the target consumption rate of each machine room according to a set time period and send it to each application node in the distributed service framework.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the multi-machine room message load balancing processing method of any one of claims 1 to 4 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the multi-machine room message load balancing processing method according to any one of claims 1 to 4.
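The claims leave both the election strategy (claims 2 and 6) and the recalculation period (claims 4 and 8) open. One common, minimal strategy consistent with the claims is to elect the registered node with the smallest identifier; the function below is such a stand-in, and its name and the node-ID scheme are illustrative assumptions:

```python
def elect_master(node_ids: list[str]) -> str:
    """Illustrative election strategy: the lexicographically smallest
    registered node ID becomes the master (a stand-in for the patent's
    unspecified 'preset election strategy')."""
    if not node_ids:
        raise ValueError("no application nodes registered")
    return min(node_ids)

# Example: electing a master from nodes registered across two rooms.
master = elect_master(["room2-node1", "room1-node0", "room1-node1"])
```

In practice such an election would typically be delegated to the coordination service of the distributed framework itself, with the elected master rerunning the rate calculation on the set time period of claim 4.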
CN202111441126.9A 2021-11-30 2021-11-30 Multi-machine room message load balancing processing method and device Pending CN114124962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111441126.9A CN114124962A (en) 2021-11-30 2021-11-30 Multi-machine room message load balancing processing method and device

Publications (1)

Publication Number Publication Date
CN114124962A true CN114124962A (en) 2022-03-01

Family

ID=80368311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111441126.9A Pending CN114124962A (en) 2021-11-30 2021-11-30 Multi-machine room message load balancing processing method and device

Country Status (1)

Country Link
CN (1) CN114124962A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180019950A1 (en) * 2016-07-14 2018-01-18 International Business Machines Corporation Flow Controller Automatically Throttling Rate of Service Provided by Web API
CN113645151A (en) * 2021-09-02 2021-11-12 深圳云豹智能有限公司 DUP equipment message management method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866799A (en) * 2022-05-11 2022-08-05 北京奇艺世纪科技有限公司 Server scheduling method and device
CN114866799B (en) * 2022-05-11 2024-04-05 北京奇艺世纪科技有限公司 Server scheduling method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination