CN113687928A - Message scheduling control method and corresponding device, equipment and medium thereof - Google Patents

Message scheduling control method and corresponding device, equipment and medium thereof

Info

Publication number
CN113687928A
Authority
CN
China
Prior art keywords
per
message thread
current message
request
total
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110886743.3A
Other languages
Chinese (zh)
Other versions
CN113687928B (en)
Inventor
黄育才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huaduo Network Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN202110886743.3A priority Critical patent/CN113687928B/en
Publication of CN113687928A publication Critical patent/CN113687928A/en
Application granted granted Critical
Publication of CN113687928B publication Critical patent/CN113687928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a message scheduling control method and a corresponding device, equipment and medium. The method comprises the following steps: continuously acquiring the total requests per second of the downstream service of a message queue and the average execution duration of each request; when the monitored total requests per second exceeds the requests per second threshold of the downstream service, controlling the current message thread to be in a slow release state, and correspondingly extending or shortening the release duration of the current message thread as the increasing or decreasing state of the total requests per second is maintained; counting the front blocking amount corresponding to the current message thread, and calculating the expected queuing duration of the current message thread as the product of the front blocking amount and the average execution duration; and when the expected queuing duration of the current message thread exceeds a preset delay threshold, controlling the current message thread to be in an active state and delivering it for asynchronous execution. With the method and the device, the consumption rate is automatically reduced when message threads are consumed too fast, which protects the downstream service, and the consumption rate is automatically increased when consumption is too slow.

Description

Message scheduling control method and corresponding device, equipment and medium thereof
Technical Field
The embodiment of the application relates to an internet task scheduling technology, in particular to a message scheduling control method and a corresponding device, equipment and medium thereof.
Background
In internet applications, a message queue is often used as middleware, typically to realize functions such as service decoupling, traffic peak shaving and asynchronous processing. In practice, however, when messages are produced in large quantities, the consumer end often encounters two problems. First, consumption can be too fast, which puts enormous request pressure on downstream services and may even overwhelm them; downstream services here also include databases, file systems and the like used inside the system. Second, consumption can be too slow, so that messages pile up severely and cannot be consumed in time, which harms the real-time performance of the service.
In the mainstream message middleware currently used in the industry, the configuration parameters of the consumer end are all configured statically, so the consumer cannot be changed dynamically after it starts and therefore cannot be controlled dynamically, according to the intent of a system administrator, as the message volume changes. The common consumer end of a message queue thus has no technique for dynamically controlling the message consumption speed, so that when messages accumulate, the only options are to modify the configuration parameters and restart the service, or to scale nodes up or down to cope with consumption that is too fast or too slow, which increases the deployment cost.
Therefore, in the applicant's understanding, the task scheduling mechanism of the message queue affects the response efficiency and deployment cost of the internet backend. Ideally, the message queue would be scheduled uniformly according to the load of the downstream service, so that consumption can be controlled dynamically and the system problems caused by consumption that is too fast or too slow can be resolved as quickly and conveniently as possible.
Disclosure of Invention
An object of the present application is to provide a message scheduling control method and a corresponding apparatus, computer device and storage medium, so as to at least partially overcome the deficiencies of the prior art or meet at least part of the needs of the prior art.
In order to solve the technical problem, the application adopts a technical scheme that:
the application provides a message scheduling control method, which comprises the following steps:
continuously acquiring the total amount of requests per second and the average execution time length of each request, wherein the total amount of requests per second is obtained by continuously counting the total amount of message threads dispatched from a message queue to downstream services, and the average execution time length is the average value of each request calculated according to the total amount of requests per second;
monitoring the change of the total request per second, controlling the current message thread to be in a slow release state of delayed execution when the total request per second exceeds the request threshold per second of the downstream service, and correspondingly prolonging or shortening the release duration of the current message thread according to the maintenance of the increase and decrease state of the total request per second;
counting the front blocking amount corresponding to the current message thread, and calculating the expected queuing time length corresponding to the current message thread according to the product of the front blocking amount and the average execution time length;
monitoring the change of the expected queuing time length, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time length of the current message thread exceeds a preset delay threshold value, returning the release time length to zero, and delivering the current message thread to asynchronous execution when the current message thread is continuously in the active state for multiple times.
In a preferred embodiment, each of said message threads in said message queue independently performs the steps of the method as said current message thread.
In an embodiment, monitoring the change of the total amount of requests per second, when the total amount of requests per second exceeds the threshold value of requests per second of the downstream service, controlling the current message thread to be in a slow release state of deferred execution, and enabling the release duration of the current message thread to be correspondingly prolonged or shortened according to the maintenance of the increase and decrease state of the total amount of requests per second, the method includes the following steps:
monitoring and acquiring the total amount of requests per second generated by statistics each time, and comparing the difference between the total amount of requests per second and the threshold value of requests per second of the downstream service;
when the total request per second exceeds the request threshold per second, setting the release time length of the current message thread as the difference value between the limit execution time length and the average execution time length to enable the current message thread to enter a slow release state, and correspondingly prolonging the release time length of the current message thread according to the maintenance of the growth state of the total request per second, wherein the limit execution time length is the average value of each request calculated according to the request threshold per second;
and when the total request amount per second is lower than a certain range of the request threshold value per second and the release time length is not zero, correspondingly shortening the release time length of the current message thread according to the maintenance of the reduction state of the total request amount per second so as to enable the current message thread to be continuously in a slow release state.
In a further embodiment, the step performed when the total number of requests per second exceeds the threshold number of requests per second includes the steps of:
when the total request amount per second exceeds the request threshold per second for the first time, setting the release duration initialized to a zero value as the difference value between the limit execution duration and the average execution duration so as to enable the current message thread to enter a slow release state;
and when the total request amount per second continuously exceeds the request threshold value per second, accumulating the release time length by a preset value to realize fine adjustment, so that the release time length of the current message thread is correspondingly prolonged according to the maintenance of the increase state of the total request amount per second.
In a further embodiment, in the step executed when the total amount of requests per second is lower than a certain range of the request threshold per second and the release duration is not zeroed, the release duration is decremented by the preset value to achieve fine tuning, so that the release duration of the current message thread is correspondingly shortened according to the maintenance of the status of the decrement of the total amount of requests per second.
In an embodiment, monitoring the change of the expected queuing time, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time of the current message thread exceeds a preset delay threshold, and zeroing the release time, and delivering the current message thread to asynchronous execution when the current message thread is in the active state continuously for multiple times, the method includes the following steps:
monitoring and acquiring the expected queuing time obtained by each calculation;
judging whether the expected queuing time exceeds a preset delay threshold value or not, and when the expected queuing time exceeds the preset delay threshold value, resetting the release time of the current message thread to zero to enable the current message thread to be in an active state waiting for synchronous execution;
and judging whether the expected queuing time continuously exceeds the preset delay threshold for multiple times, and if so, delivering the current message thread to a task thread pool used for asynchronously consuming message threads, so as to realize asynchronous execution.
In a preferred embodiment, the method further comprises the steps of:
monitoring the total amount of message threads in the task thread pool;
when the total number of the message threads in the task thread pool is greater than a preset upper limit, the message threads are blocked from being added;
and when the total number of message threads in the task thread pool is returned to zero for a plurality of times and the total number of requests per second is lower than the threshold value of requests per second, destroying the task thread pool.
In a preferred embodiment, continuously counting the total amount of requests per second obtained from the total amount of message threads dispatched from the message queue to the downstream service, and calculating the average execution time length of each message thread comprises the following steps:
periodically counting the total amount of message threads which are dispatched and dequeued from the message queue to be transmitted to downstream services and consumed as the total amount of requests per second;
and calculating the time slot occupied by each request in the total amount of the requests per second in one second averagely, and determining the time slot as the average execution time length.
In order to solve the above technical problem, another technical solution adopted by the present application is:
the application provides a message scheduling control device, which comprises a data acquisition module, a slow release control module, a congestion control module and an active control module, wherein the data acquisition module is used for continuously acquiring the total amount of requests per second and the average execution time length of each request, the total amount of requests per second is obtained by continuously counting the total amount of message threads scheduled from a message queue to downstream services, and the average execution time length is the average value of each request calculated according to the total amount of requests per second; the slow release control module is used for monitoring the change of the total request amount per second, controlling the current message thread to be in a slow release state of delayed execution when the total request amount per second exceeds the request threshold value per second of the downstream service, and correspondingly prolonging or shortening the release duration of the current message thread according to the maintenance of the increase and decrease state of the total request amount per second; the block counting module is used for counting the front block amount corresponding to the current message thread and calculating the expected queuing time length corresponding to the current message thread according to the product of the front block amount and the average execution time length; the active control module is used for monitoring the change of the expected queuing time length, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time length of the current message thread exceeds a preset delay threshold value, enabling the release time length to return to zero, and delivering the current message thread to be asynchronously executed when the current message thread is continuously in the active state for multiple times.
In a preferred embodiment, each of the message threads in the message queue is configured with the respective module of the apparatus.
In an embodiment, the slow release control module includes a slow release monitoring sub-module, configured to monitor and obtain a total amount of requests per second statistically generated each time, and compare a difference between the total amount of requests per second and a request threshold per second of the downstream service; the forward slow-release sub-module is used for setting the release time length of the current message thread as the difference value between the limit execution time length and the average execution time length when the total request per second exceeds the request threshold per second so as to enable the current message thread to enter a slow-release state, correspondingly prolonging the release time length of the current message thread according to the maintenance of the increase state of the total request per second, wherein the limit execution time length is the average value of each request calculated according to the request threshold per second; and the reverse slow-release sub-module is used for correspondingly shortening the release time length of the current message thread according to the maintenance of the reduction state of the total request amount per second when the total request amount per second is lower than a certain range of the request threshold value per second and the release time length is not zero, so that the current message thread is continuously in a slow-release state.
In a further embodiment, the forward sustained release sub-module comprises: the first slow-release secondary module is used for setting the release time length which is initialized to a zero value in advance as the difference value between the limit execution time length and the average execution time length when the total request per second firstly exceeds the request threshold per second so as to enable the current message thread to enter a slow-release state; and the continuous slow release secondary module is used for accumulating the release time length by a preset value to realize fine adjustment when the total request amount per second continuously exceeds the request threshold value per second, so that the release time length of the current message thread is correspondingly prolonged according to the maintenance of the increase state of the total request amount per second.
In a further embodiment, the reverse slow-release sub-module is further configured to decrement the release duration by the preset value to achieve fine tuning, so that the release duration of the current message thread is correspondingly shortened according to the maintenance of the state of reduction of the total amount of requests per second.
In a particular embodiment, the active control module includes: the active monitoring submodule is used for monitoring and acquiring the expected queuing time obtained by each calculation; the active switching secondary module is used for judging whether the expected queuing time exceeds a preset delay threshold value or not, and when the expected queuing time exceeds the preset delay threshold value, the release time of the current message thread is reset to zero so that the current message thread is in an active state waiting for synchronous execution; and the asynchronous delivery secondary module is used for judging whether the expected queuing time continuously exceeds the preset delay threshold for multiple times, and delivering the current message thread to a task thread pool for an asynchronous consumption message thread to realize asynchronous execution if the expected queuing time continuously exceeds the preset delay threshold.
In a preferred embodiment, the apparatus further comprises: the total amount monitoring submodule is used for monitoring the total amount of the message threads in the task thread pool; the overrun blocking sub-module is used for blocking the message thread added to the task thread pool when the total amount of the message threads in the task thread pool is larger than a preset upper limit; and the asynchronous recovery submodule is used for destroying the task thread pool when the total message thread amount in the task thread pool for a plurality of times is returned to zero and the total request per second amount is lower than the request per second threshold value.
In a preferred embodiment, the data acquisition module includes: a periodic statistic submodule, configured to periodically count, per second, a total amount of message threads dispatched from the message queue to be transmitted to a downstream service and consumed, as the total amount of requests per second; and the time length calculation submodule is used for calculating the time slot which is averagely occupied by each request in the total request amount per second within one second and determining the time slot as the average execution time length.
In order to solve the above technical problem, the present application further provides a computer device, which includes a memory and a processor, where the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the message scheduling control method.
The present invention also provides a storage medium storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to execute the steps of the message scheduling control method.
Compared with the prior art, the method has the following advantages:
according to the method, a dynamic adjusting mechanism is implanted into the message thread, so that each message thread can adjust the self release duration of the message thread according to the total quantity of requests per second (QPS) processed by the downstream service of the message queue, and when the response speed of the downstream service is found to be reduced through the total quantity of requests per second, the self release duration of the message thread is prolonged, so that the message queue integrally enters a scheduling deceleration mode to relieve the response pressure of the downstream service; when the response speed of the downstream service is found to be improved through the total amount of requests per second, the self-release duration is shortened, so that the message queue integrally enters a scheduling acceleration mode, and the processing efficiency of the downstream service is improved. Therefore, the whole message queue can adapt to the processing capacity change of downstream services to realize balanced scheduling under the coordination control of the dynamic adjustment mechanism of each message thread, so that the scheduling efficiency of the message queue is maximized.
According to the method and the device, after the message queue enters the scheduling acceleration mode, whether the asynchronous execution mechanism is started by the consuming thread can be determined according to the continuous response condition of the downstream service, the processing efficiency of the downstream service is further improved through the asynchronous execution mechanism, the faster acceleration processing of the message thread is realized, and the scheduling efficiency of the message queue is maximized.
In summary, the method and the device for processing the message queue automatically reduce the consumption rate to protect downstream services when the consumption of the message thread in the message queue is too fast, automatically increase the consumption speed when the consumption is too slow, and relieve the problems caused by consumption accumulation and delay processing.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart illustrating an exemplary embodiment of a message scheduling control method according to the present application;
fig. 2 is a schematic flow chart of a process of message process control state switching in the message scheduling control method according to the present application;
FIG. 3 is a schematic flow chart illustrating a process of micro-adjusting a release duration in a scheduling deceleration mode by the message scheduling control method according to the present application;
FIG. 4 is a schematic flow chart illustrating a process of starting a task thread pool in a scheduling acceleration mode according to the message scheduling control method of the present application;
FIG. 5 is a flowchart illustrating a process of maintaining a task thread pool by the message scheduling control method according to the present application;
fig. 6 is a schematic diagram of a basic structure of a message scheduling control apparatus according to the present application;
fig. 7 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, "client," "terminal," and "terminal device" as used herein include both devices that are wireless signal receivers, which are devices having only wireless signal receivers without transmit capability, and devices that are receive and transmit hardware, which have receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: cellular or other communication devices such as personal computers, tablets, etc. having single or multi-line displays or cellular or other communication devices without multi-line displays; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client," "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client", "terminal Device" used herein may also be a communication terminal, a web terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially an electronic device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principle such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., a computer program is stored in the memory, and the central processing unit calls a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
According to the technical scheme, the cloud server can be deployed, data communication connection can be achieved between the cloud server and servers related to business so as to coordinate online service, and a logically related server cluster can be formed between the cloud server and other related servers so as to provide service for related terminal equipment such as a smart phone, a personal computer and a third-party server. The smart phone and the personal computer can both access the internet through a known network access mode, and establish a data communication link with the server of the application so as to access and use the service provided by the server.
For the server, a corresponding program interface is opened by a service engine providing an online service for remote invocation by various terminal devices, and the related technical solution applicable to be deployed in the server in the present application can be implemented in the server in this way.
The computer program, i.e., the application program, referred to in the present application, is developed in a computer program language, and is installed in a computer device, including a server, a terminal device, and the like, to implement the relevant functions defined in the present application, regardless of the development language used therein unless otherwise specified.
The person skilled in the art will know this: although the various methods of the present application are described based on the same concept so as to be common to each other, they may be independently performed unless otherwise specified. In the same way, for each embodiment disclosed in the present application, it is proposed based on the same inventive concept, and therefore, concepts of the same expression and concepts of which expressions are different but are appropriately changed only for convenience should be equally understood.
The embodiments to be disclosed herein can be flexibly constructed by cross-linking related technical features of the embodiments unless the mutual exclusion relationship between the related technical features is stated in the clear text, as long as the combination does not depart from the inventive spirit of the present application and can meet the needs of the prior art or solve the deficiencies of the prior art. Those skilled in the art will appreciate variations therefrom.
Referring to fig. 1, a basic flow diagram of a message scheduling control method in an exemplary embodiment of the present application is shown, in which a message scheduling control method provided in the present application is programmed to be executed by an application program in a computer device, and includes the following steps:
step S1100, continuously acquiring the total amount of requests per second and the average execution time length of each request, wherein the total amount of requests per second is obtained by continuously counting the total amount of message threads dispatched from a message queue to downstream services, and the average execution time length is the average value of each request calculated according to the total amount of requests per second:
after each message thread enters the message queue, the method can start to continuously acquire the total request per second QPS of the downstream service and the average execution time length calculated according to the total request per second. The total number of requests per second is the total number of message threads in the message queue consumed by the downstream service per second, and naturally, the average execution time length of each message thread calculated according to the total number of requests per second reflects the average response time length of each request corresponding to the downstream service processing.
Therefore, a background process can be constructed according to the following steps; it is responsible for producing the total requests per second and the corresponding average execution duration, and at runtime it cyclically executes the following specific steps:
step S1110, periodically counting the total amount of message threads that are dispatched from the message queue to be delivered to the downstream service and consumed, as the total amount of requests per second:
and the background process carries out statistics according to periodic circulation per second so as to obtain the total QPS per second, wherein within one second, the message queue dequeues the message thread to realize consumption execution, and the downstream service responds to the total quantity of the corresponding requests.
Step S1120, calculating the time slot occupied by each request in one second in the total amount of requests per second, and determining the time slot as the average execution duration:
and directly dividing the 1 second time length by the total request amount per second to obtain the time slot occupied by each request averagely, wherein the time slot is the average execution time length. The formula is expressed as:
RT=1/QPS
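As a hedged illustration of steps S1110 and S1120, the per-second statistics could be sketched as follows in Java; the class and member names (QpsMonitor, requestCounter, onMessageConsumed) are assumptions of this sketch and are not taken from the patent:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical background process: counts consumed messages once per second
// and derives QPS (total requests per second) and RT (average execution duration).
public class QpsMonitor {
    private final AtomicLong requestCounter = new AtomicLong(); // incremented per consumed message
    private volatile long qps = 0;          // total requests per second
    private volatile double rtSeconds = 0;  // average execution duration RT = 1 / QPS

    public void onMessageConsumed() {
        requestCounter.incrementAndGet();   // called each time a dequeued message thread finishes
    }

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long count = requestCounter.getAndSet(0); // step S1110: read and reset the per-second total
            qps = count;
            rtSeconds = count > 0 ? 1.0 / count : 0;  // step S1120: RT = 1 / QPS
        }, 1, 1, TimeUnit.SECONDS);
    }

    public long getQps()  { return qps; }
    public double getRt() { return rtSeconds; }
}
```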
step S1200, monitoring the change of the total amount of requests per second, and when the total amount of requests per second exceeds the threshold value of requests per second of the downstream service, controlling the current message thread to be in the slow release state of deferred execution, and correspondingly prolonging or shortening the release duration of the current message thread according to the maintenance of the increase/decrease state of the total amount of requests per second:
the downstream service pre-configures a request per second threshold maxQPS to indicate the total amount of requests per second that it is adapted to handle, and the total amount of requests per second represents the total amount of requests actually handled by the current downstream service, so that by comparing these two values, the current load condition of the downstream service can be determined.
For the current message thread executing the method, the current message thread monitors the total amount of requests per second generated continuously, when the current message thread acquires the total amount of requests per second, the total amount of requests per second is compared with the threshold value of requests per second of downstream services, and then different processing is carried out respectively.
In this embodiment, this step is intended to adapt to the need of implementing the dynamic scheduling mechanism, and make corresponding adjustment on the slow release state of the current message thread according to the variation direction of the total amount of requests per second, so that it makes different specific processes according to whether the variation direction of the total amount of requests per second is increasing or decreasing.
The current message thread can be switched between a slow release state and an active state. The slow release state is a state in which the current message thread is controlled to defer being scheduled for consumption, while the active state is a state in which it can be scheduled for execution. Both states are usually represented by the same sleep time parameter S of the current message thread: when the sleep time parameter S is 0, the current message thread is in the active state; when the sleep time parameter is non-zero, it indicates the sleep duration in the slow release state, that is, the release duration the current message thread has to wait before it can be scheduled for consumption. Therefore, in this step, the slow release state and the release duration of the current message thread can be controlled specifically by controlling the sleep time parameter S, in either of the following ways:

In one way, the per-request average corresponding to the requests per second threshold maxQPS can be taken as the limit execution duration; the average execution duration RT calculated from the total requests per second is then subtracted from this limit execution duration, and the result is used as the value of the sleep time parameter S, that is, as the release duration. The formula is as follows:
S=(1/maxQPS)–RT
The limit execution duration 1/maxQPS is a constant, while the total requests per second is a variable, so the average execution duration RT is also a variable. When the total requests per second increases, RT decreases and the release duration represented by S is extended; conversely, when the total requests per second decreases, RT increases and the release duration represented by S is shortened. Thus, once the slow release state is entered, refreshing the sleep time parameter S with the above formula each time extends or shortens the release duration of the current message thread accordingly as the increasing or decreasing state of the total requests per second is maintained.
It will be appreciated that, because the sleep time parameter S is recalculated and refreshed every time in this way, the release duration is strongly coupled to the change of the total requests per second; the control it exerts is therefore relatively rigid and the control effect is less flexible.
Therefore, in another way, built on the former, the value of the sleep time parameter S is not replaced by recalculating the above formula every time during the slow release state. Instead, S is fine-tuned on the basis of its existing value by a preset value representing a slight time change, which avoids drastic changes of S and smooths the transition of the release duration. More specific processing of this way will be further disclosed in subsequent embodiments and is not expanded here.
Thus, whether the current message thread should enter or stay in the slow release state can be decided according to the change of the total requests per second, so that the current message thread keeps sleeping in the message queue for the release duration defined by the sleep time parameter S and switches to the active state when S is 0. The sleep time parameter S may either be set under the control of the total requests per second in the first way, or be adjusted by fine-tuning in the second way. In short, a current message thread in the slow release state is made to sleep longer when the total requests per second grows and to sleep less when the total requests per second shrinks, achieving flexible adjustment and finally the switch from the slow release state to the active state.
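For illustration only, the first control way described above, in which S is recomputed from the formula S = (1/maxQPS) − RT every time a new QPS sample arrives, might be sketched as follows; the class name, the clamping of negative values to zero and the sleep-based wait are assumptions of this sketch rather than the patent's concrete implementation:

```java
// Hypothetical slow-release control for one message thread (first way: recompute S each time).
public class SlowReleaseController {
    private final double maxQps;          // requests per second threshold of the downstream service
    private volatile double sleepSeconds; // sleep time parameter S (0 = active state)

    public SlowReleaseController(double maxQps) {
        this.maxQps = maxQps;
    }

    // Called whenever a new QPS sample is produced.
    public void onQpsSample(double qps) {
        double rt = qps > 0 ? 1.0 / qps : 0;          // average execution duration RT
        double s = (1.0 / maxQps) - rt;               // S = (1 / maxQPS) - RT
        sleepSeconds = Math.max(0.0, s);              // negative values clamped to 0 (assumption of this sketch)
    }

    // The message thread sleeps for the release duration before letting itself be scheduled.
    public void awaitRelease() throws InterruptedException {
        double s = sleepSeconds;
        if (s > 0) {
            Thread.sleep((long) (s * 1000));          // slow release state: defer consumption
        }
    }
}
```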
Step S1300, counting the front blocking amount corresponding to the current message thread, and calculating the expected queuing time corresponding to the current message thread according to the product of the front blocking amount and the average execution time:
Besides controlling the release duration of the current message thread, and thus whether it sleeps or stops sleeping, according to the change of the total requests per second, the application adds another adjustment dimension for the sleep time parameter S in order to achieve a more efficient scheduling effect: the blocking situation ahead of the current message thread in the message queue. The total number of message threads queued ahead of the current message thread, namely its front blocking amount C, therefore needs to be counted. Since the average execution duration RT of requests processed by the downstream service is calculated from the total requests per second QPS and can serve as a reference value for the current message thread to predict how long the pending work will take, the front blocking amount can be multiplied by the average execution duration, and the resulting product represents the expected queuing duration T of the current message thread. The formula is as follows:
T=C*RT
it is understood that, for each message thread in the message queue, since each message thread will be used as the current message thread to execute the technical solution of the present application, each message thread can calculate and obtain its own expected queuing time T in response to each total amount of requests per second.
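A minimal sketch of the calculation T = C × RT follows; the names aheadBlockingCount and avgExecutionSecs are hypothetical:

```java
// Hypothetical helper: expected queuing duration of the current message thread.
public final class QueuingEstimate {
    private QueuingEstimate() {}

    /**
     * @param aheadBlockingCount C: number of message threads queued ahead of the current one
     * @param avgExecutionSecs   RT: average execution duration derived from QPS (1 / QPS)
     * @return T = C * RT, the expected queuing duration in seconds
     */
    public static double expectedQueuingSeconds(long aheadBlockingCount, double avgExecutionSecs) {
        return aheadBlockingCount * avgExecutionSecs;
    }
}
```

For example, with 500 message threads queued ahead and RT = 0.002 s (QPS = 500), the expected queuing duration is T = 1 s.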
Step S1400, monitoring the change of the expected queuing time, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time of the current message thread exceeds a preset delay threshold, returning the release time to zero, and delivering the current message thread to asynchronous execution when the current message thread is continuously in the active state for multiple times:
In order to adjust the sleep time parameter of a message thread in time, so that it adapts to how the downstream service actually digests message threads, a preset delay threshold can be configured in advance. It represents the maximum delay allowed for the current message thread, and accordingly each current message thread can decide whether to start the acceleration mode after observing a change of its expected queuing duration.
After observing a change of its expected queuing duration, the current message thread compares the expected queuing duration with the preset delay threshold maxDelay. If the expected queuing duration exceeds the preset delay threshold, this situation should be avoided, so the scheduling acceleration mode is enabled; a specific measure is to set the sleep time parameter to zero, that is S = 0, so that the current message thread is activated into the active state, ready to be scheduled and dequeued for consumption. This mechanism effectively applies a priority policy to the current message thread: when necessary, each message thread can be configured with its own preset delay threshold according to its actual response requirement, and after it enters the message queue its scheduling rhythm can be flexibly controlled by the relation between its expected queuing duration and its preset delay threshold. In summary, when the sleep time parameter S is 0, the release duration is cleared and the current message thread is quickly restored from the slow release state to the active state.
In order to let the current message thread execute even faster, instead of staying in the message queue waiting for synchronous execution and being blocked by preceding message threads, the application also allows such urgent message threads to be delivered for asynchronous execution. To avoid wasting system resources, however, asynchronous execution may be restricted to certain conditions; in this embodiment, the mechanism of delivering to asynchronous execution is triggered when the current message thread is in the active state multiple consecutive times. Whether the current message thread has been active multiple consecutive times can be decided by whether its expected queuing duration has exceeded the preset delay threshold multiple consecutive times, and the number of consecutive checks can be chosen flexibly by those skilled in the art, for example 5 to 20 consecutive times as a recommended empirical value for reference.
In this embodiment, only whether the expected queuing duration exceeds the preset delay threshold is considered, not by how much. In theory, an exceedance margin could therefore be added to build a buffer interval for the decision, so as to reduce the sensitivity to the expected queuing duration and avoid the unnecessary extra consumption of computing resources caused by overly frequent scheduling actions.
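To make the decision flow of this step concrete, here is a hedged sketch; the consecutive-exceed counter, the callback for zeroing S and the thread-pool hand-off are assumptions that only illustrate the logic described above:

```java
import java.util.concurrent.ExecutorService;

// Hypothetical activation logic for one message thread (names are illustrative only).
public class ActivationController {
    private final double maxDelaySeconds; // preset delay threshold maxDelay
    private final int consecutiveLimit;   // e.g. 5 to 20 consecutive exceedances trigger asynchronous delivery
    private int consecutiveExceeds = 0;

    public ActivationController(double maxDelaySeconds, int consecutiveLimit) {
        this.maxDelaySeconds = maxDelaySeconds;
        this.consecutiveLimit = consecutiveLimit;
    }

    /**
     * Called whenever a new expected queuing duration T is computed.
     *
     * @param expectedQueuingSeconds expected queuing duration T of the current message thread
     * @param zeroRelease            callback that sets the sleep time parameter S to zero (active state)
     * @param taskThreadPool         pool used for asynchronous consumption
     * @param consumeMessage         the actual message-consumption work
     */
    public void onExpectedQueuing(double expectedQueuingSeconds,
                                  Runnable zeroRelease,
                                  ExecutorService taskThreadPool,
                                  Runnable consumeMessage) {
        if (expectedQueuingSeconds <= maxDelaySeconds) {
            consecutiveExceeds = 0;                  // threshold respected: nothing to do
            return;
        }
        zeroRelease.run();                           // S = 0: switch to the active state, wait for synchronous execution
        consecutiveExceeds++;
        if (consecutiveExceeds >= consecutiveLimit) {
            taskThreadPool.submit(consumeMessage);   // active multiple consecutive times: deliver for asynchronous execution
            consecutiveExceeds = 0;
        }
    }
}
```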
To summarize this embodiment, a dynamic adjustment mechanism is embedded in the message thread, so that each message thread can adjust its own release duration according to the total requests per second handled by the downstream service of the message queue. When the total requests per second shows that the response speed of the downstream service has decreased, the message thread extends its release duration, so that the message queue as a whole enters a scheduling deceleration mode to relieve the response pressure of the downstream service; when the total requests per second shows that the response speed of the downstream service has improved, the message thread shortens its release duration, so that the message queue as a whole enters a scheduling acceleration mode to raise the processing efficiency of the downstream service. Under the coordinated control of the dynamic adjustment mechanism of each message thread, the whole message queue can thus adapt to changes in the processing capacity of the downstream service and achieve balanced scheduling, maximizing the scheduling efficiency of the message queue.
In addition, after the message queue enters the scheduling acceleration mode, whether a consuming thread starts the asynchronous execution mechanism can be decided according to the continued response situation of the downstream service. The asynchronous execution mechanism further raises the processing efficiency of the downstream service and accelerates the processing of message threads, while each message thread can still be scheduled individually, with priority management, according to its preconfigured preset delay threshold, so that the scheduling efficiency of the message queue is maximized.
Referring to fig. 2, in an embodiment, the step S1200 includes the following steps:
step S1210, monitoring and obtaining the total amount of requests per second generated by statistics each time, and comparing the difference between the total amount of requests per second and the threshold value of requests per second of the downstream service:
the current message thread monitors the generation of the total request per second to obtain the total request per second QPS, and then the total request per second QPS is compared with a preset request threshold per second of downstream service to obtain a difference value so as to judge the sizes of the request total QPS and the request total QPS.
Step S1220, when the total amount of requests per second exceeds the request threshold per second, setting the release duration of the current message thread as the difference between the maximum execution duration and the average execution duration to make the current message thread enter the slow release state, and correspondingly extending the release duration of the current message thread according to the maintenance of the growth state of the total amount of requests per second, where the maximum execution duration is the average value of requests per second calculated according to the request threshold per second:
As described in the exemplary embodiment of the present application, when the total requests per second exceeds the requests per second threshold, the release duration of the current message thread can be adjusted by setting the sleep time parameter S, specifically by applying the aforementioned formula S = (1/maxQPS) − RT: the release duration is set to the difference between the limit execution duration of the current message thread, which is the per-request average 1/maxQPS calculated from the requests per second threshold, and the average execution duration of the downstream service, which is 1/QPS. Since QPS > maxQPS, the value of S is positive, which means the release duration is greater than 0 and the corresponding sleep must be maintained, so the slow release state is entered.
Similarly, when each message thread is used as the current message thread, the setting operation of the embodiment can be performed on the sleep time parameter S according to the monitored total amount of requests per second, so that the release duration of the current message thread can be correspondingly prolonged according to the maintenance of the increase state of the total amount of requests per second.
In an embodiment further optimized on this basis, as shown in fig. 3, the step S1220 includes the following steps:
step S1221, when the total amount of requests per second first exceeds the request threshold per second, setting the release duration initialized to a zero value in advance as a difference between the limit execution duration and the average execution duration to enable the current message thread to enter a slow release state:
When a message thread enters the message queue, its sleep time parameter S is set to zero by default. Only when the total requests per second exceeds the requests per second threshold for the first time is the release duration of the current message thread set according to the formula S = (1/maxQPS) − RT, that is, to the difference between the limit execution duration and the average execution duration, so that the current message thread enters the slow release state; at the other times when the total requests per second exceeds the requests per second threshold, fine tuning is performed instead.
Step S1222, when the total amount of requests per second continuously exceeds the threshold value of requests per second, the release duration is accumulated by a preset value to realize fine tuning, so that the release duration of the current message thread is correspondingly extended according to the maintenance of the increase status of the total amount of requests per second:
When the total requests per second continuously exceeds the requests per second threshold, that is, the excess is observed again after the threshold was first exceeded, there is no need to recalculate with the formula S = (1/maxQPS) − RT. Instead, a preset value is added to the existing sleep time parameter S to achieve fine tuning. The preset value can be chosen flexibly by those skilled in the art, a recommended empirical value being, for example, 0.001 ms; its main purpose is to avoid large jumps of the release duration. If the growth state of the total requests per second QPS is continuously maintained, a fine adjustment is applied to the existing value of the sleep time parameter S each time the maintenance is observed, so as to extend the release duration appropriately.
This further optimized embodiment provides a fine-tuning means for correspondingly extending the release duration of the current message thread while the growth state of the total requests per second is maintained. When the downstream service runs under sustained high pressure, this fine tuning keeps the release duration of the current message thread changing smoothly, so that the scheduling of the whole message queue is more balanced and efficient.
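A sketch of the entering and fine-tuning branch described in steps S1221 and S1222 might look like the following; the 0.001 ms step, the boolean flag and the class name are illustrative assumptions:

```java
// Hypothetical fine-tuning of the sleep time parameter S while QPS stays above maxQPS.
public class FineTuneUp {
    private static final double STEP_SECONDS = 0.000001; // preset fine-tuning value, e.g. 0.001 ms
    private double sleepSeconds = 0;                      // S, initialised to zero when the thread enters the queue
    private boolean inSlowRelease = false;

    public void onQpsSample(double qps, double maxQps) {
        if (qps <= maxQps) {
            return; // handled by the shortening branch, see the sketch after step S1230
        }
        if (!inSlowRelease) {
            // first time the threshold is exceeded: S = (1 / maxQPS) - RT
            sleepSeconds = (1.0 / maxQps) - (1.0 / qps);
            inSlowRelease = true;
        } else {
            // threshold exceeded again: only nudge S upward by the preset value
            sleepSeconds += STEP_SECONDS;
        }
    }

    public double currentReleaseSeconds() { return sleepSeconds; }
}
```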
Step S1230, when the total amount of requests per second is lower than a certain range of the request threshold per second and the release duration is not zero, the release duration of the current message thread is correspondingly shortened according to the maintenance of the reduction state of the total amount of requests per second, so that the current message thread is continuously in the slow release state:
Unlike the exemplary embodiment of the present application, the comparison between the total requests per second QPS and the requests per second threshold maxQPS here uses a buffer interval representing a certain range. For example, this step may judge whether the total requests per second is lower than 80% of the requests per second threshold, that is, 20% of the threshold serves as the buffer interval: on the one hand the previous step responds to the case where the total requests per second exceeds 100% of the threshold, and on the other hand this step responds to the case where it falls below 80% of the threshold. In addition, for the purpose of controlling the release duration so as to maintain the state, a simultaneous condition is also considered: the control is applied only when the release duration of the current message thread is non-zero, that is, when the sleep time parameter S > 0 of the current message thread and QPS < maxQPS × 80% both hold.
The specific control means is to correspondingly shorten the release duration of the current message thread while the reduction state of the total amount of requests per second is maintained, so that the current message thread remains in the slow-release state; this is implemented by adjusting the sleep time parameter S of the current message thread.

Specifically, in this step the release duration is decremented by the preset value by modifying the sleep time parameter S, so that the release duration of the current message thread is shortened in step with the maintained reduction of the total amount of requests per second.
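Continuing the hypothetical SlowReleaseController sketch above, the reverse slow-release branch with the assumed 80% buffer interval might look as follows; the method name and the 0.8 factor are illustrative, not mandated by the patent text.

```java
// Method that could be added to the SlowReleaseController sketch above:
// reverse slow-release with a 20% buffer interval below the request-per-second threshold.
public void onPerSecondStatsWithBuffer(double qps, double avgRtMs) {
    double bufferedThreshold = maxQps * 0.8; // 80% of maxQPS; the remaining 20% is the buffer interval
    if (qps > maxQps) {
        onPerSecondStats(qps, avgRtMs);      // forward slow-release path shown earlier
    } else if (sleepTimeMs > 0.0 && qps < bufferedThreshold) {
        // Reduction state maintained: shorten the release duration by the same preset value,
        // keeping the thread in the slow-release state until S returns to zero.
        sleepTimeMs = Math.max(0.0, sleepTimeMs - SLEEP_INCREMENT_MS);
    }
    // Between 80% and 100% of maxQPS nothing changes, which avoids rapid state switching.
}
```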
As can be seen from this embodiment and the variations optimized on its basis, after the current message thread satisfies the condition for entering the slow-release state, a distinction is made between entering the slow-release state for the first time and remaining in it subsequently; a decision buffer interval is set to prevent the current message thread from switching back from the slow-release state to the active state too quickly, and in some embodiments a fine-tuning mechanism is applied. As a result, adjustment of the sleep time parameter of the current message thread is smoother overall, frequent state switching of message threads in the message queue is avoided, computer running resources are saved, and the running efficiency of the computer device is improved.
Referring to fig. 4, in an embodiment, the step S1400 includes the following steps:
step S1410, monitoring and acquiring the expected queuing time calculated each time:
in this embodiment, the message thread uses a monitoring mechanism to obtain the expected queuing duration produced by each calculation. As described above, the expected queuing duration represents the waiting time imposed on the current message thread by the blocking ahead of it in the queue.
Step S1420, determining whether the expected queuing time exceeds a preset delay threshold, and when the expected queuing time exceeds the preset delay threshold, zeroing the release time of the current message thread to enable the current message thread to be in an active state waiting for synchronous execution:
the factors that cause the release duration of the current message thread to return to zero mainly include two situations: the sleep time parameter S being decremented repeatedly because the total amount of requests per second of the downstream service keeps falling, and the expected queuing duration exceeding the preset delay threshold. In either situation the sleep time parameter S is likely to return to zero, so that the current message thread switches to the active state; once in the active state, the message thread waits in the message queue for direct scheduling and synchronous execution.
Step S1430, judging whether the expected queuing duration continuously exceeds the preset delay threshold for multiple times, and if so, delivering the current message thread to a task thread pool as an asynchronous message-consumption thread to realize asynchronous execution:
as described above, it is determined whether the expected queuing duration exceeds the preset delay threshold for multiple consecutive times; the number of times can be set flexibly. Once the determination holds for the predetermined number of consecutive checks, it is concluded that letting the current message thread keep waiting for synchronous execution in the current message queue would impair access responsiveness, so the thread is handed to the task thread pool instead.
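The following sketch is an illustrative rendering of steps S1410 to S1430, building on the hypothetical SlowReleaseController above. The class name ActiveStateController, the parameter names, and the consecutive-hit counter are assumptions; the expected queuing duration is computed as the product of the front blocking amount and the average execution duration, as described earlier.

```java
import java.util.concurrent.ExecutorService;

// Illustrative sketch of the active-state switching and asynchronous delivery logic.
public class ActiveStateController {

    private final double delayThresholdMs; // preset delay threshold
    private final int maxConsecutiveHits;  // consecutive exceedances that trigger asynchronous delivery
    private int consecutiveHits = 0;

    public ActiveStateController(double delayThresholdMs, int maxConsecutiveHits) {
        this.delayThresholdMs = delayThresholdMs;
        this.maxConsecutiveHits = maxConsecutiveHits;
    }

    /**
     * Evaluates the current message thread each time the expected queuing duration is recalculated.
     *
     * @param frontBlockCount number of message threads queued ahead of the current one
     * @param avgRtMs         average execution duration per request, in milliseconds
     * @param messageTask     work carried by the current message thread
     * @param slowRelease     controller holding the sleep time parameter S (see earlier sketch)
     * @param taskPool        task thread pool used for asynchronous consumption
     */
    public void onQueueStats(long frontBlockCount, double avgRtMs, Runnable messageTask,
                             SlowReleaseController slowRelease, ExecutorService taskPool) {
        double expectedQueuingMs = frontBlockCount * avgRtMs; // product defined in step S1300
        if (expectedQueuingMs > delayThresholdMs) {
            slowRelease.reset();        // zero the release duration: active state, synchronous execution
            consecutiveHits++;
            if (consecutiveHits >= maxConsecutiveHits) {
                taskPool.submit(messageTask); // deliver to the task thread pool for asynchronous execution
                consecutiveHits = 0;
            }
        } else {
            consecutiveHits = 0; // the exceedance must be consecutive to trigger delivery
        }
    }
}
```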
The task thread pool can be created according to needs, can be configured in advance, and can be flexibly implemented by a person skilled in the art. However, in order to save the system overhead, the task thread pool needs to be maintained, so in a further embodiment, as shown in fig. 5, the method further includes the following steps:
step S1500, monitoring the total amount of the message threads in the task thread pool:
a service module for the task thread pool can be preset in the background and made responsible for tracking the total number of message threads in the task thread pool enabled by the method, thereby carrying out maintenance of the task thread pool.
Step S1600, when the total number of the message threads in the task thread pool is larger than a preset upper limit, the message threads are blocked from being added:
to maintain the task thread pool and avoid congesting it, an upper limit, i.e., a preset upper limit, may be set in advance for the total number of consumption threads the task thread pool can handle. The preset upper limit is generally smaller than the queue length of the message queue of the present application, for example 1/10 of it. When the total number of message threads in the task thread pool is greater than the preset upper limit, adding new message threads from the message queue to the task thread pool may be prohibited; in this case, even if step S1400 intends to add the current message thread to the task thread pool for asynchronous execution, the addition is blocked and fails. This keeps the task thread pool small and flexible, does not prevent the message queue from playing its primary role, and makes resource allocation more balanced.
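A minimal sketch of such a bounded task thread pool follows, assuming names (BoundedTaskPool, trySubmit) and pool sizing that are not specified in the patent text; the capacity check is advisory and not race-free, which is acceptable for illustration.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative bounded task thread pool: submissions are blocked once the total
// number of message threads it holds reaches the preset upper limit.
public final class BoundedTaskPool {

    private final int presetUpperLimit;        // e.g. 1/10 of the message queue length
    private final ThreadPoolExecutor executor;

    public BoundedTaskPool(int presetUpperLimit) {
        this.presetUpperLimit = presetUpperLimit;
        this.executor = new ThreadPoolExecutor(
                1, presetUpperLimit, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(presetUpperLimit));
    }

    /** @return true if the message thread was accepted, false if it was blocked from being added. */
    public boolean trySubmit(Runnable messageThread) {
        if (executor.getActiveCount() + executor.getQueue().size() >= presetUpperLimit) {
            return false; // over the preset upper limit: block the addition, caller keeps waiting synchronously
        }
        executor.submit(messageThread);
        return true;
    }

    /** True when no message thread is running or queued in the pool. */
    public boolean isIdle() {
        return executor.getActiveCount() == 0 && executor.getQueue().isEmpty();
    }

    /** Releases pool resources once running tasks finish. */
    public void destroy() {
        executor.shutdown();
    }
}
```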
Step S1700, when the total number of message threads in the task thread pool has been zero for a plurality of consecutive checks and the total amount of requests per second is lower than the request-per-second threshold, the task thread pool is destroyed:
a task thread pool that occupies system resources for a long time also wastes them, so an idle-running judgment can be made on the task thread pool each time a per-second statistic is produced. Specifically, on one hand it is judged whether the total amount of requests per second obtained each time is lower than the request-per-second threshold; if so, the message queue is not congested and the task thread pool is not strictly necessary. On the other hand, it is judged whether the task thread pool contains no message thread at all; if so, the pool is currently idle. If both judgments hold in a given check, one idle observation of the task thread pool is recorded; if several consecutive checks confirm the idle running, it is determined that the task thread pool can be reclaimed. The number of checks that constitutes the idle-state decision can be set flexibly by those skilled in the art, for example a value between 5 and 10, preferably 10.
To reclaim resources and improve device efficiency, the task thread pool is destroyed directly once it is confirmed to be in the idle state.
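An illustrative idle-detection loop for the task thread pool is sketched below, reusing the hypothetical BoundedTaskPool above; the counter value of 10 and the once-per-second cadence are assumptions chosen to match the per-second statistics described in this section.

```java
// Illustrative idle-detection and destruction logic for the task thread pool.
public final class TaskPoolReaper {

    private static final int IDLE_CHECKS_BEFORE_DESTROY = 10; // e.g. a value between 5 and 10
    private int idleChecks = 0;

    /** Called once per second together with the per-second statistics. Returns true when destroyed. */
    public boolean maybeDestroy(BoundedTaskPool pool, double qps, double maxQps) {
        boolean idle = pool.isIdle() && qps < maxQps; // no threads in the pool and no backlog pressure
        idleChecks = idle ? idleChecks + 1 : 0;       // idle observations must be consecutive
        if (idleChecks >= IDLE_CHECKS_BEFORE_DESTROY) {
            pool.destroy();                           // reclaim the idle task thread pool
            idleChecks = 0;
            return true;
        }
        return false;
    }
}
```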
According to this embodiment, a monitoring and maintenance mechanism for the task thread pool is established so that the pool is created on demand and reclaimed promptly when idle, which saves system overhead and improves system operating efficiency.
Referring to fig. 6, an embodiment of the present application further provides a message scheduling control apparatus, which includes a data obtaining module 1100, a slow release control module 1200, a block counting module 1300, and an active control module 1400. The data obtaining module 1100 is configured to continuously obtain the total amount of requests per second and the average execution duration of each request, where the total amount of requests per second is obtained by continuously counting the total number of message threads scheduled from a message queue to a downstream service, and the average execution duration is the per-request average calculated from the total amount of requests per second. The slow release control module 1200 is configured to monitor changes in the total amount of requests per second, control the current message thread to be in a slow-release state of deferred execution when the total amount of requests per second exceeds the request-per-second threshold of the downstream service, and correspondingly extend or shorten the release duration of the current message thread according to the maintenance of the increase or decrease state of the total amount of requests per second. The block counting module 1300 is configured to count the front blocking amount corresponding to the current message thread and calculate the expected queuing duration corresponding to the current message thread from the product of the front blocking amount and the average execution duration. The active control module 1400 is configured to monitor changes in the expected queuing duration, control the current message thread to be in an active state waiting for synchronous execution and zero its release duration when the expected queuing duration exceeds a preset delay threshold, and deliver the current message thread to asynchronous execution when it remains in the active state for multiple consecutive times.
In a preferred embodiment, each of the message threads in the message queue is configured with the respective module of the apparatus.
In an embodiment, the slow release control module 1200 includes a slow release monitoring sub-module, configured to monitor and obtain a total amount of requests per second generated by each statistics, and compare a difference between the total amount of requests per second and a threshold value of requests per second for the downstream service; the forward slow-release sub-module is used for setting the release time length of the current message thread as the difference value between the limit execution time length and the average execution time length when the total request per second exceeds the request threshold per second so as to enable the current message thread to enter a slow-release state, correspondingly prolonging the release time length of the current message thread according to the maintenance of the increase state of the total request per second, wherein the limit execution time length is the average value of each request calculated according to the request threshold per second; and the reverse slow-release sub-module is used for correspondingly shortening the release time length of the current message thread according to the maintenance of the reduction state of the total request amount per second when the total request amount per second is lower than a certain range of the request threshold value per second and the release time length is not zero, so that the current message thread is continuously in a slow-release state.
In a further embodiment, the forward slow-release sub-module comprises: a first slow-release secondary module, configured to set the release duration, initialized to zero in advance, to the difference between the limit execution duration and the average execution duration when the total amount of requests per second exceeds the request-per-second threshold for the first time, so that the current message thread enters the slow-release state; and a continuous slow-release secondary module, configured to accumulate a preset value onto the release duration to achieve fine tuning when the total amount of requests per second continuously exceeds the request-per-second threshold, so that the release duration of the current message thread is correspondingly prolonged according to the maintenance of the growth state of the total amount of requests per second.
In a further embodiment, the reverse slow-release sub-module is further configured to decrement the release duration by the preset value to achieve fine tuning, so that the release duration of the current message thread is correspondingly shortened according to the maintenance of the state of reduction of the total amount of requests per second.
In an embodiment, the active control module 1400 includes: an active monitoring sub-module, configured to monitor and obtain the expected queuing duration produced by each calculation; an active switching secondary module, configured to judge whether the expected queuing duration exceeds the preset delay threshold and, when it does, zero the release duration of the current message thread so that the current message thread is in an active state waiting for synchronous execution; and an asynchronous delivery secondary module, configured to judge whether the expected queuing duration exceeds the preset delay threshold for multiple consecutive times and, if so, deliver the current message thread to a task thread pool as an asynchronous message-consumption thread to realize asynchronous execution.
In a preferred embodiment, the apparatus further comprises: a total amount monitoring sub-module, configured to monitor the total number of message threads in the task thread pool; an overrun blocking sub-module, configured to block message threads from being added to the task thread pool when the total number of message threads in the task thread pool is greater than a preset upper limit; and an asynchronous recovery sub-module, configured to destroy the task thread pool when the total number of message threads in the task thread pool has returned to zero for a plurality of consecutive checks and the total amount of requests per second is lower than the request-per-second threshold.
In a preferred embodiment, the data acquisition module 1100 comprises: a periodic statistics sub-module, configured to count, once per second, the total number of message threads dispatched from the message queue to the downstream service for consumption, as the total amount of requests per second; and a duration calculation sub-module, configured to calculate the time slot occupied on average by each request of the total amount of requests per second within that second, and determine it as the average execution duration.
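A minimal sketch of such a per-second statistics collector is shown below. The class and method names (PerSecondStats, recordDispatch, rollOver) are assumptions; it treats the per-second dispatch count as QPS and derives the average execution duration as the average time slot per request within that second, matching the description above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative per-second statistics collector for QPS and average execution duration.
public final class PerSecondStats {

    private final AtomicLong counter = new AtomicLong();
    private volatile double qps = 0.0;
    private volatile double avgRtMs = 0.0;

    /** Called whenever a message thread is dispatched from the message queue to the downstream service. */
    public void recordDispatch() {
        counter.incrementAndGet();
    }

    /** Called once per second, e.g. from a scheduled task, to roll the statistics window over. */
    public void rollOver() {
        long total = counter.getAndSet(0);
        qps = total;
        avgRtMs = total > 0 ? 1000.0 / total : 0.0; // average time slot per request within one second
    }

    public double qps()     { return qps; }
    public double avgRtMs() { return avgRtMs; }
}
```

In use, a scheduled task could call rollOver() every second and feed qps() and avgRtMs() into the slow-release and active-state controllers sketched earlier; this wiring is likewise an assumption rather than part of the disclosed apparatus.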
The embodiment of the application also provides computer equipment. Referring to fig. 7, fig. 7 is a block diagram of a basic structure of a computer device according to the present embodiment.
Fig. 7 is a schematic diagram of the internal structure of the computer device. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database can store control information sequences, and the computer-readable instructions, when executed by the processor, can cause the processor to implement a message scheduling control method. The processor of the computer device provides computing and control capabilities and supports the operation of the whole computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform a message scheduling control method. The network interface of the computer device is used to connect and communicate with a terminal. Those skilled in the art will appreciate that the structure shown in fig. 7 is merely a block diagram of part of the structure related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute specific functions of each module/sub-module in fig. 6, and the memory stores program codes and various data required for executing the modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores program codes and data necessary for executing all the submodules in the message scheduling control device, and the server can call the program codes and data of the server to execute the functions of all the submodules.
The present application further provides a storage medium storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of the message scheduling control method of any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
In summary, when message threads in the message queue are consumed too quickly, the method and apparatus of the present application automatically reduce the consumption rate to protect downstream services; when consumption is too slow, they automatically increase the consumption rate, alleviating the problems caused by consumption backlog and delayed processing.
Those of skill in the art will appreciate that the various operations, methods, steps in the processes, acts, or solutions discussed in this application can be interchanged, modified, combined, or eliminated. Further, other steps, measures, or schemes in various operations, methods, or flows that have been discussed in this application can be alternated, altered, rearranged, broken down, combined, or deleted. Further, steps, measures, schemes in the prior art having various operations, methods, procedures disclosed in the present application may also be alternated, modified, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several modifications and refinements without departing from the principle of the present application, and such modifications and refinements shall also fall within the protection scope of the present application.

Claims (10)

1. A message scheduling control method is characterized by comprising the following steps:
continuously acquiring the total amount of requests per second and the average execution time length of each request, wherein the total amount of requests per second is obtained by continuously counting the total amount of message threads dispatched from a message queue to downstream services, and the average execution time length is the average value of each request calculated according to the total amount of requests per second;
monitoring the change of the total request per second, controlling the current message thread to be in a slow release state of delayed execution when the total request per second exceeds the request threshold per second of the downstream service, and correspondingly prolonging or shortening the release duration of the current message thread according to the maintenance of the increase and decrease state of the total request per second;
counting the front blocking amount corresponding to the current message thread, and calculating the expected queuing time length corresponding to the current message thread according to the product of the front blocking amount and the average execution time length;
monitoring the change of the expected queuing time length, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time length of the current message thread exceeds a preset delay threshold value, returning the release time length to zero, and delivering the current message thread to asynchronous execution when the current message thread is continuously in the active state for multiple times.
2. The method according to claim 1, wherein each of said message threads in said message queue independently performs the steps of the method as said current message thread.
3. The message scheduling control method according to claim 1, wherein the change of the total amount of requests per second is monitored, when the total amount of requests per second exceeds the threshold value of requests per second of the downstream service, the current message thread is controlled to be in the slow release state of deferred execution, and the release duration of the current message thread is correspondingly prolonged or shortened according to the maintenance of the increase/decrease state of the total amount of requests per second, comprising the following steps:
monitoring and acquiring the total amount of requests per second generated by statistics each time, and comparing the difference between the total amount of requests per second and the threshold value of requests per second of the downstream service;
when the total request per second exceeds the request threshold per second, setting the release time length of the current message thread as the difference value between the limit execution time length and the average execution time length to enable the current message thread to enter a slow release state, and correspondingly prolonging the release time length of the current message thread according to the maintenance of the growth state of the total request per second, wherein the limit execution time length is the average value of each request calculated according to the request threshold per second;
and when the total request amount per second is lower than a certain range of the request threshold value per second and the release time length is not zero, correspondingly shortening the release time length of the current message thread according to the maintenance of the reduction state of the total request amount per second so as to enable the current message thread to be continuously in a slow release state.
4. The message scheduling control method according to claim 3, wherein the step executed when the total amount of requests per second exceeds the threshold value of requests per second includes the steps of:
when the total request amount per second exceeds the request threshold per second for the first time, setting the release duration initialized to a zero value as the difference value between the limit execution duration and the average execution duration so as to enable the current message thread to enter a slow release state;
and when the total request amount per second continuously exceeds the request threshold value per second, accumulating the release time length by a preset value to realize fine adjustment, so that the release time length of the current message thread is correspondingly prolonged according to the maintenance of the increase state of the total request amount per second.
5. The message scheduling control method according to claim 4, wherein in the step executed when the total requested amount per second is lower than the certain range of the request per second threshold and the release duration is not zeroed, the release duration is decremented by the preset value to achieve fine tuning, so that the release duration of the current message thread is correspondingly shortened according to the maintenance of the status of the decrease of the total requested amount per second.
6. The message scheduling control method according to any one of claims 1 to 5, wherein the method monitors the change of the expected queuing time, controls the current message thread to be in an active state waiting for synchronous execution when the expected queuing time of the current message thread exceeds a preset delay threshold, and zeros the release time, and delivers the current message thread to asynchronous execution when the current message thread is in the active state for a plurality of consecutive times, comprising the following steps:
monitoring and acquiring the expected queuing time obtained by each calculation;
judging whether the expected queuing time exceeds a preset delay threshold value or not, and when the expected queuing time exceeds the preset delay threshold value, resetting the release time of the current message thread to zero to enable the current message thread to be in an active state waiting for synchronous execution;
and judging whether the expected queuing time continuously exceeds the preset delay threshold for multiple times, if so, delivering the current message thread to a task thread pool for asynchronous message consumption threads to realize asynchronous execution.
7. The message scheduling control method of claim 6, characterized in that the method further comprises the steps of:
monitoring the total amount of message threads in the task thread pool;
when the total number of the message threads in the task thread pool is greater than a preset upper limit, the message threads are blocked from being added;
and when the total number of message threads in the task thread pool is returned to zero for a plurality of times and the total number of requests per second is lower than the threshold value of requests per second, destroying the task thread pool.
8. A message scheduling control apparatus, comprising:
the data acquisition module is used for continuously acquiring the total amount of requests per second and the average execution time length of each request, wherein the total amount of requests per second is obtained by continuously counting the total amount of message threads dispatched from a message queue to downstream services, and the average execution time length is the average value of each request calculated according to the total amount of requests per second;
the slow release control module is used for monitoring the change of the total request amount per second, controlling the current message thread to be in a slow release state of delayed execution when the total request amount per second exceeds the request threshold value per second of the downstream service, and correspondingly prolonging or shortening the release duration of the current message thread according to the maintenance of the increase and decrease state of the total request amount per second;
the block counting module is used for counting the front block amount corresponding to the current message thread and calculating the expected queuing time length corresponding to the current message thread according to the product of the front block amount and the average execution time length;
and the activity control module is used for monitoring the change of the expected queuing time length, controlling the current message thread to be in an active state waiting for synchronous execution when the expected queuing time length of the current message thread exceeds a preset delay threshold value, returning the release time length to zero, and delivering the current message thread to asynchronous execution when the current message thread is continuously in the active state for multiple times.
9. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by the processor, cause the processor to carry out the steps of the message scheduling control method according to any one of claims 1 to 7.
10. A storage medium having computer-readable instructions stored thereon, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of message scheduling control of any of claims 1 to 7.
CN202110886743.3A 2021-08-03 2021-08-03 Message scheduling control method and corresponding device, equipment and medium thereof Active CN113687928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110886743.3A CN113687928B (en) 2021-08-03 2021-08-03 Message scheduling control method and corresponding device, equipment and medium thereof

Publications (2)

Publication Number Publication Date
CN113687928A true CN113687928A (en) 2021-11-23
CN113687928B CN113687928B (en) 2024-03-22

Family

ID=78578642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110886743.3A Active CN113687928B (en) 2021-08-03 2021-08-03 Message scheduling control method and corresponding device, equipment and medium thereof

Country Status (1)

Country Link
CN (1) CN113687928B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200264942A1 (en) * 2017-12-25 2020-08-20 Tencent Technology (Shenzhen) Company Limited Message management method and device, and storage medium
CN113138860A (en) * 2020-01-17 2021-07-20 中国移动通信集团浙江有限公司 Message queue management method and device
CN112398752A (en) * 2020-11-16 2021-02-23 广州华多网络科技有限公司 Message push control method and device, equipment and medium thereof
CN112631806A (en) * 2020-12-28 2021-04-09 平安银行股份有限公司 Asynchronous message arranging and scheduling method and device, electronic equipment and storage medium
CN112954004A (en) * 2021-01-26 2021-06-11 广州华多网络科技有限公司 Second-killing activity service response method and device, equipment and medium thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827033A (en) * 2022-04-15 2022-07-29 咪咕文化科技有限公司 Data flow control method, device, equipment and computer readable storage medium
CN114827033B (en) * 2022-04-15 2024-04-19 咪咕文化科技有限公司 Data flow control method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN113687928B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
US5875329A (en) Intelligent batching of distributed messages
US8547840B1 (en) Bandwidth allocation of bursty signals
CN106557369B (en) Multithreading management method and system
US20020007387A1 (en) Dynamically variable idle time thread scheduling
US20190042331A1 (en) Power aware load balancing using a hardware queue manager
US20120136850A1 (en) Memory usage query governor
CN113672383A (en) Cloud computing resource scheduling method, system, terminal and storage medium
CN114217993A (en) Method, system, terminal device and storage medium for controlling thread pool congestion
US20230305618A1 (en) Throughput-Optimized, Quality-Of-Service Aware Power Capping System
WO2022246759A1 (en) Power consumption adjustment method and apparatus
CN113687928B (en) Message scheduling control method and corresponding device, equipment and medium thereof
CN109324891A (en) A kind of periodic duty low-power consumption scheduling method of ratio free time distribution
US20230273833A1 (en) Resource scheduling method, electronic device, and storage medium
CN112398752B (en) Message push control method and device, equipment and medium thereof
US6604200B2 (en) System and method for managing processing
Wu et al. CPU scheduling for statistically-assured real-time performance and improved energy efficiency
CN114827033B (en) Data flow control method, device, equipment and computer readable storage medium
CN107911484B (en) Message processing method and device
CN113971082A (en) Task scheduling method, device, equipment, medium and product
CN113971083A (en) Task scheduling method, device, equipment, medium and product
WO2016058149A1 (en) Method for predicting utilization rate of processor, processing apparatus and terminal device
US20220342474A1 (en) Method and system for controlling peak power consumption
CN112000294A (en) IO queue depth adjusting method and device and related components
CN114079976B (en) Slice resource scheduling method, apparatus, system and computer readable storage medium
CN117492961A (en) Dynamic delay queue system based on time wheel and data processing method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant