CN111367637B - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN111367637B
CN111367637B (application CN202010138709.3A)
Authority
CN
China
Prior art keywords
task
concurrency
predicted
processed
response time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010138709.3A
Other languages
Chinese (zh)
Other versions
CN111367637A (en)
Inventor
贺财平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Shengxin Shanghai Information Technology Co ltd
Original Assignee
Ant Shengxin Shanghai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Shengxin Shanghai Information Technology Co ltd
Priority to CN202010138709.3A
Publication of CN111367637A
Application granted
Publication of CN111367637B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/4887: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the present specification provide a task processing method and device. The method includes the following steps: receiving a timing task processing request, and acquiring tasks to be processed according to the timing task processing request; acquiring, from a preset response time database, the response time taken by a downstream system to process the previous batch of tasks to be processed, and calculating a predicted concurrency for the tasks to be processed based on the response time and weights; and sending a service request to the downstream system based on the predicted concurrency, and receiving the processing result returned by the downstream system for processing the service request, wherein the service request includes the tasks to be processed corresponding to the predicted concurrency.

Description

Task processing method and device
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a task processing method. One or more embodiments of the present specification relate to a task processing device, a computing device, and a computer-readable storage medium.
Background
In data processing there is a class of service that may be referred to as a timed task, i.e., an interface call request triggered by a timer. For a request with a low timeliness requirement, the service provider, upon receiving the request from an upstream system, can first persist the task, return an acceptance acknowledgement, later trigger execution through a timer, and then return the final processing result. When the timer triggers task execution, tasks are generally executed concurrently by multiple threads in order to improve throughput; if the tasks depend on a downstream system, setting the concurrency too high or too low directly affects system throughput.
In the prior art, the concurrency is usually fixed when a timed task is executed, so system throughput cannot be guaranteed and the data transmission and processing efficiency of the network system is affected. There is therefore a need to adaptively adjust the concurrency to an appropriate value so as to improve task processing efficiency.
Disclosure of Invention
In view of this, the present embodiment provides a task processing method. One or more embodiments of the present specification are also directed to a task processing device, a computing device, and a computer-readable storage medium that address the technical deficiencies of the prior art.
According to a first aspect of embodiments of the present specification, there is provided a task processing method, including:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
And sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
Optionally, the calculating the predicted concurrency of the task to be processed based on the response time and the weight includes:
and calculating the predicted concurrency of the task to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(xᵢ) = w₀ + w₁x₁ + w₂x₂ + … + wᵢxᵢ
wherein f(xᵢ) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the receiving the processing result returned by the downstream system for processing the service request includes:
and receiving a processing result of the task to be processed, which is returned by the downstream system and corresponds to the predicted concurrency in the service request.
Optionally, after receiving the processing result of the task to be processed corresponding to the predicted concurrency in the service request returned by the downstream system, the method further includes:
recording the return time at which the downstream system returns the processing result of the task to be processed corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
Optionally, before determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time, the method further includes:
and acquiring the sending time of sending the service request to the downstream system based on the predicted concurrency quantity.
Optionally, the determining, based on the return time, a response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request includes:
and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
Optionally, after determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference between the return time and the sending time, the method further includes:
storing the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and updating the preset response time database in real time or at fixed time.
Optionally, the current concurrency is a last concurrency of the predicted concurrency.
According to a second aspect of embodiments of the present specification, there is provided a task processing device comprising:
the task acquisition module is configured to receive a timing task processing request and acquire a task to be processed according to the timing task processing request;
the prediction concurrency calculation module is configured to acquire response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculate the prediction concurrency of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
the task processing module is configured to send a service request to the downstream system based on the predicted concurrency quantity and receive a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
Optionally, the prediction concurrency calculation module is further configured to:
and calculating the predicted concurrency of the task to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(xᵢ) = w₀ + w₁x₁ + w₂x₂ + … + wᵢxᵢ
wherein f(xᵢ) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the task processing module is further configured to:
and receiving a processing result of the task to be processed, which is returned by the downstream system and corresponds to the predicted concurrency in the service request.
Optionally, the apparatus further includes:
the response time determining module is configured to record the return time when the downstream system returns the processing result of the task to be processed corresponding to the predicted concurrency in the service request, and determine the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
Optionally, the apparatus further includes:
and a sending time acquisition module configured to acquire the sending time of sending a service request to the downstream system based on the predicted concurrency quantity.
Optionally, the response time determining module is further configured to:
and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
The apparatus optionally further comprises:
and the response time storage module is configured to store the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and update the preset response time database in real time or at fixed time.
Optionally, the current concurrency is a last concurrency of the predicted concurrency.
According to a third aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is for storing computer-executable instructions, and the processor is for executing the computer-executable instructions:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
And sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
According to a fourth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of any one of the task processing methods.
The embodiment of the specification provides a task processing method and a task processing device, wherein the task processing method comprises the steps of receiving a timing task processing request and acquiring a task to be processed according to the timing task processing request; acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and the weight; sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result returned by the downstream system for processing the service request;
according to this task processing method, by exploiting the relation between the response time of the downstream system and the concurrency, and by calculating the predicted concurrency from the response time and the weights of the downstream system, the system can always keep the most suitable throughput when processing timed tasks, which effectively guarantees the task processing capability of the downstream system and greatly improves the efficiency of timed task processing.
Drawings
FIG. 1 is a flow chart of a method of task processing provided in one embodiment of the present disclosure;
FIG. 2 is a flow chart of another task processing method provided by one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a task processing device according to one embodiment of the present disclosure;
FIG. 4 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. This description may be embodied in many other forms than described herein and similarly generalized by those skilled in the art to whom this disclosure pertains without departing from the spirit of the disclosure and, therefore, this disclosure is not limited by the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
First, terms related to one or more embodiments of the present specification will be explained.
Upstream and downstream systems: in an SOA (Service-Oriented Architecture) environment, systems depend on one another. In the interactions between systems, the initiator of a service may be called the upstream system and the provider of the service may be called the downstream system. For example, if system A is the initiator of a service and system B is the provider of that service, then system B is the downstream system relative to system A, and system A is the upstream system relative to system B.
Response time of downstream system: the time from the upstream system initiating a request to the downstream system until the return result of the downstream system is received; it includes network time, the processing time of the downstream system, and so on.
Timing tasks: interface call requests triggered by a timer. For example, because servers may be busy during the day, for some tasks with low timeliness requirements (routine daily server maintenance tasks, database backup tasks, etc.) the system can be made to process such tasks at a scheduled time late at night or during other periods when the servers are expected not to be busy.
Concurrency amount: the number of timed tasks that the system processes at the same time.
In the present specification, a task processing method is provided, and the present specification relates to a task processing device, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Referring to fig. 1, fig. 1 shows a flowchart of a task processing method according to an embodiment of the present disclosure, including steps 102 to 106.
Step 102: and receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request.
In specific implementation, the timed task processing request may be a task processing command triggered by a timer at a preset time. For example, if the preset time is 23:00, the timer triggers the timed task processing request and sends it to the upstream system when 23:00 is reached. When the upstream system receives the timed task processing request, it acquires the tasks to be processed from the pending-task database according to the request; the timed task processing request includes, but is not limited to, an identifier of the tasks to be processed, the number of tasks to be processed, and the like.
Specifically, the tasks to be processed include, but are not limited to, tasks with low timeliness requirements, such as a deduction task in online shopping, a database backup task, and the like.
In practical applications, the number of tasks to be processed carried in the timed task processing request may be set according to historical task processing data or expert experience, for example 500 or 1000, depending on the specific application; it is not limited in any way here.
For example, if the identifier of the tasks to be processed carried in the timed task processing request is a and the number of tasks to be processed is 500, then receiving the timed task processing request and acquiring the tasks to be processed according to it means that the upstream system receives the timed task processing request and fetches 500 pending tasks marked as a according to the request.
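By way of illustration only, a minimal sketch of this step is given below in Python; the in-memory store PENDING_TASKS, the helper fetch_pending_tasks and the one-second delay are assumptions made for the sketch and are not part of the embodiment.

```python
import threading

# Hypothetical in-memory stand-in for the pending-task database.
PENDING_TASKS = [{"id": i, "tag": "a"} for i in range(2000)]

def fetch_pending_tasks(request):
    """Return up to request['count'] tasks whose tag matches request['tag']."""
    matched = [t for t in PENDING_TASKS if t["tag"] == request["tag"]]
    return matched[:request["count"]]

def on_timer_fired():
    # The timer fires at the preset time (e.g. 23:00) and sends the
    # timed task processing request to the upstream system.
    request = {"tag": "a", "count": 500}
    tasks = fetch_pending_tasks(request)
    print(f"fetched {len(tasks)} pending tasks marked as {request['tag']}")

# A real deployment would schedule this with cron or a job scheduler;
# here a one-second delay stands in for the preset time.
threading.Timer(1.0, on_timer_fired).start()
```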
Step 104: acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time.
The response time for the previous batch of tasks to be processed is the difference between the time at which the task processing request sent by the upstream system was received and the time at which the task processing was completed. For example, if the downstream system received the previous task processing request from the upstream system at 15:00 and processed the tasks on three threads that finished at 15:12, 15:09 and 15:06 respectively, the response times are 12, 9 and 6 minutes.
The weight is determined based on a current concurrency and a response time of the downstream system to process a last task to be processed, wherein the current concurrency is a last concurrency of the predicted concurrency.
Specifically, the calculating the predicted concurrency of the task to be processed based on the response time and the weight includes:
and calculating the predicted concurrency of the task to be processed through a linear regression algorithm based on the response time and the weight.
Wherein, the formula corresponding to the linear regression algorithm is:
f(xᵢ) = w₀ + w₁x₁ + w₂x₂ + … + wᵢxᵢ
wherein f(xᵢ) represents the predicted concurrency, w represents the weights, x represents the response times of the downstream system, and i represents the number of threads of the downstream system.
For example, suppose the downstream system includes three threads, namely thread 1, thread 2 and thread 3; the response time of thread 1 for the previous batch of tasks is 12 minutes, that of thread 2 is 9 minutes and that of thread 3 is 6 minutes; and w₀ is 1, w₁ is 1, w₂ is 2 and w₃ is 2. The predicted concurrency calculated by the formula corresponding to the linear regression algorithm is then:
f(xᵢ) = 1 + 1×12 + 2×9 + 2×6
i.e. the predicted concurrency f(xᵢ) = 43.
In practical application, when the system runs for the first time, the initial values of the weights can be chosen based on expert experience. Once the system is running, the weights are determined, through the formula corresponding to the linear regression algorithm, from the current concurrency and the response time of the downstream system for the previous batch of tasks; the calculated weights are then fed back into the formula, together with the response time of the downstream system for the previous batch, to calculate the predicted concurrency. Here w₀ is an offset determined according to the CPU, memory, system load and other conditions of the downstream system during actual task processing, and is not limited in any way.
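By way of illustration only, the calculation of the example above can be sketched in a few lines of Python; the function name predict_concurrency and the hard-coded weights are assumptions made for the sketch, not part of the claimed method.

```python
def predict_concurrency(weights, response_times):
    """f(x) = w0 + w1*x1 + ... + wi*xi, truncated to an integer concurrency."""
    w0, ws = weights[0], weights[1:]
    return int(w0 + sum(w * x for w, x in zip(ws, response_times)))

# Values from the worked example: offset w0 = 1, per-thread weights 1, 2, 2,
# and response times of 12, 9 and 6 minutes for the previous batch.
weights = [1, 1, 2, 2]
last_response_times = [12, 9, 6]
print(predict_concurrency(weights, last_response_times))  # prints 43
```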
In this task processing method, because the response time of the downstream system is correlated with the task concurrency, an accurate predicted concurrency can be calculated with a linear regression algorithm from a single influencing factor, namely the response time of the downstream system. The concurrency of the tasks to be processed is then adaptively and dynamically adjusted according to the predicted concurrency, which improves the task processing throughput of the downstream system and effectively guarantees its task processing efficiency.
Step 106: and sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
Specifically, after the predicted concurrency is calculated, a service request is sent to a downstream system, where the service request includes, but is not limited to, tasks to be processed corresponding to the predicted concurrency, for example, if the predicted concurrency is 43, the tasks to be processed corresponding to the predicted concurrency are 43 tasks to be processed extracted sequentially or randomly from all the tasks to be processed.
In practical application, after the service request is sent to the downstream system based on the predicted concurrency, the downstream system starts a number of threads determined by the predicted concurrency, processes the received tasks corresponding to the predicted concurrency, and returns the processing result to the upstream system once the tasks have been processed.
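A sketch of how the downstream side might fan the batch out over a thread pool sized by the predicted concurrency follows; process_one is a hypothetical placeholder for the real business logic and the structure is a simplification of an actual downstream service.

```python
from concurrent.futures import ThreadPoolExecutor

def process_one(task):
    # Placeholder for the real downstream business logic.
    return {"task_id": task["id"], "status": "success"}

def handle_service_request(tasks, predicted_concurrency):
    """Process the batch with a pool whose size follows the predicted concurrency."""
    with ThreadPoolExecutor(max_workers=predicted_concurrency) as pool:
        return list(pool.map(process_one, tasks))

tasks = [{"id": i} for i in range(43)]
results = handle_service_request(tasks, predicted_concurrency=43)
print(len(results), "processing results returned to the upstream system")
```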
Specifically, the receiving the processing result returned by the downstream system for processing the service request includes:
and receiving a processing result of the task to be processed, which is returned by the downstream system and corresponds to the predicted concurrency in the service request.
The tasks to be processed corresponding to the predicted concurrency in the service request may be understood as the tasks to be processed corresponding to the number of the predicted concurrency in the service request.
In practical application, the processing result of the task to be processed can be understood as success or failure conditions of the same number of tasks to be processed corresponding to the predicted concurrency of the downstream system processing.
According to the task processing method provided by this embodiment of the specification, by exploiting the relation between the response time of the downstream system and the concurrency, the predicted concurrency is calculated from the response time and the weights of the downstream system, so that the system can always keep the most suitable throughput when processing timed tasks, which effectively guarantees the task processing capability of the downstream system and greatly improves the efficiency of timed task processing.
In another embodiment of the present disclosure, after receiving a processing result of the task to be processed, which corresponds to the predicted concurrency amount in the service request and is returned by the downstream system, the method further includes:
recording the return time at which the downstream system returns the processing result of the task to be processed corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
Specifically, when receiving the processing result of the service request returned by the downstream system, the upstream system records the return time at which the downstream system returned the processing result of the tasks corresponding to the predicted concurrency in the service request, for example 16:06 on 1 October 2019, and determines the response time of the downstream system to process those tasks based on the return time.
In a specific implementation, before determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time, the method further includes:
And acquiring the sending time of sending the service request to the downstream system based on the predicted concurrency quantity.
For example, the sending time of the service request sent to the downstream system based on the predicted concurrency is 16:00 on 1 October 2019.
And based on the return time and the sending time, the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request can be calculated, which is specifically as follows:
and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
For example, if the return time is 16:06 on 1 October 2019 and the sending time is 16:00 on 1 October 2019, then the response time of the downstream system to process the tasks corresponding to the predicted concurrency in the service request, calculated from the difference between the return time and the sending time, is 6 minutes.
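As a small sketch of this difference calculation, using the timestamps from the example:

```python
from datetime import datetime

send_time = datetime(2019, 10, 1, 16, 0)    # service request sent to the downstream system
return_time = datetime(2019, 10, 1, 16, 6)  # processing result received from the downstream system

response_time = return_time - send_time
print(response_time.total_seconds() / 60, "minutes")  # prints 6.0 minutes
```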
The above description takes a downstream system with a single thread as an example; obtaining the response times of multiple threads in the downstream system follows the same procedure and is not described again here.
In this task processing method, every time the downstream system finishes processing a batch of tasks corresponding to the predicted concurrency, it passes the response time for that batch back to the upstream system. The next time tasks to be processed are acquired, the upstream system recalculates the most suitable predicted concurrency based on the response time of the previous batch fed back by the downstream system. With this dynamic loop, the predicted concurrency is adjusted adaptively at all times, so that the system always keeps the most suitable throughput when processing timed tasks, which effectively guarantees the task processing capability of the downstream system and greatly improves the efficiency of timed task processing.
In one or more embodiments of the present disclosure, after determining a response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference between the return time and the sending time, the method further includes:
storing the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and updating the preset response time database in real time or at fixed time.
In this embodiment of the specification, the response time taken by the downstream system to process the tasks corresponding to the predicted concurrency in the service request is stored in the preset response time database, and the database is updated in real time or periodically. The upstream system can therefore obtain the response time of the previous batch directly from the preset response time database, calculate the predicted concurrency of the tasks to be processed based on that response time and the weights, and then send the service request to the downstream system based on the adjusted, most suitable predicted concurrency, which guarantees the task processing capability of the downstream system and improves the throughput of the downstream system when processing the tasks.
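By way of illustration only, the feedback write can be sketched as below, with the preset response time database modeled as a simple keyed in-memory store; record_response_time and latest_response_times are hypothetical helper names.

```python
import time

# Hypothetical stand-in for the preset response time database.
response_time_db = {}

def record_response_time(task_tag, response_minutes):
    """Store the latest response time so the next prediction can read it."""
    entry = response_time_db.setdefault(task_tag, [])
    entry.append({"response_minutes": response_minutes, "recorded_at": time.time()})

def latest_response_times(task_tag, thread_count):
    """Fetch the most recent response times, one per downstream thread."""
    entry = response_time_db.get(task_tag, [])
    return [e["response_minutes"] for e in entry[-thread_count:]]

record_response_time("a", 6)
print(latest_response_times("a", 1))  # prints [6]
```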
In addition, to adjust the predicted concurrency more accurately, the CPU, memory and system load of the downstream system can be added as decision indicators when the predicted concurrency is calculated through the formula corresponding to the linear regression algorithm.
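As a sketch of that extension, the prediction simply uses a longer feature vector; the feature values and weights below are illustrative assumptions, not values from the embodiment.

```python
def predict_concurrency_extended(w0, weights, features):
    """Same linear form as before, but the features now also include CPU,
    memory and system-load indicators of the downstream system."""
    return int(w0 + sum(w * x for w, x in zip(weights, features)))

# Illustrative values: three response times (minutes) plus CPU, memory and load ratios.
features = [12, 9, 6, 0.45, 0.60, 0.30]
weights = [1, 2, 2, -10, -5, -8]  # negative weights: a busier downstream lowers the concurrency
print(predict_concurrency_extended(1, weights, features))  # prints 33
```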
Referring to fig. 2, fig. 2 shows a flowchart of another task processing method provided according to an embodiment of the present disclosure, including steps 202 to 214.
Step 202: the timer sends a timed task processing request to the upstream system based on a preset time.
Specifically, step 202 corresponds to the first step in fig. 2: the timed task processes the request.
The preset time includes, but is not limited to, any time from 00:00 of one day to 24:00 of the following day.
For example, if the preset time is 23:00, then the timer sending the timed task processing request to the upstream system based on the preset time means that the timer sends the timed task processing request to the upstream system at 23:00.
Step 204: and after receiving the timing task processing request sent by the timer, the upstream system acquires a task to be processed according to the timing task processing request.
Specifically, step 204 corresponds to the second step in fig. 2: fetching the tasks to be processed.
Step 206: and calculating the predicted concurrency of the task to be processed by an upstream system through a linear regression algorithm based on the response time and the weight.
Specifically, step 206 corresponds to the third step in fig. 2: and calculating the predicted concurrency.
The response time comprises response time of a downstream system for processing a last task to be processed, which is acquired from a preset response time database, or initial response time which is set when the system runs for the first time; the weights include weights determined based on the current concurrency and the response time of the downstream system to process the last pending task, or initial weights set when the system is first run.
Step 208: the upstream system sends a service request to the downstream system based on the calculated predicted concurrency.
Specifically, step 208 corresponds to the fourth step in fig. 2: and (5) a service request.
The service request comprises a task to be processed corresponding to the predicted concurrency.
Step 210: and the upstream system receives a processing result returned by the downstream system for processing the service request.
Specifically, step 210 corresponds to the fifth step in fig. 2: and returning a result.
Step 212: the upstream system records the return time of the processing result of the service request returned by the downstream system, and determines the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
Specifically, step 212 corresponds to the sixth step in fig. 2: the response time is recorded.
The specific calculation of the response time of the task to be processed corresponding to the predicted concurrency in the service request by the downstream system comprises the following steps:
acquiring a sending time for sending a service request to the downstream system based on the predicted concurrency quantity;
and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
The specific calculation process may be referred to the above embodiments, and will not be described herein.
Step 214: returning the response time of the downstream system to process the tasks corresponding to the predicted concurrency in the service request, determined based on the difference between the return time and the sending time, to the timer, and storing the response time in the preset response time database.
Specifically, step 214 corresponds to the seventh step in fig. 2: a response time is returned.
In the task processing method provided by this embodiment of the specification, a timer sends a timing task processing request to the upstream system at a set time, and the upstream system acquires the tasks to be processed from the task database based on the request and calculates the predicted concurrency in real time based on the response time of the downstream system. The upstream system then sends a service request to the downstream system based on the predicted concurrency. After the downstream system finishes the tasks carried by the service request, the response time of the downstream system is recorded and returned to the timer, so that the response time is passed to the upstream system the next time the timer triggers a timing task processing request and the upstream system can recalculate a more suitable predicted concurrency. With this dynamic loop calculation, the downstream system can always keep the optimal throughput when processing the tasks to be processed, which greatly improves the efficiency of timed task processing.
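Putting the steps together, a sketch of the dynamic loop is given below; the downstream call is simulated with a sleep, and the weights and batch size are hand-picked assumptions, so the numbers only illustrate the shape of the loop.

```python
import time

def simulated_downstream_call(batch_size, concurrency):
    """Stand-in for the downstream system: returns the observed response time in seconds."""
    start = time.time()
    time.sleep(0.001 * batch_size / max(concurrency, 1))  # more threads, shorter batch time
    return time.time() - start

weights = [1.0, 2.0]   # w0 (offset) and w1, hand-picked starting values
response_time = 5.0    # seed value used before the first real measurement exists
for round_no in range(5):  # each round stands for one firing of the timer
    predicted = max(1, int(weights[0] + weights[1] * response_time))   # step 206
    response_time = simulated_downstream_call(500, predicted)          # steps 208 and 210
    print(f"round {round_no}: concurrency={predicted}, response={response_time:.3f}s")  # steps 212 and 214
```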
Corresponding to the method embodiment, the present disclosure further provides an embodiment of a task processing device, and fig. 3 shows a schematic structural diagram of the task processing device provided in one embodiment of the present disclosure. As shown in fig. 3, the apparatus includes:
a task obtaining module 302, configured to receive a timed task processing request, and obtain a task to be processed according to the timed task processing request;
a predicted concurrency calculation module 304, configured to obtain a response time of a downstream system processing a task to be processed last in a preset response time database, and calculate a predicted concurrency of the task to be processed based on the response time and a weight, where the weight includes a weight determined based on a current concurrency and a response time of the downstream system processing the task to be processed last;
and the task processing module 306 is configured to send a service request to the downstream system based on the predicted concurrency quantity, and receive a processing result returned by the downstream system for processing the service request, where the service request includes a task to be processed corresponding to the predicted concurrency quantity.
Optionally, the prediction concurrency calculation module 304 is further configured to:
And calculating the predicted concurrency of the task to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(xᵢ) = w₀ + w₁x₁ + w₂x₂ + … + wᵢxᵢ
wherein f(xᵢ) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the task processing module 306 is further configured to:
and receiving a processing result of the task to be processed, which is returned by the downstream system and corresponds to the predicted concurrency in the service request.
Optionally, the apparatus further includes:
the response time determining module is configured to record the return time when the downstream system returns the processing result of the task to be processed corresponding to the predicted concurrency in the service request, and determine the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
Optionally, the apparatus further includes:
and a sending time acquisition module configured to acquire the sending time of sending a service request to the downstream system based on the predicted concurrency quantity.
Optionally, the response time determining module is further configured to:
And determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
Optionally, the apparatus further includes:
and the response time storage module is configured to store the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and update the preset response time database in real time or at fixed time.
Optionally, the current concurrency is a last concurrency of the predicted concurrency.
The task processing device provided by the embodiment of the specification comprises the steps of receiving a timing task processing request and acquiring a task to be processed according to the timing task processing request; acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and the weight; sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result returned by the downstream system for processing the service request;
By exploiting the relation between the response time of the downstream system and the concurrency, and by calculating the predicted concurrency from the response time and the weights of the downstream system, the task processing device ensures that the system can always keep the most suitable throughput when processing timed tasks, effectively guarantees the task processing capability of the downstream system, and greatly improves the efficiency of timed task processing.
The above is a schematic solution of a task processing device of the present embodiment. It should be noted that, the technical solution of the task processing device and the technical solution of the task processing method belong to the same concept, and details of the technical solution of the task processing device, which are not described in detail, can be referred to the description of the technical solution of the task processing method.
Fig. 4 illustrates a block diagram of a computing device 400 provided in accordance with one embodiment of the present description. The components of the computing device 400 include, but are not limited to, a memory 410 and a processor 420. Processor 420 is coupled to memory 410 via bus 430 and database 450 is used to hold data.
Computing device 400 also includes access device 440, access device 440 enabling computing device 400 to communicate via one or more networks 460. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 440 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 400, as well as other components not shown in FIG. 4, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 4 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 400 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 400 may also be a mobile or stationary server.
Wherein the processor 420 is configured to execute the following computer-executable instructions:
a memory and a processor;
the memory is for storing computer-executable instructions, and the processor is for executing the computer-executable instructions:
Receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
and sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the task processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the task processing method.
An embodiment of the present disclosure also provides a computer readable storage medium storing computer instructions that, when executed by a processor, implement the steps of any one of the task processing methods.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the task processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the task processing method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be added to or removed from as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the embodiments are not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the embodiments of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the embodiments described in the specification.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely used to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This specification is to be limited only by the claims and the full scope and equivalents thereof.

Claims (12)

1. A task processing method, comprising:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
and sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
2. The task processing method according to claim 1, said calculating a predicted concurrency of the task to be processed based on the response time and weight comprising:
and calculating the predicted concurrency of the task to be processed through a linear regression algorithm based on the response time and the weight.
3. The task processing method according to claim 2, wherein the formula corresponding to the linear regression algorithm is:
f(xᵢ) = w₀ + w₁x₁ + w₂x₂ + … + wᵢxᵢ
wherein f(xᵢ) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
4. A task processing method according to claim 1 or 3, wherein said receiving a processing result returned by the downstream system for processing the service request includes:
and receiving a processing result of the task to be processed, which is returned by the downstream system and corresponds to the predicted concurrency in the service request.
5. The task processing method according to claim 4, further comprising, after receiving a processing result of the task to be processed corresponding to the predicted concurrency in the service request returned by the downstream system:
recording the return time at which the downstream system returns the processing result of the task to be processed corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the return time.
6. The task processing method according to claim 5, wherein before determining, based on the return time, a response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request, the task processing method further comprises:
And acquiring the sending time of sending the service request to the downstream system based on the predicted concurrency quantity.
7. The task processing method according to claim 6, wherein the determining, based on the return time, a response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request includes:
and determining the response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
8. The task processing method according to claim 7, wherein after determining a response time of the downstream system to process the task to be processed corresponding to the predicted concurrency in the service request based on a difference between the return time and the sending time, further comprising:
storing the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and updating the preset response time database in real time or at fixed time.
9. The task processing method according to claim 1, wherein the current concurrency is a last concurrency of the predicted concurrency.
10. A task processing device comprising:
the task acquisition module is configured to receive a timing task processing request and acquire a task to be processed according to the timing task processing request;
the prediction concurrency calculation module is configured to acquire response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculate the prediction concurrency of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
the task processing module is configured to send a service request to the downstream system based on the predicted concurrency quantity and receive a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
11. A computing device, comprising:
a memory and a processor;
the memory is for storing computer-executable instructions, and the processor is for executing the computer-executable instructions:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
Acquiring response time of a downstream system in a preset response time database for processing a task to be processed last time, and calculating predicted concurrency of the task to be processed based on the response time and weights, wherein the weights comprise weights determined based on the current concurrency and the response time of the downstream system for processing the task to be processed last time;
and sending a service request to the downstream system based on the predicted concurrency quantity, and receiving a processing result of processing the service request returned by the downstream system, wherein the service request comprises a task to be processed corresponding to the predicted concurrency quantity.
12. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the task processing method of any one of claims 1 to 9.
CN202010138709.3A 2020-03-03 2020-03-03 Task processing method and device Active CN111367637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010138709.3A CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010138709.3A CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Publications (2)

Publication Number Publication Date
CN111367637A CN111367637A (en) 2020-07-03
CN111367637B true CN111367637B (en) 2023-05-12

Family

ID=71210332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010138709.3A Active CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Country Status (1)

Country Link
CN (1) CN111367637B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106603598A (en) * 2015-10-15 2017-04-26 阿里巴巴集团控股有限公司 Method for processing service request and apparatus thereof
CN108683604A (en) * 2018-04-03 2018-10-19 平安科技(深圳)有限公司 concurrent access control method, terminal device and medium
CN109391680A (en) * 2018-08-31 2019-02-26 阿里巴巴集团控股有限公司 A kind of timed task data processing method, apparatus and system
CN110333937A (en) * 2019-05-30 2019-10-15 平安科技(深圳)有限公司 Task distribution method, device, computer equipment and storage medium
CN110413657A (en) * 2019-07-11 2019-11-05 东北大学 Average response time appraisal procedure towards seasonal form non-stationary concurrency
CN110780990A (en) * 2019-09-12 2020-02-11 中移(杭州)信息技术有限公司 Performance detection method, performance detection device, server and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8930584B2 (en) * 2012-08-09 2015-01-06 Oracle International Corporation System and method for providing a linearizable request manager


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
怯肇乾. Tomcat应用服务器高并发优化处理 [High-concurrency optimization for the Tomcat application server]. 电脑编程技巧与维护 [Computer Programming Skills and Maintenance], 2018, (02), full text. *

Also Published As

Publication number Publication date
CN111367637A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
US11863644B2 (en) Push notification delivery system with feedback analysis
US20200401913A1 (en) Resource configuration method and apparatus forheterogeneous cloud services
US11436050B2 (en) Method, apparatus and computer program product for resource scheduling
CN110990138B (en) Resource scheduling method, device, server and storage medium
CN112749358A (en) Page rendering method and device, electronic equipment and storage medium
CN108074003B (en) Prediction information pushing method and device
CN107592345A (en) Transaction current-limiting apparatus, method and transaction system
US10762423B2 (en) Using a neural network to optimize processing of user requests
CN111367637B (en) Task processing method and device
CN111046156B (en) Method, device and server for determining rewarding data
CN109960572B (en) Equipment resource management method and device and intelligent terminal
CN114157578B (en) Network state prediction method and device
CN112669091B (en) Data processing method, device and storage medium
CN114358692A (en) Distribution time length adjusting method and device and electronic equipment
CN111078632B (en) File data management method and device
CN113934920A (en) Target information pushing method and device and storage medium
CN116755805B (en) Resource optimization method and device applied to C++, and resource optimization device applied to C++
CN117453377B (en) Model scheduling method, terminal equipment and server
CN111324444A (en) Cloud computing task scheduling method and device
US11922310B1 (en) Forecasting activity in software applications using machine learning models and multidimensional time-series data
CN116893865B (en) Micro-service example adjusting method and device, electronic equipment and readable storage medium
CN113296870B (en) Method and device for predicting Kubernetes cluster configuration
CN117193980A (en) Task remaining duration calculation method and device
US20230410189A1 (en) Adaptive Timing Prediction For Updating Information
CN114186845A (en) Method and device for executing index calculation task at fixed time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211215

Address after: Room 610, floor 6, No. 618, Wai Road, Huangpu District, Shanghai 200010

Applicant after: Ant Shengxin (Shanghai) Information Technology Co.,Ltd.

Address before: 801-11, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province, 310013

Applicant before: Alipay (Hangzhou) Information Technology Co.,Ltd.

GR01 Patent grant