CN111367637A - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN111367637A
Authority
CN
China
Prior art keywords
task
processing
processed
predicted
concurrency
Prior art date
Legal status
Granted
Application number
CN202010138709.3A
Other languages
Chinese (zh)
Other versions
CN111367637B (en)
Inventor
贺财平
Current Assignee
Ant Shengxin Shanghai Information Technology Co ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010138709.3A
Publication of CN111367637A
Application granted
Publication of CN111367637B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/4887 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A task processing method and device are provided by the embodiments of this specification, wherein the method comprises the following steps: receiving a timing task processing request, and acquiring tasks to be processed according to the timing task processing request; acquiring, from a preset response time database, the response time of a downstream system for processing the last tasks to be processed, and calculating a predicted concurrency for the tasks to be processed based on the response time and a weight; and sending a service request to the downstream system based on the predicted concurrency, and receiving the processing result returned by the downstream system for the service request, wherein the service request comprises the tasks to be processed corresponding to the predicted concurrency.

Description

Task processing method and device
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a task processing method. One or more embodiments of the present specification also relate to a task processing apparatus, a computing device, and a computer-readable storage medium.
Background
In data processing, there is a class of service that may be called a timed task, i.e., an interface call request triggered by a timer. For requests with low timeliness requirements, the service provider, upon receiving the upstream system's request, can persist the task, return a response indicating successful acceptance, later trigger execution through the timer, and then return the final processing result. When the timer triggers task execution, tasks are executed concurrently to improve throughput; if the tasks depend on a downstream system, how much concurrency is set directly affects system throughput.
In the prior art, the concurrency is usually fixed when a timing task is executed. A fixed concurrency cannot guarantee system throughput and degrades the data transmission and processing efficiency of the network system, so a method that adaptively adjusts the concurrency to an appropriate value is urgently needed to improve task processing efficiency.
Disclosure of Invention
In view of this, the present specification provides a task processing method. One or more embodiments of the present specification also relate to a task processing apparatus, a computing device, and a computer-readable storage medium to address technical deficiencies in the prior art.
According to a first aspect of embodiments of the present specification, there is provided a task processing method including:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system processing a last task to be processed in a preset response time database, and calculating a predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed;
and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
Optionally, the calculating the predicted concurrency of the to-be-processed task based on the response time and the weight includes:
and calculating the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(x_i) = w_0 + w_1*x_1 + w_2*x_2 + … + w_i*x_i
where f(x_i) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the receiving a processing result of processing the service request returned by the downstream system includes:
and receiving a processing result which is returned by the downstream system and used for processing the to-be-processed task corresponding to the predicted concurrency in the service request.
Optionally, after receiving a processing result returned by the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
and recording the return time when the downstream system returns the processing result of the to-be-processed task corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request based on the return time.
Optionally, before determining, based on the return time, a response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
and acquiring the sending time for sending the service request to the downstream system based on the predicted concurrency.
Optionally, the determining, based on the return time, a response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request includes:
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
Optionally, after determining, based on the difference between the return time and the sending time, a response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
and storing the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and updating the preset response time database in real time or at regular time.
Optionally, the current concurrency is the concurrency immediately preceding the predicted concurrency.
According to a second aspect of embodiments of the present specification, there is provided a task processing apparatus including:
the task acquisition module is configured to receive a timing task processing request and acquire a task to be processed according to the timing task processing request;
the system comprises a predicted concurrency amount calculation module, a response time calculation module and a processing module, wherein the predicted concurrency amount calculation module is configured to acquire the response time of a downstream system for processing a last task to be processed in a preset response time database, and calculate the predicted concurrency amount of the task to be processed based on the response time and a weight, and the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system for processing the last task to be processed;
and the task processing module is configured to send a service request to the downstream system based on the predicted concurrency amount and receive a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
Optionally, the prediction concurrency calculation module is further configured to:
and calculating the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(x_i) = w_0 + w_1*x_1 + w_2*x_2 + … + w_i*x_i
where f(x_i) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the task processing module is further configured to:
and receiving a processing result which is returned by the downstream system and used for processing the to-be-processed task corresponding to the predicted concurrency in the service request.
Optionally, the apparatus further includes:
and the response time determining module is configured to record the return time when the downstream system returns the processing result of the to-be-processed task corresponding to the predicted concurrency amount in the service request, and determine the response time when the downstream system processes the to-be-processed task corresponding to the predicted concurrency amount in the service request based on the return time.
Optionally, the apparatus further includes:
a sending time obtaining module configured to obtain a sending time for sending the service request to the downstream system based on the predicted concurrency.
Optionally, the response time determining module is further configured to:
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
Optionally, the apparatus further comprises:
and the response time storage module is configured to store the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and update the preset response time database in real time or at regular time.
Optionally, the current concurrency is the concurrency immediately preceding the predicted concurrency.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system processing a last task to be processed in a preset response time database, and calculating a predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed;
and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of any one of the task processing methods.
The embodiment of the specification provides a task processing method and a task processing device, wherein the task processing method comprises the steps of receiving a timing task processing request and acquiring a task to be processed according to the timing task processing request; acquiring response time of a downstream system in a preset response time database for processing a last task to be processed, and calculating the predicted concurrency of the task to be processed based on the response time and the weight; based on the predicted concurrency, sending a service request to the downstream system, and receiving a processing result for processing the service request returned by the downstream system;
the task processing method can ensure that the system can constantly keep the most appropriate throughput when the timing task processing is carried out through the relationship between the response time and the concurrency of the downstream system and the predicted concurrency calculated through the response time and the weight of the downstream system, thereby effectively ensuring the task processing capability of the downstream system and greatly improving the processing efficiency of the timing task processing.
Drawings
FIG. 1 is a flow diagram of a task processing method provided by one embodiment of the present description;
FIG. 2 is a flow diagram of another task processing method provided by one embodiment of the present description;
fig. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present specification;
fig. 4 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. This description may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, as those skilled in the art will be able to make and use the present disclosure without departing from the spirit and scope of the present disclosure.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
First, the noun terms to which one or more embodiments of the present specification relate are explained.
Upstream and downstream systems: in an SOA (Service-Oriented Architecture) environment, systems depend on one another. In each interaction, the initiator of a service may be called the upstream system and the provider of the service the downstream system. For example, if system A initiates a service and system B provides it, then system B is the downstream system relative to system A, and system A is the upstream system relative to system B.
Response time of a downstream system: the time from when the upstream system initiates a request to the downstream system until it receives the downstream system's returned result; this response time includes network time, the downstream system's processing time, and so on.
Timing tasks: an interface calling request triggered by a timer; for example, because the server is busy in the day, for some tasks with low timeliness (such as regular daily maintenance tasks of the server or database backup tasks), the system can be allowed to process such tasks in a timed manner in the late night or in an estimated time period when the server is not busy.
Concurrency amount: the system processes the number of timed tasks at the same time.
In the present specification, a task processing method is provided, and the present specification relates to a task processing apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Referring to fig. 1, fig. 1 shows a flowchart of a task processing method provided according to an embodiment of the present specification, including steps 102 to 106.
Step 102: receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request.
In a specific implementation, the timed task processing request may be a task processing command triggered by the timer at a preset time. For example, if the preset time is 23:00, then at 23:00 the timer triggers the timing task processing request and sends it to the upstream system. Upon receiving the timing task processing request, the upstream system acquires the tasks to be processed from the to-be-processed task database according to the request. The timing task processing request includes, but is not limited to, an identifier of the tasks to be processed, the number of tasks to be processed, and the like.
Specifically, the tasks to be processed include, but are not limited to: for tasks with low timeliness, for example, a deduction task in online shopping, a backup task of a database, and the like.
In practical applications, the number of to-be-processed tasks carried in the timing task processing request may be set according to historical task processing data or expert experience, for example, 500 or 1000 tasks are set, and the setting is specifically performed according to applications, and is not limited herein.
For example, if the identifier of the to-be-processed tasks carried in the timing task processing request is a and the number of tasks to be processed is 500, the upstream system receives the timing task processing request and, according to it, acquires the 500 to-be-processed tasks identified as a.
Step 104: the method comprises the steps of obtaining the response time of a downstream system processing a last task to be processed in a preset response time database, and calculating the predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed.
The preset response time database holds at least one response time for the downstream system's processing of the last tasks to be processed; that response time is the difference between the time at which the downstream system received the task processing request last sent by the upstream system and the time at which it finished processing the tasks. For example, if the downstream system received the last task processing request at 15:00 and, processing the tasks in three threads, finished at 15:12, 15:09, and 15:06, the response times are 12, 9, and 6 minutes respectively.
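The per-thread response-time bookkeeping described above can be sketched in a few lines. This is only an illustration of the arithmetic; the function name and three-thread layout are hypothetical, not taken from the patent.

```python
from datetime import datetime

def response_times_minutes(request_received, completion_times):
    # Response time per thread = completion time minus the time the
    # downstream system received the upstream task processing request.
    return [int((done - request_received).total_seconds() // 60)
            for done in completion_times]

received = datetime(2019, 10, 1, 15, 0)
done = [datetime(2019, 10, 1, 15, 12),
        datetime(2019, 10, 1, 15, 9),
        datetime(2019, 10, 1, 15, 6)]
print(response_times_minutes(received, done))  # [12, 9, 6]
```

These are exactly the 12-, 9-, and 6-minute response times of the worked example.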
The weight is determined based on the current concurrency and the response time of the downstream system for processing the last tasks to be processed, where the current concurrency is the concurrency immediately preceding the predicted concurrency.
Specifically, the calculating the predicted concurrency of the to-be-processed task based on the response time and the weight includes:
and calculating the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
The linear regression algorithm corresponds to a formula as follows:
f(x_i) = w_0 + w_1*x_1 + w_2*x_2 + … + w_i*x_i
where f(x_i) represents the predicted concurrency, w represents the weights, x represents the response times of the downstream system, and i represents the number of threads of the downstream system.
For example, suppose the downstream system includes three threads (thread 1, thread 2, and thread 3), the response time of thread 1 for processing the last tasks to be processed is 12 minutes, that of thread 2 is 9 minutes, and that of thread 3 is 6 minutes, and w_0 = 1, w_1 = 1, w_2 = 2, w_3 = 2. The predicted concurrency calculated by the formula of the linear regression algorithm is:
f(x_i) = 1 + 1*12 + 2*9 + 2*6
i.e., the predicted concurrency f(x_i) = 43.
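The worked example above can be checked directly in code. This is a minimal sketch of the weighted-sum formula; the function name is illustrative.

```python
def predicted_concurrency(weights, response_times):
    # f(x_i) = w_0 + w_1*x_1 + w_2*x_2 + ... + w_i*x_i
    w0, ws = weights[0], weights[1:]
    return w0 + sum(w * x for w, x in zip(ws, response_times))

# w_0=1, w_1=1, w_2=2, w_3=2; per-thread response times 12, 9, 6 minutes.
print(predicted_concurrency([1, 1, 2, 2], [12, 9, 6]))  # 43
```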
In practical application, when the system runs for the first time, initial weight values may be taken from expert experience. Once the system is running, each time tasks are processed the weights are recalculated through the formula of the linear regression algorithm based on the current concurrency and the response times of the downstream system for processing the last tasks to be processed; the recalculated weights are then fed into the formula, and the predicted concurrency is computed from the downstream system's response times for the last tasks to be processed. Here w_0 is an offset determined during actual task processing based on conditions such as the central processing unit, memory, and system load of the downstream system, and is not limited here.
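The patent does not spell out the fitting procedure used to recalculate the weights from the concurrency and response-time history. One plausible reading, shown here purely as an illustrative sketch with made-up data, is an ordinary least-squares fit of concurrency against response time in the single-feature case.

```python
def fit_weights(response_times, concurrencies):
    # Closed-form ordinary least squares for f(x) = w0 + w1 * x.
    n = len(response_times)
    mx = sum(response_times) / n
    my = sum(concurrencies) / n
    w1 = (sum((x - mx) * (y - my) for x, y in zip(response_times, concurrencies))
          / sum((x - mx) ** 2 for x in response_times))
    w0 = my - w1 * mx
    return w0, w1

# Hypothetical history: shorter response times leave room for more concurrency.
history_x = [6, 9, 12]    # response times in minutes
history_y = [60, 45, 30]  # concurrency used on those runs
w0, w1 = fit_weights(history_x, history_y)
print(w0, w1)  # 90.0 -5.0
```

With these fitted weights, a shorter observed response time yields a higher predicted concurrency, matching the adaptive behavior the method aims for.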
The task processing method in the embodiments of this specification exploits the correlation between the downstream system's response time and task concurrency: using the downstream system's response time as the single influencing factor, it computes an accurate predicted concurrency with a linear regression algorithm, and then adaptively and dynamically adjusts the concurrency of the tasks to be processed according to that prediction, thereby improving the downstream system's task processing throughput and effectively guaranteeing its task processing efficiency.
Step 106: and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
Specifically, after the predicted concurrency is calculated, a service request is sent to a downstream system, where the service request includes, but is not limited to, the to-be-processed tasks corresponding to the predicted concurrency, for example, the predicted concurrency is 43, and the to-be-processed tasks corresponding to the predicted concurrency are 43 to-be-processed tasks sequentially or randomly extracted from all the to-be-processed tasks.
In practical application, after sending a service request to the downstream system based on the predicted concurrency, the downstream system starts the calculated number of threads determined by the predicted concurrency to process the task to be processed according to the task to be processed corresponding to the received predicted concurrency, and returns a processing result to the upstream system after finishing processing the task to be processed.
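How the downstream system might start a number of worker threads determined by the predicted concurrency can be sketched with a thread pool. The function names and the stand-in task body are hypothetical; this only illustrates bounding the worker count by the predicted concurrency.

```python
from concurrent.futures import ThreadPoolExecutor

def process_task(task):
    # Stand-in for the downstream system's real work on one task.
    return f"done:{task}"

def handle_service_request(tasks, predicted_concurrency):
    # Thread count is bounded by the predicted concurrency carried
    # (implicitly, via the batch size) in the upstream service request.
    with ThreadPoolExecutor(max_workers=predicted_concurrency) as pool:
        return list(pool.map(process_task, tasks))

# pool.map preserves input order, so results line up with the task batch.
print(handle_service_request(["t1", "t2", "t3"], 2))  # ['done:t1', 'done:t2', 'done:t3']
```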
Specifically, the receiving a processing result of processing the service request returned by the downstream system includes:
and receiving a processing result which is returned by the downstream system and used for processing the to-be-processed task corresponding to the predicted concurrency in the service request.
The to-be-processed tasks corresponding to the predicted concurrency amount in the service request can be understood as the to-be-processed tasks with the same number corresponding to the number of the predicted concurrency amount in the service request.
In practical application, the processing result of the to-be-processed task may be understood as the success or failure condition of the downstream system processing the to-be-processed tasks of the same number corresponding to the predicted concurrency amount.
The task processing method provided by the embodiments of this specification keeps the system at the most appropriate throughput at all times during timing task processing, through the relationship between the response time and the concurrency of the downstream system and the predicted concurrency calculated from that response time and the weights, thereby effectively guaranteeing the task processing capability of the downstream system and greatly improving the efficiency of timing task processing.
In another embodiment of this specification, after receiving a processing result, returned by the downstream system, of processing the to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
and recording the return time when the downstream system returns the processing result of the to-be-processed task corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request based on the return time.
Specifically, when receiving the processing result returned by the downstream system for the service request, the upstream system records the return time at which the downstream system returned the processing result of the tasks to be processed corresponding to the predicted concurrency in the service request, for example 16:06 on October 1, 2019, and determines the downstream system's response time for processing those tasks based on the return time.
In specific implementation, before determining, based on the return time, a response time for the downstream system to process the to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
and acquiring the sending time for sending the service request to the downstream system based on the predicted concurrency.
For example, the sending time of the service request sent to the downstream system based on the predicted concurrency is 16:00 on October 1, 2019.
And based on the return time and the sending time, the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency amount in the service request can be calculated, which specifically comprises the following steps:
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
For example, if the return time is 16:06 on October 1, 2019 and the sending time is 16:00 on October 1, 2019, the response time of the downstream system for processing the tasks to be processed corresponding to the predicted concurrency in the service request, calculated as the difference between the return time and the sending time, is 6 minutes.
The above takes a downstream system with a single thread as an example; acquiring the response times of multiple threads in the downstream system follows the same procedure and is not described again here.
In the task processing method provided in the embodiments of this specification, each time the downstream system finishes processing the tasks to be processed corresponding to the predicted concurrency, it transmits the response time for those tasks to the upstream system; the next time the upstream system acquires tasks to be processed, it recalculates the most appropriate predicted concurrency based on the response time fed back for the previous tasks.
In one or more embodiments of the present specification, after determining, based on the difference between the return time and the sending time, a response time of the downstream system for processing a to-be-processed task corresponding to the predicted concurrency in the service request, the method further includes:
storing the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request into the preset response time database, and updating the preset response time database in real time or periodically.
In this embodiment of the specification, the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request is stored in the preset response time database, which is updated in real time or periodically. The upstream system can therefore obtain the response time of the downstream system for processing the last task directly from the preset response time database, calculate the predicted concurrency of the tasks to be processed based on that response time and the weight, and send the service request to the downstream system with the adjusted, most appropriate predicted concurrency, thereby safeguarding the task processing capability of the downstream system and improving the throughput of the downstream system when processing tasks.
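A minimal sketch of such a response-time store, assuming an in-memory structure standing in for the preset response time database (the class and method names are illustrative, not from the patent):

```python
import threading

class ResponseTimeStore:
    """Illustrative in-memory stand-in for the preset response time
    database: the upstream system reads the latest response time from
    it, and it is updated after every batch the downstream system
    finishes (here in real time; a periodic flush would also fit)."""

    def __init__(self, initial_response_time: float):
        self._lock = threading.Lock()
        self._latest = initial_response_time  # initial value for the first run

    def update(self, response_time: float) -> None:
        with self._lock:
            self._latest = response_time

    def latest(self) -> float:
        with self._lock:
            return self._latest

store = ResponseTimeStore(initial_response_time=5.0)
store.update(6.0)      # downstream took 6 minutes for the last batch
print(store.latest())  # → 6.0
```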
In addition, to adjust the predicted concurrency more accurately, the CPU usage, memory usage, and system load of the downstream system can be added as decision indicators when the predicted concurrency is calculated through the formula corresponding to the linear regression algorithm.
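A sketch of how these extra decision indicators could enter the linear model; all weights and measurements below are purely illustrative assumptions, not values from the patent:

```python
def predict_concurrency(weights, features):
    """Linear model f(x) = w0 + w1*x1 + ... + wn*xn; beyond the response
    time, the feature vector can carry the downstream system's CPU
    usage, memory usage and system load as decision indicators."""
    bias, ws = weights[0], weights[1:]
    return bias + sum(w * x for w, x in zip(ws, features))

# Illustrative weights and observations:
weights = [50.0, -2.0, -10.0, -5.0, -3.0]  # bias, resp. time, CPU, memory, load
features = [6.0, 0.7, 0.5, 1.2]            # 6 min, 70% CPU, 50% memory, load 1.2
n = max(1, int(predict_concurrency(weights, features)))
print(n)  # → 24: the busier the downstream system, the smaller the batch
```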
Referring to fig. 2, fig. 2 shows a flowchart of another task processing method provided according to an embodiment of the present specification, including steps 202 to 214.
Step 202: the timer sends a timed task processing request to an upstream system based on preset time.
Specifically, step 202 corresponds to the first step in fig. 2: the timed task processing request.
The preset time includes, but is not limited to, any time between 00:00 and 24:00 of a day.
For example, if the preset time is 23:00, then the timer, which sends the timed task processing request to the upstream system based on the preset time, sends the request to the upstream system at 23:00.
Step 204: after receiving the timed task processing request sent by the timer, the upstream system acquires the task to be processed according to the request.
Specifically, step 204 corresponds to the second step in fig. 2: fetching the task to be processed.
Step 206: the upstream system calculates the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
Specifically, step 206 corresponds to the third step in fig. 2: calculating the predicted concurrency.
The response time comprises the response time of the downstream system for processing the last task to be processed, which is acquired from a preset response time database, or the initial response time which is set when the system runs for the first time; the weight comprises a weight determined based on the current concurrency and the response time of the downstream system for processing the last task to be processed, or an initial weight set when the system runs for the first time.
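The patent leaves open exactly how the weight is derived from the current concurrency and the last response time; one possible sketch of such an update rule follows, where the function name, target time, and learning rate are all assumptions:

```python
def update_weight(weight: float, current_concurrency: float,
                  response_time: float, target_time: float = 5.0,
                  rate: float = 0.1) -> float:
    """One possible update rule: when the last batch came back slower
    than a target response time, shrink the weight so the next
    predicted concurrency drops; when it came back faster, grow it.
    The correction is damped by the current concurrency."""
    error = target_time - response_time  # negative when downstream was slow
    return weight + rate * error / max(current_concurrency, 1.0)

# The last batch of 20 tasks took 6 minutes against a 5-minute target:
w = update_weight(weight=1.0, current_concurrency=20.0, response_time=6.0)
print(round(w, 4))  # → 0.995
```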
Step 208: the upstream system sends a traffic request to the downstream system based on the calculated predicted concurrency.
Specifically, step 208 corresponds to the fourth step in fig. 2: and requesting the service.
And the service request comprises a task to be processed corresponding to the predicted concurrency.
Step 210: the upstream system receives the processing result, returned by the downstream system, of processing the service request.
Specifically, step 210 corresponds to the fifth step in fig. 2: returning the result.
Step 212: the upstream system records the return time at which the downstream system returned the processing result of the service request, and determines, based on the return time, the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request.
Specifically, step 212 corresponds to the sixth step in fig. 2: recording the response time.
Specifically, calculating the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request includes:
acquiring the sending time for sending the service request to the downstream system based on the predicted concurrency;
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
For a specific calculation process, reference may be made to the above embodiments, which are not described herein again.
Step 214: returning, to the timer, the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request, determined based on the difference between the return time and the sending time, and storing the response time into the preset response time database.
Specifically, step 214 corresponds to the seventh step in fig. 2: returning the response time.
In the task processing method provided in this embodiment of the specification, the timer sends a timed task processing request to the upstream system at the preset time, and the upstream system acquires tasks to be processed from a task database based on that request and calculates the predicted concurrency in real time based on the response time of the downstream system. The upstream system then sends a service request to the downstream system based on the predicted concurrency, records the response time once the downstream system has finished processing the tasks in the service request, and returns that response time to the timer, so that when the timer next triggers a timed task processing request, the response time is passed to the upstream system and a more appropriate predicted concurrency is recalculated. Through this dynamic, looped calculation of the predicted concurrency, the downstream system can maintain optimal throughput every time it processes tasks, greatly improving the processing efficiency of timed task processing.
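The loop of steps 202 to 214 can be sketched end to end; all names, the deterministic clock, and the single-feature linear model are illustrative stand-ins, not the patent's implementation:

```python
def run_timed_task_round(store, fetch_tasks, call_downstream, clock, weights):
    """One round of steps 202-214: fetch pending tasks, size the batch
    from the last recorded response time, call the downstream system,
    and feed the newly measured response time back into the store."""
    tasks = fetch_tasks()              # step 204: fetch the pending tasks
    last_rt = store.latest()           # response time of the previous batch
    # step 206: predicted concurrency via a single-feature linear model
    n = max(1, int(weights[0] + weights[1] * last_rt))
    batch = tasks[:n]                  # step 208: service request with n tasks
    sent = clock()
    call_downstream(batch)             # steps 208-210: process, return result
    returned = clock()                 # step 212: record the return time
    store.update(returned - sent)      # step 214: store the new response time
    return n

class Store:  # trivial stand-in for the preset response time database
    def __init__(self, rt): self.rt = rt
    def latest(self): return self.rt
    def update(self, rt): self.rt = rt

ticks = iter([0.0, 3.0])               # deterministic clock for the example
n = run_timed_task_round(
    Store(4.0),                        # previous batch took 4 time units
    fetch_tasks=lambda: list(range(100)),
    call_downstream=lambda batch: None,
    clock=lambda: next(ticks),
    weights=[30.0, -2.0],              # illustrative: slower → smaller batch
)
print(n)  # → 22
```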
Corresponding to the above method embodiments, the present specification further provides task processing device embodiments, and fig. 3 shows a schematic structural diagram of a task processing device provided in an embodiment of the present specification. As shown in fig. 3, the apparatus includes:
a task obtaining module 302, configured to receive a timing task processing request, and obtain a to-be-processed task according to the timing task processing request;
a predicted concurrency calculation module 304, configured to acquire, from a preset response time database, the response time of the downstream system for processing the last task to be processed, and calculate the predicted concurrency of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency and the response time of the downstream system for processing the last task to be processed;
and the task processing module 306 is configured to send a service request to the downstream system based on the predicted concurrency amount, and receive a processing result returned by the downstream system for processing the service request, where the service request includes a to-be-processed task corresponding to the predicted concurrency amount.
Optionally, the predicted concurrency calculation module 304 is further configured to:
and calculating the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
Optionally, the formula corresponding to the linear regression algorithm is:
f(x_i) = w_0 + w_1x_1 + w_2x_2 + … + w_ix_i
wherein f(x_i) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
Optionally, the task processing module 306 is further configured to:
and receiving a processing result which is returned by the downstream system and used for processing the to-be-processed task corresponding to the predicted concurrency in the service request.
Optionally, the apparatus further includes:
and the response time determining module is configured to record the return time when the downstream system returns the processing result of the to-be-processed task corresponding to the predicted concurrency amount in the service request, and determine the response time when the downstream system processes the to-be-processed task corresponding to the predicted concurrency amount in the service request based on the return time.
Optionally, the apparatus further includes:
a sending time obtaining module configured to obtain a sending time for sending the service request to the downstream system based on the predicted concurrency.
Optionally, the response time determining module is further configured to:
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
Optionally, the apparatus further includes:
and the response time storage module is configured to store the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and update the preset response time database in real time or at regular time.
Optionally, the current concurrency amount is a last concurrency amount of the predicted concurrency amount.
The task processing device provided by the embodiment of the specification receives a timed task processing request and acquires the task to be processed according to the request; acquires, from the preset response time database, the response time of the downstream system for processing the last task to be processed, and calculates the predicted concurrency of the task to be processed based on the response time and the weight; and sends a service request to the downstream system based on the predicted concurrency and receives the processing result of the service request returned by the downstream system.
Through the relationship between the response time and the concurrency of the downstream system, and the predicted concurrency calculated from the downstream system's response time and the weight, the task processing device ensures that the system maintains the most appropriate throughput when processing timed tasks, effectively safeguarding the task processing capability of the downstream system and greatly improving the processing efficiency of timed task processing.
The above is a schematic arrangement of a task processing device of the present embodiment. It should be noted that the technical solution of the task processing device and the technical solution of the task processing method belong to the same concept, and for details that are not described in detail in the technical solution of the task processing device, reference may be made to the description of the technical solution of the task processing method.
FIG. 4 illustrates a block diagram of a computing device 400 provided in accordance with one embodiment of the present description. The components of the computing device 400 include, but are not limited to, a memory 410 and a processor 420. Processor 420 is coupled to memory 410 via bus 430 and database 450 is used to store data.
Computing device 400 also includes access device 440, access device 440 enabling computing device 400 to communicate via one or more networks 460. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 440 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 400, as well as other components not shown in FIG. 4, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 4 is for purposes of example only and is not limiting as to the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 400 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 400 may also be a mobile or stationary server.
Wherein the processor 420 is configured to execute computer-executable instructions which, when executed, implement the following steps:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system processing a last task to be processed in a preset response time database, and calculating a predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed;
and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the task processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the task processing method.
An embodiment of the present specification further provides a computer readable storage medium storing computer instructions, which when executed by a processor implement the steps of any one of the task processing methods.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the task processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the task processing method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (12)

1. A method of task processing, comprising:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system processing a last task to be processed in a preset response time database, and calculating a predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed;
and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
2. The task processing method according to claim 1, wherein the calculating the predicted concurrency of the to-be-processed task based on the response time and the weight comprises:
and calculating the predicted concurrency of the tasks to be processed through a linear regression algorithm based on the response time and the weight.
3. The task processing method according to claim 2, wherein the linear regression algorithm corresponds to a formula:
f(x_i) = w_0 + w_1x_1 + w_2x_2 + … + w_ix_i
wherein f(x_i) represents the predicted concurrency, w represents the weights, and x represents the response times of the downstream system.
4. The task processing method according to claim 1 or 3, wherein the receiving a processing result returned by the downstream system for processing the service request includes:
and receiving a processing result which is returned by the downstream system and used for processing the to-be-processed task corresponding to the predicted concurrency in the service request.
5. The task processing method according to claim 4, after receiving a processing result returned by the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request, further comprising:
and recording the return time when the downstream system returns the processing result of the to-be-processed task corresponding to the predicted concurrency in the service request, and determining the response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request based on the return time.
6. The task processing method according to claim 5, wherein before determining, based on the return time, a response time for the downstream system to process the to-be-processed task corresponding to the predicted concurrency in the service request, the method further comprises:
and acquiring the sending time for sending the service request to the downstream system based on the predicted concurrency.
7. The task processing method according to claim 6, wherein the determining, based on the return time, a response time for the downstream system to process the to-be-processed task corresponding to the predicted concurrency in the service request includes:
and determining the response time of the downstream system for processing the task to be processed corresponding to the predicted concurrency in the service request based on the difference value between the return time and the sending time.
8. The task processing method according to claim 7, after determining, based on the difference between the return time and the sending time, a response time of the downstream system for processing the to-be-processed task corresponding to the predicted concurrency in the service request, further comprising:
and storing the response time of the task to be processed corresponding to the predicted concurrency in the service request processed by the downstream system into the preset response time database, and updating the preset response time database in real time or at regular time.
9. The task processing method according to claim 1, wherein the current concurrency amount is a last concurrency amount of the predicted concurrency amount.
10. A task processing device comprising:
the task acquisition module is configured to receive a timing task processing request and acquire a task to be processed according to the timing task processing request;
the system comprises a predicted concurrency amount calculation module, a response time calculation module and a processing module, wherein the predicted concurrency amount calculation module is configured to acquire the response time of a downstream system for processing a last task to be processed in a preset response time database, and calculate the predicted concurrency amount of the task to be processed based on the response time and a weight, and the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system for processing the last task to be processed;
and the task processing module is configured to send a service request to the downstream system based on the predicted concurrency amount and receive a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
11. A computing device, comprising:
a memory and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
receiving a timing task processing request, and acquiring a task to be processed according to the timing task processing request;
acquiring response time of a downstream system processing a last task to be processed in a preset response time database, and calculating a predicted concurrency amount of the task to be processed based on the response time and a weight, wherein the weight comprises a weight determined based on the current concurrency amount and the response time of the downstream system processing the last task to be processed;
and sending a service request to the downstream system based on the predicted concurrency amount, and receiving a processing result returned by the downstream system for processing the service request, wherein the service request comprises a task to be processed corresponding to the predicted concurrency amount.
12. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the task processing method of any one of claims 1 to 9.
CN202010138709.3A 2020-03-03 2020-03-03 Task processing method and device Active CN111367637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010138709.3A CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010138709.3A CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Publications (2)

Publication Number Publication Date
CN111367637A true CN111367637A (en) 2020-07-03
CN111367637B CN111367637B (en) 2023-05-12

Family

ID=71210332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010138709.3A Active CN111367637B (en) 2020-03-03 2020-03-03 Task processing method and device

Country Status (1)

Country Link
CN (1) CN111367637B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140047140A1 (en) * 2012-08-09 2014-02-13 Oracle International Corporation System and method for providing a linearizable request manager
CN106603598A (en) * 2015-10-15 2017-04-26 阿里巴巴集团控股有限公司 Method for processing service request and apparatus thereof
CN108683604A (en) * 2018-04-03 2018-10-19 平安科技(深圳)有限公司 concurrent access control method, terminal device and medium
CN109391680A (en) * 2018-08-31 2019-02-26 阿里巴巴集团控股有限公司 A kind of timed task data processing method, apparatus and system
CN110333937A (en) * 2019-05-30 2019-10-15 平安科技(深圳)有限公司 Task distribution method, device, computer equipment and storage medium
CN110413657A (en) * 2019-07-11 2019-11-05 东北大学 Average response time appraisal procedure towards seasonal form non-stationary concurrency
CN110780990A (en) * 2019-09-12 2020-02-11 中移(杭州)信息技术有限公司 Performance detection method, performance detection device, server and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140047140A1 (en) * 2012-08-09 2014-02-13 Oracle International Corporation System and method for providing a linearizable request manager
CN106603598A (en) * 2015-10-15 2017-04-26 阿里巴巴集团控股有限公司 Method for processing service request and apparatus thereof
CN108683604A (en) * 2018-04-03 2018-10-19 平安科技(深圳)有限公司 concurrent access control method, terminal device and medium
CN109391680A (en) * 2018-08-31 2019-02-26 阿里巴巴集团控股有限公司 A kind of timed task data processing method, apparatus and system
CN110333937A (en) * 2019-05-30 2019-10-15 平安科技(深圳)有限公司 Task distribution method, device, computer equipment and storage medium
CN110413657A (en) * 2019-07-11 2019-11-05 东北大学 Average response time appraisal procedure towards seasonal form non-stationary concurrency
CN110780990A (en) * 2019-09-12 2020-02-11 中移(杭州)信息技术有限公司 Performance detection method, performance detection device, server and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
怯肇乾: "Tomcat应用服务器高并发优化处理" (High-concurrency optimization handling for the Tomcat application server) *

Also Published As

Publication number Publication date
CN111367637B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
US20200401913A1 (en) Resource configuration method and apparatus for heterogeneous cloud services
CN110990138B (en) Resource scheduling method, device, server and storage medium
CN107241380B (en) Method and apparatus for time-based adjusted load balancing
CN110569252B (en) Data processing system and method
Wang et al. Joint server assignment and resource management for edge-based MAR system
CN111949324A (en) Distributed serial number generation method and device
CN110650195A (en) Distributed load balancing method and device
CN108111591B (en) Method and device for pushing message and computer readable storage medium
CN112817721B (en) Task scheduling method and device based on artificial intelligence, computer equipment and medium
CN111367637A (en) Task processing method and device
CN111343006A (en) CDN peak flow prediction method, device and storage medium
CN113434591B (en) Data processing method and device
CN111901425B (en) CDN scheduling method and device based on Pareto algorithm, computer equipment and storage medium
CN108228334B (en) Container cluster expansion method and device
CN107707383B (en) Put-through processing method and device, first network element and second network element
CN117453377B (en) Model scheduling method, terminal equipment and server
CN113382078B (en) Data processing method and device
CN111770187B (en) Resource downloading method and device
CN116302009B (en) Software updating method and device based on wireless router
CN113296964B (en) Data processing method and device
CN114780202A (en) Method, device, equipment and medium for adjusting function computing resource pool
CN116260666A (en) Resource recommendation method and device
CN114186845A (en) Method and device for executing index calculation task at fixed time
CN112783608A (en) Method and device for adjusting container resources in container cluster
EP2778920A1 (en) Selectively altering requests based on comparison of potential value of requests

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20211215

Address after: Room 610, floor 6, No. 618, Wai Road, Huangpu District, Shanghai 200010

Applicant after: Ant Shengxin (Shanghai) Information Technology Co.,Ltd.

Address before: 801-11, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province, 310013

Applicant before: Alipay (Hangzhou) Information Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant