CN109901921B - Task queue execution time prediction method and device and implementation device - Google Patents

Task queue execution time prediction method and device and implementation device

Info

Publication number: CN109901921B
Application number: CN201910136619.8A
Authority: CN (China)
Prior art keywords: processed, task, execution time, task queue, concurrency
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109901921A
Inventor: 罗俊林
Current Assignee: Beijing Seeyon Internet Software Corp
Original Assignee: Beijing Seeyon Internet Software Corp
Application filed by Beijing Seeyon Internet Software Corp
Priority to: CN201910136619.8A
Publication of application CN109901921A; application granted and published as CN109901921B

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a task queue execution time prediction method, a task queue execution time prediction device, and an implementation device. The method is applied to a server and comprises: acquiring a pending task queue, where the queue comprises a plurality of pending subtasks; determining the basic parameters of each pending subtask according to the queue, where the basic parameters include concurrency state and complexity; acquiring the resource parameters of the server, namely the CPU occupancy, the input/output port usage, and the memory occupancy of the host process; and predicting the execution time of the pending task queue according to the basic parameters, the resource parameters, and a pre-obtained average task execution time. The invention reasonably predicts the execution time of the pending task queue and improves the efficiency of resource-scheduling optimization in service-queue application scenarios.

Description

Task queue execution time prediction method and device and implementation device
Technical Field
The invention relates to the technical field of computers, in particular to a task queue execution time prediction method, a task queue execution time prediction device and an implementation device.
Background
In service-queue application scenarios, tasks in the queue easily accumulate and are delayed when the system is busy. On an execution platform, because the complexity of the executed tasks is unevenly distributed and system resources change dynamically, the time needed to execute a task with the same business logic can differ greatly across computing nodes or time periods. It is therefore difficult to reasonably predict the execution time of a task queue, which makes resource-scheduling optimization inefficient in these scenarios.
Disclosure of Invention
In view of this, the present invention provides a method, an apparatus, and an implementation apparatus for predicting execution time of a task queue, so as to reasonably predict execution time of the task queue and improve efficiency of optimizing resource scheduling in a service queue application scenario.
In a first aspect, an embodiment of the present invention provides a task queue execution time prediction method, applied to a server. The method comprises: acquiring a pending task queue, where the queue comprises a plurality of pending subtasks; determining the basic parameters of each pending subtask according to the queue, where the basic parameters include concurrency state and complexity; acquiring the resource parameters of the server, namely the CPU occupancy, input/output port usage, and host-process memory occupancy; and predicting the execution time of the pending task queue according to the basic parameters, the resource parameters, and a pre-obtained average task execution time.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where determining the basic parameters of each pending subtask according to the pending task queue includes: analyzing each pending subtask to obtain its concurrency state and its single steps, where the concurrency state is either concurrent or non-concurrent; and determining the step complexity corresponding to each single step according to the step attribute of that single step.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where determining the step complexity corresponding to each single step according to its step attribute includes: when the single step is an addition, determining its step complexity as 1; and when the single step is a deletion, a modification, or a query, determining its step complexity as the amount of data involved in the single step.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where predicting the execution time of the pending task queue according to the basic parameters, the resource parameters, and the pre-obtained average task execution time includes: summing the step complexities of the single steps of all pending subtasks in the queue to obtain the total complexity of the queue; determining the concurrency number of the queue according to the concurrency states of the pending subtasks and a preset system task concurrency limit; summing the CPU occupancy, the input/output port usage, and the host-process memory occupancy to obtain the system resource load rate; and predicting the execution time of the queue according to the total complexity, the concurrency number, the system resource load rate, and the pre-obtained average task execution time.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the predicting an execution time of the to-be-processed task queue according to the total complexity, the number of possible concurrencies, the system resource load rate, and a pre-obtained average time for task execution includes: the execution time is calculated by the following formula:
Tpre=(ξ+P-Co)×Tavg
where Tpre is the predicted execution time, ξ is the total complexity, P is the system resource load rate, Co is the concurrency number, and Tavg is the average task execution time.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the average time for executing the task is calculated by the following formula:
Tavg=Tlast/(ξ'+P'-Co')
where Tlast is the last task execution time, ξ' is the total complexity of the executed subtasks, P' is the system resource load rate during their execution, and Co' is the concurrency number of the executed subtasks.
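As a concrete illustration, the two formulas above can be sketched in Python; the function and variable names are illustrative, not from the patent.

```python
def average_task_time(t_last, total_complexity, load_rate, concurrency):
    # Tavg = Tlast / (xi' + P' - Co'), computed from already-executed subtasks
    return t_last / (total_complexity + load_rate - concurrency)

def predicted_queue_time(total_complexity, load_rate, concurrency, t_avg):
    # Tpre = (xi + P - Co) * Tavg, applied to the pending queue
    return (total_complexity + load_rate - concurrency) * t_avg
```

For example, with Tlast = 13, ξ' = 8, P' = 0.5 and Co' = 2, the average time is 13 / 6.5 = 2.0; a pending queue with ξ = 12, P = 0.5 and Co = 3 is then predicted to take (12 + 0.5 − 3) × 2.0 = 19.0.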
In a second aspect, an embodiment of the present invention further provides a device for predicting execution time of a task queue, where the device is disposed in a server; the device includes: the task queue acquisition module is used for acquiring a task queue to be processed; the to-be-processed task queue comprises a plurality of to-be-processed subtasks; the basic parameter determining module is used for determining the basic parameters of each to-be-processed subtask according to the to-be-processed task queue; basic parameters include concurrency status and complexity; the resource parameter acquisition module is used for acquiring resource parameters of the server; the resource parameters comprise CPU occupation rate, input/output port utilization rate and memory occupancy rate of the host process; and the execution time prediction module is used for predicting the execution time of the task queue to be processed according to the basic parameters, the resource parameters and the task execution average time obtained in advance.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the basic parameter determining module further includes: a task analysis unit, configured to analyze each pending subtask to obtain its concurrency state and its single steps, where the concurrency state is either concurrent or non-concurrent; and a step complexity determining unit, configured to determine the step complexity corresponding to each single step according to the step attribute of that single step.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, wherein the step complexity determining unit is further configured to: when the single step is an addition, determine its step complexity as 1; and when the single step is a deletion, a modification, or a query, determine its step complexity as the amount of data involved in the single step.
In a third aspect, an embodiment of the present invention further provides an implementation apparatus for task queue execution time prediction, comprising a memory and a processor, where the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the task queue execution time prediction method described above.
The embodiment of the invention has the following beneficial effects:
An embodiment of the present invention provides a task queue execution time prediction method, a task queue execution time prediction device, and an implementation device. After a pending task queue is acquired, the basic parameters of each pending subtask are determined from the queue; the basic parameters include concurrency state and complexity. The resource parameters of the server are acquired, namely the CPU occupancy, input/output port usage, and host-process memory occupancy. The execution time of the pending task queue is then predicted from the basic parameters, the resource parameters, and the pre-obtained average task execution time. The method reasonably predicts the execution time of the pending task queue and improves the efficiency of resource-scheduling optimization in service-queue application scenarios.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention as set forth above.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for predicting execution time of a task queue according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for performing time prediction for a task queue according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a loop execution of another task queue execution time prediction method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for predicting execution time of a task queue according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an implementation apparatus for performing time prediction on a task queue according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, in service-queue application scenarios, it is difficult to reasonably predict the execution time of a task queue, which results in low efficiency of resource-scheduling optimization in those scenarios.
To facilitate understanding of the embodiment, first, a detailed description is given to a task queue execution time prediction method disclosed in the embodiment of the present invention.
Referring to fig. 1, a flowchart of a task queue execution time prediction method is shown, and the method is applied to a server; the method comprises the following steps:
step S100, acquiring a task queue to be processed; the pending task queue includes a plurality of pending subtasks.
Specifically, when the system is busy, some tasks cannot be processed in time and are added to a preset task queue in time order; to predict the execution time of the tasks in the queue, all subtasks of the queue must first be read.
Step S102, determining basic parameters of each to-be-processed subtask according to the to-be-processed task queue; basic parameters include concurrency status and complexity.
Specifically, each pending subtask is analyzed to determine whether it can be executed concurrently; in addition, the complexity of each pending subtask is derived from the amount of resources called by each of its steps, such as the memory computation load. The relevant parameters of the task queue are thus determined from the basic parameters of the pending subtasks, and the task execution time is predicted from them.
Step S104, acquiring resource parameters of the server; the resource parameters include CPU (Central Processing Unit) occupancy, i/o port usage, and memory occupancy of the host process.
Specifically, each resource parameter may be requested from the corresponding resource. Executing a task requires the support of the host process, occupies CPU resources, and may use the input/output ports; the execution time is therefore affected by the CPU occupancy, the input/output port usage, and the memory occupancy of the host process.
And step S106, predicting the execution time of the task queue to be processed according to the basic parameters, the resource parameters and the task execution average time obtained in advance.
Specifically, the average task execution time may be calculated from the execution times of completed tasks; generally, this calculation introduces the CPU occupancy, input/output port usage, and host-process memory occupancy at the time those tasks completed as related parameters.
An embodiment of the present invention provides a task queue execution time prediction method. After a pending task queue is acquired, the basic parameters of each pending subtask are determined from the queue; the basic parameters include concurrency state and complexity. The resource parameters of the server are acquired, namely the CPU occupancy, input/output port usage, and host-process memory occupancy. The execution time of the pending task queue is then predicted from the basic parameters, the resource parameters, and the pre-obtained average task execution time. The method reasonably predicts the execution time of the pending task queue and improves the efficiency of resource-scheduling optimization in service-queue application scenarios.
An embodiment of the present invention further provides another task queue execution time prediction method, implemented on the basis of the method shown in FIG. 1. It predicts the execution time of tasks on different nodes and in different periods by combining the amount of data the tasks require under different conditions with the system resource load; a schematic flowchart is shown in FIG. 2. The method comprises the following steps:
step S200, acquiring a task queue to be processed; the pending task queue includes a plurality of pending subtasks.
Step S202, analyzing each pending subtask to obtain its concurrency state and its single steps; the concurrency state is either concurrent or non-concurrent, and whether a subtask can be executed concurrently is determined by its specific implementation content.
And step S204, determining the step complexity corresponding to each single step according to the step attribute of each single step.
Specifically, the step complexity is determined as follows:
(1) When the single step is an addition, its step complexity is determined as 1.
(2) When the single step is a deletion, a modification, or a query, its step complexity is determined as the amount of data (also called the related data volume) involved in the single step.
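The two rules above can be written as a small helper; the step-type labels are assumptions chosen for illustration, not names from the patent.

```python
def step_complexity(step_type, data_count=0):
    # Rule (1): an addition step has complexity 1.
    if step_type == "add":
        return 1
    # Rule (2): delete/modify/query steps count the data volume involved.
    if step_type in ("delete", "modify", "query"):
        return data_count
    raise ValueError(f"unknown step type: {step_type}")
```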
Step S206, summing the step complexities of the single steps of all pending subtasks in the pending task queue to obtain the total complexity of the queue; specifically,
ξ = Σ M
where M is the step complexity of a single step and the sum runs over every single step of every pending subtask in the queue.
Step S208, determining the concurrency number of the pending task queue according to the concurrency states of the pending subtasks and the preset system task concurrency limit. Specifically, tasks may be executed concurrently, and in general the larger the concurrency number, the higher the execution efficiency; however, efficiency remains essentially unchanged once the concurrency number reaches a critical point, which is usually determined by the number of tasks the system can run in parallel. The relation between the concurrency number and performance also depends on how many tasks in the queue can run in parallel: if few or none can, most or all being serial tasks, the task execution concurrency number is essentially unrelated to execution performance. That is, the current task execution concurrency number equals the number of parallelizable tasks (concurrency number < system parallelizable number).
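The capping behavior described in this step amounts to taking a minimum; a minimal sketch with assumed names:

```python
def effective_concurrency(parallelizable_tasks, system_parallel_limit):
    # The concurrency number is the count of parallelizable tasks in the
    # queue, but never exceeds what the system can run in parallel.
    return min(parallelizable_tasks, system_parallel_limit)
```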
Step S210, summing the CPU occupancy, the input/output port usage, and the host-process memory occupancy to obtain the system resource load rate; that is, the current system resource load rate = CPU occupancy + input/output occupancy + host-process memory occupancy.
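Step S210 is a plain sum of the three rates; sketched below (names are illustrative):

```python
def system_load_rate(cpu_occupancy, io_usage, host_mem_occupancy):
    # P = CPU occupancy + I/O port usage + host-process memory occupancy
    return cpu_occupancy + io_usage + host_mem_occupancy
```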
Step S212, predicting the execution time of the task queue to be processed according to the total complexity, the concurrency number, the system resource load rate and the pre-obtained task execution average time.
Specifically, the execution time may be calculated by the following formula:
Tpre=(ξ+P-Co)×Tavg
where Tpre is the predicted execution time, ξ is the total complexity, P is the system resource load rate, Co is the concurrency number, and Tavg is the average task execution time. In words: predicted result value (i.e., predicted execution time) = (total complexity of the current queued tasks + current system resource load rate − current task execution concurrency number) × current average task execution time.
Wherein, the average time for executing the task is calculated by the following formula:
Tavg=Tlast/(ξ'+P'-Co')
where Tlast is the last task execution time, ξ' is the total complexity of the executed subtasks, P' is the system resource load rate during their execution, and Co' is the concurrency number of the executed subtasks. In words: current average task execution time = last task execution time / (last total complexity of queued tasks + last system resource load rate − last task execution concurrency number).
In addition, the method can run in a loop when the task execution status needs to be monitored continuously; in this case it can be summarized in two steps, whose flowchart is shown in FIG. 3:
(1) acquiring the average execution time of the current task;
(2) predicting the execution time of the task queue, then jumping back to step (1).
For convenience of calculation, the method may define several factor variables: the current average task execution time (equivalent to the average task execution time), the number of tasks in the current queue, the total complexity of the tasks in the current queue, the amount of data involved in current task execution, the current task execution concurrency number, and the system resource load rate (CPU occupancy, I/O condition, and memory occupancy of the task execution host process). Once the current values of these factor variables are obtained, the predicted execution time of the task queue follows from the calculation formula.
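One cycle of the two-step loop, combining the factor variables above, might look as follows; the data shapes and names are assumptions for illustration, not the patent's implementation.

```python
def prediction_cycle(last_run, queue_step_complexities, load_rate, concurrency):
    # Step (1): current average execution time from the last finished run,
    # given as (Tlast, xi', P', Co').
    t_last, xi_prev, p_prev, co_prev = last_run
    t_avg = t_last / (xi_prev + p_prev - co_prev)
    # Step (2): predicted execution time of the pending queue.
    xi = sum(queue_step_complexities)  # total complexity of queued steps
    return (xi + load_rate - concurrency) * t_avg
```

With last_run = (13.0, 8, 0.5, 2), queued step complexities [4, 4, 4], load rate 0.5 and concurrency 3, the cycle yields (12 + 0.5 − 3) × 2.0 = 19.0.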
The method defines the factor variables of task queue execution time, the manner of obtaining them, and a linear formula for the execution time; a predicted result value can be calculated from the current values of the factor variables, so the task queue execution time can be predicted more scientifically and follow-up actions can be arranged and scheduled.
The embodiment of the invention also provides a device for predicting the execution time of the task queue, which is arranged on the server, and the schematic structural diagram of the device is shown in FIG. 4; the device includes: a task queue obtaining module 400, configured to obtain a task queue to be processed; the to-be-processed task queue comprises a plurality of to-be-processed subtasks; a basic parameter determining module 402, configured to determine a basic parameter of each to-be-processed sub-task according to the to-be-processed task queue; basic parameters include concurrency status and complexity; a resource parameter obtaining module 404, configured to obtain resource parameters of a server; the resource parameters comprise CPU occupation rate, input/output port utilization rate and memory occupancy rate of the host process; and the execution time prediction module 406 is configured to predict the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and the pre-obtained average task execution time.
Specifically, the basic parameter determining module further includes: a task analysis unit, configured to analyze each pending subtask to obtain its concurrency state and its single steps, where the concurrency state is either concurrent or non-concurrent; and a step complexity determining unit, configured to determine the step complexity corresponding to each single step according to the step attribute of that single step.
Further, the step complexity determining unit is further configured to: when the single step is an addition, determine its step complexity as 1; and when the single step is a deletion, a modification, or a query, determine its step complexity as the amount of data involved in the single step.
The task queue execution time prediction device provided by the embodiment of the invention has the same technical characteristics as the task queue execution time prediction method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment provides an implementation apparatus for task queue execution time prediction, corresponding to the foregoing method embodiments. Fig. 5 is a schematic structural diagram of the implementation apparatus; as shown in fig. 5, the apparatus includes a processor 1201 and a memory 1202, where the memory 1202 is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the task queue execution time prediction method described above.
The implementation apparatus shown in fig. 5 further includes a bus 1203 and a forwarding chip 1204, and the processor 1201, the forwarding chip 1204 and the memory 1202 are connected through the bus 1203. The message transmission implementation device may be a network edge device.
The Memory 1202 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Bus 1203 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The forwarding chip 1204 is configured to be connected to at least one user terminal and other network units through a network interface, and send the packaged IPv4 message or IPv6 message to the user terminal through the network interface.
The processor 1201 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 1201. The Processor 1201 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 1202, and the processor 1201 reads information in the memory 1202 to complete the steps of the method of the foregoing embodiments in combination with hardware thereof.
The embodiment of the present invention further provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the method for implementing the task queue execution time prediction.
The task queue execution time prediction device and the implementation device provided by the embodiments of the present invention have the same implementation principle and technical effects as the foregoing method embodiments; for the sake of brevity, where the device embodiments are not mentioned, reference may be made to the corresponding content in the method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, and the flowcharts and block diagrams in the figures, for example, illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may, within the technical scope of the present disclosure, modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be construed as falling within it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (4)

1. A task queue execution time prediction method, characterized in that the method is applied to a server, the method comprising:
acquiring a task queue to be processed, the task queue to be processed comprising a plurality of subtasks to be processed;
determining basic parameters of each subtask to be processed according to the task queue to be processed, the basic parameters comprising a concurrency state and a complexity;
acquiring resource parameters of the server, the resource parameters comprising a CPU occupancy rate, an input/output port utilization rate, and a memory occupancy rate of a host process;
predicting the execution time of the task queue to be processed according to the basic parameters, the resource parameters and a pre-obtained average task execution time;
the step of determining the basic parameters of each subtask to be processed according to the task queue to be processed comprises:
analyzing each subtask to be processed to obtain the concurrency state and the single steps of each subtask to be processed, the concurrency state being either concurrent or non-concurrent;
determining a step complexity corresponding to each single step according to the step attribute of that single step;
the step of determining the step complexity corresponding to each single step according to the step attribute of that single step comprises:
when the single step is an addition, determining the step complexity corresponding to the single step to be 1;
when the single step is one of a deletion, a modification and a query, determining the step complexity corresponding to the single step to be the data volume involved in the single step;
the step of predicting the execution time of the task queue to be processed according to the basic parameters, the resource parameters and the pre-obtained average task execution time comprises:
summing the step complexities of the single steps of all the subtasks to be processed in the task queue to obtain a total complexity of the task queue to be processed;
determining a concurrency number of the task queue to be processed according to the concurrency states of the subtasks to be processed and a preset system task concurrency number;
summing the CPU occupancy rate, the input/output port utilization rate and the memory occupancy rate of the host process to obtain a system resource load rate;
predicting the execution time of the task queue to be processed according to the total complexity, the concurrency number, the system resource load rate and the pre-obtained average task execution time;
the step of predicting the execution time of the task queue to be processed according to the total complexity, the concurrency number, the system resource load rate and the pre-obtained average task execution time comprises:
calculating the execution time by the following formula:
Tpre = (ξ + P - Co) × Tavg
wherein Tpre is the execution time, ξ is the total complexity, P is the system resource load rate, Co is the concurrency number, and Tavg is the average task execution time.
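Read as an algorithm, the prediction in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the names (`Step`, `Subtask`, `predict_execution_time`) and the mapping from the concurrency state to the concurrency number Co are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    kind: str             # "add", "delete", "modify", or "query"
    data_volume: int = 1  # number of data records the step touches

@dataclass
class Subtask:
    steps: List[Step]
    concurrent: bool = False  # the concurrency state of this subtask

def step_complexity(step: Step) -> int:
    # An addition step counts 1; delete/modify/query count their data volume.
    return 1 if step.kind == "add" else step.data_volume

def predict_execution_time(queue: List[Subtask], cpu: float, io: float,
                           mem: float, system_concurrency: int,
                           t_avg: float) -> float:
    # Total complexity: sum of step complexities over all pending subtasks.
    xi = sum(step_complexity(s) for task in queue for s in task.steps)
    # System resource load rate: CPU + I/O port + host-process memory usage.
    p = cpu + io + mem
    # Concurrency number: assumed here to be the preset system task
    # concurrency when any subtask is concurrent, else 1 (the claim does
    # not spell out this mapping).
    co = system_concurrency if any(t.concurrent for t in queue) else 1
    # Tpre = (xi + P - Co) * Tavg
    return (xi + p - co) * t_avg
```

For example, a single non-concurrent subtask with one addition step and one deletion step over 3 records gives ξ = 4; with a load rate P = 1.0, Co = 1 and Tavg = 2 s, the predicted time is (4 + 1 - 1) × 2 = 8 s.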
2. The method of claim 1, wherein the average task execution time is calculated by the following formula:
Tavg = Tlast / (ξ' + P' - Co')
wherein Tlast is the execution time of an already-executed subtask, ξ' is the total complexity of the executed subtask, P' is the system resource load rate during execution of the executed subtask, and Co' is the concurrency number of the executed subtask.
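Claim 2 obtains Tavg by inverting the prediction formula against a subtask that has already finished. A small sketch under hypothetical naming (the function and parameter names are illustrative, not from the patent):

```python
def average_task_time(t_last: float, xi_prev: float,
                      p_prev: float, co_prev: float) -> float:
    # Tavg = Tlast / (xi' + P' - Co'): divide the measured execution time
    # of a finished subtask by its own (complexity + load - concurrency)
    # factor, yielding the per-unit average used by the predictor.
    return t_last / (xi_prev + p_prev - co_prev)
```

For instance, a subtask that took 8 s with ξ' = 4, P' = 1.0 and Co' = 1 yields Tavg = 8 / 4 = 2 s.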
3. A task queue execution time prediction device, characterized in that the device is arranged on a server, the device comprising:
a task queue acquisition module, configured to acquire a task queue to be processed, the task queue to be processed comprising a plurality of subtasks to be processed;
a basic parameter determining module, configured to determine basic parameters of each subtask to be processed according to the task queue to be processed, the basic parameters comprising a concurrency state and a complexity;
a resource parameter acquisition module, configured to acquire resource parameters of the server, the resource parameters comprising a CPU occupancy rate, an input/output port utilization rate, and a memory occupancy rate of a host process;
an execution time prediction module, configured to predict the execution time of the task queue to be processed according to the basic parameters, the resource parameters and a pre-obtained average task execution time;
the basic parameter determination module further comprises:
a task analysis unit, configured to analyze each subtask to be processed to obtain the concurrency state and the single steps of each subtask to be processed, the concurrency state being either concurrent or non-concurrent;
a step complexity determining unit, configured to determine, according to the step attribute of each single step, a step complexity corresponding to that single step;
the step complexity determining unit is further configured to:
when the single step is an addition, determine the step complexity corresponding to the single step to be 1;
when the single step is one of a deletion, a modification and a query, determine the step complexity corresponding to the single step to be the data volume involved in the single step;
the execution time prediction module further comprises:
a total complexity determining unit, configured to sum the step complexities of the single steps of all the subtasks to be processed in the task queue to obtain a total complexity of the task queue to be processed;
a concurrency number determining unit, configured to determine a concurrency number of the task queue to be processed according to the concurrency states of the subtasks to be processed and a preset system task concurrency number;
a load rate determining unit, configured to sum the CPU occupancy rate, the input/output port utilization rate and the memory occupancy rate of the host process to obtain a system resource load rate;
an execution time prediction unit, configured to predict the execution time of the task queue to be processed according to the total complexity, the concurrency number, the system resource load rate and the pre-obtained average task execution time;
the execution time prediction unit is further configured to:
calculate the execution time by the following formula:
Tpre = (ξ + P - Co) × Tavg
wherein Tpre is the execution time, ξ is the total complexity, P is the system resource load rate, Co is the concurrency number, and Tavg is the average task execution time.
4. A task queue execution time prediction implementation apparatus, comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions which, when executed by the processor, implement the method of claim 1 or 2.
CN201910136619.8A 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device Active CN109901921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910136619.8A CN109901921B (en) 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device

Publications (2)

Publication Number Publication Date
CN109901921A (en) 2019-06-18
CN109901921B (en) 2022-02-11

Family

ID=66945412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910136619.8A Active CN109901921B (en) 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device

Country Status (1)

Country Link
CN (1) CN109901921B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant