CN109901921A - Task queue running time prediction method, apparatus and realization device - Google Patents

Task queue running time prediction method, apparatus and realization device

Info

Publication number
CN109901921A
CN109901921A (application CN201910136619.8A)
Authority
CN
China
Prior art keywords
task
queue
concurrent
single step
complexity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910136619.8A
Other languages
Chinese (zh)
Other versions
CN109901921B (en)
Inventor
罗俊林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhiyuan Internet Software Co., Ltd.
Original Assignee
Beijing Zhiyuan Internet Software Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhiyuan Internet Software Co., Ltd.
Priority to CN201910136619.8A priority Critical patent/CN109901921B/en
Publication of CN109901921A publication Critical patent/CN109901921A/en
Application granted granted Critical
Publication of CN109901921B publication Critical patent/CN109901921B/en
Legal status: Active (granted)


Abstract

The present invention provides a task queue running time prediction method, apparatus, and realization device. The method is applied to a server and comprises: obtaining a to-be-processed task queue that includes multiple to-be-processed subtasks; determining, according to the to-be-processed task queue, basic parameters of each to-be-processed subtask, the basic parameters including a concurrent state and a complexity; obtaining resource parameters of the server, the resource parameters including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and predicting the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time. The present invention reasonably predicts the execution time of a to-be-processed task queue, improving the efficiency of resource scheduling optimization in service queue application scenarios.

Description

Task queue running time prediction method, apparatus and realization device
Technical field
The present invention relates to the field of computer technology, and in particular to a task queue running time prediction method, apparatus, and realization device.
Background technique
In service queue application scenarios, tasks in a queue are prone to accumulation and processing delay when the system is busy. Moreover, in system execution platforms, because the complexity of executed tasks is unevenly distributed and system resources vary dynamically, tasks with the same service logic may take very different times to execute on different compute nodes or in different periods. It is therefore difficult to reasonably predict the execution time of a task queue, which lowers the efficiency of resource scheduling optimization in service queue application scenarios.
Summary of the invention
In view of this, an object of the present invention is to provide a task queue running time prediction method, apparatus, and realization device that reasonably predict the execution time of a task queue, so as to improve the efficiency of resource scheduling optimization in service queue application scenarios.
In a first aspect, an embodiment of the present invention provides a task queue running time prediction method applied to a server. The method comprises: obtaining a to-be-processed task queue that includes multiple to-be-processed subtasks; determining, according to the to-be-processed task queue, basic parameters of each to-be-processed subtask, the basic parameters including a concurrent state and a complexity; obtaining resource parameters of the server, the resource parameters including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and predicting the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, wherein the step of determining the basic parameters of each to-be-processed subtask according to the to-be-processed task queue comprises: parsing each to-be-processed subtask to obtain the concurrent state and the single steps of each to-be-processed subtask, the concurrent state being either concurrent-capable or not concurrent-capable; and determining, according to the step attribute of each single step, the step complexity corresponding to that single step.
With reference to the first possible implementation of the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, wherein the step of determining, according to the step attribute of each single step, the step complexity corresponding to that single step comprises: when the single step is an insert, determining that the step complexity corresponding to the single step is 1; and when the single step is one of delete, modify, and query, determining that the step complexity corresponding to the single step is the number of data items the single step involves.
With reference to the second possible implementation of the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, wherein the step of predicting the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and the previously obtained average task execution time comprises: adding together the step complexities of the single steps of the to-be-processed subtasks in the to-be-processed task queue to obtain the total complexity of the to-be-processed task queue; determining the concurrent quantity of the to-be-processed task queue according to the concurrent states of the to-be-processed subtasks and a preset system task concurrency; adding together the CPU occupancy rate, the input/output port utilization rate, and the memory usage rate of the host process to obtain a system resource load rate; and predicting the execution time of the to-be-processed task queue according to the total complexity, the concurrent quantity, the system resource load rate, and the previously obtained average task execution time.
With reference to the third possible implementation of the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein the step of predicting the execution time of the to-be-processed task queue according to the total complexity, the concurrent quantity, the system resource load rate, and the previously obtained average task execution time comprises calculating the execution time by the following formula:
T_pre = (ξ + P − C_o) × T_avg
where T_pre is the execution time, ξ is the total complexity, P is the system resource load rate, C_o is the concurrent quantity, and T_avg is the average task execution time.
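Taken as a sketch, the formula above can be checked numerically; the function name and the sample figures below are illustrative assumptions, not values from the patent:

```python
def predict_execution_time(total_complexity, load_rate, concurrent_quantity, avg_time):
    """T_pre = (xi + P - C_o) * T_avg, per the formula above."""
    return (total_complexity + load_rate - concurrent_quantity) * avg_time

# With an assumed total complexity of 120, load rate 1.5, concurrent
# quantity 20, and average time 0.4 s per complexity unit:
print(predict_execution_time(120, 1.5, 20, 0.4))  # ≈ 40.6 s
```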
With reference to the fourth possible implementation of the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, wherein the average task execution time is calculated by the following formula:
T_avg = T_last / (ξ′ + P′ − C_o′)
where T_last is the execution time of the previously executed task, ξ′ is the total complexity of the executed task, P′ is the system resource load rate during the execution of the executed task, and C_o′ is the concurrent quantity of the executed task.
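Read together with the prediction formula, this calibration step can be sketched as follows; the function name and the example figures are illustrative assumptions:

```python
def calibrate_average_time(last_exec_time, total_complexity, load_rate, concurrent_quantity):
    """T_avg = T_last / (xi' + P' - C_o'), computed from the last executed task."""
    return last_exec_time / (total_complexity + load_rate - concurrent_quantity)

# A run that took 40.6 s with total complexity 120, load rate 1.5, and
# concurrent quantity 20 yields an average of 0.4 s per complexity unit:
print(calibrate_average_time(40.6, 120, 1.5, 20))  # ≈ 0.4
```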
In a second aspect, an embodiment of the present invention further provides a task queue running time prediction apparatus deployed on a server. The apparatus comprises: a task queue obtaining module, configured to obtain a to-be-processed task queue that includes multiple to-be-processed subtasks; a basic parameter determining module, configured to determine basic parameters of each to-be-processed subtask according to the to-be-processed task queue, the basic parameters including a concurrent state and a complexity; a resource parameter obtaining module, configured to obtain resource parameters of the server, the resource parameters including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and an execution time prediction module, configured to predict the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, wherein the basic parameter determining module further comprises: a task parsing unit, configured to parse each to-be-processed subtask to obtain the concurrent state and the single steps of each to-be-processed subtask, the concurrent state being either concurrent-capable or not concurrent-capable; and a step complexity determining unit, configured to determine, according to the step attribute of each single step, the step complexity corresponding to that single step.
With reference to the first possible implementation of the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, wherein the step complexity determining unit is further configured to: when the single step is an insert, determine that the step complexity corresponding to the single step is 1; and when the single step is one of delete, modify, and query, determine that the step complexity corresponding to the single step is the number of data items the single step involves.
In a third aspect, an embodiment of the present invention further provides a task queue running time prediction realization device comprising a memory and a processor, wherein the memory stores one or more computer instructions that are executed by the processor to implement the task queue running time prediction method described above.
The embodiments of the present invention bring the following beneficial effects:
The embodiments of the present invention provide a task queue running time prediction method, apparatus, and realization device. After a to-be-processed task queue is obtained, the basic parameters of each to-be-processed subtask are determined according to the to-be-processed task queue, the basic parameters including a concurrent state and a complexity; the resource parameters of the server are obtained, including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and the execution time of the to-be-processed task queue is predicted according to the basic parameters, the resource parameters, and a previously obtained average task execution time. In this manner, the execution time of a to-be-processed task queue is reasonably predicted, improving the efficiency of resource scheduling optimization in service queue application scenarios.
Other features and advantages of the present invention will be set forth in the following description; alternatively, some features and advantages can be deduced from or unambiguously determined by the description, or learned by implementing the above techniques of the present invention.
To make the above objects, features, and advantages of the present invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To more clearly illustrate the specific embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the specific embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a task queue running time prediction method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another task queue running time prediction method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the cyclic execution of another task queue running time prediction method provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a task queue running time prediction apparatus provided by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of a task queue running time prediction realization device provided by an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
At present, in service queue application scenarios, it is difficult to reasonably predict the execution time of a task queue, which lowers the efficiency of resource scheduling optimization. On this basis, the embodiments of the present invention provide a task queue running time prediction method, apparatus, and realization device, which can be applied to fields such as task execution time prediction and resource scheduling.
To facilitate understanding of the present embodiment, a task queue running time prediction method disclosed by an embodiment of the present invention is first described in detail.
Referring to the flowchart of a task queue running time prediction method shown in Fig. 1, the method is applied to a server and comprises the following steps:
Step S100: obtain a to-be-processed task queue; the to-be-processed task queue includes multiple to-be-processed subtasks.
Specifically, when the system is busy, some tasks cannot be processed in time and are added, in chronological order, to a preset task queue. When the execution time of the tasks in the task queue needs to be predicted, all subtasks of the task queue need to be read.
Step S102: determine the basic parameters of each to-be-processed subtask according to the to-be-processed task queue; the basic parameters include a concurrent state and a complexity.
Specifically, each to-be-processed subtask needs to be parsed to determine whether it can be executed concurrently. In addition, the complexity of the to-be-processed subtask needs to be obtained from the quantity of resources called by each of its steps, such as the amount of memory computation. The relevant parameters of the task queue are thereby determined from the basic parameters of the to-be-processed subtasks, so that the execution time can be predicted.
Step S104: obtain the resource parameters of the server; the resource parameters include a CPU (Central Processing Unit) occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process.
Specifically, the corresponding resource parameter can be requested for each resource. Because task execution requires the support of the host process, occupies CPU resources, and may use input/output ports, the CPU occupancy rate, the input/output port utilization rate, and the memory usage rate of the host process all affect the execution time of a task.
Step S106: predict the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
Specifically, the average task execution time is calculated from the execution times of completed tasks. In general, when calculating the average task execution time, the CPU occupancy rate, the input/output port utilization rate, and the memory usage rate of the host process at the time the task was completed need to be introduced as relevant parameters.
An embodiment of the present invention provides a task queue running time prediction method. After a to-be-processed task queue is obtained, the basic parameters of each to-be-processed subtask are determined according to the queue, the basic parameters including a concurrent state and a complexity; the resource parameters of the server are obtained, including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and the execution time of the to-be-processed task queue is predicted according to the basic parameters, the resource parameters, and a previously obtained average task execution time. In this manner, the execution time of the to-be-processed task queue is reasonably predicted, improving the efficiency of resource scheduling optimization in service queue application scenarios.
An embodiment of the present invention further provides another task queue running time prediction method, implemented on the basis of the method shown in Fig. 1. By combining the task data volumes required in different situations with the system resource load state, it predicts the execution time of tasks on different nodes and in different periods. Its flowchart is shown in Fig. 2, and the method comprises the following steps:
Step S200: obtain a to-be-processed task queue; the to-be-processed task queue includes multiple to-be-processed subtasks.
Step S202: parse each to-be-processed subtask to obtain the concurrent state and the single steps of each to-be-processed subtask; the concurrent state is either concurrent-capable or not concurrent-capable, and whether a subtask can be executed concurrently is determined by the specific implementation content of the subtask.
Step S204 determines step complexity corresponding to single step according to the step attribute of each single step.
Specifically, the decision procedure of step complexity is as follows:
(1) when single step is newly-increased, determine that step complexity corresponding to single step is 1.
(2) when single step be delete, one among modification and inquiry when, determine that step corresponding to single step is complicated Degree is the data volume item number (also referred to as relevant data volume item number) that single step is related to.
Step S206: add together the step complexities of the single steps of the to-be-processed subtasks in the to-be-processed task queue to obtain the total complexity of the to-be-processed task queue. Specifically, ξ = m₁ + m₂ + … + mₙ, where mᵢ is the step complexity of the i-th single step.
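Steps S204 and S206 can be sketched together as follows; the step-type names ("insert", "delete", "modify", "query") and the dictionary layout are illustrative assumptions, not structures defined by the patent:

```python
def step_complexity(step):
    # Insert steps count 1; delete/modify/query steps count the number
    # of data items involved, per the decision rules above.
    if step["type"] == "insert":
        return 1
    if step["type"] in ("delete", "modify", "query"):
        return step["data_items"]
    raise ValueError(f"unknown step type: {step['type']}")

def total_complexity(queue):
    # Total complexity: sum of step complexities over every single step
    # of every to-be-processed subtask in the queue.
    return sum(step_complexity(s) for subtask in queue for s in subtask["steps"])

queue = [
    {"steps": [{"type": "insert"}, {"type": "query", "data_items": 30}]},
    {"steps": [{"type": "modify", "data_items": 9}]},
]
print(total_complexity(queue))  # 1 + 30 + 9 = 40
```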
Step S208: determine the concurrent quantity of the to-be-processed task queue according to the concurrent states of the to-be-processed subtasks and a preset system task concurrency. Specifically, tasks may be executed concurrently, and in general the higher the concurrency, the higher the execution efficiency; but once the concurrency reaches a critical point, performance remains essentially constant, and this critical point is usually determined by the system's parallel capacity. The relationship between concurrency and performance is also determined by the number of parallelizable tasks in the queue: if there are few or no parallelizable tasks and most or all tasks are serial, then the task concurrency is essentially unrelated to execution performance. That is, current task concurrency = number of parallelizable tasks × concurrency (with the concurrency below the system's parallel capacity).
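The concurrency relationship described above can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
def queue_concurrent_quantity(parallelizable_tasks, concurrency, system_parallel_limit):
    # C_o = parallelizable task count x concurrency, with the concurrency
    # capped at the critical point set by the system's parallel capacity.
    return parallelizable_tasks * min(concurrency, system_parallel_limit)

print(queue_concurrent_quantity(4, 6, 8))   # 24 (below the system limit)
print(queue_concurrent_quantity(4, 12, 8))  # 32 (concurrency capped at 8)
```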
Step S210: add together the CPU occupancy rate, the input/output port utilization rate, and the memory usage rate of the host process to obtain the system resource load rate. That is, current system resource load rate = CPU occupancy rate + I/O occupancy rate + host process memory usage rate.
Step S212: predict the execution time of the to-be-processed task queue according to the total complexity, the concurrent quantity, the system resource load rate, and the previously obtained average task execution time.
Specifically, the execution time can be calculated by the following formula:
T_pre = (ξ + P − C_o) × T_avg
where T_pre is the execution time, ξ is the total complexity, P is the system resource load rate, C_o is the concurrent quantity, and T_avg is the average task execution time. Expressed in words: predicted result value (i.e., the predicted execution time) = (total complexity of the current queue of tasks + current system resource load rate − current task concurrency) × current average task execution time.
The average task execution time is calculated by the following formula:
T_avg = T_last / (ξ′ + P′ − C_o′)
where T_last is the execution time of the previously executed task, ξ′ is the total complexity of the executed task, P′ is the system resource load rate during the execution of the executed task, and C_o′ is the concurrent quantity of the executed task. Expressed in words: current average task execution time = last task execution time / (total complexity of the last queue of tasks + system resource load rate at the time − task concurrency at the time).
In addition, when the task execution situation needs to be monitored, this method can be performed cyclically. In this case, the method can be summarized briefly as two steps, with the flowchart shown in Fig. 3:
(1) obtain the current average task execution time;
(2) predict the task queue execution time, then jump back to step (1).
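Under the stated assumptions (the two formulas above, with each run described by its total complexity, load rate, concurrent quantity, and measured time), the two-step cycle can be sketched as:

```python
def monitoring_cycle(runs, initial_avg_time):
    # Each run is (total_complexity, load_rate, concurrent_quantity, measured_time).
    avg_time = initial_avg_time
    predictions = []
    for xi, p, c_o, measured in runs:
        predictions.append((xi + p - c_o) * avg_time)  # step (2): predict
        avg_time = measured / (xi + p - c_o)           # step (1): recalibrate
    return predictions

runs = [(100, 1.0, 21, 40.0), (120, 1.5, 21.5, 60.0)]
print(monitoring_cycle(runs, 0.5))  # [40.0, 50.0]
```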
For ease of calculation, several factor variables can be defined in this method, including the current average task execution time (equivalent to the average task execution time above), the current number of queued tasks, the total complexity of the current queue of tasks, the data volume involved in the current task execution, the concurrency of the current task execution, and the system resource load rate (CPU occupancy, I/O situation, and memory usage rate of the host process executing the tasks). After the current value of each factor variable is obtained, the predicted result value of the task queue execution time can be obtained from the calculation formulas above.
This method defines the factor variables of the task queue execution time, their acquisition methods, and a linear formula for the task queue execution time. The predicted result value can be derived from the current values of the factor variables, so that the task queue execution time is predicted more scientifically and subsequent actions can be arranged and scheduled.
An embodiment of the present invention further provides a task queue running time prediction apparatus deployed on a server, whose structural schematic diagram is shown in Fig. 4. The apparatus comprises: a task queue obtaining module 400, configured to obtain a to-be-processed task queue that includes multiple to-be-processed subtasks; a basic parameter determining module 402, configured to determine the basic parameters of each to-be-processed subtask according to the to-be-processed task queue, the basic parameters including a concurrent state and a complexity; a resource parameter obtaining module 404, configured to obtain the resource parameters of the server, the resource parameters including a CPU occupancy rate, an input/output port utilization rate, and a memory usage rate of the host process; and an execution time prediction module 406, configured to predict the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
Specifically, the basic parameter determining module further comprises: a task parsing unit, configured to parse each to-be-processed subtask to obtain the concurrent state and the single steps of each to-be-processed subtask, the concurrent state being either concurrent-capable or not concurrent-capable; and a step complexity determining unit, configured to determine, according to the step attribute of each single step, the step complexity corresponding to that single step.
Further, the step complexity determining unit is also configured to: when the single step is an insert, determine that the step complexity corresponding to the single step is 1; and when the single step is one of delete, modify, and query, determine that the step complexity corresponding to the single step is the number of data items the single step involves.
The task queue running time prediction apparatus provided by the embodiments of the present invention has the same technical features as the task queue running time prediction method provided by the embodiments above, so it solves the same technical problems and achieves the same technical effects.
The present embodiment provides a task queue running time prediction realization device corresponding to the method embodiments above. Fig. 5 is a structural schematic diagram of the realization device. As shown in Fig. 5, the device comprises a processor 1201 and a memory 1202, wherein the memory 1202 stores one or more computer instructions that are executed by the processor to implement the task queue running time prediction method above.
The realization device shown in Fig. 5 further comprises a bus 1203 and a forwarding chip 1204; the processor 1201, the forwarding chip 1204, and the memory 1202 are connected by the bus 1203. This message-transmission realization device may be a network edge device.
The memory 1202 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, for example, at least one disk memory. The bus 1203 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is shown in Fig. 5, but this does not mean there is only one bus or one type of bus.
The forwarding chip 1204 is configured to connect, via a network interface, at least one user terminal and other network units, and to send encapsulated IPv4 or IPv6 messages to the user terminal via the network interface.
The processor 1201 may be an integrated circuit chip with signal processing capability. During implementation, each step of the method above may be completed by an integrated logic circuit of hardware in the processor 1201 or by instructions in the form of software. The processor 1201 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, and can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1202, and the processor 1201 reads the information in the memory 1202 and completes the steps of the methods of the preceding embodiments in combination with its hardware.
An embodiment of the present invention further provides a machine-readable storage medium storing machine-executable instructions. When called and executed by a processor, the machine-executable instructions cause the processor to implement the task queue running time prediction method above; for the specific implementation, refer to the method embodiments, which are not repeated here.
The apparatus and realization device provided by the embodiments of the present invention have the same realization principle and produce the same technical effects as the preceding method embodiments. For brevity, where the device embodiments do not mention something, reference may be made to the corresponding contents of the preceding method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely schematic. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of the apparatuses, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logic functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings; for example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. Each box in the block diagrams and/or flowcharts, and each combination of boxes in the block diagrams and/or flowcharts, may be realized by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules or units in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are realized in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments within the technical scope disclosed by the present disclosure, may readily conceive of variations, or may make equivalent replacements of some of the technical features; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A task queue execution time prediction method, wherein the method is applied to a server, the method comprising:
obtaining a to-be-processed task queue, the to-be-processed task queue comprising a plurality of to-be-processed subtasks;
determining, according to the to-be-processed task queue, basic parameters of each to-be-processed subtask, the basic parameters comprising a concurrency state and a complexity;
obtaining resource parameters of the server, the resource parameters comprising a CPU occupancy rate, an input/output port occupancy rate, and a memory occupancy rate of a host process; and
predicting an execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
2. The method according to claim 1, wherein the step of determining, according to the to-be-processed task queue, the basic parameters of each to-be-processed subtask comprises:
parsing each to-be-processed subtask to obtain the concurrency state and the single steps of each to-be-processed subtask, the concurrency state being either concurrent-capable or non-concurrent-capable; and
determining, according to a step attribute of each single step, a step complexity corresponding to the single step.
3. The method according to claim 2, wherein the step of determining, according to the step attribute of each single step, the step complexity corresponding to the single step comprises:
when the single step is an insertion (newly added data), determining that the step complexity corresponding to the single step is 1; and
when the single step is one of a deletion, a modification, and a query, determining that the step complexity corresponding to the single step is the number of data items involved in the single step.
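The step-complexity rule of claim 3 can be sketched as a small function. This is a minimal illustration under assumed names; the operation labels ("insert", "delete", "modify", "query") and the function name are not identifiers from the patent.

```python
def step_complexity(operation: str, items_involved: int) -> int:
    """Claim 3 (sketch): an insertion step has complexity 1; a deletion,
    modification, or query step has complexity equal to the number of
    data items the step involves."""
    if operation == "insert":
        return 1
    if operation in ("delete", "modify", "query"):
        return items_involved
    raise ValueError(f"unknown step attribute: {operation}")
```

Under this reading, a query touching 500 rows contributes 500 to the queue's total complexity, while an insertion always contributes 1 regardless of payload size.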
4. The method according to claim 3, wherein the step of predicting the execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and the previously obtained average task execution time comprises:
adding together the step complexities of the single steps of the to-be-processed subtasks in the to-be-processed task queue to obtain a total complexity of the to-be-processed task queue;
determining a concurrent-capable quantity of the to-be-processed task queue according to the concurrency states of the to-be-processed subtasks and a preset system task concurrency number;
adding together the CPU occupancy rate, the input/output port occupancy rate, and the memory occupancy rate of the host process to obtain a system resource load rate; and
predicting the execution time of the to-be-processed task queue according to the total complexity, the concurrent-capable quantity, the system resource load rate, and the previously obtained average task execution time.
5. The method according to claim 4, wherein the step of predicting the execution time of the to-be-processed task queue according to the total complexity, the concurrent-capable quantity, the system resource load rate, and the previously obtained average task execution time comprises:
calculating the execution time by the following formula:
Tpre = (ξ + P - Co) × Tavg
wherein Tpre is the execution time, ξ is the total complexity, P is the system resource load rate, Co is the concurrent-capable quantity, and Tavg is the average task execution time.
6. The method according to claim 5, wherein the average task execution time is calculated by the following formula:
Tavg = Tlast / (ξ' + P' - Co')
wherein Tlast is the execution time of a designated executed task, ξ' is the total complexity of the executed task, P' is the system resource load rate during the execution of the executed task, and Co' is the concurrent-capable quantity of the executed task.
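The two formulas of claims 5 and 6 can be sketched together: claim 6 calibrates the average task execution time Tavg from one previously executed reference task, and claim 5 then scales it by the current queue's parameters. The function and parameter names below are illustrative assumptions; the patent defines only the symbols Tpre, ξ, P, Co, Tavg, and Tlast.

```python
def calibrate_avg_time(t_last: float, total_complexity: float,
                       resource_load: float, concurrent_qty: float) -> float:
    """Claim 6 (sketch): Tavg = Tlast / (xi' + P' - Co'), derived from a
    designated previously executed task."""
    return t_last / (total_complexity + resource_load - concurrent_qty)


def predict_execution_time(total_complexity: float, resource_load: float,
                           concurrent_qty: float, t_avg: float) -> float:
    """Claim 5 (sketch): Tpre = (xi + P - Co) * Tavg, where xi is the
    summed step complexity of the queue, P the summed CPU / I/O port /
    host-process-memory occupancy, and Co the number of subtasks that
    may run concurrently."""
    return (total_complexity + resource_load - concurrent_qty) * t_avg
```

For example, a reference task that ran 20 s with ξ' = 12, P' = 2, Co' = 4 yields Tavg = 20 / 10 = 2 s; a queue with ξ = 11, P = 1, Co = 2 is then predicted at (11 + 1 - 2) × 2 = 20 s.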
7. A task queue execution time prediction apparatus, wherein the apparatus is provided in a server, the apparatus comprising:
a task queue obtaining module, configured to obtain a to-be-processed task queue, the to-be-processed task queue comprising a plurality of to-be-processed subtasks;
a basic parameter determining module, configured to determine basic parameters of each to-be-processed subtask according to the to-be-processed task queue, the basic parameters comprising a concurrency state and a complexity;
a resource parameter obtaining module, configured to obtain resource parameters of the server, the resource parameters comprising a CPU occupancy rate, an input/output port occupancy rate, and a memory occupancy rate of a host process; and
an execution time prediction module, configured to predict an execution time of the to-be-processed task queue according to the basic parameters, the resource parameters, and a previously obtained average task execution time.
8. The apparatus according to claim 7, wherein the basic parameter determining module further comprises:
a task parsing unit, configured to parse each to-be-processed subtask to obtain the concurrency state and the single steps of each to-be-processed subtask, the concurrency state being either concurrent-capable or non-concurrent-capable; and
a step complexity determining unit, configured to determine, according to a step attribute of each single step, a step complexity corresponding to the single step.
9. The apparatus according to claim 8, wherein the step complexity determining unit is further configured to:
when the single step is an insertion (newly added data), determine that the step complexity corresponding to the single step is 1; and
when the single step is one of a deletion, a modification, and a query, determine that the step complexity corresponding to the single step is the number of data items involved in the single step.
10. A task queue execution time prediction implementation apparatus, comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method according to any one of claims 1-6.
CN201910136619.8A 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device Active CN109901921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910136619.8A CN109901921B (en) 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device

Publications (2)

Publication Number Publication Date
CN109901921A true CN109901921A (en) 2019-06-18
CN109901921B CN109901921B (en) 2022-02-11

Family

ID=66945412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910136619.8A Active CN109901921B (en) 2019-02-22 2019-02-22 Task queue execution time prediction method and device and implementation device

Country Status (1)

Country Link
CN (1) CN109901921B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102745192A (en) * 2012-06-14 2012-10-24 北京理工大学 Task allocation system for distributed control system of hybrid vehicle
CN102831012A (en) * 2011-06-16 2012-12-19 日立(中国)研究开发有限公司 Task scheduling device and task scheduling method in multimode distributive system
CN102902573A (en) * 2012-09-20 2013-01-30 北京搜狐新媒体信息技术有限公司 Task processing method and device based on shared resources
US20130104140A1 (en) * 2011-10-21 2013-04-25 International Business Machines Corporation Resource aware scheduling in a distributed computing environment
CN103593323A (en) * 2013-11-07 2014-02-19 浪潮电子信息产业股份有限公司 Machine learning method for Map Reduce task resource allocation parameters
US20140181833A1 (en) * 2012-12-21 2014-06-26 International Business Machines Corporation Processor provisioning by a middleware system for a plurality of logical processor partitions
JP2015108877A (en) * 2013-12-03 2015-06-11 日本電気株式会社 Prediction time distribution generation device, control method, and program
CN105446979A (en) * 2014-06-27 2016-03-30 华为技术有限公司 Data mining method and node
CN106201723A (en) * 2016-07-13 2016-12-07 浪潮(北京)电子信息产业有限公司 The resource regulating method of a kind of data center and device
CN107168806A (en) * 2017-06-29 2017-09-15 上海联影医疗科技有限公司 Resource regulating method, system and the computer equipment of distribution scheduling machine
CN107172656A (en) * 2016-03-07 2017-09-15 京东方科技集团股份有限公司 Non- blocking request processing method and processing device
CN108287756A (en) * 2018-01-25 2018-07-17 联动优势科技有限公司 A kind of method and device of processing task
KR20180097904A (en) * 2017-02-24 2018-09-03 한국전자통신연구원 High speed video editing method on cloud platform and apparatus thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUE LU 等: "A Statistical Response-Time Analysis of Real-Time Embedded Systems", 《2012 IEEE 33RD REAL-TIME SYSTEMS SYMPOSIUM》 *
DING, Xin'an et al.: "Fuzzy Control Scheduling Strategy for Grid Tasks Based on Request Load", Computer Simulation (《计算机仿真》) *
LI, Tao et al.: "Research on GPU Task Parallel Computing Model Based on Thread Pool", Chinese Journal of Computers (《计算机学报》) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149938A (en) * 2019-06-28 2020-12-29 深圳迈瑞生物医疗电子股份有限公司 Pipeline system and sample centrifugation method
CN110737572A (en) * 2019-08-31 2020-01-31 苏州浪潮智能科技有限公司 Big data platform resource preemption test method, system, terminal and storage medium
CN110737572B (en) * 2019-08-31 2023-01-10 苏州浪潮智能科技有限公司 Big data platform resource preemption test method, system, terminal and storage medium
CN110659137A (en) * 2019-09-24 2020-01-07 支付宝(杭州)信息技术有限公司 Processing resource allocation method and system for offline tasks
CN110659137B (en) * 2019-09-24 2022-02-08 支付宝(杭州)信息技术有限公司 Processing resource allocation method and system for offline tasks
CN111131292A (en) * 2019-12-30 2020-05-08 北京天融信网络安全技术有限公司 Message distribution method and device, network security detection equipment and storage medium
CN111131292B (en) * 2019-12-30 2022-04-26 北京天融信网络安全技术有限公司 Message distribution method and device, network security detection equipment and storage medium
CN111199316A (en) * 2019-12-31 2020-05-26 中国电力科学研究院有限公司 Cloud and mist collaborative computing power grid scheduling method based on execution time evaluation
CN112685116A (en) * 2020-12-29 2021-04-20 福州数据技术研究院有限公司 Method for displaying gene data processing progress and storage device
CN112988362A (en) * 2021-05-14 2021-06-18 南京蓝洋智能科技有限公司 Task processing method and device, electronic equipment and storage medium
CN114461053A (en) * 2021-08-24 2022-05-10 荣耀终端有限公司 Resource scheduling method and related device

Also Published As

Publication number Publication date
CN109901921B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN109901921A (en) Task queue running time prediction method, apparatus and realization device
US9727383B2 (en) Predicting datacenter performance to improve provisioning
CN105718479B (en) Execution strategy generation method and device under cross-IDC big data processing architecture
CN110389816B (en) Method, apparatus and computer readable medium for resource scheduling
CN109189572B (en) Resource estimation method and system, electronic equipment and storage medium
CN109298990A (en) Log storing method, device, computer equipment and storage medium
CN109669774A (en) Quantization method, method of combination, device and the network equipment of hardware resource
CN109634744A (en) A kind of fine matching method based on cloud platform resource allocation, equipment and storage medium
CN115756780A (en) Quantum computing task scheduling method and device, computer equipment and storage medium
CN109800092A (en) A kind of processing method of shared data, device and server
CN115460216A (en) Calculation force resource scheduling method and device, calculation force resource scheduling equipment and system
CN108874520A (en) Calculation method and device
JP5108011B2 (en) System, method, and computer program for reducing message flow between bus-connected consumers and producers
CN110908797A (en) Call request data processing method, device, equipment, storage medium and system
CN111144796A (en) Method and device for generating tally information
CN113672375B (en) Resource allocation prediction method, device, equipment and storage medium
CN115705593A (en) Logistics transportation method and device, computer equipment and storage medium
CN109039826A (en) Collecting method, device and electronic equipment
CN116302453B (en) Task scheduling method and device for quantum electronic hybrid platform
CN115374914A (en) Distributed training method, parallel deep learning framework and electronic equipment
CN111694670B (en) Resource allocation method, apparatus, device and computer readable medium
CN110968420A (en) Scheduling method and device for multi-crawler platform, storage medium and processor
CN113657635B (en) Method for predicting loss of communication user and electronic equipment
CN113656046A (en) Application deployment method and device
CN113094155B (en) Task scheduling method and device under Hadoop platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant