CN108446165A - Task forecasting method in cloud computing - Google Patents
Task forecasting method in cloud computing
- Publication number
- CN108446165A (application number CN201810200611.9A)
- Authority
- CN
- China
- Prior art keywords
- task
- virtual machine
- time
- cloud computing
- pool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
- Multi Processors (AREA)
Abstract
The invention discloses a task forecasting method in cloud computing. By estimating the task transmission speed before task execution, the present invention transmits the next task while the current task is still executing, without affecting the scheduling strategy of the parallel computation. This overcomes the shortcoming of the prior art, which does not account for the influence of the task transmission time on the total execution time of computation-intensive independent tasks, and thereby further reduces the time needed to execute tasks in parallel and effectively improves system efficiency. As the amount of computation grows, the task time saved by the present invention becomes increasingly apparent.
Description
Technical field
The invention belongs to the field of computer technology, and in particular relates to a task forecasting method in cloud computing.
Background technology
Cloud computing is the development trend of the future IT industry. The computing, storage, data, and application resources gathered on the Internet keep growing as the Internet expands, and cloud computing turns the Internet from a traditional communications platform into a ubiquitous, intelligent computing platform: computing tasks are distributed over a resource pool composed of a large number of computers, so that application systems can obtain computing resources, data resources, storage resources and application services on demand.
With the arrival of the big data era, processing large-scale tasks quickly and effectively has become a pressing problem, and cloud computing is currently regarded as the best solution. How to further improve the efficiency of distributing large-scale tasks for parallel computation is a hot research topic.
The patent "A task scheduling method in cloud computing" filed by Xidian University (application number 201710476832.4, publication number CN107357641A) discloses a task scheduling method in cloud computing. Its specific steps are: (1) task division: when a new task arrives at the task window, the task window divides it into 10N-100N parts (N is the total number of virtual machines) to form a task set, which is placed in the task pool; (2) system state monitoring: the resource monitor monitors the system state, including physical host information, task information and task status information; (3) virtual machine creation: the task scheduler creates new virtual machines one by one using a virtual-machine scaling scheme that is aware of machine startup time, and starts the parallel procedure; (4) whenever a new virtual machine starts or a task finishes on a virtual machine, the task scheduler assigns a task to that virtual machine and records the assignment of tasks in the task pool in a task status table, which is used to query the running state and task execution progress of each virtual machine; (5) all tasks are completed and the results are returned. The shortcoming of this method is that it does not consider the influence of the task transmission time on task execution efficiency when the amount of tasks is large.
The patent "A general method for task scheduling and resource allocation in a cloud computing system" filed by the National University of Defense Technology (application number 201610293128.0, publication number CN106020927A) discloses a general method for task scheduling and resource allocation in a cloud computing system. Its specific steps are: (1) obtain the arrived task information from the general framework for task scheduling and resource allocation, and determine at least one scheduling objective according to that information; (2) obtain the physical host information of the virtualized cloud, create virtual machines with a specific algorithm according to the physical host information, the task information and the scheduling objective, and assign tasks to the virtual machines for execution; (3) continuously monitor the status information of all assigned tasks and dynamically allocate resources to the virtual machines; (4) all tasks are completed and the results are returned. The shortcoming of this method is that it does not consider the influence of the virtual machine startup time and the task transmission time on the execution time of computation-intensive independent tasks.
Summary of the invention
The purpose of the present invention is to overcome the above shortcomings and to provide a task forecasting method in cloud computing that performs the task transmission operation while the previous computing task is close to completion, thereby hiding most of the task transmission time and effectively improving the efficiency of parallel task processing.
In order to achieve the above object, the present invention includes the following steps:
Step 1: divide the task to form a task set and place it in the task pool;
Step 2: create virtual machines and start the parallel procedure;
Step 3: estimate the transmission time required for a single task;
Step 4: whenever a new virtual machine starts or a task on a virtual machine is about to finish, the task scheduler assigns a task to that virtual machine and records the assignment of tasks in the task pool in a task status table, which is used to query the running state and task execution progress of each virtual machine;
Step 5: repeat until the task set has been fully distributed and all tasks are completed.
In step 1, when a new task arrives at the task window, the task window divides it evenly into 10N-100N parts, where N is the total number of virtual machines, to form the task set.
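Purely as an illustration (the patent does not specify how the division is implemented), a data-parallel task whose input can be partitioned by index might be split as in the following sketch; the `Chunk` structure and the `split_factor` parameter are assumptions of this sketch, not elements of the patent:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    index: int      # position of this part within the original task
    start: int      # first input item covered by this part
    end: int        # one past the last input item covered by this part

def divide_task(total_items: int, num_vms: int, split_factor: int = 10) -> list[Chunk]:
    """Divide a data-parallel task into roughly equal parts.

    split_factor is chosen between 10 and 100, so the task set
    contains 10N-100N parts for N virtual machines.
    """
    parts = num_vms * split_factor
    base, extra = divmod(total_items, parts)
    chunks, start = [], 0
    for i in range(parts):
        size = base + (1 if i < extra else 0)   # spread the remainder evenly
        chunks.append(Chunk(index=i, start=start, end=start + size))
        start += size
    return chunks

# Example: 1,000,000 input items, 5 virtual machines, split factor 20 -> 100 parts
task_pool = divide_task(1_000_000, num_vms=5, split_factor=20)
```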
In step 2, the task scheduler creates new virtual machines one by one using a virtual-machine scaling scheme that is aware of machine startup time.
In step 3, the time used to take one task from the task pool onto a virtual machine is measured and denoted T_g. When the virtual machine configuration, the task size and the network environment are the same, T_g can be regarded as the time needed to transmit any one part of the task.
The task time is estimated after the task has been divided and before the tasks are distributed and executed: the measurement is repeated on several arbitrarily chosen virtual machines, and the average value is taken as the time T_g required to fetch one task.
In step 4, a task is assigned as follows:
First step: take a task out of the task pool, map it onto the virtual machine, start executing it, and estimate the time T_f required to complete it;
Second step: at time T_f − T_g after the task starts executing on the virtual machine, check the task pool; if the task pool is not empty, continue with the first step; if the task pool is empty, restore the virtual machine to its original state.
In step 4, the task status table is defined as follows: in the task set T = {t_1, t_2, …, t_n}, any task t_i is represented as t_i = {a_i, l_i, f_i}, where a_i, l_i and f_i are respectively its arrival time, task size and task completion time.
Compared with the prior art, the present invention estimates the task transmission speed before task execution, so that the next task is transmitted while the current task is still executing, without affecting the scheduling strategy of the parallel computation. This overcomes the shortcoming of the prior art, which does not account for the influence of the task transmission time on the total execution time of computation-intensive independent tasks, so that the present invention further reduces the time needed to execute tasks in parallel and effectively improves system efficiency. As the amount of computation grows, the task time saved by the present invention becomes increasingly apparent.
Description of the drawings
Fig. 1 is the scheduling architecture of the cloud computing system used by the present invention;
Fig. 2 is the flow chart of the present invention.
Detailed description of the embodiments
The present invention will be further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, the cloud computing system used by the present invention is a collection of physical hosts. The task scheduling layer used by the present invention consists of a task window and a task scheduler: the task window receives newly arrived tasks and divides them, and the task scheduler assigns the work units produced by dividing the tasks in the task window to the virtual machines. Here, "task" refers to a computation-intensive independent task.
The present invention addresses the parallel computation of computation-intensive independent tasks. The problem can be described as follows: N tasks are available at time 0 and M virtual machines are available to run them, with M < N; the scheduling goal is to minimize the total completion time of these tasks. It is further assumed that the N tasks are of identical size, that any one of them can be executed at any time, and that tasks cannot be preempted, i.e., once a task has been assigned to a machine it must run to completion there.
Set of tasks T={ t1,t2,…,tn, wherein ti(i=1,2 ..., n) indicate i-th of task;The collection of virtual machine
Close M={ m1,m2,…,mm, wherein mj(j=1,2 ..., m) indicate jth platform virtual machine.Task tiIt is transmitted to mjOn virtual machine simultaneously
Operation completes the required time and uses pijIt indicates;kijA certain element in expression task and virtual machine mapping matrix, if 1 table
Show that i-th of task distribution is otherwise 0 on j-th of virtual machine.
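The patent states the scheduling goal only in words (minimize the total completion time of all tasks). Using the p_ij and k_ij just defined, one possible way to write that goal formally is the makespan-minimization model below; this formalization is our reading of the text, not an equation given in the patent:

```latex
\min_{k}\; \max_{1 \le j \le m} \sum_{i=1}^{n} k_{ij}\, p_{ij}
\qquad \text{subject to} \qquad
\sum_{j=1}^{m} k_{ij} = 1 \;\; (i = 1, \dots, n), \qquad
k_{ij} \in \{0, 1\}.
```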
As described above, the scheduling problem of the present invention is to assign the n tasks to the m virtual machines so that the overall completion time is minimized. Let T_g be the time required to transmit any one task to a virtual machine; the task prefetching mechanism designed in the present invention then shortens the total task time by:
(n − m) × T_g
It can be seen that, with the same number of available virtual machines and the same size for each task part after division, the larger the number of task parts, the more pronounced the efficiency gain provided by the present invention.
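A purely illustrative calculation (the numbers are ours, not the patent's): with m = 10 virtual machines, a workload divided into n = 1000 parts, and a per-part transmission time T_g = 2 seconds, prefetching hides (1000 − 10) × 2 s = 1980 s, i.e. about 33 minutes of transmission time that would otherwise be added to the parallel execution.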
With reference to Fig. 2, the specific implementation steps of the present invention are as follows:
Step 1: task division.
When a new task arrives at the task window, the task window divides it into 10N-100N approximately equal parts (N is the total number of virtual machines) to form the task set, which is placed in the task pool. In the present invention, "task" refers to a computation-intensive independent task.
Step 2: virtual machine creation.
The task scheduler creates new virtual machines one by one using a virtual-machine scaling scheme that is aware of machine startup time.
Step 3: estimation of the task transmission time.
Measure the time used to take one task from the task pool onto a virtual machine; when the virtual machine configuration, the task size and the network environment are the same, this can be regarded as the time required to transmit any task to a virtual machine.
The task transmission time is estimated as follows (see the sketch after these steps):
1st step: select i task parts and j virtual machines at random (i and j depend on the scale of the host cluster and the total task size);
2nd step: transmit the i task parts one by one to the j virtual machines, recording each transmission time as t_xy (1 ≤ x ≤ i, 1 ≤ y ≤ j); note that the parts must be transmitted sequentially, because the time needed to establish the connection between a virtual machine and the task pool when switching tasks cannot be ignored;
3rd step: compute the approximate transmission time of one task part as the average of all measurements, T_g = (Σ_x Σ_y t_xy) / (i × j).
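A minimal sketch of this measurement procedure, assuming a blocking `transmit(part, vm)` function that copies one task part to one virtual machine (the patent does not prescribe a transport mechanism, so that function and the sample sizes are assumptions):

```python
import random
import time

def estimate_transmission_time(task_pool, virtual_machines, transmit,
                               num_parts=5, num_vms=3):
    """Estimate T_g: the average time to move one task part onto a VM.

    task_pool        -- list of task parts produced by the division step
    virtual_machines -- list of available VM handles
    transmit         -- callable transmit(part, vm) that performs one copy
    """
    sample_parts = random.sample(task_pool, min(num_parts, len(task_pool)))
    sample_vms = random.sample(virtual_machines, min(num_vms, len(virtual_machines)))

    timings = []
    for part in sample_parts:            # parts are transmitted strictly one by one,
        for vm in sample_vms:            # so connection setup time is included in t_xy
            start = time.monotonic()
            transmit(part, vm)
            timings.append(time.monotonic() - start)

    return sum(timings) / len(timings)   # T_g = average of all t_xy
```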
Step 4: task distribution.
Whenever a new virtual machine starts or a task on a virtual machine is about to finish, the task scheduler assigns a task and records the assignment in the task status table; the task status table is used to query the running state and task execution progress of each virtual machine.
Take one task out of the task pool, map it onto the virtual machine, start executing it, and estimate the time T_f required to complete it.
At time T_f − T_g, i.e. T_g before the task on the virtual machine is expected to finish, check the task pool; if the task pool is not empty, continue with the previous step.
When the task has finished and the task pool is empty, restore the virtual machine to its original state.
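The sketch below ties steps 3 and 4 together for a single virtual machine, reusing the TaskStatusTable sketched earlier. It is an illustrative reading of the mechanism rather than the patent's implementation: `run(task, vm)`, `estimate_completion_time(task, vm)`, `vm.reset()` and the timer-based prefetch are all assumptions, and thread-safe sharing of the task pool between virtual machines is left out for brevity:

```python
import threading

def schedule_on_vm(vm, task_pool, status_table, transmit, run,
                   estimate_completion_time, t_g):
    """Drive one virtual machine: run the current task and, T_g before it is
    expected to finish, start transmitting the next task from the pool."""
    if not task_pool:
        return
    current = task_pool.pop(0)          # the very first task still has to be copied
    transmit(current, vm)

    while current is not None:
        status_table.assign(vm.vm_id, current)
        t_f = estimate_completion_time(current, vm)   # estimated execution time T_f

        prefetched = {}

        def prefetch():
            # Checked at time T_f - T_g: if the pool is not empty, copy the
            # next task while the current one is still executing.
            if task_pool:
                nxt = task_pool.pop(0)
                transmit(nxt, vm)
                prefetched["task"] = nxt

        timer = threading.Timer(max(t_f - t_g, 0.0), prefetch)
        timer.start()
        run(current, vm)                # blocks until the current task completes
        timer.join()                    # wait for the prefetch decision/copy to finish
        current = prefetched.get("task")  # None -> pool was empty

    vm.reset()                          # restore the virtual machine to its original state
```

The point the sketch illustrates is that the pool check happens T_g before the estimated completion time T_f, so the copy of the next task overlaps with the tail of the current computation instead of being added after it.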
Step 5: task completion.
All tasks are completed and the task results are returned.
Claims (7)
1. A task forecasting method in cloud computing, characterized in that it comprises the following steps:
Step 1: divide the task to form a task set and place it in the task pool;
Step 2: create virtual machines and start the parallel procedure;
Step 3: estimate the transmission time required for a single task;
Step 4: whenever a new virtual machine starts or a task on a virtual machine is about to finish, the task scheduler assigns a task to that virtual machine and records the assignment of tasks in the task pool in a task status table, which is used to query the running state and task execution progress of each virtual machine;
Step 5: repeat until the task set has been fully distributed and all tasks are completed.
2. The task forecasting method in cloud computing according to claim 1, characterized in that, in step 1, when a new task arrives at the task window, the task window divides it evenly into 10N-100N parts, where N is the total number of virtual machines, to form the task set.
3. The task forecasting method in cloud computing according to claim 1, characterized in that, in step 2, the task scheduler creates new virtual machines one by one using a virtual-machine scaling scheme that is aware of machine startup time.
4. The task forecasting method in cloud computing according to claim 1, characterized in that, in step 3, the time used to take one task from the task pool onto a virtual machine is measured and denoted T_g; when the virtual machine configuration, the task size and the network environment are the same, T_g can be regarded as the time needed to transmit any one part of the task.
5. The task forecasting method in cloud computing according to claim 1 or 4, characterized in that the task time is estimated after the task has been divided and before the tasks are distributed and executed: the measurement is repeated on several arbitrarily chosen virtual machines, and the average value is taken as the time T_g required to fetch one task.
6. The task forecasting method in cloud computing according to claim 1, characterized in that, in step 4, a task is assigned as follows:
First step: take a task out of the task pool, map it onto the virtual machine, start executing it, and estimate the time T_f required to complete it;
Second step: at time T_f − T_g after the task starts executing on the virtual machine, check the task pool; if the task pool is not empty, continue with the first step; if the task pool is empty, restore the virtual machine to its original state.
7. The task forecasting method in cloud computing according to claim 1, characterized in that, in step 4, the task status table is defined as follows: in the task set T = {t_1, t_2, …, t_n}, any task t_i is represented as t_i = {a_i, l_i, f_i}, where a_i, l_i and f_i are respectively its arrival time, task size and task completion time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810200611.9A CN108446165A (en) | 2018-03-12 | 2018-03-12 | Task forecasting method in cloud computing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446165A (en) | 2018-08-24 |
Family
ID=63194040
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810200611.9A (pending) | 2018-03-12 | 2018-03-12 | Task forecasting method in cloud computing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446165A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530182A (en) * | 2013-10-22 | 2014-01-22 | 海南大学 | Working scheduling method and device |
CN106371893A (en) * | 2016-08-31 | 2017-02-01 | 开封大学 | Cloud computing scheduling system and method |
CN108228323A (en) * | 2016-12-14 | 2018-06-29 | 龙芯中科技术有限公司 | Hadoop method for scheduling task and device based on data locality |
CN107357641A (en) * | 2017-06-21 | 2017-11-17 | 西安电子科技大学 | Method for scheduling task in a kind of cloud computing |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110806922A (en) * | 2019-10-14 | 2020-02-18 | 广州微算互联信息技术有限公司 | Script execution method, device, equipment and storage medium |
CN110806922B (en) * | 2019-10-14 | 2022-06-21 | 广州微算互联信息技术有限公司 | Script execution method, device, equipment and storage medium |
CN112486651A (en) * | 2020-11-30 | 2021-03-12 | 中国电子科技集团公司第十五研究所 | Cloud test platform task scheduling method based on improved genetic algorithm |
CN112486651B (en) * | 2020-11-30 | 2024-09-06 | 中国电子科技集团公司第十五研究所 | Cloud test platform task scheduling method based on improved genetic algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180824 |