CN104079502A - Multi-user multi-queue scheduling method


Info

Publication number: CN104079502A (application CN201410302274.6A)
Authority: CN (China)
Prior art keywords: user, scheduling, resource, queue, job queue
Legal status: Granted; Active
Filing and priority date: 2014-06-27
Other language: Chinese (zh)
Other version: CN104079502B
Inventors: 刘欣然, 沈时军, 朱春鸽
Original and current assignee: National Computer Network and Information Security Management Center
Landscapes: Information Transfer Between Computers; Management, Administration, Business Operations System, And Electronic Commerce

Abstract

The invention provides a multi-user multi-queue scheduling method. The method includes the following steps: first, establish a multi-user multi-queue scheduling model; second, a scheduling server receives jobs submitted by users and caches them; third, the scheduling server periodically and cyclically schedules the jobs and dispatches them to the servers corresponding to the resources. With this method, scheduling preemption among users is avoided in any scheduling scenario, and low-priority jobs retain a certain scheduling probability while job priorities are still taken into account.

Description

Multi-user multi-queue scheduling method
Technical field
The present invention relates to scheduling methods, and in particular to a multi-user multi-queue scheduling method.
Background technology
The scheduling algorithms most widely used today in cloud computing and distributed computing are the three scheduler classes found in Hadoop: the first-in first-out algorithm FIFO (First In First Out), the fair share scheduling algorithm (Fair Scheduler), and the computing capacity scheduling algorithm (Capacity Scheduler).
1) The main idea of the FIFO algorithm is that all users' jobs are submitted to a single queue and sorted by priority and then by submission time: if jobs have different priorities, the higher-priority job is always executed first; if they have the same priority, the job submitted earlier is always executed first. The method has the following shortcomings: 1) it cannot prevent scheduling preemption between users, because a user who submits a large number of jobs obtains a correspondingly large share of scheduling time; 2) it cannot take care of low-priority jobs, which never get scheduled as long as high-priority jobs exist.
2) The main idea of the fair share scheduling algorithm (Fair Scheduler) is to create one job queue per user, allocate a specified share of scheduling time to each queue, and schedule within each queue by the FIFO algorithm described above. Its shortcoming is that it cannot guarantee that higher-priority jobs are scheduled with higher probability: because each user's share of scheduling time is fixed, a user who submits few jobs may have low-priority jobs scheduled more easily than another user's high-priority jobs.
3) The main idea of the computing capacity scheduling algorithm (Capacity Scheduler) is to create multiple job queues, allocate a certain share of scheduling time to each queue, always select at scheduling time the queue whose actual scheduling time falls furthest below its expected share, and schedule within each queue by the FIFO algorithm described above. Its shortcoming is that, because each queue's share of scheduling time is fixed, low-priority jobs can be scheduled ahead of high-priority jobs when the number of jobs differs greatly between queues.
Summary of the invention
To overcome the above deficiencies of the prior art, the invention provides a multi-user multi-queue scheduling method which, in any scheduling scenario, both avoids scheduling preemption between users and, while still taking job priorities into account, guarantees low-priority jobs a certain scheduling probability.
To achieve the above object, the invention adopts the following technical solution:
The invention provides a multi-user multi-queue scheduling method comprising the following steps:
Step 1: establish a multi-user multi-queue scheduling model;
Step 2: a scheduling server receives jobs submitted by users and caches them;
Step 3: the scheduling server periodically and cyclically schedules the jobs and dispatches each job to the server corresponding to its resource.
In step 1, the multi-user multi-queue scheduling model is as follows:
Define the resource set Φ = {r_i | i = 1, 2, ..., n}, where r_i is the i-th resource in the set and n is the total number of resources; and the job set Ψ = {t_j | j = 1, 2, ..., m}, where t_j is the j-th job in the set and m is the total number of jobs. The function U(t_j) gives the user to whom job t_j belongs, and the function K(t_j) gives the priority of job t_j. The job queue Ψ_{u,k} = {t_j | U(t_j) = u ∧ K(t_j) = k} is the set of all jobs that belong to user u and have priority k, where u ∈ (1, υ), k ∈ (1, κ), υ is the total number of users and κ is the highest priority. Define w_{u,k} as the time at which job queue Ψ_{u,k} was last scheduled, and p_k as the scheduling probability of jobs of priority k, with p_1 < p_2 < ... < p_κ.
Said step 2 comprises the following steps:
Step 2-1: when the scheduling server receives a job t_j submitted by a user, look up the job queue Ψ_{u,k} corresponding to t_j according to U(t_j) and K(t_j);
Step 2-2: if Ψ_{u,k} has not yet been created, create it and set w_{u,k} = 0;
Step 2-3: put job t_j into job queue Ψ_{u,k} in first-in first-out order.
Said step 3 comprises the following steps:
Step 3-1: record the current time w;
Step 3-2: compute the scheduling weight f_{u,k} = (w - w_{u,k})·p_k of every non-empty job queue Ψ_{u,k}, and select the job queue Ψ'_{u,k} with the largest f_{u,k};
Step 3-3: take a job t'_j out of Ψ'_{u,k} in first-in first-out order, find a resource r_i in the resource set Φ that satisfies the requirements of t'_j, and dispatch t'_j to run on the server corresponding to r_i;
Step 3-4: update the last-scheduled time of Ψ'_{u,k}, setting the updated time w'_{u,k} equal to w;
Step 3-5: check every empty job queue Ψ_{u,k}; if w_{u,k} < w - w_0, delete Ψ_{u,k}, where w_0 is the maximum scheduling record-keeping time.
Compared with the prior art, the beneficial effects of the invention are:
1. The multi-user multi-queue scheduling method provided by the invention ensures, in a multi-user, multi-priority job scheduling environment, that: (1) if jobs have different priorities, the higher-priority job is scheduled with higher probability; (2) if jobs have the same priority, the probability that different users' jobs are scheduled is independent of how many jobs each user has submitted;
2. The method creates an independent job queue for each priority of each user and, at scheduling time, selects the queue to be scheduled according to the queue's priority and the time it was last scheduled;
3. Compared with the fair share scheduling algorithm (Fair Scheduler) and the computing capacity scheduling algorithm (Capacity Scheduler) widely used in Hadoop today, the method guarantees fairness in any scheduling scenario.
Brief description of the drawings
Fig. 1 is a flow chart of job reception and caching in an embodiment of the present invention;
Fig. 2 is a flow chart of job scheduling and dispatch in an embodiment of the present invention.
Detailed description
The present invention is described in further detail below with reference to the accompanying drawings.
The invention provides a multi-user multi-queue scheduling method comprising the following steps:
Step 1: establish a multi-user multi-queue scheduling model;
In step 1, the multi-user multi-queue scheduling model is as follows:
Define the resource set Φ = {r_i | i = 1, 2, ..., n}, where r_i is the i-th resource in the set and n is the total number of resources; and the job set Ψ = {t_j | j = 1, 2, ..., m}, where t_j is the j-th job in the set and m is the total number of jobs. The function U(t_j) gives the user to whom job t_j belongs, and the function K(t_j) gives the priority of job t_j. The job queue Ψ_{u,k} = {t_j | U(t_j) = u ∧ K(t_j) = k} is the set of all jobs that belong to user u and have priority k, where u ∈ (1, υ), k ∈ (1, κ), υ is the total number of users and κ is the highest priority. Define w_{u,k} as the time at which job queue Ψ_{u,k} was last scheduled, and p_k as the scheduling probability of jobs of priority k, with p_1 < p_2 < ... < p_κ.
Step 2: the scheduling server receives jobs submitted by users and caches them;
As shown in Fig. 1, step 2 comprises the following steps:
Step 2-1: when the scheduling server receives a job t_j submitted by a user, look up the job queue Ψ_{u,k} corresponding to t_j according to U(t_j) and K(t_j);
Step 2-2: if Ψ_{u,k} has not yet been created, create it and set w_{u,k} = 0;
Step 2-3: put job t_j into job queue Ψ_{u,k} in first-in first-out order.
Step 3: the scheduling server periodically and cyclically schedules the jobs and dispatches each job to the server corresponding to its resource.
As shown in Fig. 2, step 3 comprises the following steps:
Step 3-1: record the current time w;
Step 3-2: compute the scheduling weight f_{u,k} = (w - w_{u,k})·p_k of every non-empty job queue Ψ_{u,k}, and select the job queue Ψ'_{u,k} with the largest f_{u,k} (a worked example follows this list of steps);
Step 3-3: take a job t'_j out of Ψ'_{u,k} in first-in first-out order, find a resource r_i in the resource set Φ that satisfies the requirements of t'_j, and dispatch t'_j to run on the server corresponding to r_i;
Step 3-4: update the last-scheduled time of Ψ'_{u,k}, setting the updated time w'_{u,k} equal to w;
Step 3-5: check every empty job queue Ψ_{u,k}; if w_{u,k} < w - w_0, delete Ψ_{u,k}, where w_0 is the maximum scheduling record-keeping time.
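To illustrate step 3-2 with the priority probabilities used in the embodiment below (the elapsed times here are hypothetical), suppose queue Ψ_{1,5} (user 1, priority 5, p_5 = 0.5) was last scheduled 1 second before the current cycle, while queue Ψ_{2,1} (user 2, priority 1, p_1 = 0.03) was last scheduled 20 seconds before it. Then f_{1,5} = 1 × 0.5 = 0.5 and f_{2,1} = 20 × 0.03 = 0.6, so the low-priority queue is selected in this cycle. High-priority queues usually win because their p_k is larger, but a low-priority queue's weight keeps growing while it waits, so it cannot starve; and because every (user, priority) pair has its own queue, a user cannot increase its share of scheduling by submitting more jobs.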
Embodiment
The data-structure pseudo-code used in the embodiment is as follows:
/* data structure of a resource */
struct resource {
    Int rid;                 // resource ID
    /* other attributes */
};
/* data structure of a job */
struct task {
    Int tid;                 // job ID
    String user;             // user to whom the job belongs
    Int k;                   // job priority
    /* other attributes */
};
/* data structure of a job queue */
struct queue {
    String user;             // user to whom the job queue belongs
    Int k;                   // priority of the job queue
    Int last_sch_time;       // time the queue was last scheduled
    Float sch_weight;        // scheduling weight
    /* other attributes */
};
Priorities are set to 1-5, where 1 is the lowest and 5 the highest. Initialize the scheduling probabilities of jobs of different priorities as float pri[5]: pri[1]=0.03, pri[2]=0.07, pri[3]=0.15, pri[4]=0.25, pri[5]=0.5.
Initialize the maximum scheduling record-keeping time sch_max_record_time = 600 seconds.
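For concreteness, the following is a minimal C sketch of how these declarations and initial values might be laid out; the fixed-size queue table, the FIFO buffer inside each queue, the array bounds, and the index-from-1 probability array are illustrative assumptions, not part of the patent.

    #define MAX_QUEUES          1024   /* assumed capacity of the job-queue table */
    #define MAX_TASKS_PER_QUEUE 4096   /* assumed capacity of each FIFO job buffer */

    /* scheduling probability per priority; slot 0 is unused so that pri[k] matches priority k = 1..5 */
    static const float pri[6] = { 0.0f, 0.03f, 0.07f, 0.15f, 0.25f, 0.5f };

    /* maximum scheduling record-keeping time, in seconds */
    static const long sch_max_record_time = 600;

    struct task {
        int  tid;              /* job ID */
        char user[64];         /* user to whom the job belongs */
        int  k;                /* job priority, 1..5 */
    };

    struct queue {
        char        user[64];                     /* user to whom the job queue belongs */
        int         k;                            /* priority of the job queue */
        long        last_sch_time;                /* time the queue was last scheduled */
        float       sch_weight;                   /* scheduling weight */
        struct task tasks[MAX_TASKS_PER_QUEUE];   /* FIFO buffer of cached jobs ("other attributes") */
        int         head, count;                  /* FIFO bookkeeping ("other attributes") */
    };

    /* the set of job queues maintained by the scheduling server */
    static struct queue queue_set[MAX_QUEUES];
    static int          queue_count = 0;

A production scheduler would more likely index the queues by (user, priority) in a hash table; a flat array simply keeps the sketch short.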
The fair multi-user multi-queue scheduling method triggers the "job reception and caching process" whenever a job is received, comprising the following steps:
In step 10, when a job task is received, search the job queue set for the corresponding job queue queue, requiring queue.user == task.user && queue.k == task.k.
In step 20, if no such job queue is found, create it, denote it queue, and initialize queue.user = task.user, queue.k = task.k, queue.last_sch_time = 0.
In step 30, if such a job queue is found, denote it queue.
In step 40, put task at the tail of queue in FIFO order.
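A minimal sketch of steps 10 through 40 in C, building on the illustrative declarations above; the function name receive_task, the linear search, and the omission of capacity and bounds checks are assumptions made only for this sketch.

    #include <string.h>

    /* Steps 10-40: receive a submitted job and cache it in its (user, priority) queue. */
    void receive_task(const struct task *t) {
        struct queue *q = NULL;

        /* Step 10: search the queue set for queue.user == task.user && queue.k == task.k. */
        for (int i = 0; i < queue_count; i++) {
            if (strcmp(queue_set[i].user, t->user) == 0 && queue_set[i].k == t->k) {
                q = &queue_set[i];                  /* step 30: the queue already exists */
                break;
            }
        }

        /* Step 20: no such queue yet, so create and initialize it. */
        if (q == NULL) {
            q = &queue_set[queue_count++];
            strncpy(q->user, t->user, sizeof(q->user) - 1);
            q->user[sizeof(q->user) - 1] = '\0';
            q->k = t->k;
            q->last_sch_time = 0;
            q->head = 0;
            q->count = 0;
        }

        /* Step 40: append the job at the tail of the queue, preserving FIFO order. */
        q->tasks[(q->head + q->count) % MAX_TASKS_PER_QUEUE] = *t;
        q->count++;
    }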
The fair multi-user multi-queue scheduling method calls the "job scheduling and dispatch process" every 0.01 seconds, comprising the following steps:
In step 100, record the current time now.
In step 110, traverse each job queue queue; if queue is empty, skip it; otherwise compute the scheduling weight of queue, i.e. queue.sch_weight = (now - queue.last_sch_time) * pri[queue.k].
In step 120, select the job queue with the largest queue.sch_weight, denoted sch_queue.
In step 130, take the first job out of sch_queue in FIFO order, denoted sch_task, and find a resource sch_resource in the resource set that satisfies the requirements of sch_task.
In step 140, dispatch sch_task to run on sch_resource.
In step 150, update the last-scheduled time of sch_queue: sch_queue.last_sch_time = now.
In step 160, check every job queue queue; if queue is empty and queue.last_sch_time < now - sch_max_record_time, delete queue.
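A minimal sketch of steps 100 through 160 in C, again building on the illustrative declarations above; time() is used as a stand-in clock, and find_matching_resource and dispatch_to_resource are hypothetical helpers standing in for the resource matching and dispatch mechanisms, which the patent does not detail.

    #include <time.h>

    struct resource;
    /* hypothetical helpers: resource matching and dispatch are outside the scope of this sketch */
    struct resource *find_matching_resource(const struct task *t);
    void dispatch_to_resource(const struct task *t, struct resource *r);

    /* Steps 100-160: one scheduling cycle, intended to be called periodically (e.g. every 0.01 s). */
    void schedule_cycle(void) {
        long now = (long)time(NULL);               /* step 100: record the current time */
        struct queue *sch_queue = NULL;

        /* Steps 110-120: weight every non-empty queue and select the one with the largest weight. */
        for (int i = 0; i < queue_count; i++) {
            struct queue *q = &queue_set[i];
            if (q->count == 0)
                continue;                           /* empty queues are skipped */
            q->sch_weight = (now - q->last_sch_time) * pri[q->k];
            if (sch_queue == NULL || q->sch_weight > sch_queue->sch_weight)
                sch_queue = q;
        }

        if (sch_queue != NULL) {
            /* Step 130: take the first job in FIFO order and find a matching resource. */
            struct task sch_task = sch_queue->tasks[sch_queue->head];
            sch_queue->head = (sch_queue->head + 1) % MAX_TASKS_PER_QUEUE;
            sch_queue->count--;
            struct resource *sch_resource = find_matching_resource(&sch_task);

            /* Step 140: dispatch the job to run on the server corresponding to the resource. */
            dispatch_to_resource(&sch_task, sch_resource);

            /* Step 150: update the last-scheduled time of the selected queue. */
            sch_queue->last_sch_time = now;
        }

        /* Step 160: delete empty queues that have not been scheduled within the record-keeping time. */
        for (int i = 0; i < queue_count; i++) {
            if (queue_set[i].count == 0 &&
                queue_set[i].last_sch_time < now - sch_max_record_time) {
                queue_set[i] = queue_set[--queue_count];   /* remove by swapping in the last entry */
                i--;                                        /* re-examine the swapped-in entry */
            }
        }
    }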
Finally, it should be noted that the above embodiment is intended only to illustrate, not to limit, the technical solution of the present invention; those of ordinary skill in the art may still modify the specific embodiments of the present invention or replace them with equivalents with reference to the above embodiment, and any such modification or equivalent replacement that does not depart from the spirit and scope of the invention falls within the protection scope of the pending claims of the present application.

Claims (4)

1. A multi-user multi-queue scheduling method, characterized in that the method comprises the following steps:
Step 1: establish a multi-user multi-queue scheduling model;
Step 2: a scheduling server receives jobs submitted by users and caches them;
Step 3: the scheduling server periodically and cyclically schedules the jobs and dispatches each job to the server corresponding to its resource.
2. The multi-user multi-queue scheduling method according to claim 1, characterized in that in step 1 the multi-user multi-queue scheduling model is as follows:
Define the resource set Φ = {r_i | i = 1, 2, ..., n}, where r_i is the i-th resource in the set and n is the total number of resources; and the job set Ψ = {t_j | j = 1, 2, ..., m}, where t_j is the j-th job in the set and m is the total number of jobs. The function U(t_j) gives the user to whom job t_j belongs, and the function K(t_j) gives the priority of job t_j. The job queue Ψ_{u,k} = {t_j | U(t_j) = u ∧ K(t_j) = k} is the set of all jobs that belong to user u and have priority k, where u ∈ (1, υ), k ∈ (1, κ), υ is the total number of users and κ is the highest priority. Define w_{u,k} as the time at which job queue Ψ_{u,k} was last scheduled, and p_k as the scheduling probability of jobs of priority k, with p_1 < p_2 < ... < p_κ.
3. The multi-user multi-queue scheduling method according to claim 2, characterized in that step 2 comprises the following steps:
Step 2-1: when the scheduling server receives a job t_j submitted by a user, look up the job queue Ψ_{u,k} corresponding to t_j according to U(t_j) and K(t_j);
Step 2-2: if Ψ_{u,k} has not yet been created, create it and set w_{u,k} = 0;
Step 2-3: put job t_j into job queue Ψ_{u,k} in first-in first-out order.
4. The multi-user multi-queue scheduling method according to claim 3, characterized in that step 3 comprises the following steps:
Step 3-1: record the current time w;
Step 3-2: compute the scheduling weight f_{u,k} = (w - w_{u,k})·p_k of every non-empty job queue Ψ_{u,k}, and select the job queue Ψ'_{u,k} with the largest f_{u,k};
Step 3-3: take a job t'_j out of Ψ'_{u,k} in first-in first-out order, find a resource r_i in the resource set Φ that satisfies the requirements of t'_j, and dispatch t'_j to run on the server corresponding to r_i;
Step 3-4: update the last-scheduled time of Ψ'_{u,k}, setting the updated time w'_{u,k} equal to w;
Step 3-5: check every empty job queue Ψ_{u,k}; if w_{u,k} < w - w_0, delete Ψ_{u,k}, where w_0 is the maximum scheduling record-keeping time.
Priority Application (1)

Application Number Priority Date Filing Date Title
CN201410302274.6A (CN104079502B) 2014-06-27 2014-06-27 Multi-user multi-queue scheduling method (Active)

Publications (2)

Publication Number Publication Date
CN104079502A 2014-10-01
CN104079502B 2017-05-10


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324525A * 2013-07-03 2013-09-25 东南大学 (Southeast University) Task scheduling method in cloud computing environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
朱春鸽 et al.: "Application-oriented, trust-based resource matching model in a virtual computing environment", Journal on Communications (《通信学报》) *
沈时军 et al.: "Service availability guarantee mechanisms in cloud computing", Journal on Communications (《通信学报》) *
王凯: "Research and implementation of multi-user job scheduling methods for MapReduce clusters", master's thesis, National University of Defense Technology (《国防科技大学硕士学位论文》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573829A (en) * 2016-02-02 2016-05-11 沈文策 Method for fast processing high-traffic-flow data in system
CN106326003A (en) * 2016-08-11 2017-01-11 中国科学院重庆绿色智能技术研究院 Operation scheduling and computing resource allocation method
US10976956B2 (en) 2016-09-30 2021-04-13 Huawei Technologies Co., Ltd. Non-volatile memory persistence method and computing device
CN106528295A (en) * 2016-11-07 2017-03-22 广州华多网络科技有限公司 System task scheduling method and device
WO2020134425A1 (en) * 2018-12-24 2020-07-02 深圳市中兴微电子技术有限公司 Data processing method, apparatus, and device, and storage medium
CN110175073A (en) * 2019-05-31 2019-08-27 杭州数梦工场科技有限公司 Dispatching method, sending method, device and the relevant device of data exchange operation
CN110175073B (en) * 2019-05-31 2022-05-31 杭州数梦工场科技有限公司 Scheduling method, sending method, device and related equipment of data exchange job
CN113132265A (en) * 2021-04-16 2021-07-16 武汉光迅信息技术有限公司 Multi-stage scheduling method and device for multi-path Ethernet
CN113132265B (en) * 2021-04-16 2022-05-10 武汉光迅信息技术有限公司 Multi-stage scheduling method and device for multi-path Ethernet

Also Published As

Publication number Publication date
CN104079502B (en) 2017-05-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant