CN109491775B - Task processing and scheduling method used in edge computing environment - Google Patents
- Publication number
- CN109491775B (application CN201811308511.4A)
- Authority
- CN
- China
- Prior art keywords
- task
- resource
- level
- execution
- idle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
Abstract
The invention provides a task processing and scheduling method for an edge computing environment, which defines a task sequence and divides task execution into calculation levels. If multiple tasks request resources at the same time, the method judges the task type of each task and the currently idle resources of the system; if a task matching the idle state is found, the relatively idle resources of the current system are used to satisfy the maximum execution calculation level of that task. If no task can be matched, or only a single task requests resources, a task is selected (at random, when several unmatched tasks compete) and executed with a greedy strategy. Finally, the method judges whether the task was scheduled successfully and, if so, removes it. Through the division of task execution calculation levels, the system can adaptively select the optimal execution level of a task under the currently available system resources, effectively reducing task response time, improving user application satisfaction, and at the same time effectively raising system resource utilization.
Description
Technical Field
The invention relates to the field of edge computing applications, and in particular to a task processing and scheduling method for use in an edge computing environment.
Background
With the development of networks and mobile devices, the number of Internet-of-Things devices keeps increasing, and emerging applications such as VR and AR pose a significant challenge to the traditional cloud computing model. Edge computing is a feasible solution, but it requires edge servers to be deployed over a wide area, which in practice means individual servers cannot be very powerful and system resources are limited. As a result, traditional task scheduling approaches fall short.
Traditional task scheduling comes in two main forms: the first-come-first-served (FCFS) scheduling algorithm and the shortest-job-first (SJF) scheduling algorithm. FCFS favors long jobs and CPU-bound jobs but penalizes short jobs and I/O-bound jobs; SJF is very unfavorable to long jobs. Moreover, in the Internet-of-Things era neither method can efficiently process a large volume of user task requests, especially computation-intensive ones, which leads to relatively high task latency and reduced user application satisfaction.
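The trade-off between the two baselines can be seen in a small simulation. The job lengths below are hypothetical burst times in abstract units; the helper simply accumulates the waiting time of jobs run back to back:

```python
# Minimal illustration of the FCFS vs SJF trade-off described above.
# The job lengths are hypothetical burst times in abstract units.

def avg_waiting_time(burst_times):
    """Average time each job waits when jobs run back to back in order."""
    waited, elapsed = 0, 0
    for burst in burst_times:
        waited += elapsed      # this job waited for everything before it
        elapsed += burst
    return waited / len(burst_times)

jobs = [10, 1, 2, 1]   # arrival order: one long job ahead of three short ones

fcfs = avg_waiting_time(jobs)            # first come, first served
sjf = avg_waiting_time(sorted(jobs))     # shortest job first

print(f"FCFS average wait: {fcfs}")      # short jobs stall behind the long one
print(f"SJF average wait: {sjf}")        # the long job is pushed to the back
```

With this arrival order FCFS makes every short job wait behind the long one, while SJF defers the long job; neither policy adapts to what resources are actually free, which is the gap the invention targets.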
Disclosure of Invention
The invention provides a task processing and scheduling method for an edge computing environment, aiming to solve the technical problem that, in the prior art, a large number of user task requests, especially computation-intensive tasks, cannot be processed efficiently, which causes high task latency and reduces user application satisfaction.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a task processing and scheduling method for an edge computing environment comprises the following steps:
s1: defining a task sequence, and dividing task execution into calculation levels;
s2: judging whether the task queue is empty; if not, taking the earliest-arrived task from the task queue and executing step S3; if yes, repeating step S2;
s3: judging whether multiple tasks arrive at the same time; if yes, executing step S4; if not, executing the arrived task at its maximum execution calculation level using a greedy strategy, and then executing step S5;
s4: judging the task type of each task and the relatively idle resources of the current system; if a task matching the system's idle state is found, using the relatively idle resources of the current system to satisfy the maximum execution calculation level of that task; if no task can be matched, randomly selecting one task and executing it with a greedy strategy;
s5: judging whether the task was scheduled successfully; if yes, removing the successfully scheduled task from the task queue; if not, waiting for resources to be released.
Step S1 specifically includes the following steps:
s11: defining a task sequence, specifically as follows:
Job={J1,J2,...,Jn};
Jn={IDn,Cn,Mn,Dn,Ln};
Ln={level1,level2,...,levelm};
wherein Job is the task sequence; IDn is the unique identifier of task n; Cn is the number of CPU cycles task n needs to execute; Mn is the size of the memory space required to run task n; Dn is the size of the task's data set; Ln is the set of execution levels selectable for task n, namely level1, level2, ..., levelm, where each level specifies the proportion of CPU, the proportion of the data set, and the accuracy proportion of the task required at that level, with a maximum of 100%;
s12: according to the relation between the execution accuracy of the task and the task resource requirement, a relation model is built, and the task execution calculation level is divided, specifically:
three relation models are determined as follows: a linear incremental model, a deceleration incremental model and an acceleration-before-deceleration incremental model;
defining an accuracy loss function LF, specifically:

LF = (Accmax - Acccur) / (Resmax - Rescur)

wherein Accmax is the highest accuracy of the task, Acccur is the accuracy of the task at the selected execution level, and Resmax and Rescur are the resource requirements of the task at the highest accuracy and at the selected execution level, respectively; the loss function LF is related to the slope of the relation-model function;
according to the task sequence, a relational model is selected such that the loss function LF is minimized.
The task types described in step S4 are specifically divided into CPU-intensive tasks and I/O-intensive tasks.
The method for judging the relatively idle resources of the current system in step S4 is to calculate the idle ratio of each resource; if the idle ratio of the CPU is higher than that of the I/O devices, the CPU is the relatively idle resource of the current system. The CPU idle ratio (CPU idle rate) is calculated as:

CPU idle rate = (total CPU resources - occupied CPU resources) / total CPU resources
In step S4, the task type is judged by calculating the dominant resource of the task: the ratio of each resource requirement of the task to the corresponding total system resource is computed, and the resource with the largest ratio is the dominant resource of the task. The CPU occupancy ratio (CPU rate) is calculated as:

CPU rate = CPU resources required by the task / total CPU resources of the system
in the above scheme, the task execution calculation levels are divided to respectively correspond to different resource requirements, and the higher the task execution calculation level is, the more system resources are required, and the higher the accuracy of the final completion of the task is.
In the above scheme, in an edge computing environment the execution of a specific task is divided into several selectable calculation levels, each corresponding to different resource requirements and task accuracy; which of the three relation models applies is assigned according to actual application requirements.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the task processing and scheduling method used in the edge computing environment, the system can adaptively select the optimal execution level of the task under the current available system resource through the division of the task execution computing level, so that the corresponding time of the task is effectively reduced, the application satisfaction of a user is improved, and the utilization rate of the system resource is effectively improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a linear incremental model;
FIG. 3 is a schematic diagram of a deceleration incremental model;
FIG. 4 is a diagram of an acceleration-followed-deceleration incremental model.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a task processing and scheduling method for use in an edge computing environment includes the following steps:
s1: defining a task sequence, and dividing task execution into calculation levels;
s2: judging whether the task queue is empty; if not, taking the earliest-arrived task from the task queue and executing step S3; if yes, repeating step S2;
s3: judging whether multiple tasks arrive at the same time; if yes, executing step S4; if not, executing the arrived task at its maximum execution calculation level using a greedy strategy, and then executing step S5;
s4: judging the task type of each task and the relatively idle resources of the current system; if a task matching the system's idle state is found, using the relatively idle resources of the current system to satisfy the maximum execution calculation level of that task; if no task can be matched, randomly selecting one task and executing it with a greedy strategy;
s5: judging whether the task was scheduled successfully; if yes, removing the successfully scheduled task from the task queue; if not, waiting for resources to be released.
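The flow of steps S2 to S5 can be sketched in runnable form. Everything here (the System class, the resource capacities, the task fields, the sample batch) is an illustrative stand-in, not the patented implementation:

```python
import random

class System:
    """Toy two-resource model; the capacities are hypothetical."""
    def __init__(self, cpu=100, io=100):
        self.total = {"cpu": cpu, "io": io}
        self.used = {"cpu": 0, "io": 0}

    def idle_resource(self):
        # Step S4: the resource with the larger idle ratio is "relatively idle".
        ratios = {r: (self.total[r] - self.used[r]) / self.total[r]
                  for r in self.total}
        return max(ratios, key=ratios.get)

    def try_alloc(self, demand):
        # Allocate only if every resource demand fits the remaining capacity.
        if all(self.used[r] + demand[r] <= self.total[r] for r in demand):
            for r in demand:
                self.used[r] += demand[r]
            return True
        return False

def greedy_run(task, system):
    """Greedy strategy of steps S3/S4: try the highest execution calculation
    level first and fall back to lower levels until one fits."""
    for frac in sorted(task["levels"], reverse=True):
        demand = {r: task["demand"][r] * frac for r in task["demand"]}
        if system.try_alloc(demand):
            return frac              # maximum level the system can satisfy
    return None                      # step S5: wait for resource release

def dispatch(batch, system):
    """Step S4: prefer a task whose dominant resource matches the idle one;
    otherwise pick one of the unmatched tasks at random."""
    idle = system.idle_resource()
    matches = [t for t in batch if t["dominant"] == idle]
    task = matches[0] if matches else random.choice(batch)
    return task, greedy_run(task, system)

system = System()
batch = [
    {"id": "J1", "demand": {"cpu": 80, "io": 10}, "dominant": "cpu",
     "levels": [0.5, 1.0]},
    {"id": "J2", "demand": {"cpu": 10, "io": 80}, "dominant": "io",
     "levels": [0.5, 1.0]},
]
task, level = dispatch(batch, system)
print(task["id"], level)   # the CPU-dominant task runs at its top level
```

The key behavior mirrors the method: when a task matches the relatively idle resource it is served first, and the greedy fallback over levels is what lets a task still run (at reduced accuracy) when resources are tight instead of waiting.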
More specifically, step S1 includes the following steps:
s11: defining a task sequence, specifically as follows:
Job={J1,J2,...,Jn};
Jn={IDn,Cn,Mn,Dn,Ln};
Ln={level1,level2,...,levelm};
wherein Job is the task sequence; IDn is the unique identifier of task n; Cn is the number of CPU cycles task n needs to execute; Mn is the size of the memory space required to run task n; Dn is the size of the task's data set; Ln is the set of execution levels selectable for task n, namely level1, level2, ..., levelm, where each level specifies the proportion of CPU, the proportion of the data set, and the accuracy proportion of the task required at that level, with a maximum of 100%;
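The task sequence of step S11 maps naturally onto a record type. The field names follow the definitions above; the concrete types and the sample values are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExecutionLevel:
    cpu_ratio: float       # proportion of CPU required at this level (max 100%)
    dataset_ratio: float   # proportion of the data set used (max 100%)
    accuracy: float        # accuracy proportion achieved by the task (max 100%)

@dataclass
class Task:
    task_id: str                      # ID_n: unique identifier of the task
    cpu_cycles: int                   # C_n: CPU cycles the task needs
    memory_mb: int                    # M_n: memory space required to run it
    dataset_size: int                 # D_n: size of the task's data set
    levels: List[ExecutionLevel] = field(default_factory=list)  # L_n

# A one-element task sequence Job = {J1} with two selectable levels.
job = [
    Task("J1", cpu_cycles=10**9, memory_mb=256, dataset_size=500,
         levels=[ExecutionLevel(0.5, 0.5, 0.8),
                 ExecutionLevel(1.0, 1.0, 1.0)]),
]
print(job[0].task_id, len(job[0].levels))   # J1 2
```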
s12: according to the relation between the execution accuracy of the task and the task resource requirement, a relation model is built, and the task execution calculation level is divided, specifically:
determining three relation models, as shown in fig. 2, fig. 3 and fig. 4, respectively: a linear incremental model, a deceleration incremental model and an acceleration-before-deceleration incremental model;
defining an accuracy loss function LF, specifically:

LF = (Accmax - Acccur) / (Resmax - Rescur)

wherein Accmax is the highest accuracy of the task, Acccur is the accuracy of the task at the selected execution level, and Resmax and Rescur are the resource requirements of the task at the highest accuracy and at the selected execution level, respectively; the loss function LF is related to the slope of the relation-model function;
according to the task sequence, a relational model is selected such that the loss function LF is minimized.
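One way to realize the selection in S12 is to represent each relation model as an accuracy-versus-resource curve and pick the model with the smallest summed loss over the candidate levels. The three curve functions below are illustrative shapes matching Figs. 2 to 4, not formulas from the source, and the loss implements LF as accuracy lost per unit of resource saved (an assumption consistent with LF being related to the model's slope):

```python
import math

# Three relation models: accuracy as a function of resource fraction r in (0, 1].
# The exact shapes are illustrative assumptions matching Figs. 2-4.
MODELS = {
    "linear":           lambda r: r,                              # Fig. 2
    "decelerating":     lambda r: math.sqrt(r),                   # Fig. 3
    "accel_then_decel": lambda r: 0.5 * (1 - math.cos(math.pi * r)),  # Fig. 4
}

def loss(model, r):
    """Assumed LF: accuracy lost per unit of resource saved vs full execution."""
    acc_max, res_max = model(1.0), 1.0
    if r >= 1.0:
        return 0.0
    return (acc_max - model(r)) / (res_max - r)

def best_model(levels):
    """Pick the relation model whose summed loss over the levels is smallest."""
    return min(MODELS, key=lambda name: sum(loss(MODELS[name], r) for r in levels))

levels = [0.25, 0.5, 0.75]
print(best_model(levels))   # decelerating
```

Under these shapes the decelerating model wins, which matches the intuition that a task gaining most of its accuracy early loses little by running at a reduced level.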
The task types described in step S4 are specifically divided into CPU-intensive tasks and I/O-intensive tasks.
More specifically, the relatively idle resources of the current system are determined in step S4 by calculating the idle ratio of each resource; if the idle ratio of the CPU is higher than that of the I/O devices, the CPU is the relatively idle resource of the current system. The CPU idle ratio (CPU idle rate) is calculated as:

CPU idle rate = (total CPU resources - occupied CPU resources) / total CPU resources
More specifically, the task type in step S4 is determined by calculating the dominant resource of the task: the ratio of each resource requirement of the task to the corresponding total system resource is computed, and the resource with the largest ratio is the dominant resource. The CPU occupancy ratio (CPU rate) is calculated as:

CPU rate = CPU resources required by the task / total CPU resources of the system
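The dominant-resource computation can be written directly; the demand and capacity dictionaries below are hypothetical:

```python
def dominant_resource(task_demands, system_totals):
    """Classify a task by its dominant resource (step S4): the resource whose
    demand-to-total ratio is largest."""
    rates = {res: task_demands[res] / system_totals[res] for res in task_demands}
    return max(rates, key=rates.get)

task = {"cpu": 40, "io": 5}          # heavy on CPU, light on I/O
system = {"cpu": 100, "io": 100}
kind = dominant_resource(task, system)
print(f"{kind}-intensive task")      # cpu-intensive task
```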
in the specific implementation process, the task execution calculation levels are divided to respectively correspond to different resource requirements, the higher the task execution calculation level is, the more system resources are required, and the higher the accuracy of the final completion of the task is.
In the specific implementation process, in an edge computing environment the execution of a specific task is divided into several selectable calculation levels, each corresponding to different resource requirements and task accuracy; which of the three relation models applies is assigned according to actual application requirements.
In the specific implementation process, through the division of task execution calculation levels, the system can adaptively select the optimal execution level of a task under the currently available system resources, effectively reducing task response time, improving user application satisfaction, and at the same time effectively raising system resource utilization.
It should be understood that the embodiments described above are merely examples for clearly illustrating the invention and do not limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within the protection scope of the claims.
Claims (4)
1. A task processing and scheduling method for use in an edge computing environment, comprising the steps of:
s1: defining a task sequence, and dividing task execution into calculation levels;
s2: judging whether the task queue is empty; if not, taking the earliest-arrived task from the task queue and executing step S3; if yes, repeating step S2;
s3: judging whether multiple tasks arrive at the same time; if yes, executing step S4; if not, executing the arrived task at its maximum execution calculation level using a greedy strategy, and then executing step S5;
s4: judging the task type of each task and the relatively idle resources of the current system; if a task matching the system's idle state is found, using the relatively idle resources of the current system to satisfy the maximum execution calculation level of that task; if no task can be matched, randomly selecting one task and executing it with a greedy strategy;
s5: judging whether the task was scheduled successfully; if yes, removing the successfully scheduled task from the task queue; if not, waiting for resources to be released;
in step S1, the method specifically includes the following steps:
s11: defining a task sequence, specifically as follows:

Job = {J1, J2, ..., Jn};
Jn = {IDn, Cn, Mn, Dn, Ln};
Ln = {level1, level2, ..., levelm};

wherein Job is the task sequence; IDn is the unique identifier of task n; Cn is the number of CPU cycles task n needs to execute; Mn is the size of the memory space required to run task n; Dn is the size of the task's data set; Ln is the set of execution levels selectable for task n, namely level1, level2, ..., levelm, where each level specifies the proportion of CPU, the proportion of the data set, and the accuracy proportion of the task required at that level, with a maximum of 100%;
s12: according to the relation between the execution accuracy of the task and the task resource requirement, a relation model is built, and the task execution calculation level is divided, specifically:
three relation models are determined as follows: a linear incremental model, a deceleration incremental model and an acceleration-before-deceleration incremental model;
defining an accuracy loss function LF, specifically:

LF = (Accmax - Acccur) / (Resmax - Rescur)

wherein Accmax is the highest accuracy of the task, Acccur is the accuracy of the task at the selected execution level, and Resmax and Rescur are the resource requirements of the task at the highest accuracy and at the selected execution level, respectively; the loss function LF is related to the slope of the relation-model function;
according to the task sequence, a relational model is selected such that the loss function LF is minimized.
2. The method of claim 1, wherein the task types in step S4 are divided into CPU-intensive tasks and I/O-intensive tasks.
3. The method of claim 1, wherein in step S4 the relatively idle resources of the current system are judged by calculating the idle ratio of each resource; if the idle ratio of the CPU is higher than that of the I/O devices, the CPU is the relatively idle resource of the current system, the CPU idle rate being calculated as:

CPU idle rate = (total CPU resources - occupied CPU resources) / total CPU resources
4. The method of claim 1, wherein in step S4 the task type is judged by calculating the dominant resource of the task: the ratio of each resource requirement of the task to the corresponding total system resource is computed, and the resource with the largest ratio is the dominant resource of the task; the CPU occupancy ratio is calculated as:

CPU rate = CPU resources required by the task / total CPU resources of the system
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811308511.4A CN109491775B (en) | 2018-11-05 | 2018-11-05 | Task processing and scheduling method used in edge computing environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109491775A CN109491775A (en) | 2019-03-19 |
CN109491775B true CN109491775B (en) | 2021-09-21 |
Family
ID=65693794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811308511.4A Active CN109491775B (en) | 2018-11-05 | 2018-11-05 | Task processing and scheduling method used in edge computing environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109491775B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110716806B (en) * | 2019-09-27 | 2023-05-12 | 深圳市网心科技有限公司 | Edge node computing capability determining method, electronic equipment, system and medium |
CN112433852B (en) * | 2020-11-23 | 2021-09-03 | 广州技象科技有限公司 | Internet of things edge calculation control method, device, equipment and storage medium |
CN113760553B (en) * | 2021-09-09 | 2024-04-26 | 中山大学 | Mixed part cluster task scheduling method based on Monte Carlo tree search |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718479A (en) * | 2014-12-04 | 2016-06-29 | 中国电信股份有限公司 | Execution strategy generation method and device under cross-IDC (Internet Data Center) big data processing architecture |
CN106126317A (en) * | 2016-06-24 | 2016-11-16 | 安徽师范大学 | It is applied to the dispatching method of virtual machine of cloud computing environment |
CN108319502A (en) * | 2018-02-06 | 2018-07-24 | 广东工业大学 | A kind of method and device of the D2D tasks distribution based on mobile edge calculations |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10007513B2 (en) * | 2015-08-27 | 2018-06-26 | FogHorn Systems, Inc. | Edge intelligence platform, and internet of things sensor streams system |
- 2018
- 2018-11-05: application CN201811308511.4A filed (CN); patent CN109491775B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718479A (en) * | 2014-12-04 | 2016-06-29 | 中国电信股份有限公司 | Execution strategy generation method and device under cross-IDC (Internet Data Center) big data processing architecture |
CN106126317A (en) * | 2016-06-24 | 2016-11-16 | 安徽师范大学 | It is applied to the dispatching method of virtual machine of cloud computing environment |
CN108319502A (en) * | 2018-02-06 | 2018-07-24 | 广东工业大学 | A kind of method and device of the D2D tasks distribution based on mobile edge calculations |
Non-Patent Citations (1)
Title |
---|
"Research on Task Migration Algorithms and Protocols Based on Mobility Models in Mobile Edge Computing"; Wang Zi; China Masters' Theses Full-text Database, Information Science and Technology; 2018-10-15; pp. I136-409 *
Also Published As
Publication number | Publication date |
---|---|
CN109491775A (en) | 2019-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10558498B2 (en) | Method for scheduling data flow task and apparatus | |
CN108762896B (en) | Hadoop cluster-based task scheduling method and computer equipment | |
CN107045456B (en) | Resource allocation method and resource manager | |
CN109491775B (en) | Task processing and scheduling method used in edge computing environment | |
WO2016106516A1 (en) | Method and device for scheduling user request in distributed resource system | |
CN109697122B (en) | Task processing method, device and computer storage medium | |
CN109564528B (en) | System and method for computing resource allocation in distributed computing | |
CN104765640B (en) | A kind of intelligent Service dispatching method | |
CN109992403B (en) | Optimization method and device for multi-tenant resource scheduling, terminal equipment and storage medium | |
CN108123980B (en) | Resource scheduling method and system | |
CN109086135B (en) | Resource scaling method and device, computer equipment and storage medium | |
CN112363821A (en) | Computing resource scheduling method and device and computer equipment | |
KR20110080735A (en) | Computing system and method | |
CN105592110B (en) | Resource scheduling method and device | |
WO2015144008A1 (en) | Method and device for allocating physical machine to virtual machine | |
US20130219395A1 (en) | Batch scheduler management of tasks | |
WO2018126771A1 (en) | Storage controller and io request processing method | |
WO2014108000A1 (en) | Task allocation method and system | |
CN112214319A (en) | Task scheduling method for sensing computing resources | |
Choi et al. | An enhanced data-locality-aware task scheduling algorithm for hadoop applications | |
Komarasamy et al. | A novel approach for Dynamic Load Balancing with effective Bin Packing and VM Reconfiguration in cloud | |
CN109062683B (en) | Method, apparatus and computer readable storage medium for host resource allocation | |
Wu et al. | Abp scheduler: Speeding up service spread in docker swarm | |
CN111143210A (en) | Test task scheduling method and system | |
CN104731662B (en) | A kind of resource allocation methods of variable concurrent job |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||