CN111427679A - Computing task scheduling method, system and device facing edge computing - Google Patents

Computing task scheduling method, system and device facing edge computing

Info

Publication number
CN111427679A
CN111427679A
Authority
CN
China
Prior art keywords
task
edge
computing
cloud
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010220415.5A
Other languages
Chinese (zh)
Inventor
李奇杰
陈世超
朱凤华
熊刚
张利国
商秀芹
刘旭东
王飞跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN202010220415.5A
Publication of CN111427679A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The invention belongs to the technical field of edge computing, and particularly relates to a computing task scheduling method, system and device for edge computing, aiming at further improving the real-time performance of computing task processing on the edge side and making maximal use of edge-side resources. The method comprises the following steps: each edge node acquires the computing tasks to be executed, selects the tasks meeting the execution requirements for processing, and offloads the tasks not meeting the execution requirements to an edge cloud as first tasks; the edge cloud sorts the first tasks by priority, selects and processes those meeting the execution requirements, and takes those not meeting the execution requirements as second tasks; edge nodes meeting the execution requirements of the second tasks are selected and the second tasks are scheduled to them for processing, while second tasks whose requirements cannot be met are taken as third tasks; each third task is segmented, edge nodes and/or edge clouds and/or center clouds meeting the execution requirements after segmentation are selected, and the task is scheduled and processed; otherwise the processing fails. Through cloud-edge-end cooperative processing, the invention makes reasonable use of edge-side resources and improves the real-time performance of computing task scheduling.

Description

Computing task scheduling method, system and device facing edge computing
Technical Field
The invention belongs to the technical field of edge computing, and particularly relates to a computing task scheduling method, system and device for edge computing.
Background
With the arrival of the Internet of Things era, data volume is growing rapidly, and the diverse application scenarios of the Internet of Things place higher demands on the real-time performance and security of data processing. Although a traditional cloud computing platform has abundant computing and storage resources and provides an efficient platform for data processing, it is far from the user: high network latency makes real-time data processing difficult to achieve, and data transmission increases bandwidth pressure. Moreover, when a user submits a service request to the cloud, the user's private data is uploaded to the cloud, increasing the risk of data leakage. Finally, the energy consumed to process the rapidly growing volume of data also rises sharply. These are the challenges facing data processing in the Internet of Things.
Edge computing and cloud computing can serve as complementary computing paradigms for meeting the application requirements of Internet of Things scenarios. On the one hand, edge computing is a computing mode that serves users through a unified resource platform formed from computing, network, storage and other resources on the network edge close to the data source, so edge devices are close to terminal devices. On the other hand, user-private data is transmitted only to the edge device rather than uploaded to the cloud for processing, which effectively reduces the risk of privacy disclosure. However, edge computing still exhibits certain disadvantages: edge-side resources are limited, and in scenarios with many concurrent computing tasks the processing effect of existing edge-side methods cannot meet user requirements.
Because edge-side computing resources are limited, the tasks requested at edge nodes have real-time requirements, and the tasks compete for resources, how to arrange and schedule the tasks transmitted to the edge side, so that the edge side uses its limited computing resources reasonably, reduces resource waste, processes more tasks with lower delay, and raises resource utilization, is an important problem in the field of edge computing. This patent provides a cloud-edge-end collaborative scheduling method for computing tasks, which fully considers the user level, the latest task completion time and the task waiting time when computing task priority, guarantees real-time processing of high-priority computing tasks, and makes full use of the computing resources of the cloud computing center, the edge servers and the terminals (cloud-edge-end) so that computing tasks are executed efficiently and in real time.
Disclosure of Invention
In order to solve the above-mentioned problems in the prior art, that is, to further improve the real-time performance of the edge-side computing task processing and to maximally utilize resources such as computing, storage, and network on the edge side, a first aspect of the present invention provides a computing task scheduling method for edge computing, where the method includes:
step S100, each edge node acquires a computing task to be executed, selects the computing task meeting the execution requirement through a preset first selection rule for processing, and unloads the computing task not meeting the execution requirement to an edge cloud as a first task;
step S200, the edge cloud obtains the priority of each first task through a preset priority computing method, sorts the priority, selects first tasks meeting execution requirements through a preset first selection rule after sorting, and takes the first tasks not meeting the execution requirements as second tasks;
step S300, the edge cloud selects edge nodes meeting the execution requirements of each second task through the preset first selection rule, schedules each second task to the corresponding edge node for processing, and takes the second task not meeting the execution requirements as a third task;
Step S400, the edge cloud divides each third task, selects, according to the division condition and through the preset first selection rule, a center cloud and/or edge node meeting the execution requirement of the third task, and schedules the third task to the corresponding center cloud and/or edge node for processing; otherwise it returns that processing of the computing task has failed.
In some preferred embodiments, the preset first selection rule is that the predicted completion time of the computing task at the current edge node, edge cloud or center cloud is less than the latest completion time set for the task, and the current edge node or edge cloud is not in an overloaded state.
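As a hedged illustration of this first selection rule, the following minimal sketch combines the deadline check with an overload check; the 60% CPU-utilization threshold comes from the embodiment described later, and the function name and parameters are assumptions for illustration only:

```python
def meets_execution_requirement(predicted_completion_time: float,
                                latest_completion_time: float,
                                cpu_utilization: float,
                                overload_threshold: float = 0.60) -> bool:
    """Preset first selection rule (sketch): a computing task may be
    processed where it is only if its predicted completion time beats
    the latest completion time set for the task AND the node or cloud
    is not overloaded (threshold assumed from the 60% CPU figure)."""
    return (predicted_completion_time < latest_completion_time
            and cpu_utilization <= overload_threshold)
```

A task failing this predicate at an edge node is what the method offloads to the edge cloud as a first task.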
In some preferred embodiments, the preset priority calculation method is as follows:
T_prio = w1*U_nj + w2*T_w - w3*T_l
wherein T_prio is the value corresponding to the priority of the first task, U_nj is the user level of the edge node offloading the first task, T_w is the time the first task has already waited, T_l is the latest completion time set for the first task, and w1, w2 and w3 are preset weight values.
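The priority formula can be illustrated with a minimal sketch (the function name and the default weights are assumptions; the patent leaves w1, w2 and w3 as tunable preset values):

```python
def task_priority(user_level: float, waited_time: float,
                  latest_completion_time: float,
                  w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> float:
    """T_prio = w1*U_nj + w2*T_w - w3*T_l: a higher user level and a
    longer time already waited raise the priority, while a later
    (larger) latest completion time lowers it."""
    return w1 * user_level + w2 * waited_time - w3 * latest_completion_time
```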
In some preferred embodiments, the predicted completion time of each computing task in step S100 is the sum of the computing execution time of the computing task at the current edge node and the waiting processing time of the computing task at the edge node;
the estimated completion time of the first task is the sum of the computing execution time of the first task in the edge cloud, the network communication time of the first task unloaded to the edge cloud and the waiting processing time of the first task in the edge cloud;
the predicted completion time of the second task is the sum of the computing execution time of the second task at the edge node, the network communication time of the second task for unloading to the edge cloud, the network communication time of the second task for scheduling to the edge node and the waiting processing time of the second task at the edge node;
the predicted completion time of the third task is the sum of the computing execution time of the third task on the edge node and/or the edge cloud and/or the center cloud, the network communication time of the third task unloaded to the edge cloud, the network communication time of the third task scheduled to the edge node and/or the center cloud, and the waiting processing time of the third task on the edge node and/or the edge cloud and/or the center cloud.
In some preferred embodiments, the step in S300 of "scheduling each second task to a corresponding edge node for processing" includes:
if exactly one edge node meets the execution requirement of a second task, the second task is scheduled directly to that edge node for processing;
if at least two edge nodes meet the execution requirement of a second task, the second task is scheduled to the edge node with the shortest predicted completion time for processing. In either case, the type of the edge node meeting the execution requirement of a second task is the same as the type of that second task.
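The two cases above collapse into taking the candidate with the minimum predicted completion time (with one candidate, the minimum is that candidate). A sketch, with assumed names:

```python
def choose_edge_node(candidates: dict[str, float]) -> str:
    """candidates maps the id of each edge node of the matching type that
    meets the second task's execution requirement to the task's predicted
    completion time there; the shortest predicted completion time wins."""
    return min(candidates, key=candidates.get)
```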
In some preferred embodiments, the step in S400 of "selecting, according to the segmentation condition and through the preset first selection rule, a center cloud and/or edge node meeting the execution requirement of the third task, and scheduling the third task to the corresponding center cloud and/or edge node for processing" includes:
if the third task is not divisible, the edge cloud evaluates the predicted completion time of scheduling the third task to the center cloud; if this time is less than the task's set latest completion time, the task is scheduled to the center cloud for processing, otherwise a computing task processing failure is returned;
if the third task is divisible, the edge cloud evaluates both the predicted completion time of the divided third task scheduled to the center cloud and/or edge nodes of the corresponding type and the predicted completion time of the undivided third task scheduled to the center cloud, and selects the minimum; if this minimum predicted completion time is less than the third task's set latest completion time, the third task is scheduled, in the manner corresponding to the minimum predicted completion time, to the center cloud and/or edge nodes of the corresponding type for processing, otherwise a computing task processing failure is returned.
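A hedged sketch of this branch between the undivided center-cloud plan and the best divided plan; the labels, names and comparison conventions are illustrative assumptions, not the patent's interface:

```python
from typing import Optional

def schedule_third_task(divisible: bool, t_central: float,
                        t_split: Optional[float], deadline: float) -> str:
    """Step S400 decision (sketch): an indivisible third task may only go
    to the center cloud; a divisible one takes whichever of the best
    split plan (t_split) and the undivided center-cloud plan (t_central)
    finishes sooner, provided it beats the set latest completion time."""
    if not divisible:
        return "center_cloud" if t_central < deadline else "failed"
    best = min(t for t in (t_central, t_split) if t is not None)
    if best >= deadline:
        return "failed"
    return "split" if best == t_split else "center_cloud"
```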
In some preferred embodiments, the sorting in step S200 is performed from high to low according to the value of the priority corresponding to each first task.
A second aspect of the present invention provides a computing task scheduling system for edge computing, which comprises a computing task selection processing module, a first task selection processing module, a second task selection processing module and a third task selection processing module;
the computing task selection processing module is configured to acquire computing tasks to be executed by each edge node, select computing tasks meeting execution requirements through a preset first selection rule to process, and take the computing tasks not meeting the execution requirements as first tasks to be unloaded to an edge cloud;
the first task selection processing module is configured to obtain the priority of each first task through a preset priority computing method and sort the priority, select the first tasks meeting the execution requirement through a preset first selection rule after the priority is sorted, and take the first tasks not meeting the execution requirement as second tasks;
the second task selection processing module is configured so that the edge cloud selects edge nodes meeting the execution requirements of each second task according to the preset first selection rule, schedules each second task to the corresponding edge node for processing, and takes the second tasks not meeting the execution requirements as third tasks;
the third task selection processing module is configured so that the edge cloud divides each third task, selects, according to the division condition and through the preset first selection rule, a center cloud and/or edge node meeting the execution requirement of the third task, and schedules the third task to the corresponding center cloud and/or edge node for processing; otherwise a computing task processing failure is returned.
In a third aspect of the present invention, a storage device is provided, in which a plurality of programs are stored, and the programs are loaded and executed by a processor to implement the above-mentioned edge-computation-oriented computing task scheduling method.
In a fourth aspect of the present invention, a processing apparatus is provided, which includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is suitable for being loaded and executed by a processor to realize the edge computing-oriented computing task scheduling method.
The invention has the beneficial effects that:
according to the invention, the computing task is processed through cloud-edge-end cooperation, the computing resources at the edge side are reasonably utilized, and the real-time performance of the computing task scheduling processing is improved. According to the method, the computing task is processed through the edge node, and if the edge node is overloaded or the time delay requirement of the processing of the computing task is not met (namely the estimated completion time of the computing task at the edge node is longer than the set latest completion time of the task), the computing task is unloaded to the edge cloud.
And taking the edge cloud as a center, and comprehensively considering the user level of the edge node for unloading the computing task, the latest completion time of the computing task and the waiting time of the computing task to obtain the priority of the computing task. According to the priority of the computing tasks, the edge cloud processes the computing tasks, and the problem that the computing tasks which are submitted by low-level users and have low real-time requirements cannot be executed all the time can be prevented. If the load of the edge cloud is too heavy or the time delay requirement of the processing of the computing task cannot be met, the computing tasks with different requirements are dispatched to the edge nodes and/or the edge cloud and/or the center cloud which can provide corresponding computing services for processing according to the classification type and the segmentation of the computing tasks, computing resources of the whole network are fully utilized, efficient and accurate dispatching of the computing tasks with high priority is achieved, and time delay can be effectively reduced.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a method for scheduling edge-computing-oriented computing tasks according to an embodiment of the present invention;
FIG. 2 is a block diagram of a computing task scheduling system for edge computing according to an embodiment of the present invention;
FIG. 3 is a hardware framework diagram of a computing task scheduling system for edge computing according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The computing task scheduling method for edge computing, as shown in FIG. 1, comprises the following steps:
step S100, each edge node acquires a computing task to be executed, selects the computing task meeting the execution requirement through a preset first selection rule for processing, and unloads the computing task not meeting the execution requirement to an edge cloud as a first task;
step S200, the edge cloud obtains the priority of each first task through a preset priority computing method, sorts the priority, selects first tasks meeting execution requirements through a preset first selection rule after sorting, and takes the first tasks not meeting the execution requirements as second tasks;
step S300, the edge cloud selects edge nodes meeting the execution requirements of each second task through the preset first selection rule, schedules each second task to the corresponding edge node for processing, and takes the second task not meeting the execution requirements as a third task;
Step S400, the edge cloud divides each third task, selects, according to the division condition and through the preset first selection rule, a center cloud and/or edge node meeting the execution requirement of the third task, and schedules the third task to the corresponding center cloud and/or edge node for processing; otherwise it returns that processing of the computing task has failed.
In order to more clearly describe the method for scheduling a computation task facing edge computation according to the present invention, details of each step in an embodiment of the method of the present invention are expanded below with reference to the accompanying drawings.
Step S100, each edge node acquires a computing task to be executed, selects the computing task meeting the execution requirement through a preset first selection rule for processing, and takes the computing task not meeting the execution requirement as a first task to be unloaded to an edge cloud.
In this embodiment, there are n edge nodes on the edge side, denoted P_1, P_2, ..., P_n, and each edge node executes its own computing tasks. If the predicted completion time t_1 of a computing task at one of the n edge nodes satisfies t_1 ≤ T_l, where T_l is the latest completion time set for the computing task, the task is processed at that node. If the load is too heavy (the CPU utilization of the edge node exceeds 60%) or the computing task of the current edge node does not meet the delay requirement (that is, the predicted completion time of the computing task at the current edge node is not less than the latest completion time), the corresponding computing task (that is, a computing task that does not meet the execution requirement) is offloaded to the edge cloud as a first task.
The predicted completion time of the computing task at the edge node is the sum of the computing execution time of the computing task at the edge node and the waiting processing time of the computing task at the edge node.
In addition, the edge nodes have different computing frameworks, and each edge node has a different user level (a richer computing framework, with more computing and storage resources, can handle more complex computing tasks and corresponds to a higher user level).
The computing framework complexity is evaluated according to the types of computing task that an edge node can compute: T_k denotes the k-th computing task, and C_nj denotes that the n-th edge node can compute j types of computing task, where the task types include images, voice, text and the like.
The edge node user level is evaluated according to the computing framework complexity of the edge node: U_nj denotes that the n-th edge node, able to compute j types of computing task, has user level j; the larger j is, the higher the user level.
And S200, the edge cloud obtains the priority of each first task through a preset priority computing method, sorts the priority, selects the first tasks meeting the execution requirement through the preset first selection rule after sorting, and takes the first tasks not meeting the execution requirement as second tasks.
In this embodiment, for all the computing tasks offloaded by the edge nodes to the edge cloud, i.e., the first tasks, the edge cloud combines the user level of the offloading edge node, the latest completion time of the first task and the waiting time of the first task into the priority of the first task, as shown in formula (1):
T_prio = w1*U_nj + w2*T_w - w3*T_l    (1)
wherein T_prio is the value corresponding to the priority of the first task, U_nj is the user level of the edge node offloading the first task, T_w is the time the first task has already waited, T_l is the latest completion time set for the first task, and w1, w2 and w3 are preset weight values that can be adjusted according to actual conditions.
The task queue, arranged from high to low by the value corresponding to task priority, is denoted by the set T = {T_i1, T_i2, ..., T_im}, containing m tasks, where T_im represents the computing task ranked m that was sent by the i-th edge node, and the parameter i satisfies 0 < i ≤ m.
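Building that queue amounts to a descending sort on the priority value; a minimal sketch (names assumed):

```python
def order_task_queue(tasks: list[tuple[str, float]]) -> list[str]:
    """tasks holds (task id, T_prio value) pairs; the queue is the ids
    arranged from the highest priority value to the lowest."""
    return [tid for tid, _prio in
            sorted(tasks, key=lambda pair: pair[1], reverse=True)]
```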
According to the state of its computing resources, the edge cloud allocates computing resources to the high-priority first tasks first. If the predicted completion time of a task processed at the edge cloud is less than the task's set latest completion time, the task is processed directly at the edge cloud and deleted from the task queue; otherwise the task is skipped, and all tasks are traversed in order. If the edge cloud is overloaded (the CPU utilization of the edge cloud exceeds 60%) or the predicted completion time of a first task at the edge cloud is greater than the task's set latest completion time, the remaining tasks are taken as second tasks according to task priority and scheduled to edge nodes of different types or to the cloud computing center. The hardware structure based on edge node, edge cloud and center cloud is shown in FIG. 3.
The predicted completion time of the first task in the edge cloud is the sum of the computing execution time of the first task in the edge cloud, the network communication time of the first task unloaded to the edge cloud, and the waiting processing time of the first task in the edge cloud. The computing execution time of the first task at the edge cloud is estimated from hardware computing resources, storage resources, and the like.
And step S300, the edge cloud selects edge nodes meeting the execution requirements of each second task through the preset first selection rule, schedules each second task to the corresponding edge node for processing, and takes the second task not meeting the execution requirements as a third task.
In this embodiment, the edge cloud classifies the remaining tasks in the task queue by task type and arranges them from high to low priority. Because each edge node's computing framework is specialized (an edge node is dedicated to processing one class of computing task), the predicted completion time of each second task at edge nodes of the corresponding type is computed first. If the predicted completion time at exactly one edge node meets the task's delay requirement (that is, it is less than the latest completion time set for the task), the second task is scheduled to that edge node for processing and deleted from the task queue. If the predicted completion times of at least two edge nodes meet the second task's delay requirement, the task is scheduled to the edge node with the minimum predicted completion time for processing and deleted from the task queue. If no edge node meets the task's delay requirement, the task is taken as a third task and it is judged whether the task can be divided.
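The loop described above can be sketched as follows, assuming each queued second task carries its set latest completion time and that a map of candidate edge nodes of the matching type is available (all names are illustrative):

```python
def dispatch_second_tasks(queue, predicted_times):
    """queue: (task id, deadline) pairs in priority order; predicted_times
    maps a task id to {edge-node id: predicted completion time at a node
    of the matching type}. Tasks with no feasible node become third tasks."""
    placed, third_tasks = {}, []
    for task_id, deadline in queue:
        feasible = {node: t
                    for node, t in predicted_times.get(task_id, {}).items()
                    if t < deadline}
        if feasible:
            placed[task_id] = min(feasible, key=feasible.get)
        else:
            third_tasks.append(task_id)
    return placed, third_tasks
```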
The predicted completion time of the second task is the sum of the computing execution time of the second task at the edge node of the corresponding type, the network communication time of the second task unloaded to the edge cloud, the network communication time of the second task scheduled to the edge node of the corresponding type and the waiting processing time of the second task at the edge node of the corresponding type. The computation execution time of the second task at the edge node is estimated based on hardware computation resources, storage resources, and the like.
Step S400, the edge cloud divides each third task, selects, according to the division condition and through the preset first selection rule, a center cloud and/or edge node meeting the execution requirement of the third task, and schedules the third task to the corresponding center cloud and/or edge node for processing; otherwise it returns that processing of the computing task has failed.
In this embodiment, if the third task is not divisible, the predicted completion time of processing the task with the existing computing resources of the central cloud (i.e., the cloud computing center) is calculated. If the computing resources meet the task processing requirement (i.e., the predicted completion time of the third task at the central cloud is less than or equal to the task's set latest completion time), the task is uploaded to the central cloud, allocated to the computing resources with the minimum predicted completion time, and deleted from the task queue; otherwise the edge node is notified that task processing has failed, and the task is deleted from the task queue.
If the third task is divisible, the post-division scheduling strategy is estimated from the computing resources of the current edge nodes, edge cloud and central cloud and from the current cloud-edge-end network state: the predicted task completion times of all possible divisions of the task are calculated, the division with the minimum predicted completion time is found, and it is compared with the minimum predicted completion time of uploading the undivided third task to the central cloud for processing. If at least one of these two minimum predicted completion times meets the task processing requirement, the third task is scheduled and executed in the manner with the smaller predicted completion time and deleted from the task queue. If neither meets the task processing requirement, the edge node is notified that task processing has failed, and the task is deleted from the task queue. Step S300 is then executed again until all task queues are empty, and the computation ends.
The division of a third task falls into two categories: first, the subtasks obtained by dividing the task are allocated to different edge nodes for cooperative processing; second, the subtasks are allocated to edge node-edge cloud, edge node-center cloud, edge cloud-center cloud, or edge node-edge cloud-center cloud combinations for cooperative processing.
For the scheduling strategy in which all subtasks are scheduled to edge nodes, the predicted completion times of the divided subtasks at the different edge nodes are evaluated, and the maximum predicted completion time among the subtasks is selected as the task's predicted completion time t_l1.
For the scheduling strategy of edge node-edge cloud, edge node-center cloud, edge cloud-center cloud or edge node-edge cloud-center cloud, the predicted completion times of the divided subtasks at the different edge nodes and/or edge cloud and/or center cloud are evaluated, and the maximum subtask predicted completion time among them is selected as the third task's predicted completion time t_l2. Then t_l1, t_l2 and t_d-c (the predicted completion time when the task is not divided) are compared with the latest completion time of the third task; if at least one of them satisfies t ≤ T_l, the least time-consuming manner is selected for execution; if none can meet the requirement, the edge cloud notifies the edge node that task processing has failed.
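A sketch of this comparison, under the stated rule that a divided plan finishes when its slowest subtask finishes (the plan labels and function names are illustrative assumptions):

```python
def split_plan_time(subtask_times: list[float]) -> float:
    """A division's predicted completion time is the maximum predicted
    completion time among its subtasks."""
    return max(subtask_times)

def pick_plan(t_l1: float, t_l2: float, t_dc: float, deadline: float) -> str:
    """Compare the edge-nodes-only plan (t_l1), the mixed edge/cloud plan
    (t_l2) and the undivided center-cloud plan (t_dc) against the third
    task's latest completion time; take the fastest feasible plan."""
    plans = {"edge_nodes": t_l1, "mixed": t_l2, "center_cloud": t_dc}
    feasible = {name: t for name, t in plans.items() if t <= deadline}
    if not feasible:
        return "failed"  # the edge cloud notifies the edge node
    return min(feasible, key=feasible.get)
```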
Therefore, the predicted completion time of the third task is the sum of the computing execution time of the third task at the edge node and/or the edge cloud and/or the center cloud, the network communication time for offloading the third task to the edge cloud, the network communication time for scheduling the third task to the edge node and/or the center cloud, and the waiting processing time of the third task at the edge node and/or the edge cloud and/or the center cloud. And the computing execution time of the third task on the edge node and/or the edge cloud and/or the center cloud is estimated according to the hardware computing resource, the storage resource and the like.
An edge-computing-oriented computing task scheduling system according to a second embodiment of the present invention, as shown in fig. 2, includes: a computing task selection processing module 100, a first task selection processing module 200, a second task selection processing module 300, and a third task selection processing module 400;
the computing task selection processing module 100 is configured to acquire the computing tasks to be executed by each edge node, select the computing tasks meeting the execution requirements through a preset first selection rule for processing, and offload the computing tasks not meeting the execution requirements to an edge cloud as first tasks;
the first task selection processing module 200 is configured for the edge cloud to obtain the priority of each first task through a preset priority calculation method and sort the first tasks accordingly; after sorting, it selects the first tasks meeting the execution requirements through the preset first selection rule and takes those not meeting the execution requirements as second tasks;
the second task selection processing module 300 is configured for the edge cloud to select, through the preset first selection rule, edge nodes meeting the execution requirements of each second task, schedule each second task to the corresponding edge node for processing, and take the second tasks not meeting the execution requirements as third tasks;
the third task selection processing module 400 is configured for the edge cloud to divide each third task, select, according to the division condition and through the preset first selection rule, a center cloud and/or edge nodes meeting the execution requirements of the third task, and schedule the third task to the corresponding center cloud and/or edge nodes for processing; otherwise, a computing-task processing failure is returned.
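As a rough illustration only, the cascade formed by modules 100-400 might be sketched as follows. The task objects and the `can_execute`, `reschedule_to_node`, and `divide_and_dispatch` helpers are hypothetical names standing in for logic the patent leaves abstract.

```python
# Hypothetical sketch of the module 100-400 cascade; Task objects and the
# helper methods are invented names, as the patent leaves them abstract.

def schedule(tasks, node, edge_cloud, center_cloud):
    results = {}
    first_tasks = []
    for t in tasks:                          # module 100: local selection
        if node.can_execute(t):
            results[t.name] = "edge-node"
        else:
            first_tasks.append(t)            # offloaded as a first task
    # module 200: priority ordering on the edge cloud, highest first
    first_tasks.sort(key=edge_cloud.priority, reverse=True)
    for t in first_tasks:
        if edge_cloud.can_execute(t):
            results[t.name] = "edge-cloud"
        elif edge_cloud.reschedule_to_node(t):       # module 300
            results[t.name] = "other-edge-node"
        else:                                        # module 400
            outcome = edge_cloud.divide_and_dispatch(t, center_cloud)
            results[t.name] = outcome or "failure"
    return results
```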
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
It should be noted that, the edge-computing-oriented computing task scheduling system provided in the foregoing embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage apparatus according to a third embodiment of the present invention stores therein a plurality of programs adapted to be loaded by a processor and to implement the above-described edge-computation-oriented computation task scheduling method.
A processing apparatus according to a fourth embodiment of the present invention includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is suitable for being loaded and executed by a processor to realize the edge computing-oriented computing task scheduling method.
It can be clearly understood by those skilled in the art that, for convenience and brevity, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A computing task scheduling method facing edge computing is characterized by comprising the following steps:
step S100, each edge node acquires the computing tasks to be executed, selects the computing tasks meeting the execution requirements through a preset first selection rule for processing, and offloads the computing tasks not meeting the execution requirements to an edge cloud as first tasks;
step S200, the edge cloud obtains the priority of each first task through a preset priority calculation method and sorts the first tasks accordingly; after sorting, it selects the first tasks meeting the execution requirements through the preset first selection rule and takes those not meeting the execution requirements as second tasks;
step S300, the edge cloud selects, through the preset first selection rule, edge nodes meeting the execution requirements of each second task, schedules each second task to the corresponding edge node for processing, and takes the second tasks not meeting the execution requirements as third tasks;
step S400, the edge cloud divides each third task, selects, according to the division condition and through the preset first selection rule, a center cloud and/or edge nodes meeting the execution requirements of the third task, and schedules the third task to the corresponding center cloud and/or edge nodes for processing; otherwise, a computing-task processing failure is returned.
2. The method for scheduling edge-computing-oriented computing tasks according to claim 1, wherein the preset first selection rule is that the predicted completion time of the computing task at the current edge node, edge cloud, or center cloud is less than the latest completion time set for the task, and the current edge node or edge cloud is not in an overloaded state.
3. The edge-computing-oriented computing task scheduling method of claim 2, wherein the preset priority calculation method is:
T_prio = w1 * U_nj + w2 * T_w - w3 * T_l
wherein T_prio is the value corresponding to the priority of the first task, U_nj is the user level of the edge node that offloads the first task, T_w is the time the first task has already waited, T_l is the latest completion time set for the first task, and w1, w2, w3 are preset weight values.
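The priority formula of claim 3 translates directly into code. The weight values below are arbitrary examples, not values prescribed by the patent.

```python
# Sketch of the claim 3 priority formula; the default weights are arbitrary
# example values, not values taken from the patent.

def task_priority(user_level, time_waited, latest_completion,
                  w1=0.5, w2=0.3, w3=0.2):
    """T_prio = w1*U_nj + w2*T_w - w3*T_l: a higher user level and a longer
    wait raise the priority, while a later deadline lowers it."""
    return w1 * user_level + w2 * time_waited - w3 * latest_completion
```

A task that has waited longer, or whose latest completion time is nearer, thus climbs the edge cloud's queue.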
4. The method for scheduling edge-computing-oriented computing tasks according to claim 2, wherein the predicted completion time of each computing task in step S100 is the sum of the computing execution time of the computing task at the current edge node and the waiting processing time of the computing task at that edge node;
the predicted completion time of the first task is the sum of the computing execution time of the first task in the edge cloud, the network communication time for offloading the first task to the edge cloud, and the waiting processing time of the first task in the edge cloud;
the predicted completion time of the second task is the sum of the computing execution time of the second task at the edge node, the network communication time for offloading the second task to the edge cloud, the network communication time for scheduling the second task to the edge node, and the waiting processing time of the second task at the edge node;
the predicted completion time of the third task is the sum of the computing execution time of the third task at the edge node and/or edge cloud and/or center cloud, the network communication time for offloading the third task to the edge cloud, the network communication time for scheduling the third task to the edge node and/or center cloud, and the waiting processing time of the third task at the edge node and/or edge cloud and/or center cloud.
5. The method for scheduling edge-computing-oriented computing tasks according to claim 4, wherein in step S300, scheduling each second task to the corresponding edge node for processing includes:
if there is exactly one edge node meeting the execution requirements of the second task, directly scheduling the second task to that edge node for processing;
if there are at least two edge nodes meeting the execution requirements of the second task, scheduling the second task to the edge node with the shortest predicted completion time for processing; wherein the type of each edge node meeting the execution requirements of a second task is the same as the type of that second task.
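Claim 5's placement rule for second tasks can be sketched as follows; the representation of candidates as `(node_name, predicted_completion_time)` pairs of the matching task type is an assumption made for illustration.

```python
# Sketch of claim 5's placement rule; candidates are assumed to be
# (node_name, predicted_completion_time) pairs of the matching task type.

def place_second_task(candidates):
    if not candidates:
        return None  # no suitable node: the task becomes a third task
    if len(candidates) == 1:
        return candidates[0][0]      # exactly one node: schedule directly
    # several nodes: pick the shortest predicted completion time
    return min(candidates, key=lambda c: c[1])[0]
```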
6. The method for scheduling edge-computing-oriented computing tasks according to claim 5, wherein in step S400, selecting, through the preset first selection rule and according to the division condition, the center cloud and/or edge nodes meeting the execution requirements of the third task and scheduling the third task to the corresponding center cloud and/or edge nodes for processing includes:
if the third task is indivisible, the edge cloud evaluates the predicted completion time of scheduling the third task to the center cloud; if this time is less than the latest completion time set for the task, the task is scheduled to the center cloud for processing, otherwise a computing-task processing failure is returned;
if the third task is divisible, the edge cloud evaluates the predicted completion time of the divided third task scheduled to the center cloud and/or edge nodes of the corresponding type, as well as the predicted completion time of the undivided third task scheduled to the center cloud, and selects the minimum predicted completion time; if the minimum predicted completion time is less than the latest completion time set for the third task, the third task is scheduled, in the dispatch mode corresponding to the minimum predicted completion time, to the center cloud and/or edge nodes of the corresponding type for processing, otherwise a computing-task processing failure is returned.
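Claim 6's branch structure can be sketched as follows; `t_undivided` and `t_divided` are assumed to be precomputed predicted completion times, since the actual evaluation depends on node and cloud resources not modeled here.

```python
# Sketch of claim 6's branching; t_undivided and t_divided are assumed to
# be precomputed predicted completion times (t_divided is None when the
# task cannot be divided).

def dispatch_third_task(divisible, t_undivided, t_divided, latest):
    if not divisible:
        # indivisible: only the center cloud remains as a target
        return "center-cloud" if t_undivided < latest else "failure"
    best = min(t_divided, t_undivided)
    if best >= latest:
        return "failure"
    return "divided" if t_divided < t_undivided else "center-cloud"
```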
7. The edge-computing-oriented computing task scheduling method according to claim 3, wherein the sorting in step S200 is performed from high to low according to the priority value corresponding to each first task.
8. An edge-computing-oriented computing task scheduling system, comprising: a computing task selection processing module, a first task selection processing module, a second task selection processing module, and a third task selection processing module;
the computing task selection processing module is configured to acquire the computing tasks to be executed by each edge node, select the computing tasks meeting the execution requirements through a preset first selection rule for processing, and offload the computing tasks not meeting the execution requirements to an edge cloud as first tasks;
the first task selection processing module is configured for the edge cloud to obtain the priority of each first task through a preset priority calculation method and sort the first tasks accordingly; after sorting, it selects the first tasks meeting the execution requirements through the preset first selection rule and takes those not meeting the execution requirements as second tasks;
the second task selection processing module is configured for the edge cloud to select, through the preset first selection rule, edge nodes meeting the execution requirements of each second task, schedule each second task to the corresponding edge node for processing, and take the second tasks not meeting the execution requirements as third tasks;
the third task selection processing module is configured for the edge cloud to divide each third task, select, according to the division condition and through the preset first selection rule, a center cloud and/or edge nodes meeting the execution requirements of the third task, and schedule the third task to the corresponding center cloud and/or edge nodes for processing; otherwise, a computing-task processing failure is returned.
9. A storage device having a plurality of programs stored therein, wherein the programs are adapted to be loaded and executed by a processor to implement the edge-computing-oriented computing task scheduling method according to any one of claims 1-7.
10. A processing device, comprising a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; wherein the programs are adapted to be loaded and executed by the processor to implement the edge-computing-oriented computing task scheduling method according to any one of claims 1-7.
CN202010220415.5A 2020-03-25 2020-03-25 Computing task scheduling method, system and device facing edge computing Pending CN111427679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010220415.5A CN111427679A (en) 2020-03-25 2020-03-25 Computing task scheduling method, system and device facing edge computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010220415.5A CN111427679A (en) 2020-03-25 2020-03-25 Computing task scheduling method, system and device facing edge computing

Publications (1)

Publication Number Publication Date
CN111427679A true CN111427679A (en) 2020-07-17

Family

ID=71548856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010220415.5A Pending CN111427679A (en) 2020-03-25 2020-03-25 Computing task scheduling method, system and device facing edge computing

Country Status (1)

Country Link
CN (1) CN111427679A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111682973A (en) * 2020-08-17 2020-09-18 烽火通信科技股份有限公司 Method and system for arranging edge cloud
CN111901435A (en) * 2020-07-31 2020-11-06 南京航空航天大学 Load-aware cloud-edge collaborative service deployment method
CN112202888A (en) * 2020-09-30 2021-01-08 中国联合网络通信集团有限公司 Message forwarding method for edge user and SDN
CN112272227A (en) * 2020-10-22 2021-01-26 华侨大学 Edge computing task scheduling method based on computation graph
CN112637312A (en) * 2020-12-17 2021-04-09 深圳艾灵网络有限公司 Edge node task coordination method, device and storage medium
CN112905320A (en) * 2021-02-05 2021-06-04 北京邮电大学 System, method and device for executing tasks of Internet of things
CN113157446A (en) * 2021-04-09 2021-07-23 联通(广东)产业互联网有限公司 Cloud edge cooperative resource allocation method, device, equipment and medium
CN113254178A (en) * 2021-06-01 2021-08-13 苏州浪潮智能科技有限公司 Task scheduling method and device, electronic equipment and readable storage medium
CN113810792A (en) * 2021-11-19 2021-12-17 南京绛门信息科技股份有限公司 Edge data acquisition and analysis system based on cloud computing
CN114301907A (en) * 2021-11-18 2022-04-08 北京邮电大学 Service processing method, system and device in cloud computing network and electronic equipment
WO2022152016A1 (en) * 2021-01-12 2022-07-21 华为技术有限公司 Node scheduling method and apparatus


Similar Documents

Publication Publication Date Title
CN111427679A (en) Computing task scheduling method, system and device facing edge computing
US10474504B2 (en) Distributed node intra-group task scheduling method and system
CN107911478B (en) Multi-user calculation unloading method and device based on chemical reaction optimization algorithm
CN109656703B (en) Method for assisting vehicle task unloading through mobile edge calculation
US7797705B2 (en) System for assigning tasks according to the magnitude of the load of information processing requested
CN109788046B (en) Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm
CN101167054A (en) Methods and apparatus for selective workload off-loading across multiple data centers
CN109005211B (en) Micro-cloud deployment and user task scheduling method in wireless metropolitan area network environment
CN106407007B (en) Cloud resource configuration optimization method for elastic analysis process
JP2020507135A (en) Exclusive agent pool distribution method, electronic device, and computer-readable storage medium
CN113347267B (en) MEC server deployment method in mobile edge cloud computing network
Li et al. An efficient scheduling optimization strategy for improving consistency maintenance in edge cloud environment
CN111338807B (en) QoE (quality of experience) perception service enhancement method for edge artificial intelligence application
Smys et al. Performance evaluation of game theory based efficient task scheduling for edge computing
Tran-Dang et al. Task priority-based resource allocation algorithm for task offloading in fog-enabled IoT systems
CN111176840A (en) Distributed task allocation optimization method and device, storage medium and electronic device
Chatterjee et al. A new clustered load balancing approach for distributed systems
CN110287024A (en) The dispatching method of multi-service oriented device multi-user in a kind of industrial intelligent edge calculations
CN112988363B (en) Resource scheduling method, device, server and storage medium
CN112866358B (en) Method, system and device for rescheduling service of Internet of things
CN113157443A (en) Resource balanced scheduling method based on edge computing environment
CN112148454A (en) Edge computing method supporting serial and parallel and electronic equipment
Elsharkawey et al. Mlrts: multi-level real-time scheduling algorithm for load balancing in fog computing environment
CN112148449A (en) Local area network scheduling algorithm and system based on edge calculation
CN114816720B (en) Scheduling method and device of multi-task shared physical processor and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination