CN110069341B - Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing - Google Patents


Info

Publication number
CN110069341B
CN110069341B (application CN201910286347.XA)
Authority
CN
China
Prior art keywords
edge
tasks
server
task
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910286347.XA
Other languages
Chinese (zh)
Other versions
CN110069341A (en)
Inventor
谈海生
刘柳燕
李向阳
黄浩强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910286347.XA
Publication of CN110069341A
Application granted
Publication of CN110069341B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5072: Grid computing
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention discloses a method for scheduling tasks with dependency relationships, combined with on-demand function configuration, in edge computing, which comprises the following steps: step 1, acquiring the relevant parameters of the network and the tasks, and selecting an initialization edge server; step 2, performing a greedy initial configuration of the edge servers using the parameters from step 1 to obtain server configuration information; step 3, representing the dependent tasks from step 1 as a directed acyclic graph and topologically sorting the tasks in the graph to form a topological sequence; step 4, iterating over the topological sequence from step 3 using the server configuration information from step 2, computing the earliest completion time of each task on each edge server, and obtaining a task allocation and scheduling scheme; and step 5, under the constraint of the actual capacity of the edge servers, allocating and scheduling each task according to the scheme from step 4. The method minimizes the completion time of an application consisting of multiple dependent tasks in an edge computing environment.

Description

Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
Technical Field
The invention relates to the field of edge computing, and in particular to a method for scheduling tasks with dependency relationships, combined with on-demand function configuration, in edge computing.
Background
In recent years, with the rapid development of cellular networks and the Internet of Things (IoT), high-speed, high-reliability air interfaces have made it possible to offload complex, energy-hungry applications to remote cloud data centers, compensating for the limited computing capacity of mobile terminals and reducing their energy consumption. However, long-distance propagation inevitably causes serious communication delay, which cannot meet the real-time response requirements of applications such as augmented reality (AR), cognitive assistance, and the Internet of Vehicles. To alleviate this problem, an important paradigm shift has emerged in mobile computing: from centralized cloud computing to edge computing (also called fog computing or cloudlet computing). The idea of edge computing is to deploy small servers, called edge servers, at the edge of the Internet (e.g., at Wi-Fi access points or cellular base stations). These servers have more powerful computing and storage capabilities than mobile devices and are geographically close to mobile users, who can typically connect to them directly over a wireless network. This greatly reduces communication latency and lets mobile users seamlessly access cloud services with low delay.
However, as the performance requirements and resource requirements of mobile applications are increasing dramatically, edge computing faces many challenges in practical applications, such as:
(1) Capacity limitation and on-demand configuration: in contrast to a remote cloud, edge servers have relatively limited computing and storage capabilities, so only a limited number of functions can be configured on each edge server. To run a given task, an edge server must first perform operations such as database caching, image downloading, installation and startup, and additional environment setup; this series of operations is referred to as function configuration, and a task can only run on an edge server configured with the required function. If the current edge server does not have enough capacity to configure the function of the task to be scheduled, a decision must be made to remove configured functions from some edge servers. On-demand function configuration significantly affects both the performance of the mobile application and the utilization of the edge servers, so providing an intelligent function configuration policy is crucial.
(2) Task dependency and parallel execution: a mobile application is composed of multiple dependent tasks, usually represented by a directed acyclic graph (DAG). Vertices in the graph represent different types of tasks, and the value on a directed edge indicates the amount of data that must be transmitted, once one task finishes, as input to the task the arrow points to; the edge set therefore also defines the sequential or parallel relationships among the tasks. In addition, different tasks may prefer different edge servers; for example, in a Facebook video processing application, the encoding operation is computation-intensive and better suited to an edge server with greater computing power. To minimize the completion time of the application, a reasonable scheduling policy must be designed, including deciding which edge server each task in the DAG is placed on and the order in which the tasks execute on each edge server.
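The DAG model above can be made concrete with a small sketch. The structure below is a hypothetical 4-task application (not one of the patent's figures); each edge carries the amount of data its predecessor hands to its successor:

```python
# Hypothetical 4-task application DAG. Keys are (predecessor, successor)
# pairs; values are the data amounts w_ij transferred along each edge.
app_dag = {
    (0, 1): 4.0,  # task 0's output (4 units of data) feeds task 1
    (0, 2): 2.0,  # tasks 1 and 2 have no edge between them,
    (1, 3): 1.0,  # so they may run in parallel on different servers
    (2, 3): 3.0,  # task 3 starts only after both 1 and 2 finish
}

def predecessors(dag, task):
    """Tasks whose output `task` needs before it can start."""
    return [i for (i, j) in dag if j == task]
```

Here the edge set both fixes the execution order (task 3 after tasks 1 and 2) and exposes the parallelism (tasks 1 and 2 are independent) that a scheduling policy can exploit.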
At present, in the field of mobile edge computing, a great deal of work addresses task scheduling and function configuration, but existing algorithms do not consider the dependency relationships among the tasks within an application and instead treat the application as an independent, indivisible whole. As mobile applications grow more complex, distributing tasks that can execute in parallel across different edge servers can effectively optimize their performance. In a resource-limited edge computing environment, however, how to perform function configuration and dependency-aware task scheduling remains an urgent problem.
Disclosure of Invention
Based on the problems in the prior art, the invention aims to provide a method for scheduling dependent tasks, combined with on-demand function configuration, in edge computing, which addresses the fact that existing edge-computing task scheduling ignores the dependency relationships among the tasks of an application, leading to low application running efficiency.
The purpose of the invention is realized by the following technical scheme:
the embodiment of the invention provides a method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing, which comprises the following steps:
step 1, acquiring relevant parameters of an edge computing network and an application containing a task with a dependency relationship, and selecting an edge server from the edge computing network as an initialization server for processing input and output of the application;
step 2, carrying out greedy initial configuration on each edge server in the edge computing network by using the relevant parameters of the application obtained in the step 1 to obtain server configuration information;
step 3, representing the tasks with the dependency relationship applied in the step 1 by using a directed acyclic graph, and performing topological sequencing on the tasks in the directed acyclic graph to obtain a topological sequence of the tasks;
step 4, iterating over the topological sequence of tasks obtained in step 3 using the server configuration information obtained in step 2, computing the earliest completion time of each task in the sequence when placed on each edge server of the edge computing network, storing the corresponding assignment process, and then, starting from the completion time of the last task, searching the stored assignments backwards to reconstruct the allocation and scheduling scheme of all tasks;
and 5, under the constraint of the actual capacity of the edge server, distributing and scheduling each task according to the distribution and scheduling scheme of the tasks finally determined in the step 4.
It can be seen from the foregoing technical solutions provided in the present invention that, the method for scheduling tasks with dependency relationships configured as needed in combination with functions in edge computing provided in the embodiments of the present invention has the following beneficial effects:
the method of the invention realizes the configuration of decision functions according to needs, and the edge server to which each task is respectively placed and the execution sequence of the tasks on each edge server. The method has the advantages that the utilization rate of the edge server can be improved by efficiently scheduling tasks with dependency relationships and configuring functions as required under the premise of considering the limited number of function configurations on the edge server in the edge computing, the completion time of one application consisting of a plurality of dependent tasks is minimized in an edge computing environment (the completion time refers to the time for obtaining a final operation result and returning the final operation result to a mobile user after the application is unloaded from the edge server or a remote cloud server), the application operation time is reduced, compared with other methods (such as a HEFT algorithm provided by Topcuoglu) which are modified and used in the scene, the application completion time can be reduced by 1.54-2.8 times, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a flowchart of a scheduling method for tasks with dependency configured as needed in combination with functions in edge computing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an edge computing network configuration according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a directed acyclic graph structure for three applications provided in the present invention;
FIG. 4 is a graph comparing the performance of the method provided by embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below in conjunction with the specific contents of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention. Details which are not described in detail in the embodiments of the invention belong to the prior art which is known to the person skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a method for scheduling tasks with dependency relationships configured as needed in combination with functions in edge computing, which can minimize the completion time of an application composed of multiple dependency tasks in an edge computing environment, thereby improving the efficiency of application running in edge computing and improving user experience, and includes:
step 1, acquiring relevant parameters of an edge computing network and an application containing a task with a dependency relationship, and selecting an edge server from the edge computing network as an initialization server of the application;
step 2, using the relevant parameters obtained in the step 1 to perform greedy initial configuration on each edge server in the edge computing network to obtain server configuration information;
step 3, representing the tasks with the dependency relationship in the step 1 by using a directed acyclic graph, and carrying out topological sequencing on the tasks in the directed acyclic graph to obtain a topological sequence of the tasks;
step 4, iterating over the topological sequence of tasks obtained in step 3 using the server configuration information obtained in step 2, computing the earliest completion time of each task in the sequence when placed on each edge server of the edge computing network, storing the corresponding assignment process, and then, starting from the completion time of the last task, searching the stored assignments backwards to reconstruct the allocation and scheduling scheme of all tasks;
and 5, under the constraint of the actual capacity of the edge server, distributing and scheduling each task according to the distribution and scheduling scheme of the tasks finally determined in the step 4.
In step 1 of the above method, the edge computing network includes:
a remote cloud server and a plurality of heterogeneous edge servers, each edge server having a limited capacity, wherein bidirectional data transfer rates between any two edge servers are equal.
In step 1 of the method, obtaining relevant parameters of the edge computing network and the task having the dependency relationship includes:
the running time of each task on each edge server of the edge computing network and the time it takes for each edge server to configure different functions.
In step 1 of the method, the server at which the greedy initial configuration starts is determined by the minimum running time of all tasks over all servers. The initialization server models the edge server to which a mobile device offloads a computing task (such as a face recognition application): the mobile device provides the input data required by the task and receives the final result back from the initialization server, in the hope of completing the computation faster by using the initialization server's resources.
In step 2 of the above method, the edge servers in the edge computing network are given a greedy initial configuration using the relevant parameters obtained in step 1 to produce the server configuration information. Each edge server keeps an array recording the numbers of its configured functions, and the information stored in these arrays is the server configuration information. The configuration process includes the following steps:
step 21, ignoring the actual capacity of the edge servers, greedily configure the function of each task on the edge server where the task's running time is smallest, record the function's number in that server's array, and take the maximum capacity cost over all edge servers as the virtual capacity under the current configuration;
step 22, set the capacity of all edge servers to the virtual capacity and continue configuring the edge servers not fully configured in step 21, as follows: sort the running times of all tasks on all edge servers from smallest to largest and consider them in order; for each running time, if the corresponding edge server is already fully configured, skip it; otherwise, if that server has already configured the function of the corresponding task, skip it; otherwise configure the function, store it in the server's array, and move on to the next running time, until all edge servers are fully configured.
The processing of step 21 specifically counts how much capacity each edge server needs under the greedy configuration scheme; the maximum capacity cost over the edge servers is the virtual capacity (when each function occupies one unit of capacity, the maximum number of configured functions equals the maximum capacity consumption).
In step 22, not fully configured means that the number of configured functions is smaller than the virtual capacity of the edge server;
fully configured means that the number of configured functions equals the virtual capacity of the edge server.
The processing of step 22 continues function configuration on the edge servers not fully configured in step 21, so that each edge server uses up its virtual capacity; in practice a two-dimensional array can record the configured functions on each edge server.
In step 3 of the method, representing the dependent tasks of the application from step 1 with a directed acyclic graph comprises the following:
each task represents one computing module of the application, and the dependent tasks form a task set V = {v_1, v_2, ..., v_J}, where task v_j has running time t_{jk} on edge server s_k. A directed acyclic graph G = (V, E) represents the application, where a directed edge e = (v_i, v_j) ∈ E indicates that task v_j requires the output of task v_i as input, with w_{ij} the amount of data transferred.
In step 4, the task allocation and scheduling scheme is obtained by dynamic programming: the results of subproblems are memoized, and when the iteration reaches the last step (i.e., the last task) the minimum completion time is obtained. The allocation and scheduling scheme can then be reconstructed backwards from this result; an array records the server assignment of each task at its completion time and the scheduling order of the tasks on each server, yielding the finally determined allocation and scheduling scheme.
The method of the invention decides the on-demand function configuration, the edge server on which each task is placed, and the execution order of the tasks on each edge server. Its advantage is that, given the limited number of functions that can be configured on an edge server, efficiently scheduling dependent tasks and configuring functions on demand improves edge server utilization, minimizes the completion time of an application composed of multiple dependent tasks in an edge computing environment (the completion time is the time until the final result is returned to the mobile user after the application is offloaded to the edge servers or a remote cloud server), reduces the application running time, and improves user experience.
The embodiments of the present invention are described in further detail below.
The method for scheduling dependent tasks with on-demand function configuration in edge computing comprises two parts: model definitions and processing steps.
(1) The network environment and the models used by the scheduling method are defined as follows:
(11) Edge computing network: the method models the edge computing network as an edge cloud system containing K heterogeneous edge servers, denoted S = {s_1, s_2, ..., s_K}. Each edge server s_k has a finite capacity C_k. The data transmission rate from edge server s_i to s_j is d_{ij}, with d_{ij} = d_{ji}. The edge cloud system includes a remote cloud server s_K.
(12) Task dependency graph: each computing module of an application is a task, and the tasks of an application have dependency relationships. The tasks of one application form a task set V = {v_1, v_2, ..., v_J}, where the running time of task v_j on edge server s_k is denoted t_{jk}. A directed acyclic graph (DAG) G = (V, E) represents the application, where a directed edge e = (v_i, v_j) ∈ E indicates that task v_j requires the output of task v_i as input, with w_{ij} the amount of data transferred.
(13) Server configuration: task v_i can only run on an edge server configured with the corresponding function, and it takes edge server s_j time r_{ij} to perform the configuration for task v_i. Each function occupies one unit of capacity, so edge server s_i can configure at most C_i functions. By default, a server opens one instance (also called a thread) for each configured function to process the corresponding tasks; if there is not enough capacity on an edge server to configure a new function, a decision must be made to drop some configured functions, terminating the instances of the dropped functions.
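As a sketch of the on-demand behavior described in (13), the class below models a server whose capacity limits the number of configured functions. The eviction choice (oldest-configured first) is an assumption made for illustration; the patent only requires that some configured function be dropped when capacity runs out:

```python
class EdgeServer:
    """Minimal sketch of on-demand function configuration (capacity C_i,
    one unit of capacity per function). The eviction policy is hypothetical."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.configured = []  # function names, oldest first

    def configure(self, func):
        """Ensure `func` is configured; evict the oldest function if full.

        Returns the evicted function name, or None if nothing was dropped.
        """
        if func in self.configured:
            return None                       # already configured, nothing to do
        evicted = None
        if len(self.configured) >= self.capacity:
            evicted = self.configured.pop(0)  # drop a configured function;
                                              # its instance is terminated
        self.configured.append(func)
        return evicted
```

A server with capacity 2 that configures f1, then f2, then f3 ends up holding f2 and f3, having dropped f1 to make room.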
(14) Directed acyclic graph simplification: to simplify the application representation, a null node is added with directed edges pointing to all entry nodes (in-degree 0) of the DAG, where the data amount on these edges is the application's initial input data; another null node collects the results of all exit nodes (out-degree 0). Null nodes occupy no capacity and have no execution time, and both null nodes must be placed on the same edge server, i.e., the initialization server that receives the task request is also the edge server that receives the final execution result. This step is added because the user's mobile device generally offloads the application to the nearest edge server (called the initialization server in this application) and sends the application's initial input data to it; during scheduling, if an entry node (a task that needs the initial input data as input) is placed on a different edge server, the communication delay of transmitting the initial data from the initialization server to that server is taken into account. Likewise, the final result is returned to the user's mobile device by the initialization server. This is what the directed-acyclic-graph simplification captures.
The specific treatment of the method comprises the following steps:
step 1, inputting information of each parameter in model definition, including: the network, the directed acyclic graph, the running time of each task on each server, and the time required for each edge server to configure different functions, and specify the server where the entry node of the application is located.
Step 2, using the information from step 1, specifically the running times of each task on the different servers, perform a greedy initial configuration of the edge servers in the network:
step 21, without considering the actual capacity constraint of the edge servers, greedily configure each task on the server where its running time is smallest, and compute the maximum capacity cost over all servers (i.e., the maximum number of functions configured on any server), called the virtual capacity;
step 22, set the capacity of all edge servers to the virtual capacity and continue configuring the servers that have not reached full configuration (i.e., whose number of configured functions is less than the virtual capacity) in step 21: sort the running times of all tasks on all edge servers from smallest to largest, and for each running time t_{jk} in order, check whether the corresponding edge server s_k is fully configured; if not, check whether s_k has already configured the function of task v_j; if it has, skip to the next running time, otherwise configure the function and then move to the next running time, until all edge servers are fully configured. This configuration also guarantees that every task has an edge server on which it can execute (a necessary condition for step 4) even when on-demand configuration is not allowed (i.e., when the servers' configured functions can no longer change);
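Steps 21 and 22 can be sketched as follows, assuming a runtime matrix `T` with `T[j][k]` the running time of task v_j on server s_k and one unit of capacity per function (the function name and data layout are illustrative, not taken from the patent):

```python
def greedy_initial_config(T):
    """Greedy initial configuration (steps 21-22), ignoring actual capacity.

    T[j][k]: running time of task j on server k. Returns the set of task
    functions configured on each server and the virtual capacity.
    """
    J, K = len(T), len(T[0])
    config = [set() for _ in range(K)]
    # Step 21: put each task's function on the server where it runs fastest.
    for j in range(J):
        fastest = min(range(K), key=lambda k: T[j][k])
        config[fastest].add(j)
    # Virtual capacity = largest capacity cost of any server so far.
    virtual_cap = max(len(c) for c in config)
    # Step 22: scan all (runtime, task, server) triples from smallest runtime,
    # filling servers that are not yet fully configured, without repeats.
    for _, j, k in sorted((T[j][k], j, k) for j in range(J) for k in range(K)):
        if len(config[k]) < virtual_cap and j not in config[k]:
            config[k].add(j)
    return config, virtual_cap
```

On a toy instance `T = [[1, 5], [2, 3], [4, 2]]`, step 21 places tasks 0 and 1 on server 0 and task 2 on server 1, giving virtual capacity 2; step 22 then fills server 1 with task 1's function, so every server ends fully configured and every task can run somewhere.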
Step 3, topologically sort the directed acyclic graph input in step 1 to obtain a topological sequence of the tasks. Define a parameter f_{ij} representing the earliest finish time of task v_i when run on server s_j, initialized as f_{ij} := ∞ for all 1 ≤ i ≤ J and 1 ≤ j ≤ K;
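The topological sequence of step 3 can be produced with Kahn's algorithm, sketched below (the edge-list input format is an assumption of this sketch):

```python
from collections import deque

def topological_order(num_tasks, edges):
    """Return a topological sequence of tasks 0..num_tasks-1.

    edges: list of (i, j) pairs meaning task i must finish before task j.
    """
    succ = [[] for _ in range(num_tasks)]
    indeg = [0] * num_tasks
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    queue = deque(i for i in range(num_tasks) if indeg[i] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in succ[i]:       # releasing i may make successors ready
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    if len(order) != num_tasks:
        raise ValueError("not a DAG: cycle detected")
    return order
```

Any task appears in the sequence only after all of its predecessors, which is exactly the order in which step 4 iterates.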
step (ii) of4, utilizing the server configuration information of the step 2 and the definition of the step 3, iterating according to the topological sequence, using a dynamic programming method, and obtaining f step by step ij The value of (a) is set to (b),
Figure GDA0003687924250000071
Figure GDA0003687924250000072
until the last task f is calculated Ja And obtaining a task allocation and scheduling scheme;
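The dynamic program of step 4 can be sketched as below, under the virtual-capacity setting and under one plausible reading of the step-4 recurrence: a task may start on a server once every predecessor has finished somewhere and its output has been transferred over. All names and the exact formula are assumptions of this sketch, not the patent's notation:

```python
def earliest_finish_times(order, preds, T, W, D):
    """Dynamic program for f[(i, j)], the earliest finish time of task i on
    server j, ignoring capacity. Assumed recurrence:
    f[i][j] = T[i][j] + max over predecessors k of
              min over servers l of (f[k][l] + W[k][i] / D[l][j]),
    with transfer within the same server treated as free.

    order: topological sequence; preds[i]: predecessors of task i;
    T[i][j]: runtime; W[k][i]: data from k to i; D[l][j]: transfer rate.
    """
    K = len(T[0])
    f = {}
    for i in order:
        for j in range(K):
            ready = 0.0  # time when all inputs of task i are present on j
            for k in preds[i]:
                ready = max(ready,
                            min(f[(k, l)] + (0.0 if l == j else W[k][i] / D[l][j])
                                for l in range(K)))
            f[(i, j)] = ready + T[i][j]
    return f
```

To recover the allocation and scheduling scheme, one would additionally remember which server l achieved each inner minimum and walk backwards from the last task's entry, as the patent's step 4 describes.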
Step 5, under the actual capacity constraint of the servers, allocate and schedule the application's tasks according to the result of step 4, and compute the application's real completion time. Since the actual capacity is often smaller than the virtual capacity, some tasks that could run immediately under the virtual capacity once their DAG dependencies were satisfied now require extra queuing time and function configuration time, so the real completion time is not the f_{Ja} obtained in step 4 but f_{Ja} plus the queuing and function configuration times of some tasks. The queuing time is the time a task, although ready under the DAG's timing constraints, must wait for an idle instance (one not processing any task) to appear on its server. The function configuration time is the time to configure the task's function on demand, replacing the function of the idle instance.
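Step 5's adjustment, adding queuing and on-demand configuration time on top of the step-4 schedule, can be sketched as follows; the single-instance-per-server simplification and all names are assumptions of this sketch:

```python
def actual_completion(schedule, T, R, dag_ready):
    """Add queuing and configuration delays to a per-server schedule.

    schedule[j]: tasks assigned to server j, in their scheduled order
    (simplified here to one instance per server);
    T[i][j]: runtime of task i on server j;
    R[i][j]: on-demand configuration time (0 if already configured);
    dag_ready[i]: time at which task i's DAG dependencies are satisfied.
    """
    finish = {}
    for j, tasks in enumerate(schedule):
        idle_at = 0.0  # when this server's instance becomes idle again
        for i in tasks:
            # queue until the DAG is satisfied AND the instance is idle,
            # then pay the on-demand configuration time before running
            start = max(dag_ready[i], idle_at) + R[i][j]
            finish[i] = start + T[i][j]
            idle_at = finish[i]
    return finish
```

With two tasks queued on one server, the second task's finish time includes its wait for the first task plus its own configuration time, which is exactly the gap between f_{Ja} and the real completion time that step 5 accounts for.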
Examples
The method for scheduling tasks with dependency relationships configured according to needs by combining functions in edge computing provided by the embodiment of the invention specifically comprises the following steps:
Step 1, input the edge computing network set up in Fig. 2 and the directed acyclic graphs G = (V, E) of the three applications in Fig. 3, giving the edge server s_a at which each application is initialized;
Step 2, using the information from step 1, perform the greedy initial configuration of the edge servers in the network: ignoring the servers' capacities, greedily configure each task on the server where its running time is smallest, set the virtual capacity to the maximum capacity spent on any server, and then set the capacity of all edge servers to this virtual capacity. Under the virtual capacity setting, each edge server with remaining capacity greedily configures, without repetition, the functions of the tasks with the smallest running times until all servers are fully configured. This configuration also guarantees that every task has a server on which it can execute even when on-demand configuration is not allowed;
Step 3: topologically sort the directed acyclic graph of step 1 to obtain a topological sequence of the tasks. Define a parameter f_ij denoting the earliest finish time of task v_i when run on server s_j, initialized as f_ij := ∞ for all 1 ≤ i ≤ J and 1 ≤ j ≤ K.
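Step 3 can be sketched as a standard topological sort (Kahn's algorithm) followed by initialization of the f_ij table. The Python below is illustrative only; the function and variable names are assumptions, not the patent's code.

```python
import math
from collections import deque

def topological_order(num_tasks, edges):
    """Return one topological order of the task DAG (Kahn's algorithm).

    edges: list of (i, j) pairs meaning task v_j depends on task v_i.
    """
    succ = [[] for _ in range(num_tasks)]
    indeg = [0] * num_tasks
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    ready = deque(v for v in range(num_tasks) if indeg[v] == 0)
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for u in succ[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(order) != num_tasks:
        raise ValueError("dependency graph contains a cycle")
    return order

# f[i][j]: earliest finish time of task v_i on server s_j, initialized to infinity
J, K = 4, 3
f = [[math.inf] * K for _ in range(J)]
```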
Step 4: using the server configuration information of step 2 and the definition of step 3, iterate along the topological sequence, computing each f_ij step by step with a dynamic programming recurrence [given as an equation image in the original], until f_Ja for the last task is computed, which yields the task allocation and scheduling scheme.
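The dynamic program of step 4, whose exact recurrence appears only as an equation image in the original, plausibly resembles the earliest-finish-time recurrence sketched below: the finish time of task v_i on server s_k is its running time plus the latest arrival of its predecessors' outputs, with each predecessor taken from the server minimizing its finish time plus data-transfer time. The sketch and its names (`preds`, `xfer`) are assumptions, not the patent's formula; a `choice` table is kept so the allocation can be reconstructed backwards as the claims describe.

```python
import math

def dp_schedule(order, preds, run_time, xfer, config):
    """f[i][k]: earliest finish time of task i on server k (hedged sketch).

    preds[i]:   list of (p, w) pairs - task p feeds w data units to task i.
    xfer[a][b]: time to move one data unit from server a to server b (0 if a == b).
    config[k]:  set of tasks whose functions server k holds.
    """
    J, K = len(run_time), len(run_time[0])
    f = [[math.inf] * K for _ in range(J)]
    choice = [[None] * K for _ in range(J)]    # predecessor placements, for backtracking
    for i in order:
        for k in range(K):
            if i not in config[k]:
                continue                       # function not available on this server
            ready, pick = 0.0, {}
            for p, w in preds[i]:
                # predecessor may finish on any server; pay transfer if it differs
                best = min(range(K), key=lambda a: f[p][a] + w * xfer[a][k])
                ready = max(ready, f[p][best] + w * xfer[best][k])
                pick[p] = best
            f[i][k] = ready + run_time[i][k]
            choice[i][k] = pick
    return f, choice
```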
Step 5: using the task allocation and scheduling scheme obtained in step 4, recompute the application completion time in the actual edge computing network given in step 1; if a function is missing when a task runs, configure it on demand, and obtain the real completion time after adding the configuration and queuing delays.
The edge computing network illustrated in fig. 2 is an edge cloud system composed of 3 edge servers and a remote cloud, in which some functions have been allocated to the edge servers and the application is initialized at edge server s_1. By the decision of the invention, tasks 1 and 3 of the DAG are placed to run on s_1, task 2 is placed on s_2, and task 4 on s_3. Since s_2 has not configured the function corresponding to task 2, before task 2 runs, server s_2 must download that function from the cloud and replace the function corresponding to task 1 with it.
FIG. 4 is a graph comparing the performance of the method of the present invention. The abscissa "Chain Query" in fig. 4 corresponds to a query chain application, "Video Processing" to a video processing application, and "CDA" to a complex data analysis application. Under these three applications, the algorithm ALG-ODM provided by the invention reduces the completion time by factors of at least 2.8, 2.28, and 1.54, respectively.
The configuration of the edge computing network used in this embodiment is as follows [given as a table image in the original].
those of ordinary skill in the art will understand that: all or part of the processes of the methods according to the embodiments may be implemented by a program, which may be stored in a computer-readable storage medium, and when executed, may include the processes according to the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A method for scheduling dependent tasks configured on demand in combination with functions in edge computing is characterized by comprising the following steps:
step 1, acquiring relevant parameters of an edge computing network and an application containing a task with a dependency relationship, and selecting an edge server from the edge computing network as an initialization server for processing input and output of the application;
step 2, using the relevant parameters of the application obtained in the step 1 and the selected initialization server to perform greedy initial configuration on each edge server in the edge computing network to obtain server configuration information;
in the step 2, greedy initial configuration is performed on the edge servers in the edge computing network using the relevant parameters of the application obtained in step 1 to obtain the server configuration information, and an array is set on each edge server to record the numbers of the configured functions, wherein the configuration process comprises the following steps:
step 21, on the premise of ignoring the actual capacity of the edge server, greedy ensuring that each task configures a corresponding function on the edge server with the least running time, recording the serial number of the function in the array of the edge server, and calculating the maximum capacity cost value in the edge server as the virtual capacity under the current configuration;
step 22, setting the capacity of all edge servers to the virtual capacity, and continuing to configure the edge servers not fully configured in step 21 as follows: sorting the running times of all tasks on all edge servers from smallest to largest and, for each running time in turn, skipping to the next running time if the corresponding edge server is fully configured, skipping to the next running time if that edge server has already configured the function corresponding to the task, and otherwise configuring the function, storing it in the array, and proceeding to the next running time, until all edge servers are fully configured;
step 3, representing the tasks with the dependency relationship applied in the step 1 by using a directed acyclic graph, and performing topological sequencing on the tasks in the directed acyclic graph to obtain a topological sequence of the tasks;
step 4, performing iterative computation over the topological sequence of tasks obtained in step 3 by using the server configuration information obtained in step 2, computing, for each task in the topological sequence, the earliest completion time of the task on each edge server of the edge computing network and storing the corresponding allocation process, and searching the stored allocation processes backwards from the completion time of the last task to reconstruct the allocation and scheduling scheme of all tasks;
step 5, under the constraint of the actual capacity of the edge servers, allocating and scheduling each task according to the task allocation and scheduling scheme finally determined in step 4.
2. The method for scheduling function-on-demand configured dependent tasks in edge computing according to claim 1, wherein in step 1 of the method, the edge computing network comprises:
a remote cloud and a plurality of heterogeneous edge servers, each edge server having a limited capacity, wherein bidirectional data transfer rates between any two edge servers are equal.
3. The method for scheduling tasks with dependency relationships configured on demand in combination with functions in edge computing according to claim 1 or 2, wherein in step 1 of the method, acquiring relevant parameters of an edge computing network and an application containing tasks with dependency relationships comprises:
the running time of each task on each edge server of the edge computing network and the time it takes for each edge server to configure different functions.
4. The method according to claim 1, wherein in step 22, not fully configured means that the number of configured functions on an edge server is less than its virtual capacity;
fully configured means that the number of configured functions equals the virtual capacity of the edge server.
5. The method for scheduling function-on-demand tasks in edge computing according to claim 1 or 2, wherein in step 3 of the method, representing the tasks with dependencies of the application in step 1 by a directed acyclic graph comprises:
in the tasks with the dependency relationship, each task represents one computation module of the application, and the tasks with the dependency relationship are modeled as a task set V = {v_1, v_2, ..., v_J}, where task v_j has a running time of t_jk on edge server s_k;
the application is represented by a directed acyclic graph [its notation is given as an equation image in the original], in which a directed edge e = (v_i, v_j) ∈ E denotes that task v_j requires the result of task v_i as input, the amount of data transferred being w_ij.
CN201910286347.XA 2019-04-10 2019-04-10 Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing Active CN110069341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910286347.XA CN110069341B (en) 2019-04-10 2019-04-10 Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing


Publications (2)

Publication Number Publication Date
CN110069341A CN110069341A (en) 2019-07-30
CN110069341B true CN110069341B (en) 2022-09-06

Family

ID=67367446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910286347.XA Active CN110069341B (en) 2019-04-10 2019-04-10 Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing

Country Status (1)

Country Link
CN (1) CN110069341B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650194A (en) * 2019-09-23 2020-01-03 中国科学技术大学 Task execution method based on edge calculation in computer network
CN110740194B (en) * 2019-11-18 2020-11-20 南京航空航天大学 Micro-service combination method based on cloud edge fusion and application
CN113031522B (en) * 2019-12-25 2022-05-31 沈阳高精数控智能技术股份有限公司 Low-power-consumption scheduling method suitable for periodically dependent tasks of open type numerical control system
CN111756812B (en) * 2020-05-29 2021-09-21 华南理工大学 Energy consumption perception edge cloud cooperation dynamic unloading scheduling method
CN111930487B (en) * 2020-08-28 2024-05-24 北京百度网讯科技有限公司 Job stream scheduling method and device, electronic equipment and storage medium
CN116670684A (en) * 2021-05-14 2023-08-29 支付宝(杭州)信息技术有限公司 Method and system for scheduling tasks
CN115037956B (en) * 2022-06-06 2023-03-21 天津大学 Traffic scheduling method for cost optimization of edge server
CN116880994B (en) * 2023-09-07 2023-12-12 之江实验室 Multiprocessor task scheduling method, device and equipment based on dynamic DAG

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018095537A1 (en) * 2016-11-25 2018-05-31 Nokia Technologies Oy Application provisioning to mobile edge
CN109561148A (en) * 2018-11-30 2019-04-02 湘潭大学 Distributed task dispatching method in edge calculations network based on directed acyclic graph

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018095537A1 (en) * 2016-11-25 2018-05-31 Nokia Technologies Oy Application provisioning to mobile edge
CN109561148A (en) * 2018-11-30 2019-04-02 湘潭大学 Distributed task dispatching method in edge calculations network based on directed acyclic graph

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Online job dispatching and scheduling in edge-clouds; Haisheng Tan; IEEE INFOCOM 2017 - IEEE Conference on Computer Communications; 2017-10-05; full text *
Performance-effective and low-complexity task scheduling for heterogeneous computing; H. Topcuoglu; IEEE Transactions on Parallel and Distributed Systems; 2002-08-07; vol. 13, no. 3; full text *
QoS-aware resource scheduling mechanism in an edge computing environment; Zou Yunfeng et al.; Electronic Technology and Software Engineering; 2018-09-27; no. 18; full text *
Research on edge server deployment and resource allocation for mobile edge computing; Zhao Lei; China Masters' Theses Full-text Database (Information Science and Technology); 2019-02-15; vol. 2019, no. 2; I139-206 *


Similar Documents

Publication Publication Date Title
CN110069341B (en) Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
CN110413392B (en) Method for formulating single task migration strategy in mobile edge computing scene
CN110941667B (en) Method and system for calculating and unloading in mobile edge calculation network
Li et al. Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system
CN107911478B (en) Multi-user calculation unloading method and device based on chemical reaction optimization algorithm
CN108804227B (en) Method for computing-intensive task unloading and optimal resource allocation based on mobile cloud computing
Téllez et al. A tabu search method for load balancing in fog computing
Zhu et al. BLOT: Bandit learning-based offloading of tasks in fog-enabled networks
CN109788046B (en) Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm
CN110519370B (en) Edge computing resource allocation method based on facility site selection problem
Mostafavi et al. A stochastic approximation approach for foresighted task scheduling in cloud computing
CN113822456A (en) Service combination optimization deployment method based on deep reinforcement learning in cloud and mist mixed environment
CN112860337B (en) Method and system for unloading dependent tasks in multi-access edge computing
Chen et al. Latency minimization for mobile edge computing networks
Chen et al. When learning joins edge: Real-time proportional computation offloading via deep reinforcement learning
Chen et al. Joint optimization of task offloading and resource allocation via deep reinforcement learning for augmented reality in mobile edge network
Hmimz et al. Bi-objective optimization for multi-task offloading in latency and radio resources constrained mobile edge computing networks
Xu et al. Online learning algorithms for offloading augmented reality requests with uncertain demands in MECs
Kim et al. Partition placement and resource allocation for multiple DNN-based applications in heterogeneous IoT environments
Chen et al. An intelligent approach of task offloading for dependent services in Mobile Edge Computing
Gao et al. Markov decision process‐based computation offloading algorithm and resource allocation in time constraint for mobile cloud computing
Meng et al. Deep reinforcement learning based delay-sensitive task scheduling and resource management algorithm for multi-user mobile-edge computing systems
CN116737370A (en) Multi-resource scheduling method, system, storage medium and terminal
Gao et al. JCSP: Joint caching and service placement for edge computing systems
Yin et al. An optimal image storage strategy for container-based edge computing in smart factory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant