CN110069341A - Method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing - Google Patents
Method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing
- Publication number
- CN110069341A CN110069341A CN201910286347.XA CN201910286347A CN110069341A CN 110069341 A CN110069341 A CN 110069341A CN 201910286347 A CN201910286347 A CN 201910286347A CN 110069341 A CN110069341 A CN 110069341A
- Authority
- CN
- China
- Prior art keywords
- task
- edge
- server
- configuration
- edge server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Information Transfer Between Computers (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing, comprising: Step 1, obtain the relevant parameters of the network and of the tasks, and select an initialization edge server; Step 2, use the parameters of Step 1 to perform a greedy initial configuration of the edge servers and obtain the server configuration information; Step 3, represent the dependent tasks of Step 1 as a directed acyclic graph and topologically sort its tasks into a topological sequence; Step 4, iterate over the topological sequence of Step 3 with the server configuration information of Step 2, compute the earliest finish time of each task on each edge server, and obtain the task assignment and scheduling plan; Step 5, under the actual capacity constraints of the edge servers, assign and schedule the tasks according to the plan of Step 4. The method minimizes the completion time of an application composed of multiple dependent tasks in an edge computing environment.
Description
Technical Field
The invention relates to the field of edge computing, and in particular to a method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing.
Background
In recent years, with the rapid development of cellular networks and the Internet of Things (IoT), high-speed, high-reliability air interfaces have made it possible to offload complex, energy-hungry applications to remote cloud data centers, compensating for the limited computing capability of mobile terminals and reducing their energy consumption. However, long-distance propagation inevitably introduces severe communication delays, which cannot satisfy the real-time response requirements of applications such as augmented reality (AR), cognitive assistance, and connected vehicles. To alleviate this problem, the field of mobile computing has undergone an important paradigm shift from centralized cloud computing to edge computing (also known as fog computing or cloudlet computing). The idea of edge computing is to deploy small servers, called edge servers, at the edge of the Internet (for example at Wi-Fi access points or cellular base stations). These servers have more computing and storage capacity than mobile devices and are geographically close to mobile users; a mobile user can usually connect to an edge server directly over a wireless network, which greatly reduces communication delay and lets mobile users access cloud services seamlessly with low latency.
However, because the performance requirements and resource demands of mobile applications are growing rapidly, edge computing faces many challenges in practice, such as:
(1) Capacity limits and on-demand configuration: compared with a remote cloud, an edge server has relatively limited computing and storage capability, so only a limited number of functions can be configured on it. To run a task, an edge server must first cache the corresponding database, download, install and start the corresponding image, and perform additional environment setup; this series of operations is called function configuration, and a task can only run on an edge server that has the required function configured. If the current edge server does not have enough capacity to configure the function of the task to be scheduled, the scheduler must decide to remove some functions already configured on edge servers. On-demand function configuration significantly affects the performance of mobile applications and the utilization of edge servers, so providing an intelligent function-configuration strategy is crucial.
(2) Task dependencies and parallel execution: a mobile application consists of multiple tasks with dependencies and is usually represented by a directed acyclic graph (DAG). The vertices of the graph represent tasks of different types, and the value on a directed edge represents the amount of data that must be transferred, after the source task finishes, as input to the task the edge points to; the edge set therefore also defines the sequential or parallel relations among the tasks. In addition, different tasks may prefer different edge servers; for example, in a Facebook video-processing application the encoding operation is compute-intensive and better suited to an edge server with stronger computing performance. To minimize the application completion time as far as possible, a reasonable scheduling strategy must be designed, including deciding on which edge server each task of the DAG is placed and in what order the tasks on each edge server are executed.
In the current field of mobile edge computing there is a large body of work on task scheduling and function configuration, but existing algorithms do not consider the dependencies among the tasks of an application; instead they assume the application is an independent, indivisible whole. As mobile applications grow more complex, distributing their parallelizable tasks across different edge servers can effectively improve application performance. In a resource-constrained edge computing environment, however, how to configure functions and schedule dependent tasks is a problem that urgently needs to be solved.
Summary of the Invention
In view of the problems in the prior art, the purpose of the present invention is to provide a method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing, which solves the problem that task scheduling in existing edge computing ignores the dependencies among the tasks of an application and therefore runs applications inefficiently.
The purpose of the invention is achieved by the following technical solution:
An embodiment of the present invention provides a method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing, comprising:
Step 1: obtain the relevant parameters of the edge computing network and of the application containing dependent tasks, and select one edge server in the edge computing network as the initialization server that handles the input and output of the application;
Step 2: use the application parameters obtained in Step 1 to perform a greedy initial configuration of each edge server in the edge computing network and obtain the server configuration information;
Step 3: represent the dependent tasks of the application in Step 1 as a directed acyclic graph, and topologically sort the tasks of the directed acyclic graph to obtain a topological sequence of the tasks;
Step 4: use the server configuration information obtained in Step 2 to iterate over the topological sequence obtained in Step 3, compute for each task in the sequence the earliest finish time it can achieve on each edge server of the edge computing network, store the corresponding assignment decisions, and, starting from the completion time of the last task, search the stored decisions backwards to reconstruct the assignment and scheduling plan of all tasks;
Step 5: under the actual capacity constraints of the edge servers, assign and schedule the tasks according to the assignment and scheduling plan finally determined in Step 4.
It can be seen from the technical solution provided above that the scheduling method provided by the embodiments of the present invention has the following beneficial effects:
The method decides the on-demand configuration of functions, the edge server on which each task is placed, and the order in which the tasks on each edge server are executed. Under the premise that only a limited number of functions can be configured on each edge server, it achieves efficient scheduling of dependent tasks and on-demand function configuration in edge computing, improves edge-server utilization, and minimizes the completion time of an application composed of multiple dependent tasks in an edge computing environment (the completion time is the time from offloading the application until the final result is returned to the mobile user from the edge servers or the remote cloud server), reducing application running time. Compared with other methods adapted to this scenario (such as the HEFT algorithm proposed by Topcuoglu), it reduces the application completion time by a factor of 1.54 to 2.8 and improves the user experience.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing according to an embodiment of the present invention;
Fig. 2 is a model diagram of the configuration method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the directed acyclic graph structures of the three applications used in the embodiment;
Fig. 4 is a performance comparison chart of the method provided by the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the specific content of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention. Content not described in detail in the embodiments belongs to the prior art known to those skilled in the art.
As shown in Fig. 1, an embodiment of the present invention provides a method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing, which minimizes the completion time of an application composed of multiple dependent tasks in an edge computing environment and thereby improves the efficiency of application execution in edge computing and the user experience. The method comprises:
Step 1: obtain the relevant parameters of the edge computing network and of the application containing dependent tasks, and select one edge server in the edge computing network as the initialization server of the application;
Step 2: use the parameters obtained in Step 1 to perform a greedy initial configuration of each edge server in the edge computing network and obtain the server configuration information;
Step 3: represent the dependent tasks of Step 1 as a directed acyclic graph, and topologically sort the tasks of the directed acyclic graph to obtain a topological sequence of the tasks;
Step 4: use the server configuration information obtained in Step 2 to iterate over the topological sequence obtained in Step 3, compute for each task in the sequence the earliest finish time it can achieve on each edge server of the edge computing network, store the corresponding decisions, and, starting from the completion time of the last task, search the stored decisions backwards to reconstruct the assignment and scheduling plan of all tasks;
Step 5: under the actual capacity constraints of the edge servers, assign and schedule the tasks according to the assignment and scheduling plan finally determined in Step 4.
In Step 1 of the above method, the edge computing network includes:
one remote cloud server and multiple heterogeneous edge servers, each edge server having a limited capacity, where for any two edge servers the data transmission rates in the two directions are equal.
In Step 1 of the above method, the relevant parameters of the edge computing network and of the dependent tasks include:
the running time of each task on each edge server of the edge computing network, and the time each edge server needs to configure each function.
In Step 1 of the above method, the server at which the greedy initial configuration starts is determined by the minimum of the running times of all tasks over all servers. The initialization server models the situation in which a mobile device offloads one of its computing tasks (for example a face-recognition application) to an edge server (the initialization server here), hoping to complete the task faster with that server's resources; the mobile device provides the input data the task needs, and the initialization server returns the final result.
In Step 2 of the above method, when the parameters obtained in Step 1 are used to perform the greedy initial configuration of the edge servers in the edge computing network and obtain the server configuration information, an array is maintained on each edge server to record the indices of the configured functions; the information stored in these arrays is the resulting server configuration information. The configuration process includes the following steps:
Step 21: ignoring the actual capacity of the edge servers, greedily ensure that for every task the corresponding function is configured on the edge server on which the task's running time is smallest, record the function index in that edge server's array, and take the maximum capacity consumed by any edge server under this configuration as the virtual capacity;
Step 22: set the capacity of every edge server to the virtual capacity, and continue configuring the edge servers left not full by Step 21 as follows: sort the running times of all tasks on all edge servers in ascending order and examine them in turn; for each running time, check whether the corresponding edge server is already full; if it is, skip the remaining checks; if it is not, check whether that edge server already has the function of the corresponding task configured; if it does, skip the remaining checks; otherwise configure the function, save it to the array, and move on to the next running time, until every edge server is full.
The processing of Step 21 amounts to counting how much capacity each edge server would consume under the greedy configuration; the maximum capacity consumed by any edge server is the virtual capacity (when each function occupies one unit of capacity, the maximum number of configured functions equals the maximum capacity consumption).
In Step 22, an edge server is not full if the number of functions configured on it is smaller than its virtual capacity;
it is full if the number of functions configured on it equals its virtual capacity.
The processing of Step 22 continues the function configuration on the edge servers left not full by Step 21, so that every edge server consumes its entire virtual capacity; in practice a two-dimensional array can record the functions configured on each edge server.
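As an illustration of Steps 21 and 22, the sketch below performs the greedy initial configuration on a small instance. It is a minimal sketch under two assumptions the text does not fix as code: each task needs its own dedicated function, and the running times are supplied as a dictionary keyed by (task, server); the per-server array of configured functions is represented by a Python list.

```python
def greedy_initial_configuration(run_time, servers):
    """Greedy initial configuration of the edge servers (Steps 21 and 22).

    run_time: dict {(task, server): running time t_jk of the task on that server}
    servers:  list of server identifiers
    Returns (config, virtual_capacity), where config[s] is the list of task
    functions configured on server s.  One dedicated function per task and one
    capacity unit per function are assumed.
    """
    tasks = sorted({t for (t, _) in run_time})
    config = {s: [] for s in servers}

    # Step 21: put every task's function on its fastest server, ignoring real capacity;
    # the largest number of functions any server receives becomes the virtual capacity.
    for t in tasks:
        best = min(servers, key=lambda s: run_time[(t, s)])
        if t not in config[best]:
            config[best].append(t)
    virtual_capacity = max(len(funcs) for funcs in config.values())

    # Step 22: treat every server's capacity as the virtual capacity and fill the
    # remaining slots, scanning all (task, server) running times from smallest to largest.
    for (t, s) in sorted(run_time, key=run_time.get):
        if len(config[s]) >= virtual_capacity:
            continue          # this server is already full
        if t in config[s]:
            continue          # this function is already configured here
        config[s].append(t)

    return config, virtual_capacity


# Small example: three tasks, two servers.
rt = {(1, "s1"): 2, (1, "s2"): 5,
      (2, "s1"): 4, (2, "s2"): 1,
      (3, "s1"): 3, (3, "s2"): 6}
print(greedy_initial_configuration(rt, ["s1", "s2"]))
# ({'s1': [1, 3], 's2': [2, 1]}, 2) -- both servers end up holding 2 functions
```

Scanning the (task, server) running times in ascending order mirrors the sweep of Step 22; skipping servers that are already full plays the role of "skip the remaining checks" in the text.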
In Step 3 of the above method, representing the dependent tasks of the application in Step 1 as a directed acyclic graph includes:
Among the dependent tasks, each task represents one computing module of the application; the dependent tasks are defined as a task set V = {v_1, ..., v_J}, where the running time of task v_j on edge server s_k is t_jk;
an application is represented by a directed acyclic graph G = (V, ε), where a directed edge e := (v_i, v_j) ∈ ε indicates that the execution of task v_j needs the result of task v_i as input, and the amount of data transferred is w_ij.
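For illustration, such a graph can be encoded as an adjacency dictionary with edge weights, and the topological sort of Step 3 can be sketched as follows. Kahn's algorithm is used here; the patent only requires some topological order, not a particular algorithm.

```python
from collections import deque


def topological_order(dag):
    """Return one topological sequence of the tasks in `dag` (Kahn's algorithm).

    dag: dict {v_i: {v_j: w_ij}} -- a directed edge v_i -> v_j with weight w_ij
    means task v_j needs w_ij units of output data from task v_i as input.
    """
    nodes = set(dag) | {v for succ in dag.values() for v in succ}
    indeg = {v: 0 for v in nodes}
    for u, succ in dag.items():
        for v in succ:
            indeg[v] += 1

    queue = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in dag.get(u, {}):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle, so it is not a DAG")
    return order


# Example: v1 -> v2, v1 -> v3, v2 -> v4, v3 -> v4 (a diamond-shaped application).
app = {"v1": {"v2": 5, "v3": 3}, "v2": {"v4": 2}, "v3": {"v4": 4}}
print(topological_order(app))   # e.g. ['v1', 'v2', 'v3', 'v4']
```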
In Step 4 above, the task assignment and scheduling plan is obtained through a dynamic programming process. Dynamic programming memoizes the results of the sub-problems; when the iteration reaches the last step (that is, the last task), the minimum completion time is obtained, and from this result the task assignment and scheduling plan can be reconstructed backwards: an array records, for that completion time, the placement of each task and the execution order of the tasks on each server, yielding the finally determined assignment and scheduling plan.
The method of the invention decides the on-demand configuration of functions, the edge server on which each task is placed, and the execution order of the tasks on each edge server. Under the premise that only a limited number of functions can be configured on each edge server, it achieves efficient scheduling of dependent tasks and on-demand function configuration in edge computing, improves edge-server utilization, and minimizes the completion time of an application composed of multiple dependent tasks in an edge computing environment (the completion time being the time from offloading the application until the final result is returned to the mobile user from the edge servers or the remote cloud server), reducing application running time and improving the user experience.
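The forward dynamic-programming pass and the backward reconstruction of Step 4 can be illustrated by the sketch below. The description does not spell out the recurrence for f_ij, so the HEFT-style recurrence used here (the earliest finish time of a task on a server is its running time plus the latest arrival of its predecessors' results, each predecessor taken at its own best placement) is an assumption made purely for illustration; queueing and configuration delays are deferred to Step 5, as in the text.

```python
def earliest_finish_times(topo, pred, run_time, rate, servers, init_server):
    """Forward dynamic-programming pass of Step 4, plus backward reconstruction.

    topo:      topological sequence of the tasks; topo[0] is the dummy source
               and topo[-1] the dummy sink, both pinned to init_server
    pred:      dict {v_j: {v_i: w_ij}} -- predecessors of v_j and data volumes w_ij
    run_time:  dict {(task, server): t_jk} (zero for the two dummy nodes)
    rate:      dict {(s, s'): d_ss'} -- data rate between distinct servers
    Returns (f, plan): f[(task, s)] is the earliest finish time of the task on
    server s, and plan[task] is the server chosen for the task, rebuilt
    backwards from the dummy sink on the initialization server.
    """
    INF = float("inf")

    def transfer(w, a, b):              # time to move w units of data from a to b
        return 0.0 if a == b or w == 0 else w / rate[(a, b)]

    f, choice = {}, {}
    for task in topo:
        allowed = [init_server] if task in (topo[0], topo[-1]) else servers
        for s in servers:
            if s not in allowed:
                f[(task, s)] = INF      # dummy nodes may only sit on the init server
                continue
            ready, picked = 0.0, {}
            for p, w in pred.get(task, {}).items():
                # assumed recurrence: each predecessor arrives from its own best placement
                best = min(servers, key=lambda sp: f[(p, sp)] + transfer(w, sp, s))
                ready = max(ready, f[(p, best)] + transfer(w, best, s))
                picked[p] = best
            f[(task, s)] = ready + run_time[(task, s)]
            choice[(task, s)] = picked

    # Backward reconstruction, starting from the dummy sink on the initialization server.
    plan, stack = {topo[-1]: init_server}, [(topo[-1], init_server)]
    while stack:
        task, s = stack.pop()
        for p, ps in choice.get((task, s), {}).items():
            if p not in plan:           # first placement found is kept (a simplification)
                plan[p] = ps
                stack.append((p, ps))
    return f, plan


# Tiny example: SOURCE -> v1 -> SINK, two servers, application initialized on "sa".
topo = ["SOURCE", "v1", "SINK"]
pred = {"v1": {"SOURCE": 4}, "SINK": {"v1": 0}}
rt = {("SOURCE", "sa"): 0, ("SOURCE", "sb"): 0, ("SINK", "sa"): 0, ("SINK", "sb"): 0,
      ("v1", "sa"): 5, ("v1", "sb"): 2}
rate = {("sa", "sb"): 2.0, ("sb", "sa"): 2.0}
f, plan = earliest_finish_times(topo, pred, rt, rate, ["sa", "sb"], "sa")
print(f[("SINK", "sa")], plan)   # 4.0 {'SINK': 'sa', 'v1': 'sb', 'SOURCE': 'sa'}
```

A predecessor reached through several successors keeps the first placement found during the backward pass, which is consistent with the relaxed recurrence of this sketch but is one of the simplifications that the replay of Step 5 must absorb.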
The embodiments of the present invention are described in further detail below.
The method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing according to an embodiment of the present invention is a scheduling method for dependent tasks combined with function configuration, and comprises model definitions and processing steps;
where (1) the network environment and the models used by the scheduling method are defined as follows:
(11) Edge computing network: the edge computing network to which the method of the invention applies is an edge cloud system containing K heterogeneous edge servers, denoted S = {s_1, ..., s_K}, where each edge server s_k has a finite capacity C_k; the data transmission rate from edge server s_i to s_j is d_ij, with d_ij = d_ji; the edge cloud system includes a remote cloud server s_K.
(12) Task dependency graph: each computing module of an application is a task, and the tasks of an application have dependencies; the tasks of an application form a task set V = {v_1, ..., v_J}, where the running time of task v_j on edge server s_k is denoted t_jk; an application is represented by a directed acyclic graph (DAG) G = (V, ε), and a directed edge e := (v_i, v_j) ∈ ε indicates that the execution of task v_j needs the result of task v_i as input, where the amount of data transferred is w_ij.
(13) Server configuration: task v_i can only run on an edge server on which the corresponding function is configured, and configuring the function of task v_i on edge server s_j takes time r_ij. Each function occupies one unit of capacity, so edge server s_i can have at most C_i functions configured at a time; by default the server opens one instance (which may also be called a thread) for each configured function to process the corresponding tasks. If an edge server does not have enough capacity to configure a new function, some already configured functions must be dropped, and the instances of the dropped functions are terminated at the same time.
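The server-configuration model of (13), with limited slots C_i, one capacity unit per function, configuration time r_ij, and eviction of an already configured function when the server is full, can be captured in a small sketch. The eviction rule used here (drop an arbitrary idle function) is an illustrative assumption; the patent treats the choice of which function to drop as part of the scheduling decision.

```python
class EdgeServer:
    """Minimal model of an edge server with a limited number of function slots."""

    def __init__(self, name, capacity, config_time):
        self.name = name
        self.capacity = capacity        # C_i: maximum number of functions configured at once
        self.config_time = config_time  # {function: r_ij} configuration time on this server
        self.configured = set()         # functions currently configured, one slot each

    def ensure_function(self, func, busy=frozenset()):
        """Make sure `func` is configured and return the configuration delay incurred.

        If the server is full, an idle function (one not in `busy`) is evicted first
        and its instance is terminated.  Which idle function to evict is left open by
        the patent; an arbitrary one is dropped here.
        """
        if func in self.configured:
            return 0.0                  # already configured: no extra delay
        if len(self.configured) >= self.capacity:
            idle = self.configured - set(busy)
            if not idle:
                raise RuntimeError("no idle function can be evicted")
            self.configured.remove(next(iter(idle)))
        self.configured.add(func)
        return self.config_time[func]   # r_ij: on-demand configuration time


# Example: a server with two slots must evict something to host a third function.
s2 = EdgeServer("s2", capacity=2, config_time={"f1": 1.0, "f2": 2.0, "f3": 1.5})
print(s2.ensure_function("f1"))   # 1.0
print(s2.ensure_function("f2"))   # 2.0
print(s2.ensure_function("f3"))   # 1.5 (one of f1/f2 is evicted first)
```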
(14) Directed acyclic graph simplification: to simplify the application representation, an empty node is added with directed edges pointing to every entry node of the directed acyclic graph (nodes with in-degree 0), and the data volume on these edges is the application's initial input data size; another empty node collects the results of all end nodes (nodes with out-degree 0). The empty nodes occupy no capacity and have no execution time, and both can only be placed on the same edge server, meaning that the initialization server of the task request is also the edge server that receives the final execution result. This step is introduced because a user's mobile device normally offloads the application to the edge server closest to it (called the initialization server in this application) and sends the application's initial input data to that server; when scheduling, if the initialization server decides to place an entry node (a task that takes the initial input data as its input) on another edge server, the communication delay of transferring the initial input data from the initialization server to that edge server must be taken into account. Conversely, the final result is also returned to the user's mobile device by the initialization server. The simplification of the directed acyclic graph captures exactly this process.
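A sketch of the simplification in (14) is given below: a dummy source node is connected to every entry task and carries the application's initial input size on its outgoing edges, and a dummy sink collects the results of every exit task. The weights on the exit-to-sink edges are not fixed by the description, so they default to zero here unless supplied.

```python
def add_dummy_nodes(dag, entry_input_size, exit_result_size=None):
    """Add a dummy SOURCE and SINK node to a task DAG, as in simplification (14).

    dag: dict {task: {successor: data volume transferred}}
    entry_input_size: size of the application's initial input data; it labels every
        SOURCE -> entry-task edge, modelling the transfer of the initial input from
        the initialization server.
    exit_result_size: optional dict {exit_task: result size} labelling the
        exit-task -> SINK edges; zero is assumed when not given.
    Both dummy nodes occupy no capacity, take no time, and must be placed on the
    initialization server (that constraint is enforced by the scheduler, not here).
    """
    succ = {u: dict(vs) for u, vs in dag.items()}
    for u in list(succ):
        for v in succ[u]:
            succ.setdefault(v, {})           # make sure every task appears as a key
    indeg = {u: 0 for u in succ}
    for u in succ:
        for v in succ[u]:
            indeg[v] += 1
    entries = [u for u in succ if indeg[u] == 0]   # in-degree 0: entry tasks
    exits = [u for u in succ if not succ[u]]       # out-degree 0: exit tasks
    succ["SOURCE"] = {u: entry_input_size for u in entries}
    succ["SINK"] = {}
    for u in exits:
        succ[u]["SINK"] = (exit_result_size or {}).get(u, 0)
    return succ


# Example: a hypothetical four-task application with initial input of size 10.
dag = {1: {2: 5, 3: 3}, 2: {4: 2}, 3: {4: 4}, 4: {}}
print(add_dummy_nodes(dag, entry_input_size=10))
# {1: {2: 5, 3: 3}, 2: {4: 2}, 3: {4: 4}, 4: {'SINK': 0}, 'SOURCE': {1: 10}, 'SINK': {}}
```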
The specific processing of the method includes the following steps:
Step 1: input the information of each parameter in the model definition, including the network, the directed acyclic graph, the running time of each task on each server, and the time each edge server needs to configure each function, and specify the server on which the application's entry node resides.
Step 2: using the information of Step 1, specifically the running time of each task on the different servers, perform the greedy initial configuration of the edge servers in the network:
Step 21: without considering the actual capacity constraints of the edge servers, greedily ensure that every task is configured on the server on which its running time is smallest, and compute the maximum capacity consumed by any server (that is, the maximum number of functions configured on a server); this value is called the virtual capacity;
Step 22: set the capacity of every edge server to the virtual capacity and continue configuring the edge servers that are not yet full (that is, whose number of configured functions is smaller than the virtual capacity): sort the running times of all tasks on all edge servers in ascending order and examine them in turn; for a running time t_jk, check whether edge server s_k is already full; if it is, move on to the next running time; if it is not, check whether s_k already has the function of task v_j configured; if it does, move on; otherwise configure the function and then move on to the next running time, until every edge server is full. This configuration also guarantees that, even when on-demand configuration is not allowed (that is, when the functions configured on the servers no longer change), every task has an edge server on which it can execute (a necessary condition for Step 4);
Step 3: topologically sort the directed acyclic graph input in Step 1 to obtain the topological sequence of the tasks; define the parameter f_ij as the earliest finish time of task v_i when placed on server s_j, and initialize f_ij := ∞ for all 1 ≤ i ≤ J and 1 ≤ j ≤ K;
Step 4: using the server configuration information of Step 2 and the definition of Step 3, iterate along the topological sequence and obtain the values of f_ij step by step by dynamic programming, until the completion time f_Ja of the last task is computed, and obtain the task assignment and scheduling plan;
Step 5: under the servers' actual capacity constraints, assign and schedule the application's tasks according to the result of Step 4 and compute the application's real completion time. Since the actual capacity is usually smaller than the virtual capacity, some tasks that, under the virtual capacity, could run as soon as their directed-acyclic-graph dependencies were satisfied will, under the actual capacity limit, incur additional queueing time and function-configuration time; the real completion time is therefore not the f_Ja obtained in Step 4, but f_Ja plus the queueing time and function-configuration time of those tasks. Here the queueing time is the time a task that is runnable under the timing constraints of the directed acyclic graph must wait until an idle instance (one not processing any task) appears on its assigned server; the function-configuration time is the time needed to configure the task's function on demand and replace the function of the idle instance.
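Step 5 can be illustrated by replaying the Step-4 plan under the servers' real capacities, adding the queueing time and on-demand configuration time described above. The sketch below uses simple illustrative policies that the description leaves open: tasks are replayed in topological order, a task waits for the instance slot that frees earliest, and when a full server must host a new function the function of that same slot is evicted; one dedicated function per task is assumed.

```python
def replay_under_real_capacity(topo, pred, plan, run_time, rate,
                               config_time, capacity, dummies=frozenset()):
    """Replay the Step-4 plan under the servers' real capacities (Step 5).

    plan:        {task: server} as reconstructed in Step 4
    config_time: {(task, server): r_ij} on-demand configuration time of the task's function
    capacity:    {server: C_i} real number of function slots on each server
    dummies:     the dummy SOURCE/SINK tasks (no capacity, no running time)
    Returns {task: finish time}, i.e. the plan's real timing including the extra
    queueing and configuration delays described above.
    """
    free = {s: {} for s in set(plan.values())}   # per server: {function: instance free time}
    finish = {}

    def transfer(w, a, b):
        return 0.0 if a == b or w == 0 else w / rate[(a, b)]

    for task in topo:
        s = plan[task]
        # DAG dependency constraint: data from every predecessor must have arrived.
        ready = max((finish[p] + transfer(w, plan[p], s)
                     for p, w in pred.get(task, {}).items()), default=0.0)
        if task in dummies:                      # dummy nodes: no slot, no running time
            finish[task] = ready
            continue
        func = task                              # one dedicated function per task (assumption)
        if func in free[s]:                      # already configured: queue on its instance
            start = max(ready, free[s][func])
        elif len(free[s]) < capacity[s]:         # a free slot exists: configure on demand
            start = ready + config_time[(task, s)]
        else:                                    # server full: evict the slot that frees earliest
            victim = min(free[s], key=free[s].get)
            start = max(ready, free[s].pop(victim)) + config_time[(task, s)]
        finish[task] = start + run_time[(task, s)]
        free[s][func] = finish[task]
    return finish
```

Feeding this function the plan produced by the Step-4 sketch, together with the same pred, run_time and rate plus the real capacities and the r_ij values, yields the real completion time as the finish time of the dummy sink.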
Embodiment
The method for scheduling tasks with dependencies, combined with on-demand function configuration, in edge computing provided by the embodiment of the present invention specifically includes the following steps:
Step 1: input the edge computing network set up in Fig. 2 and the directed acyclic graphs of the three applications in Fig. 3, and give the edge server s_a on which the application is initialized;
Step 2: use the information of Step 1 to perform the greedy initial configuration of the edge servers in the network. Ignoring the servers' own capacities, greedily ensure that every task is configured on the server on which its running time is smallest; then set the virtual capacity to the maximum capacity consumed by any server and set the capacity of every edge server to this virtual capacity. Under this virtual-capacity setting, for the edge servers that still have remaining capacity, greedily and without repetition configure the functions of the tasks with the smallest running times, until all the servers are full. This configuration also guarantees that every task has a server on which it can execute even when on-demand configuration is not allowed;
Step 3: topologically sort the directed acyclic graph of Step 1 to obtain the topological sequence of the tasks. Define the parameter f_ij as the earliest finish time of task v_i when placed on server s_j, and initialize f_ij := ∞ for all 1 ≤ i ≤ J and 1 ≤ j ≤ K;
Step 4: using the server configuration information of Step 2 and the definition of Step 3, iterate along the topological sequence and obtain the values of f_ij step by step by dynamic programming, until the completion time f_Ja of the last task is computed, and obtain the task assignment and scheduling plan;
Step 5: using the task assignment and scheduling plan obtained in Step 4, recompute the application completion time in the actual edge computing network given in Step 1; whenever a function is missing when a task is to run, configure it on demand, and the real completion time is obtained after adding the configuration time and the queueing time.
The edge computing network shown in Fig. 2 depicts an edge cloud system of three edge servers and one remote cloud, where some functions are already configured on the edge servers. An application is initialized at edge server s_1; the decision of the present invention places tasks 1 and 3 of the DAG on s_1, task 2 on s_2, and task 4 on s_3. Because the function of task 2 is not configured on s_2, when task 2 is to run, server s_2 must download that function from the cloud and replace the function of task 1 with it.
Fig. 4 shows the performance comparison of the method; on its horizontal axis, "chain query" corresponds to a query-chain application, "Video Processing" to a video-processing application, and "CDA" to a complex-data-analysis application. The algorithm ALG-ODM proposed by the invention reduces the completion time of these three applications by at least 2.8, 2.28 and 1.54 times, respectively.
The edge computing network configuration used in this embodiment is given in the following table:
Those of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments can be implemented by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910286347.XA CN110069341B (en) | 2019-04-10 | 2019-04-10 | Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910286347.XA CN110069341B (en) | 2019-04-10 | 2019-04-10 | Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110069341A true CN110069341A (en) | 2019-07-30 |
CN110069341B CN110069341B (en) | 2022-09-06 |
Family
ID=67367446
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910286347.XA Active CN110069341B (en) | 2019-04-10 | 2019-04-10 | Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110069341B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018095537A1 (en) * | 2016-11-25 | 2018-05-31 | Nokia Technologies Oy | Application provisioning to mobile edge |
CN109561148A (en) * | 2018-11-30 | 2019-04-02 | 湘潭大学 | Distributed task dispatching method in edge calculations network based on directed acyclic graph |
Non-Patent Citations (4)
Title |
---|
- H. TOPCUOGLU: "Performance-effective and low-complexity task scheduling for heterogeneous computing", IEEE Transactions on Parallel and Distributed Systems *
- HAISHENG TAN: "Online job dispatching and scheduling in edge-clouds", IEEE INFOCOM 2017 - IEEE Conference on Computer Communications *
- ZHAO LEI: "Research on edge server deployment and resource allocation for mobile edge computing", China Master's Theses Full-text Database, Information Science and Technology *
- ZOU YUNFENG et al.: "QoS-aware resource scheduling mechanism in edge computing environments", Electronic Technology & Software Engineering *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110650194A (en) * | 2019-09-23 | 2020-01-03 | 中国科学技术大学 | Task execution method based on edge computing in computer network |
CN110740194A (en) * | 2019-11-18 | 2020-01-31 | 南京航空航天大学 | Micro-service combination method based on cloud edge fusion and application |
CN110740194B (en) * | 2019-11-18 | 2020-11-20 | 南京航空航天大学 | Microservice composition method and application based on cloud-edge fusion |
CN113031522A (en) * | 2019-12-25 | 2021-06-25 | 沈阳高精数控智能技术股份有限公司 | Low-power-consumption scheduling method suitable for periodically dependent tasks of open type numerical control system |
CN111756812A (en) * | 2020-05-29 | 2020-10-09 | 华南理工大学 | An energy-aware edge-cloud collaborative dynamic offload scheduling method |
CN111756812B (en) * | 2020-05-29 | 2021-09-21 | 华南理工大学 | Energy consumption perception edge cloud cooperation dynamic unloading scheduling method |
CN111930487B (en) * | 2020-08-28 | 2024-05-24 | 北京百度网讯科技有限公司 | Job stream scheduling method and device, electronic equipment and storage medium |
CN111930487A (en) * | 2020-08-28 | 2020-11-13 | 北京百度网讯科技有限公司 | Job flow scheduling method and device, electronic equipment and storage medium |
WO2022236834A1 (en) * | 2021-05-14 | 2022-11-17 | Alipay (Hangzhou) Information Technology Co., Ltd. | Method and system for scheduling tasks |
CN113986553A (en) * | 2021-11-04 | 2022-01-28 | 中国电信股份有限公司 | Model caching method and device based on mobile edge calculation, medium and equipment |
CN113986553B (en) * | 2021-11-04 | 2024-12-24 | 中国电信股份有限公司 | Model caching method, device, medium, and equipment based on mobile edge computing |
CN114610502A (en) * | 2022-03-24 | 2022-06-10 | 阿里巴巴(中国)有限公司 | Application workload scheduling method and device |
CN115037956B (en) * | 2022-06-06 | 2023-03-21 | 天津大学 | Traffic scheduling method for cost optimization of edge server |
CN115037956A (en) * | 2022-06-06 | 2022-09-09 | 天津大学 | Traffic scheduling method for cost optimization of edge server |
CN116880994A (en) * | 2023-09-07 | 2023-10-13 | 之江实验室 | Multi-processor task scheduling method, device and equipment based on dynamic DAG |
CN116880994B (en) * | 2023-09-07 | 2023-12-12 | 之江实验室 | Multiprocessor task scheduling method, device and equipment based on dynamic DAG |
Also Published As
Publication number | Publication date |
---|---|
CN110069341B (en) | 2022-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110069341B (en) | Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing | |
CN113114758B (en) | Method and device for scheduling tasks for server-free edge computing | |
CN111541760B (en) | Complex task allocation method based on server-free mist computing system architecture | |
CN113672391B (en) | Parallel computing task scheduling method and system based on Kubernetes | |
CN113741999A (en) | Dependency-oriented task unloading method and device based on mobile edge calculation | |
CN117407160A (en) | A hybrid deployment method of online tasks and offline tasks in edge computing scenarios | |
WO2025015842A1 (en) | Computing power resource and network resource allocation method and apparatus, and computing power resource and network resource allocation device and system | |
CN118838691A (en) | Resource scheduling method, job processing method, scheduler, system and related equipment | |
CN109408230A (en) | Docker container dispositions method and system based on energy optimization | |
CN110048966B (en) | Coflow scheduling method for minimizing system overhead based on deadline | |
CN116010051A (en) | A federated learning multi-task scheduling method and device | |
Huang et al. | AutoVNF: An Automatic Resource Sharing Schema for VNF Requests. | |
CN118250287A (en) | Transaction processing method and device | |
CN116151137B (en) | Simulation system, method and device | |
CN118210609A (en) | A cloud computing scheduling method and system based on DQN model | |
CN117793665A (en) | A method and device for offloading computing tasks in Internet of Vehicles | |
CN117651044A (en) | Edge computing task scheduling method and device | |
CN116527605A (en) | Resource arrangement method, related device and storage medium | |
CN115840634B (en) | Business execution method, device, equipment and storage medium | |
COMPUTING | An efficient job sharing strategy for prioritized tasks in mobile cloud computing environment using acs-js algorithm | |
CN114866612B (en) | Electric power micro-service unloading method and device | |
CN118301666B (en) | QoE-aware mobile assisted edge service method, system and device | |
CN117707797B (en) | Task scheduling method and device based on distributed cloud platform and related equipment | |
Ge et al. | Efficient Computation Offloading with Energy Consumption Constraint for Multi-Cloud System | |
CN109086127B (en) | Resource scheduling method based on FSM control and framework system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |