CN114666339B - Edge offloading method, system and storage medium based on neutrosophic sets - Google Patents
- Publication number
- CN114666339B (application CN202210140585.1A)
- Authority
- CN
- China
- Prior art keywords
- edge server
- edge
- task
- candidate
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000003860 storage Methods 0.000 title claims abstract description 11
- 230000007935 neutral effect Effects 0.000 claims description 22
- 238000004422 calculation algorithm Methods 0.000 claims description 16
- 230000006870 function Effects 0.000 claims description 15
- 239000011159 matrix material Substances 0.000 claims description 13
- 238000004590 computer program Methods 0.000 claims description 9
- 230000002776 aggregation Effects 0.000 claims description 5
- 238000004220 aggregation Methods 0.000 claims description 5
- 238000005265 energy consumption Methods 0.000 description 14
- 238000012545 processing Methods 0.000 description 12
- 238000002474 experimental method Methods 0.000 description 10
- 230000004044 response Effects 0.000 description 10
- 230000000052 comparative effect Effects 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000004364 calculation method Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 238000004458 analytical method Methods 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 4
- 230000009286 beneficial effect Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000004088 simulation Methods 0.000 description 3
- 238000012733 comparative method Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 238000010835 comparative analysis Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1021—Server selection for load balancing based on client or server locations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/101—Server selection for load balancing based on network conditions
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Computer And Data Communications (AREA)
Abstract
The present invention relates to the technical field of edge computing offloading, and discloses an edge offloading method, system and storage medium based on neutrosophic sets. The method includes: computing a first cost required when a task is executed on the mobile device and a second cost required when the task is executed on a first edge server, the first edge server being the edge server closest to the mobile device among N edge servers; when the first cost is greater than the second cost, offloading the task to the first edge server for execution; when the first edge server has reached its load threshold, selecting a second edge server based on the context parameters of each of the N edge servers and offloading the task to the second edge server for execution. A multi-level edge computing offloading strategy is thus designed, which answers both whether a task needs to be offloaded and where it should be offloaded, reducing the cost of executing tasks.
Description
Technical Field
The present invention relates to the field of edge computing offloading technology, and in particular to an edge offloading method, system and storage medium based on neutrosophic sets.
Background Art
With the rapid development of wireless communication and Internet technologies, the era of the Internet of Everything has quietly arrived. It is estimated that by 2030 the number of mobile devices in China will reach 4 billion. At the same time, new applications such as VR, online gaming and autonomous driving have sprung up like mushrooms after rain; the volume and complexity of the data they generate have increased dramatically, and their latency requirements have grown increasingly strict, without which user quality of experience cannot be met. Although the CPUs in mobile devices keep growing more powerful, constraints on size, storage space and battery leave the devices themselves unable to cope with latency-sensitive and compute-intensive applications. Edge computing therefore came into being.
Edge computing (EC) is a new computing paradigm that deploys computing and storage resources, such as cloudlets (edge servers), micro data centers or fog nodes, at the network edge close to mobile devices and sensors, compensating for the devices' shortcomings in storage, computing performance and energy efficiency and thereby providing fast, low-latency services. Edge computing nevertheless still faces many technical challenges, such as computation offloading and mobility management. As one of the key technologies of edge computing, computation offloading refers to terminal devices handing part or all of their computing tasks over to edge servers for processing; it plays an important role in minimizing latency and guaranteeing quality of service. Mobility management concerns how a user in a dense and complex network coverage area selects suitable edge nodes, according to the user's movement trajectory, to provide service. A large body of academic research and algorithms addresses these problems, aiming to derive optimal offloading decisions that minimize energy consumption while satisfying execution-delay constraints. These studies nevertheless have shortcomings. Some algorithms assume by default that all tasks must be offloaded, which is clearly unreasonable, since some tasks may be cheaper to process inside the mobile device. When multiple edge nodes are near the mobile device, how to jointly consider the contextual factors of the edge nodes and the mobile device and fuse multiple attributes to select the optimal node has not been studied sufficiently. In some studies the offloading decision model and mobility management are either handled separately or the offloading decision is made purely on the basis of mobility, whereas mobility is in fact one important contextual factor among several that influence the decision. Existing edge offloading methods therefore offload at considerable cost.
Summary of the Invention
The present invention provides an edge offloading method, system and storage medium based on neutrosophic sets, to solve the problem that existing edge offloading methods offload at considerable cost.
To achieve the above object, the present invention is implemented by the following technical solutions:
In a first aspect, the present invention provides an edge offloading method based on neutrosophic sets, applied to a computation offloading structure comprising a cloud computing center, N edge servers and a mobile device, N being a positive integer. The method includes:
S1. Computing a first cost required when a task is executed on the mobile device and a second cost required when the task is executed on a first edge server, the first edge server being the edge server closest to the mobile device among the N edge servers;
S2. When the first cost is greater than the second cost, offloading the task to the first edge server for execution;
S3. When the first edge server has reached its load threshold, selecting a second edge server based on the context parameters of each candidate edge server among the N edge servers, the context parameters including user mobility, network conditions, each edge server's load, and CPU utilization;
S4. Offloading the task to the second edge server for execution.
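Steps S1 to S4 can be sketched as a decision cascade. The function and its boolean inputs below are illustrative names, not terms from the patent, and the cloud fallback corresponds to the optional timer mechanism described later in the text:

```python
def offload_destination(c_local, c_nearest, nearest_at_load_threshold,
                        best_server_found):
    """Multi-level offloading decision of S1-S4.

    Returns where the task runs: the mobile device, the nearest edge
    server, the context-selected second edge server, or (as a fallback)
    the cloud computing center.
    """
    if c_local <= c_nearest:
        return "local"      # S1/S2: local execution is no more costly
    if not nearest_at_load_threshold:
        return "nearest"    # S2: offload to the closest edge server
    if best_server_found:
        return "best"       # S3/S4: second edge server chosen via context
    return "cloud"          # optional fallback described later in the text
```

A call such as `offload_destination(3.0, 2.0, True, False)` walks the cascade down to the cloud fallback.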
In a second aspect, an embodiment of the present application provides an edge offloading system based on neutrosophic sets, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method of the first aspect are implemented.
Beneficial Effects:
In the edge offloading method based on neutrosophic sets provided by the present invention, when the cost required to execute a task on the mobile device is large, the task is first offloaded to the first edge server, the one closest to the mobile device; when the first edge server has reached its load threshold, a second edge server is selected based on the context parameters of each of the N edge servers and the task is offloaded to the second edge server for execution. A multi-level edge computing offloading strategy is thus designed, which answers both whether a task needs to be offloaded and where, reducing the cost of executing tasks. On this basis, the present application determines the second edge server according to user mobility, network conditions, each edge server's load and CPU utilization, fully accounting for the real-time movement of users across different cloudlet coverage areas; neutrosophic sets are used to handle the high variability of the context parameters over time, which not only saves energy and time but also reduces the number of failed tasks.
Brief Description of the Drawings
FIG. 1 is a flow chart of an edge offloading method based on neutrosophic sets according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the three-layer computation offloading structure according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the cloud model according to a preferred embodiment of the present invention;
FIG. 4 shows the change of user mobility over time in a preferred embodiment of the present invention;
FIG. 5 shows the average number of task failures of NSCO according to a preferred embodiment of the present invention and of the comparative methods;
FIG. 6 shows the average time consumed by NSCO according to a preferred embodiment of the present invention and by two comparative methods when processing different numbers of tasks;
FIG. 7 shows the average energy consumed by NSCO according to a preferred embodiment of the present invention and by two comparative methods when processing different numbers of tasks.
Detailed Description
The technical solution of the present invention is described below clearly and completely. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Unless otherwise defined, technical or scientific terms used in the present invention shall have the meaning commonly understood by a person of ordinary skill in the field to which the present invention belongs. Words such as "first", "second" and the like do not denote any order, quantity or importance, but merely distinguish different components. Likewise, words such as "a" or "an" do not denote a limitation of quantity but indicate the presence of at least one. Words such as "connect" or "connected" are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. "Up", "down", "left", "right" and the like indicate only relative positional relationships, which change accordingly when the absolute position of the described object changes.
Referring to FIG. 1, an embodiment of the present application provides an edge offloading method based on neutrosophic sets, applied to a computation offloading structure comprising a cloud computing center, N candidate edge servers and a mobile device, N being a positive integer. The method includes:
S1. Computing a first cost required when a task is executed on the mobile device and a second cost required when the task is executed on a first edge server, the first edge server being the candidate edge server closest to the mobile device among the N candidate edge servers.
The edge offloading method based on neutrosophic sets provided in this application applies to the three-layer computation offloading structure shown in FIG. 2. In this structure, the cloud computing center provides stable and powerful computing capacity, suitable for compute-intensive and delay-tolerant tasks. A cloudlet (edge server) is a server with computing and network resources deployed at the network edge; it offers computing power superior to that of mobile devices and lower latency than cloud computing. Four offloading destinations are considered in this application: the mobile device itself, the nearest cloudlet (first edge server), the best cloudlet (second edge server), and the cloud computing center, denoted in the following calculations by the subscripts m, nea, opt and cl respectively.
In the three-layer cloud-edge hybrid environment, the offloading problem is how to choose where each task is executed, and how to select the best cloudlet according to contextual factors, so that the overall completion time and energy consumption are minimized. Specifically, suppose there are n tasks in total; if W tasks are executed locally, x on the nearest cloudlet, y on the best cloudlet and Z on the cloud server, then:

W + x + y + Z = n (1)
Csite consists of two parts, the task completion time T and the consumed energy E:

Csite = α·Tsite + β·Esite, site ∈ {m, nea, opt, cl} (2)
α and β are weight factors adjusting the time and energy parts of the cost, and can be tuned according to the user's preference for these factors. The calculation of the completion time T and the energy consumption E is detailed below; their values are normalized before entering the cost calculation.
S2. When the first cost is greater than the second cost, the task is offloaded to the first edge server for execution.
It should be noted that not all tasks need to be offloaded; a mobile device can provide a good execution environment for some small tasks. This application uses the offloading cost to decide whether a task needs to be offloaded.
The delay and energy consumption of completing a task on the mobile device are calculated as follows:
Tm = I/Sm (3)

Em = Pmo·(I/Sm) (4)
where I is the task size, described by the number of instructions, Sm is the number of instructions the mobile device executes per unit time, and Pmo is the mobile device's power consumption per unit time while executing the task.
If the task request is served by the nearest cloudlet, the corresponding quantities are calculated as follows:
In this embodiment, the parameters involved are interpreted as follows: Dnea/Sp is the propagation time; Du/Bu and Dd/Bd are the uplink and downlink transmission times; I/Snea is the time the nearest cloudlet spends processing the task; Qnea is the queuing time at the nearest cloudlet; Du is the amount of uploaded data; Dd is the amount of returned data; Bu is the upload data rate; Bd is the download data rate; Pts is the mobile device's power consumption per unit time when sending data; Ptr is its power consumption per unit time when receiving data; Pmo is its power consumption per unit time when computing a task; Pi is its power consumption per unit time when idle (waiting for a result); I is the number of instructions of the task to be executed, the task size being expressed in instructions; Sm, Snea, Sopt and Scl are the numbers of instructions executed per unit time by the mobile device, the nearest cloudlet, the best cloudlet and the cloud computing center respectively; Sp is the propagation speed; Dcl is the distance between the nearest cloudlet and the cloud; Dnea is the distance between the nearest cloudlet and the mobile device; Dopt is the distance between the nearest cloudlet and the best cloudlet; Qnea and Qopt are the queuing times at the nearest and the best cloudlet respectively; and Ttimer is the configured timer duration.
Therefore, when Cm ≤ Cnea, processing the task locally on the mobile device is cheaper; if Cm > Cnea, offloading the task to the nearest cloudlet is more appropriate.
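This decision rule can be sketched as follows. Formulas (3) and (4) give the local side exactly; the nearest-cloudlet expressions Tnea and Enea are not reproduced in the text above, so their composition from the glossary parameters (propagation, link, processing and queuing terms) is an assumption, and the normalization step described after formula (2) is omitted for brevity:

```python
def local_cost(I, S_m, P_mo, alpha=0.5, beta=0.5):
    """Formulas (3) and (4): local execution time and energy, combined as in (2)."""
    T_m = I / S_m           # (3) execution time on the device
    E_m = P_mo * T_m        # (4) energy consumed on the device
    return alpha * T_m + beta * E_m

def nearest_cost(I, S_nea, D_nea, S_p, D_u, B_u, D_d, B_d, Q_nea,
                 P_ts, P_tr, P_i, alpha=0.5, beta=0.5):
    """Assumed composition of Tnea/Enea from the parameters defined above:
    propagation + uplink + downlink + processing + queuing for time; send,
    receive and idle power for energy. Not the patent's exact formulas."""
    T_nea = D_nea / S_p + D_u / B_u + D_d / B_d + I / S_nea + Q_nea
    E_nea = P_ts * (D_u / B_u) + P_tr * (D_d / B_d) + P_i * (I / S_nea + Q_nea)
    return alpha * T_nea + beta * E_nea

def should_offload(c_m, c_nea):
    """Offload to the nearest cloudlet only when local execution costs more."""
    return c_m > c_nea
```

With equal weights, `local_cost(1e9, 1e8, 2.0)` evaluates to 15.0 (ten seconds of compute plus twenty units of energy, averaged).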
S3. When the first edge server has reached its load threshold, a second edge server is selected based on the context parameters of each of the N edge servers, the context parameters including user mobility, network conditions, each edge server's load, and CPU utilization.
S4. The task is offloaded to the second edge server for execution.
In the above edge offloading method based on neutrosophic sets, when the cost required to execute the task on the mobile device is large, the task is first offloaded to the first edge server, the one closest to the mobile device; when the first edge server has reached its load threshold, a second edge server is selected based on the context parameters of each of the N edge servers and the task is offloaded to it for execution. A multi-level edge computing offloading strategy is thus designed, answering both whether a task needs to be offloaded and where, and reducing the cost of executing tasks. On this basis, the present application determines the second edge server according to each candidate edge server's user mobility, network conditions, server load and CPU utilization, fully accounting for the real-time movement of users across different cloudlet coverage areas.
Optionally, S3 specifically includes:
S31. Taking the edge servers among the N edge servers other than the first edge server as candidate edge servers, and constructing a time-varying context matrix for each candidate edge server from its context parameters over the most recent q moments;
S32. Converting each candidate edge server's time-varying context matrix into a single-valued neutrosophic context matrix using the backward cloud generator algorithm;
S33. Aggregating the single-valued neutrosophic context matrix into a single-valued neutrosophic number for each candidate edge server using the single-valued neutrosophic weighted average aggregation operator;
S34. Computing each candidate edge server's score using the score function of single-valued neutrosophic numbers, and taking the candidate edge server with the highest score as the second edge server.
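A minimal sketch of steps S32 to S34 follows. The patent does not spell out in this excerpt how cloud-model parameters map to neutrosophic triples, so the backward cloud generator below is the standard (Ex, En, He) estimator, and the SVNWA aggregation and score function use the commonly cited definitions for single-valued neutrosophic numbers (T, I, F); all function names are illustrative:

```python
import math

def backward_cloud(samples):
    """S32 (sketch): backward cloud generator estimating (Ex, En, He)
    from a time series of normalized context values."""
    n = len(samples)
    ex = sum(samples) / n                                  # expectation
    mean_abs = sum(abs(x - ex) for x in samples) / n
    en = math.sqrt(math.pi / 2.0) * mean_abs               # entropy
    var = sum((x - ex) ** 2 for x in samples) / n
    he = math.sqrt(abs(var - en ** 2))                     # hyper-entropy
    return ex, en, he

def svnwa(svnns, weights):
    """S33 (sketch): single-valued neutrosophic weighted average.
    svnns is a list of (T, I, F) triples; weights sum to 1."""
    t, i, f = 1.0, 1.0, 1.0
    for (T, I, F), w in zip(svnns, weights):
        t *= (1.0 - T) ** w
        i *= I ** w
        f *= F ** w
    return (1.0 - t, i, f)

def score(svnn):
    """S34 (sketch): a common score function for an SVNN."""
    T, I, F = svnn
    return (2.0 + T - I - F) / 3.0

def best_cloudlet(candidates, weights):
    """candidates maps a cloudlet name to its per-attribute SVNN triples;
    the highest-scoring aggregate wins."""
    return max(candidates, key=lambda c: score(svnwa(candidates[c], weights)))
```

For example, a candidate whose four attributes all aggregate to (0.8, 0.1, 0.1) outscores one at (0.5, 0.3, 0.3) and is selected as the second edge server.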
In this optional implementation, neutrosophic sets are used to handle the high variability of the context parameters over time, which not only saves energy and time but also reduces the number of failed tasks.
In some scenarios the nearest cloudlet has reached its load threshold and cannot serve newly arriving tasks. The nearest cloudlet then acts as a proxy, broadcasting requests for help to other nearby cloudlets. Specifically, in one example, the steps for selecting the second edge server (the best cloudlet) are as follows.
The first edge server broadcasts a task request message, searches for other nearby cloudlets, and sets a timer Ttimer, with Ttimer << Tnea. When nearby cloudlets can satisfy the task request, they report their CPU utilization, current load and network connectivity with other cloudlets back to the proxy; the proxy cloudlet selects the best cloudlet from this candidate information and forwards the task to that cloudlet for execution. In this case, the delay Topt, the energy consumption Eopt and the cost are respectively calculated as follows:
Optionally, the above method further includes: setting a preset time threshold, and offloading the task to the cloud computing center if the second edge server has not been successfully selected within the preset time threshold.
In this optional implementation, if no cloudlet has responded to the task request by the time the timer Ttimer reaches 0, no suitable cloudlet is nearby, and the task is offloaded to the cloud computing center, where a cloud server provides the service. This guarantees that the task can be offloaded from the mobile device whether or not a second edge server is successfully selected, relieving pressure on the device. In this case, the delay Topt, the energy consumption Eopt and the cost are calculated as follows:
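The timer-based fallback described above can be sketched as follows; the function and field names are illustrative, not from the patent:

```python
def select_destination(responses, timer_expired):
    """Proxy-side selection: responses maps a candidate cloudlet name to
    its score (computed from context parameters as in S31-S34). If the
    timer fired with no responses, fall back to the cloud computing center."""
    if timer_expired or not responses:
        return "cloud"
    return max(responses, key=responses.get)
```

For example, with two responders the proxy forwards the task to the higher-scoring one, while an empty response set or an expired timer routes the task to the cloud.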
It should be noted that the network exhibits different degrees of congestion in different periods, the server resources available change dynamically over time, and terminal devices, because of their mobility, occupy different positions at different moments; that is, the network state, server resources and terminal device positions are all updated dynamically over time. The contextual factors that influence the offloading decision are therefore time-varying.
At different moments and in different positions, tasks can be offloaded to different cloudlets. These time-varying contextual factors play an important role in offloading decisions, yet existing research rarely considers their time-varying dynamics. This section uses single-valued neutrosophic sets to model the time variation of four context parameters: user mobility, network conditions, server load, and CPU utilization.
When the current user is within the service range of p candidate cloudlets, the values of the four contexts of the p candidate cloudlets over q moments are collected so that the best cloudlet can be selected. This application characterizes mobility by estimating the user's dwell time in each cloudlet, recorded for i ∈ {1, 2, ..., p} and j ∈ {1, 2, ..., q}. The other three context parameters, namely the candidate cloudlet's load, its CPU utilization, and the network conditions between it and the proxy cloudlet, are obtained through the corresponding APIs. Since mobility is a benefit-type indicator (the larger the better) while the other three are cost-type indicators, the cost-type indicators are first positivized via y' = max_y − y to avoid mixing scales, where max_y is the maximum value of indicator y. Then, so that features of different dimensions lie on the same numerical scale and high-variance features do not dominate, the data are min-max normalized via y = (y − min_y)/(max_y − min_y), where min_y and max_y are the minimum and maximum values of indicator y.

Therefore, for the p candidate cloudlets within the user's service range, the normalized context attribute data over the last q moments can be expressed as a p × q matrix whose element (i, j) is the normalized context attribute data of cloudlet_i at moment j, a four-tuple, for i ∈ {1, 2, ..., p} and j ∈ {1, 2, ..., q}.
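The positivization and min-max normalization steps above can be sketched as follows (a minimal sketch; the column order and the benefit/cost split are illustrative assumptions):

```python
def normalize_context(raw, benefit_cols=(0,)):
    """Positivize cost-type columns, then min-max normalize each column.

    raw: list of q rows, each a list of 4 context samples; columns are
    assumed to be [mobility, network delay, load, CPU utilization].
    Columns listed in benefit_cols are benefit-type (larger is better);
    the rest are cost-type and are first converted via y' = max(y) - y.
    """
    cols = list(zip(*raw))
    out_cols = []
    for idx, col in enumerate(cols):
        y = list(col)
        if idx not in benefit_cols:          # cost-type: positivize
            top = max(y)
            y = [top - v for v in y]
        lo, hi = min(y), max(y)
        span = hi - lo
        # Min-max normalization; constant columns map to 0.0.
        out_cols.append([(v - lo) / span if span else 0.0 for v in y])
    return [list(row) for row in zip(*out_cols)]
```

After this step every attribute lies in [0, 1] with "larger is better" semantics, which is what the later neutrosophic modeling assumes.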
The time-varying context information is uncertain. The neutrosophic set (NS) has independent truth-membership, indeterminacy-membership, and falsity-membership functions and can therefore represent well the inconsistent, uncertain information produced by time variation. However, it is defined on the non-standard unit interval ]0-, 1+[.
First, the single-valued neutrosophic set (SVNS) is defined. Let X be a given universe of discourse. A single-valued neutrosophic set A on X is represented by a truth-membership function T_A(x), an indeterminacy-membership function I_A(x), and a falsity-membership function F_A(x):
A = {<x, T_A(x), I_A(x), F_A(x)> | x ∈ X}
where T_A(x), I_A(x), F_A(x) ∈ [0, 1] and 0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3. An element of a single-valued neutrosophic set A on the universe X is called a single-valued neutrosophic number (SVNN), abbreviated <T_A, I_A, F_A>.
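As a minimal illustration of the SVNN constraints just defined (the class and field names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SVNN:
    """Single-valued neutrosophic number <T, I, F>."""
    t: float  # truth-membership
    i: float  # indeterminacy-membership
    f: float  # falsity-membership

    def __post_init__(self):
        # Each degree lies in [0, 1] and their sum lies in [0, 3].
        for v in (self.t, self.i, self.f):
            if not 0.0 <= v <= 1.0:
                raise ValueError("membership degrees must lie in [0, 1]")
        if not 0.0 <= self.t + self.i + self.f <= 3.0:
            raise ValueError("T + I + F must lie in [0, 3]")
```

Note that, unlike an intuitionistic fuzzy number, the three degrees are independent: their sum may exceed 1, which is what lets an SVNN express inconsistent evidence.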
To build an SVNS model for the time-varying context attributes of each cloudlet, and to support the subsequent neutrosophic multi-attribute decision-making, three membership functions must be constructed from the context sequence data of cloudlet_i over q moments. To this end, this application adopts the Cloud Model (CM), a cognitive model based on probability statistics and fuzzy set theory that realizes bidirectional conversion between qualitative concepts and quantitative data.
It is worth explaining that a cloud consists of cloud droplets: one droplet is a single realization of a qualitative concept, and a sufficient number of droplets together express the cloud. In this application, the value of a time-varying context of cloudlet_i at moment j is regarded as one cloud droplet, and its data sequence over q moments characterizes the cloud model of that context.
As shown in Figure 3, the digital features of the cloud model are characterized by three values: the expectation Ex, the entropy En, and the hyper-entropy He. The expectation Ex is the expectation of the cloud droplets' distribution over the universe of discourse and represents the basic certainty of the qualitative concept. The entropy En measures the uncertainty of the qualitative concept, reflecting the range of values the concept can accept, shown as the span of the cloud in Figure 3. The hyper-entropy He is the entropy of the entropy, that is, the uncertainty of En; it expresses the degree of dispersion of the cloud model, shown as the thickness of the cloud in Figure 3. Therefore, taking the expectation Ex as the truth-membership T_A of the single-valued neutrosophic number, the entropy En as the indeterminacy-membership I_A, and the hyper-entropy He as the falsity-membership F_A completes the conversion to an SVNS. That is, <T_A, I_A, F_A> = <Ex, En, He>.
Further, this application uses the backward cloud generator algorithm from cloud model theory to complete the conversion:
Specifically, the input of the backward cloud generator algorithm is a time-varying context sequence x_1, ..., x_q of cloudlet_i, i ∈ {1, 2, ..., p}, and its output is the digital feature triple of the corresponding context cloud model: ① Ex = (1/q)·Σ_j x_j; ② En = √(π/2) · (1/q)·Σ_j |x_j − Ex|; ③ He = √(|S² − En²|), where S² is the sample variance of the sequence.
In one example, the above process is explained using the load data of cloudlet_1 over 10 moments. After positivization and normalization, the time-varying load data of cloudlet_1 are (0.98, 1.00, 0.54, 0.00, 0.60, 0.51, 0.26, 0.82, 0.80, 0.50).
Feeding these data into the backward cloud generator yields the digital features Ex, En, and He of the corresponding cloud model; the load of cloudlet_1 is then expressed as the single-valued neutrosophic number <Ex, En, He>.
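A sketch of the backward cloud generator applied to this load sequence (assuming the standard certainty-free backward cloud algorithm; the exact numeric outputs depend on that assumption, e.g. on whether the sample or population variance is used):

```python
import math

def backward_cloud(samples):
    """Backward cloud generator: data sequence -> (Ex, En, He).

    Standard certainty-free form: Ex is the sample mean, En is derived
    from the mean absolute deviation, and He from the gap between the
    sample variance and En squared.
    """
    q = len(samples)
    ex = sum(samples) / q
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / q
    s2 = sum((x - ex) ** 2 for x in samples) / (q - 1)  # sample variance
    he = math.sqrt(abs(s2 - en ** 2))
    return ex, en, he

# Load sequence of cloudlet_1 from the example above.
load = [0.98, 1.00, 0.54, 0.00, 0.60, 0.51, 0.26, 0.82, 0.80, 0.50]
ex, en, he = backward_cloud(load)
# Under these assumptions, cloudlet_1's load SVNN is <Ex, En, He>.
```

The same routine is reused for every context attribute, turning each q-moment sequence into one SVNN.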
So far, given the time-varying data sequences of cloudlet_i's network conditions, load, and CPU utilization, applying the cloud model's backward generator algorithm for neutrosophic conversion yields the single-valued neutrosophic context models of cloudlet_i for each of these attributes.
To establish a single-valued neutrosophic mobility model, mobility must first be measured.
Referring to Figure 4, which illustrates how user mobility changes over time: suppose the user starts moving along direction α, changes direction to α1 at some moment, and then moves along α2 at the next moment. This application measures mobility by predicting the mobile device's dwell time within a cloudlet's coverage, providing a reference for cloudlet selection.
Assume that at a certain moment the user lies in the service areas of two cloudlets simultaneously; take cloudlet2 as an example. Let R denote the cloudlet's service radius, S the distance the user travels along the movement direction before leaving the cloudlet's coverage, and v the user's moving speed; the movement direction and speed v can be obtained via GPS. D is the straight-line distance between the user's current position and the cloudlet, and the direction vector points from the user's current position to the cloudlet. With the user at (A, B) and the cloudlet at (a, b), D = √((A − a)² + (B − b)²). The user's dwell time in the cloudlet can then be calculated as T_stay = S/v.
The distance S can be obtained trigonometrically as S = D·cos θ + √(R² − D²·sin² θ), where θ is the angle between the user's movement direction and the direction vector from the user to the cloudlet.
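Under the geometry described above, the dwell time T_stay = S/v can be sketched as follows (a reconstruction from the stated quantities; function and parameter names are illustrative):

```python
import math

def dwell_time(user, cloudlet, velocity, radius):
    """Predicted time the user stays inside a cloudlet's coverage circle.

    user, cloudlet: (x, y) positions; velocity: (vx, vy) from GPS;
    radius: service radius R. Returns 0.0 if the user is outside coverage.
    """
    dx, dy = cloudlet[0] - user[0], cloudlet[1] - user[1]
    d = math.hypot(dx, dy)                 # straight-line distance D
    if d > radius:
        return 0.0
    v = math.hypot(*velocity)
    if v == 0:
        return math.inf                    # stationary user never exits
    # cos/sin of the angle between the movement direction and the
    # user-to-cloudlet direction vector.
    cos_t = (velocity[0] * dx + velocity[1] * dy) / (v * d) if d else 1.0
    sin_sq = 1.0 - cos_t ** 2
    s = d * cos_t + math.sqrt(radius ** 2 - d ** 2 * sin_sq)
    return s / v
```

For example, a user 30 m from a cloudlet with R = 50 m, moving straight toward it at 10 m/s, traverses 80 m of coverage and dwells 8 s; moving straight away, only 20 m and 2 s.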
Therefore, the user's mobility data sequence over the q moments in cloudlet_i is transformed with the backward cloud generator algorithm, establishing the single-valued neutrosophic mobility model.
In summary, the single-valued neutrosophic context model of the candidates forms a matrix, i ∈ {1, 2, ..., p}, whose rows represent the p candidate cloudlets and whose columns are the SVNS representations of the four contextual factors: mobility, network conditions, load, and CPU utilization.
After obtaining the single-valued neutrosophic context model of cloudlet_i, a decision is made over the four relevant contextual factors of the candidate cloudlets to select the best one; this is evidently a multi-attribute decision-making problem. In such problems the attributes of each alternative are often complex, and different context attributes contribute to the decision to different degrees, so they should be given different weights. Since the attribute weights are completely unknown, the entropy notion from fuzzy theory is suited to deriving them. This application therefore uses neutrosophic entropy theory to compute the optimal weight of each context attribute in the SVNS environment.
Let A be a single-valued neutrosophic set on the universe X = {x_1, x_2, ..., x_n}; the neutrosophic entropy E(A) is then defined in terms of A and its complement A^c. In the context matrix, each element is the SVNN of one context of cloudlet_i, and each column represents the SVNS of one context attribute. The weight corresponding to each attribute is therefore computed as follows:
where ctx ∈ {M, D, L, C} indexes the four context attributes.
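A sketch of the entropy-based weighting, assuming the widely used SVNS entropy E(A) = 1 − (1/n)·Σ_i (T(x_i) + F(x_i))·|I(x_i) − I_c(x_i)| with I_c = 1 − I, and weights w_j = (1 − E_j)/Σ_k (1 − E_k); both formulas are assumptions, since the patent's own expressions are not reproduced in this extraction:

```python
def svns_entropy(column):
    """Entropy of one attribute's SVNS: list of (T, I, F) over candidates."""
    n = len(column)
    total = sum((t + f) * abs(i - (1.0 - i)) for t, i, f in column)
    return 1.0 - total / n

def attribute_weights(matrix):
    """matrix[i][j] = (T, I, F) of cloudlet i, attribute j.

    Attributes with lower entropy (more discriminating information)
    receive higher weight; weights sum to 1.
    """
    cols = list(zip(*matrix))
    ents = [svns_entropy(col) for col in cols]
    gaps = [1.0 - e for e in ents]
    total = sum(gaps)
    return [g / total for g in gaps]
```

In the extreme, a crisp attribute (I = 0 for every candidate) has entropy 0 and takes all the weight, while a maximally uncertain one (I = 0.5 everywhere) has entropy 1 and weight 0.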
Finally, formula (13), the single-valued neutrosophic set weighted average (SVNSWA) aggregation operator, aggregates the SVNSs of cloudlet_i's context attributes into the SVNN of candidate cloudlet_i, denoted SVNN_i = {T_i, I_i, F_i}, where ctx ∈ {M, D, L, C}.
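Assuming formula (13) takes the standard SVNS weighted-average closed form T = 1 − Π_j (1 − T_j)^{w_j}, I = Π_j I_j^{w_j}, F = Π_j F_j^{w_j} (an assumption, since the formula itself is not reproduced in this extraction), a sketch:

```python
def svnswa(svnns, weights):
    """Aggregate per-attribute SVNNs (T, I, F) into one SVNN."""
    t_prod = 1.0
    i_prod = 1.0
    f_prod = 1.0
    for (tj, ij, fj), w in zip(svnns, weights):
        t_prod *= (1.0 - tj) ** w   # truth aggregates toward 1
        i_prod *= ij ** w           # indeterminacy aggregates toward 0
        f_prod *= fj ** w           # falsity aggregates toward 0
    return (1.0 - t_prod, i_prod, f_prod)
```

With a single attribute of weight 1 the operator is the identity, and raising any attribute's truth-membership raises the aggregated truth-membership, as the decision procedure requires.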
After obtaining SVNN_i, formula (14), the score function of an SVNN, computes the score of each candidate cloudlet. The score function is an important indicator for ranking SVNNs: the larger the truth-membership T, the larger the SVNN; the smaller the indeterminacy I, the larger the SVNN; likewise, the smaller the falsity-membership F, the larger the SVNN. After the score list of the candidate cloudlets is obtained, the one with the highest score is the best cloudlet.
score(SVNN_i) = (T_i + 1 − I_i + 1 − F_i)/3    (14)
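Formula (14) and the resulting ranking can be sketched as follows (the candidate SVNN values here are illustrative, not the patent's Table 2 data):

```python
def score(svnn):
    """Score function of an SVNN <T, I, F>, per formula (14)."""
    t, i, f = svnn
    return (t + 1 - i + 1 - f) / 3

# Illustrative aggregated SVNNs for a few candidates.
candidates = {
    "cloudlet15": (0.9, 0.1, 0.1),
    "cloudlet5": (0.8, 0.2, 0.2),
    "cloudlet14": (0.3, 0.6, 0.7),
}
best = max(candidates, key=lambda name: score(candidates[name]))
```

High T and low I, F push the score toward 1, so the candidate with the highest score is selected as the best cloudlet.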
In summary, the time-varying context-aware edge offloading method based on the neutrosophic set (NSCO) of this application can be described by the following algorithm:
{Input: task t = <I, D_u>, where I is the number of instructions of the task to be executed and D_u is the amount of data uploaded when offloading.
Output: the execution location of task t.
1. Use formulas (2)(3)(4)(5)(6) to compute the costs C_m and C_nea of executing the task locally and on the nearest cloudlet.
2. if C_m ≤ C_nea:
3.   execute locally;
4. else:
5.   the nearest cloudlet receives the task;
6. if the nearest cloudlet satisfies the task requirements:
7.   execute the task on the nearest cloudlet;
8. else:
9.   the nearest cloudlet acts as a proxy and broadcasts the task request message to other available cloudlets nearby;
10. if, within time T_timer, other cloudlets can satisfy the task's offloading request:
11.   obtain the user's mobility, network conditions, load, and CPU utilization over the last q moments for all candidate cloudlets, forming the time-varying context matrix of each cloudlet_i;
12.   use the backward cloud generator algorithm to transform it into the single-valued neutrosophic context matrix;
13.   use formula (13) to aggregate it into the SVNN of candidate cloudlet_i;
14.   compare the SVNN_i using formula (14); the highest-scoring candidate is the best cloudlet, and the task is transferred from the nearest cloudlet to the best cloudlet.
15. else:
16.   no nearby cloudlet can execute the task within T_timer, so offload the task to the cloud.
}
The main flow of the algorithm determines where the task executes. Steps 1-7 decide, based on the execution costs C_m and C_nea, whether to offload the task to the nearest cloudlet. Steps 8-14 handle the case where the nearest cloudlet cannot satisfy the task request: the single-valued neutrosophic set characterizes the time-varying context information of the candidate cloudlets, and that information is used to select the best cloudlet to execute the task. Steps 15-16 offload the task to the cloud computing center when no suitable candidate cloudlet is found within T_timer. The time complexity of the algorithm is O(n): the SVNS weighted-average aggregation operator runs in O(n), comparing candidate cloudlets with the SVNN score function runs in O(n), these two stages run in parallel, and the remaining steps are O(1), giving an overall complexity of O(n).
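The main decision flow of steps 1-16 can be sketched as follows (a structural sketch only; the cost formulas, candidate discovery, and the neutrosophic ranking pipeline are stubbed as callables supplied by the caller):

```python
def nsco_decide(task, local_cost, nearest_cost, nearest_ok,
                find_candidates, rank_candidates):
    """Return where the task should run: 'local', the nearest cloudlet,
    a candidate cloudlet id, or 'cloud'.

    local_cost / nearest_cost: task -> cost (formulas (2)-(6));
    nearest_ok: task -> bool, whether the nearest cloudlet can serve it;
    find_candidates: broadcast within T_timer, returns candidate ids;
    rank_candidates: neutrosophic pipeline, returns the best candidate id.
    """
    if local_cost(task) <= nearest_cost(task):   # steps 1-3
        return "local"
    if nearest_ok(task):                         # steps 5-7
        return "nearest_cloudlet"
    candidates = find_candidates(task)           # steps 9-10
    if candidates:                               # steps 11-14
        return rank_candidates(candidates)
    return "cloud"                               # steps 15-16
```

Each branch mirrors one tier of the offloading hierarchy: local device, nearest cloudlet, best neighboring cloudlet, and finally the cloud computing center.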
Next, the performance of the neutrosophic-set-based edge offloading method provided by this application is verified through experiments.
(1) Datasets: the experiments use the Stanford Drone dataset and the Alibaba cluster trace. The Stanford Drone dataset continuously records pedestrian trajectories in an area of the Stanford University campus and contains users' exact position information. The Alibaba cluster trace provides cluster traces from actual production, recording data for 4000 servers over 8 days; the experiments extract the servers' CPU utilization and load. Network conditions are measured by communication delay, computed from the simulation parameters in (2).
(2) Simulation parameters: an Advantech EIS-D210 (3846 MIPS, 1.5 GHz, 4 GB RAM) serves as the cloudlet and a Dell PowerEdge (31790 MIPS, 3.0 GHz, 768 GB RAM) as the cloud server. The cloudlet service range is set to 0-50 m, the bandwidth between user and cloudlet to 100 Mbps, and the bandwidth between user and cloud server to 1 Gbps. Task sizes are drawn from a uniform distribution with a mean of 4600 million instructions; the data transferred per task is distributed the same way, with a mean of 750 kilobytes.
(3) Comparative experiments: the proposed NSCO algorithm is compared with "Application-aware cloudlet selection for computation offloading in multi-cloudlet environment" (appAware) and "mCloud: A Context-Aware Offloading Framework for Heterogeneous Mobile Cloud" (mCloud).
appAware: different cloudlets can execute different types of applications; tasks are assigned to cloudlets according to the requested application type, balancing workload, reducing system latency, and lowering power consumption.
mCloud: considers changes in the mobile device's context (network conditions) to assist in selecting wireless media and cloud resources, yielding better offloading decisions with better performance and lower battery consumption.
(4) Evaluation metrics: three criteria are used: average number of task failures, response time, and energy consumption. A task is counted as failed when the user's dwell time in a cloudlet is shorter than the completion time of the task offloaded to that cloudlet, because in that case the user has left the cloudlet's service area and cannot receive the result.
In a case study, 20 cloudlets were defined to simulate task offloading, and the raw context-parameter data over 10 moments were converted into neutrosophic sets, with the results shown in Table 1. Weighted-average aggregation via formula (13) then yields the single-valued neutrosophic number of each candidate cloudlet, as shown in Table 2. Finally, formula (14) produces a score-based ranking of the candidate cloudlets:
cloudlet15 > cloudlet5 > cloudlet13 > cloudlet7 > cloudlet3 > ... > cloudlet14.
cloudlet15 is the best candidate because it exhibits high truth-membership and low falsity-membership and indeterminacy on every context parameter, while the worst candidate, cloudlet14, shows the opposite.
Table 1. Neutrosophic numbers of the context parameters
Table 2. Aggregated single-valued neutrosophic numbers of the candidate cloudlets
Then, the comparative analysis proceeds as follows:
(1) Analysis of the average number of task failures
The average number of task failures was measured with 25, 50, 75, and 100 tasks. Figure 5 shows the average failure counts of the proposed neutrosophic-set-based time-varying context-aware edge offloading method (NSCO) and of the comparison methods.
There are two reasons for these advantages. (1) NSCO accounts for high user mobility and captures the user's movement trend through the dwell time within a cloudlet's range; predicting dwell time helps filter out cloudlets with little available time, avoiding potential offloading failures. (2) NSCO infers the future from historical data: the most suitable cloudlet over the recent past should also be the best in the near future. When the nearest cloudlet cannot satisfy the task and a surrounding best cloudlet must be found, NSCO uses contextual historical data together with the cloud model, neutrosophic aggregation, and related algorithms to select the current optimum, which plays an important role in reducing the number of offloading failures.
In the comparison methods appAware and mCloud, neither temporal dynamics nor user mobility is considered when selecting a cloudlet, so more tasks fail. Moreover, appAware only checks whether a cloudlet of a given type is dedicated to the requested task type; if none exists, it offloads to the cloud center, which lengthens response time, and overlong response times often cause task failures. mCloud considers only the network-interface context and decides where a task runs purely by interface availability, so at some moments a large number of tasks are processed on the local device; since local processing capacity is limited, processing failures occur easily.
(2) Analysis of the time and energy consumed by task offloading
Figures 6 and 7 show the average elapsed time and energy consumption of NSCO and the two comparison methods across different task counts. The figures show that with the proposed scheme, average response time is reduced by about 28.9% versus appAware and 54.7% versus mCloud, and average energy consumption by about 33.2% and 56.8%, respectively.
Beyond the task failures caused by excessive response time analyzed in (1), which increase the system's response time, appAware offloads to the cloud center when no cloudlet dedicated to a task type exists; transferring tasks to the cloud increases propagation delay, so response time and energy consumption suffer. In mCloud, an unreasonable task allocation ratio leaves too many tasks processed locally, increasing response time and energy consumption.
In the proposed scheme, the nearest cloudlet is selected first. If it cannot satisfy the task's offloading requirements, it acts as a proxy server and selects the best cloudlet among its neighbors to process the task; if no nearby cloudlet responds, the task is offloaded to the cloud. Moreover, when selecting the optimal cloudlet, the scheme takes a global view of four contextual factors: user mobility, network conditions, cloudlet CPU utilization, and cloudlet load, which correspond respectively to the propagation, communication, processing, and queuing components of response time. Offloading a task under this scheme therefore costs little time and energy, improving the overall user experience.
In summary, this work studies task offloading in edge computing and proposes a computation offloading strategy that considers multiple contextual factors, including user mobility. When the nearest cloudlet cannot process the offloaded task, the problem is cast as a multi-attribute decision: an optimal nearby cloudlet is selected to process the task, with neutrosophic sets handling the highly dynamic variation of context data over time. Simulation results show that the proposed strategy reduces delay by 28.9%-54.7% and power consumption by 33.2%-56.8%.
In this embodiment, the edge server may also refer to an edge cloud.
The present application also provides a neutrosophic-set-based edge offloading system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the above method are implemented. This system can implement each embodiment of the above neutrosophic-set-based edge offloading method and achieves the same beneficial effects, which are not repeated here.
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps described above. This computer-readable storage medium can implement each embodiment of the above neutrosophic-set-based edge offloading method and achieves the same beneficial effects, which are not repeated here.
The preferred embodiments of the present invention are described in detail above. It should be understood that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experiments according to the concept of the present invention shall fall within the scope of protection determined by the claims.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210140585.1A CN114666339B (en) | 2022-02-16 | 2022-02-16 | Edge offloading method and system based on neutrosophic set, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114666339A CN114666339A (en) | 2022-06-24 |
CN114666339B true CN114666339B (en) | 2023-04-11 |
Family
ID=82027226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210140585.1A Active CN114666339B (en) | 2022-02-16 | 2022-02-16 | Edge unloading method and system based on noose set and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114666339B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116112865B (en) * | 2023-01-17 | 2023-10-03 | 广州爱浦路网络技术有限公司 | Edge application server selection method based on user equipment position, computer device and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111104211A (en) * | 2019-12-05 | 2020-05-05 | 山东师范大学 | Method, system, device and medium for computing offloading based on task dependency |
CN112887435A (en) * | 2021-04-13 | 2021-06-01 | 中南大学 | Method for improving task unloading cooperation rate in edge calculation |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10771569B1 (en) * | 2019-12-13 | 2020-09-08 | Industrial Technology Research Institute | Network communication control method of multiple edge clouds and edge computing system |
CN111274037B (en) * | 2020-01-21 | 2023-04-28 | 中南大学 | An edge computing task offloading method and system |
JP7677990B2 (en) * | 2020-03-23 | 2025-05-15 | アップル インコーポレイテッド | Dynamic service discovery and offloading framework for edge computing based cellular network systems |
CN111835849B (en) * | 2020-07-13 | 2021-12-07 | 中国联合网络通信集团有限公司 | Method and device for enhancing service capability of access network |
US11427215B2 (en) * | 2020-07-31 | 2022-08-30 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for generating a task offloading strategy for a vehicular edge-computing environment |
CN112306696B (en) * | 2020-11-26 | 2023-05-26 | 湖南大学 | Energy-saving and efficient edge computing task unloading method and system |
CN112600895B (en) * | 2020-12-07 | 2023-04-21 | 中国科学院深圳先进技术研究院 | Service scheduling method, system, terminal and storage medium for mobile edge computing |
-
2022
- 2022-02-16 CN CN202210140585.1A patent/CN114666339B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111104211A (en) * | 2019-12-05 | 2020-05-05 | 山东师范大学 | Method, system, device and medium for computing offloading based on task dependency |
CN112887435A (en) * | 2021-04-13 | 2021-06-01 | 中南大学 | Method for improving task unloading cooperation rate in edge calculation |
Also Published As
Publication number | Publication date |
---|---|
CN114666339A (en) | 2022-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2024174426A1 (en) | | Task offloading and resource allocation method based on mobile edge computing |
WO2023040022A1 (en) | | Computing and network collaboration-based distributed computation offloading method in random network |
WO2024254892A1 (en) | | Agent policy learning method with privacy protection in mobile edge computing |
CN107766135B (en) | | A task allocation method based on particle swarm optimization and simulated annealing optimization in a moving cloud |
US11784931B2 (en) | | Network burst load evacuation method for edge servers |
CN110798849A (en) | | Computing resource allocation and task unloading method for ultra-dense network edge computing |
CN109684075A (en) | | Method for unloading computing tasks based on edge computing and cloud computing cooperation |
CN111641681A (en) | | Internet of things service unloading decision method based on edge calculation and deep reinforcement learning |
CN107295109A (en) | | Task unloading and power distribution joint decision method in self-organizing network cloud computing |
CN114390057A (en) | | Multi-interface self-adaptive data unloading method based on reinforcement learning under MEC environment |
CN115659803A (en) | | Intelligent unloading method for computing tasks under unmanned aerial vehicle twin network mapping error condition |
CN117749635A (en) | | Digital twin-enabled industrial Internet of things resource allocation system and method |
CN113965569B (en) | | An energy-efficient and low-latency edge node computing migration configuration system |
CN110149401B (en) | | A method and system for optimizing edge computing tasks |
CN112511336A (en) | | Online service placement method in edge computing system |
CN111901400A (en) | | Edge computing network task unloading method equipped with cache auxiliary device |
CN114938381A (en) | | A D2D-MEC offloading method and computer program product based on deep reinforcement learning |
CN112162789A (en) | | Edge calculation random unloading decision method and system based on software definition |
CN104754063B (en) | | Local cloud computing resource scheduling method |
CN116321307A (en) | | A two-way cache placement method based on deep reinforcement learning in cellular-free networks |
CN114666339B (en) | | Edge unloading method and system based on noose set and storage medium |
CN116016538A (en) | | Edge-device collaborative reasoning task offload optimization method and system for dynamic environment |
CN110062356B (en) | | A Layout Method of Cached Replicas in D2D Networks |
CN113709853B (en) | | Network content transmission method and device oriented to cloud edge collaboration and storage medium |
CN115604274A (en) | | Self-adaptive computing unloading method based on server load balancing mechanism under MEC environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||