WO2023116460A1 - Multi-user multi-task computation offloading method and system in a mobile edge computing environment - Google Patents

Multi-user multi-task computation offloading method and system in a mobile edge computing environment

Info

Publication number
WO2023116460A1
WO2023116460A1 (PCT/CN2022/137704)
Authority
WO
WIPO (PCT)
Prior art keywords
task
user
computing
offloading
resources
Prior art date
Application number
PCT/CN2022/137704
Other languages
English (en)
French (fr)
Inventor
高程希
褚淑惠
徐敏贤
叶可江
须成忠
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2023116460A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44594 Unloading
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Definitions

  • the invention belongs to the field of mobile edge computing, and in particular relates to a method and system for offloading multi-user and multi-task computing in a mobile edge computing environment.
  • Computing offloading reduces the computing requirements of device terminals by offloading the computing-intensive tasks of device terminals to cloud servers with abundant computing resources for execution.
  • computing offloading is widely used in mobile edge computing and has become the main technology in the field of mobile edge computing.
  • According to the granularity of partitioning, computation offloading in the mobile edge computing environment can be divided into: process-based fine-grained computation offloading, in which part of a computing task is offloaded to the cloud server for execution while the rest is computed locally on the device terminal; and application-based coarse-grained computation offloading, in which the entire computation-intensive application is offloaded to the cloud server for execution, so that the computing task does not need to be partitioned at the device terminal.
  • According to the optimization objective, considering the two influencing factors of time and energy consumption, computation offloading in the mobile edge computing environment can be divided into: minimizing task delay, minimizing energy consumption under a delay constraint, and minimizing time and energy consumption jointly with a trade-off between the two.
  • At present, solutions for task computation offloading are mainly divided into two types: centralized and distributed.
  • A centralized solution needs to know the information of all nodes in the system in order to determine the offloading scheme; in a distributed solution, each node can make its offloading decision locally without obtaining information from other nodes.
  • Among distributed solutions, there is research work on computation offloading based on game theory.
  • Game theory is used to model the offloading problem, optimize the time or energy consumption of computation offloading, establish a decision-making model for distributed computation offloading, and solve for the distributed optimal solution, i.e. the Nash equilibrium of the game-theoretic model. For fairness among users, the even allocation of communication resources and computing resources is considered.
  • The existing technology only considers static allocation of the limited communication and computing resources during task computation offloading; that is, the communication and computing resources allocated to each offloaded task are fixed during the offloading process and cannot be adjusted dynamically, even after other offloaded tasks have finished and released their allocated resources, which results in a waste of communication and computing resources.
  • In addition, in real scenarios a user usually has multiple computing tasks, while the existing technology generally only targets offloading scenarios in which each user has a single computing task and does not consider multi-user multi-task computation offloading scenarios, which lacks universality and flexibility.
  • In view of these shortcomings, the present invention proposes a dynamic allocation scheme for limited resources that is suitable for multi-user multi-task computation offloading scenarios.
  • First, each user may have multiple computing tasks to offload, which better matches real-life scenarios; second, the dynamic allocation of the limited computing and communication resources is considered, i.e. the allocated resources can be re-adjusted during task offloading as other tasks finish, so as to avoid wasting the limited resources, improve their utilization, and thereby improve the performance of multi-user multi-task computation offloading.
  • Embodiments of the present invention provide a method and system for offloading multi-user and multi-task computing in a mobile edge computing environment, so as to at least solve the technical problem that the prior art does not consider multi-user and multi-task computing offloading scenarios.
  • a method for offloading multi-user multi-task computing in a mobile edge computing environment including the following steps:
  • Each user makes a task computing offload decision locally at the device terminal based on the cost of computing task response time.
  • the method specifically includes the following steps:
  • A mobile edge computing network architecture is adopted to design a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources, in which the resources allocated to a computing task are released immediately after the task finishes and the released resources are then re-allocated to the computing tasks that have not yet finished;
  • Based on the designed resource allocation scheme, a computing model of multi-user multi-task computation offloading in the mobile edge computing environment is established, covering both local computing and cloud computing; a user cost model is constructed on top of the computing model, and the multi-task computation offloading problem is modeled mathematically;
  • The multi-user multi-task computation offloading game model is established using game-theoretic methods, with a potential game introduced in the process: the multi-user multi-task offloading decision-making problem is modeled as a potential game model in which users make their task offloading decisions locally at the device terminals in a self-organizing manner;
  • The mobile edge computing network architecture comprises N mobile device users and one wireless base station s, near which server resources are deployed; the user set is expressed as {1, 2, ..., N}, and each mobile user equipment terminal has multiple independent computing tasks, the number of computing tasks of user n being denoted k_n;
  • The communication link between the users and the base station consists of M wireless channels, denoted {1, 2, ..., M}; any task of a user is either computed locally on the device terminal or offloaded over one of the wireless channels to the edge server for execution;
  • When a task finishes computing, the computing resources allocated to it are released immediately, and the released resources are re-allocated to the tasks whose computation has not yet finished;
  • When a_{n,i} > 0, the computing resource allocated to task i of user n is the even share of the edge cloud capability F_c among all currently offloaded tasks, where F_c denotes the edge cloud computing capability;
  • When an offloaded task finishes computing, the cloud computing resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks whose computation has not yet finished;
  • When a_{n,i} > 0, the bandwidth resource allocated to task i of user n is the even share of the bandwidth of channel a_{n,i} among the tasks offloaded over that channel;
  • When the data transmission of an offloaded task finishes, the bandwidth resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks on the same channel whose data transmission has not yet finished.
  • The data transmission time for offloading task i of user n is given by Formula 3;
  • The cost of user n is the average computation cost over all of the user's tasks, as shown in Formula 7;
  • a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denotes the computation offloading strategies of all users except user n; given a_{-n}, user n formulates an optimal strategy a_n that minimizes its cost;
  • The computation offloading problem is modeled as a potential game model Γ_n, defined over the set of all tasks of user n, the decision space of each task i of user n, and the utility function U_{n,i} that task i of user n minimizes; in establishing the potential game model Γ_n, the potential function is set equal to the cost function T_n of user n and, according to Definition 2, the utility function U_{n,i} of task i of user n is derived;
  • a_{-(n,i)} denotes the computation offloading decisions of all tasks of all users except task i of user n.
  • each time slot t includes the following two stages:
  • Collecting task offloading costs: according to the decision vector a(t) of time slot t, the base station calculates the task offloading cost of task i of user n for each channel m and sends it to user n; in this stage, each user n collects from the base station the offloading cost of each of its tasks on each channel;
  • Updating task decisions: in this stage, the present invention allows at most one user to update the current decision of one of its tasks; according to the task offloading costs collected in the first stage, each user n uses Formula 10 to compute its set of task-decision updates.
  • a multi-user and multi-task computing offloading system in a mobile edge computing environment including:
  • a model building module used to build a game theory model for distributed multi-task computing offloading
  • The offloading decision calculation module is used for offloading decisions: each user makes its task offloading decision locally at the device terminal based on the response-time cost of its computing tasks.
  • The focus is on the computation offloading of delay-sensitive applications, and the response time of computing tasks is taken as the main influencing factor in establishing a distributed multi-task computation offloading game-theoretic model;
  • Each user makes its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks.
  • the limited computing and communication resources are dynamically allocated. That is, the allocated resources are released immediately after the task ends, and the released resources are redistributed to the unfinished tasks, so as to realize efficient resource allocation and improve the utilization rate of limited resources.
  • Fig. 1 is an overall flow chart of the multi-user multi-task computing offloading method and system in the mobile edge computing environment of the present invention.
  • The rise of mobile edge computing provides a solution to the above problems: in a mobile edge computing environment, user equipment terminals can offload computing tasks to infrastructure deployed at the network edge for execution, thereby providing lower latency and jitter.
  • the computing and storage resources of the server are limited.
  • The network communication resources between the user equipment terminals and the infrastructure are also limited; therefore, research on multi-user task computation offloading methods needs to consider the contention of tasks for these limited resources.
  • There is existing research, based on game-theoretic methods, on the allocation of limited resources in task computation offloading, but none of it considers dynamic resource allocation, resulting in a waste of the limited resources.
  • The main technical problem to be solved by the present invention is to dynamically allocate the limited computing and communication resources in multi-user multi-task computation offloading scenarios, i.e. to release the allocated resources immediately after a task finishes and re-allocate the released resources to the tasks that have not yet finished, so as to achieve efficient resource allocation and improve the utilization of the limited resources.
  • the purpose of the present invention is to design and realize a dynamic allocation scheme of limited resources in multi-user and multi-task computing offloading scenarios, so as to improve the utilization rate of limited resources and further improve the performance of multi-user and multi-task computing offloading.
  • The present invention focuses on the computation offloading of delay-sensitive applications, mainly considers the response time of computing tasks as the key influencing factor, and establishes a game-theoretic model of distributed multi-task computation offloading in which each user makes its task offloading decisions locally.
  • Referring to Fig. 1, the basic content of the technical solution of the present invention is as follows:
  • For the multi-task computation offloading problem, game-theoretic methods are used to establish the multi-user multi-task computation offloading game model, and a potential game is introduced in the process.
  • The multi-user multi-task offloading decision-making problem is modeled as a potential game model, in which users make their task offloading decisions locally at the device terminals in a self-organizing manner, finally reaching a solution with which all users in the system are relatively satisfied and which is also the globally optimal solution, implemented in a distributed manner;
  • system model of the present invention is specifically as follows:
  • the present invention considers a classic mobile edge computing network architecture, including N mobile device users and a wireless base station, where server resources are deployed near the wireless base station s.
  • The set of users can be expressed as {1, 2, ..., N}.
  • Each mobile user equipment terminal has multiple independent computing tasks.
  • a smart camera user may simultaneously run multiple tasks such as video compression and real-time object recognition.
  • the number of computing tasks of user n is expressed as k n .
  • The communication link between the users and the base station consists of M wireless channels, which can be expressed as {1, 2, ..., M}. Any task of a user can either be computed locally on the device terminal or offloaded to the edge server for execution over one of the wireless channels.
  • the task i of each user n consists of two parts: the size D n,i of transmitted data (including program codes and input files, etc.) when the task is offloaded, and the number of CPU cycles L n,i required for task calculation.
  • the technical solution of the present invention considers a complex local computing model.
  • the present invention considers the release and readjustment of resources.
  • Specifically: when a task finishes computing, the computing resources allocated to it are released immediately, and the released resources are re-allocated to the tasks whose computation has not yet finished, thereby improving resource utilization. Therefore, throughout the local computation of a user's task, the computing resources allocated to that task dynamically increase as the computation of other tasks finishes.
  • Since edge cloud computing resources are limited, when multiple tasks choose cloud computing they compete for the cloud computing resources. For fairness, the present invention considers that the cloud computing resources are evenly allocated to these tasks. Therefore, when a_{n,i} > 0, the computing resource allocated to task i of user n is the even share of the edge cloud capability F_c among all currently offloaded tasks, where F_c denotes the edge cloud computing capability.
  • The technical solution of the present invention considers the release and re-adjustment of resources in the allocation of cloud computing resources, specifically: when an offloaded task finishes computing, the cloud computing resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks whose computation has not yet finished, thereby improving resource utilization. Therefore, throughout the cloud computation of an offloaded task, the cloud computing resources allocated to that task dynamically increase as the computation of other offloaded tasks finishes.
  • The present invention considers a heterogeneous wireless communication network in which the bandwidth resource of wireless channel m is denoted B_m; when multiple tasks select the same channel m for offloading, they compete for the bandwidth resource of that channel. For fairness, the present invention considers that the bandwidth resource is evenly allocated to the tasks on the same channel. Therefore, when a_{n,i} > 0, the bandwidth resource allocated to task i of user n is the even share of B_{a_{n,i}} among the tasks offloaded over that channel. The existing technical solutions only consider the bandwidth resource allocated to each offloaded task as fixed during the offloading process, and it cannot be re-adjusted.
  • The technical solution of the present invention considers the release and re-adjustment of resources in the allocation of bandwidth resources, specifically: when the data transmission of an offloaded task finishes, the bandwidth resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks on the same channel whose data transmission has not yet finished, thereby improving resource utilization. Therefore, throughout the data transmission of an offloaded task, the bandwidth resource allocated to that task dynamically increases as the data transmission of other offloaded tasks on the same channel finishes.
  • the present invention analyzes the local execution of the computing task at the user equipment end and the offloading to the edge server end for execution.
  • The present invention ignores the time overhead of returning the task computation result to the user terminal, because the size of the result is usually much smaller than D_{n,i}.
  • the present invention defines the cost of user n as the average calculation cost of all tasks of the user, as shown in formula 7.
  • Definition 1 (Nash Equilibrium) A stable state of a game model, in which all participants can reach a solution that everyone is satisfied with, so that no participant can reduce its cost function by unilaterally changing its strategy.
  • Definition 2 (Potential Game): there exists a potential function such that every participant's change in its utility function is mapped to the potential function; that is, when a participant reduces its utility function by changing its strategy, the value of the potential function is also reduced, so the potential function has the same trend as the utility function of every participant.
  • The present invention uses game-theoretic methods to solve the multi-user multi-task computation offloading problem.
  • Game theory is a powerful tool for designing distributed solutions, so that users can formulate the best strategy locally on the device in a self-organizing manner, and finally realize a satisfactory solution.
  • The goal of each user is to minimize its own cost, as shown in Formula 8.
  • a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denotes the computation offloading strategies of all users except user n; given a_{-n}, user n formulates an optimal strategy a_n that minimizes its cost.
  • For user n, given a_{-n}, the computation offloading problem in Formula 8 is a combinatorial optimization problem over a k_n-dimensional discrete space and is NP-hard; therefore, the present invention uses a potential game to obtain an approximate solution of the above offloading problem in polynomial time.
  • According to Definition 2, the present invention models the above computation offloading problem as a potential game model Γ_n, defined over the set of all tasks of user n, the decision space of each task i of user n, and the utility function U_{n,i} that task i of user n minimizes.
  • the present invention sets the potential function equal to the cost function T n of user n.
  • the present invention can derive the utility function U n,i of task i of user n.
  • the solution concept of the potential game is Nash equilibrium, as shown in Definition 1, and in the potential game, the Nash equilibrium can minimize the potential function locally or globally. Therefore, the present invention can achieve the goal of optimizing T n by optimizing U n,i , and finally solve the approximate solution of the above problem in polynomial time.
  • The present invention models the multi-user multi-task computation offloading problem as a game-theoretic model Γ, in which the collection of all tasks of all users represents the set of participants and U_{n,i} represents the utility function that task i of user n minimizes; the game-theoretic model Γ is then expressed by Formula 9, in which a_{-(n,i)} denotes the computation offloading decisions of all tasks of all users except task i of user n.
  • Nash equilibrium is a very important concept in game theory. It is a stable state of the game model.
  • The Nash equilibrium of the multi-user multi-task computation offloading game can be expressed as a decision vector from which no task of any user can reduce its utility by a unilateral change of its own decision. Not all game models have a Nash equilibrium, but potential games have an important property: in every potential game a Nash equilibrium is guaranteed to exist, and the multi-task computation offloading game model established by the present invention is a potential game (at the theoretical level, the present invention can prove that the multi-task offloading game is a potential game by constructing a potential function); therefore, the multi-user multi-task computation offloading game model has a Nash equilibrium.
  • The base station can calculate the task offloading cost of task i of user n when it selects channel m and send it to user n.
  • Each user n collects from the base station the task offloading cost of each of its tasks on each channel.
  • Each user n uses Formula 10 to compute its set of task-decision updates.
  • In the computation of Formula 10, user n does not need to know any information about other users' tasks, which preserves privacy. If its update set is non-empty, user n sends a request message to the base station to compete for the opportunity to update a task decision; otherwise, user n does not send any request message. The base station then randomly selects a user k from all users that sent request messages and sends user k a permission message (allowing that user to update a task decision).
  • The user k that receives the permission message selects one task-decision update (i, a) from Δ_k(t), sends it to the base station to update the decision vector a(t+1) of the next time slot, updates the decision of task i in the next slot to a, and keeps the decisions of its other tasks unchanged. The other users, which did not receive a permission message, keep the decisions of their tasks unchanged in the next slot.
  • the multi-user multi-task computing offloading game will converge to a Nash equilibrium within a limited number of time slots.
  • the base station does not receive any request message within a time slot, the base station broadcasts an end message to all users.
  • The game process of multi-user multi-task computation offloading then ends, and each user takes the decision made in the last time slot of the above process as the final offloading decision for its tasks, after which the tasks are executed according to that decision.
  • Compared with the existing game-theory-based multi-user computation offloading technology, on the one hand, the present invention considers the situation in which each user has multiple computing tasks and realizes computation offloading in the multi-user multi-task scenario, which has a certain universality and flexibility and better matches real-life scenarios; on the other hand, the present invention considers the release and re-adjustment of the limited resources and designs a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources, in which the resources released by finished tasks are re-allocated to the tasks that have not yet finished, thereby achieving efficient allocation of the limited resources, improving their utilization, reducing the time overhead at the user side, and thus improving the quality of experience at the user device side.
  • the present invention has been analyzed at the technical theory level and realized by simulation experiments, and the results prove that the present invention is superior to the existing technical solutions in terms of the utilization rate of limited resources and the calculation cost of user tasks.
  • a unit described as a separate component may or may not be physically separated, and a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed over multiple units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods in the various embodiments of the present invention.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk or optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to the field of mobile edge computing, and in particular to a multi-user multi-task computation offloading method and system in a mobile edge computing environment. The method and system focus on the computation offloading of delay-sensitive applications and mainly consider the response time of computing tasks as the key influencing factor. A game-theoretic model of distributed multi-task computation offloading is established, and each user makes its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks. In the multi-user multi-task offloading scenario, the limited computing and communication resources are allocated dynamically: the resources allocated to a task are released as soon as the task finishes and are re-allocated to the tasks that have not yet finished, thereby achieving efficient resource allocation and improving the utilization of the limited resources.

Description

Multi-user multi-task computation offloading method and system in a mobile edge computing environment

Technical Field

The present invention belongs to the field of mobile edge computing, and in particular relates to a multi-user multi-task computation offloading method and system in a mobile edge computing environment.
Background Art

Computation offloading reduces the computing demand on device terminals by offloading their computation-intensive tasks to cloud servers with abundant computing resources for execution. At present, computation offloading is widely used in mobile edge computing and has become a principal technology in this field. According to the granularity of partitioning, computation offloading in the mobile edge computing environment is divided into: process-based fine-grained offloading, in which part of a computing task is offloaded to the cloud server for execution while the rest is computed locally on the device terminal; and application-based coarse-grained offloading, in which the entire computation-intensive application is offloaded to the cloud server for execution, so that the task does not need to be partitioned at the device terminal. According to the optimization objective, considering the two influencing factors of time and energy consumption, computation offloading in the mobile edge computing environment is divided into: minimizing task delay, minimizing energy consumption under a delay constraint, and minimizing time and energy consumption jointly with a trade-off between the two. Current solutions for task computation offloading are mainly divided into centralized and distributed ones. A centralized solution needs the information of all nodes in the system to determine the offloading scheme; in a distributed solution, each node can make its offloading decision locally without obtaining information from other nodes. Among distributed solutions there is research on game-theory-based computation offloading, which uses game theory to model the offloading problem, optimizes the time or energy consumption of offloading, establishes a decision-making model for distributed offloading, and solves for the distributed optimal solution, i.e. the Nash equilibrium of the game-theoretic model. For fairness among users, the even allocation of communication and computing resources is considered.

The existing technology only considers static allocation of the limited communication and computing resources during task offloading; that is, the communication and computing resources allocated to each offloaded task remain fixed during the offloading process and cannot be adjusted dynamically, even after other offloaded tasks have finished and released their allocated resources, which leads to a waste of communication and computing resources. In addition, in real scenarios a user usually has multiple computing tasks, while the existing technology generally only targets offloading scenarios in which each user has a single computing task and does not consider multi-user multi-task offloading scenarios, which lacks universality and flexibility. In view of these shortcomings, the present invention proposes a dynamic allocation scheme for limited resources that is suitable for multi-user multi-task computation offloading scenarios. First, each user may have multiple computing tasks to offload, which better matches real-life scenarios; second, the dynamic allocation of the limited computing and communication resources is considered, i.e. the resources allocated to a task can be re-adjusted during offloading as other tasks finish, so as to avoid wasting the limited resources, improve their utilization, and thereby improve the performance of multi-user multi-task computation offloading.
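To illustrate the gain from dynamic allocation with assumed numbers (an illustrative example, not taken from the patent): suppose two offloaded tasks of 10 Mbit and 20 Mbit share a 10 Mbit/s channel equally, so each initially transmits at 5 Mbit/s. Under static allocation the second task keeps 5 Mbit/s even after the first task finishes at t = 2 s, and therefore completes at t = 4 s while half the channel sits idle for 2 s. Under the dynamic allocation proposed here, the released half of the channel is re-assigned at t = 2 s, so the second task transmits its remaining 10 Mbit at 10 Mbit/s and completes at t = 3 s.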
Summary of the Invention

Embodiments of the present invention provide a multi-user multi-task computation offloading method and system in a mobile edge computing environment, so as to at least solve the technical problem that the prior art does not consider multi-user multi-task computation offloading scenarios.

According to an embodiment of the present invention, a multi-user multi-task computation offloading method in a mobile edge computing environment is provided, comprising the following steps:

establishing a game-theoretic model of distributed multi-task computation offloading;

each user making its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks.

Further, the method specifically comprises the following steps:

adopting a mobile edge computing network architecture and designing a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources, in which the resources allocated to a computing task are released immediately after the task finishes and the released resources are then re-allocated to the computing tasks that have not yet finished;

based on the designed resource allocation scheme, establishing a computing model of multi-user multi-task computation offloading in the mobile edge computing environment, covering local computing and cloud computing, constructing a user cost model on top of the computing model, and modeling the multi-task offloading problem mathematically;

establishing the multi-user multi-task computation offloading game model with game-theoretic methods, introducing a potential game in the process, and modeling the multi-user multi-task offloading decision-making problem as a potential game model in which users make their task offloading decisions locally at the device terminals in a self-organizing manner;

analyzing the established game-theoretic model at the technical-theoretical level, using the potential game to show the existence of the optimal solution of the game model, namely the Nash equilibrium, solving for the Nash equilibrium of the game model with a best-response-based distributed algorithm, and formulating evaluation metrics for multi-user multi-task offloading performance to evaluate the game-theoretic model.
Further, the adopted mobile edge computing network architecture comprises N mobile device users and one wireless base station s, near which server resources are deployed; the user set is denoted {1, 2, ..., N};

each mobile user device terminal has multiple independent computing tasks, and the number of computing tasks of user n is denoted k_n;

the communication link between the users and the base station consists of M wireless channels, denoted {1, 2, ..., M}; any task of a user is either computed locally on the device terminal or offloaded over one of the wireless channels to the edge server for execution;

each task i of user n consists of two parts: the size D_{n,i} of the data transmitted when the task is offloaded and the number L_{n,i} of CPU cycles required to compute the task; the offloading decision of task i of user n is denoted a_{n,i} ∈ {0, 1, ..., M}, where a_{n,i} = 0 means that user n chooses to execute task i on the local device, and a_{n,i} > 0 means that user n chooses to offload task i over wireless channel a_{n,i} to the server for execution; the offloading decisions of all tasks of user n constitute user n's strategy a_n = (a_{n,1}, a_{n,2}, ..., a_{n,k_n}); and

a = (a_1, a_2, ..., a_N) denotes the offloading strategies of all users.
Further, the local computing resources are allocated as follows:

F_n denotes the computing capability of user n's local device; when a_{n,i} = 0, the computing resource allocated to task i of user n is expressed as the even share F_n / Σ_{j=1..k_n} I_{a_{n,j}=0},

where I_{A} is the indicator function: I_{A} = 1 when A is true, and I_{A} = 0 when A is false;

when a task finishes computing, the computing resources allocated to it are released immediately, and the released resources are re-allocated to the tasks whose computation has not yet finished.
Further, when a_{n,i} > 0, the computing resource allocated to task i of user n is expressed as the even share F_c / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}>0}, where F_c denotes the edge cloud computing capability;

when an offloaded task finishes computing, the cloud computing resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks whose computation has not yet finished.

Further, when a_{n,i} > 0, the bandwidth resource allocated to task i of user n is expressed as the even share B_{a_{n,i}} / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}=a_{n,i}};

when the data transmission of an offloaded task finishes, the bandwidth resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks on the same channel whose data transmission has not yet finished.
Further, based on the local computing resource allocation scheme, when a_{n,i} = 0, the time for task i of user n to be computed locally at the user device terminal is given by Formula 1, where min{A, B} = A when A < B and min{A, B} = B otherwise;

when user n offloads its task i to the edge server for execution, i.e. when a_{n,i} > 0, based on the cloud computing resource allocation scheme, the time for task i of user n to be computed in the cloud at the edge server is given by Formula 2;

based on the wireless bandwidth resource allocation scheme, the data transmission time for offloading task i of user n is given by Formula 3;

when a_{n,i} = 0, according to Formula 1, the local computing cost of the task is its local computing time (Formula 4);

when a_{n,i} > 0, according to Formulas 2 and 3, the cloud computing cost of the task is the sum of its data transmission time and its cloud computing time (Formula 5);

according to Formulas 4 and 5, the computation cost T_{n,i} of task i of user n is its local computing cost when a_{n,i} = 0 and its cloud computing cost when a_{n,i} > 0 (Formula 6);

the cost of user n is the average computation cost of all of the user's tasks, i.e. T_n = (1/k_n) Σ_{i=1..k_n} T_{n,i} (Formula 7).
Further, the game model is established as follows:

in the multi-user multi-task computation offloading problem, the goal of each user is to minimize its own cost, as shown in Formula 8 (i.e. minimizing T_n(a_n, a_{-n}) over user n's strategy a_n),

where a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denotes the offloading strategies of all users except user n; given a_{-n}, user n formulates an optimal strategy a_n that minimizes its cost;

the computation offloading problem is modeled as a potential game model Γ_n, defined over the set of all tasks of user n, the decision space of each task i of user n, and the utility function U_{n,i} that task i of user n minimizes; in establishing the potential game model Γ_n, the potential function is set equal to the cost function T_n of user n and, according to Definition 2, the utility function U_{n,i} of task i of user n is derived;

the computation offloading problem is modeled as a game-theoretic model Γ, in which the set of all tasks of all users represents the set of participants, the strategy space of each task i of user n is its decision space, and U_{n,i} represents the utility function that task i of user n minimizes; the game-theoretic model Γ is then expressed by Formula 9,

where a_{-(n,i)} denotes the offloading decisions of all tasks of all users except task i of user n.

Further, the decision-update iteration is completed within one time slot, all users proceed in parallel, and synchronization is achieved by the clock signal of the wireless base station; each time slot t comprises the following two stages:

collecting task offloading costs: according to the decision vector a(t) of time slot t, the base station calculates the task offloading cost of task i of user n when it selects channel m and sends it to user n; in this stage, each user n collects from the base station the offloading cost of each of its tasks on each channel;

updating task decisions: in this stage, at most one user is allowed to update the current decision of one of its tasks; according to the task offloading costs collected in the first stage, each user n uses Formula 10 to compute its set of task-decision updates.
According to another embodiment of the present invention, a multi-user multi-task computation offloading system in a mobile edge computing environment is provided, comprising:

a model building module, configured to establish a game-theoretic model of distributed multi-task computation offloading; and

an offloading decision calculation module, configured for offloading decisions, in which each user makes its task offloading decision locally at the device terminal based on the response-time cost of its computing tasks.

In the multi-user multi-task computation offloading method and system in a mobile edge computing environment according to the embodiments of the present invention, the focus is on the computation offloading of delay-sensitive applications, and the response time of computing tasks is considered as the main influencing factor. A game-theoretic model of distributed multi-task computation offloading is established, and each user makes its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks. In the multi-user multi-task offloading scenario, the limited computing and communication resources are allocated dynamically: the resources allocated to a task are released as soon as the task finishes and are re-allocated to the tasks that have not yet finished, thereby achieving efficient resource allocation and improving the utilization of the limited resources.
Brief Description of the Drawings

The drawings described here are provided for a further understanding of the present invention and constitute a part of the present application; the exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:

Fig. 1 is an overall flow chart of the multi-user multi-task computation offloading method and system in a mobile edge computing environment of the present invention.
Detailed Description of the Embodiments

In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to such process, method, product or device.
With the rapid development of the Internet of Things and artificial intelligence technologies, mobile user devices such as smartphones and laptops have become increasingly widespread, and more and more applications run on these user device terminals, such as natural language processing and face recognition. Most of these applications are computation-intensive, while the hardware resources of user device terminals are limited and cannot meet the requirements of delay-sensitive applications. At the same time, traditional cloud computing servers deployed in the core network are far from the user device terminals, so computation offloading from device terminals in a cloud computing environment is prone to network congestion and high latency. The rise of mobile edge computing provides a solution to these problems: in a mobile edge computing environment, user device terminals can offload computing tasks to infrastructure deployed at the network edge for execution, thereby providing lower latency and jitter. In the mobile edge computing environment, the computing and storage resources of the servers are limited, and the network communication resources between the user device terminals and the infrastructure are also limited; therefore, research on multi-user task computation offloading methods needs to consider the contention of tasks for the limited resources. For this problem, and taking fairness among users into account, there is existing research based on game-theoretic methods on the allocation of limited resources in task computation offloading, but none of it considers dynamic resource allocation, which leads to a waste of the limited resources. In addition, most related research does not consider the multi-user multi-task offloading scenario, in which each user terminal has multiple computing tasks to offload, and therefore lacks generality. The main technical problem to be solved by the present invention is to dynamically allocate the limited computing and communication resources in the multi-user multi-task offloading scenario, i.e. to release the allocated resources immediately after a task finishes and to re-allocate the released resources to the tasks that have not yet finished, so as to achieve efficient resource allocation and improve the utilization of the limited resources.

The purpose of the present invention is to design and implement a dynamic allocation scheme for limited resources in multi-user multi-task computation offloading scenarios, so as to improve the utilization of the limited resources and thereby improve the performance of multi-user multi-task computation offloading. The present invention focuses on the computation offloading of delay-sensitive applications, mainly considers the response time of computing tasks as the key influencing factor, and establishes a game-theoretic model of distributed multi-task computation offloading in which each user makes its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks. Referring to Fig. 1, the basic content of the technical solution of the present invention is as follows:

1) First, a classic mobile edge computing network architecture is adopted, and a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources is designed; the release and re-adjustment of resources are considered, i.e. the resources allocated to a computing task are released immediately after the task finishes and are then re-allocated to the computing tasks that have not yet finished, realizing dynamic allocation of the limited resources;

2) Based on the resource allocation scheme designed in 1), a computing model of multi-user multi-task computation offloading in the mobile edge computing environment is established, covering local computing and cloud computing; a user cost model is constructed on top of the computing model, and the multi-task offloading problem is modeled mathematically;

3) For the multi-task offloading problem, game-theoretic methods are used to establish the multi-user multi-task computation offloading game model, with a potential game introduced in the process. The multi-user multi-task offloading decision-making problem is modeled as a potential game model in which users make their task offloading decisions locally at the device terminals in a self-organizing manner, finally reaching a solution with which all users in the system are relatively satisfied and which is also the globally optimal solution; this is implemented in a distributed manner;

4) The game-theoretic model established in 3) is analyzed at the technical-theoretical level, and the potential game is used to show the existence of the optimal solution of the game model, namely the Nash equilibrium; a best-response-based distributed algorithm is then used to solve for the Nash equilibrium of the game model. Finally, evaluation metrics for multi-user multi-task offloading performance are formulated to evaluate the model proposed by the present invention.
Corresponding to the steps of the above technical solution, the system model of the present invention is as follows:

1.1 Network architecture

The present invention considers a classic mobile edge computing network architecture comprising N mobile device users and one wireless base station, where server resources are deployed near the wireless base station s. The set of users can be expressed as {1, 2, ..., N}. Each mobile user device terminal has multiple independent computing tasks; for example, a smart camera user may simultaneously run several tasks such as video compression and real-time object recognition. The present invention denotes the number of computing tasks of user n as k_n. The communication link between the users and the base station consists of M wireless channels, which can be expressed as {1, 2, ..., M}. Any task of a user can either be computed locally on the device terminal or offloaded to the edge server for execution over one of the wireless channels. Each task i of user n consists of two parts: the size D_{n,i} of the data (including program code, input files, etc.) transmitted when the task is offloaded, and the number L_{n,i} of CPU cycles required to compute the task. The offloading decision of task i of user n is denoted a_{n,i} ∈ {0, 1, ..., M}, where a_{n,i} = 0 means that user n chooses to execute task i on the local device, and a_{n,i} > 0 means that user n chooses to offload task i over wireless channel a_{n,i} to the server for execution. The offloading decisions of all tasks of user n constitute user n's strategy a_n = (a_{n,1}, a_{n,2}, ..., a_{n,k_n}), and a = (a_1, a_2, ..., a_N) denotes the offloading strategies of all users.
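As an aid to reading the notation above, the following is a minimal illustrative sketch (in Python, not part of the patent; all names and numeric values are assumptions) of how the task parameters D_{n,i}, L_{n,i} and the decision profile a = (a_1, ..., a_N) can be represented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    D: float  # D_{n,i}: data size transmitted when the task is offloaded (e.g. bits)
    L: float  # L_{n,i}: CPU cycles required to compute the task

# Illustrative instance (assumed values): two users,
# user 0 with k_0 = 2 tasks and user 1 with k_1 = 1 task.
tasks = [
    [Task(D=2e6, L=5e8), Task(D=1e6, L=2e8)],  # tasks of user 0
    [Task(D=4e6, L=9e8)],                      # tasks of user 1
]

M = 2  # number of wireless channels, labelled 1..M

# Decision profile a: a[n][i] = 0 means task i of user n is computed locally;
# a[n][i] = m with 1 <= m <= M means it is offloaded over wireless channel m.
a = [
    [0, 1],  # user 0: task 0 local, task 1 offloaded on channel 1
    [2],     # user 1: task 0 offloaded on channel 2
]

assert all(0 <= a_ni <= M for a_n in a for a_ni in a_n)
```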
1.2 Local computing resource allocation

Let F_n denote the computing capability of user n's local device, i.e. its CPU cycles per second. Since the local computing resources of a user device are limited, when multiple tasks choose local computing they compete for the local computing resources. For fairness, the present invention considers that the local computing resources are evenly allocated to these tasks. Therefore, when a_{n,i} = 0, the computing resource allocated to task i of user n can be expressed as the even share F_n / Σ_{j=1..k_n} I_{a_{n,j}=0}, where I_{A} is the indicator function: I_{A} = 1 when A is true, and I_{A} = 0 when A is false. Existing technical solutions only consider a simple local computing model, in which, once determined, the local computing resources allocated to each task cannot be re-adjusted during the whole computation of that task. The technical solution of the present invention considers a more elaborate local computing model that takes into account the release and re-adjustment of the local computing resources: when a task finishes computing, the computing resources allocated to it are released immediately, and the released resources are re-allocated to the tasks whose computation has not yet finished, thereby improving resource utilization. Therefore, throughout the local computation of a user's task, the computing resources allocated to that task dynamically increase as the computation of other tasks finishes.
1.3 Cloud computing resource allocation

Since the edge cloud computing resources are limited, when multiple tasks choose cloud computing they compete for the cloud computing resources. For fairness, the present invention considers that the cloud computing resources are evenly allocated to these tasks. Therefore, when a_{n,i} > 0, the computing resource allocated to task i of user n can be expressed as the even share F_c / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}>0}, where F_c denotes the edge cloud computing capability. Existing technical solutions only consider the cloud computing resources allocated to each offloaded task as fixed during the offloading of that task, unable to be re-adjusted. The technical solution of the present invention considers the release and re-adjustment of resources in the allocation of cloud computing resources: when an offloaded task finishes computing, the cloud computing resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks whose computation has not yet finished, thereby improving resource utilization. Therefore, throughout the cloud computation of an offloaded task, the cloud computing resources allocated to that task dynamically increase as the computation of other offloaded tasks finishes.
1.4 Wireless bandwidth resource allocation

The present invention considers a heterogeneous wireless communication network in which the bandwidth resource of wireless channel m is denoted B_m. When multiple tasks choose the same channel m for offloading, they compete for the bandwidth resource of that channel. For fairness, the present invention considers that the bandwidth resource is evenly allocated to the tasks on the same channel. Therefore, when a_{n,i} > 0, the bandwidth resource allocated to task i of user n can be expressed as the even share B_{a_{n,i}} / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}=a_{n,i}}. Existing technical solutions only consider the bandwidth resource allocated to each offloaded task as fixed during the offloading of that task, unable to be re-adjusted. The technical solution of the present invention considers the release and re-adjustment of resources in the allocation of bandwidth resources: when the data transmission of an offloaded task finishes, the bandwidth resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks on the same channel whose data transmission has not yet finished, thereby improving resource utilization. Therefore, throughout the data transmission of an offloaded task, the bandwidth resource allocated to that task dynamically increases as the data transmission of other offloaded tasks on the same channel finishes.
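The even-share allocations of Sections 1.2 to 1.4 can be sketched as follows for a single instant under a given decision profile (an illustrative Python sketch, not part of the patent; it reuses the decision profile `a` from the sketch above, the parameter names `F_local`, `F_c`, `B` are assumptions, and the dynamic release and re-allocation over time provided by the patent is not modeled here).

```python
def local_share(a, n, i, F_local):
    """Share of user n's local capability F_local[n] (cycles/s) given to its
    locally computed task i: even split among user n's tasks with a[n][j] == 0."""
    assert a[n][i] == 0
    n_local = sum(1 for a_nj in a[n] if a_nj == 0)
    return F_local[n] / n_local

def cloud_share(a, n, i, F_c):
    """Share of the edge-cloud capability F_c (cycles/s) given to offloaded
    task i of user n: even split among all offloaded tasks of all users."""
    assert a[n][i] > 0
    n_offloaded = sum(1 for a_u in a for a_uj in a_u if a_uj > 0)
    return F_c / n_offloaded

def bandwidth_share(a, n, i, B):
    """Share of channel bandwidth B[m] (Hz) given to offloaded task i of user n,
    where m = a[n][i]: even split among tasks offloaded over the same channel."""
    m = a[n][i]
    assert m > 0
    n_on_channel = sum(1 for a_u in a for a_uj in a_u if a_uj == m)
    return B[m] / n_on_channel

# Example with assumed parameter values:
F_local = [1e9, 2e9]       # local CPU capability of each user (cycles/s)
F_c = 10e9                 # edge cloud capability (cycles/s)
B = {1: 10e6, 2: 20e6}     # bandwidth of each wireless channel (Hz)
print(local_share(a, 0, 0, F_local), cloud_share(a, 0, 1, F_c), bandwidth_share(a, 1, 0, B))
```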
2 Computing model

Starting from the response time of computing tasks, the present invention analyzes both the local execution of a computing task at the user device and its offloading to the edge server for execution.

2.1 Local computing

Based on the local computing resource allocation scheme in 1.2, when a_{n,i} = 0, the time for task i of user n to be computed locally at the user device terminal is given by Formula 1, where min{A, B} = A when A < B and min{A, B} = B otherwise.
2.2 Cloud computing

When user n offloads its task i to the edge server for execution, i.e. when a_{n,i} > 0, based on the cloud computing resource allocation scheme in 1.3, the time for task i of user n to be computed in the cloud at the edge server is given by Formula 2. In addition, for cloud computing, the offloading of a task introduces an extra time delay due to data transmission. Based on the wireless bandwidth resource allocation scheme in 1.4, the data transmission time for offloading task i of user n is given by Formula 3.
2.3 Cost model

First, the present invention takes the response time as the execution cost of a computing task. For task i of user n, when a_{n,i} = 0, according to Formula 1, the local computing cost of the task is its local computing time (Formula 4). When a_{n,i} > 0, according to Formulas 2 and 3, the cloud computing cost of the task is the sum of its data transmission time and its cloud computing time (Formula 5). Here the present invention ignores the time overhead of returning the task computation result to the user terminal, because the size of the result is usually much smaller than D_{n,i}. According to Formulas 4 and 5, the computation cost T_{n,i} of task i of user n is its local computing cost when a_{n,i} = 0 and its cloud computing cost when a_{n,i} > 0 (Formula 6). Then, the present invention defines the cost of user n as the average computation cost of all of the user's tasks, i.e. T_n = (1/k_n) Σ_{i=1..k_n} T_{n,i} (Formula 7).
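The cost model of Formulas 4 to 7 can be illustrated with the following simplified Python sketch (an assumption-laden simplification, not the patent's formulas: it reuses the `Task`/`a` representation and the share helpers from the sketches above, treats the resource shares as static for the whole task, and uses an assumed constant spectral efficiency `bits_per_s_per_hz` for the wireless rate, whereas Formulas 1 to 3 additionally account for resources growing as other tasks finish).

```python
def task_cost(a, n, i, tasks, F_local, F_c, B, bits_per_s_per_hz):
    """T_{n,i}: response-time cost of task i of user n (cf. Formulas 4-6),
    under a static even share of resources."""
    t = tasks[n][i]
    if a[n][i] == 0:
        # Local computing cost (cf. Formula 4): required cycles / local CPU share.
        return t.L / local_share(a, n, i, F_local)
    # Cloud computing cost (cf. Formula 5): transmission time + cloud computing time.
    rate = bandwidth_share(a, n, i, B) * bits_per_s_per_hz   # assumed rate model
    return t.D / rate + t.L / cloud_share(a, n, i, F_c)

def user_cost(a, n, tasks, F_local, F_c, B, bits_per_s_per_hz):
    """T_n: average cost over all k_n tasks of user n (cf. Formula 7)."""
    k_n = len(a[n])
    return sum(task_cost(a, n, i, tasks, F_local, F_c, B, bits_per_s_per_hz)
               for i in range(k_n)) / k_n
```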
3 Multi-task computation offloading game model

3.1 Definitions

Definition 1 (Nash equilibrium): a stable state of a game model in which all participants have reached a solution with which everyone is satisfied, so that no participant can reduce its cost function by unilaterally changing its strategy.

Definition 2 (Potential game): there exists a potential function such that every participant's change in its utility function is mapped to the potential function; that is, when a participant reduces its utility function by changing its strategy, the value of the potential function is also reduced, so the potential function has the same trend as the utility function of every participant.
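In standard notation (supplied here for clarity; the patent states Definition 2 in words), Definition 2 says the game admits a potential function Φ such that every unilateral change of decision by a task (n, i) that lowers its utility also lowers Φ; using the decision notation of this document, one minimal way to write this is:

```latex
U_{n,i}\bigl(a'_{n,i}, a_{-(n,i)}\bigr) < U_{n,i}\bigl(a_{n,i}, a_{-(n,i)}\bigr)
\;\Longrightarrow\;
\Phi\bigl(a'_{n,i}, a_{-(n,i)}\bigr) < \Phi\bigl(a_{n,i}, a_{-(n,i)}\bigr),
\qquad \forall\, a'_{n,i}.
```

In the exact-potential special case the two differences are equal; either property suffices for the finite improvement argument used in Section 4.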
3.2 Game model establishment

The present invention uses game-theoretic methods to solve the multi-user multi-task computation offloading problem. Game theory is a powerful tool for designing distributed solutions, so that users can formulate their best strategies locally on their devices in a self-organizing manner and finally reach a solution with which everyone is satisfied.

In the multi-user multi-task computation offloading problem, the goal of each user is to minimize its own cost, as shown in Formula 8 (i.e. minimizing T_n(a_n, a_{-n}) over user n's strategy a_n), where a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denotes the offloading strategies of all users except user n; given a_{-n}, user n formulates an optimal strategy a_n that minimizes its cost.
For user n, given a_{-n}, the computation offloading problem in Formula 8 is a combinatorial optimization problem over a k_n-dimensional discrete space and is NP-hard. Therefore, the present invention uses a potential game to obtain an approximate solution of the above offloading problem in polynomial time. According to Definition 2, the present invention models the above computation offloading problem as a potential game model Γ_n, defined over the set of all tasks of user n, the decision space of each task i of user n, and the utility function U_{n,i} that task i of user n minimizes. In establishing the potential game model Γ_n, the present invention sets the potential function equal to the cost function T_n of user n and, according to Definition 2, derives the utility function U_{n,i} of task i of user n. The solution concept of a potential game is the Nash equilibrium, as given in Definition 1, and in a potential game a Nash equilibrium locally or globally minimizes the potential function; therefore, the present invention can achieve the goal of optimizing T_n by optimizing U_{n,i}, and finally obtain an approximate solution of the above problem in polynomial time.
Based on the above analysis, the present invention models the multi-user multi-task computation offloading problem as a game-theoretic model Γ, in which the set of all tasks of all users represents the set of participants, the strategy space of each task i of user n is its decision space, and U_{n,i} represents the utility function that task i of user n minimizes. The game-theoretic model Γ is then expressed by Formula 9, where a_{-(n,i)} denotes the offloading decisions of all tasks of all users except task i of user n.

The Nash equilibrium is a very important concept in game theory and is a stable state of a game model. The Nash equilibrium of the multi-user multi-task computation offloading game can be expressed as a decision vector such that no task of any user can reduce its utility by unilaterally changing its own decision. Not every game model has a Nash equilibrium, but potential games have an important property: in every potential game a Nash equilibrium is guaranteed to exist. The multi-task computation offloading game model established by the present invention is a potential game (at the theoretical level, the present invention can prove that the multi-task offloading game is a potential game by constructing a potential function); therefore, the multi-user multi-task computation offloading game model has a Nash equilibrium.
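Under the simplified cost sketch above (and using the user cost T_n as a stand-in for the task utilities U_{n,i}, which the patent derives but whose closed form is not reproduced here), the Nash-equilibrium property of Definition 1 can be checked for a decision profile as follows; this is an illustrative aid, not the patent's procedure.

```python
def is_nash_equilibrium(a, tasks, F_local, F_c, B, bits_per_s_per_hz, M, eps=1e-12):
    """True if no single task (n, i) can lower its user's cost T_n by
    unilaterally changing its own decision a[n][i] (cf. Definition 1)."""
    for n in range(len(a)):
        base = user_cost(a, n, tasks, F_local, F_c, B, bits_per_s_per_hz)
        for i in range(len(a[n])):
            current = a[n][i]
            for alt in range(M + 1):          # 0 = local, 1..M = channels
                if alt == current:
                    continue
                a[n][i] = alt                 # tentative unilateral deviation
                improved = user_cost(a, n, tasks, F_local, F_c, B,
                                     bits_per_s_per_hz) < base - eps
                a[n][i] = current             # undo the deviation
                if improved:
                    return False
    return True
```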
4 Distributed multi-task computation offloading implementation

A potential game has two important properties: 1) it has a Nash equilibrium; 2) it has the finite improvement property. Based on these two properties, at most one user updates the decision of one of its tasks at a time, and this decision-update process is guaranteed to reach a Nash equilibrium after a finite number of iterations. Each decision-update iteration is completed within one time slot, all users proceed in parallel, and synchronization is achieved by the clock signal of the wireless base station. Each time slot t comprises the following two stages:

4.1 Collecting task offloading costs. According to the decision vector a(t) of time slot t, the base station can calculate the task offloading cost of task i of user n when it selects channel m and send it to user n. In this stage, each user n collects from the base station the offloading cost of each of its tasks on each channel.

4.2 Updating task decisions. In this stage, the present invention allows at most one user to update the current decision of one of its tasks. According to the task offloading costs collected in the first stage, each user n uses Formula 10 to compute its set of task-decision updates Δ_n(t).
Based on the collected information, in the computation of Formula 10 user n does not need to know any information about other users' tasks, which preserves privacy. If its update set Δ_n(t) is non-empty, user n sends a request message to the base station to compete for the opportunity to update a task decision; otherwise, user n does not send any request message. The base station then randomly selects a user k from all users that sent request messages and sends user k a permission message (allowing that user to update a task decision). The user k that receives the permission message selects one task-decision update (i, a) from Δ_k(t), sends it to the base station to update the decision vector a(t+1) of the next time slot, updates the decision of task i in the next slot to a, and keeps the decisions of its other tasks unchanged. The other users, which did not receive a permission message, keep the decisions of their tasks unchanged in the next slot.

Based on the above analysis, the multi-user multi-task computation offloading game converges to a Nash equilibrium within a finite number of time slots. When the base station does not receive any request message within a time slot, it broadcasts an end message to all users. When every user has received the end message, the game process of multi-user multi-task computation offloading ends; each user then takes the decision made in the last time slot of the above process as the final offloading decision for its tasks and executes its tasks according to that decision.
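The slotted update process of Sections 4.1 and 4.2 can be simulated centrally with the sketch below (illustrative assumptions: a strict-improvement test on the simplified static-share cost model above stands in for Formula 10 and Formulas 1 to 7, and the base station's random choice is made over the pooled candidate updates rather than first over requesting users); by the finite improvement property of the potential game, the loop stops at a Nash equilibrium of the simplified game.

```python
import random

def best_response_offloading(a, tasks, F_local, F_c, B, bits_per_s_per_hz, M,
                             max_slots=10000, seed=0, eps=1e-12):
    """Per time slot, grant at most one strictly improving single-task update;
    stop when no user requests an update (cf. the end message of Section 4)."""
    rng = random.Random(seed)
    for _ in range(max_slots):
        candidates = []                       # improving updates (n, i, alt)
        for n in range(len(a)):
            base = user_cost(a, n, tasks, F_local, F_c, B, bits_per_s_per_hz)
            for i in range(len(a[n])):
                current = a[n][i]
                for alt in range(M + 1):
                    if alt == current:
                        continue
                    a[n][i] = alt
                    better = user_cost(a, n, tasks, F_local, F_c, B,
                                       bits_per_s_per_hz) < base - eps
                    a[n][i] = current
                    if better:
                        candidates.append((n, i, alt))
        if not candidates:                    # no request message in this slot
            break
        n, i, alt = rng.choice(candidates)    # base station grants one update
        a[n][i] = alt
    return a
```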
The innovative technical points and beneficial effects of the present invention are at least as follows:

1. The multi-user multi-task offloading scenario is considered, i.e. each user has multiple computing tasks, and a dynamic local computing resource allocation scheme is designed that takes into account the release and re-adjustment of the local computing resources: a locally computed task releases its computing resources immediately after its computation finishes, and the released computing resources are re-allocated to the tasks whose local computation has not yet finished.

2. During multi-user multi-task computation offloading, a dynamic communication resource allocation scheme is designed that takes into account the release and re-adjustment of the communication resources: an offloaded task releases its communication resources immediately after its data transmission finishes, and the released communication resources are re-allocated to the offloaded tasks whose data transmission has not yet finished.

3. During multi-user multi-task computation offloading, a dynamic cloud computing resource allocation scheme is designed that takes into account the release and re-adjustment of the cloud computing resources: an offloaded task releases its cloud computing resources immediately after its computation finishes, and the released cloud computing resources are re-allocated to the offloaded tasks whose computation has not yet finished.

4. Based on the dynamic resource allocation schemes, the decision-making of multi-user multi-task computation offloading is implemented in a distributed manner by means of potential game theory.

Compared with the existing game-theory-based multi-user computation offloading technology, on the one hand, the present invention considers the situation in which each user has multiple computing tasks and realizes computation offloading in the multi-user multi-task scenario, which has a certain universality and flexibility and better matches real-life scenarios; on the other hand, the present invention considers the release and re-adjustment of the limited resources and designs a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources, in which the resources released by finished tasks are re-allocated to the tasks that have not yet finished, thereby achieving efficient allocation of the limited resources, improving their utilization, reducing the time overhead at the user side, and thus improving the quality of experience at the user device side.

The present invention has been analyzed at the technical-theoretical level and implemented in simulation experiments, and the results show that the present invention outperforms the existing technical solutions in both the utilization of the limited resources and the computation cost of user tasks.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant description of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The system embodiments described above are only illustrative; for example, the division of units may be a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.

The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods in the various embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk or optical disc.

The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can also make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the scope of protection of the present invention.

Claims (10)

  1. A multi-user multi-task computation offloading method in a mobile edge computing environment, characterized by comprising the following steps:
    establishing a game-theoretic model of distributed multi-task computation offloading;
    each user making its task offloading decisions locally at the device terminal based on the response-time cost of its computing tasks.
  2. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 1, characterized in that the method specifically comprises the following steps:
    adopting a mobile edge computing network architecture and designing a dynamic allocation scheme for local computing resources, cloud computing resources and wireless bandwidth resources, in which the resources allocated to a computing task are released immediately after the task finishes and the released resources are then re-allocated to the computing tasks that have not yet finished;
    based on the designed resource allocation scheme, establishing a computing model of multi-user multi-task computation offloading in the mobile edge computing environment, covering local computing and cloud computing, constructing a user cost model on top of the computing model, and modeling the multi-task offloading problem mathematically;
    establishing the multi-user multi-task computation offloading game model with game-theoretic methods, introducing a potential game in the process, and modeling the multi-user multi-task offloading decision-making problem as a potential game model in which users make their task offloading decisions locally at the device terminals in a self-organizing manner;
    analyzing the established game-theoretic model at the technical-theoretical level, using the potential game to show the existence of the optimal solution of the game model, namely the Nash equilibrium, solving for the Nash equilibrium of the game model with a best-response-based distributed algorithm, and formulating evaluation metrics for multi-user multi-task offloading performance to evaluate the game-theoretic model.
  3. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 2, characterized in that a mobile edge computing network architecture is adopted which comprises N mobile device users and one wireless base station s, near which server resources are deployed; the user set is denoted {1, 2, ..., N};
    each mobile user device terminal has multiple independent computing tasks, and the number of computing tasks of user n is denoted k_n;
    the communication link between the users and the base station consists of M wireless channels, denoted {1, 2, ..., M}; any task of a user is either computed locally on the device terminal or offloaded over one of the wireless channels to the edge server for execution;
    each task i of user n consists of two parts: the size D_{n,i} of the data transmitted when the task is offloaded and the number L_{n,i} of CPU cycles required to compute the task; the offloading decision of task i of user n is denoted a_{n,i} ∈ {0, 1, ..., M}, where a_{n,i} = 0 means that user n chooses to execute task i on the local device, and a_{n,i} > 0 means that user n chooses to offload task i over wireless channel a_{n,i} to the server for execution; the offloading decisions of all tasks of user n constitute user n's strategy a_n = (a_{n,1}, a_{n,2}, ..., a_{n,k_n}); and
    a = (a_1, a_2, ..., a_N) denotes the offloading strategies of all users.
  4. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 3, characterized in that the local computing resources are allocated as follows:
    F_n denotes the computing capability of user n's local device; when a_{n,i} = 0, the computing resource allocated to task i of user n is expressed as the even share F_n / Σ_{j=1..k_n} I_{a_{n,j}=0},
    where I_{A} is the indicator function: I_{A} = 1 when A is true, and I_{A} = 0 when A is false;
    when a task finishes computing, the computing resources allocated to it are released immediately, and the released resources are re-allocated to the tasks whose computation has not yet finished.
  5. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 4, characterized in that when a_{n,i} > 0, the computing resource allocated to task i of user n is expressed as the even share F_c / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}>0}, where F_c denotes the edge cloud computing capability;
    when an offloaded task finishes computing, the cloud computing resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks whose computation has not yet finished.
  6. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 5, characterized in that when a_{n,i} > 0, the bandwidth resource allocated to task i of user n is expressed as the even share B_{a_{n,i}} / Σ_{m=1..N} Σ_{j=1..k_m} I_{a_{m,j}=a_{n,i}};
    when the data transmission of an offloaded task finishes, the bandwidth resources allocated to it are released immediately, and the released resources are re-allocated to the offloaded tasks on the same channel whose data transmission has not yet finished.
  7. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 6, characterized in that, based on the local computing resource allocation scheme, when a_{n,i} = 0, the time for task i of user n to be computed locally at the user device terminal is given by Formula 1, where min{A, B} = A when A < B and min{A, B} = B otherwise;
    when user n offloads its task i to the edge server for execution, i.e. when a_{n,i} > 0, based on the cloud computing resource allocation scheme, the time for task i of user n to be computed in the cloud at the edge server is given by Formula 2;
    based on the wireless bandwidth resource allocation scheme, the data transmission time for offloading task i of user n is given by Formula 3;
    when a_{n,i} = 0, according to Formula 1, the local computing cost of the task is its local computing time (Formula 4);
    when a_{n,i} > 0, according to Formulas 2 and 3, the cloud computing cost of the task is the sum of its data transmission time and its cloud computing time (Formula 5);
    according to Formulas 4 and 5, the computation cost of task i of user n is its local computing cost when a_{n,i} = 0 and its cloud computing cost when a_{n,i} > 0 (Formula 6);
    the cost T_n of user n is the average computation cost of all of the user's tasks, as shown in Formula 7.
  8. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 7, characterized in that the game model is established as follows:
    in the multi-user multi-task computation offloading problem, the goal of each user is to minimize its own cost, as shown in Formula 8,
    where a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denotes the offloading strategies of all users except user n; given a_{-n}, user n formulates an optimal strategy a_n that minimizes its cost;
    the computation offloading problem is modeled as a potential game model Γ_n, defined over the set of all tasks of user n, the decision space of each task i of user n, and the utility function U_{n,i} that task i of user n minimizes; in establishing the potential game model Γ_n, the potential function is set equal to the cost function T_n of user n, and the utility function U_{n,i} of task i of user n is derived;
    the computation offloading problem is modeled as a game-theoretic model Γ, in which the set of all tasks of all users represents the set of participants, the strategy space of each task i of user n is its decision space, and U_{n,i} represents the utility function that task i of user n minimizes; the game-theoretic model Γ is then expressed by Formula 9,
    where a_{-(n,i)} denotes the offloading decisions of all tasks of all users except task i of user n.
  9. The multi-user multi-task computation offloading method in a mobile edge computing environment according to claim 8, characterized in that the decision-update iteration is completed within one time slot, all users proceed in parallel, and synchronization is achieved by the clock signal of the wireless base station; each time slot t comprises the following two stages:
    collecting task offloading costs: according to the decision vector a(t) of time slot t, the base station calculates the task offloading cost of task i of user n when it selects channel m and sends it to user n; in this stage, each user n collects from the base station the offloading cost of each of its tasks on each channel;
    updating task decisions: in this stage, at most one user is allowed to update the current decision of one of its tasks; according to the task offloading costs collected in the first stage, each user n uses Formula 10 to compute its set of task-decision updates.
  10. A multi-user multi-task computation offloading system in a mobile edge computing environment, characterized by comprising:
    a model building module, configured to establish a game-theoretic model of distributed multi-task computation offloading; and
    an offloading decision calculation module, configured for offloading decisions, in which each user makes its task offloading decision locally at the device terminal based on the response-time cost of its computing tasks.
PCT/CN2022/137704 2021-12-25 2022-12-08 Multi-user multi-task computation offloading method and system in a mobile edge computing environment WO2023116460A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111605485.3A CN116339849A (zh) 2021-12-25 2021-12-25 Multi-user multi-task computation offloading method and system in a mobile edge computing environment
CN202111605485.3 2021-12-25

Publications (1)

Publication Number Publication Date
WO2023116460A1 true WO2023116460A1 (zh) 2023-06-29

Family

ID=86877826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137704 WO2023116460A1 (zh) 2021-12-25 2022-12-08 Multi-user multi-task computation offloading method and system in a mobile edge computing environment

Country Status (2)

Country Link
CN (1) CN116339849A (zh)
WO (1) WO2023116460A1 (zh)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3826368A1 (en) * 2019-11-19 2021-05-26 Commissariat à l'énergie atomique et aux énergies alternatives Energy efficient discontinuous mobile edge computing with quality of service guarantees
CN112994911A (zh) * 2019-12-13 2021-06-18 深圳先进技术研究院 Computation offloading method, apparatus and computer-readable storage medium
CN111522666A (zh) * 2020-04-27 2020-08-11 西安工业大学 Cloud robot edge computing offloading model and offloading method therefor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519991A (zh) * 2024-01-04 2024-02-06 中国矿业大学 Intelligent safety dual-prevention risk identification method based on edge-cloud hybrid computing
CN117519991B (zh) * 2024-01-04 2024-03-12 中国矿业大学 Intelligent safety dual-prevention risk identification method based on edge-cloud hybrid computing

Also Published As

Publication number Publication date
CN116339849A (zh) 2023-06-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909775

Country of ref document: EP

Kind code of ref document: A1