CN111262906B - Method for unloading mobile user terminal task under distributed edge computing service system - Google Patents

Info

Publication number
CN111262906B
Authority
CN
China
Prior art keywords
edge node
task
node
computing
resource
Prior art date
Legal status
Active
Application number
CN202010016136.7A
Other languages
Chinese (zh)
Other versions
CN111262906A (en)
Inventor
温武少
李子杰
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202010016136.7A priority Critical patent/CN111262906B/en
Publication of CN111262906A publication Critical patent/CN111262906A/en
Application granted granted Critical
Publication of CN111262906B publication Critical patent/CN111262906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 — Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 — Server selection for load balancing
    • H04L 67/1012 — Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H04L 67/1023 — Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/1095 — Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 — Network traffic management; Network resource management
    • H04W 28/02 — Traffic management, e.g. flow control or congestion control
    • H04W 28/08 — Load balancing or load distribution
    • H04W 28/082 — Load balancing or load distribution among bearers or channels

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention relates to the field of mobile edge computing, and in particular to a user terminal task unloading method for an edge computing service system, comprising the following steps: a local edge node receives a task unloading request from a local terminal and obtains computing resource usage information for itself and its adjacent edge nodes; according to the resources required by the task and the currently available edge node resources, it selects the local edge node, an adjacent edge node, or a computing node of the cloud computing center to execute the task, using a local-resource-priority policy based on resource thresholds; the local edge node then requests the selected edge node or computing node to confirm and execute the task; finally, the edge node or computing node bearing the task returns the execution result to the terminal and sends a task execution record to the cloud computing center. By letting the local edge node decide how unloaded tasks are allocated, the invention improves the rationality and balance of resource usage among edge computing nodes while guaranteeing user quality of experience and quality of service.

Description

Method for unloading mobile user terminal task under distributed edge computing service system
Technical Field
The invention relates to the field of mobile edge computing, in particular to a user terminal task unloading method of an edge computing service system.
Background
With the rapid development of the mobile internet, mobile user devices such as smart phones, tablet computers, smart home appliances and IoT devices are proliferating. Their application scenarios are also becoming richer, producing many computation-intensive use cases such as VR, AR and intelligent recognition. However, because they must remain portable, mobile user devices are constrained in computing power and battery life. Cloud computing technology can lend mobile devices much stronger computing power to meet these application demands, but cloud computing clusters are usually far away from the mobile user device, which introduces large communication delays; such clusters therefore have difficulty guaranteeing the quality of service and quality of experience of mobile users.
Mobile edge computing is a new computing model proposed to solve the above problems. Edge computing deploys computing nodes at the cloud edge, close to the mobile user device, and provides it with computing power, alleviating the device's limited computing resources and energy constraints. Edge computing services are provided to a mobile user device by offloading computing tasks from the device to a mobile edge node for execution.
In conventional mobile edge computing systems, the locations and service targets of edge nodes are mostly fixed by tier and by region for reasons of cost and service quality. A service provider typically deploys an edge computing node in a densely populated area such as a residential district, a campus or an office building to serve the users in that area. In such a system, computing resources cannot be shared between edge nodes, which easily leaves resources idle and wasted. Moreover, the computing capacity of each edge node must be provisioned for the traffic peak of its area to meet service quality requirements, so the node holds a large amount of idle computing resources during off-peak periods, wasting substantial energy and hardware.
A distributed mobile edge computing system instead deploys edge nodes in a decentralized manner at network locations close to the mobile user terminals. These edge nodes are attached to the terminals' network access points, such as the cellular base stations and wireless hotspots to which the mobile user terminals connect, and are configured with far fewer computing resources than the nodes of a conventional mobile edge computing system. Because adjacent edge nodes are geographically close and connected by high-speed wired networks, computing resources and tasks can be borrowed or allocated between them, and using an adjacent edge node does not add much extra delay for the mobile user terminal. However, because of this distributed deployment, a large cluster of edge computing nodes will still waste a great deal of computing resources and energy if there is no reasonable and efficient resource and task allocation strategy.
In mobile edge computing, once a mobile user device decides to offload a task, which edge node should execute it is a critical question. The traffic generated by mobile user devices in a wireless network area can vary dramatically over time; if all offloaded tasks in an area are executed by a single edge node, that node can easily run out of resources at request peaks. Task placement should therefore spread the load as evenly as possible across the edge nodes able to provide the service, while still meeting the users' experience requirements, so that the resources of the edge node cluster are used in a balanced way. This placement decision should be made independently by the local edge node, so that task offloading and resource allocation decisions are obtained in real time. If task placement were decided by the remote cloud computing center, it would introduce the reliability problem of a single point of failure and greatly increase decision latency, because the decision center is far away from the mobile user device.
Disclosure of Invention
The invention aims to solve the reliability and timeliness problems caused by handing the existing task scheduling decision over to a remote data center, and provides a method for unloading mobile user terminal tasks under a distributed edge computing service system, in which the local edge node decides how unloaded tasks are allocated, improving the rationality and balance of resource usage among edge computing nodes while guaranteeing user quality of experience and quality of service.
The technical scheme of the invention is as follows: a task unloading method of a mobile user terminal under a distributed edge computing service system is used for deciding unloading task distribution by a local edge node serving a local terminal, and comprises the following steps:
s1, the local edge node receives a task unloading request of the local terminal;
s2, the local edge node acquires the computing resource use information of the local edge node and the adjacent edge node;
s3, selecting a local edge node, an adjacent edge node or a computing node of a cloud computing center to execute a task by the local edge node according to the resource condition required by the task and the current available edge node resource use condition by adopting a local resource priority strategy based on a resource threshold value;
s4, the local edge node requests the selected edge node or the computing node to confirm and execute the task;
and S5, the edge node or the computing node bearing the task returns the task execution result to the terminal, and sends the task execution record to the cloud computing center.
Compared with the prior art, the invention has the following advantages and effects:
1. The task scheduling decision is executed at the local edge node, avoiding the reliability and availability problems of centralized decision-making.
2. Each local edge node keeps the resource usage of itself and its connected adjacent edge nodes in its own resource information database, and updates that database promptly from the broadcast messages of other nodes. This greatly speeds up decisions and avoids the extra delay of collecting node resource usage centrally.
3. Computing nodes are selected using a local-resource-priority policy based on resource thresholds and a resource-optimized scheduling algorithm for ensuring delay. If the number of edge nodes is N and the number of computing resource types is Q, the algorithm selects a task-bearing node quickly with time complexity O(QN); at the same time, on the premise that the user's service delay is guaranteed, it increases the number of tasks the edge node cluster can bear and reduces unevenness in node resource usage.
Drawings
FIG. 1 is an overall architecture diagram of a distributed edge computing services system of one embodiment of the invention;
FIG. 2 is a task offload flow diagram for one embodiment of the invention;
FIG. 3 is a flow diagram of a local resource prioritization policy based on resource thresholds according to an embodiment of the present invention.
Detailed Description
The advantages and features of the present invention will become more apparent from the following description and claims, taken in conjunction with the accompanying drawings and the following examples, although the embodiments of the invention are not limited thereto.
Examples
As shown in fig. 1, in the present invention a distributed edge computing service system serving a mobile terminal consists of three types of nodes: the local edge node, the set of adjacent edge nodes, and the nodes of the cloud computing center. The local edge node (N_local) is an MEC (Mobile Edge Computing) server deployed in the wireless base station of the wireless network area where the terminal is located, i.e. the wireless cellular network area to which the terminal is connected; it is the edge computing node of that base station and forms the terminal's edge computing local service system. The adjacent edge nodes (assume there are k-1 of them, forming the adjacent edge node set N_near = {N_1, N_2, …, N_{k-1}}) are deployed in the base stations of adjacent wireless network areas and are connected to the local edge node by a high-speed private network. Together with the local edge node they form an edge node group N_k = {N_local, N_near} that can serve mobile terminal applications, i.e. the set of edge nodes that the edge computing local service system can schedule and use on demand according to a preset scheduling principle, called the available edge node group N_k for short, which contains k edge nodes (also called edge computing nodes). The cloud computing center is the large data center deployed by the system and is responsible for managing and monitoring all edge nodes; its computing nodes can also provide unloading services to terminals under specific conditions.
The cloud computing center can be responsible for coordinating or assisting the establishment of neighbor relations between edge nodes, assisting the establishment of the communication channels that carry information between edge nodes, implementing functions such as task registration and container image building and distribution, and storing the execution records of all task unloadings; these records are used to compute quality of service (QoS) statistics for the distributed edge computing service system and to monitor and raise alarms on the operating condition of the edge nodes.
The mobile user terminal generally communicates directly with the local edge node, uses the services of the distributed edge computing service system, and performs operations such as computing task offloading.
The mobile user terminal runs an application analyzer and an unloading manager in its operating system. The application analyzer detects in real time the offloadability and complexity of the applications and computing tasks currently running, monitors information such as the terminal's battery level, CPU and GPU utilization, wireless network state, and connection status with the edge node, and decides whether to unload a computing task. The unloading manager manages the life cycle of each task unloading process, including saving the context of the running program, packaging and sending the task unloading request, receiving the task execution result, and restoring the application environment.
For a mobile user terminal, once it decides to offload its own part of the computation task to the distributed edge computation service system, it sends an offload request to the local edge node, but does not know which edge computation node actually performs the task, and the quality of service (QoS) of the computation task should be guaranteed by the distributed edge computation service system.
Each edge node has a resource information database for storing the computing resource use information of the edge node and the adjacent edge nodes, and when the edge node makes a decision, the current computing resource use information of each edge node is obtained from the resource information database.
When the computing resources of an edge node change, the node broadcasts a resource change message to its adjacent edge nodes; the adjacent edge nodes subscribe to and receive these resource change messages and then update their own resource information databases to keep the information synchronized.
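A minimal sketch of how each edge node might maintain such a resource information database with broadcast/subscribe updates is shown below; the class and method names (ResourceInfoDB, apply_change, broadcast_change) and the message format are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class NodeResources:
    available: Dict[str, float]      # C_i: currently available amount per resource type
    upper_limit: Dict[str, float]    # resource upper limit per type
    thresholds: Dict[str, tuple]     # (low, mid, high) available-resource thresholds per type

class ResourceInfoDB:
    """Per-node store of its own and its neighbours' resource usage (illustrative)."""
    def __init__(self, node_id: str, own: NodeResources):
        self.node_id = node_id
        self.nodes: Dict[str, NodeResources] = {node_id: own}
        self.subscribers: List[Callable[[dict], None]] = []   # neighbour callbacks, standing in for a message queue

    def apply_change(self, msg: dict) -> None:
        """Handle a resource-change message published by a neighbour and update the local view of it."""
        entry = self.nodes.get(msg["node_id"])
        if entry is not None:
            entry.available.update(msg["available"])

    def broadcast_change(self, available: Dict[str, float]) -> None:
        """Record this node's new availability and publish it to all subscribed neighbours."""
        self.nodes[self.node_id].available.update(available)
        msg = {"node_id": self.node_id, "available": dict(available)}
        for deliver in self.subscribers:
            deliver(msg)
```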
When the local edge node receives an offload task, there are three optional behaviors: local edge node execution, proximate edge node execution, or cloud computing center execution. The local edge node executes a local resource priority strategy based on a resource threshold value and a resource optimization scheduling algorithm for ensuring delay according to the information of the unloaded tasks and the computing resource use information of each edge node to perform task scheduling distribution, and provides a task scheduling method capable of improving the utilization rate and the throughput of system resources under the condition of ensuring task delay.
The task unloading method of the invention uses the local edge node to decide the distribution of the unloading task, as shown in fig. 2, and comprises the following steps:
s1, the local edge node in the distributed edge computing service system receives the task unloading request of the local terminal (such as a mobile user terminal); the method specifically comprises the following steps:
S11, the application analyzer of the local terminal determines whether to unload part of its tasks to the distributed edge computing service system according to its preset running mode and the terminal's current running state, such as battery level, CPU and GPU utilization, task offloadability and complexity, wireless network state, and the connection state with the local edge node.
S12, if the local terminal decides to unload task, the unloading manager builds the characteristic information of unloading demand service and the related index of service quality requirement according to the service request template of local edge node.
Wherein the unloading demand service characteristic information comprises: the number of CPU cycles CC_o required to execute the task; the estimated number of bytes VI_o of input data required to execute the task; the estimated number of bytes VO_o of data returned by executing the task; and the q computing resource requirements r_i (1 ≤ i ≤ q) needed to execute the task, e.g. the CPU resource value r_cpu, memory size r_mem and bandwidth size r_io required to execute the task.

The related indexes of the unloading demand service quality requirement comprise: the average connection delay T_conn^avg and maximum connection delay T_conn^max acceptable between the local terminal and the edge node providing the service; the maximum delay difference T_diff that the task, as set by the local terminal, can tolerate; and the minimum and average bandwidth (B_min, B_avg) required for task execution.
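As an illustration only, the unloading request carrying these fields might be modelled as follows; the field names mirror the symbols above, and the class itself is an assumption rather than a structure defined by the patent.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class OffloadRequest:
    cc_o: float                    # CPU cycles required to execute the task (CC_o)
    vi_o: int                      # estimated input data size in bytes (VI_o)
    vo_o: int                      # estimated returned data size in bytes (VO_o)
    resources: Dict[str, float]    # r_i: e.g. {"cpu": r_cpu, "mem": r_mem, "io": r_io}
    t_conn_avg: float              # acceptable average connection delay
    t_conn_max: float              # acceptable maximum connection delay
    t_diff: float                  # maximum tolerable delay difference set by the terminal
    b_min: float                   # minimum bandwidth required
    b_avg: float                   # average bandwidth required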
And S13, the unloading manager of the local terminal sends the unloading demand service characteristic information and the service quality requirement related index of the task to the local edge node.
S2, the local edge node acquires the computing resource use information of the local edge node and the adjacent edge node;
the computing resource usage information is obtained from a resource information database independently maintained by the local edge node, and includes: available amount of q computing resources (such as CPU, GPU, memory, video memory, disk, network bandwidth) of each edge node in the edge node group
Figure BDA0002358957150000051
Upper limit of resources
Figure BDA0002358957150000052
The preset resource threshold information of each edge node specifically includes preset low, medium and high available resource thresholds
Figure BDA0002358957150000053
Wherein i is more than or equal to 1 and less than or equal to q, j is more than or equal to 1 and less than or equal to k, and
Figure BDA0002358957150000054
the resource information database is independently maintained by each edge node, and is responsible for storing and updating the available quantity, the upper limit information and the resource threshold information of each computing resource of the edge node and the adjacent edge node in real time, subscribing the resource change information issued by the adjacent edge node by using the message queue, and updating the resource information in the resource information database in real time, thereby realizing information synchronization.
And S3, selecting the local edge node, the adjacent edge node or the computing node of the cloud computing center to execute the task by the local edge node according to the resource condition required by the task and the current available edge node resource use condition by adopting a local resource priority strategy based on a resource threshold value.
As shown in fig. 3, the local resource priority policy based on the resource threshold includes the following steps:
S31, for each edge node N_j in the available edge node group N_k, calculate the remaining resource availability it would have if it bore the task: A_i^{N_j} = C_i^{N_j} - r_i, where C_i^{N_j} is the current available amount of resource i on node N_j and r_i is the amount of resource i required by the task (1 ≤ i ≤ q, 1 ≤ j ≤ k).

S32, if every computing resource of the local edge node satisfies: the remaining availability is greater than the local edge node's preset high available-resource threshold but less than the node's resource upper limit, i.e. θ_i^{N_local,high} < A_i^{N_local} < Ĉ_i^{N_local} for all i, where θ_i^{N_local,high} is the preset high available-resource threshold of the local edge node, A_i^{N_local} is its remaining resource availability and Ĉ_i^{N_local} is its resource upper limit, then the unloading task is executed at the local edge node; otherwise go to step S33.

S33, if every computing resource of the local edge node satisfies: the remaining availability is greater than the local edge node's preset medium available-resource threshold, i.e. A_i^{N_local} > θ_i^{N_local,mid} for all i, then find the set N_c of adjacent edge nodes whose every computing resource satisfies A_i^{N_j} > θ_i^{N_j,high}, form the node set N_s = {N_c, N_local} together with the local edge node N_local, and randomly select one edge node from N_s to execute the task; otherwise go to step S34.

S34, if every resource of the local edge node satisfies: the remaining availability is greater than the local edge node's preset low available-resource threshold, i.e. A_i^{N_local} > θ_i^{N_local,low} for all i, execute the resource-optimized scheduling algorithm for ensuring delay and select an edge node to execute the task; otherwise go to step S35.
the resource optimization scheduling algorithm for ensuring the delay is used for solving the problems of task execution delay and edge node resource load optimization, and the problems are solved by establishing an unloading task placement model, a task execution delay model and an edge node resource load model.
For each unloading task, one and only one edge node is responsible for executing it, so the placement strategy of the task can be expressed as:

x_j ∈ {0, 1},  Σ_{j=1}^{k} x_j = 1,

where x_j = 1 indicates that edge node N_j is selected to execute the task.
In the task execution delay model, the task execution delay consists mainly of the data transmission delay and the computation delay, so the execution delay of task t can be expressed as:

T_delay = T_trans + T_comp.

The data transmission delay can be expressed as:

T_trans = (VI_o + VO_o) / r_io + T_conn^{N_j},

where r_io is the bandwidth resource requested for the task and T_conn^{N_j} is the estimated connection delay between edge node N_j and the terminal.

The computation delay can be expressed as:

T_comp = CC_o / (Freq_j · r_cpu / Ĉ_cpu^{N_j}),

where Freq_j is the CPU frequency of edge node N_j, r_cpu is the relative size of the CPU resource requested for the task, C_cpu^{N_j} is the CPU computing resource availability of the edge node, A_cpu^{N_j} = C_cpu^{N_j} - r_cpu is the estimated remaining available CPU resource of edge node N_j after executing the task, and Ĉ_cpu^{N_j} is the CPU resource upper limit of edge node N_j.
The edge node resource load model can be expressed as:

L_j = (1/q) · Σ_{i=1}^{q} (Ĉ_i^{N_j} - C_i^{N_j} + r_i) / Ĉ_i^{N_j},

where r_i is each of the q computing resource requirements needed to execute the task, Ĉ_i^{N_j} is the resource upper limit, and C_i^{N_j} is the available amount of each of the q computing resources of the edge node.

The task execution delay and edge node resource load optimization problem can then be expressed as the joint minimization

min_{x_j} ( T_delay, L_j )  s.t.  x_j ∈ {0, 1}, Σ_{j=1}^{k} x_j = 1,

i.e. choosing the bearing node so that both the execution delay of the task and the resource load of the selected edge node are as small as possible.
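To make the two models concrete, the sketch below evaluates the transmission delay, computation delay and resource load of a candidate node; the exact formulas (in particular the CPU-share term and the averaged load) are reconstructions of the patent's figure-rendered equations and should be read as assumptions.

```python
def transmission_delay(req: "OffloadRequest", node: "EdgeNodeState") -> float:
    # T_trans = (VI_o + VO_o) / r_io + T_conn^{N_j}   (assumed reconstruction)
    return (req.vi_o + req.vo_o) / req.resources["io"] + node.t_conn

def computation_delay(req: "OffloadRequest", node: "EdgeNodeState") -> float:
    # T_comp = CC_o / (Freq_j * r_cpu / C_cpu_upper)  (assumed reconstruction)
    cpu_share = req.resources["cpu"] / node.upper_limit["cpu"]
    return req.cc_o / (node.freq * cpu_share)

def execution_delay(req, node) -> float:
    # T_delay = T_trans + T_comp
    return transmission_delay(req, node) + computation_delay(req, node)

def resource_load(req, node) -> float:
    # L_j: average fraction of each resource's upper limit in use after bearing the task (assumed)
    used = [(node.upper_limit[i] - node.available[i] + req.resources.get(i, 0.0)) / node.upper_limit[i]
            for i in node.upper_limit]
    return sum(used) / len(used)
```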
the resource optimization scheduling algorithm for ensuring the delay is used for obtaining the pareto optimal solution of the optimization problem, and comprises the following steps:
d1 calculation task acceptable delay benchmark
Figure BDA0002358957150000072
Figure BDA0002358957150000073
Wherein
Figure BDA0002358957150000074
Is an edge node NjEstimated connection delay with terminal, FreqjIs an edge node NjThe CPU frequency of (1);
d2, from available edge node group NkFind satisfaction
Figure BDA0002358957150000075
Guaranteed latency candidate node set Ndc
D3, from candidate node set NdcCalculate their priority scores P (assuming the candidate node set has m edge nodes)j=Lj+BjJ is more than or equal to 1 and less than or equal to m, wherein
Figure BDA0002358957150000076
Is the load factor of the edge node,
Figure BDA0002358957150000077
is a margin weight value of each computing resource preset by the edge node,
Figure BDA0002358957150000078
is the equalization coefficient of the node;
and D4, selecting the edge node with the highest priority score for task execution.
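A compact sketch of steps D1–D4 under the reconstructions above follows; the baseline rule (best achievable delay plus T_diff) and the concrete forms of the load and balance terms are assumptions, since the patent renders those formulas as images.

```python
from typing import Dict, List, Optional

def delay_guaranteed_select(req, nodes: List["EdgeNodeState"],
                            margin_weights: Dict[str, Dict[str, float]],
                            penalty: Optional[Dict[str, float]] = None) -> Optional["EdgeNodeState"]:
    # D1: acceptable delay baseline (assumed: best achievable delay plus the tolerated difference T_diff)
    delays = {n.node_id: execution_delay(req, n) for n in nodes}
    t_base = min(delays.values()) + req.t_diff

    # D2: keep only delay-guaranteed candidates
    candidates = [n for n in nodes if delays[n.node_id] <= t_base]
    if not candidates:
        return None

    # D3: priority score P_j = L_j + B_j (the with-penalty variant of step S35 subtracts a preset penalty)
    def score(n: "EdgeNodeState") -> float:
        w = margin_weights[n.node_id]
        margins = {i: (n.available[i] - req.resources.get(i, 0.0)) / n.upper_limit[i] for i in n.upper_limit}
        l_j = sum(w[i] * margins[i] for i in margins)             # weighted resource margin (assumed form of L_j)
        mean = sum(margins.values()) / len(margins)
        b_j = 1.0 - max(abs(m - mean) for m in margins.values())  # balance across resources (assumed form of B_j)
        p_j = l_j + b_j
        if penalty is not None:
            p_j -= penalty[n.node_id]
        return p_j

    # D4: the node with the highest priority score bears the task
    return max(candidates, key=score)
```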
S35, if any computing resource of the local edge node satisfies: the remaining availability is less than the local edge node's preset low available-resource threshold, i.e. A_i^{N_local} < θ_i^{N_local,low} for some i, then determine whether there are adjacent edge nodes whose every computing resource satisfies: the remaining availability is greater than that adjacent edge node's preset low available-resource threshold, i.e. A_i^{N_j} > θ_i^{N_j,low} for all i, forming the adjacent edge node set N_wc. If such nodes exist, the set N_wc and the local edge node N_local form the node set N_ws = {N_wc, N_local}, and the resource-optimized scheduling algorithm with penalty for ensuring delay is executed over N_ws to select an edge node to execute the task; otherwise go to step S36.

The resource-optimized scheduling algorithm with penalty for ensuring delay selects a node from the edge node set N_ws of w edge nodes. It takes the same steps as the resource-optimized scheduling algorithm for ensuring delay, except that the priority score in step D3 additionally incorporates a load penalty coefficient φ^{N_j} preset for each edge node. Specifically, it comprises the following steps:
D1, calculate the acceptable delay baseline T_base of the task, where T_conn^{N_j} is the estimated connection delay between edge node N_j and the terminal, Freq_j is the CPU frequency of edge node N_j, CC_o is the number of CPU cycles required to execute the task, VI_o is the estimated number of bytes of input data required to execute the task, VO_o is the estimated number of bytes of data returned by the task, r_cpu is the relative size of the CPU resource requested by the task, C_cpu^{N_j} is the CPU computing resource availability of the edge node, A_cpu^{N_j} = C_cpu^{N_j} - r_cpu is the estimated remaining available CPU resource of edge node N_j after executing the task, Ĉ_cpu^{N_j} is the CPU resource upper limit of edge node N_j, and r_i (1 ≤ i ≤ q) are the q computing resource requirements needed to execute the task, including the CPU resource value r_cpu, memory size r_mem and bandwidth size r_io;

D2, given that the available edge node group N_ws has w edge nodes, find within N_ws the delay-guaranteed candidate node set N_dc, i.e. the nodes satisfying T_delay(N_j) ≤ T_base;

D3, for the candidate node set N_dc, calculate the priority score of each of its m edge nodes, which combines the load factor L_j of the edge node, derived from the margin weight values w_i^{N_j} preset by the edge node for each computing resource, the balance coefficient B_j of the node, and the load penalty coefficient φ^{N_j} set for the edge node;

D4, select the edge node with the highest priority score to execute the task.
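Under the earlier sketch, this with-penalty variant corresponds to calling the same selection routine with each node's preset penalty coefficient supplied; the subtractive use of the penalty and the concrete values are assumptions.

```python
# Hypothetical penalty coefficients preset per edge node (illustrative values only)
penalties = {"N_local": 0.3, "N_1": 0.1, "N_2": 0.0}
chosen = delay_guaranteed_select(req, n_ws_nodes, margin_weights, penalty=penalties)
```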
S36, if no edge node in the set formed by the local edge node and the adjacent edge nodes can keep the remaining availability A_i^{N_j} = C_i^{N_j} - r_i of every computing resource above its preset low available-resource threshold, the task is executed at a computing node of the cloud computing center.
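Putting steps S31–S36 together, the local-resource-priority policy can be sketched as the decision cascade below; the tier helper and node fields come from the earlier sketches, and CLOUD is a stand-in marker for handing the task to the cloud computing center.

```python
import random
from typing import List

CLOUD = "cloud-computing-center"   # stand-in marker, not a node object

def local_resource_priority(req, local: "EdgeNodeState", neighbours: List["EdgeNodeState"],
                            margin_weights) -> object:
    rem_local = local.remaining_after(req.resources)                        # S31
    tier_local = local.tier(rem_local)

    if tier_local == "high":                                                # S32: ample local resources
        return local

    if tier_local == "mid":                                                 # S33: local or a high-tier neighbour, at random
        n_c = [n for n in neighbours if n.tier(n.remaining_after(req.resources)) == "high"]
        return random.choice(n_c + [local])

    if tier_local == "low":                                                 # S34: delay-guaranteed scheduling over the group
        return delay_guaranteed_select(req, [local] + neighbours, margin_weights)

    # S35: local is below its low threshold; try neighbours above theirs, with a penalty applied
    n_wc = [n for n in neighbours if n.tier(n.remaining_after(req.resources)) != "exhausted"]
    if n_wc:
        penalties = {n.node_id: 0.0 for n in n_wc}
        penalties[local.node_id] = 1.0                                      # illustrative penalty value
        return delay_guaranteed_select(req, n_wc + [local], margin_weights, penalty=penalties)

    return CLOUD                                                             # S36: fall back to the cloud computing center
```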
S4, the local edge node requests the selected edge node or the computing node to confirm and execute the task; the method specifically comprises the following steps:
and S41, the local edge node sends the unloading demand information of the terminal to the selected edge node or the computing node.
S42, the selected edge node or computing node performs a check according to the unloading demand information: whether its computing resources meet the service characteristic information required by the terminal, and whether its network connection to the terminal meets the required quality of service indexes. If both are met, it reserves computing resources for the task and sends a task execution confirmation message to the local edge node; otherwise it returns an unloading scheduling failure message to the local edge node, which then reschedules other resources to execute the task, marks the selected edge node or computing node as a node that cannot support execution of this unloading task, and repeats step S3.
When checking the quality of service indexes, the edge node needs to determine, from its current connection state with the terminal, whether the task's average and maximum connection delay requirements (T_conn^avg, T_conn^max) and the task's minimum and average bandwidth requirements (B_min, B_avg) are both satisfied.
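A small sketch of that admission check on the selected node; avg_delay, max_delay and avg_bandwidth stand for whatever link measurements the node has for this terminal and are assumptions.

```python
def can_accept(req: "OffloadRequest", node: "EdgeNodeState",
               avg_delay: float, max_delay: float, avg_bandwidth: float) -> bool:
    # Resource check: every requested resource must fit within what the node currently has available
    resources_ok = all(node.available.get(i, 0.0) >= amount for i, amount in req.resources.items())
    # QoS check: the current link to the terminal must meet the task's delay and bandwidth indexes
    qos_ok = (avg_delay <= req.t_conn_avg and max_delay <= req.t_conn_max
              and avg_bandwidth >= req.b_avg and avg_bandwidth >= req.b_min)
    return resources_ok and qos_ok
```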
S43, the edge node or the computing node executing the task creates task unloading service and broadcasts the resource use change message to the adjacent edge node;
when the executed task is the edge node, the edge node checks whether a mirror image of the task exists from a container mirror image warehouse of the edge node, if yes, a container of task unloading service is directly created, otherwise, the task mirror image is pulled to the cloud computing center, and then the task unloading service container is created. The edge node also broadcasts its resource usage change information to its neighboring edge nodes.
And S44, the local edge node sends task service information to the terminal.
S45, the unloading manager of the terminal saves the current task context environment, transfers the task to the edge node or the computing node bearing the service, establishes connection with the edge node or the computing node bearing the task, uses the task unloading service, and sends the input data of the task to the edge node or the computing node.
And S46, receiving the task input data by the edge node or the computing node bearing the task, and executing the task.
S5, the edge node or the computing node bearing the task returns a task execution result to the terminal, and sends a task execution record to the cloud computing center; the method specifically comprises the following steps:
and S51, the edge node or the computing node sends the computing result or the data after the task is executed to the terminal.
And S52, receiving and extracting the task execution result or data by the unloading manager of the terminal, recovering the context environment of the task, and applying the task execution result.
And S53, ending the task unloading service by the edge node or the computing node, recovering the computing resource, and broadcasting the resource change message to the adjacent edge node.
And S54, the edge node or the computing node sends the execution record of the task, including the execution time of the task, the request time, the used computing resources and the running log to the cloud computing center.
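The execution record reported in S54 might simply bundle those fields; the structure below is illustrative only.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ExecutionRecord:
    task_id: str
    request_time: float              # when the unloading request was received
    execution_time: float            # how long the task ran on the node
    resources_used: Dict[str, float] # computing resources consumed by the task
    run_log: str                     # content (or location) of the task's running log
```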
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. A task unloading method of a mobile user terminal under a distributed edge computing service system is characterized in that a local edge node serving a local terminal decides to unload task allocation, and the unloading method comprises the following steps:
s1, the local edge node receives a task unloading request of the local terminal;
s2, the local edge node acquires the computing resource use information of the local edge node and the adjacent edge node;
s3, selecting a local edge node, an adjacent edge node or a computing node of a cloud computing center to execute a task by the local edge node according to the resource condition required by the task and the current available edge node resource use condition by adopting a local resource priority strategy based on a resource threshold value;
s4, the local edge node requests the selected edge node or the computing node to confirm and execute the task;
s5, the edge node or the computing node bearing the task returns a task execution result to the terminal, and sends a task execution record to the cloud computing center;
Let there be k-1 adjacent edge nodes, which together with the local edge node form an available edge node group N_k containing k edge nodes; the computing resource usage information in step S2 includes: the available amount C_i^{N_j} of each of the q computing resources of each edge node in the available edge node group, the resource upper limit Ĉ_i^{N_j}, and the preset resource threshold information of each edge node, comprising the preset low, medium and high available-resource thresholds θ_i^{N_j,low}, θ_i^{N_j,mid}, θ_i^{N_j,high}, where 1 ≤ i ≤ q, 1 ≤ j ≤ k and 0 ≤ θ_i^{N_j,low} < θ_i^{N_j,mid} < θ_i^{N_j,high} < Ĉ_i^{N_j};

step S3 includes:

S31, for each edge node N_j in the available edge node group N_k, calculating the remaining resource availability A_i^{N_j} = C_i^{N_j} - r_i it would have if it bore the task;

S32, if all the computing resources of the local edge node satisfy: the remaining resource availability is greater than the preset high available-resource threshold of the local edge node but less than the resource upper limit of the local edge node, executing the unloading task at the local edge node, otherwise proceeding to step S33;

S33, if all the computing resources of the local edge node satisfy: the remaining resource availability is greater than the preset medium available-resource threshold of the local edge node, searching for the adjacent edge node set N_c whose every computing resource satisfies: the remaining resource availability is greater than its preset high available-resource threshold, forming the node set N_s = {N_c, N_Local} together with the local edge node N_Local, and randomly selecting one edge node from N_s to execute the task, otherwise proceeding to step S34;

S34, if all the resources of the local edge node satisfy: the remaining resource availability is greater than the preset low available-resource threshold of the local edge node, executing the resource-optimized scheduling algorithm for ensuring delay and selecting an edge node to execute the task, otherwise proceeding to step S35;

S35, if any computing resource of the local edge node satisfies: the remaining resource availability is less than the preset low available-resource threshold of the local edge node, determining whether there exists an adjacent edge node set N_wc whose every computing resource satisfies: the remaining resource availability is greater than the preset low available-resource threshold of that adjacent edge node; if it exists, forming the available edge node group N_ws = {N_wc, N_Local} from N_wc and the local edge node N_Local, executing over N_ws the resource-optimized scheduling algorithm with penalty for ensuring delay and selecting an edge node to execute the task, otherwise proceeding to step S36;

S36, if for every edge node in the node set formed by the local edge node and the adjacent edge nodes there is at least one computing resource whose remaining availability A_i^{N_j} = C_i^{N_j} - r_i is below its preset low available-resource threshold, executing the task at a computing node of the cloud computing center;

wherein r_i (1 ≤ i ≤ q) are the q computing resource requirements needed to execute the task.
2. The method of claim 1, wherein step S1 includes:
s11, the application analyzer of the local terminal determines whether to unload part of tasks to the distributed edge computing service system according to the preset running mode and the current running state of the application analyzer;
s12, if the local terminal decides to unload task, the unloading manager builds the characteristic information of unloading demand service and the related index of service quality requirement according to the service request template of local edge node;
the unloading demand service characteristic information comprises: number of CPU cycles CC required to execute a tasko(ii) a Estimated byte number VI of input data required by executing tasko(ii) a VO (byte number of estimated data) returned by executing tasko(ii) a Q computing resource requirements r required to execute a taskiWherein i is more than or equal to 1 and less than or equal to q, and comprises a CPU resource value r required by executing a taskcpuSize of memory rmemBandwidth size rio
The relevant indexes of the unloading demand service quality requirement comprise: average connection delay acceptable for local terminal and edge node providing service
Figure FDA0002957426980000023
And maximum connection delay
Figure FDA0002957426980000024
The task set by the local terminal can bear the delay difference at the maximum
Figure FDA0002957426980000025
Minimum bandwidth B required for task executionminAverage bandwidth Bavg
And S13, the unloading manager of the local terminal sends the unloading demand service characteristic information and the service quality requirement related index of the task to the local edge node.
3. The method of claim 1, wherein the computing resource usage information in step S2 is obtained from a resource information database maintained independently by the local edge node; the resource information database is independently maintained by each edge node, is responsible for storing and updating the available quantity, the upper limit information and the resource threshold information of each computing resource of the edge node and the adjacent edge node in real time, subscribes resource change information published by the adjacent edge node by using a message queue, and immediately updates the resource information in the resource information database, thereby realizing information synchronization.
4. The method of claim 1, wherein the step of executing a resource-optimized scheduling algorithm for ensuring delay in step S34 comprises the steps of:
D1, calculating the acceptable delay baseline T_base of the task, wherein T_conn^{N_j} is the estimated connection delay between edge node N_j and the terminal, Freq_j is the CPU frequency of edge node N_j, CC_o is the number of CPU cycles required to execute the task, VI_o is the estimated number of bytes of input data required to execute the task, VO_o is the estimated number of bytes of data returned by executing the task, r_cpu is the relative size of the CPU resource requested by the task, C_cpu^{N_j} is the CPU computing resource availability of the edge node, A_cpu^{N_j} = C_cpu^{N_j} - r_cpu is the estimated remaining available CPU resource of edge node N_j after executing the task, Ĉ_cpu^{N_j} is the CPU resource upper limit of edge node N_j, r_i (1 ≤ i ≤ q) are the q computing resource requirements needed to execute the task, including the CPU resource value r_cpu, memory size r_mem and bandwidth size r_io, and T_diff is the maximum delay difference the task, as set by the local terminal, can tolerate;

D2, from the available edge node group N_k, finding the delay-guaranteed candidate node set N_dc of nodes satisfying T_delay(N_j) ≤ T_base;

D3, for the candidate node set N_dc, calculating the priority score of each of its m edge nodes, P_j = L_j + B_j, 1 ≤ j ≤ m, wherein L_j is the load factor of the edge node, derived from the margin weight values w_i^{N_j} preset by the edge node for each computing resource, and B_j is the balance coefficient of the node;
and D4, selecting the edge node with the highest priority score for task execution.
5. The method of claim 1, wherein the step S35 of executing a resource-optimized scheduling algorithm with penalty delay guarantee comprises the steps of:
D1, calculating the acceptable delay baseline T_base of the task, wherein T_conn^{N_j} is the estimated connection delay between edge node N_j and the terminal, Freq_j is the CPU frequency of edge node N_j, CC_o is the number of CPU cycles required to execute the task, VI_o is the estimated number of bytes of input data required to execute the task, VO_o is the estimated number of bytes of data returned by executing the task, r_cpu is the relative size of the CPU resource requested by the task, C_cpu^{N_j} is the CPU computing resource availability of the edge node, A_cpu^{N_j} = C_cpu^{N_j} - r_cpu is the estimated remaining available CPU resource of edge node N_j after executing the task, Ĉ_cpu^{N_j} is the CPU resource upper limit of edge node N_j, r_i (1 ≤ i ≤ q) are the q computing resource requirements needed to execute the task, including the CPU resource value r_cpu, memory size r_mem and bandwidth size r_io, and T_diff is the maximum delay difference the task, as set by the local terminal, can tolerate;

D2, given that the available edge node group N_ws has w edge nodes, finding within N_ws the delay-guaranteed candidate node set N_dc of nodes satisfying T_delay(N_j) ≤ T_base;

D3, for the candidate node set N_dc, calculating the priority score of each of its m edge nodes, which combines the load factor L_j of the edge node, derived from the margin weight values w_i^{N_j} preset by the edge node for each computing resource, the balance coefficient B_j of the node, and the load penalty coefficient φ^{N_j} set for the edge node;
and D4, selecting the edge node with the highest priority score for task execution.
6. The method of claim 1, wherein the step S4 of confirming the task execution request from the local edge node to the selected edge node comprises the steps of:
s41, the local edge node sends the unloading demand information of the terminal to the selected edge node or the computing node;
S42, the selected edge node or computing node performs a check according to the unloading demand information: whether its computing resources meet the service characteristic information required by the terminal, and whether its network connection to the terminal meets the required quality of service indexes; if both are met, reserving computing resources for the task and sending a task execution confirmation message to the local edge node; otherwise returning an unloading scheduling failure message to the local edge node and notifying the local edge node to reschedule other resources to execute the task;
s43, the edge node or the computing node executing the task creates task unloading service and broadcasts the resource use change message to the adjacent edge node;
s44, the local edge node sends task service information to the terminal;
s45, the unloading manager of the terminal saves the context environment of the current task, transfers the task to the edge node or the computing node providing service, establishes connection with the edge node or the computing node bearing the task, uses the task unloading service, and sends the input data of the task to the edge node or the computing node;
and S46, receiving the task input data by the edge node or the computing node bearing the task, and executing the task.
7. The method of claim 1, wherein step S5 includes:
s51, the edge node or the computing node sends the computing result or data after the task is executed to the terminal;
s52, the unloading manager of the terminal receives and extracts the task execution result or data, restores the context environment of the task and applies the task execution result;
s53, the edge node or the computing node finishes the task unloading service, recovers the computing resource and broadcasts the resource change message to the adjacent edge node;
and S54, the edge node or the computing node sends the execution record of the task, including the execution time of the task, the request time, the used computing resources and the running log to the cloud computing center.
CN202010016136.7A 2020-01-08 2020-01-08 Method for unloading mobile user terminal task under distributed edge computing service system Active CN111262906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010016136.7A CN111262906B (en) 2020-01-08 2020-01-08 Method for unloading mobile user terminal task under distributed edge computing service system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010016136.7A CN111262906B (en) 2020-01-08 2020-01-08 Method for unloading mobile user terminal task under distributed edge computing service system

Publications (2)

Publication Number Publication Date
CN111262906A CN111262906A (en) 2020-06-09
CN111262906B true CN111262906B (en) 2021-05-25

Family

ID=70948577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010016136.7A Active CN111262906B (en) 2020-01-08 2020-01-08 Method for unloading mobile user terminal task under distributed edge computing service system

Country Status (1)

Country Link
CN (1) CN111262906B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11368539B1 (en) * 2021-05-27 2022-06-21 International Business Machines Corporation Application deployment in a multi-cluster environment

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102418896B1 (en) * 2020-07-03 2022-07-11 한국전자통신연구원 Apparatus and method for intelligent scheduling
US11838384B2 (en) 2020-07-03 2023-12-05 Electronics And Telecommunications Research Institute Intelligent scheduling apparatus and method
CN111858051B (en) * 2020-07-20 2023-09-05 国网四川省电力公司电力科学研究院 Real-time dynamic scheduling method, system and medium suitable for edge computing environment
CN111737009A (en) * 2020-07-28 2020-10-02 北京千丁互联科技有限公司 Local end and cloud computing distribution method and device and local server
CN111984408B (en) * 2020-08-14 2021-04-20 昆山华泛信息服务有限公司 Data cooperative processing method based on big data and edge computing and edge cloud platform
CN112039992B (en) * 2020-09-01 2022-10-28 平安资产管理有限责任公司 Model management method and system based on cloud computing architecture
CN112272201B (en) * 2020-09-15 2022-05-27 网宿科技股份有限公司 Equipment management method, system and management cluster
CN112134802A (en) * 2020-09-23 2020-12-25 杭州雾联科技有限公司 Edge computing power resource scheduling method and system based on terminal triggering
CN112671830B (en) * 2020-12-02 2023-05-30 武汉联影医疗科技有限公司 Resource scheduling method, system, device, computer equipment and storage medium
CN112202896A (en) * 2020-09-30 2021-01-08 中移(杭州)信息技术有限公司 Edge calculation method, frame, terminal and storage medium
CN112291304B (en) * 2020-09-30 2024-03-29 国电南瑞科技股份有限公司 Edge internet of things proxy equipment and combined message processing method thereof
CN112312325B (en) * 2020-10-29 2022-08-16 陕西师范大学 Mobile edge task unloading method based on three decision models
CN112543481B (en) * 2020-11-23 2023-09-15 中国联合网络通信集团有限公司 Method, device and system for balancing computing force load of edge node
CN113064715B (en) * 2020-12-16 2024-04-26 上海金融期货信息技术有限公司 Load balancing and caching center system and wind control system for financial field
CN112714164A (en) * 2020-12-22 2021-04-27 北京国电通网络技术有限公司 Internet of things system and task scheduling method thereof
CN112559078B (en) * 2020-12-22 2023-03-21 杭州电子科技大学 Method and system for hierarchically unloading tasks of mobile edge computing server
CN112559187A (en) * 2020-12-22 2021-03-26 杭州电子科技大学 Method and system for dynamically allocating tasks to mobile edge computing server
CN112612553B (en) * 2021-01-06 2023-09-26 重庆邮电大学 Edge computing task unloading method based on container technology
WO2022160155A1 (en) * 2021-01-28 2022-08-04 华为技术有限公司 Method and apparatus for model management
EP4075691B1 (en) * 2021-02-20 2024-05-01 Wangsu Science & Technology Co., Ltd. Resource requesting method and terminal
CN112882809A (en) * 2021-02-23 2021-06-01 国汽(北京)智能网联汽车研究院有限公司 Method and device for determining computing terminal of driving task and computer equipment
CN112799789B (en) * 2021-03-22 2023-08-11 腾讯科技(深圳)有限公司 Node cluster management method, device, equipment and storage medium
CN113032149B (en) * 2021-03-25 2023-09-26 中山大学 Edge computing service placement and request distribution method and system based on evolution game
CN113315818B (en) * 2021-05-10 2023-03-24 华东桐柏抽水蓄能发电有限责任公司 Data acquisition terminal resource adaptation method based on edge calculation
CN113572667B (en) * 2021-06-11 2022-10-28 青岛海尔科技有限公司 Method and device for registering edge computing node and intelligent home system
CN113391647B (en) * 2021-07-20 2022-07-01 中国人民解放军国防科技大学 Multi-unmanned aerial vehicle edge computing service deployment and scheduling method and system
CN113806018B (en) * 2021-09-13 2023-08-01 北京计算机技术及应用研究所 Kubernetes cluster resource mixed scheduling method based on neural network and distributed cache
CN113900837A (en) * 2021-10-18 2022-01-07 中国联合网络通信集团有限公司 Computing power network processing method, device, equipment and storage medium
CN114039700A (en) * 2021-10-26 2022-02-11 南通先进通信技术研究院有限公司 Edge computing system for low-bandwidth satellite communication and working method
CN113891114B (en) * 2021-11-18 2023-12-15 上海哔哩哔哩科技有限公司 Transcoding task scheduling method and device
CN113886094B (en) * 2021-12-07 2022-04-26 浙江大云物联科技有限公司 Resource scheduling method and device based on edge calculation
CN114301924A (en) * 2021-12-09 2022-04-08 中国电子科技集团公司电子科学研究院 Application task scheduling method and node equipment for cloud edge collaborative environment
CN113992691B (en) * 2021-12-24 2022-04-22 苏州浪潮智能科技有限公司 Method, device and equipment for distributing edge computing resources and storage medium
CN115314363B (en) * 2022-02-22 2024-04-12 网宿科技股份有限公司 Service recovery method, service deployment method, server and storage medium
CN115002108B (en) * 2022-05-16 2023-04-14 电子科技大学 Networking and task unloading method for smart phone serving as computing service node
WO2024001302A1 (en) * 2022-06-30 2024-01-04 华为云计算技术有限公司 Mapping system and related method
CN115396515A (en) * 2022-08-19 2022-11-25 中国联合网络通信集团有限公司 Resource scheduling method, device and storage medium
CN115766721A (en) * 2022-11-21 2023-03-07 中国联合网络通信集团有限公司 Service transmission method, device and storage medium thereof
CN116347608B (en) * 2023-04-19 2024-03-15 湖南科技学院 Time division resource self-adaptive adjustment method
CN116208625B (en) * 2023-05-06 2023-07-25 浪潮通信技术有限公司 Information synchronization method, device, electronic equipment and computer readable storage medium
CN116668447B (en) * 2023-08-01 2023-10-20 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight
CN116708445B (en) * 2023-08-08 2024-05-28 北京智芯微电子科技有限公司 Distribution method, distribution network system, device and storage medium for edge computing task

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911478A (en) * 2017-12-06 2018-04-13 武汉理工大学 Multi-user based on chemical reaction optimization algorithm calculates discharging method and device
CN109274745A (en) * 2018-09-28 2019-01-25 北京北斗方圆电子科技有限公司 A kind of Internet of things system and method for fringe node optimization calculating
CN109587735A (en) * 2018-11-12 2019-04-05 电子科技大学 A kind of cooperation with service caching method for measuring mobile edge calculations based on time delay
CN109905470A (en) * 2019-02-18 2019-06-18 南京邮电大学 A kind of expense optimization method for scheduling task based on Border Gateway system
CN110413392A (en) * 2019-07-25 2019-11-05 北京工业大学 The method of single task migration strategy is formulated under a kind of mobile edge calculations scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Joint Optimization for Task Offloading in Edge Computing: An Evolutionary Game Approach; Chongwu Dong et al.; Sensors; 2019-02-12 *
A Survey of Mobile Edge Computing Offloading Technology (移动边缘计算卸载技术综述); Xie Renchao et al.; Journal on Communications (通信学报); November 2018; Vol. 39, No. 11 *

Also Published As

Publication number Publication date
CN111262906A (en) 2020-06-09


Legal Events

Code — Title
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant