CN115225496A - Mobile sensing service unloading fault-tolerant method based on edge computing environment - Google Patents

Mobile sensing service unloading fault-tolerant method based on edge computing environment

Info

Publication number
CN115225496A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN202210748556.3A
Other languages
Chinese (zh)
Other versions
CN115225496B (en)
Inventor
龙廷艳
钟淘淘
邓雨婷
谭小龙
夏云霓
李玲
Current Assignee
Chongqing Jinyuyun Energy Technology Co ltd
Chongqing University
Original Assignee
Chongqing Jinyuyun Energy Technology Co ltd
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing Jinyuyun Energy Technology Co ltd and Chongqing University
Priority to CN202210748556.3A
Publication of CN115225496A
Application granted
Publication of CN115225496B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/0631 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Telephone Function (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Power Sources (AREA)

Abstract

The invention provides a mobility-aware service offloading fault-tolerant method based on an edge computing environment, which comprises the following steps: S1, acquiring data information on one or any combination of the task offloading state of a moving user, server state information, task offloading and uploading delay, and task offloading energy consumption; and S2, performing edge offloading optimization according to the data information acquired in step S1. The invention prevents a user's offloaded tasks from going unprocessed when an edge server fails; it optimizes the delay and energy consumption of the edge users' offloaded tasks and effectively increases the resource utilization of the system.

Description

Mobile sensing service unloading fault-tolerant method based on edge computing environment
Technical Field
The invention relates to the technical field of edge computing, in particular to a mobile sensing service unloading fault-tolerant method based on an edge computing environment.
Background
Mobile edge computing (MEC) is of great importance for real-time computing services. Many typical computation-intensive applications, such as face recognition, interactive gaming, automatic navigation, augmented reality and remotely controlled aircraft, benefit from the distributed, high-speed, large-scale processing capability of mobile edge computing. However, owing to the unreliability of wireless communication and of the distributed resource infrastructure in MEC environments, MEC-based applications often encounter various types of system failures, such as resource overflow (MEC overload) or software and hardware faults, which result in poor quality of user experience. It is therefore essential to equip the MEC infrastructure with fault-tolerance techniques. However, implementing high-quality fault tolerance in MEC involves several difficulties and challenges. In an MEC environment, the edge nodes responsible for managing data transmission and the edge nodes broadcasting to other wireless networks are typically deployed on wireless access points (APs) or Internet-of-Things devices, and the failure of any one node may compromise the reliability of the overall system. This presents the following challenges: 1) heterogeneity and dynamics of the MEC environment: an edge node may be damaged by malicious activity or other harsh conditions, causing frequent failures; 2) tasks are offloaded and migrated between edge nodes through the edge network, so when network connectivity is temporarily cut off during wireless communication, the transfer of real-time data is affected; 3) the mobile network should have sufficient scalability to accommodate the ever-increasing number of edge users; 4) data processing needs to be performed close to the data source to reduce network latency. In this regard, fault-tolerant design and service offloading of the mobile edge infrastructure are particularly desirable.
Extensive and in-depth research shows that task offloading scheduling in existing edge computing environments still has several shortcomings: (1) existing methods rarely consider service offloading failures caused by edge node faults; (2) existing methods seldom consider computational overload of edge nodes, a situation in which the offloading policy should be adjusted dynamically.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly creatively provides a mobile sensing service unloading fault-tolerant method based on an edge computing environment.
In order to achieve the above object, the present invention provides a mobile aware service offload fault tolerance method based on an edge computing environment, comprising the following steps:
S1, acquiring data information on one or any combination of: the task offloading state of a moving user, server state information, task offloading and uploading delay, and task offloading energy consumption;
and S2, performing edge offloading optimization according to the data information acquired in step S1.
In a preferred embodiment of the present invention, acquiring the task offloading state of the moving user in step S1 includes:
the number of base stations in the edge layer equals the number of edge servers, and each base station corresponds to one server; the number of base stations is h, B = {b_1, b_2, b_3, ..., b_h}; the number of users is m, U = {u_1, u_2, u_3, ..., u_m}; the tasks generated by user u_k are T_k = {t_{k,1}, t_{k,2}, t_{k,3}, ..., t_{k,n}}; st_{k,i} is the start offloading time of a task, e_{k,i} is the local execution time of task t_{k,i}, and e^r_{k,i} is the execution time of task t_{k,i} at the edge server; the set of users covered by the jth edge server at time t is obtained as U_j(t), and the set of edge servers to which user u_k can offload tasks at time t is obtained as B_k(t); δ_{k,i,j} indicates whether a task is offloaded to an edge server, with δ_{k,i,j} = 0 denoting that the task is offloaded to the server, so that the set of all tasks covered under the jth edge server is obtained (set expression shown in the original figure);
the offloading completion time Makespan_k of the kth user is given by the expression shown in the original figure.
in a preferred embodiment of the present invention, the acquiring the server status information in step S1 includes:
Figure BDA0003717463440000031
X(x 0 ,x 1 ,x 2 ,...,x g ) Representing edge Server b j The number of failures that occur within a certain time,
Figure BDA0003717463440000032
f j representing edge Server b j Number of failures of (T) i Is f j Time of occurrence of the fault, τ k,i,j Finger task t k,i At edge server b j Estimated time of execution, FT j ={ft 1 ,ft 2 ,ft 3 ,...ft g The j is a time set of the fault of the jth edge server, g represents the total number of the faults, and the unloading success rate of a single user is as follows:
Figure BDA0003717463440000033
ft o representing user u k The time from the unloading start time of the ith task to the time when the fault occurs; o =1,2,3,. G.
In a preferred embodiment of the present invention, acquiring the task offloading and uploading delay in step S1 includes:
the communication model is represented as:
C_{k,i}(t) = [C, c_t, π_c, d_{k,i}]
wherein C denotes the total bandwidth resource provided by the edge server, c_t the remaining bandwidth resource, π_c the bandwidth resource allocation policy, d_{k,i} the amount of data transferred, and C_{k,i}(t) the communication model;
during offloading, the bandwidth utilization rate tr_k obtained by each user is given by the expression shown in the original figure, wherein q^(i) denotes the energy transmitted through the wireless network base station, g_(i,j) denotes the channel gain between the mobile device and base station (edge server) b_j with b_j ∈ B_k(t), ω_0 denotes the power of the background noise, and U_j(t) denotes the set of users covered by the jth edge server at time t;
the transmission time tt_{k,i} for offloading and uploading the task is then given by the expression shown in the original figure, wherein d_{k,i} denotes the amount of data transferred and tr_k the bandwidth utilization rate obtained by the user.
in a preferred embodiment of the present invention, the acquiring task offloading energy consumption in step S1 includes:
the communication energy consumption model is as follows:
Figure BDA0003717463440000042
wherein d is k,i Representing the amount of data transferred;
Figure BDA0003717463440000043
representing the transmission time of the task unloading and uploading;
π e representing an energy consumption model strategy;
E k,i representing a communication energy consumption model;
the appropriate power level W for each user is:
W=λ u t ud t di
t u represents the uplink throughput;
t d represents the downlink throughput;
the transmission energy consumption of task unloading is as follows:
Figure BDA0003717463440000044
wherein ω is k,i Average transmission power level required for one-time task offloading, user u k The whole unloading energy consumption is as follows:
Figure BDA0003717463440000045
wherein e is k,i For user u k Energy consumption of local equipment;
Figure BDA0003717463440000046
energy consumption for the far-end edge server;
Figure BDA0003717463440000047
representing the transfer energy consumption for task offloading.
In a preferred embodiment of the present invention, the edge offloading optimization in step S2 is computed by the optimization expression shown in the original figure, wherein E_k denotes the whole offloading energy consumption of user u_k and F_k denotes the single-user offloading success rate.
In conclusion, by adopting the above technical scheme, the invention prevents a user's offloaded tasks from going unprocessed when an edge server fails; it optimizes the delay and energy consumption of the edge users' offloaded tasks and effectively increases the resource utilization of the system.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of the process of the present invention.
Fig. 2 is a schematic diagram of a process for determining an offload policy for a mobile subscriber according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Step 1: obtaining user task offload status in motion
The environment is based on a two-layer edge-cloud infrastructure, namely a background layer and an edge layer. The number of base stations in the edge layer equals the number of edge servers, and each base station corresponds to one server. The number of base stations is h, and B = {b_1, b_2, b_3, ..., b_h} denotes the edge server set (equivalently the base station set), where b_1 is the 1st edge server, b_2 the 2nd, b_3 the 3rd, and b_h the hth. The number of users is m, and U = {u_1, u_2, u_3, ..., u_m} denotes the edge user set, where u_1 is the 1st user, u_2 the 2nd, u_3 the 3rd, and u_m the mth. The tasks generated by user u_k form the set T_k = {t_{k,1}, t_{k,2}, t_{k,3}, ..., t_{k,n}}, k = 1, 2, 3, ..., m, where t_{k,1}, t_{k,2}, t_{k,3}, ..., t_{k,n} are the 1st, 2nd, 3rd, ..., nth tasks of the kth user u_k. st_{k,i} is the start offloading time of user u_k's ith task, i = 1, 2, 3, ..., n; e_{k,i} is the local execution time of task t_{k,i}; and e^r_{k,i} denotes the execution time of task t_{k,i} at the edge server. The set of users covered by the jth edge server at time t is obtained as U_j(t), j = 1, 2, 3, ..., h; the set of edge servers to which user u_k can offload tasks at time t is obtained as B_k(t); and the distance from user u_k to edge server b_j is obtained as d_{k,j}. δ_{k,i,j} indicates whether a task is offloaded to an edge server, i.e., whether user u_k's ith task is offloaded to the jth edge server: δ_{k,i,j} = 0 denotes that the task is offloaded to the server, and δ_{k,i,j} = 1 denotes that the task is not offloaded to the edge server. Thus the set of all tasks covered under the jth edge server is obtained (set expression shown in the original figure), wherein |U_j(t)| denotes the number of users in the user set covered by the jth edge server at time t and t_{k,i} denotes the ith task of user u_k (the kth user).
The offloading completion time Makespan_k of the kth user u_k is then given by the expression shown in the original figure, wherein δ_{k,i,j}(t) denotes whether user u_k's ith task is offloaded to the jth edge server at time t, e_{k,i} denotes the local execution time of task t_{k,i}, and e^r_{k,i} denotes its execution time at the edge server. A small illustrative calculation of this completion time is sketched below.
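As an illustration of the task-offloading state model above, the following Python sketch computes a per-user completion time from the offloading indicators δ_{k,i,j} and the local/edge execution times. The exact Makespan_k expression appears only as a figure in the original, so the additive combination used here (local time when δ = 1, edge time when δ = 0) is an assumption for illustration, and all names are illustrative.

```python
# Minimal sketch (assumed formula): completion time of user k's tasks, where
# delta[i] == 0 means task i is offloaded and delta[i] == 1 means it runs locally.
def makespan(delta, local_time, edge_time):
    """delta, local_time, edge_time: equal-length lists over user k's tasks."""
    total = 0.0
    for d, e_local, e_edge in zip(delta, local_time, edge_time):
        # Assumption: a non-offloaded task costs its local execution time,
        # an offloaded task costs its edge execution time.
        total += e_local if d == 1 else e_edge
    return total

# Example: three tasks, the second one offloaded to an edge server.
print(makespan(delta=[1, 0, 1], local_time=[2.0, 5.0, 1.5], edge_time=[0.8, 1.2, 0.6]))
```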
step 2: obtaining server state information
When an edge server is operating normally, its state information can be observed. When an edge server failure occurs, the server may be partially or fully out of service, which delays the user's task offloading requests.
In an edge computing environment there are several types of failures: 1) failure of an edge server node; 2) a disconnection fault, which occurs when an edge mobile user moves beyond the communication range of the corresponding edge server; 3) failure of a task being executed on an edge server, i.e., failure of the task itself.
We assume that the failure times of the edge servers obey a Poisson distribution; that is, the failure probability of an edge server at a certain time is given by the expression shown in the original figure, wherein e denotes the natural base, x! denotes the factorial of x, x_0 indicates that 0 failures occurred (i.e., no failure), x_1 indicates that 1 failure occurred, x_2 indicates that 2 failures occurred, x_g indicates that g failures occurred, and X(x_0, x_1, x_2, ..., x_g) denotes the number of failures occurring on edge server b_j within a certain time.
f_j denotes the number of failures of edge server b_j, T_i is the occurrence time of the ith of these failures, and τ_{k,i,j} denotes the estimated execution time of task t_{k,i} at edge server b_j (their relation is shown in the original figure). FT_j = {ft_1, ft_2, ft_3, ... ft_g} is the set of failure times of the jth edge server, and g denotes the total number of failures. The offloading success rate F_k of a single user is then given by the expression shown in the original figure, wherein ft_o denotes the time from the start offloading time of user u_k's ith task until the fault occurs, o = 1, 2, 3, ..., g, st_{k,i} denotes the start offloading time of user u_k's ith task, and e^r_{k,i} denotes the execution time of task t_{k,i} at the edge server. An illustrative sketch of this failure model follows.
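A minimal sketch of the failure model above, assuming the stated Poisson distribution: it evaluates the probability of observing x failures for an assumed failure rate, and reads the single-user success probability F_k as the probability that no failure occurs during the task's execution window. The exact F_k expression is given only as a figure, so this reading and the rate scaling are assumptions; function names are illustrative.

```python
import math

def poisson_failure_prob(lam, x):
    """Probability of exactly x failures for a server whose failure count is
    Poisson-distributed with rate lam (lam is an assumed, estimated parameter)."""
    return (lam ** x) * math.exp(-lam) / math.factorial(x)

def offload_success_prob(failures_per_hour, exec_time_hours):
    """Assumed reading of F_k: probability that zero failures occur while the
    offloaded task executes (rate rescaled to the execution window)."""
    lam_window = failures_per_hour * exec_time_hours
    return poisson_failure_prob(lam_window, 0)

# Example: a server failing on average 0.5 times per hour, task running 0.2 h on the edge.
print(offload_success_prob(0.5, 0.2))  # ~0.905
```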
and 3, step 3: obtaining task offload upload time delay
Users in the MEC environment are mobile, so the bandwidth available for task offloading varies over time. The total bandwidth resource provided by the edge server is C, the bandwidth resource allocation strategy is assumed to be π_c, the remaining bandwidth resource is c_t, and the amount of data transferred is d_{k,i}; the communication model is expressed as:
C_{k,i}(t) = [C, c_t, π_c, d_{k,i}]
wherein C denotes the total bandwidth resource provided by the edge server, c_t the remaining bandwidth resource, π_c the bandwidth resource allocation policy, d_{k,i} the amount of data transferred, and C_{k,i}(t) the communication model.
During offloading, the bandwidth utilization rate tr_k obtained by each user is given by the expression shown in the original figure, wherein q^(i) denotes the energy transmitted through the wireless network base station, g_(i,j) denotes the channel gain between the mobile device and base station (edge server) b_j with b_j ∈ B_k(t), ω_0 denotes the power of the background noise, and U_j(t) denotes the set of users covered by the jth edge server at time t. The transmission time tt_{k,i} for offloading and uploading a task is then given by the expression shown in the original figure, wherein d_{k,i} denotes the amount of data transferred and tr_k the bandwidth utilization rate obtained by the user. A sketch of this delay calculation follows.
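To make the delay model concrete, the sketch below computes a per-user rate and the upload time d_{k,i} / tr_k. The original gives tr_k only as a figure, so the equal bandwidth split C/|U_j(t)| and the log2(1 + q·g/ω_0) spectral-efficiency form are assumptions chosen to match the listed symbols; parameter names are illustrative.

```python
import math

def user_rate_bps(total_bandwidth_hz, num_users, tx_power_w, channel_gain, noise_power_w):
    """Assumed form of tr_k: an equal share of the server bandwidth C times the
    spectral efficiency log2(1 + q * g / omega_0)."""
    share = total_bandwidth_hz / max(num_users, 1)
    return share * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def upload_delay_s(data_bits, rate_bps):
    """Transmission time of task offloading/uploading: tt = d / tr."""
    return data_bits / rate_bps

rate = user_rate_bps(total_bandwidth_hz=20e6, num_users=5, tx_power_w=0.2,
                     channel_gain=1e-6, noise_power_w=1e-9)
print(upload_delay_s(data_bits=8e6, rate_bps=rate))  # seconds for an 8 Mb payload
```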
and 4, step 4: obtaining task offload energy consumption
The energy consumption model strategy used in this work is π_e; WiFi-based transmission power consumption is determined by the uplink throughput t_u (Mbps) and the downlink throughput t_d (Mbps). The communication energy consumption model E_{k,i} is given by the expression shown in the original figure, wherein d_{k,i} denotes the amount of data transferred, tt_{k,i} denotes the transmission time of task offloading/uploading, and π_e denotes the energy consumption model strategy.
The appropriate power level W for each user is:
W = λ_u · t_u + λ_d · t_d + λ_i
wherein λ_u, λ_d, λ_i are network parameters: for WiFi, the uplink power λ_u (mW/Mbps) = 283.17, the downlink power λ_d (mW/Mbps) = 137.01, and the power at zero throughput λ_i (mW) = 132.86; for LTE, λ_u (mW/Mbps) = 438.39, λ_d (mW/Mbps) = 51.97, and λ_i (mW) = 1288.04; for 3G, λ_u (mW/Mbps) = 868.98, λ_d (mW/Mbps) = 122.12, and λ_i (mW) = 817.88; t_u denotes the uplink throughput and t_d the downlink throughput.
The transmission energy consumption of task offloading is given by the expression shown in the original figure, wherein ω_{k,i} is the average transmission power level required for one task offloading. The whole offloading energy consumption E_k of user u_k is then given by the expression shown in the original figure, wherein e_{k,i} is the energy consumption of user u_k's local device, E^r_{k,i} is the energy consumption at the far-end edge server, and E^{tr}_{k,i} denotes the transmission energy consumption of task offloading. A sketch of this energy calculation follows.
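A sketch of the energy model of step 4: it uses the stated power-level formula W = λ_u·t_u + λ_d·t_d + λ_i with the WiFi/LTE/3G parameters listed above, and treats the transmission energy of one offload as average power multiplied by transfer time, which is an assumption since the exact expressions appear only as figures; names are illustrative.

```python
# Network parameters listed above: mW per Mbps for uplink/downlink, and mW at zero throughput.
NET_PARAMS = {
    "wifi": {"lam_u": 283.17, "lam_d": 137.01, "lam_i": 132.86},
    "lte":  {"lam_u": 438.39, "lam_d": 51.97,  "lam_i": 1288.04},
    "3g":   {"lam_u": 868.98, "lam_d": 122.12, "lam_i": 817.88},
}

def power_level_mw(network, uplink_mbps, downlink_mbps):
    """W = lam_u * t_u + lam_d * t_d + lam_i, in mW."""
    p = NET_PARAMS[network]
    return p["lam_u"] * uplink_mbps + p["lam_d"] * downlink_mbps + p["lam_i"]

def transfer_energy_mj(avg_power_mw, transfer_time_s):
    """Assumed transmission energy of one offload: average power times transfer time (mW * s = mJ)."""
    return avg_power_mw * transfer_time_s

w = power_level_mw("wifi", uplink_mbps=10.0, downlink_mbps=2.0)
print(w, transfer_energy_mj(w, transfer_time_s=0.26))
```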
And 4, step 4: user movement model
As time passes, user u_k moves, with longitude x_k(t) and latitude y_k(t); the user's movement follows an arbitrary pattern, and the direction and angle of movement are time-varying.
And 5: determining an optimization model
The energy consumption generated by data transmission is the cost incurred when a user terminal's task is offloaded, while the computing resources obtained from the server are the benefit for the user terminal. We aim for an average task offloading completion time and an average terminal energy consumption that are as low as possible, and a task offloading success rate that is as high as possible; the resulting optimization objective is given by the expression shown in the original figure, wherein S = {s_1, s_2, s_3, ...} denotes the set of offloading strategies (s_1, s_2, s_3 being the 1st, 2nd and 3rd offloading strategies), m denotes the number of users, Makespan_k denotes the offloading completion time of the kth user, E_k denotes the whole offloading energy consumption of user u_k, and F_k denotes the single-user offloading success rate.
The objective is subject to (s.t.) the following constraints:
(a) a bandwidth constraint (expression shown in the original figure), wherein |U_j(t)| denotes the number of users in the user set U_j(t) covered by the jth edge server at time t, tr_k denotes the bandwidth utilization rate obtained by the user, C denotes the total bandwidth resource provided by the edge server, U denotes the edge user set, and min denotes the minimum;
(b) a computing-resource constraint (expression shown in the original figure, with i an integer), wherein θ_{k,i} denotes the resource allocation rate assigned to task t_{k,i} on b_j and P denotes the total computing capacity of edge server b_j;
(c) a placement constraint (expression shown in the original figure, with k, i, j integers);
(d) a failure-time constraint (expression shown in the original figure, with o an integer);
(e) s_{k,i}(t) ∈ {0, 1, 2, 3, 4, 5, 6}, k ∈ U_j(t);
wherein α_t, α_e, α_f denote the weights of the task offloading completion time, the task offloading energy consumption and the task completion probability, respectively, with α_t, α_e, α_f ∈ [0, 1] and α_t + α_e + α_f = 1. The intuitive meaning of the above formulation is that we minimize the task offloading completion time and energy consumption while maximizing the task offloading success rate. The constraints mean: (a) the bandwidth available to all users on an edge server cannot exceed the bandwidth provided by that edge server; (b) the computing resources occupied by all of a user's tasks cannot exceed the computing resources provided by the server, and only one task can be processed at a time; (c) each task is executed either locally or on an edge node; (d) the failure times of the edge server node; (e) s_{k,i}(t) indicates the state of the task at time t, where 0, 1, 2, 3, 4, 5, 6 respectively denote Local Waiting (LW), Local Execution (LE), Transmission (TS), Remote Waiting (RW), Remote Execution (RE), Remote Completion (CP) and Remote Failure (FL); and S is the set of possible offloading strategies. A sketch of evaluating this objective is given below.
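To make the optimization model of step 5 concrete, the sketch below scores a candidate offloading strategy with a weighted combination of average completion time, average energy and average success rate (weights α_t, α_e, α_f summing to 1) and checks simple capacity versions of constraints (a) and (b). The exact objective and constraint expressions are only given as figures, so the weighted form α_t·M + α_e·E - α_f·F and the capacity checks are assumptions for illustration.

```python
def score_strategy(makespans, energies, success_rates,
                   alpha_t=0.4, alpha_e=0.3, alpha_f=0.3):
    """Assumed objective: minimize weighted average time and energy, reward success rate.
    In practice the three terms would be normalized to comparable scales."""
    m = len(makespans)
    avg_t = sum(makespans) / m
    avg_e = sum(energies) / m
    avg_f = sum(success_rates) / m
    return alpha_t * avg_t + alpha_e * avg_e - alpha_f * avg_f

def feasible(per_user_bandwidth, total_bandwidth, per_task_cpu, server_capacity):
    """Constraint (a): bandwidth shares fit within C; constraint (b): CPU shares fit within P."""
    return (sum(per_user_bandwidth) <= total_bandwidth
            and sum(per_task_cpu) <= server_capacity)

if feasible([4e6, 4e6, 4e6], 20e6, [0.3, 0.5], 1.0):
    print(score_strategy([1.2, 0.9, 1.5], [300.0, 210.0, 420.0], [0.95, 0.90, 0.99]))
```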
And 6: task offloading algorithm
Step 6.1: Task allocation
Users initiate task offloading requests to all reachable servers; for example, user u_k sends a request to its reachable server set B_k(t), and the requested server allocates computing resources, such as the allocation rate and bandwidth, for the user's task. We define a bidirectional priority descriptor p_{k,j}(t) to represent which edge server user u_k selects as the offloading edge server at time t.
Step 6.2: Specific operations for scheduling a single task
(1) Select task t_{k,i} from the local task queue and transmit it to the edge server; the task enters the waiting queue of the transmission pool Trans_Pool, task t_{k,i} is removed from the local task queue, and the task state becomes TS.
(2) Take the task out of Trans_Pool and put it into the remote waiting queue; the task state becomes RW. Compute the server bandwidth resource used by the task and obtain the remaining bandwidth resource c_t.
(3) Take task t_{k,i} out of the remote waiting queue and put it into the work pool Job_Pool; compute the system CPU resources used by the task, and the task state becomes RE. Obtain the task's start execution time and running time.
(4) Take task t_{k,i} out of Job_Pool and offload it; the offloading strategy is obtained with the Dueling DQN algorithm, and the reward value is used as the bidirectional priority value p_{k,j}(t).
Step 6.3: Task offloading
(1) The process of determining the mobile user's offloading policy is shown in FIG. 2: first, the user moves randomly after a certain time; the reinforcement learning algorithm Dueling DQN observes the state of the system and guides the action according to the corresponding reward; the bidirectional selection priority descriptor p_{k,j}(t) ranks the edge servers according to the reward values of the Dueling DQN algorithm, thereby determining to which edge server the user will offload the task.
(2) Obtain the system state of server b_j at time l as the state set s = {r_{j,l}, c_{j,l}, n_{j,l}, v_{j,l}, e_{j,l}, t_{j,l}, p_{j,l}}, wherein r_{j,l} is the system CPU resource usage, c_{j,l} is the bandwidth resource usage, n_{j,l} is the number of tasks in the edge server's transmission pool Trans_Pool, v_{j,l} is the number of tasks in the work pool Job_Pool that still need to be processed, e_{j,l} is the total time required to process the remaining tasks in the work pool, t_{j,l} is the total time required to transmit the remaining tasks in the transmission pool, and p_{j,l} is the probability that the task may fail.
(3) The action set is the set of edge servers covering the current user and contains the ID of the edge server chosen by the ith offloading decision at time l, i.e., a_{i,l} ∈ A_l with A_l = B_k(l), wherein a_{i,l} denotes offloading the task to a particular edge server (the action), A_l denotes the set of edge servers covering the current user (the action set), and B_k(l) denotes the set of edge servers covering user k at the current time l. A compact representation of this state and action space is sketched below.
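The observation and action spaces described in (2) and (3) can be represented compactly; the sketch below mirrors the state tuple s = {r_{j,l}, c_{j,l}, n_{j,l}, v_{j,l}, e_{j,l}, t_{j,l}, p_{j,l}} observed on server b_j and the action set A_l = B_k(l). Class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, astuple
from typing import List

@dataclass
class ServerState:
    cpu_usage: float         # r_{j,l}: system CPU resource usage
    bandwidth_usage: float   # c_{j,l}: bandwidth resource usage
    trans_pool_len: int      # n_{j,l}: tasks waiting in Trans_Pool
    job_pool_len: int        # v_{j,l}: unfinished tasks in Job_Pool
    remaining_work_s: float  # e_{j,l}: time to finish the remaining Job_Pool tasks
    remaining_trans_s: float # t_{j,l}: time to finish the remaining transmissions
    failure_prob: float      # p_{j,l}: probability that the task may fail

def action_space(reachable_servers: List[int]) -> List[int]:
    """A_l = B_k(l): IDs of the edge servers currently covering the user."""
    return list(reachable_servers)

s = ServerState(0.6, 0.4, 3, 5, 12.5, 2.0, 0.05)
print(astuple(s), action_space([2, 4, 7]))
```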
(4) Reward value function R_t: when an action is completed, the environment immediately returns a reward. When the probability of completing the offloading is high, the user is encouraged to offload, and offloading then yields a positive immediate return. We take the ratio of the task execution time to the task completion probability as the reward value at time t (expression shown in the original figure), wherein e_{k,i} is the local execution time of task t_{k,i}, e^r_{k,i} denotes the execution time of task t_{k,i} at the edge server, st_{k,i} is the start offloading time of the task, ft_o is an element of the set of failure times of the jth edge server, tt_{k,i} denotes the transmission time of task offloading/uploading, and i is an integer (the auxiliary expressions are shown in the original figures).
When the edge node's resources are overloaded, the system receives a punitive reward. We define the penalty reward as the negative of the absolute value of the current MEC reward, i.e., -|R_t|. Resource overload means that, at a certain moment, the sum of the computing resources θ_{k,i} allocated to the tasks exceeds the edge server's computing-resource threshold β_j, i.e., Σ θ_{k,i} > β_j; the penalty is applied to prevent computational overload. A sketch of this reward design follows.
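A sketch of the reward design in (4): a reward equal to the stated ratio of task execution time to completion probability, replaced by the penalty -|R_t| when the allocated computing resources exceed the server threshold β_j. The exact reward expression is given only as a figure, so the literal ratio form is an assumption guided by the surrounding prose; names are illustrative.

```python
def offload_reward(exec_time_s, completion_prob):
    """Stated reward at time t: ratio of task execution time to completion probability
    (literal reading of the description; the exact formula is in the original figure)."""
    return exec_time_s / max(completion_prob, 1e-6)

def step_reward(exec_time_s, completion_prob, allocated_cpu, overload_threshold):
    r = offload_reward(exec_time_s, completion_prob)
    # Penalty branch: when the edge node's resources are overloaded, return -|R_t|.
    if sum(allocated_cpu) > overload_threshold:
        return -abs(r)
    return r

print(step_reward(1.2, 0.9, allocated_cpu=[0.3, 0.4], overload_threshold=1.0))  # positive reward
print(step_reward(1.2, 0.9, allocated_cpu=[0.7, 0.6], overload_threshold=1.0))  # penalty
```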
The Dueling DQN algorithm used in step 6
Dueling DQN is an improvement over the Deep Q-Network (DQN) algorithm that focuses on the relationship between key states and actions, and it alleviates the problem of an overly large action space caused by differences between edge server devices. The difference between DQN and Dueling DQN lies in the output: DQN connects the fully connected layer directly after the convolution, whereas Dueling DQN does not connect the fully connected layer directly after the convolutional layer but maps the output to two fully connected layers, which evaluate the value of the state and the advantage of the action, respectively:
Q_π(s, a) = V_π(s) + A_π(s, a)
wherein Q_π(s, a) is the action-value function, which depends on the state s, the action a and the policy π; V_π(s) is the state-value function, representing the value V of the state (a scalar); and A_π(s, a) is the advantage function, representing the advantage value A of each action a (a vector with the same dimension as the action space); the better the action a, the greater its advantage. A numeric sketch of this aggregation follows.
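A minimal numeric sketch of the aggregation Q_π(s, a) = V_π(s) + A_π(s, a). In the standard Dueling DQN formulation the advantage is additionally centred by subtracting its mean so that V and A remain identifiable; that centring term goes beyond the equation quoted above and is an assumption about the implementation.

```python
import numpy as np

def dueling_q(state_value, advantages):
    """Combine the scalar state value V(s) with per-action advantages A(s, a).

    Aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). The mean subtraction
    keeps V and A identifiable (assumption: only Q = V + A is stated in the text)."""
    advantages = np.asarray(advantages, dtype=float)
    return state_value + advantages - advantages.mean()

# One state value and advantages over 4 candidate edge servers (the actions).
print(dueling_q(2.5, [0.1, -0.3, 0.8, 0.0]))
```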
and 7: checkpoint checking algorithm
Cloud computing systems already use checkpoints as a reactive fault-tolerance strategy to mitigate the effects of failures. The main advantage of using checkpoints instead of replication is the reduction of profit loss and retention-time loss. Checkpointing is employed herein to recover from edge node failures. The checkpoint algorithm is strongly influenced by two parameters, the checkpoint interval and the checkpoint delay: the checkpoint interval is the time between two consecutive checkpoints, and the checkpoint delay is the time needed to save a checkpoint. In our work, we use an adaptive checkpoint algorithm to determine the length of the checkpoint interval.
The steps of the adaptive checkpoint algorithm are as follows:
7.1 Notation: the execution time of task t_{k,i} at edge server b_j; the remaining execution time of task t_{k,i} at edge server b_j; z, the number of failures during task execution; F_j(x_z), the failure probability of edge server b_j; η, the gap between checkpoints.
7.2 for each task t_{k,i} in the work pool Job_Pool of edge server b_j:
7.2.1 set z = 0 and initialize the checkpoint gap η (initial value shown in the original figure); start executing task t_{k,i} on edge server b_j;
7.2.1.1 do:
7.2.1.1.1 while the inner condition shown in the original figure holds:
7.2.1.1.1.1 if b_j fails: z++; update the remaining execution time (expression shown in the original figure); reduce the checkpoint gap length, η = η(1 - F_j(x_z)); roll back to the last stored checkpoint and re-execute from the time point shown in the original figure;
7.2.1.1.2 at the time point shown in the original figure, execute a checkpoint; increase the checkpoint gap length, η = η(1 + F_j(x_z)); resume execution;
7.2.1.2 while the loop condition shown in the original figure holds.
A sketch of this adaptive interval adjustment follows.
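The adaptive rule of step 7, shrink the gap after a failure (η = η(1 - F_j(x_z))) and grow it after each successful checkpoint (η = η(1 + F_j(x_z))), can be sketched as a simple simulation. The initial gap and the loop conditions appear only as figures in the original, so the initial value, the fixed per-interval failure probability and the termination test below are assumptions; names are illustrative.

```python
import random

def run_with_checkpoints(total_work_s, initial_gap_s, failure_prob_per_gap, seed=1):
    """Simulate executing one task with adaptive checkpoint gaps (eta).
    failure_prob_per_gap stands in for F_j(x_z), kept constant for simplicity."""
    random.seed(seed)
    eta = initial_gap_s
    done, checkpointed, failures = 0.0, 0.0, 0
    while done < total_work_s:
        if random.random() < failure_prob_per_gap:
            # Failure: count it, shrink the gap, roll back to the last checkpoint.
            failures += 1
            eta *= (1.0 - failure_prob_per_gap)
            done = checkpointed
        else:
            # Successful interval: advance, take a checkpoint, grow the gap.
            done = min(done + eta, total_work_s)
            checkpointed = done
            eta *= (1.0 + failure_prob_per_gap)
    return failures, eta

print(run_with_checkpoints(total_work_s=60.0, initial_gap_s=5.0, failure_prob_per_gap=0.1))
```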
And 8: fault tolerant algorithm UDQF
The fault-tolerant algorithm UDQF is a semi-online offloading fault-tolerance algorithm; it takes a movement model and a failure model as input.
The algorithm flow is as follows:
8.1 Inputs: the edge user set U; the edge server set B; the user movement period φ_t; the time set T.
8.2 Initialize the positions of the users and the edge servers.
8.3 for each time t ∈ T do:
8.3.1 predict the user offloading behaviour p_{k,j}(t) obtained by the Dueling DQN algorithm; execute task t_{k,i} on b_j, with b_j ∈ B_k(t) and t_{k,i} ∈ T_k;
8.3.2 if b_j has no failure:
8.3.2.1 offload task t_{k,i} to edge server b_j (condition shown in the original figure), with u_k ∈ U_j(t);
8.3.3 if b_j fails:
8.3.3.1 change the task state and restore edge server b_j according to step 7;
8.3.4 when t mod φ_t = 0, i.e., when the current environment time t is an integer multiple of the user movement period φ_t (mod denotes the remainder):
8.3.4.1 the user moves and p_{k,j}(t) is updated, where p_{k,j}(t) denotes which edge server user u_k selects as the offloading edge server at time t.
In summary, UDQF: 1) obtains the user's offloading behaviour according to the priority p_{k,j}(t); 2) sets the resource utilization reasonably according to user demand and resource availability to prevent edge-server overload; and 3) executes the adaptive checkpoint algorithm of step 7 for fault compensation. A skeleton of this control loop is sketched below.
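Pulling steps 6 to 8 together, the UDQF control loop alternates between choosing an offloading target from the priorities p_{k,j}(t), executing or recovering (step 7) depending on server health, and refreshing priorities when users move. The skeleton below only mirrors the listed flow 8.1-8.3.4.1 with stub functions; all names and the stubbed decision logic are placeholders, not the patent's implementation.

```python
def udqf_loop(users, horizon, move_period, select_server, server_failed,
              offload, recover_with_checkpoints, update_priorities):
    """Semi-online offloading fault-tolerance (UDQF) skeleton.

    select_server(u, t)         -> server chosen from the Dueling DQN priorities p_{k,j}(t)
    server_failed(b, t)         -> True if edge server b is faulty at time t
    offload(u, b, t)            -> dispatch u's current task to b
    recover_with_checkpoints(b) -> step-7 recovery of server b
    update_priorities(u, t)     -> refresh p_{k,j}(t) after the user moves
    """
    for t in range(horizon):
        for u in users:
            b = select_server(u, t)                  # 8.3.1: pick the target by priority
            if not server_failed(b, t):
                offload(u, b, t)                     # 8.3.2: normal offloading
            else:
                recover_with_checkpoints(b)          # 8.3.3: checkpoint-based recovery
        if t % move_period == 0:                     # 8.3.4: users move periodically
            for u in users:
                update_priorities(u, t)

# Example wiring with trivial stubs:
udqf_loop(users=["u1"], horizon=3, move_period=2,
          select_server=lambda u, t: "b1",
          server_failed=lambda b, t: False,
          offload=lambda u, b, t: print(f"t={t}: {u} -> {b}"),
          recover_with_checkpoints=lambda b: None,
          update_priorities=lambda u, t: None)
```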
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A mobile sensing service unloading fault-tolerant method based on an edge computing environment is characterized by comprising the following steps:
S1, acquiring data information on one or any combination of: the task offloading state of a moving user, server state information, task offloading and uploading delay, and task offloading energy consumption;
and S2, performing edge offloading optimization according to the data information acquired in step S1.
2. The mobility-aware service offloading fault-tolerance method based on an edge computing environment of claim 1, wherein acquiring the task offloading state of the moving user in step S1 comprises:
the number of base stations in the edge layer equals the number of edge servers, and each base station corresponds to one server; the number of base stations is h, B = {b_1, b_2, b_3, ..., b_h}; the number of users is m, U = {u_1, u_2, u_3, ..., u_m}; the tasks generated by user u_k are T_k = {t_{k,1}, t_{k,2}, t_{k,3}, ..., t_{k,n}}; st_{k,i} is the start offloading time of a task, e_{k,i} is the local execution time of task t_{k,i}, and e^r_{k,i} is the execution time of task t_{k,i} at the edge server; the set of users covered by the jth edge server at time t is obtained as U_j(t), and the set of edge servers to which user u_k can offload tasks at time t is obtained as B_k(t); δ_{k,i,j} indicates whether a task is offloaded to an edge server, with δ_{k,i,j} = 0 denoting that the task is offloaded to the server, so that the set of all tasks covered under the jth edge server is obtained (set expression shown in the original figure);
the offloading completion time Makespan_k of the kth user is given by the expression shown in the original figure.
3. The mobility-aware service offloading fault-tolerance method based on an edge computing environment of claim 1, wherein acquiring the server state information in step S1 comprises:
X(x_0, x_1, x_2, ..., x_g) denotes the number of failures occurring on edge server b_j within a certain time, and its probability distribution is given by the expressions shown in the original figures;
f_j denotes the number of failures of edge server b_j, T_i is the occurrence time of the ith of these failures, τ_{k,i,j} denotes the estimated execution time of task t_{k,i} at edge server b_j, FT_j = {ft_1, ft_2, ft_3, ... ft_g} is the set of failure times of the jth edge server, and g denotes the total number of failures; the offloading success rate F_k of a single user is given by the expression shown in the original figure;
ft_o denotes the time from the start offloading time of user u_k's ith task until the fault occurs, o = 1, 2, 3, ..., g.
4. The mobility-aware service offloading fault-tolerance method based on an edge computing environment of claim 1, wherein acquiring the task offloading and uploading delay in step S1 comprises:
the communication model is represented as:
C_{k,i}(t) = [C, c_t, π_c, d_{k,i}]
wherein C denotes the total bandwidth resource provided by the edge server, c_t the remaining bandwidth resource, π_c the bandwidth resource allocation policy, d_{k,i} the amount of data transferred, and C_{k,i}(t) the communication model;
during offloading, the bandwidth utilization rate tr_k obtained by each user is given by the expression shown in the original figure, wherein q^(i) denotes the energy transmitted through the wireless network base station, g_(i,j) denotes the channel gain between the mobile device and base station (edge server) b_j with b_j ∈ B_k(t), ω_0 denotes the power of the background noise, and U_j(t) denotes the set of users covered by the jth edge server at time t;
the transmission time tt_{k,i} for offloading and uploading the task is then given by the expression shown in the original figure, wherein d_{k,i} denotes the amount of data transferred and tr_k the bandwidth utilization rate obtained by the user.
5. The mobility-aware service offloading fault-tolerance method based on an edge computing environment of claim 1, wherein acquiring the task offloading energy consumption in step S1 comprises:
the communication energy consumption model E_{k,i} is given by the expression shown in the original figure, wherein d_{k,i} denotes the amount of data transferred, tt_{k,i} the transmission time of task offloading/uploading, and π_e the energy consumption model strategy;
the appropriate power level W for each user is:
W = λ_u · t_u + λ_d · t_d + λ_i
wherein t_u denotes the uplink throughput and t_d the downlink throughput;
the transmission energy consumption of task offloading is given by the expression shown in the original figure, wherein ω_{k,i} is the average transmission power level required for one task offloading; the whole offloading energy consumption E_k of user u_k is given by the expression shown in the original figure, wherein e_{k,i} is the energy consumption of user u_k's local device, E^r_{k,i} is the energy consumption at the far-end edge server, and E^{tr}_{k,i} denotes the transmission energy consumption of task offloading.
6. The mobility-aware service offloading fault-tolerance method based on an edge computing environment of claim 1, wherein the edge offloading optimization in step S2 is computed by the optimization expression shown in the original figure, wherein E_k denotes the whole offloading energy consumption of user u_k and F_k denotes the single-user offloading success rate.
CN202210748556.3A 2022-06-28 2022-06-28 Mobile perception service unloading fault-tolerant method based on edge computing environment Active CN115225496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210748556.3A CN115225496B (en) 2022-06-28 2022-06-28 Mobile perception service unloading fault-tolerant method based on edge computing environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210748556.3A CN115225496B (en) 2022-06-28 2022-06-28 Mobile perception service unloading fault-tolerant method based on edge computing environment

Publications (2)

Publication Number Publication Date
CN115225496A true CN115225496A (en) 2022-10-21
CN115225496B CN115225496B (en) 2024-08-16

Family

ID=83609823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210748556.3A Active CN115225496B (en) 2022-06-28 2022-06-28 Mobile perception service unloading fault-tolerant method based on edge computing environment

Country Status (1)

Country Link
CN (1) CN115225496B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116302900A (en) * 2023-05-26 2023-06-23 煤炭科学研究总院有限公司 Computing power reliability assessment method of multi-access edge computing system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783213A (en) * 2018-12-28 2019-05-21 杭州电子科技大学 The workflow fault-tolerant scheduling method of reliability is directed under a kind of edge calculations environment
US10866836B1 (en) * 2019-08-20 2020-12-15 Deke Guo Method, apparatus, device and storage medium for request scheduling of hybrid edge computing
CN113873022A (en) * 2021-09-23 2021-12-31 中国科学院上海微系统与信息技术研究所 Mobile edge network intelligent resource allocation method capable of dividing tasks
CN113950103A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783213A (en) * 2018-12-28 2019-05-21 杭州电子科技大学 The workflow fault-tolerant scheduling method of reliability is directed under a kind of edge calculations environment
US10866836B1 (en) * 2019-08-20 2020-12-15 Deke Guo Method, apparatus, device and storage medium for request scheduling of hybrid edge computing
CN113950103A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113873022A (en) * 2021-09-23 2021-12-31 中国科学院上海微系统与信息技术研究所 Mobile edge network intelligent resource allocation method capable of dividing tasks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN Wei et al., "MEC task offloading and resource allocation based on an adaptive genetic algorithm", Application of Electronic Technique, no. 08, 6 August 2020 (2020-08-06), pages 95-100 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116302900A (en) * 2023-05-26 2023-06-23 煤炭科学研究总院有限公司 Computing power reliability assessment method of multi-access edge computing system
CN116302900B (en) * 2023-05-26 2023-09-05 煤炭科学研究总院有限公司 Computing power reliability assessment method of multi-access edge computing system

Also Published As

Publication number Publication date
CN115225496B (en) 2024-08-16

Similar Documents

Publication Publication Date Title
CN111262906B (en) Method for unloading mobile user terminal task under distributed edge computing service system
WO2022021176A1 (en) Cloud-edge collaborative network resource smooth migration and restructuring method and system
CN110096362B (en) Multitask unloading method based on edge server cooperation
CN103944997B (en) In conjunction with the load-balancing method of random sampling and Intel Virtualization Technology
CN109947574B (en) Fog network-based vehicle big data calculation unloading method
CN102196503B (en) Service quality assurance oriented cognitive network service migration method
CN109151045A (en) A kind of distribution cloud system and monitoring method
CN114138373A (en) Edge calculation task unloading method based on reinforcement learning
US20240303123A1 (en) Managing computer workloads across distributed computing clusters
CN111309393A (en) Cloud edge-side collaborative application unloading algorithm
CN117938755B (en) Data flow control method, network switching subsystem and intelligent computing platform
EP4024212A1 (en) Method for scheduling interference workloads on edge network resources
CN115225496A (en) Mobile sensing service unloading fault-tolerant method based on edge computing environment
CN116321300B (en) Risk-aware mobile edge computing task scheduling and resource allocation method
Long et al. A mobility-aware and fault-tolerant service offloading method in mobile edge computing
CN112416603A (en) Combined optimization system and method based on fog calculation
CN117939519A (en) Wireless ad hoc network data processing method, device and equipment based on edge calculation
Huang et al. Power-aware hierarchical scheduling with respect to resource intermittence in wireless grids
CN117294712A (en) Dynamic calculation unloading strategy based on task group optimization
CN115150892B (en) VM-PM repair strategy method in MEC wireless system with bursty traffic
CN116614517A (en) Container mirror image preheating and distributing method for edge computing scene
CN114301911B (en) Task management method and system based on edge-to-edge coordination
US20230401103A1 (en) System and method of dynamically adjusting virtual machines for a workload
CN110457130A (en) A kind of distributed resource flexible scheduling model, method, electronic equipment and storage medium
CN114884861B (en) Information transmission method and system based on intra-network computation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant