CN115098115A - Edge computing task offloading method and apparatus, electronic device and storage medium - Google Patents

Edge computing task offloading method and apparatus, electronic device and storage medium

Info

Publication number
CN115098115A
CN115098115A (application CN202210684199.9A)
Authority
CN
China
Prior art keywords
task
offloading
ratio
data
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210684199.9A
Other languages
Chinese (zh)
Inventor
金小敏
丁玉荣
王忠民
陈彦萍
胡俊艳
夏虹
周易驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications filed Critical Xian University of Posts and Telecommunications
Priority to CN202210684199.9A priority Critical patent/CN115098115A/en
Publication of CN115098115A publication Critical patent/CN115098115A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/61 Installation
    • G06F 8/62 Uninstallation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of computer technology, and in particular to an edge computing task offloading method comprising the following steps: acquiring task data to be processed, and determining the computation load of the task data according to the task data and the terminal's per-unit-data computation load; constructing an offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk; optimizing the offloading model with an improved simulated annealing-particle swarm algorithm to determine the offloading ratio and compression ratio that minimize the task processing time and the task security risk; and having the edge server and the terminal process the task data according to the offloading ratio. The method combines data compression and security protection in the edge computing task offloading process, and can reduce both the terminal's task processing time and the security risk incurred while offloading.

Description

Method and apparatus for offloading edge computing tasks, electronic device and storage medium
Technical Field
The invention relates to the field of computer technology, and in particular to an edge computing task offloading method and apparatus, an electronic device, and a storage medium.
Background
Edge computing is a computing paradigm that provides cloud-like services at the network edge, offering computing, storage, and network bandwidth close to users. A user can offload terminal tasks to an edge server for execution to relieve the terminal's processing pressure. When offloading, the user must decide whether a task is processed locally at the terminal or offloaded to the edge server. A wrong offloading decision not only fails to relieve the terminal's load but also adds extra consumption and delay. Existing methods, however, offload at a coarse granularity (the whole task at once), transmit task data either uncompressed or at a fixed compression ratio, lack flexibility, and do not consider security during the offloading process.
Therefore, it is desirable to provide an edge computing task offloading method that solves the above problems in the prior art.
Disclosure of Invention
The invention aims to provide an edge computing task offloading method and apparatus, an electronic device, and a storage medium that overcome, at least to some extent, the long task processing times, weak security, and other problems caused by the limitations and defects of the related art.
According to one aspect of the invention, an edge computing task offloading method is provided, comprising the following steps:
acquiring task data to be processed, and determining the computation load of the task data according to the task data and the terminal's per-unit-data computation load;
constructing an offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk, where the offloading ratio is the proportion of the task's computation load that the edge server processes;
optimizing the offloading model with an improved simulated annealing-particle swarm algorithm to determine the offloading ratio and compression ratio that minimize the task processing time and the task security risk; and
compressing the task data according to the compression ratio, sending the compressed data to the edge server, and having the edge server and the terminal process the task data according to the offloading ratio.
In an exemplary embodiment, the edge computing task offloading method further includes:
the edge server decompressing and processing the received task data, and returning the processing result to the terminal.
In an exemplary embodiment, the edge computing task offloading method further includes:
modeling the computational load of compression and decompression as a nonlinear function of the compression ratio, the task processing time including the task data compression time and the task data decompression time. The CPU cycles C_z(y_k) consumed by compression or decompression are a function of the compression ratio y_k:

[equation image: C_z(y_k) as a nonlinear function of ω_k]

where ω_k = 1/y_k, z denotes compression or decompression, γ_{k,0} is the maximum number of CPU cycles, and the remaining two parameters are constants.
In an exemplary embodiment, the edge computing task offloading method further includes:
setting a security level for each task, and modeling the task security risk as a function of the security level so as to quantify it as a security risk cost for the task;
the security risk cost of a task is:

ξ(x_k) = x_k · μ_k

where μ_k is positive when the security level S_k of task k falls below the security-constraint threshold θ and zero otherwise, and x_k is the offloading ratio of terminal task k.
In an exemplary embodiment, constructing the offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk includes:
determining the processing time at the terminal and at the edge server according to the task data size and the task offloading ratio; and
determining the compression time, transmission time, and decompression time of the task data to be offloaded according to the task offloading ratio and the task data compression ratio.
In an exemplary embodiment, constructing the offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk includes:
constructing an offloading model of the task processing time and the task security risk according to a preset weight η, where 0 ≤ η ≤ 1.
In an exemplary embodiment, determining the processing time at the terminal and at the edge server according to the task data size and the task offloading ratio includes:
the processing time of task k at the terminal is:

t_l(x_k) = (1 - x_k) · D_k · C_k / f_k

and the processing time of task k at the edge server is:

t_m(x_k) = x_k · D_k · C_k / f_k^m

where x_k is the offloading ratio of terminal task k, D_k is the data size of task k, C_k is the terminal's computation load per unit of data, f_k (in CPU cycles per second) is the computing capability of the terminal for task k, and f_k^m (in CPU cycles per second) is the computing capability of the edge server for task k.
In an exemplary embodiment, determining the compression time, transmission time, and decompression time of the task data to be offloaded according to the task offloading ratio and the task data compression ratio includes:
the compression time of task k at the terminal is:

t_co(x_k, y_k) = x_k · D_k · C_co(y_k) / f_k

the transmission time for offloading task k to the edge server is:

t_tr(x_k, y_k) = x_k · D_k · y_k / r_k

and the decompression time of task k at the edge server is:

t_de(x_k, y_k) = x_k · D_k · C_de(y_k) / f_k^m

where y_k is the compression ratio, C_co(y_k) is the number of CPU cycles the terminal needs to compress a unit of data, and C_de(y_k) is the number of CPU cycles the edge server needs to decompress a unit of data.
In an exemplary embodiment, optimizing the offloading model based on the improved simulated annealing-particle swarm algorithm includes:
setting the inertia factor to vary dynamically and linearly when updating particle velocities and positions; the inertia factor is:

w = w1 - (w1 - w2) · n / N

where 0.4 ≤ w2 ≤ w1 ≤ 0.9, n is the current iteration number, and N is the total number of iterations.
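The linearly decreasing inertia factor above can be sketched as follows; it is a minimal illustration, and the defaults w1 = 0.9 and w2 = 0.4 are simply the extremes of the stated range, not values fixed by the text:

```python
def inertia(n: int, N: int, w1: float = 0.9, w2: float = 0.4) -> float:
    """Inertia factor for iteration n of N: decreases linearly from w1 to w2,
    so early iterations search globally and later ones refine locally."""
    return w1 - (w1 - w2) * n / N
```

With these defaults the factor starts at 0.9 and reaches 0.4 at the final iteration.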
According to another aspect of the invention, an edge computing task offloading apparatus is provided, comprising:
a data acquisition module for acquiring task data to be processed and determining the computation load of the task data according to the task data and the terminal's per-unit-data computation load;
a model construction module for constructing an offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk, where the offloading ratio is the proportion of the task's computation load that the edge server processes;
a parameter determination module for optimizing the offloading model based on an improved simulated annealing-particle swarm algorithm to determine the offloading ratio and compression ratio that minimize the task processing time and the task security risk; and
a task offloading module for compressing the task data according to the compression ratio and sending it to the edge server, the edge server and the terminal processing the task data according to the offloading ratio.
The invention provides an edge computing task offloading method and apparatus. On the one hand, the method can process the terminal's tasks with the shortest time and the strongest security; on the other hand, it can effectively relieve the terminal's task processing pressure and improve the operating efficiency and stability of the system.
Drawings
FIG. 1 is a schematic diagram of an application scenario of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 2 is a flowchart of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 3 is a diagram of processing time results of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 4 is a diagram of objective function results of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 5 is a diagram of data compression results of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 6 is a diagram of results of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 7 is a diagram of results of an edge computing task offloading method in an exemplary embodiment of the invention;
FIG. 8 is a schematic structural diagram of an edge computing task offloading apparatus in an exemplary embodiment of the invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more apparent and understandable, embodiments and technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings. Example embodiments and examples, however, may be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments and examples are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments and examples to those skilled in the art. The described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments and examples. In the following description, numerous specific details are provided to give a thorough understanding of embodiments and examples of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Although the steps of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that all of the steps must be performed in this particular order to achieve desirable results. The flow charts shown in the figures are merely exemplary and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
Edge computing is a computing paradigm that provides cloud-like services at the network edge, offering computing, storage, and network bandwidth close to users. A user can offload terminal tasks to an edge server for execution to relieve the terminal's processing pressure. When offloading, the user must decide whether a task is processed locally at the terminal or offloaded to the edge server. A wrong offloading decision not only fails to relieve the terminal's load but also adds extra consumption and delay. Existing methods, however, offload at a coarse granularity (the whole task at once), transmit task data either uncompressed or at a fixed compression ratio, lack flexibility, and do not consider security during the offloading process.
In view of the above problems in the related art, the invention provides an edge computing task offloading method, comprising: acquiring task data to be processed, and determining the computation load of the task data according to the task data and the terminal's per-unit-data computation load; constructing an offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk, where the offloading ratio is the proportion of the task's computation load that the edge server processes; optimizing the offloading model with an improved simulated annealing-particle swarm algorithm to determine the offloading ratio and compression ratio that minimize the task processing time and the task security risk; and compressing the task data according to the compression ratio, sending it to the edge server, and having the edge server and the terminal process the task data according to the offloading ratio. With this method, on the one hand, the terminal's tasks can be processed with the shortest time and the strongest security; on the other hand, the terminal's task processing pressure can be effectively relieved, improving the operating efficiency and stability of the system.
Fig. 1 is a schematic diagram of an application scenario of an edge computing task offloading method in an exemplary embodiment of the invention. The edge computing system shown in Fig. 1 includes an edge server and at least one terminal connected to it; the terminal and the edge server can be connected over a wireless or wired network. The terminal can be an intelligent monitoring device, an intelligent robot, or the like, and the wireless network can be a wireless wide area network, a wireless local area network, a wireless metropolitan area network, a wireless personal area network, or the like. The edge computing task offloading method of the invention is described in detail below, taking the application scenario shown in Fig. 1 as an example.
An exemplary embodiment of the invention provides an edge computing task offloading method, and Fig. 2 is a flowchart of the method. As shown in Fig. 2, the edge computing task offloading method includes the following steps:
Step S21: acquiring task data to be processed, and determining the computation load of the task data according to the task data and the terminal's per-unit-data computation load.
In a face recognition system, for example, the data to be processed acquired by a terminal is the captured face image data; the face image data must be processed during recognition and comparison, and the number of CPU cycles required to process this data is the computation load of the task data to be processed.
Step S23: constructing an offloading model relating the task offloading ratio, the task data compression ratio, the task processing time, and the task security risk, where the offloading ratio is the proportion of the task's computation load that the edge server processes.
In the edge computing task offloading system shown in Fig. 1, the terminal alone does not have sufficient processing capability, and relying only on the terminal cannot guarantee efficient task execution. To address this, terminal tasks are offloaded to the edge server, processed with the server's stronger capability, and the results are fed back to the terminal, improving system efficiency. Further, to make effective use of the processing capabilities of both the terminal and the edge server, tasks are split and offloaded in an appropriate proportion so that the terminal and the edge server process them simultaneously, effectively reducing the task processing time. Finally, to ensure safety during offloading, the task security risk is also taken as an optimization objective, effectively reducing the security risk incurred while offloading.
In an exemplary embodiment, the basic parameters of the terminal and the edge server are as follows: the data size of task k at the terminal is D_k; the computation load per unit of data (i.e., the number of CPU cycles required) is C_k; the computing capability of the terminal for task k is f_k (in CPU cycles per second); and the computing capability of the edge server for task k is f_k^m (in CPU cycles per second). The terminal can reduce execution time by offloading the task to the edge server, but too much data causes extra transmission time, so task k is uploaded with an appropriate offloading ratio x_k (0 ≤ x_k ≤ 1), where x_k = 0 means the task is executed entirely locally and x_k = 1 means it is offloaded entirely to the edge server.
Illustratively, step S23 may include: determining the processing time at the terminal and at the edge server according to the task data size and the offloading ratio.
According to the offloading ratio x_k, the total load C_l(x_k) of the data that task k processes locally at the terminal is:

C_l(x_k) = (1 - x_k) · D_k · C_k

The terminal's local processing time t_l(x_k) is:

t_l(x_k) = C_l(x_k) / f_k = (1 - x_k) · D_k · C_k / f_k

Similarly to the local computation model, the processing time t_m(x_k) of task k on the edge server is:

t_m(x_k) = x_k · D_k · C_k / f_k^m
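A minimal sketch of these two processing-time formulas; the numeric values in the comments and tests below are illustrative assumptions, not figures from the patent:

```python
def local_time(x_k: float, D_k: float, C_k: float, f_k: float) -> float:
    """Time for the terminal to process its (1 - x_k) share of task k."""
    return (1 - x_k) * D_k * C_k / f_k

def edge_time(x_k: float, D_k: float, C_k: float, f_m: float) -> float:
    """Time for the edge server to process the offloaded x_k share."""
    return x_k * D_k * C_k / f_m
```

Because the two shares run in parallel, splitting a task shortens the compute phase to max(local_time, edge_time).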
exemplarily, step S23 may further include: determining the compression time, transmission time and decompression time of the data to be unloaded according to the unloading proportion and the compression proportion;
specifically, when the task is uploaded to the edge server, the transmission process comprises a calculation task uploading stage, a task execution stage and a result returning stage. Uplink rate r for transmitting data of task k k Comprises the following steps:
Figure BDA0003699469160000074
where B denotes the network bandwidth, p k Denotes the transmission power, g k Representing the noise power gain, θ, of the channel 0 Representing the noise power density of the channel.
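This uplink-rate expression is the standard Shannon-capacity form with noise power θ_0 · B; it can be sketched as follows, with the numeric values in the test being illustrative assumptions:

```python
import math

def uplink_rate(B: float, p_k: float, g_k: float, theta0: float) -> float:
    """r_k = B * log2(1 + p_k * g_k / (theta0 * B)), in bits per second."""
    snr = p_k * g_k / (theta0 * B)  # signal-to-noise ratio over the channel
    return B * math.log2(1 + snr)
```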
To overcome the transmission bottleneck caused by the limited capacity of the wireless link between the terminal and the edge server, the transmitted data is compressed at the terminal to reduce the amount of data sent to the edge server. y_k denotes the compression ratio, i.e., the ratio of the data volume after compression to the data volume before compression. Owing to the limits of compression algorithms and techniques, the compression ratio is bounded between y_l and y_u, i.e., y_k ∈ [y_l, y_u]. The computational load of compression and decompression is modeled as a nonlinear function of the compression ratio, and the task processing time includes the task data compression time and decompression time. The CPU cycles C_z(y_k) consumed by compression or decompression are:

[equation image: C_z(y_k) as a nonlinear function of ω_k]

where ω_k = 1/y_k, and z may be "co" or "de", denoting compression and decompression respectively; that is, C_co(y_k) denotes the number of CPU cycles required for local compression and C_de(y_k) the number of CPU cycles required for decompression at the edge server. γ_{k,0} is the maximum number of CPU cycles, and the remaining two parameters are constants.
Thus, the time t_co(x_k, y_k) for task k to compress its data locally at the terminal is:

t_co(x_k, y_k) = x_k · D_k · C_co(y_k) / f_k

After compression, the size of the task data that must be transferred to the edge server is x_k · D_k · y_k. According to the uplink transmission rate r_k, the transmission delay t_tr(x_k, y_k) required to offload the compressed task k to the edge server is:

t_tr(x_k, y_k) = x_k · D_k · y_k / r_k

The edge server's decompression time t_de(x_k, y_k) is:

t_de(x_k, y_k) = x_k · D_k · C_de(y_k) / f_k^m

Thus, according to the offloading ratio x_k and the compression ratio y_k, the time required to process task k is:

T_k(x_k, y_k) = max{ t_l(x_k), t_co(x_k, y_k) + t_tr(x_k, y_k) + t_de(x_k, y_k) + t_m(x_k) }

since the local share and the offloaded share are processed in parallel.
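Putting the pieces together, the delay of one task can be sketched as below. As a simplification, it treats C_co and C_de as fixed cycles-per-unit-of-data constants rather than the nonlinear functions of y_k described above, and all numeric values in the test are illustrative assumptions:

```python
def task_time(x: float, y: float, D: float, C: float,
              f_loc: float, f_edge: float, r: float,
              c_co: float, c_de: float) -> float:
    """Delay of one task: the local share runs in parallel with the
    compress -> transmit -> decompress -> edge-process pipeline."""
    t_local = (1 - x) * D * C / f_loc    # t_l(x_k)
    t_comp = x * D * c_co / f_loc        # t_co: compress at the terminal
    t_tx = x * D * y / r                 # t_tr: send x*D*y compressed units
    t_dec = x * D * c_de / f_edge        # t_de: decompress at the edge
    t_edge = x * D * C / f_edge          # t_m(x_k)
    return max(t_local, t_comp + t_tx + t_dec + t_edge)
```

Offloading everything (x = 1) trades local compute time for compression, transmission, and decompression overhead, which is why the optimal x_k is generally interior.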
in an exemplary embodiment, step S23 may further include: determining task safety risk according to the unloading proportion; offloading tasks to the edge server the offloading tasks may be subject to various malicious attacks and eavesdropping, thus presenting a security risk during the offloading process. Illustratively, different security levels can be set for each task to meet the computational security requirements of different tasks, and the security protection of task data is improved by reducing the security risk cost of each task; and modeling the security risk as a function of security level, quantified as a security risk cost for each task.
The security risk cost of task k can be represented by a mapping function as:

ξ(x_k) = x_k · μ_k

where μ_k is positive when the security level S_k of task k is below the security-constraint threshold θ and zero otherwise. According to this model, a security risk cost exists if the security level of the task is less than the expected security threshold; otherwise there is no security risk. When the offloading ratio x_k = 1, the task is offloaded entirely to the edge and the risk cost is highest; when x_k = 0, the risk cost is lowest. The maximum risk cost in the whole system is ξ_max.
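The risk-cost mapping can be sketched as follows. The explicit form of μ_k used here (a linear penalty on the shortfall below the threshold) is an assumption for illustration, since the text only states that the cost is positive below θ and zero otherwise:

```python
def security_risk(x_k: float, S_k: float, theta: float) -> float:
    """Security risk cost xi(x_k) = x_k * mu_k for offloading share x_k of
    a task with security level S_k under threshold theta."""
    mu_k = max(theta - S_k, 0.0)  # assumed penalty: shortfall below theta
    return x_k * mu_k
```

Consistent with the text, the cost is maximal at x_k = 1 and zero at x_k = 0 or when S_k ≥ θ.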
In an exemplary embodiment, the time T to process the n tasks aggregates the per-task times T_k(x_k, y_k), and the overall security risk Ξ aggregates the per-task risk costs ξ(x_k). The task processing time and the task security risk are combined into the offloading model through a preset weight η:

min  η · T + (1 - η) · Ξ
s.t. 0 ≤ x_k ≤ 1
     y_l ≤ y_k ≤ y_u
     0 ≤ η ≤ 1
     [equation image: total bandwidth constraint]

where the constraint 0 ≤ x_k ≤ 1 limits the offloading ratio to between 0 and 1; y_l ≤ y_k ≤ y_u limits the compression ratio to between y_l and y_u; 0 ≤ η ≤ 1 limits the weight η, with η = 0 taking only the security risk as the optimization objective and η = 1 taking only the processing time; and the final constraint limits the total bandwidth used by the tasks to at most B.
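The weighted-sum objective can be evaluated as in the sketch below, where the per-task times and risk costs are assumed to be precomputed and aggregated by summation (the aggregation form is an assumption for illustration):

```python
def offload_objective(eta: float, task_times: list, task_risks: list) -> float:
    """eta-weighted scalarization of total processing time and total
    security risk: eta = 1 optimizes time only, eta = 0 risk only."""
    assert 0.0 <= eta <= 1.0
    return eta * sum(task_times) + (1 - eta) * sum(task_risks)
```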
Step S25: optimizing the offloading model based on the improved simulated annealing-particle swarm algorithm to determine the offloading ratio and compression ratio that minimize the task processing time and the task security risk.
the Particle Swarm Optimization (PSO) is widely applied to solving the continuous optimization problem with the characteristics of few parameters, easy implementation, fast convergence speed, etc., and the basic idea of the PSO is to find the optimal solution through the movement and information sharing of individuals in a group. However, when the particle swarm algorithm is used for solving the constraint problem with a complex solution space, local optimization is easy to fall into. Therefore, a Simulated Annealing (SA) algorithm is embedded into the particle swarm optimization algorithm, the particles of the swarm have a high probability to accept the non-optimal solution to jump out of the local optimal solution when the initial temperature of the algorithm is high, and the global optimal solution can be converged when the later temperature is reduced, so that the algorithm is effectively prevented from falling into the local optimal solution.
Specifically, for the simulated annealing algorithm, the temperature control outer cycle number is assumed to be a, the inner cycle number at each specific temperature is assumed to be b, and the time complexity of the algorithm is O (a × b); for the PSO algorithm, the number of particles in each iteration is constant. Suppose the number of particles in the i-th iteration is N i Wherein i is 1, 2.. multidot.m; m represents the maximum number of iterations, and thus N 1 =N 2 =...=N m N. Suppose eachThe operation time required by each iteration of the particle is T, and the total operation time required by the optimization of the PSO algorithm is N multiplied by m multiplied by T. The total time complexity of the offload algorithm is O (b x N x m x T).
The process of determining the unloading proportion and the compression proportion of the unloading model based on the improved simulated annealing-particle swarm optimization algorithm can comprise the following steps: the simulated annealing algorithm is embedded into the particle swarm optimization algorithm, when the temperature of the algorithm is high in the initial stage, the particles of the swarm have high probability to receive a non-optimal solution and jump out of a local optimal solution, and when the temperature is reduced in the later stage, the overall optimal solution can be converged, so that the algorithm can be effectively prevented from falling into the local optimal solution; when the particle velocity and position are updated, their inertia factors are set to dynamically and linearly change from fixed values. Dynamic values of the inertia factor may yield better optimization results than fixed values. At the beginning, a larger area is searched, the approximate position of the optimal solution is positioned quickly, the particle speed is reduced along with the gradual reduction of the inertia factor, and the fine local search is started.
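A generic sketch of such a simulated annealing-particle swarm hybrid with a linearly decreasing inertia factor follows. It illustrates the general technique, not the patented algorithm: the function name, parameter defaults (swarm size, temperature schedule), and the Metropolis acceptance of worse personal bests are all assumptions:

```python
import math
import random

def sa_pso(f, lb, ub, n_particles=20, iters=200, t0=100.0, cool=0.95,
           w1=0.9, w2=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f over the box [lb, ub] with PSO; a simulated-annealing
    acceptance test lets particles keep worse personal bests early on,
    helping them escape local optima."""
    rng = random.Random(seed)
    d = len(lb)
    pos = [[rng.uniform(lb[j], ub[j]) for j in range(d)]
           for _ in range(n_particles)]
    vel = [[0.0] * d for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    t = t0
    for n in range(iters):
        w = w1 - (w1 - w2) * n / iters       # dynamic linear inertia factor
        for i in range(n_particles):
            for j in range(d):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                # keep the particle inside the feasible box
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lb[j]), ub[j])
            fi = f(pos[i])
            delta = fi - pbest_f[i]
            # Metropolis rule: always accept improvements; accept a worse
            # personal best with probability exp(-delta / t) while t is high
            if delta < 0 or rng.random() < math.exp(-delta / t):
                pbest[i], pbest_f[i] = pos[i][:], fi
            if fi < gbest_f:                 # global best is never worsened
                gbest, gbest_f = pos[i][:], fi
        t *= cool                            # annealing (cooling) schedule
    return gbest, gbest_f
```

For the offloading problem, each particle would encode the vector (x_1, y_1, ..., x_n, y_n), with lb/ub set from the constraints 0 ≤ x_k ≤ 1 and y_l ≤ y_k ≤ y_u, and f would be the η-weighted objective of step S23.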
In an exemplary embodiment, step S25 includes the steps of:
initializing and setting the particle swarm optimization model;
an initial position and an initial velocity are set for each particle and extend into an N-dimensional space, where the position of the particle i is represented as a vector U i =(u 1 ,u 2 ,...,u N ) The flight speed being expressed as a vector V i =(v 1 ,v 2 ,...,v N ) And an initial annealing temperature t.
Determining the fitness function according to the objective function;
the fitness function of the particle swarm algorithm, also referred to as the evaluation function, may be determined from the objective function of the problem. The particle position vector is substituted into the fitness function, and the fitness value of the particle is calculated to determine the objective function
[objective function formula given as an image in the original]
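The objective formula itself is rendered as an image in the original; consistent with the weighted optimization target described later (a preset weight η between task processing time and task security risk, 0 ≤ η ≤ 1), a hedged sketch of such a fitness function is:

```python
def fitness(processing_time, security_risk, eta=0.5):
    """Weighted cost of task processing time and security risk.

    eta (0 <= eta <= 1) trades off latency against security, as in
    the patent's objective; the linear normalization used here is an
    assumption, not the patent's exact formula.
    """
    assert 0.0 <= eta <= 1.0
    return eta * processing_time + (1.0 - eta) * security_risk

# A latency-sensitive setting (eta = 0.8) weighs time more heavily.
cost = fitness(processing_time=0.6, security_risk=0.3, eta=0.8)
```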
Updating the individual extremum and the global extremum of the particle;
in each iteration, a particle updates its velocity and position by tracking the individual extremum and the global extremum. Besides the best position P_best(i) found so far by particle i itself, each particle also knows the best position G_best(i) found so far by all particles in the whole population; the particles determine their next movement from their own experience and that of their companions.
Comparing the individual extremum P_best(i) with the fitness value fitness(i): if fitness(i) < P_best(i), then let P_best(i) = fitness(i). Similarly, comparing the global extremum G_best(i) with the fitness value fitness(i): if fitness(i) < G_best(i), then let G_best(i) = fitness(i).
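The extremum updates above can be sketched as follows (a minimal sketch assuming minimization, with (fitness, position) pairs standing in for the stored extrema; the representation is an assumption):

```python
def update_bests(fit, position, p_best, g_best):
    """Update one particle's personal best and the swarm's global best.

    p_best and g_best are (fitness, position) pairs; for this
    minimization problem a smaller fitness value replaces the stored
    extremum.
    """
    if fit < p_best[0]:
        p_best = (fit, list(position))
    if fit < g_best[0]:
        g_best = (fit, list(position))
    return p_best, g_best

p_best, g_best = update_bests(0.4, [0.5, 0.7],
                              p_best=(0.6, [0.1, 0.2]),
                              g_best=(0.5, [0.3, 0.3]))
```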
Updating the speed and position of the particles;
the velocity and position of the particle are updated, a new solution S_new is produced in the neighborhood of solution S, and the fitness of the particle, fitness(S_new), is calculated.
The velocity and position of the particles are updated according to the following principles:
V_i = wV_i + c_1 γ_1 (P_best(i) − u_i) + c_2 γ_2 (G_best(i) − u_i)
u_i = u_i + V_i
w is the inertia weight; in the standard particle swarm algorithm w is a constant. γ_1 and γ_2 are random numbers in (0, 1); c_1 and c_2 are learning factors, usually taken as c_1 = c_2 = 2.
When solving constrained problems with a complex solution space, the algorithm easily falls into a local optimum, so the invention improves the inertia weight. In the standard particle swarm algorithm the value of w is a non-negative constant: when it is large, the global search capability is strong but the local search capability is weak; when it is small, the global search capability is weak but the local search capability is strong. A dynamic value of w can therefore achieve better optimization results than a fixed value, so the dynamic inertia weight is set to:
w = w1 − (w1 − w2) × n / N
where 0.4 ≤ w2 ≤ w1 ≤ 0.9, n is the current iteration number, and N is the total number of iterations. At the beginning a larger area is searched and the approximate position of the optimal solution is located quickly; as w gradually decreases, the particle velocity decreases and a fine local search begins.
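The dynamic inertia weight and the velocity/position update above can be sketched as follows (the clamping of positions to [0, 1] is an assumption, since both decision variables are proportions; the patent does not state how bounds are enforced):

```python
import random

def inertia_weight(n, N, w1=0.9, w2=0.4):
    """Linearly decreasing inertia weight: w1 at iteration 0,
    reaching w2 at iteration N (0.4 <= w2 <= w1 <= 0.9)."""
    return w1 - (w1 - w2) * n / N

def update_particle(pos, vel, p_best_pos, g_best_pos, w,
                    c1=2.0, c2=2.0, rng=random):
    """One PSO velocity/position step per the update rule above."""
    new_vel = [w * v
               + c1 * rng.random() * (pb - x)
               + c2 * rng.random() * (gb - x)
               for v, x, pb, gb in zip(vel, pos, p_best_pos, g_best_pos)]
    # Clamp to [0, 1]: assumed, as both variables are proportions.
    new_pos = [min(1.0, max(0.0, x + v)) for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```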
Determining the change of the objective function relative to the previous step, and accepting or rejecting the new solution according to this change;
let the change of the objective function relative to the previous step be Δf = fitness(S_new) − fitness(S); if Δf ≤ 0, the new solution is accepted, otherwise it is accepted according to the Metropolis rule.
The Metropolis acceptance rule of the annealing algorithm is integrated into the improved particle swarm algorithm to design a hybrid algorithm. The expression of the Metropolis acceptance rule is:
p_ij = 1, if E(j) ≤ E(i); p_ij = exp(−(E(j) − E(i)) / (KT)), otherwise
where E(i) and E(j) are the internal energies of the solid in states i and j respectively, K is the Boltzmann constant, T is the current temperature, and p_ij is the probability that the system accepts the transition from state i to state j at temperature T. Therefore, according to the formula, if E(i) ≥ E(j) the new state j is accepted; otherwise the new state j is accepted with probability exp(−(E(j) − E(i)) / (KT)).
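The Metropolis rule can be sketched as follows; folding the Boltzmann constant K into the temperature is a common simplification in simulated annealing and is an assumption here:

```python
import math
import random

def metropolis_accept(delta_f, temperature, rng=random):
    """Metropolis acceptance rule with K folded into the temperature:
    always accept an improvement (delta_f <= 0), otherwise accept a
    worse solution with probability exp(-delta_f / temperature)."""
    if delta_f <= 0:
        return True
    return rng.random() < math.exp(-delta_f / temperature)
```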
carrying out annealing operation on the objective function;
while calculating the particle fitness, the local optimum P_best and the global optimum G_best are updated, and then the annealing operation is performed. Iteration is carried out at each temperature t: the optimal value at the current temperature is searched by continuously varying the fitness value, and then the temperature is lowered and the search continues until the temperature reaches its minimum. Whether the termination condition is met is then judged; if so, the calculation ends and the result is output.
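Putting the pieces together, a minimal skeleton of the hybrid ISA-PSO search described above might look like this (function names, the geometric cooling factor, and the clamping of positions to [0, 1] are assumptions, not from the patent):

```python
import math
import random

def isa_pso(fitness, num_particles=30, max_iter=100, t0=100.0, alpha=0.95,
            w1=0.9, w2=0.4, c1=2.0, c2=2.0, dim=2, seed=0):
    """Skeleton of the hybrid SA-PSO search.

    fitness maps a position [offload_ratio, compress_ratio] to a cost
    to be minimized. alpha is an assumed geometric cooling factor.
    """
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(num_particles)]
    vel = [[0.0] * dim for _ in range(num_particles)]
    p_best = [(fitness(p), list(p)) for p in pos]
    g_best = min(p_best, key=lambda b: b[0])
    t = t0
    for n in range(max_iter):
        w = w1 - (w1 - w2) * n / max_iter        # linearly decreasing inertia
        for i in range(num_particles):
            vel[i] = [w * v + c1 * rng.random() * (pb - x)
                      + c2 * rng.random() * (gb - x)
                      for v, x, pb, gb in zip(vel[i], pos[i],
                                              p_best[i][1], g_best[1])]
            cand = [min(1.0, max(0.0, x + v)) for x, v in zip(pos[i], vel[i])]
            df = fitness(cand) - fitness(pos[i])
            # Metropolis rule: accept worse moves with prob. exp(-df / t)
            if df <= 0 or rng.random() < math.exp(-df / t):
                pos[i] = cand
            f = fitness(pos[i])
            if f < p_best[i][0]:
                p_best[i] = (f, list(pos[i]))
            if f < g_best[0]:
                g_best = (f, list(pos[i]))
        t *= alpha                                # annealing: cool down
    return g_best

# Toy objective with known optimum at (0.3, 0.6), for illustration only.
best_cost, best_xy = isa_pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2)
```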
In an exemplary embodiment, step S25 may be implemented by:
Inputs: the particle population size N; the maximum number of iterations G; the learning factors c_1 and c_2; the inertia weight w; and the initial simulated annealing temperature t.
Outputs: the offloading proportion and the compression proportion.
[pseudocode of the offloading algorithm given as images in the original]
Step S27: the task data are compressed according to the compression proportion and the compressed task data are sent to the edge server, and the task data are processed by the edge server and the terminal according to the offloading proportion.
After the offloading proportion and compression proportion that minimize the task processing time and the task security risk are determined in step S25, the tasks in the terminal are divided according to the offloading proportion, determining which tasks are processed at the terminal and which are sent to the edge server. The task data to be sent to the edge server are compressed according to the compression proportion and transmitted as a compressed file, reducing the cost of transmitting the raw task data. The computational load of compression and decompression is modeled as a nonlinear function of the compression ratio and is included in the task offloading process.
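As a hedged illustration of the split-and-compress step (the byte-level split and the use of zlib as the codec are assumptions; the patent does not name a compression algorithm):

```python
import zlib

def split_and_offload(task_data: bytes, offload_ratio: float):
    """Split task data per the offloading proportion and compress the
    part to be sent to the edge server."""
    cut = int(len(task_data) * (1.0 - offload_ratio))
    local_part = task_data[:cut]      # processed at the terminal
    remote_part = task_data[cut:]     # offloaded to the edge server
    return local_part, zlib.compress(remote_part)

local, payload = split_and_offload(b"x" * 10_000, offload_ratio=0.4)
```

The edge server would then `zlib.decompress` the payload before processing, mirroring the decompression step described below.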
Further, after step S27, the method may further include:
the edge server decompresses and processes the received task and returns the obtained processing result to the terminal. The compressed data received by the edge server may include the data to be processed and a data processing request; the edge server parses the data and processes it in response to the data processing request.
In an exemplary embodiment, the parameter settings of the edge offloading system are shown in Table 1, and the effect of offloading edge computing tasks using the invention is as follows:
The processing times obtained by processing the data locally at the terminal, by offloading the data entirely to the edge server, and by the edge computing task offloading method above are shown in fig. 3: 717 ms, 668 ms, and 569 ms respectively. Compared with full offloading and with local processing, the task processing time of the edge computing task offloading method is reduced by 14.82% and 20.64%, respectively.
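The reported reductions can be checked directly from the three measured times:

```python
local_ms, edge_ms, proposed_ms = 717, 668, 569  # times from fig. 3

reduction_vs_edge = (edge_ms - proposed_ms) / edge_ms     # vs. full offloading
reduction_vs_local = (local_ms - proposed_ms) / local_ms  # vs. local processing

print(round(100 * reduction_vs_edge, 2))   # 14.82
print(round(100 * reduction_vs_local, 2))  # 20.64
```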
TABLE 1
[Table 1, the parameter settings of the edge offloading system, is given as an image in the original]
In an exemplary embodiment, the objective function values obtained by processing the data locally at the terminal, by offloading entirely to the edge server, and by the above edge computing task offloading method are shown in fig. 4. The objective function value of the edge computing task offloading method is 0.64 and that of full offloading at the edge is 0.86; when all tasks are executed locally the objective function value is 1, because the risk cost of each task is 0 when executed locally, i.e., the objective function then contains only the processing time. The optimization objective of the invention is to minimize the weighted cost of task processing time and security risk, and the experimental results show that joint offloading performs better than local execution and than full offloading at the edge.
In an exemplary embodiment, to evaluate the compression model during offloading, the objective function values with and without compression are compared. As shown in fig. 5, the trends of the two curves are similar: without compression the objective function value is 0.626, while after adding the compression model it drops to 0.517, a reduction of about 17.41% in the weighted objective. With the compression model, the weighted sum of task delay and risk cost is thus greatly reduced.
In an exemplary embodiment, the influence of the objective weight factor on the total cost and on the objective function is analyzed by varying the objective weight and observing the above indices at different values. The experimental results are shown in fig. 6: given the same number of tasks, the objective function value increases as the objective weight increases. The objective weight balances latency against security and takes different values for different requirements: when η = 0.2 the system as a whole demands high security, and when η = 0.8 it demands high real-time performance of the tasks. Different users can therefore select different weight parameters, and the weight can be chosen dynamically according to user requirements.
In an exemplary embodiment, Particle Swarm Optimization (PSO), Simulated Annealing (SA), a Random Algorithm (RA), and a Hill Climbing algorithm (HC) are selected for comparison with the offloading algorithm of the invention based on Improved Simulated Annealing Particle Swarm Optimization (ISA-PSO), evaluating the convergence and optimal values of the algorithms. As shown in fig. 7, the random algorithm has no selection strategy and shows the worst search performance, consistent with expectations. The hill climbing algorithm is a local greedy method. The simulated annealing algorithm derives from the solid annealing principle; it is a probability-based algorithm and an improvement on hill climbing, so it outperforms both the random and hill climbing algorithms. The experimental results show that the method of the invention has a fast convergence rate and reaches the smallest optimal value.
An exemplary embodiment of the present invention provides an edge computing task offloading method, including the steps of:
at least one terminal acquires the task data to be processed, constructs an offloading model among the task offloading proportion, the task data compression proportion, the task processing time, and the task security risk, and optimizes the offloading model based on the improved simulated annealing-particle swarm algorithm to determine the offloading proportion and compression proportion that minimize the task processing time and the task security risk;
and the terminal determines, according to the offloading proportion, the task data to be sent to the edge server, compresses this task data according to the compression proportion, and sends the compressed task data to the edge server.
An exemplary embodiment of the present invention provides an edge computing task offloading device, and fig. 8 is a schematic structural diagram of an edge computing task offloading device according to an exemplary embodiment of the present invention; as shown in fig. 8, the edge calculation task offloading device includes:
a data obtaining module 80, configured to obtain data to be processed, and determine a task to be processed according to the data to be processed and a unit data calculation amount of the terminal;
the model building module 82 is used for building unloading models among the task unloading proportion, the task data compression proportion, the task processing time and the task safety risk;
a parameter determining module 84, configured to optimize the unloading model based on an improved simulated annealing-particle swarm optimization algorithm to determine an unloading proportion and a compression proportion when the task processing time and the task safety risk are minimum;
and the task unloading module 86 is configured to determine task data to be sent to an edge server in the terminal according to the unloading ratio, compress the task data to be sent to the edge server according to the compression ratio, and send the compressed task data to the edge server.
An exemplary embodiment of the present invention provides an edge computing task offloading system, including:
the terminal is used for acquiring task data to be processed, constructing an unloading model among a task unloading proportion, a task data compression proportion, task processing time and task safety risks, and optimizing the unloading model based on an improved simulated annealing-particle swarm algorithm to determine the unloading proportion and the compression proportion when the task processing time and the task safety risks are minimum;
and the edge server, which is used for receiving and processing the task data that the terminal determines according to the offloading proportion and compresses according to the compression proportion before sending to the edge server.
The method and the device for offloading edge computing tasks solve the task offloading problems caused by large data volume, strong terminal heterogeneity, and high security requirements. For the joint optimization of computation offloading, data compression, and security protection, a two-layer framework comprising a terminal layer and an edge server layer is established. To overcome the wireless transmission bottleneck caused by the limited capacity of the wireless link between terminal and edge server, data compression is combined with edge computing: the data compression model determines the compression ratio jointly with the computation offloading decision and resource allocation optimization, thereby reducing the transmission delay of tasks. In addition, a different security level is set for each task to protect task security; by reducing the security risk cost of each task, the security protection of the data is improved and the whole offloading process becomes safer. An offloading algorithm based on improved simulated annealing particle swarm optimization is proposed to solve the offloading model.
The details of each module/unit in the above device have been described in detail in the corresponding method section and are not repeated here. It should be noted that although several modules or units of the device are mentioned in the detailed description above, such a division is not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
In addition to the above-described methods and apparatus, embodiments of the invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the methods according to various embodiments of the invention described in the "exemplary methods" section above of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the C language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Another embodiment of the present invention provides an electronic device, which may be used to perform all or part of the steps of the method or the network control method described in this exemplary embodiment. The device comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform steps in a method according to various embodiments of the present invention described in the "exemplary method" section above.
Another embodiment of the present invention provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method according to various embodiments of the present invention described in the "exemplary method" above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present invention have been described above with reference to specific embodiments, but it should be noted that advantages, effects, etc. mentioned in the present invention are only examples and are not limiting, and the advantages, effects, etc. cannot be considered as being essential to the various embodiments of the present invention. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and not for the purpose of limitation, and the foregoing disclosure is not intended to be exhaustive or to limit the invention to the precise details disclosed.
The block diagrams of devices, apparatuses, systems involved in the present invention are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, devices, systems may be connected, arranged, configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "are used herein to mean, and are used interchangeably with, the word" and/or, "unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An edge computing task offloading method, comprising:
acquiring task data to be processed, and determining the calculated amount of the task data according to the task data and the unit task data calculated amount of a terminal;
constructing an offloading model among the task offloading proportion, the task data compression proportion, the task processing time, and the task security risk, wherein the offloading proportion is the proportion of the task data computation amount processed by the edge server to the total computation amount of the task data;
optimizing the unloading model based on an improved simulated annealing-particle swarm algorithm to determine the unloading proportion and the compression proportion when the task processing time and the task safety risk are minimum;
and compressing the task data according to the compression ratio and sending the compressed task data to the edge server, and processing the task data by the edge server and the terminal according to the unloading ratio.
2. The method for offloading the task of edge computing according to claim 1, further comprising:
and the edge server decompresses and processes the received task data and returns the obtained processing result to the terminal.
3. The method for offloading edge computing tasks of claim 2, further comprising:
modeling the computational load of compression and decompression as a nonlinear function of the compression ratio, wherein the task processing time comprises the task data compression time and the task data decompression time, and the CPU cycles C_z(y_k) consumed by compression or decompression as a function of the compression ratio y_k are:
Figure FDA0003699469150000011
wherein ω_k = 1/y_k, z denotes compression or decompression, γ_k,0 is the maximum number of CPU cycles,
Figure FDA0003699469150000012
and
Figure FDA0003699469150000013
is a constant.
4. The method for offloading edge computing tasks of claim 1, further comprising:
setting different security levels for each task, and modeling the task security risk as a function of the security levels to quantify the security levels as security risk costs for the tasks;
the security risk cost of the task is as follows:
ξ(x_k) = x_k μ_k
wherein
Figure FDA0003699469150000014
θ is the threshold of the security constraint, S_k is the security level of task k, and x_k is the offloading proportion of task k.
5. The method for unloading task of edge computing according to claim 3, wherein the constructing of the unloading model between the task unloading proportion, the task data compression proportion, the task processing time and the task security risk comprises:
determining the processing time of the terminal and the processing time of the edge server according to the size of the task data and the task unloading proportion;
and determining the compression time, the transmission time and the decompression time of the task data to be unloaded according to the task unloading proportion and the task data compression proportion.
6. The method for unloading task of edge computing according to claim 5, wherein the constructing of the unloading model between the task unloading proportion, the task data compression proportion, the task processing time and the task security risk comprises:
and constructing an unloading model of the task processing time and the task safety risk according to a preset weight eta, wherein eta is more than or equal to 0 and less than or equal to 1.
7. The method for offloading task of edge computing according to claim 5, wherein the determining the processing time of the terminal and the processing time of the edge server according to the size of the task data and the task offloading ratio comprises:
the processing time of the task k at the terminal is as follows:
Figure FDA0003699469150000021
the processing time of the task k in the edge server is as follows:
Figure FDA0003699469150000022
wherein x_k is the offloading proportion of task k, D_k is the data size of task k, C_k is the computation amount corresponding to a unit of data at the terminal, f_k (in CPU cycles/second) is the computing capability of the terminal for processing task k, and
Figure FDA0003699469150000023
(in CPU cycles/second) is the computing capability of the edge server for processing task k.
8. The method for offloading task of edge computing according to claim 7, wherein the determining a compression time, a transmission time, and a decompression time of task data to be offloaded according to the task offloading proportion and the task data compression proportion comprises:
the compression time of the task k at the terminal is as follows:
Figure FDA0003699469150000024
the transmission time for task k to be offloaded to the edge server is:
Figure FDA0003699469150000031
the decompression time of the task k edge server is:
Figure FDA0003699469150000032
wherein
Figure FDA0003699469150000033
y_k is the compression ratio, C_co(y_k) is the number of CPU cycles required by the terminal for compression, and C_de(y_k) is the number of CPU cycles required by the edge server for decompression.
9. The method for offloading edge computing tasks of claim 1, wherein the optimizing the offloading model based on the improved simulated annealing-particle swarm optimization comprises:
setting the inertia factor to decrease dynamically and linearly when updating the particle velocity and position, the inertia factor being:
Figure FDA0003699469150000034
wherein 0.4 ≤ w2 ≤ w1 ≤ 0.9, n is the current iteration number, and N is the total number of iterations.
10. An edge computing task offloading device, comprising:
the data acquisition module is used for acquiring task data to be processed and determining the calculated amount of the task data according to the task data and the unit task data calculated amount of the terminal;
the model building module is used for building an offloading model among the task offloading proportion, the task data compression proportion, the task processing time, and the task security risk, wherein the offloading proportion is the proportion of the task data computation amount processed by the edge server to the total computation amount of the task data;
the parameter determining module is used for optimizing the unloading model based on an improved simulated annealing-particle swarm algorithm so as to determine the unloading proportion and the compression proportion when the task processing time and the task safety risk are minimum;
and the task unloading module is used for compressing the task data according to the compression ratio and sending the compressed task data to the edge server, and the edge server and the terminal process the task data according to the unloading ratio.
CN202210684199.9A 2022-06-17 2022-06-17 Edge calculation task unloading method and device, electronic equipment and storage medium Pending CN115098115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210684199.9A CN115098115A (en) 2022-06-17 2022-06-17 Edge calculation task unloading method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210684199.9A CN115098115A (en) 2022-06-17 2022-06-17 Edge calculation task unloading method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115098115A true CN115098115A (en) 2022-09-23

Family

ID=83291624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210684199.9A Pending CN115098115A (en) 2022-06-17 2022-06-17 Edge calculation task unloading method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115098115A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117499999A (en) * 2023-12-29 2024-02-02 四川华鲲振宇智能科技有限责任公司 Task unloading method based on edge calculation
CN118042495A (en) * 2024-04-12 2024-05-14 华东交通大学 Pressurized security computing unloading and resource optimizing method in ultra-dense network


Similar Documents

Publication Publication Date Title
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN110347500B (en) Task unloading method for deep learning application in edge computing environment
CN108920280B (en) Mobile edge computing task unloading method under single-user scene
CN115098115A (en) Edge calculation task unloading method and device, electronic equipment and storage medium
CN109978142B (en) Neural network model compression method and device
CN109829332B (en) Joint calculation unloading method and device based on energy collection technology
CN113950066A (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN110928654A (en) Distributed online task unloading scheduling method in edge computing system
CN111047563B (en) Neural network construction method applied to medical ultrasonic image
CN109144719B (en) Collaborative unloading method based on Markov decision process in mobile cloud computing system
CN112667400B (en) Edge cloud resource scheduling method, device and system managed and controlled by edge autonomous center
CN115104108A (en) Method and system for partitioning and bit width allocation of deep learning model for distributed system reasoning
CN110531996B (en) Particle swarm optimization-based computing task unloading method in multi-micro cloud environment
CN112579194A (en) Block chain consensus task unloading method and device based on time delay and transaction throughput
CN112446491A (en) Real-time automatic quantification method and real-time automatic quantification system for neural network model
CN112766467B (en) Image identification method based on convolution neural network model
CN111008924A (en) Image processing method and device, electronic equipment and storage medium
CN112866006A (en) Cloud and mist fusion network multi-target task unloading method based on time delay energy consumption balance
CN114564304A (en) Task unloading method for edge calculation
CN113645637A (en) Method and device for unloading tasks of ultra-dense network, computer equipment and storage medium
WO2022246986A1 (en) Data processing method, apparatus and device, and computer-readable storage medium
CN115022319A (en) DRL-based edge video target detection task unloading method and system
CN114625477A (en) Service node capacity adjusting method, equipment and computer readable storage medium
CN116976461A (en) Federal learning method, apparatus, device and medium
CN116828541A (en) Edge computing dependent task dynamic unloading method and system based on multi-agent reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination