CN114880044A - Method, system, medium and electronic terminal for task offloading in edge computing - Google Patents

Method, system, medium and electronic terminal for task offloading in edge computing Download PDF

Info

Publication number
CN114880044A
CN114880044A CN202210483052.3A
Authority
CN
China
Prior art keywords
task
offloading
uploading
strategy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210483052.3A
Other languages
Chinese (zh)
Inventor
王翊
汤涛
蒋芳
许耀华
柏娜
江福林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202210483052.3A priority Critical patent/CN114880044A/en
Publication of CN114880044A publication Critical patent/CN114880044A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/502Proximity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a method, system, medium and electronic terminal for task offloading in edge computing, wherein the method comprises the following steps: constructing a network offloading model based on a mobile blockchain; determining the offload-ratio configuration of the offloading task; determining an upload-and-offload strategy for the offloading task; offloading the task through the network offloading model according to the offload-ratio configuration and the upload-and-offload strategy; and constructing a utility function of the offloading task and optimizing it to improve offloading efficiency. Because the network offloading model is built on a mobile blockchain and the mobile blockchain is applied to task offloading in mobile edge computing, nearby idle devices can be recruited for cooperative offloading to split the computation, improving both the offloading efficiency of the task and the resource utilization of the idle devices; constructing and optimizing a utility function of the offloading task improves the offloading efficiency further.

Description

Method, system, medium and electronic terminal for task offloading in edge computing
Technical Field
The present invention relates to the field of communications technologies, and in particular to a method, system, medium and electronic terminal for task offloading in edge computing.
Background
Blockchain technology is predicted to be one of the key technologies of 6G cellular mobile communication owing to its decentralization, anonymity, openness and tamper resistance. Blockchain built on mobile communication can protect privacy and reduce cost. Because conventional blockchains suffer from fixed deployment scenarios and high energy consumption, the mobile blockchain arising from mobile computing may become the mainstream technology and find wide application. However, a mobile blockchain depends on the wireless network and mobile terminals, and since current mobile terminals are limited in power supply and computing capability, they often cannot meet the enormous computation and energy demands of blockchain workloads, which limits the further development of the mobile blockchain. Mobile Edge Computing (MEC) can be used to offload this computation and effectively reduce the burden on the mobile terminal.
Nowadays, the growing number of mobile users brings huge volumes and varieties of data-processing tasks. Yet under massive wireless access, many resources are still wasted: published data indicate that in a typical data center the average resource utilization is only 20%-30%, and a computer even in the idle state consumes about 60% of its peak power. If idle devices are recruited, energy consumption rises by only about 40% while the data-processing pressure is greatly relieved. Meanwhile, low resource utilization increases the number of queued parallel tasks, causing network congestion and higher delay.
Therefore, mobile blockchain technology is equally applicable to mobile edge computing. Using the mobile blockchain as an application, its decentralization and openness allow nearby idle devices to be recruited for cooperative offloading, splitting the computation, which effectively improves both the offloading efficiency of tasks in edge computing and the resource utilization of the idle devices.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a technical solution for task offloading in edge computing, which applies a mobile blockchain to the cooperative processing of offloading tasks, improves resource utilization, reduces delay, and balances energy consumption.
To achieve the above and other related objects, the present invention provides the following technical solutions.
A method for offloading tasks in edge computing comprises the following steps:
constructing a network offloading model based on a mobile blockchain;
constructing an accident-rate function and an offloading-cost function of the network offloading model, and determining the offload-ratio configuration of the offloading task according to these two functions;
comprehensively analyzing the requirements on delay, energy consumption, accident rate and security based on the offload-ratio configuration, and determining the upload-and-offload strategy of the offloading task in the network offloading model;
offloading the task through the network offloading model according to the offload-ratio configuration and the upload-and-offload strategy;
and constructing a utility function of the offloading task and optimizing it to improve the offloading efficiency of the task.
Optionally, the step of constructing a network offloading model based on the mobile blockchain includes:
covering a preset area centered on an edge server to form a consortium mobile blockchain controlled by that edge server, in which all devices covered by the edge server reach consensus and jointly follow the consortium-blockchain protocol; the edge server and the devices in the consortium mobile blockchain constitute the network offloading model.
Optionally, a block of the consortium mobile blockchain is divided into a first domain, an information domain and a transaction domain: the first domain stores the hash value, mining difficulty, timestamp and nonce; the information domain stores the compute memory of the edge server and the current state of each device; and the transaction domain stores transaction information.
Optionally, the step of constructing the accident-rate function and the offloading-cost function and determining the offload-ratio configuration of the offloading task from them includes:
constructing the accident-rate function and the offloading-cost function of the network offloading model, and deriving from them the constraint on the edge server's offload ratio;
and determining the edge server's offload ratio by convex optimization, determining the number of idle devices needed to cooperate on the offloading task, and then computing the offload ratio of each idle device, completing the offload-ratio configuration of the task.
Optionally, the step of comprehensively analyzing delay, energy consumption, accident rate and security based on the offload-ratio configuration and determining the upload-and-offload strategy of the offloading task in the network offloading model includes:
proposing, based on the network offloading model, a whole-upload-then-distributed-offload strategy and a distributed-upload-without-offload strategy;
computing, under the offload-ratio configuration, the delay and energy consumption of each of the two strategies;
and comparing the two against the requirements on delay, energy consumption, accident rate and security, and selecting one of them as the upload-and-offload strategy of the offloading task.
Optionally, the step of constructing a utility function of the offloading task and optimizing it to improve the offloading efficiency includes:
constructing the utility function of the offloading task from the delay and the energy consumption;
and optimizing the utility function with an epsilon-greedy algorithm to raise its value and thereby improve the offloading efficiency of the task.
Optionally, the utility function of the offloading task is:
(equation rendered as an image in the original)
where G is the utility function, ε₁ and ε₂ are balancing factors, T_begin and E_begin denote the delay and energy consumption required to complete the offloading task under the original queuing-theory strategy, and T and E denote the delay and energy consumption required to complete it under the selected one of the whole-upload-then-distributed-offload and distributed-upload-without-offload strategies.
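The utility function in this claim survives only as an image, so its exact form is not recoverable here. As a sketch under the assumption that G rewards normalized reductions in delay and energy relative to the queuing-theory baseline, weighted by ε₁ and ε₂, it might look like the following, together with the epsilon-greedy search the claims name (the function names and the exact form of G are this sketch's assumptions, not the patent's):

```python
import random

def utility(t_begin, e_begin, t_new, e_new, eps1=0.5, eps2=0.5):
    # Assumed form: weighted, normalized improvement over the baseline strategy.
    return (eps1 * (t_begin - t_new) / t_begin
            + eps2 * (e_begin - e_new) / e_begin)

def epsilon_greedy(candidates, reward, rounds=300, epsilon=0.1, seed=0):
    # Explore a random candidate with probability epsilon (and until every
    # candidate has been tried once), otherwise exploit the best empirical mean.
    rng = random.Random(seed)
    totals = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    for _ in range(rounds):
        if rng.random() < epsilon or min(counts.values()) == 0:
            choice = rng.choice(candidates)
        else:
            choice = max(candidates, key=lambda c: totals[c] / counts[c])
        totals[choice] += reward(choice)
        counts[choice] += 1
    return max(candidates, key=lambda c: totals[c] / max(counts[c], 1))
```

With a deterministic reward peaked at 0.5, `epsilon_greedy([0.2, 0.5, 0.8], lambda a: -abs(a - 0.5))` settles on 0.5, mirroring how the claimed optimization raises G.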
A system for offloading tasks in edge computing, comprising:
a model-building module for constructing a network offloading model based on the mobile blockchain;
a task-analysis module for determining the offload-ratio configuration and the upload-and-offload strategy of the offloading task in the network offloading model;
a task-execution module for executing the offloading task;
and a task-optimization module for improving the offloading efficiency of the task.
Optionally, the task-optimization module includes a construction unit for building the utility function of the offloading task and an optimization unit for optimizing that utility function.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements any of the methods above.
An electronic terminal, comprising: a processor and a memory;
the memory stores a computer program, and the processor executes that program so that the electronic terminal performs the method.
As described above, the offloading method, system, medium and electronic terminal provided by the present invention have at least the following beneficial effects:
a network offloading model is built on the mobile blockchain and the offloading task is processed through it; applying the mobile blockchain to task offloading in mobile edge computing allows nearby idle devices to be recruited for cooperative offloading, splitting the computation and improving both the offloading efficiency of the task and the resource utilization of the idle devices; in addition, a utility function of the offloading task is constructed and optimized, further improving the offloading efficiency.
Drawings
FIG. 1 is a schematic diagram of the steps of the task-offloading method in edge computing according to the present invention;
FIG. 2 is a schematic diagram of a network offloading model in an alternative embodiment of the invention;
FIG. 3 is a delay simulation graph of an offloading task under three different offloading policies in an alternative embodiment of the present invention;
FIG. 4 shows simulation plots of the utility function of an offloading task under several different optimization algorithms in an alternative embodiment of the invention;
FIG. 5 is a simulation graph of the resource-utilization function of an offloading task under two different offloading policies in an alternative embodiment of the invention;
FIG. 6 is a schematic structural diagram of the task-offloading system in edge computing according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific examples; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. The invention may also be practiced or applied through other, different embodiments, and the details of this specification may be modified in various respects without departing from the spirit and scope of the invention.
It should be noted that the drawings provided with these embodiments only illustrate the basic idea of the invention: they show only the components relevant to the invention rather than the number, shape and size of components in an actual implementation, in which the type, quantity and proportion of components may vary freely and the layout may be more complex. The structures, proportions and sizes shown in the drawings and described in the specification serve only the understanding and reading of this disclosure; they do not limit the invention, whose scope is defined by the claims, and any structural modification, change of proportion or adjustment of size that does not affect the efficacy or purpose of the invention falls within that scope.
As described in the background above, the inventors found through study that two needs coincide: on one hand, the growing computation and energy demands of the mobile blockchain must be met; on the other hand, the resource waste of massive access caused by the growing number of mobile users should be avoided. On this basis, the mobile blockchain is applied to task offloading in mobile edge computing, so that nearby idle devices are recruited for cooperative offloading to split the computation, improving both the offloading efficiency of the task and the resource utilization of the idle devices.
As shown in FIG. 1, the present invention provides a method for offloading tasks in edge computing, comprising the steps of:
S1, constructing a network offloading model based on the mobile blockchain;
S2, constructing an accident-rate function and an offloading-cost function of the network offloading model, and determining the offload-ratio configuration of the offloading task from them;
S3, comprehensively analyzing the requirements on delay, energy consumption, accident rate and security based on the offload-ratio configuration, and determining the upload-and-offload strategy of the offloading task in the network offloading model;
S4, offloading the task through the network offloading model according to the offload-ratio configuration and the upload-and-offload strategy;
S5, constructing a utility function of the offloading task and optimizing it to improve the offloading efficiency.
In detail, step S1 of constructing a network offloading model based on the mobile blockchain further includes:
covering a preset area centered on the edge server (the MEC side) to form a consortium mobile blockchain controlled by that server, in which all covered devices reach consensus and jointly follow the consortium-blockchain protocol; the edge server and the devices in the consortium mobile blockchain form the network offloading model.
In an alternative embodiment of the present invention, as shown in FIG. 2, the edge server and six peripheral devices (device 1 through device 6) form a consortium mobile blockchain, which constitutes an efficient blockchain-based network offloading model. Device 2 sends the edge server a request to offload a task; the server responds by recruiting the idle peripheral devices 1, 3, 4, 5 and 6 to offload the task cooperatively.
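The topology of FIG. 2 can be sketched as a minimal data model: an edge server that answers an offload request by recruiting every other idle device in its consortium. The class and method names below are illustrative, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    dev_id: int
    idle: bool = True      # corresponds to usage status l_{i,j} = 0 (idle)

@dataclass
class EdgeServer:
    devices: list = field(default_factory=list)

    def handle_offload_request(self, requester_id):
        # Recruit every idle device other than the requester for cooperation.
        return [d.dev_id for d in self.devices if d.idle and d.dev_id != requester_id]

server = EdgeServer([Device(i) for i in range(1, 7)])
server.devices[1].idle = False           # device 2 is busy issuing the request
print(server.handle_offload_request(2))  # [1, 3, 4, 5, 6]
```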
It should be noted that a conventional block has two parts, a block header and a block body; when the mobile blockchain generated by mobile edge computing is applied to network offloading, each block is instead divided into a first domain, an information domain and a transaction domain. The first domain is identical to a conventional block header and stores the hash value, mining difficulty, timestamp, nonce, and so on. The information domain stores the compute memory C_station of the edge server and the current state of each device, which includes the CPU execution frequency of the device (given by an expression rendered as an image in the original), the usage status l_{i,j} of the device (used or idle), the available memory C_personal when the device is idle, and the rental price when it is idle. Here l_{i,j} denotes the j-th device in the i-th area: l_{i,j} = 1 means the device's resources are in use, and l_{i,j} = 0 means they are not, so a user issuing an offload request can ask idle peripheral devices to assist. Whenever a user in the area requests to offload a task using peripheral idle resources, or a cooperative offload using those resources finishes, the task information in the mobile blockchain is updated promptly, ensuring that the information stays current; the openness of the blockchain lets every user node read the block contents unconditionally and learn the state of the surrounding idle resources. The transaction domain stores transaction information and, as in a conventional blockchain, uses a tree structure to keep the transaction contents tamper-proof, guaranteeing transaction security.
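The three-domain block layout described above can be sketched as plain record types; the field names are illustrative, since the patent specifies the contents of each domain but not an encoding:

```python
from dataclasses import dataclass, field

@dataclass
class FirstDomain:            # same role as a conventional block header
    prev_hash: str
    difficulty: int
    timestamp: float
    nonce: int

@dataclass
class DeviceState:            # one entry of the information domain
    in_use: bool              # l_{i,j}: True means resources in use
    cpu_freq: float           # CPU execution frequency
    c_personal: int           # available memory when idle
    rental_price: float       # asking price when idle

@dataclass
class Block:
    first: FirstDomain
    info_c_station: int                                 # edge-server compute memory
    info_devices: dict = field(default_factory=dict)    # (area, device) -> DeviceState
    transactions: list = field(default_factory=list)    # transaction domain

blk = Block(FirstDomain("0x00", 4, 1651000000.0, 42), info_c_station=1024)
blk.info_devices[(1, 2)] = DeviceState(False, 2.4e9, 256, 0.05)
```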
Assume there are A edge servers in a certain region, indexed a ∈ {1, 2, …, A}, and denote the region Q. Each edge server covers a certain number of devices, and an idle device can sign a lease agreement with its edge server. The lease includes a security clause (the offloaded content may not be uploaded elsewhere or stolen for use); the device then hands its memory over to the edge server, and its state is recorded in the consortium blockchain. The number of idle devices under each edge server is given by the set Y = {y_1, y_2, …, y_A}. User p_{i,j} (the j-th user under the i-th base station) submits a total offload amount W_{i,j}. The proportion offloaded to the edge server is α_bs, and the proportion offloaded to each idle user terminal recorded in the consortium mobile blockchain is β_θ.
The task share given to the idle-device side should not be too large: compared with the edge server, idle devices are subject to more force-majeure factors, raising the probability of an accident. If an idle device is powered off or loses its network connection, the task may be suspended or terminated; even though spare resources can make up for this in time, the share held by the idle devices must still be bounded (the original gives this bound as an equation rendered as an image), where β_lim is the maximum task share of the idle-device side. The edge server side, for its part, takes a complementary share so that one task is processed in parallel with the idle devices, which raises resource utilization and reduces delay, while handing part of the task to idle devices lowers the offloading cost for the requesting user.
When the task is offloaded, the user or the edge server must fix the number of idle devices used; once that number is determined, the multivariate problem reduces to a univariate one. With α_bs + nβ_θ = 1, where n is the number of cooperating idle devices, the specific share β_θ of each idle device can be expressed through α_bs, so that α_bs is the only remaining variable.
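Under the reconstruction above, with the shares summing to one (α_bs + nβ_θ = 1, this reading of the garbled formula being an assumption), reducing the problem to the single variable α_bs is one line:

```python
def idle_share(alpha_bs: float, n: int) -> float:
    """Per-device idle share beta_theta implied by alpha_bs + n * beta_theta = 1."""
    if not 0.0 <= alpha_bs <= 1.0 or n < 1:
        raise ValueError("need 0 <= alpha_bs <= 1 and n >= 1")
    # The remainder of the task is split evenly across the n idle devices.
    return (1.0 - alpha_bs) / n
```

For example, with the edge server taking 70% of the task and three idle helpers, each helper receives 10%.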
In a data center, resource utilization is only about 20%-30%, and a computer consumes roughly 60% of its peak power even when unused; recruiting idle resources therefore adds only a little energy consumption while enlarging the limited pool of computing resources, so that more tasks can be processed and network pressure is relieved.
In the present invention, C_personal denotes the available memory of each leased (idle) device and C_station the available memory of the edge server; assuming every user device has equal memory and every server has equal memory, the resource-utilization function is the ratio of the total offloaded tasks to the sum of the idle-device memory and server memory in the consortium blockchain (the original gives this function as an equation rendered as an image). Sum_i denotes the total offloaded tasks in area i; with the blockchain technology in place as an application, the tasks recruit idle devices for cooperative offloading, and calling in the surplus idle resources changes the feasible range from 0 < Sum_i ≤ μ₁C_station to 0 < Sum_i ≤ μ₁C_station + y_n·C_personal.
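The effect of recruiting idle devices on the feasible task range can be illustrated with a small capacity check; μ₁ (the fraction of server memory actually used, introduced just below) and the function names are assumptions of this sketch:

```python
def capacity(c_station, mu1=1.0, n_idle=0, c_personal=0):
    # Upper bound on the total offloadable task Sum_i in one area.
    return mu1 * c_station + n_idle * c_personal

def utilization(total_offload, c_station, mu1=1.0, n_idle=0, c_personal=0):
    # Ratio of offloaded tasks to the pooled memory of server plus idle devices.
    return total_offload / capacity(c_station, mu1, n_idle, c_personal)

base = capacity(100, mu1=0.5)                              # server alone: 50
pooled = capacity(100, mu1=0.5, n_idle=5, c_personal=20)   # with 5 idle devices: 150
```

Recruiting the five idle devices triples the feasible range in this toy setting, which is exactly the range extension the paragraph above describes.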
During the lease period idle devices are remotely managed by the edge server, but emergencies of various kinds can still occur, so the edge server keeps standby devices or spare memory in the area to guard against problems during task offloading and processing. The edge server therefore never uses up its memory: it reserves a portion against emergencies and uses only the fraction μ₁ of its memory.
In detail, step S2 of constructing the accident-rate and offloading-cost functions of the network offloading model and determining the offload-ratio configuration from them further includes:
S21, constructing the accident-rate function and the offloading-cost function of the network offloading model, and deriving from them the constraint on the edge server's offload ratio;
S22, determining the edge server's offload ratio by convex optimization, determining the number of idle devices needed for the offloading task, computing the offload ratio of each idle device, and completing the offload-ratio configuration of the task.
In more detail, an idle device carries more uncertainty than the edge server side, power failure and network outage being the most common problems, so when the two sides offload a task cooperatively, neither the share given to idle devices nor their number should be too large. Step S21 therefore first constructs the accident-rate function ACC_{i,j} (given in the original as an equation rendered as an image). With ACC_max the maximum acceptable accident rate, the constraint ACC_{i,j} ≤ ACC_max bounds the idle-device configuration (the original renders this bound as an image), where k₁ is the accident-rate parameter, generally a fixed value.
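The accident-rate function itself survives only as an image, so the following is purely an illustrative stand-in: a linear model in which the risk grows with the number of cooperating idle devices and their share, capped by ACC_max. The linear form, k₁, and all values here are assumptions, not the patent's formula:

```python
def accident_rate(n, beta_theta, k1=0.05):
    # Illustrative stand-in: risk grows with idle-device count and share.
    return k1 * n * beta_theta

def max_idle_devices(beta_theta, acc_max, k1=0.05):
    # Largest n that keeps the modeled accident rate within ACC_max.
    n = 0
    while accident_rate(n + 1, beta_theta, k1) <= acc_max:
        n += 1
    return n
```

With β_θ = 0.1 and ACC_max = 0.021, this toy model admits at most four cooperating idle devices, matching the text's point that the idle-device count must be limited.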
In more detail, step S21 also considers the offloading cost. Let the transaction unit price of the edge server be P_b and that of an idle device P_b·k₂, where k₂ is a discount coefficient. The offloading costs of the edge server and of the idle terminals are then Pic₁ = P_b·α_bs·W_{i,j} and Pic₂ = n·k₂·P_b·β_θ·W_{i,j} respectively. With Pic_max the highest reasonable price offered by the edge server, Pic₁ + Pic₂ ≤ Pic_max; combining this with the relationship between α_bs and β_θ yields the final offload-ratio constraint (rendered as an image in the original).
Thus, in step S21, the constraint on the edge server's offload ratio α_bs is determined from the accident-rate function and the offloading-cost function (the resulting constraint set is rendered as an image in the original).
In more detail, once the constraint range of α_bs is fixed, step S22 determines the optimal offload ratio α_bs of the edge server by a convex-optimization algorithm, simultaneously determines the number n of idle devices needed for the cooperative offload, computes the idle-device share β_θ, and thereby completes the offload-ratio configuration of the task.
In detail, step S3 of comprehensively analyzing delay, energy consumption, accident rate and security based on the offload-ratio configuration and determining the upload-and-offload strategy of the offloading task in the network offloading model further includes:
S31, proposing, based on the network offloading model, a whole-upload-then-distributed-offload strategy and a distributed-upload-without-offload strategy;
S32, computing, under the offload-ratio configuration, the delay and energy consumption of each of the two strategies;
S33, comparing the two against the requirements on delay, energy consumption, accident rate and security, and selecting one of them as the upload-and-offload strategy of the offloading task.
In more detail, in step S31, since the whole offloading task is eventually distributed across the edge server and several idle devices for cooperative offloading, the upload-and-offload policies fall into two types. In the whole-upload-then-distributed-offload strategy, the requesting user uploads the segmented offloading task as a whole to the edge server together with its requirements; the edge server offloads the segmented subtasks to the corresponding idle devices for parallel processing, the results are returned to the edge server, and the processed task is finally delivered to the requesting user. In the distributed-upload-without-offload strategy, the task is uploaded in a distributed fashion: the user uploads the main task directly to the edge server and each subtask directly to its idle device.
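Step S33's comparison can be sketched as a weighted scoring of the two policies over their computed delay and energy. The weights and numbers below are illustrative, not from the patent, and in practice the accident-rate and security screening described above would precede this scoring:

```python
def choose_strategy(metrics, w_delay=0.5, w_energy=0.5):
    # metrics: strategy name -> (delay, energy); the lower weighted cost wins.
    return min(metrics, key=lambda s: w_delay * metrics[s][0] + w_energy * metrics[s][1])

metrics = {
    "whole-upload-distributed-offload": (12.0, 8.0),   # illustrative values
    "distributed-upload-no-offload":    (10.0, 11.0),
}
print(choose_strategy(metrics))  # balanced weights favor the first strategy
```

Shifting the weights changes the winner: with `w_delay=1.0, w_energy=0.0` the delay-lighter distributed-upload strategy is chosen instead.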
In more detail, in step S32, delay and energy consumption are computed and analyzed for the two upload-and-offload strategies under the determined offload-ratio configuration.
Further, for the overall uploading and decentralized unloading strategy, when the edge server processes the task, the time delay includes user uploading task time delay, edge server processing self task time delay, time delay from the edge server unloading small task to the idle device, and various queuing time delays.
In the present invention, the transmission rate of user p_{i,j,p} is denoted r_{i,j,p}, and for the edge server side that needs to issue tasks, the rate of edge server p_{i,1,s} is denoted r_{i,1,s}. Here p_{i,j,p} denotes the j-th mobile user in the i-th area, with p indicating the personal side, and p_{i,1,s} denotes the unique edge server in the i-th area, with s indicating the server side. When queuing delay is not considered, the delay at the edge server side is Time_1:
[Equation image in original: Time_1, edge server side delay without queuing]
where the first term on the right-hand side is the upload delay of the whole task at the edge server side, the second term is the processing delay of the offloaded task on the edge server, the third term is the offloading delay of the remaining tasks, and the symbol shown as an image in the original denotes the execution frequency of the i-th edge server.
In addition, queuing delay is an indispensable part of the delay analysis, so an M/M/1 model is constructed based on queuing theory, where MI is the task size, V is the transmission speed of the router, and ZC is the utilization rate of the router:
[Equation image in original: M/M/1 queuing delay]
Analysis of the formula shows that the queuing delay is directly proportional to the size of the task packet: when a task is small, the queuing delay decreases, so splitting the task for computational distribution helps reduce queuing delay. The queuing delay is also related to the transmission speed and the utilization rate of the router.
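As an illustration of this proportionality, the M/M/1 queuing delay can be sketched as below. The functional form, with service time MI/V and the standard M/M/1 waiting time, is an assumption consistent with the stated dependencies (proportional to task size, increasing in router utilization); the function name and parameter names are illustrative, not taken from the patent.

```python
def mm1_queuing_delay(task_size, link_speed, utilization):
    """Mean M/M/1 queuing (waiting) delay for a task packet.

    task_size   -- MI, size of the task packet (e.g. bits)
    task_size   -- MI, size of the task packet
    link_speed  -- V, transmission speed of the router
    utilization -- ZC, utilization rate of the router, 0 <= ZC < 1

    Assumed form: service time is task_size / link_speed, and the
    standard M/M/1 waiting time is W_q = rho / (mu * (1 - rho)).
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must lie in [0, 1)")
    service_time = task_size / link_speed        # 1 / mu
    return service_time * utilization / (1.0 - utilization)
```

Under this form, halving the task size halves the queuing delay, which is why splitting the task before distribution helps, and the delay grows sharply as the router utilization approaches 1.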
According to the network offloading model provided by the invention, queuing delay can occur in three places: queuing delay when a user submits an offloading task (delay one), queuing delay when an idle resource returns a processed small task (delay two), and queuing delay when the task is issued (delay three). Delay one and delay three occur at the edge server side; delay two occurs at the idle device side. Finally, compared with the requesting user uploading the split tasks in parallel, uploading the large task as a whole saves a small amount of delay, denoted Time_s. The delay of the overall-upload decentralized-offload strategy at the edge server side is therefore:
[Equation image in original: edge server side delay of the overall-upload decentralized-offload strategy]
where the first symbol shown as an image denotes the queuing delay when the user submits the offloading task, and the second denotes the queuing delay when the edge server issues the tasks.
For the overall-upload decentralized-offload strategy, once an idle device receives an issued small task, the delay has only two stages: processing the task and returning the result. When multiple small tasks run in parallel, the overall delay equals the maximum individual delay, so only one idle device at the idle side needs to be analyzed. For ease of analysis and calculation, the delay of a single mobile user side, T_personal, is taken as the mean delay (the symbols shown as images in the original denote this mean value). In addition, there is a queuing delay, namely the queuing delay for returning the result after the small task is processed. Finally, the delay at the idle device side is:
[Equation image in original: idle device side delay of the overall-upload decentralized-offload strategy]
where the symbol shown as an image denotes the queuing delay when the idle resource returns the processed small task.
It should be noted that, for the overall-upload decentralized-offload strategy, multitask parallelism theoretically reduces the total delay when queuing delay is ignored, but in practice queuing delay cannot be omitted. This strategy adds more queuing delay than before, so whether its delay improves on the original offloading strategy is uncertain. Since queuing delay is proportional to task size and decreases for smaller tasks, the decentralized-upload no-offload strategy, which uploads the tasks in a decentralized manner, can reduce the size of the task packets, and its queuing delays for issuing and returning tasks therefore differ from those of the overall-upload decentralized-offload strategy.
For the decentralized-upload no-offload strategy, the delay at the edge server side consists of the user upload delay, the edge server processing delay, and the queuing delay when the user submits the offloading task. Finally, the delay of the edge server is:
[Equation image in original: edge server side delay of the decentralized-upload no-offload strategy]
However, the decentralized-upload no-offload strategy introduces a fourth queuing delay, namely the queuing delay at the uploading user side, plus the sum of the upload delays of the subtasks uploaded directly to each idle device. Let r_{i,j,u} denote the file transfer rate between users. Finally, the delay at the idle device side is:
[Equation image in original: idle device side delay of the decentralized-upload no-offload strategy]
It can be seen that the decentralized-upload no-offload strategy removes a large amount of delay at the edge server side; at the idle device side, although the queuing delay for returning the processed small tasks is optimized away, the fourth queuing delay is introduced and the sum of the upload delays of the n-1 small tasks is added.
Once the task decision is given, the edge server processes the task simultaneously with a plurality of mobile idle devices. In practice the idle devices process tasks at different speeds, so the mean of their processing frequencies is used in the discussion; the processing delays of the mobile idle devices are then identical. With all devices running simultaneously, the processing delay of the offloaded task is finally:
[Equation image in original: processing delay of the offloaded task]
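The parallel-processing argument above can be sketched as follows. The even split across devices and the use of a single mean processing frequency follow the text; the function name and units are illustrative assumptions.

```python
def parallel_processing_delay(offloaded_size, n_devices, mean_freq):
    """Processing delay when an offloaded task share is split evenly over
    n idle devices that all run at the same mean processing frequency.

    Because every device carries the same load at the same frequency, all
    per-device delays are equal, and with the devices running in parallel
    the overall delay is the maximum of them, i.e. that of any single one.
    """
    if n_devices < 1 or mean_freq <= 0:
        raise ValueError("need at least one device and a positive frequency")
    per_device_delay = (offloaded_size / n_devices) / mean_freq
    return per_device_delay
```

Doubling the number of idle devices halves the processing delay of the offloaded share, which is the benefit of cooperative offloading captured by this formula.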
in an alternative embodiment of the present invention, the overall upload decentralized offload policy is referred to as policy one (Strategy1), the decentralized upload no offload policy is referred to as policy two (Strategy2), and for the two proposed policiesThe policy and the original queuing theory policy are subjected to experimental simulation in the unloading time delay aspect to obtain a time delay curve as shown in fig. 3, and as can be seen from fig. 3, when most of the unloading tasks are at the edge server end, the policy two is superior to the policy one and the original queuing theory unloading policy in time delay, and the unloading proportion alpha at the edge server end bs The optimal time delay can be obtained when the value of (2) is 0.7.
In terms of delay, the decentralized-upload no-offload strategy is therefore better than the overall-upload decentralized-offload strategy. Task offloading, however, must consider not only delay but also energy consumption. In the energy aspect, the delay optimization simultaneously affects energy consumption, since it removes several energy-consuming steps; analysis thus shows that the decentralized-upload no-offload strategy is also better than the overall-upload decentralized-offload strategy in energy consumption.
Building on the delay analysis, the energy consumption of the decentralized-upload no-offload strategy is analyzed. For this strategy, the energy consumption divides into the energy e_1 for transmitting tasks from the user to the edge server side and the idle device side, the energy e_2 required to process the tasks, the queuing energy e_3 when the user submits the offloading task, and the upload energy e_4 for uploading the subtasks directly to each idle device. The total energy consumption E is expressed as E = e_1 + e_2 + e_3 + e_4,
where (equation shown as an image in the original) p_station denotes the task-processing power at the edge server side and p_personal denotes the task-processing power at the idle device side.
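A minimal sketch of the four-term energy sum E = e_1 + e_2 + e_3 + e_4 follows, modelling each term as power multiplied by time. All parameter names other than p_station and p_personal are illustrative assumptions, and the queuing energy is passed in directly since its form is shown only as an image in the original.

```python
def strategy2_total_energy(p_tx, main_upload_time,
                           p_station, server_proc_time,
                           p_personal, device_proc_time,
                           queue_energy, subtask_upload_times):
    """Total energy E = e1 + e2 + e3 + e4 for the decentralized-upload
    no-offload strategy (hedged power * time model)."""
    e1 = p_tx * main_upload_time                  # transmit main task to server
    e2 = (p_station * server_proc_time
          + p_personal * device_proc_time)        # processing on both sides
    e3 = queue_energy                             # queuing on task submission
    e4 = p_tx * sum(subtask_upload_times)         # direct subtask uploads
    return e1 + e2 + e3 + e4
```

The e_4 term grows with the number of idle devices, which is the energy-side price this strategy pays for its delay savings.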
From the standpoint of delay and energy consumption, the decentralized-upload no-offload strategy is clearly superior to the overall-upload decentralized-offload strategy. In the task upload process, however, the overall-upload decentralized-offload strategy packs the task as a whole and transmits it to the edge server side, whereas the decentralized-upload no-offload strategy transmits the small tasks directly to the rented idle devices. Compared with the overall-upload decentralized-offload strategy, the decentralized-upload no-offload strategy has the following drawbacks. First, its complexity is higher: the requesting user must communicate with n idle devices and users. Second, the decentralized transmission of many small tasks raises the accident rate, with missed or repeated transmissions occurring. Finally, security and privacy cannot be well guaranteed: although the leasing agreement includes a confidentiality protocol and is updated in real time, information exchange in the mobile blockchain is delayed, and once the task is offloaded out of the leasing user's hands, security and privacy cannot be assured.
Therefore, in step S33, the delay, energy consumption, accident rate and security must be compared and analyzed comprehensively, and one of the overall-upload decentralized-offload strategy and the decentralized-upload no-offload strategy selected as the upload-and-offload strategy for the offloading task.
In detail, after the offloading proportion configuration and the upload-and-offload strategy are determined, in step S4 the offloading task is offloaded through the network offloading model according to that configuration and strategy, and peripheral idle devices are invoked to achieve cooperative offloading and computational distribution of the task.
In detail, the step S5 of constructing a utility function of the offloading task and optimizing the utility function to improve the offloading efficiency of the offloading task further includes:
s51, constructing a utility function of the unloading task based on time delay and energy consumption;
and S52, optimizing the utility function by adopting an epsilon-greedy algorithm to improve the numerical value of the utility function, and further improving the unloading efficiency of the unloading task.
In more detail, in step S51, the delay and energy consumption of the offloading task are analyzed and calculated according to the determined offloading proportion configuration and upload-and-offload strategy, and a utility function of the offloading task is constructed from the delay and energy consumption:
[Equation image in original: utility function G]
where G is the utility function, ε_1 and ε_2 are balancing factors, T_begin and E_begin denote the delay and energy consumption required to complete the offloading task under the original queuing-theory strategy, and the remaining symbols (shown as images in the original) denote the delay and energy consumption required to complete it under the chosen strategy, i.e. one of the overall-upload decentralized-offload strategy and the decentralized-upload no-offload strategy.
For the decentralized-upload no-offload strategy, the smaller task packets optimize the queuing delay, but the extra offloading steps increase the energy consumption correspondingly, so the utility function is negative in its delay part and positive in its energy part. The utility value is therefore called a balance-point value: the larger it is, the better the performance obtained at lower cost, for example accepting a small amount of extra energy consumption to obtain a lower delay, or giving up the optimal delay to reduce energy consumption.
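One way to sketch such a balance-point utility is below. The patent's exact formula appears only as an image and the sign convention in the translated text is ambiguous, so the normalised savings-positive form here, in which delay saved and energy saved relative to the baseline each contribute positively, is an assumption; the function and parameter names are illustrative.

```python
def balance_point_utility(t_begin, e_begin, t_new, e_new,
                          eps1=0.5, eps2=0.5):
    """Hedged sketch of the utility G: relative delay saved plus relative
    energy saved versus the original queuing-theory baseline, weighted by
    the balancing factors eps1 and eps2 (form assumed, not from the patent).
    """
    delay_term = eps1 * (t_begin - t_new) / t_begin
    energy_term = eps2 * (e_begin - e_new) / e_begin
    return delay_term + energy_term
```

Under this form, trading a 20% delay reduction for a 20% energy increase yields G = 0 at equal weights; a candidate strategy is worthwhile when its G is positive.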
Based on this, in more detail, in step S52, an ε-greedy algorithm is used to optimize the utility function and increase its value, thereby improving the offloading efficiency of the task and obtaining better performance at lower cost. The ε-greedy algorithm involves two layers of optimization, at the edge server and at the user, so the user's offloading cost and experience can be considered together, improving the cost-effectiveness of the offloading task.
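A minimal ε-greedy selection step over candidate offloading configurations is sketched below. The candidate/estimate representation is an illustrative assumption; in the patent's setting the estimates would be the utility values G observed for each configuration.

```python
import random

def epsilon_greedy_select(utility_estimates, epsilon, rng=None):
    """One epsilon-greedy step: with probability epsilon pick a random
    candidate configuration (exploration), otherwise pick the candidate
    with the highest estimated utility (exploitation).

    utility_estimates -- dict mapping candidate -> estimated utility G
    """
    rng = rng or random.Random()
    candidates = list(utility_estimates)
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return max(candidates, key=utility_estimates.get)
```

With ε around 0.5, as the simulations below suggest, roughly half the steps explore and half exploit, balancing the user-side and provider-side updates.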
In an optional embodiment of the present invention, the utility functions of all offloading tasks in a region are simulated and tested on Python and MATLAB simulation platforms. Following a single-variable principle, the size of the offloading task is varied, the size being taken as the mean of all offloading tasks in the region.
In detail, the GBMS algorithm and a delay-based multi-node cooperative offloading algorithm are used as comparison algorithms in the experiments. The corresponding utility-function curves are shown in fig. 4: the ε-greedy algorithm outperforms the comparison algorithms, and its performance rises as the ε value decreases. In the GBMS algorithm, because the limit on the offloading proportion prevents all tasks from being offloaded to the edge server side during cooperative offloading at the MEC side, the offloading proportion is set to its upper bound; this algorithm does not consider user cost and performs poorly once a cost limit exists. The delay-based multi-node cooperative offloading algorithm is in fact a delay-optimal greedy algorithm that does not consider energy consumption, so once energy consumption is included in the model its performance falls below that of the ε-greedy algorithm. When the ε-greedy algorithm is used, an ε value of about 0.5 is suitable: the performance is better than that of the other algorithms, and the utility values of users and providers are balanced. (Note: as stated earlier, the greedy value ratio between user and provider is 1:1, and ε is also this ratio in the algorithm; only around 0.5 are both user and provider satisfied.)
In an optional embodiment of the present invention, a simulation experiment on the resource-utilization function for executable offloading tasks in a region is performed on Python and MATLAB simulation platforms, where the number of areas A is 10, μ_1 is 0.8, the edge server memory C_station is 1000 to 2000 GB, the memory available to a user C_personal is 10 to 20 GB, and each edge server covers 50 to 100 idle devices. Following a single-variable principle, the sum Q of the offloaded tasks in the region is varied, and the corresponding resource-utilization curves are shown in fig. 5. As can be seen from fig. 5, with the available resources of the edge server and the idle side in the region held constant, the upper limit of resource utilization is about 0.45 when the mobile blockchain is not used to call peripheral idle resources for computational distribution, whereas after idle devices are called the system's resource utilization reaches about 0.87. The computational distribution algorithm for offloading tasks that applies mobile blockchain technology therefore improves the resource utilization of the whole system.
Meanwhile, as shown in fig. 6, based on the same inventive concept as the above method, the present invention further provides an unloading system for unloading tasks in edge computing, which includes:
the model building module is used for building a network unloading model based on the mobile block chain;
the task analysis module is used for analyzing and determining unloading proportion configuration and an uploading and unloading strategy of the unloading task in the network unloading model;
the task execution module is used for executing the unloading task;
and the task optimization module is used for optimizing the unloading efficiency of the unloading task.
In detail, as shown in fig. 6, the model building module, the task analysis module and the task execution module are connected in sequence, and the task optimization module is connected between the task analysis module and the task execution module. The model building module builds a network offloading model based on the mobile blockchain; the model comprises an edge server and a plurality of devices connected to it, and the edge server and the peripheral idle devices perform cooperative offloading and computational distribution of the offloading task. The task analysis module analyzes and determines the offloading proportion configuration and the upload-and-offload strategy of the offloading task in the network offloading model; the task execution module executes the offloading task; and the task optimization module optimizes the offloading efficiency of the offloading task. The task analysis module and the task optimization module may use the same or different processors.
In more detail, as shown in fig. 6, the task optimization module includes a construction unit and an optimization unit, the construction unit is used for constructing a utility function of the unloading task, and the optimization unit is used for optimizing the utility function.
It should be noted that, since the technical principle of the system embodiment is similar to that of the method embodiment, repeated descriptions of the same technical details are not repeated.
In addition, based on the same inventive concept as the method, the invention also provides a computer-readable storage medium and an electronic terminal. The computer-readable storage medium stores a computer program which, when executed by a processor, implements any of the above methods. The electronic terminal comprises a processor and a memory, the memory being configured to store a computer program and the processor being configured to execute the stored computer program so that the electronic terminal performs any of the above methods.
In detail, those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be performed by hardware associated with a computer program. The computer program may be stored in the computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the computer-readable storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks or optical disks.
In detail, the electronic terminal comprises a processor, a memory, a transceiver and a communication interface; the memory and the communication interface are connected with the processor and the transceiver to realize mutual communication, the memory stores a computer program, the communication interface is used for communication, and the processor and the transceiver run the computer program so that the electronic terminal performs the steps of the above method for offloading tasks in edge computing.
In more detail, the memory may comprise Random Access Memory (RAM), Read Only Memory (ROM), and possibly non-volatile memory, such as at least one disk memory; the processor may be a general-purpose processor such as a Central Processing Unit (CPU), a Network Processor (NP), etc., or may be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
In summary, in the method, system, medium and electronic terminal for offloading tasks in edge computing provided by the present invention, a network offloading model is constructed based on a mobile blockchain and the offloading task is offloaded through that model. Applying the mobile blockchain to the offloading-task processing of mobile edge computing allows peripheral idle devices to be invoked effectively for cooperative offloading and computational distribution, improving both the offloading efficiency of the task and the resource utilization of the peripheral idle devices. In addition, a utility function of the offloading task is constructed and optimized, further improving offloading efficiency and obtaining better performance at lower energy consumption.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (11)

1. A method for unloading tasks in edge computing is characterized by comprising the following steps:
constructing a network unloading model based on the mobile block chain;
constructing an accident rate function and an unloading cost function of the network unloading model, and determining the unloading proportion configuration of the unloading task according to the accident rate function and the unloading cost function;
comprehensively analyzing the requirements of time delay, energy consumption, accident rate and safety based on the unloading proportion configuration, and determining an uploading and unloading strategy of the unloading task in the network unloading model;
according to the unloading proportion configuration and the uploading and unloading strategy, unloading the unloading task through the network unloading model;
and constructing a utility function of the unloading task, and optimizing the utility function to improve the unloading efficiency of the unloading task.
2. The method for offloading task in edge computing according to claim 1, wherein the step of constructing a network offloading model based on a mobile block chain includes:
covering a preset area by taking an edge server as a center to form an alliance mobile block chain, wherein the alliance mobile block chain is controlled by the edge server, all devices covered by the edge server achieve consensus and follow an alliance mobile block chain protocol together; the edge server and the plurality of devices in the federation mobility block chain constitute the network offload model.
3. The method for offloading task in edge computing according to claim 2, wherein a block of the federation mobile block chain is divided into a header field, an information field, and a transaction field, the header field being used for storing a hash value, a computational difficulty, a timestamp, and a nonce value, the information field being used for storing a computational memory of the edge server and a current state of the device, and the transaction field being used for storing transaction information.
4. The method for offloading task in edge computing according to claim 3, wherein the step of constructing an accident rate function and an offloading cost function of the network offloading model, and determining an offloading proportion configuration of the offloading task according to the accident rate function and the offloading cost function comprises:
constructing an accident rate function and an unloading cost function of the network unloading model, and determining a constraint condition of the unloading proportion of the edge server according to the accident rate function and the unloading cost function;
and optimizing and determining the unloading proportion of the edge server by adopting a convex optimization algorithm, determining the number of idle equipment required for cooperating the unloading task, and then calculating the unloading proportion of the idle equipment to complete the unloading proportion configuration of the unloading task.
5. The method for offloading task in edge computing according to claim 1 or 4, wherein the step of determining an offloading policy for offloading task in the network offloading model by comprehensively analyzing requirements of latency, energy consumption, accident rate, and security based on the offloading proportion configuration comprises:
based on the network unloading model, providing an integral uploading dispersion unloading strategy and a dispersion uploading non-unloading strategy;
calculating the time delay and energy consumption of the overall uploading of the scattered unloading strategy and the time delay and energy consumption of the scattered uploading non-unloading strategy based on the unloading proportion configuration;
and comprehensively comparing and analyzing the requirements of time delay, energy consumption, accident rate and safety, and selecting one of the overall uploading dispersed unloading strategy and the dispersed uploading non-unloading strategy as the uploading and unloading strategy of the unloading task.
6. The method for offloading task in edge computing according to claim 5, wherein the step of constructing a utility function of the offloading task and optimizing the utility function to improve offloading efficiency of the offloading task comprises:
constructing a utility function of the unloading task based on the time delay and the energy consumption;
and optimizing the utility function by adopting an epsilon-greedy algorithm to improve the numerical value of the utility function, thereby improving the unloading efficiency of the unloading task.
7. The method for offloading tasks in edge computing according to claim 6, wherein the utility function of the offloading task is:
[Equation image in original: utility function G]
wherein G is the utility function, ε_1 and ε_2 are balancing factors, T_begin and E_begin denote the delay and energy consumption required to complete the offloading task under the original queuing-theory strategy, and the remaining symbols (shown as images in the original) denote the delay and energy consumption required to complete it under the chosen one of the overall-upload decentralized-offload strategy and the decentralized-upload no-offload strategy.
8. An offloading system for offloading tasks in edge computing, comprising:
the model building module is used for building a network unloading model based on the mobile block chain;
the task analysis module is used for analyzing and determining unloading proportion configuration and an uploading and unloading strategy of the unloading task in the network unloading model;
the task execution module is used for executing the unloading task;
and the task optimization module is used for optimizing the unloading efficiency of the unloading task.
9. The system for offloading tasks in edge computing as recited in claim 8, wherein the task optimization module comprises a construction unit and an optimization unit, the construction unit is configured to construct a utility function for the offloading task, and the optimization unit is configured to optimize the utility function.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 7.
11. An electronic terminal, comprising: a processor and a memory;
the memory is configured to store a computer program and the processor is configured to execute the computer program stored by the memory to cause the electronic terminal to perform the method according to any of claims 1 to 7.
CN202210483052.3A 2022-05-05 2022-05-05 Method, system, medium and electronic terminal for unloading task in edge computing Pending CN114880044A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210483052.3A CN114880044A (en) 2022-05-05 2022-05-05 Method, system, medium and electronic terminal for unloading task in edge computing

Publications (1)

Publication Number Publication Date
CN114880044A true CN114880044A (en) 2022-08-09

Family

ID=82673057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210483052.3A Pending CN114880044A (en) 2022-05-05 2022-05-05 Method, system, medium and electronic terminal for unloading task in edge computing

Country Status (1)

Country Link
CN (1) CN114880044A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
CN110798849A (en) * 2019-10-10 2020-02-14 西北工业大学 Computing resource allocation and task unloading method for ultra-dense network edge computing
CN110928678A (en) * 2020-01-20 2020-03-27 西北工业大学 Block chain system resource allocation method based on mobile edge calculation
CN112115505A (en) * 2020-08-07 2020-12-22 北京工业大学 New energy automobile charging station charging data transmission method based on mobile edge calculation and block chain technology
KR20210069588A (en) * 2019-12-03 2021-06-11 경희대학교 산학협력단 Method for task offloading in mobile edge compuing system using the unmanned aerial vehicles and mobile edge compuing system using the same and unmmanned aerial vehicles thereof
CN113504987A (en) * 2021-06-30 2021-10-15 广州大学 Mobile edge computing task unloading method and device based on transfer learning
WO2022027776A1 (en) * 2020-08-03 2022-02-10 威胜信息技术股份有限公司 Edge computing network task scheduling and resource allocation method and edge computing system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHAO SJ et al.: "Computational Resource Allocation Strategy in a Public Blockchain Supported by Edge Computing", Wireless Communications and Mobile Computing, 15 April 2021 (2021-04-15) *
WANG Ren; WANG Yi; HU Yanjun; JIANG Fang; XU Yaohua: "Cooperative offloading for overloaded MEC servers in ultra-dense heterogeneous networks", Journal of Xidian University, no. 02, 31 December 2020 (2020-12-31) *
WANG Yi et al.: "Apply auction optimization algorithm to mobile edge computing for security", IET Communications, 9 November 2021 (2021-11-09) *
XUE Jianbin; AN Yaning: "A novel task offloading and resource allocation strategy based on edge computing", Computer Engineering and Science, no. 06, 15 June 2020 (2020-06-15) *

Similar Documents

Publication Publication Date Title
Xu et al. A heuristic offloading method for deep learning edge services in 5G networks
Xu et al. Multiobjective computation offloading for workflow management in cloudlet‐based mobile cloud using NSGA‐II
Lo'ai et al. A mobile cloud computing model using the cloudlet scheme for big data applications
CN107295109A (en) Task unloading and power distribution joint decision method in self-organizing network cloud computing
Ye et al. A framework for QoS and power management in a service cloud environment with mobile devices
Hoseiny et al. Using the power of two choices for real-time task scheduling in fog-cloud computing
CN113254095B (en) Task offloading, scheduling and load balancing system and method for a cloud-edge combined platform
Li et al. Resource scheduling based on improved spectral clustering algorithm in edge computing
da Silva et al. Location of fog nodes for reduction of energy consumption of end-user devices
Li et al. Computation offloading and service allocation in mobile edge computing
Hu et al. Deephome: Distributed inference with heterogeneous devices in the edge
Abouaomar et al. Users-Fogs association within a cache context in 5G networks: Coalition game model
Zhang et al. Enhanced adaptive cloudlet placement approach for mobile application on spark
Chen et al. An edge server placement algorithm in edge computing environment
Huang et al. Computation offloading for multimedia workflows with deadline constraints in cloudlet-based mobile cloud
Hao et al. A risk-sensitive task offloading strategy for edge computing in industrial Internet of Things
CN117579701A (en) Computation offloading method and system for mobile edge networks
Huang et al. Cost-aware resource management based on market pricing mechanisms in edge federation environments
Ma Edge server placement for service offloading in internet of things
CN114880044A (en) Method, system, medium and electronic terminal for unloading task in edge computing
CN114143317B (en) Multi-priority computation offloading strategy optimization method for cross-cloud-layer mobile edge computing
Shi et al. Workflow migration in uncertain edge computing environments based on interval many-objective evolutionary algorithm
Gao et al. Construction of an Intelligent APP for dance training mobile information management platform based on edge computing
Cao et al. Delay sensitive large-scale parked vehicular computing via software defined blockchain
Tang et al. To cloud or not to cloud: an on-line scheduler for dynamic privacy-protection of deep learning workload on edge devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination