CN113572804A - Task unloading system, method and device based on edge cooperation - Google Patents

Task unloading system, method and device based on edge cooperation

Info

Publication number
CN113572804A
CN113572804A (application CN202110469402.6A)
Authority
CN
China
Prior art keywords
task
terminal
neural network
edge
unloading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110469402.6A
Other languages
Chinese (zh)
Other versions
CN113572804B (en)
Inventor
刘通
姜海涛
刘宇
李波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Vocational Institute of Engineering
Original Assignee
Chongqing Vocational Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Vocational Institute of Engineering filed Critical Chongqing Vocational Institute of Engineering
Priority to CN202110469402.6A priority Critical patent/CN113572804B/en
Publication of CN113572804A publication Critical patent/CN113572804A/en
Application granted granted Critical
Publication of CN113572804B publication Critical patent/CN113572804B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention relates to the technical field of computer communication, and in particular to a task unloading system, method and device based on edge cooperation. The task unloading system based on edge cooperation comprises: a digital twin layer, used for acquiring operation data of the terminal and the edge nodes and training a neural network model according to the acquired operation data, wherein the neural network model is used for determining a task unloading scheme; and a terminal, used for unloading a computing task to at least one edge node according to the determined task unloading scheme. In the task unloading system based on edge cooperation provided by the embodiment of the invention, the digital twin layer acquires the operation data of the terminal and the edge nodes and uses it to train the neural network model. When the terminal needs to unload a task, the terminal or the digital twin layer determines a task unloading scheme according to the parameters of the trained neural network model, and the terminal distributes and executes the computing task according to the determined scheme.

Description

Task unloading system, method and device based on edge cooperation
Technical Field
The invention relates to the technical field of computer communication, in particular to a task unloading system, method and device based on edge cooperation.
Background
New services in 6G will be characterized by low latency, and mobile edge computing is regarded as an effective way of guaranteeing low-latency services. By unloading tasks to an edge server for computation, a mobile terminal can effectively reduce network delay. Since the task unloading scheme has a great influence on the performance of the edge system, designing an optimized, energy-saving, low-delay task unloading scheme has become an important research direction. In addition, a large number of studies have shown that processing tasks cooperatively among edge nodes not only realizes efficient utilization of network resources but also significantly reduces task processing time.
However, when a terminal node unloads a task to an edge server, it needs to comprehensively consider factors such as the data amount of the task, the channel state, the distribution of edge servers and the available resources. Because the environment of the mobile terminal is complex and variable, some researchers have tried to use artificial intelligence algorithms to help the mobile terminal perform task unloading.
Schemes in which artificial intelligence assists the mobile terminal with task unloading have been widely studied in the literature, with good results. However, the weak computing and storage capabilities of the mobile terminal may leave the artificial intelligence algorithm insufficiently trained, making its predictive analysis inaccurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a task offloading system, method and apparatus based on edge collaboration.
The embodiment of the invention is realized in such a way that the task unloading system based on edge cooperation comprises:
the digital twin layer is used for acquiring operation data of the terminal and the edge node, and training a neural network model according to the acquired operation data, wherein the neural network model is used for determining a task unloading scheme; and
the terminal is used for unloading the computing task to at least one edge node according to the determined task unloading scheme.
In one embodiment, the present invention provides an edge-based collaboration task offloading method, which is applied to the digital twin layer described in the above embodiments, and the edge-based collaboration task offloading method includes the following steps:
acquiring operation data of a terminal and an edge node;
training a neural network model according to the acquired operation data;
and issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to a terminal for execution.
In one embodiment, the present invention provides a task offloading method based on edge collaboration, which is applied to the terminal described in the above embodiment, and the task offloading method based on edge collaboration includes the following steps:
uploading operating data to a digital twin layer;
obtaining a neural network model parameter obtained by training the digital twin layer;
solving a task unloading scheme according to the neural network model parameters;
proportionally unloading the computing task to at least one edge node according to the task unloading scheme;
and acquiring a calculation result returned by the edge node.
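The five terminal-side steps above can be sketched as a thin client loop. All names here are hypothetical placeholders, not part of the patent; transport and model details are abstracted behind callables:

```python
def terminal_offload(upload, fetch_params, solve, offload, collect):
    """Terminal-side flow: upload operating data to the digital twin layer,
    fetch the trained neural-network parameters, solve an unloading scheme
    (alpha, betas), offload proportionally to edge nodes, and collect results.
    """
    upload()                          # step 1: upload operating data to the DT layer
    params = fetch_params()           # step 2: obtain trained model parameters
    alpha, betas = solve(params)      # step 3: solve the task unloading scheme
    handles = offload(alpha, betas)   # step 4: unload to at least one edge node
    return collect(handles)           # step 5: gather returned computation results
```

Each callable can be backed by whatever RPC or messaging layer the deployment actually uses; the loop only fixes the ordering of the five steps.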
In one embodiment, the present invention provides an edge-based collaboration task offloading device applied to the digital twin layer as described in the above embodiments, including:
the acquisition module is used for acquiring the operation data of the terminal and the edge node;
the training module is used for training the neural network model according to the acquired operation data;
and the issuing module is used for issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to the terminal for execution.
In one embodiment, the present invention provides an edge-cooperation-based task offloading device, which is applied to a terminal according to the foregoing embodiment, and the edge-cooperation-based task offloading device includes:
the uploading module is used for uploading the operation data to the digital twin layer;
the parameter acquisition module is used for acquiring neural network model parameters obtained by the training of the digital twin layer;
the calculation module is used for calculating a task unloading scheme according to the neural network model parameters;
the unloading module is used for unloading the calculation task to at least one edge node according to the task unloading scheme in proportion;
and the result acquisition module is used for acquiring the calculation result returned by the edge node.
The task unloading system based on edge cooperation comprises a digital twin layer and a terminal. The digital twin layer trains a neural network model in real time by acquiring the running data of the edge nodes and the terminal; in this way, the data used for training the neural network model is derived directly from the real running data of the network and is closest to the current network condition, so the unloading scheme obtained from the parameters of the trained neural network model fits the current network condition better, which improves the effectiveness of the unloading scheme. When the terminal needs to execute task unloading, the digital twin layer or the terminal solves the optimal task unloading scheme using the trained neural network model parameters, unloads the computing task to at least one edge node for computation according to the optimal scheme, and acquires the computation result. Placing the training process of the neural network model on the digital twin layer reduces the requirements on the storage and computing capacity of the terminal.
Drawings
FIG. 1 is a block diagram of a task offload system based on edge collaboration provided in one embodiment;
FIG. 2 is a graph comparing power consumption for executing tasks with and without DT assistance in one embodiment;
FIG. 3 is a diagram illustrating a comparison of delays in executing tasks with or without DT assistance in one embodiment;
FIG. 4 is a graph comparing power consumption for different offloading schemes in one embodiment;
FIG. 5 is a graph comparing network latencies for different offloading schemes in one embodiment;
FIG. 6 is a graph comparing cost functions for different offloading schemes in one embodiment;
FIG. 7 is a flow diagram that illustrates a method for task offloading based on edge collaboration, as provided in one embodiment;
FIG. 8 is a flowchart of a method for task offloading based on edge collaboration as provided in another embodiment;
FIG. 9 is a block diagram of an edge collaboration-based task offload device provided in one embodiment;
FIG. 10 is a block diagram illustrating an exemplary task offload device based on edge collaboration as provided in another embodiment;
FIG. 11 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present disclosure.
Fig. 1 is a block diagram of an edge-collaboration-based task offloading system provided in an embodiment, and as shown in fig. 1, the edge-collaboration-based task offloading system includes:
the digital twin layer is used for acquiring operation data of the terminal and the edge node, and training a neural network model according to the acquired operation data, wherein the neural network model is used for determining a task unloading scheme; and
the terminal is used for unloading the computing task to at least one edge node according to the determined task unloading scheme.
In the embodiment of the present invention, the structure of the Digital Twin Edge Network (DTEN) is shown in fig. 1. A macro base station, micro base stations, mobile terminals (ME) and the edge servers (ES) installed in each base station together form the physical entity layer. Each of these network elements has a corresponding twin map, such as the twin macro base station, twin micro base station, twin ME and twin ES. Meanwhile, the communication environment also has a corresponding twin environment in the twin space, and these twin data together form the digital twin layer. An ME in the digital twin layer is represented as $\widetilde{ME}$, and a twin ES as $\widetilde{ES}$. A real-time data channel exists between the physical entity layer and the digital twin layer, and each network element in the physical entity layer sends its current operating state to the DT in real time. The DT system stores the historical operation data of the network entities, collects their current operation data, and monitors the overall operating condition of the network.
In the embodiment of the invention, the digital twin layer may be an independent physical server or terminal, a server cluster composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud servers, cloud databases, cloud storage and CDNs. As an optional specific implementation, the digital twin layer may be deployed on a communication base station; this configuration brings the digital twin layer closer to the terminal, shortens the data transmission distance, and lets one server cover a certain area, dedicated to network simulation and neural network model training within that area, making the training of the neural network model more targeted. This is merely an optional implementation; the actual arrangement may be chosen according to the actual situation, and the embodiment of the present invention is not specifically limited in this respect.
In the embodiment of the invention, the digital twin layer acquires the operation data of the terminals and edge nodes in the network in real time. The operation data includes, but is not limited to, important data such as the wireless communication environment between the terminal and the edge node, the currently remaining computing and storage resources of the edge node, and the remaining battery power of the terminal; data such as the brand and size specification of the device need not be sent to the digital twin layer, so as to save communication and storage resources. In the embodiment of the invention, the data used for training the neural network model are the historical operation data and the latest current operation data of the terminals and edge nodes, so the parameters of the neural network model can be considered to be in a process of dynamic adjustment, to better match the current network state.
In the embodiment of the invention, the neural network model is trained to obtain a series of parameters, the task unloading scheme can be solved from the constructed task unloading model by using the parameters, and the solving process can be arranged on a digital twin layer or a terminal.
In the embodiment of the present invention, the terminal may be any device with computing task requirements, such as a smartphone, tablet computer, notebook computer, desktop computer, smart speaker or smart watch, but is not limited thereto. The terminal, the edge nodes and the digital twin layer are connected to each other through a network. In a real network there are numerous terminals; preferably, the digital twin layer corresponds to a plurality of terminals, which may be grouped by type or by region, for example, as a specific optional implementation. In the embodiment of the present invention, an edge node is mainly an edge server with computing capability, whose specific structure, type and installation position can be determined according to actual needs; for example, the edge node may be installed on a base station.
In the embodiment of the invention, after the terminal acquires the task unloading scheme, the computing task is proportionally distributed to the local device and at least one edge node according to the task unloading scheme, and the computation results returned by the edge nodes are then collected. Distributing the computing task in this way makes full use of the computing resources in the network and increases the computation speed. Specifically, let the number of edge servers in the network be $a$, forming the edge server set $A = \{1, 2, \ldots, a\}$. The task of the terminal is expressed as $M \triangleq \{D_n, C_n, T_{th}\}$, where $D_n$ is the data size of the task, $C_n$ is the number of CPU cycles required to perform the task, and $T_{th}$ is the maximum tolerated delay for performing the task. The task can be divided into $n$ subtasks $M_i$, $i = 1, 2, \ldots, n$, where $D_i^M$ is the data amount of subtask $M_i$, satisfying $\sum_i D_i^M = D_n$; $C_i^M$ is the number of CPU cycles of the $i$-th subtask, satisfying $\sum_i C_i^M = C_n$; and $T_i^{th}$ is the maximum tolerated delay for executing the subtask, satisfying $T_i^{th} \le T_{th}$. The task assigned to be executed locally at the terminal is expressed as $M_l$, and the proportion of the total task assigned locally is $\alpha$. $\beta_i$ is the proportion of the task amount assigned to the $i$-th edge server, satisfying $0 \le \beta_i \le 1$, $i = 1, 2, \ldots, a$, and $\alpha + \sum_{i=1}^{a} \beta_i = 1$.
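As a minimal illustration of the partition constraints above (the function name and tolerance are assumptions, not part of the patent):

```python
def partition_task(D_n, alpha, betas, tol=1e-9):
    """Split a task of data size D_n into a local share and per-ES shares.

    alpha is the proportion executed locally; betas[i] the proportion sent
    to the i-th edge server. A valid scheme satisfies 0 <= beta_i <= 1
    and alpha + sum(beta_i) == 1.
    """
    if not (0.0 <= alpha <= 1.0) or any(not 0.0 <= b <= 1.0 for b in betas):
        raise ValueError("proportions must lie in [0, 1]")
    if abs(alpha + sum(betas) - 1.0) > tol:
        raise ValueError("alpha + sum(beta_i) must equal 1")
    local = alpha * D_n           # data amount computed on the terminal
    remote = [b * D_n for b in betas]  # data amounts offloaded per ES
    return local, remote
```

The helper only validates and applies the proportions; choosing good values of $\alpha$ and $\beta_i$ is what the trained neural network model is for.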
The task unloading system based on edge cooperation comprises a digital twin layer and a terminal. The digital twin layer trains a neural network model in real time by acquiring the running data of the edge nodes and the terminal; in this way, the data used for training the neural network model is derived directly from the real running data of the network and is closest to the current network condition, so the unloading scheme obtained from the parameters of the trained neural network model fits the current network condition better, which improves the effectiveness of the unloading scheme. When the terminal needs to execute task unloading, the digital twin layer or the terminal solves the optimal task unloading scheme using the trained neural network model parameters, unloads the computing task to at least one edge node for computation according to the optimal scheme, and acquires the computation result. Placing the training process of the neural network model on the digital twin layer reduces the requirements on the storage and computing capacity of the terminal.
In one embodiment of the invention, the digital twin layer comprises an acquisition module and a training solution module;
the acquisition module is used for acquiring the operation data of the terminal and the edge node;
the training solving module is used for training a neural network model according to the operation data acquired by the acquiring module so as to solve an optimal task unloading scheme.
In the embodiment of the present invention, it should be understood that the acquisition module may acquire the operation data actively or passively. Active acquisition means sending an operation data acquisition request to the terminals or edge servers within the area, so that the terminal or edge server uploads its operation data in response to the request. Passive reception usually means that the terminal or edge server actively uploads its operation data, and the uploading may follow a certain rule, for example a time condition, or a change in the operation data as a trigger condition.
In the embodiment of the invention, the training solving module is used for training the neural network model, and the training data source is the operation data of the terminal and the edge server, which are acquired by the acquisition module, and comprises historical operation data and current operation data; the purpose of training the neural network model is to solve an optimal task offloading scheme. It should be noted here that the solution of the optimal task offloading scheme is performed on the training solution module.
In one embodiment of the invention, the digital twin layer or the terminal is provided with an unloading model, and the unloading model is used for describing a set of task unloading schemes.
In the embodiment of the invention, the unloading model is used for describing a set of task unloading schemes, the set of task unloading schemes is limited by a network space state, a terminal unloading action set and a network state transition probability, and neural network model parameters are obtained through training, namely, the neural network model parameters are used for determining the optimal unloading scheme from the task unloading scheme set.
In an embodiment of the present invention, the digital twin layer further includes a solving module, and the solving module is configured to solve the optimal task offloading scheme according to the trained neural network model.
In the embodiment of the invention, the resolving module is arranged on the digital twin layer, so the resolving process is carried out directly on the digital twin layer, and the terminal directly obtains and executes the task unloading scheme. Of course, as another optional specific implementation, the resolving module may be arranged on the terminal; the terminal then only needs to obtain the parameters of the neural network model from the digital twin layer and resolve the task unloading scheme according to those parameters.
In one embodiment of the invention, the uninstalling model is composed of a state space, an action space, a state transition probability and a reward function;
the state space is used for describing the connection state of the digital twin layer and the terminal;
the action space is used for describing possible task unloading modes of the terminal;
the state transition probability is used for describing the probability that the terminal executes any unloading mode in the state space so that the state space enters the next state from the current state;
the reward function is used for describing reward scores obtained by the terminal executing task unloading, and measuring factors comprise system power and network time delay; the reward function is a jackpot function arranged to increase the long-distance reward score.
In the embodiment of the present invention, $F_l$ is defined as the maximum CPU frequency of the ME, and the CPU frequency that the ME can assign to the subtask $M_l$ is $f_l$, satisfying $f_l \le F_l$. Let $\hat{f}_l$ denote the ME CPU frequency stored in the DT. The estimated time to execute the task locally is expressed as

$$\hat{T}_l = C_l / \hat{f}_l.$$

Let $\tilde{f}_l = \hat{f}_l - f_l$ be the gap between the value estimated by the DT and the actual value; the resulting difference between the estimated and the real task execution time can be expressed as

$$\Delta T_l = \frac{C_l\, \tilde{f}_l}{f_l \hat{f}_l},$$

so that the real time consumed by the local execution of the task is

$$T_l = \hat{T}_l + \Delta T_l = C_l / f_l.$$

Meanwhile, the power consumed by the ME to execute the task is expressed as

$$P_l = k_l C_l f_l^2,$$

where $k_l$ is an effective switched-capacitance coefficient depending on the hardware performance; in the present invention it is set to $k_l = 10^{-26}$.
The channel rate from the ME to the $i$-th ES is set as:

$$r_i = W_i \log_2\!\left(1 + \frac{p_i^t g_i}{N_i + I_i}\right),$$

where $W_i$ is the channel bandwidth, $p_i^t$ is the transmission power required for the ME to send the task to the $i$-th ES, $g_i$ is the channel coefficient of the wireless channel connecting the ME and $ES_i$, and $I_i$ and $N_i$ are the interference and noise of the link, respectively.
The transmission time required to send a subtask to $ES_i$ can then be expressed as:

$$T_i^t = D_i^M / r_i.$$
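The rate and transmission-time expressions above can be sketched as two small helpers; the function names are hypothetical, and units (Hz, W, bits, bits/s) are assumptions consistent with a plain Shannon-rate model:

```python
import math

def channel_rate(W_i, p_i, g_i, N_i, I_i):
    """Shannon rate from the ME to ES i:
    r_i = W_i * log2(1 + p_i * g_i / (N_i + I_i))."""
    return W_i * math.log2(1.0 + (p_i * g_i) / (N_i + I_i))

def transmission_time(D_i, r_i):
    """Time to send a subtask of D_i bits at rate r_i bits/s: T_i^t = D_i / r_i."""
    return D_i / r_i
```

For example, with an SNR of 3 the spectral efficiency is exactly $\log_2 4 = 2$ bits/s/Hz.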
set the task amount to
Figure RE-GDA00032741740100000812
Is MiIs assigned to the ith ES with the maximum computing power of
Figure RE-GDA00032741740100000813
ESi to subtask MiIs cycled into
Figure RE-GDA00032741740100000814
Satisfy the requirement of
Figure RE-GDA00032741740100000815
Is an estimate of DT's computing power with respect to MEi, then ESi performs task MiThe required computation time estimate is expressed as:
Figure RE-GDA00032741740100000816
further, the error in the value of the calculated delay can be expressed as:
Figure RE-GDA0003274174010000091
the real computation time required for the task to execute in ES is expressed as:
Figure RE-GDA0003274174010000092
since the amount of feedback data after the computation is finished is extremely small, the pass-back time of the task is set to 0[30 ].
Will subtask MiThe total time required for the process to be sent to the ith ES is equal to the sum of the transmission time and the calculation time, and is expressed as:
Figure RE-GDA0003274174010000093
the total time required for the task to collaboratively process in the ES is expressed as:
Figure RE-GDA0003274174010000094
further, the total time for completing the task M is obtained as follows:
Tt=max{Tl,Tf}
in addition, the ith ES is used to calculate the subtask MiThe power consumed was:
Figure RE-GDA0003274174010000095
wherein k isiIs the effective switched-capacitor coefficient of the ith ES.
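A minimal sketch of the delay aggregation $T_t = \max\{T_l, T_f\}$ described above. Function names are hypothetical, and the ES power form in `es_power` follows the common $k_i C_i f_i^2$ switched-capacitance model, which is an assumption where the original equation image is unreadable:

```python
def total_delay(T_l, trans_times, comp_times):
    """Per ES: T_i = T_i^t + T_i^c; then T_f = max_i T_i and T_t = max(T_l, T_f)."""
    T_f = max(t + c for t, c in zip(trans_times, comp_times))
    return max(T_l, T_f)

def es_power(k_i, C_i, f_i):
    """Assumed ES power for C_i CPU cycles at frequency f_i: P_i = k_i * C_i * f_i**2."""
    return k_i * C_i * f_i ** 2
```

Note that the slowest branch (local or the worst edge server) dominates the completion time, which is why balancing $\alpha$ and $\beta_i$ matters.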
At the same time, the transfer of data from the ME and the ESs to the DT results in additional communication and power overhead. The ME sends its operation data, of size $B_n$, to the DT wirelessly; the time required to transfer this data is:

$$T_{medt} = B_n / r_{medt},$$

where $r_{medt}$, the channel rate between the ME and the DT, is expressed as:

$$r_{medt} = W \log_2\!\left(1 + \frac{p_n h}{N_n + I_n}\right),$$

where $W$ is the bandwidth, $p_n$ is the transmit power required for the ME to send the data, and $h$, $N_n$ and $I_n$ represent the channel coefficient, noise and interference, respectively.
Since the current operation data sent to the DT is time-sensitive, $T_{medt} \le \tau_1$ must hold, where $\tau_1$ is the time tolerance for the data sent by the ME.
Furthermore, the ES transmits its operation data, of volume $C_n$, to the DT through the optical fiber network. The time required is:

$$T_{esdt} = C_n / \upsilon_{fiber},$$

where $\upsilon_{fiber}$ is the speed at which information is transmitted in the optical fiber. The power consumed by the ES for transmitting this data is expressed as:

$$P_t = \zeta \cdot T_{esdt},$$

where $\zeta$ is the power consumed by the ES to transmit information per unit time. Likewise, to ensure the timeliness of the operation data delivered by the ES to the DT, $T_{esdt} \le \tau_2$ must hold, where $\tau_2$ is the time tolerance for the data transmitted by the ES.
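The two timeliness constraints can be checked together in a small helper (the name and the second/bit/bits-per-second units are assumptions):

```python
def dt_sync_ok(B_n, r_medt, C_data, v_fiber, tau1, tau2):
    """Check DT synchronization timeliness:
    T_medt = B_n / r_medt <= tau1  (ME operation data over wireless),
    T_esdt = C_data / v_fiber <= tau2  (ES operation data over fiber)."""
    T_medt = B_n / r_medt
    T_esdt = C_data / v_fiber
    return (T_medt <= tau1 and T_esdt <= tau2), T_medt, T_esdt
```

If either bound is violated, the twin's view of the network is stale and any unloading scheme derived from it becomes unreliable.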
The total power consumed by the ME is composed of the local execution power and the transmission power, and is expressed as:

$$P_{me} = P_l + \sum_{i=1}^{a} p_i^t + p_n.$$

The total power consumed by all the $ES_i$ is expressed as:

$$P_{es} = \sum_{i=1}^{a} P_i.$$

This further gives the expression for the total system power:

$$P = P_{me} + P_{es} + P_t.$$

In order to effectively measure the system performance, a cost function $r(t)$ taking power consumption and time delay as its main indexes is constructed, expressed as:

$$r(t) = (\Upsilon - P)\, T_t,$$

where $\Upsilon$ is a constant.
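A one-line sketch of the cost function $r(t) = (\Upsilon - P)T_t$, assuming $P$ is the sum of the three power terms above; the function name and the default value of the constant $\Upsilon$ are illustrative only:

```python
def cost(P_me, P_es, P_t, T_t, upsilon=100.0):
    """Cost/reward r(t) = (Upsilon - P) * T_t, with P = P_me + P_es + P_t
    the total system power and T_t the total task completion time."""
    P = P_me + P_es + P_t
    return (upsilon - P) * T_t
```

With this form, schemes that consume less power and finish no slower score higher, which is the quantity the reinforcement-learning agent seeks to improve.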
In the embodiment of the invention, the agent implementing task offloading is the ME, and the state space of the agent is S, defined as:

S = {L_m, G_x, C_n}

wherein l_m(t) represents the connection state between the ME and ES_m at time t, and L_m represents the state space of the connection between the ME and ES_m, defined as:

L_m = {l_m(t) | l_m(t) ∈ {0, 1}}

When a communication link between the ME and ES_m is created at time t, l_m(t) is 1, otherwise it is 0. g_x(t) represents the channel gain of the communication link between the ME and ES_x at time t, and G_x is the state space of the wireless channel gain between the ME and ES_x, defined as:

G_x = {g_x(t) | g_x(t) ∈ [g_min(t), g_max(t)]}

g_x(t) is a discrete value with the value range [g_min(t), g_max(t)], where g_max(t) denotes the channel coefficient with the largest value and g_min(t) denotes the smallest channel coefficient. c_n(t) denotes the computing-resource state that ES_n can allocate to an offloaded task at time t, and C_n is the state space of the computing resources that ES_n can allocate to the offloaded task, defined as:

C_n = {c_n(t) | c_n(t) ∈ [c_min(t), c_max(t)]}

c_n(t) is a discrete value with the value range [c_min(t), c_max(t)], where c_min(t) represents the smallest computing resource that ES_n can provide at time t, and c_max(t) represents the largest computing resource that ES_n can provide to the offloaded task.
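The state observation s(t) described above — per edge server a link flag l_m(t), a discrete channel gain g_x(t), and a discrete allocatable-compute level c_n(t) — can be sketched as follows; the sampling ranges are illustrative assumptions:

```python
import random

def observe_state(num_es, g_range=(1, 9), c_range=(1, 9)):
    """One observation s(t): for each edge server, the link state l (0/1),
    a discrete channel gain g in [g_min, g_max], and a discrete
    allocatable-compute level c in [c_min, c_max]."""
    state = []
    for _ in range(num_es):
        l = random.randint(0, 1)          # l_m(t): link created or not
        g = random.randint(*g_range)      # g_x(t): discrete channel gain
        c = random.randint(*c_range)      # c_n(t): discrete compute level
        state.append((l, g, c))
    return state
```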
In the embodiment of the present invention, from the perspective of reducing the system power overhead and shortening the task execution time, the ME intelligently assigns part of its tasks to local execution and migrates the remaining tasks to the edge servers. The action space of the ME is defined as:

A = {A_α, A_{β,1}, A_{β,2}, ..., A_{β,a}}

wherein a_α represents the action of the ME assigning a subtask with ratio α to local execution, and A_α is the action space of allocating tasks locally, composed of all allocation modes of the ME; a_{α,u} represents one such allocation action of the ME, and the corresponding action space is represented as:

A_α = {a_{α,u} | α_u ∈ [0, 1]}

wherein α_u represents the proportion of the task amount that the ME allocates to local execution. a_{β,i} represents the action of the ME allocating a task with data-amount ratio β_i to ES_i for execution, and A_{β,i} represents the action space of assigning tasks to the i-th ES, composed of all possible allocation modes of the ME, as follows:

A_{β,i} = {a_{β,i,u} | β_{i,u} ∈ [0, 1]},  i = 1, 2, ..., a

wherein a_{β,i,u} represents a task allocation action of the ME, and A_{β,i} is the corresponding action space, in which β_{i,u} represents the proportion of the task amount allocated to the i-th ES for execution.
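Under the action-space definition above, a discretized allocation action is a vector (α, β_1, ..., β_a) of ratios summing to 1. A minimal enumeration sketch, with an assumed discretization step:

```python
from itertools import product

def enumerate_actions(num_es, step=0.25):
    """Enumerate discretized allocation actions (alpha, beta_1, ..., beta_a):
    alpha is the share executed locally on the ME, beta_i the share migrated
    to ES_i; every ratio is a multiple of `step` and the shares sum to 1."""
    levels = [round(k * step, 10) for k in range(int(round(1 / step)) + 1)]
    actions = []
    for combo in product(levels, repeat=num_es + 1):
        if abs(sum(combo) - 1.0) < 1e-9:
            actions.append(combo)         # (alpha, beta_1, ..., beta_a)
    return actions
```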
In the embodiment of the present invention, the state of the system at time t is s(t). After action a(t) is taken in this state, the system transitions to the next state s(t+1) with state transition probability P(s(t+1) | s(t), a(t)). Because each element in the state space is independent, the transition probability can be factored as:

P(s(t+1) | s(t), a(t)) = P(l_m(t+1) | l_m(t)) · P(g_x(t+1) | g_x(t)) · P(c_n(t+1) | c_n(t))
In the embodiment of the invention, in order to effectively measure the effectiveness of the system in reducing the power overhead and the network delay, the cost function is taken as the reward function of the model. Besides the current profit, the system also considers the future long-term profit, and a cumulative reward function R(t) is further defined, expressed as:

R(t) = Σ_{k=0}^{m} λ^k r(t + k)

where λ ∈ (0, 1) is a discount factor, and m represents the number of iterations.
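The discounted cumulative reward can be computed directly from a reward sequence; a minimal sketch:

```python
def cumulative_reward(rewards, lam=0.9):
    """Discounted cumulative reward R(t) = sum_k lam**k * r(t+k) over a
    finite horizon, lam in (0, 1) being the discount factor."""
    return sum((lam ** k) * r for k, r in enumerate(rewards))
```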
The system is in state s(t) in time slot t, and then takes the action a(t) = π(s(t)) according to the policy π, so that the system enters the next state s(t+1). Whether the action taken is good is generally evaluated by a value function Q^π(s(t), a(t)), expressed as:

Q^π(s(t), a(t)) = E[R(t) | s(t), a(t)]

Further, the optimal allocation policy of the ME is expressed as:

π* = arg max_{a(t)} Q^π(s(t), a(t))
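As a tabular stand-in for the value-function evaluation above (the actual scheme uses a learned DDPG critic), the optimal action in a state is the one maximizing the estimated Q value:

```python
def greedy_action(q_values):
    """Pick the action maximizing the estimated value function Q(s, a);
    q_values maps each candidate action to its Q estimate."""
    return max(q_values, key=q_values.get)
```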
In an embodiment of the present invention, the terminal includes a task offloading module configured to offload a computing task to at least one edge node according to the determined task offloading scheme, where the task offloading module specifically includes:

a solving unit, configured to solve, according to the acquired neural network model parameters, an optimal task offloading scheme;

a sending unit, configured to proportionally distribute the computing task to an edge node according to the optimal task offloading scheme; and

a looping unit, configured to judge whether the computing task has been completely distributed, and if not, to update the offloading model, re-solve to obtain the current optimal task offloading scheme, and proportionally distribute the computing task to an edge node according to the current optimal task offloading scheme.
In the embodiment of the present invention, the following table describes the task offloading process of the looping unit of the present invention.
Table 1: Computing-task offloading algorithm provided by the embodiment of the invention
1: Initialize the empty set Φ for storing the state observation data at each point in time
2: for each time slot t do
3:   Observe the current state s(t)
4:   Store the observation data into the set Φ
5:   Detect, according to the observation information, whether task offloading is required
6:   if offloading is not required then
7:     return to step 3
8:   Obtain the optimal policy π(s'(t)|θ^π) in state s(t) according to the updated output of the main network
9:   The ME allocates the tasks in an optimized manner according to π(s'(t)|θ^π), until all tasks are allocated
The empty set Φ is initialized in step 1 for storing the state observation data at each point in time. In steps 3 and 4, the current state s(t) is observed and the observation data is stored in the set Φ. In steps 5 to 7, whether task offloading is required is detected according to the observation information. If not, the algorithm returns to step 3; if it is detected that a task needs to be offloaded, steps 8 and 9 are executed, namely the optimal policy π(s'(t)|θ^π) in state s(t) is obtained according to the updated output of the main network. In step 9, the ME optimally allocates the tasks according to π(s'(t)|θ^π), and the algorithm runs until all tasks are allocated.
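The steps above can be sketched as a Python loop; the four callbacks (observation, offload detection, policy query, allocation) are hypothetical hooks standing in for the real subsystems:

```python
def offloading_loop(observe, needs_offload, policy, allocate, max_slots=100):
    """Sketch of Table 1: initialize the empty observation set, observe the
    state each slot, and once offloading is detected, query the policy and
    allocate shares until the whole task is assigned."""
    phi = []                              # step 1: empty set for observations
    for t in range(max_slots):
        s = observe(t)                    # step 3: observe current state s(t)
        phi.append(s)                     # step 4: store observation in phi
        if not needs_offload(s):          # steps 5-7: offloading required?
            continue                      # no -> back to step 3
        while True:                       # steps 8-9
            action = policy(s)            # optimal policy pi(s'(t)|theta_pi)
            if allocate(action):          # allocate a share; True when done
                return phi
    return phi
```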
The following describes the effect of the present invention with a specific simulation example:
The simulation is implemented jointly in Python 3.6 and Matlab 2019a on a computer configured with an Intel Core i7-4790 3.40 GHz CPU and 8 GB of memory. The computing capability of the ME in the simulation is fixed at 1×10^11 cycles/s, and the computing capability of the ESs is uniformly distributed over [1, 9]×10^12 cycles/s. The number of cooperating ESs is 10. For ease of analysis, both the noise and the power are set to 1. The bandwidth is set to 100 Mbps, and the fading channel model is: h = υ_1/d^{α/2}, g = υ_2/(1 − d)^{α/2}, where υ_i ~ CN(0, 1), and CN(m_v, σ_v^2) is a circularly symmetric complex Gaussian distribution with mean m_v and variance σ_v^2. α is the path-loss factor, which was set to 3 in the simulation. The effective switched capacitance coefficient is set to 1×10^{-26}.
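The fading channel model can be sampled as follows; treating d as a normalized distance in (0, 1) is an assumption:

```python
import random

def sample_cn01():
    """Draw upsilon ~ CN(0, 1): a circularly symmetric complex Gaussian
    sample with zero mean and unit variance (0.5 per real/imaginary part)."""
    s = 0.5 ** 0.5
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

def channel_gains(d, alpha=3.0):
    """Fading channel model of the simulation: h = v1 / d**(alpha/2),
    g = v2 / (1 - d)**(alpha/2), with alpha the path-loss factor (3)."""
    v1, v2 = sample_cn01(), sample_cn01()
    return v1 / d ** (alpha / 2), v2 / (1 - d) ** (alpha / 2)
```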
An actor network and a critic network are built based on the TensorFlow module, each consisting of 3 fully connected layers. The numbers of neurons in the fully connected layers of the actor network are 60, 30, and 30, respectively, and the numbers of neurons in the fully connected layers of the critic network are 60, 60, and 30, respectively. The ReLU function is used for all activation functions of the fully connected layers. The soft update factor is set to 0.01.
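The original networks are built in TensorFlow; the layer structure described above (ReLU activations; actor widths 60/30/30, critic widths 60/60/30, soft update factor 0.01) can be sketched framework-free in NumPy, with the input dimension and weight initialization as assumptions:

```python
import numpy as np

def mlp_forward(x, widths, rng):
    """Forward pass through fully connected layers of the given widths,
    each followed by a ReLU activation; weights are randomly initialized
    here purely for illustration."""
    h = x
    for w in widths:
        W = rng.standard_normal((h.shape[-1], w)) * 0.1
        h = np.maximum(h @ W, 0.0)        # ReLU(h @ W), zero bias
    return h

def soft_update(target, main, tau=0.01):
    """Polyak soft update of the target-network parameters with the
    soft update factor tau = 0.01."""
    return [(1 - tau) * t + tau * m for t, m in zip(target, main)]

rng = np.random.default_rng(0)
state = rng.standard_normal(12)                      # input size is assumed
actor_out = mlp_forward(state, [60, 30, 30], rng)    # actor widths 60/30/30
critic_out = mlp_forward(state, [60, 60, 30], rng)   # critic widths 60/60/30
```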
Figures 2 and 3 analyze the effect of the presence or absence of DT assistance on the power consumption and the latency required to execute a task, respectively. It is evident from both figures that, when processing the same amount of tasks, the power and delay consumed with the aid of DT are significantly lower. For example, at a task size of 20 GB, the system without DT assistance consumes up to 3.3 mJ of power, while its corresponding network latency reaches 2.8 s. When the DT technology is introduced into the MEC environment, with the DT's data assisting intelligent-algorithm training, providing global data, and the like, the power consumption of the system is reduced markedly: at a task size of 20 GB, the power consumption index drops to 0.3 mJ, and the network delay can be reduced to 0.2 s. Although the introduction of DT increases the amount of data transferred, it yields higher performance gains for the system.
In addition, to measure the effectiveness of the TOS-DTA policy, it is compared with the policy in which the ME executes all tasks locally, the policy in which the ME executes half of the tasks locally and the ES executes the other half, the policy in which the ES executes all tasks, and a random offloading policy. The power consumption of the different strategies is compared in Fig. 4. Since the ES devices are more powerful than the ME, the ES consumes less power when processing the same task. For example, when the data size of a task is 40 GB, executing the task entirely on the ME consumes about 0.82 mJ of power, while offloading all tasks to the ES consumes only 0.58 mJ. The random offloading strategy consumes a power between the two, about 0.62 mJ, but from the overall analysis its stability is poor, and its consumed power deviates considerably. Furthermore, allocating tasks evenly between ME-local and ES processing consumes about 0.3 mJ, whereas the TOS-DTA strategy proposed herein consumes only 0.23 mJ when processing the same amount of tasks.
The network delay corresponding to the above task offloading schemes is further analyzed in Fig. 5. As can be seen from Fig. 5, the TOS-DTA policy requires only 1.14 ms to execute a 40 GB task. The corresponding durations for executing all tasks locally, random offloading, executing all tasks on the ES, and splitting execution evenly between the two are 2.52 ms, 2.27 ms, 1.61 ms, and 1.25 ms, respectively.
Under the cost function created by the present invention, it is apparent from Fig. 6 that the comprehensive network performance of the TOS-DTA policy is significantly better than that of the remaining 4 policies.
The invention provides TOS-DTA, a scheme for realizing intelligent task offloading with DT assistance in a digital twin edge network. Because the mobile terminal lacks global perception of the edge server information, excessive power consumption, overlong task execution times, and similar situations occur when the mobile terminal offloads tasks to edge nodes. Meanwhile, the limited computing and storage capabilities of the terminal device restrict the feasibility of running an AI algorithm directly on the mobile terminal. Since the DT keeps track of the global data of the network, it can provide the mobile terminal with the current operating data of the edge servers when the terminal offloads a task to an edge server. Meanwhile, the mass data stored in the DT can be used to assist the ME in training the AI algorithm. After the AI algorithm is trained in the DT, the trained DNN network parameters are sent to the ME, which reduces the communication overhead and can effectively improve the system performance. The invention constructs a cost function taking the power overhead and the network delay as the main performance indexes, and establishes a mathematical optimization model aiming at maximizing the cost function. In view of the complexity of the environment where the ME is located, the process of the ME offloading tasks is described as an MDP, and on this basis it is solved by using the DDPG algorithm in deep reinforcement learning. To measure the effectiveness of the TOS-DTA strategy, the strategy provided by the invention is compared with prior-art methods in simulation experiments. The simulations prove that TOS-DTA can effectively improve the network performance in terms of power overhead, network delay, and their combination.
An embodiment of the present invention further provides a task offloading method based on edge collaboration, which is applied to the digital twin layer according to the foregoing embodiment, and as shown in fig. 7, the task offloading method based on edge collaboration includes the following steps:
step S702, acquiring operation data of the terminal and the edge node;
step S704, training a neural network model according to the acquired operation data;
step S706, issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to a terminal for execution.
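The digital twin layer's three steps can be sketched as one round of a loop; the three callbacks are hypothetical hooks standing in for the real subsystems:

```python
def digital_twin_round(collect_data, train_model, send_params):
    """One round of steps S702-S706: acquire the operating data of the
    terminal and edge nodes, train the neural network model on it, and
    issue the trained parameters to the terminal."""
    data = collect_data()        # S702: acquire operating data
    params = train_model(data)   # S704: train neural network model
    send_params(params)          # S706: issue trained parameters
    return params
```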
In the embodiment of the present invention, for the explanation of the above steps, please refer to the system part of the present invention. Since the method is applied to the digital twin layer in the system, the definitions of the digital twin layer in the system are applicable to the method, and by analogy the definitions of the terminal are also applicable to the method.
An embodiment of the present invention further provides a task offloading method based on edge cooperation, which is applied to the terminal according to the foregoing embodiment, and as shown in fig. 8, the task offloading method based on edge cooperation includes the following steps:
step S802, uploading operation data to a digital twin layer;
step S804, obtaining a neural network model parameter obtained by training the digital twin layer;
step S806, solving a task unloading scheme according to the neural network model parameters;
step S808, proportionally unloading the calculation task to at least one edge node according to the task unloading scheme;
step S810, obtaining a calculation result returned by the edge node.
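Correspondingly, the terminal-side steps S802 to S810 can be sketched as one round; all callbacks are hypothetical hooks:

```python
def terminal_round(upload, fetch_params, solve, offload, collect_results):
    """One round of steps S802-S810: upload operating data to the digital
    twin layer, fetch the trained model parameters, solve the offloading
    scheme, offload task shares to edge nodes, and collect the results."""
    upload()                      # S802: upload operating data
    params = fetch_params()       # S804: get trained model parameters
    scheme = solve(params)        # S806: solve task offloading scheme
    offload(scheme)               # S808: proportional offloading
    return collect_results()      # S810: results returned by edge nodes
```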
In the embodiment of the present invention, for the explanation of the above steps, please refer to the system part of the present invention. Since the method is applied to the terminal in the system, the definitions of the terminal in the system are applicable to the method, and by analogy the definitions of the digital twin layer are also applicable to the method.
An embodiment of the present invention further provides a task offloading device based on edge collaboration, which is applied to the digital twin layer in the foregoing embodiment, and as shown in fig. 9, the task offloading device based on edge collaboration includes:
an obtaining module 901, configured to obtain operation data of a terminal and an edge node;
a training module 902, configured to train a neural network model according to the acquired operation data;
and the issuing module 903 is used for issuing the trained neural network model parameters or the task unloading scheme calculated according to the trained neural network model parameters to the terminal for execution.
In the embodiments of the present invention, for the description of the apparatus portion, reference may be made to contents of related portions in a corresponding method or system, and details of the embodiments of the present invention are not repeated here.
An embodiment of the present invention further provides a task offloading device based on edge cooperation, which is applied to the terminal described in the foregoing embodiment, and as shown in fig. 10, the task offloading device based on edge cooperation includes:
an upload module 1001 for uploading the operation data to the digital twin layer;
a parameter obtaining module 1002, configured to obtain a neural network model parameter obtained by the training of the digital twin layer;
a resolving module 1003, configured to resolve the task unloading scheme according to the neural network model parameters;
the unloading module is used for unloading the calculation task to at least one edge node according to the task unloading scheme in proportion;
a result obtaining module 1004, configured to obtain a calculation result returned by the edge node.
In the embodiments of the present invention, for the description of the apparatus portion, reference may be made to contents of related portions in a corresponding method or system, and details of the embodiments of the present invention are not repeated here.
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may particularly be the digital twin layer or the terminal in fig. 1. As shown in fig. 11, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may further store a computer program, and when the computer program is executed by a processor, the computer program may enable the processor to implement the task offloading method based on edge collaboration provided by the embodiment of the present invention. The internal memory may also store a computer program, and when the computer program is executed by the processor, the computer program may enable the processor to execute the task offloading method based on edge cooperation provided by the embodiment of the present invention. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the inventive arrangements and is not intended to limit the computing devices to which the inventive arrangements may be applied, as a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the task offloading device based on edge collaboration provided by the embodiment of the present invention may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 11. The memory of the computer device may store the various program modules constituting the edge-collaboration-based task offloading device, such as the acquisition module, the training module, and the issuing module shown in fig. 9. The computer program constituted by the respective program modules causes the processor to execute the steps of the task offloading method based on edge collaboration of the respective embodiments of the present invention described in this specification.
For example, the computer device shown in fig. 11 may execute step S702 by an acquisition module in the task offloading device based on edge cooperation shown in fig. 9; the computer device may execute step S704 through the training module; the computer device may perform step S706 through the issuing module.
In one embodiment, the task offloading device based on edge collaboration provided by the embodiment of the present invention may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 11. The memory of the computer device may store various program modules constituting the task offloading device based on edge cooperation, such as an uploading module, a parameter acquiring module, a resolving module, an offloading module, and a result acquiring module shown in fig. 10. The computer program constituted by the respective program modules causes the processor to execute the steps in the task unloading method based on edge cooperation of the respective embodiments of the present invention described in this specification.
For example, the computer apparatus shown in fig. 11 may perform step S802 by the upload module in the task offload device based on edge cooperation as shown in fig. 10; the computer device may execute step S804 through the parameter obtaining module; the computer device may execute step S806 through the resolving module; the computer device may perform step S808 through the uninstallation module; the computer device may perform step S810 through the result obtaining module.
In one embodiment, a computer device is proposed, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
step S702, acquiring operation data of the terminal and the edge node;
step S704, training a neural network model according to the acquired operation data;
step S706, issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to a terminal for execution.
Or:
step S802, uploading operation data to a digital twin layer;
step S804, obtaining a neural network model parameter obtained by training the digital twin layer;
step S806, solving a task unloading scheme according to the neural network model parameters;
step S808, proportionally unloading the calculation task to at least one edge node according to the task unloading scheme;
step S810, obtaining a calculation result returned by the edge node.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, causes the processor to perform the steps of:
step S702, acquiring operation data of the terminal and the edge node;
step S704, training a neural network model according to the acquired operation data;
step S706, issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to a terminal for execution.
Or:
step S802, uploading operation data to a digital twin layer;
step S804, obtaining a neural network model parameter obtained by training the digital twin layer;
step S806, solving a task unloading scheme according to the neural network model parameters;
step S808, proportionally unloading the calculation task to at least one edge node according to the task unloading scheme;
step S810, obtaining a calculation result returned by the edge node.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least a portion of the steps in the various embodiments may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An edge collaboration-based task offloading system, comprising:
the digital twin layer is used for acquiring operation data of the terminal and the edge node, and training a neural network model according to the acquired operation data, wherein the neural network model is used for determining a task unloading scheme; and
the terminal is used for unloading the computing task to at least one edge node according to the determined task unloading scheme.
2. The edge collaboration-based task offloading system of claim 1, wherein the digital twin layer comprises an acquisition module and a training solution module;
the acquisition module is used for acquiring the operation data of the terminal and the edge node;
the training solving module is used for training a neural network model according to the operation data acquired by the acquiring module so as to solve an optimal task unloading scheme.
3. The edge collaboration-based task offloading system of claim 1, wherein the digital twin layer or the terminal is provided with an offloading model for describing a set of task offloading schemes.
4. The edge collaboration-based task offloading system of claim 3, wherein the digital twin layer further comprises a solution module configured to solve an optimal task offloading scheme according to the trained neural network model.
5. The task offload system based on edge collaboration as claimed in claim 3, wherein the offload model is composed of a state space, an action space, a state transition probability, and a reward function;
the state space is used for describing the connection state of the digital twin layer and the terminal;
the action space is used for describing possible task unloading modes of the terminal;
the state transition probability is used for describing the probability that the terminal executes any unloading mode in the state space so that the state space enters the next state from the current state;
the reward function is used for describing reward scores obtained by the terminal executing task unloading, and measuring factors comprise system power and network time delay; the reward function is a jackpot function arranged to increase the long-distance reward score.
6. The task offloading system based on edge collaboration as recited in claim 1, wherein the terminal includes a task offloading module configured to offload a computing task to at least one edge node according to the determined task offloading scheme, and the task offloading module specifically includes:
the solving unit is used for solving according to the acquired neural network model parameters to obtain an optimal task offloading scheme;
the sending unit is used for proportionally distributing the calculation task to an edge node according to the optimal task unloading scheme;
and the circulating unit is used for judging whether the calculation task is distributed completely, if not, updating the unloading model and re-solving to obtain the current optimal task unloading scheme, and proportionally distributing the calculation task to an edge node according to the current optimal task unloading scheme.
7. An edge cooperation-based task offloading method applied to the digital twin layer as claimed in claim 1, wherein the edge cooperation-based task offloading method comprises the following steps:
acquiring operation data of a terminal and an edge node;
training a neural network model according to the acquired operation data;
and issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to a terminal for execution.
8. The task unloading method based on edge cooperation is applied to the terminal according to claim 1, and is characterized by comprising the following steps:
uploading operating data to a digital twin layer;
obtaining a neural network model parameter obtained by training the digital twin layer;
solving a task unloading scheme according to the neural network model parameters;
proportionally unloading the computing task to at least one edge node according to the task unloading scheme;
and acquiring a calculation result returned by the edge node.
9. An edge cooperation-based task unloading device applied to the digital twin layer as claimed in claim 1, wherein the edge cooperation-based task unloading device comprises:
the acquisition module is used for acquiring the operation data of the terminal and the edge node;
the training module is used for training the neural network model according to the acquired operation data;
and the issuing module is used for issuing the neural network model parameters obtained by training or the task unloading scheme calculated according to the neural network model parameters obtained by training to the terminal for execution.
10. An edge cooperation-based task unloading device applied to the terminal according to claim 1, wherein the edge cooperation-based task unloading device comprises:
the uploading module is used for uploading the operation data to the digital twin layer;
the parameter acquisition module is used for acquiring neural network model parameters obtained by the training of the digital twin layer;
the calculation module is used for calculating a task unloading scheme according to the neural network model parameters;
the unloading module is used for unloading the calculation task to at least one edge node according to the task unloading scheme in proportion;
and the result acquisition module is used for acquiring the calculation result returned by the edge node.
CN202110469402.6A 2021-04-29 2021-04-29 Task unloading system, method and device based on edge collaboration Active CN113572804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110469402.6A CN113572804B (en) 2021-04-29 2021-04-29 Task unloading system, method and device based on edge collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110469402.6A CN113572804B (en) 2021-04-29 2021-04-29 Task unloading system, method and device based on edge collaboration

Publications (2)

Publication Number Publication Date
CN113572804A true CN113572804A (en) 2021-10-29
CN113572804B CN113572804B (en) 2023-06-30

Family

ID=78161411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110469402.6A Active CN113572804B (en) 2021-04-29 2021-04-29 Task unloading system, method and device based on edge collaboration

Country Status (1)

Country Link
CN (1) CN113572804B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347500A (en) * 2019-06-18 2019-10-18 东南大学 For the task discharging method towards deep learning application in edge calculations environment
CN110351754A (en) * 2019-07-15 2019-10-18 北京工业大学 Industry internet machinery equipment user data based on Q-learning calculates unloading decision-making technique
CN110807515A (en) * 2019-10-30 2020-02-18 北京百度网讯科技有限公司 Model generation method and device
CN110941675A (en) * 2019-11-26 2020-03-31 西安交通大学 Wireless energy supply edge calculation delay optimization method based on deep learning
CN111126594A (en) * 2019-11-25 2020-05-08 北京邮电大学 Neural network model dynamic segmentation method and device based on edge calculation
CN111726826A (en) * 2020-05-25 2020-09-29 上海大学 Online task unloading method in base station intensive edge computing network
CN111857065A (en) * 2020-06-08 2020-10-30 北京邮电大学 Intelligent production system and method based on edge calculation and digital twinning
CN112100155A (en) * 2020-09-09 2020-12-18 北京航空航天大学 Cloud edge cooperative digital twin model assembling and fusing method
CN112118601A (en) * 2020-08-18 2020-12-22 西北工业大学 Method for reducing task unloading delay of 6G digital twin edge computing network
CN112367109A (en) * 2020-09-28 2021-02-12 西北工业大学 Incentive method for digital twin-driven federal learning in air-ground network
CN112422644A (en) * 2020-11-02 2021-02-26 北京邮电大学 Method and system for unloading computing tasks, electronic device and storage medium
US20210068025A1 (en) * 2019-08-28 2021-03-04 Cisco Technology, Inc. Optimizing private network during offload for user equipment performance parameters
US20210081787A1 (en) * 2019-09-12 2021-03-18 Beijing University Of Posts And Telecommunications Method and apparatus for task scheduling based on deep reinforcement learning, and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wen Sun, Haibin Zhang, Yan Zhang: "Reducing Offloading Latency for Digital Twin Edge Networks in 6G", IEEE Transactions on Vehicular Technology, vol. 69, no. 10, pages 3-4 *
Hangzhou Tensor Technology Co., Ltd. (杭州张量科技有限公司): "Mixed-Reality Digital Twin Solution Based on Edge Computing", Automation Panorama (自动化博览), vol. 38, no. 2

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114051205A (en) * 2021-11-08 2022-02-15 南京大学 Edge optimization method based on reinforcement learning dynamic multi-user wireless communication scene
CN114051205B (en) * 2021-11-08 2022-09-13 南京大学 Edge optimization method based on reinforcement learning dynamic multi-user wireless communication scene
WO2023087442A1 (en) * 2021-11-18 2023-05-25 清华大学 Digital twin network-based low-latency and high-reliability transmission method and apparatus, device, and medium
CN113934472A (en) * 2021-12-16 2022-01-14 江西师范大学 Task unloading method, device, equipment and storage medium
CN113934472B (en) * 2021-12-16 2022-03-01 江西师范大学 Task unloading method, device, equipment and storage medium
CN114466356A (en) * 2022-01-29 2022-05-10 重庆邮电大学 Task unloading edge server selection method based on digital twin
CN114466356B (en) * 2022-01-29 2022-10-14 重庆邮电大学 Task unloading edge server selection method based on digital twin
CN116055324A (en) * 2022-12-30 2023-05-02 重庆邮电大学 Digital twin method for self-optimization of data center network
CN116055324B (en) * 2022-12-30 2024-05-07 重庆邮电大学 Digital twin method for self-optimization of data center network
CN117528657A (en) * 2024-01-04 2024-02-06 长春工程学院 Electric power internet of things task unloading method, system, equipment and medium
CN117528657B (en) * 2024-01-04 2024-03-19 长春工程学院 Electric power internet of things task unloading method, system, equipment and medium

Also Published As

Publication number Publication date
CN113572804B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN113572804B (en) Task unloading system, method and device based on edge collaboration
Vemireddy et al. Fuzzy reinforcement learning for energy efficient task offloading in vehicular fog computing
Karanika et al. A demand-driven, proactive tasks management model at the edge
Kim Nested game-based computation offloading scheme for mobile cloud IoT systems
Raj Improved response time and energy management for mobile cloud computing using computational offloading
Tham et al. Stochastic programming methods for workload assignment in an ad hoc mobile cloud
Crutcher et al. Hyperprofile-based computation offloading for mobile edge networks
CN113645637B (en) Method and device for unloading tasks of ultra-dense network, computer equipment and storage medium
Santos et al. Resource provisioning in fog computing through deep reinforcement learning
CN112905315A (en) Task processing method, device and equipment in Mobile Edge Computing (MEC) network
Lin et al. Deep reinforcement learning-based task scheduling and resource allocation for NOMA-MEC in Industrial Internet of Things
Guo et al. Energy-efficient incremental offloading of neural network computations in mobile edge computing
Robles-Enciso et al. A multi-layer guided reinforcement learning-based tasks offloading in edge computing
Dong et al. Content caching-enhanced computation offloading in mobile edge service networks
Binh et al. Value-based reinforcement learning approaches for task offloading in delay constrained vehicular edge computing
Zhang A computing allocation strategy for Internet of things’ resources based on edge computing
Lyu et al. Multi-leader multi-follower Stackelberg game based resource allocation in multi-access edge computing
Mekala et al. Asxc $^{2} $ approach: a service-x cost optimization strategy based on edge orchestration for iiot
Huang et al. Mobility-aware computation offloading with load balancing in smart city networks using MEC federation
Xiang et al. Federated deep reinforcement learning-based online task offloading and resource allocation in harsh mobile edge computing environment
Zeng et al. Joint optimization of multi-dimensional resource allocation and task offloading for QoE enhancement in Cloud-Edge-End collaboration
Lu et al. Enhancing vehicular edge computing system through cooperative computation offloading
Kushwaha et al. Optimal device selection in federated learning for resource-constrained edge networks
Tang et al. To cloud or not to cloud: an on-line scheduler for dynamic privacy-protection of deep learning workload on edge devices
Kumaran et al. An efficient task offloading and resource allocation using dynamic arithmetic optimized double deep Q-network in cloud edge platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant