CN115473896B - Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm - Google Patents


Info

Publication number
CN115473896B
CN115473896B
Authority
CN
China
Prior art keywords
user equipment
calculating
dqn
time delay
resource allocation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211116361.3A
Other languages
Chinese (zh)
Other versions
CN115473896A (en)
Inventor
王真
姚楠
刘子全
路永玲
胡成博
杨景刚
张国江
付慧
薛海
朱雪琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Jiangsu Electric Power Co Ltd
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
State Grid Jiangsu Electric Power Co Ltd
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Jiangsu Electric Power Co Ltd, Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd filed Critical State Grid Jiangsu Electric Power Co Ltd
Priority to CN202211116361.3A priority Critical patent/CN115473896B/en
Publication of CN115473896A publication Critical patent/CN115473896A/en
Application granted granted Critical
Publication of CN115473896B publication Critical patent/CN115473896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • H04L67/1074Peer-to-peer [P2P] networks for supporting data block transmission mechanisms

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses an electric power Internet of Things offloading strategy and resource allocation optimization method based on a DQN algorithm, comprising the following steps: S1, constructing a DQN model, and setting the parameters required by the DQN model and the device parameters, the devices comprising user equipment, an MEC server, and a cloud server; S2, initializing the DQN model so that tasks may be executed at the user equipment, at the MEC server, or at the cloud server; S3, for each user equipment, calculating the time delay and power consumption when its computation task is executed locally at the user equipment, offloaded to the MEC server, or offloaded to the cloud server; and S4, training the DQN model with the calculated time delay and power consumption, and determining the task offloading strategy and resource allocation with the trained DQN model. By taking into account factors such as time delay, energy consumption, and resource allocation, and by processing urgent computation tasks with priority and optimally allocating computing resources, the method achieves low delay and low energy consumption and realizes cloud-edge collaborative offloading and resource allocation optimization more effectively.

Description

Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm
Technical Field
The invention relates to the technical field of edge computing task unloading of an electric power Internet of things, in particular to an electric power Internet of things unloading strategy and resource allocation optimization method based on a DQN algorithm.
Background
The electric power Internet of Things is the application of Internet of Things technology in the smart grid, effectively integrating communication and power-system infrastructure resources. On the one hand, the smart-grid communication system requires low delay and high reliability; on the other hand, online real-time monitoring of power transmission lines requires low energy consumption and real-time data transmission. Cloud computing can meet the demands of large-scale data processing and computation, but its transmission delay prevents it from meeting low-delay and real-time requirements. Mobile edge computing (MEC) overcomes this disadvantage of cloud computing: it provides low-energy, low-latency computing capability and meets the requirements of emerging smart-grid application services.
As shown in fig. 1, the user equipment transmits computation tasks to the cloud and to the MEC server through the wireless base station, and the computation results are transmitted back to the user equipment, completing the task offloading operation. When there are a large number of offloading tasks, optimizing the offloading decision and resource allocation to reduce energy-consumption or latency cost is critical to achieving efficient task offloading.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a simple, efficient, and highly feasible electric power Internet of Things offloading strategy and resource allocation optimization method based on the DQN algorithm.
In order to solve the above problem, the invention provides an electric power Internet of Things offloading strategy and resource allocation optimization method based on the DQN algorithm, comprising the following steps:
S1, constructing a DQN model, and setting the parameters required by the DQN model and the device parameters; the devices comprise user equipment, an MEC server, and a cloud server;
S2, initializing the DQN model so that tasks may be executed at the user equipment, at the MEC server, or at the cloud server;
S3, for each user equipment, calculating the time delay and power consumption when its computation task is executed locally at the user equipment, offloaded to the MEC server, or offloaded to the cloud server;
and S4, training the DQN model with the calculated time delay and power consumption, and determining the task offloading strategy and resource allocation with the trained DQN model.
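The four steps above can be sketched end to end. In this hypothetical Python sketch, the trained DQN's action selection is stood in for by a simple weighted delay-plus-energy argmax; all names and weight values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the S1-S4 flow. The trained DQN's action choice is
# replaced by a weighted delay+energy argmax; names and weights are
# illustrative assumptions, not from the patent.
LOCAL, MEC, CLOUD = "l", "m", "c"  # offloading targets as denoted in step S2

def choose_offloading(device_costs, mu_t=0.5, mu_e=0.5):
    """device_costs: {device_id: {target: (delay, energy)}} from step S3.
    Returns, per device, the target minimizing mu_t*delay + mu_e*energy
    (a stand-in for the trained DQN's argmax in step S4)."""
    return {
        dev: min(costs, key=lambda a: mu_t * costs[a][0] + mu_e * costs[a][1])
        for dev, costs in device_costs.items()
    }
```

For example, a device whose MEC cost (delay, energy) dominates both the local and cloud options would be assigned target "m".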
As a further improvement of the invention, the parameters required by the DQN model include the learning rate, greedy coefficient, discount factor, memory bank size, and batch learning size.
As a further improvement of the invention, the device parameters include each device's processor frequency, computing resource cost, communication bandwidth, energy coefficient, task data size, uplink transmit power, noise power, application completion deadline, time delay, power consumption, and resource allocation weights.
As a further improvement of the invention, in step S2, execution of a task at the user equipment is denoted by l, execution at the MEC server by m, and execution at the cloud server by c.
As a further improvement of the invention, in step S3, calculating the time delay and power consumption when each user equipment's computation task is executed locally at the user equipment comprises:
S3.1, calculating the execution delay of user equipment i, t_i^l = C_i / f_i^l, where C_i represents the computational load of user equipment i in CPU cycles and f_i^l is the CPU frequency of user equipment i in GHz;
S3.2, calculating the energy consumption of user equipment i when executing the computation task, E_i^l = κ·C_i·(f_i^l)², where κ is the energy coefficient of the chip architecture.
As a further improvement of the invention, in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the MEC server for execution comprises:
S3.3, taking non-orthogonal multiple access as the uplink multiple-access scheme to meet 5G connection requirements, calculating the interference I_i = Σ_{k≠i} p_k·h_k and the uplink transmission rate r_i = B·log₂(1 + p_i·h_i/(I_i + σ)), where B is the base-station channel bandwidth, p_i is the uplink transmit power of user equipment i, h_i is the channel gain between user equipment i and the base station, I_i is the interference caused to user equipment i by the other user equipment in the channel, σ is the noise power, p_k is the uplink transmit power of the other user equipment, and h_k is their channel gain to the base station;
S3.4, calculating a priority coefficient λ_i from q_i, the urgency of user equipment i, and q_th, the urgency threshold of the user equipment;
S3.5, calculating the computing resource allocation f_i^m when user equipment i offloads its computation task to the MEC server, taking into account a delay preference factor, T_i, the maximum delay allowed for the computation task, an energy-consumption preference factor, a computing-resource preference factor, η, the Lagrangian multiplier, and c_m, the unit computing-resource cost on the MEC server;
S3.6, calculating the MEC server execution delay t_i^m = D_i/r_i + C_i/f_i^m, where D_i represents the amount of data transmitted by user equipment i to the computing server in bits;
S3.7, calculating the MEC server energy consumption E_i^m = p_i·D_i/r_i.
As a further improvement of the invention, in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the cloud server for execution comprises:
S3.8, calculating the cloud computing resource allocation f_i^c, where c_c is the unit computing-resource cost on the cloud server;
S3.9, calculating the cloud computing delay t_i^c = D_i/r_i + v·(D_i + D_i^b) + C_i/f_i^c, where D_i^b is the result data quantity computed by the cloud server and v is the unit data-transmission delay between the base station and the cloud server;
S3.10, calculating the cloud server energy consumption; the cloud server's energy model is the same as the MEC server's and comes from uplink transmission only, expressed as E_i^c = p_i·D_i/r_i.
As a further improvement of the present invention, step S4 includes:
S4.1, calculating the reward function R = −(1/N)·Σ_{i=1}^{N}(μ_t·t_i + μ_e·E_i + μ_f·f_i), where N represents the number of user equipment, μ_t, μ_e, μ_f are respectively the time-delay, energy-consumption, and resource-allocation weights, t_i represents the time delay of user equipment i, E_i represents the power consumption of user equipment i, and f_i represents the resource allocation of user equipment i;
S4.2, updating the loss function of the DQN model as Loss(θ) = λ·[R_τ + γ^n·max_{A'} Q(S_{τ+n}, A'; θ̄) − Q(S_τ, A_τ; θ)]², where n represents the number of state transitions per weight update, R_τ is the reward, γ is the discount factor, A' is the action yielding the optimal reward in state S_{τ+n}, λ is the priority coefficient, θ̄ are the target-network parameters, θ are the network parameters, S_τ is the system state, and A_τ is the system action;
S4.3, when storing the updated transition in the experience pool, prioritized experience replay is adopted, and the sampling weight of the current item is determined from the loss function as P_τ = Loss_τ^w / Σ_j Loss_j^w, where w is the priority factor of prioritized experience replay;
and S4.4, the MEC server computes the user equipment priorities through the DQN model to obtain a weight vector, and selects the offloading strategy and resource configuration based on that weight vector.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any one of the methods described above when executing the program.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of any of the methods described above.
The invention has the beneficial effects that:
According to the DQN-algorithm-based electric power Internet of Things offloading strategy and resource allocation optimization method, factors such as time delay, energy consumption, and resource allocation are taken into account, and urgent computation tasks are processed with priority while computing resources are optimally allocated, so that both delay and energy consumption are low. The proposed cloud-edge collaborative offloading strategy and resource allocation optimization outperforms the comparison models in delay, energy consumption, and energy efficiency, and the scheme realizes cloud-edge collaborative offloading and resource allocation optimization more effectively across scenarios with different numbers of user equipment and computation task volumes.
The foregoing is only an overview of the technical solution of the invention. In order to make the technical means of the invention clearer and implementable in accordance with the specification, and to make the above and other objects, features, and advantages of the invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a frame diagram of an electric power Internet of things;
FIG. 2 is a flow chart of an electric power Internet of things unloading strategy and resource configuration optimization method based on a DQN algorithm in the invention;
Fig. 3 is a frame diagram of the DQN algorithm in the invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and specific embodiments, which are illustrative rather than limiting, so that those skilled in the art can better understand and practice the invention.
As shown in fig. 2-3, the method for optimizing the unloading strategy and the resource configuration of the electric power internet of things based on the DQN algorithm in the preferred embodiment of the invention comprises the following steps:
S1, constructing a DQN model, and setting the parameters required by the DQN model and the device parameters; the devices comprise user equipment, an MEC server, and a cloud server;
The parameters required by the DQN model comprise a learning rate lr, a greedy coefficient ε, a discount factor γ, a memory bank size D, and a batch learning size minibatch;
The device parameters include respective device processor frequencies, computational resource costs, communication bandwidth, energy coefficients, task data size, uplink transmit power, noise power, application completion time period, latency, power consumption, resource allocation weights.
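The step-S1 setup above can be sketched as a configuration dictionary. Every numeric value below is a placeholder assumption, not a figure from the patent, except the 30 GHz MEC configuration mentioned in the results.

```python
# Illustrative parameter setup for step S1; values are placeholder
# assumptions, not taken from the patent (except 30 GHz for the MEC server,
# which the results section mentions).
dqn_params = {
    "lr": 1e-3,            # learning rate lr
    "epsilon": 0.9,        # greedy coefficient for epsilon-greedy exploration
    "gamma": 0.95,         # discount factor
    "memory_size": 10000,  # memory bank size D
    "minibatch": 64,       # batch learning size
}
device_params = {
    "f_ue_ghz": 1.0,       # user-equipment CPU frequency
    "f_mec_ghz": 30.0,     # MEC server CPU frequency (30 GHz per the results)
    "bandwidth_hz": 1e6,   # base-station channel bandwidth B
    "kappa": 1e-27,        # chip energy coefficient
    "p_uplink_w": 0.5,     # uplink transmit power
    "sigma": 1e-9,         # noise power
}
```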
Step S2, initializing the DQN model so that tasks may be executed at the user equipment, at the MEC server, or at the cloud server;
optionally, execution of a task at the user equipment is denoted by l, execution at the MEC server by m, and execution at the cloud server by c.
Step S3, for each user equipment, calculating the time delay and power consumption when its computation task is executed locally at the user equipment, offloaded to the MEC server, or offloaded to the cloud server;
Specifically, in step S3, calculating the time delay and power consumption when each user equipment's computation task is executed locally at the user equipment comprises:
step S3.1, calculating the execution delay of user equipment i, t_i^l = C_i / f_i^l, where C_i represents the computational load of user equipment i in CPU cycles and f_i^l is the CPU frequency of user equipment i in GHz;
step S3.2, calculating the energy consumption of user equipment i when executing the computation task, E_i^l = κ·C_i·(f_i^l)², where κ is the energy coefficient of the chip architecture.
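Steps S3.1 and S3.2 can be written directly. The closed forms t = C/f and E = κ·C·f² below are the conventional local-computation cost model and are assumed here; units must be kept consistent (cycles and Hz in this sketch).

```python
def local_delay(c_cycles, f_hz):
    """Execution delay t_i^l = C_i / f_i^l (seconds, with C in cycles, f in Hz)."""
    return c_cycles / f_hz

def local_energy(c_cycles, f_hz, kappa):
    """Energy E_i^l = kappa * C_i * (f_i^l)**2, kappa being the chip-
    architecture energy coefficient (standard dynamic-power model, assumed)."""
    return kappa * c_cycles * f_hz ** 2
```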
The MEC server execution delay comprises the uplink transmission time, the time for the MEC server to execute the task, and the time to transmit the output result from the MEC back to the user equipment; since the output result is typically much smaller than the input, the result-reception time is ignored. Thus, in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the MEC server for execution comprises:
Step S3.3, taking non-orthogonal multiple access as the uplink multiple-access scheme to meet 5G connection requirements, calculating the interference I_i = Σ_{k≠i} p_k·h_k and the uplink transmission rate r_i = B·log₂(1 + p_i·h_i/(I_i + σ)), where B is the base-station channel bandwidth, p_i is the uplink transmit power of user equipment i, h_i is the channel gain between user equipment i and the base station, I_i is the interference caused to user equipment i by the other user equipment in the channel, σ is the noise power, p_k is the uplink transmit power of the other user equipment, and h_k is their channel gain to the base station;
Step S3.4, calculating a priority coefficient λ_i from q_i, the urgency of user equipment i, and q_th, the urgency threshold of the user equipment;
Step S3.5, calculating the computing resource allocation f_i^m when user equipment i offloads its computation task to the MEC server, taking into account a delay preference factor, T_i, the maximum delay allowed for the computation task, an energy-consumption preference factor, a computing-resource preference factor, η, the Lagrangian multiplier, and c_m, the unit computing-resource cost on the MEC server;
Step S3.6, calculating the MEC server execution delay t_i^m = D_i/r_i + C_i/f_i^m, where D_i represents the amount of data transmitted by user equipment i to the computing server in bits;
Step S3.7, calculating the MEC server energy consumption E_i^m = p_i·D_i/r_i.
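The MEC-side quantities of steps S3.3, S3.6, and S3.7 can be sketched under the standard MEC cost model (Shannon-type uplink rate, upload-plus-execute delay, uplink-only device energy). The closed forms are assumptions, since the patent's formula images are not reproduced in this text.

```python
import math

def uplink_rate(bandwidth, p_i, h_i, interference, sigma):
    """r_i = B * log2(1 + p_i*h_i / (I_i + sigma)), with I_i = sum(p_k*h_k)
    over the other user equipment sharing the channel (step S3.3)."""
    return bandwidth * math.log2(1 + p_i * h_i / (interference + sigma))

def mec_delay(d_bits, rate, c_cycles, f_mec):
    """Upload time D_i/r_i plus MEC execution time C_i/f_i^m (step S3.6);
    the much smaller result-return time is ignored, as the description notes."""
    return d_bits / rate + c_cycles / f_mec

def mec_energy(p_i, d_bits, rate):
    """Device-side energy is spent on the uplink only: p_i * D_i / r_i (S3.7)."""
    return p_i * d_bits / rate
```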
Specifically, in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the cloud server for execution comprises:
step S3.8, calculating the cloud computing resource allocation f_i^c, where c_c is the unit computing-resource cost on the cloud server;
step S3.9, calculating the cloud computing delay t_i^c = D_i/r_i + v·(D_i + D_i^b) + C_i/f_i^c, where D_i^b is the result data quantity computed by the cloud server and v is the unit data-transmission delay between the base station and the cloud server;
step S3.10, calculating the cloud server energy consumption; the cloud server's energy model is the same as the MEC server's and comes from uplink transmission only, expressed as E_i^c = p_i·D_i/r_i.
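Steps S3.8 to S3.10 add a backhaul leg between the base station and the cloud. The sketch below assumes the cloud delay is the sum of uplink time, per-bit backhaul transfer of both the input D_i and the result D_i^b, and cloud execution time — a plausible reading, since the exact formula images are not reproduced in the source.

```python
def cloud_delay(d_bits, rate, v, c_cycles, f_cloud, d_result_bits):
    """t_i^c: uplink time D_i/r_i, backhaul time v*(D_i + D_i^b) at v seconds
    per bit between base station and cloud, plus execution time C_i/f_i^c."""
    return d_bits / rate + v * (d_bits + d_result_bits) + c_cycles / f_cloud

def cloud_energy(p_i, d_bits, rate):
    """Same device-side model as the MEC case: uplink energy p_i * D_i / r_i."""
    return p_i * d_bits / rate
```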
And S4, training the DQN model by using the calculated time delay and power consumption, and determining a task unloading strategy and resource allocation by using the trained DQN model. Specifically, step S4 includes:
step S4.1, calculating the reward function R = −(1/N)·Σ_{i=1}^{N}(μ_t·t_i + μ_e·E_i + μ_f·f_i), where N represents the number of user equipment, μ_t, μ_e, μ_f are respectively the time-delay, energy-consumption, and resource-allocation weights, t_i represents the time delay of user equipment i, E_i represents the power consumption of user equipment i, and f_i represents the resource allocation of user equipment i;
step S4.2, updating the loss function of the DQN model as Loss(θ) = λ·[R_τ + γ^n·max_{A'} Q(S_{τ+n}, A'; θ̄) − Q(S_τ, A_τ; θ)]², where n represents the number of state transitions per weight update, R_τ is the reward, γ is the discount factor, A' is the action yielding the optimal reward in state S_{τ+n}, λ is the priority coefficient, θ̄ are the target-network parameters, θ are the network parameters, S_τ is the system state, and A_τ is the system action;
Step S4.3, when storing the updated transition in the experience pool, prioritized experience replay is adopted, and the sampling weight of the current item is determined from the loss function as P_τ = Loss_τ^w / Σ_j Loss_j^w, where w is the priority factor of prioritized experience replay;
And step S4.4, the MEC server computes the user equipment priorities through the DQN model to obtain a weight vector, and selects the offloading strategy and resource configuration based on that weight vector.
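The step-S4 quantities can be sketched as follows. The negative weighted-cost reward and the loss-proportional sampling probability are the standard choices for this kind of DQN formulation and are assumed here, since the patent's formula images are not reproduced in this text.

```python
def reward(delays, energies, allocs, mu_t, mu_e, mu_f):
    """R = -(1/N) * sum(mu_t*t_i + mu_e*E_i + mu_f*f_i): lower delay,
    energy, and resource use yield a higher reward (step S4.1)."""
    n = len(delays)
    return -sum(mu_t * t + mu_e * e + mu_f * f
                for t, e, f in zip(delays, energies, allocs)) / n

def sampling_probs(losses, w):
    """Prioritized experience replay (step S4.3): sample a stored transition
    with probability proportional to its loss raised to the priority factor w."""
    scaled = [x ** w for x in losses]
    total = sum(scaled)
    return [s / total for s in scaled]
```

With w = 1 the transition with the larger loss is sampled proportionally more often, which is the intended effect of prioritized replay.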
The invention takes into account factors such as time delay, energy consumption, and resource allocation, and the results show that: the MEC server resource is reasonably configured at 30 GHz; processing urgent computation tasks with priority and optimally allocating computing resources yields low delay and energy consumption; and the proposed cloud-edge collaborative offloading strategy and resource allocation optimization outperforms the comparison models in delay, energy consumption, and energy efficiency, realizing cloud-edge collaborative offloading and resource allocation optimization more effectively across scenarios with different numbers of user equipment and computation task volumes.
The preferred embodiment of the invention also discloses an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the steps of the method described in the above embodiments when executing the program.
The preferred embodiment of the present invention also discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described in the above embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are merely preferred embodiments for fully explaining the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions and modifications will occur to those skilled in the art based on the present invention, and are intended to be within the scope of the present invention. The protection scope of the invention is subject to the claims.

Claims (7)

1. An electric power Internet of Things offloading strategy and resource allocation optimization method based on the DQN algorithm, characterized by comprising the following steps:
S1, constructing a DQN model, and setting the parameters required by the DQN model and the device parameters; the devices comprise user equipment, an MEC server, and a cloud server;
S2, initializing the DQN model so that tasks may be executed at the user equipment, at the MEC server, or at the cloud server;
S3, for each user equipment, calculating the time delay and power consumption when its computation task is executed locally at the user equipment, offloaded to the MEC server, or offloaded to the cloud server;
S4, training the DQN model with the calculated time delay and power consumption, and determining the task offloading strategy and resource allocation with the trained DQN model;
in step S3, calculating the time delay and power consumption when each user equipment's computation task is executed locally at the user equipment comprises:
S3.1, calculating the execution delay of user equipment i, t_i^l = C_i / f_i^l, where C_i represents the computational load of user equipment i in CPU cycles and f_i^l is the CPU frequency of user equipment i in GHz;
S3.2, calculating the energy consumption of user equipment i when executing the computation task, E_i^l = κ·C_i·(f_i^l)², where κ is the energy coefficient of the chip architecture;
in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the MEC server for execution comprises:
S3.3, taking non-orthogonal multiple access as the uplink multiple-access scheme to meet 5G connection requirements, calculating the interference I_i = Σ_{k≠i} p_k·h_k and the uplink transmission rate r_i = B·log₂(1 + p_i·h_i/(I_i + σ)), where B is the base-station channel bandwidth, p_i is the uplink transmit power of user equipment i, h_i is the channel gain between user equipment i and the base station, I_i is the interference caused to user equipment i by the other user equipment in the channel, σ is the noise power, p_k is the uplink transmit power of the other user equipment, and h_k is their channel gain to the base station;
S3.4, calculating the priority coefficient λ_i from q_i, the urgency of user equipment i, and q_th, the urgency threshold of the user equipment;
S3.5, calculating the computing resource allocation f_i^m when user equipment i offloads its computation task to the MEC server, taking into account a delay preference factor, T_i, the maximum delay allowed for the computation task, an energy-consumption preference factor, a computing-resource preference factor, η, the Lagrangian multiplier, and c_m, the unit computing-resource cost on the MEC server;
S3.6, calculating the MEC server execution delay t_i^m = D_i/r_i + C_i/f_i^m, where D_i represents the amount of data transmitted by user equipment i to the computing server in bits;
S3.7, calculating the MEC server energy consumption E_i^m = p_i·D_i/r_i;
in step S3, calculating the time delay and power consumption when each user equipment's computation task is offloaded to the cloud server for execution comprises:
S3.8, calculating the cloud computing resource allocation f_i^c, where c_c is the unit computing-resource cost on the cloud server;
S3.9, calculating the cloud computing delay t_i^c = D_i/r_i + v·(D_i + D_i^b) + C_i/f_i^c, where D_i^b is the result data quantity computed by the cloud server and v is the unit data-transmission delay between the base station and the cloud server;
S3.10, calculating the cloud server energy consumption; the cloud server's energy model is the same as the MEC server's and comes from uplink transmission only, expressed as E_i^c = p_i·D_i/r_i.
2. The DQN-algorithm-based electric power Internet of Things offloading strategy and resource allocation optimization method of claim 1, wherein the parameters required by the DQN model include the learning rate, greedy coefficient, discount factor, memory bank size, and batch learning size.
3. The DQN algorithm-based power internet of things offloading policy and resource configuration optimization method of claim 1, wherein the device parameters include respective device processor frequencies, computational resource costs, communication bandwidth, energy coefficients, task data size, uplink transmit power, noise power, application completion time deadlines, latency, power consumption, resource allocation weights.
4. The DQN algorithm-based power internet of things offloading policy and resource allocation optimization method of claim 1, wherein in step S2, a task is performed at the user equipment and denoted by l, a task is performed at the MEC server and denoted by m, and a task is performed at the cloud server and denoted by c.
5. The DQN algorithm-based power internet of things offloading policy and resource configuration optimization method of claim 1, wherein step S4 comprises:
S4.1, calculating the reward function R = −(1/N)·Σ_{i=1}^{N}(μ_t·t_i + μ_e·E_i + μ_f·f_i), where N represents the number of user equipment, μ_t, μ_e, μ_f are respectively the time-delay, energy-consumption, and resource-allocation weights, t_i represents the time delay of user equipment i, E_i represents the power consumption of user equipment i, and f_i represents the resource allocation of user equipment i;
S4.2, updating the loss function of the DQN model as Loss(θ) = λ·[R_τ + γ^n·max_{A'} Q(S_{τ+n}, A'; θ̄) − Q(S_τ, A_τ; θ)]², where n represents the number of state transitions per weight update, R_τ is the reward, γ is the discount factor, A' is the action yielding the optimal reward in state S_{τ+n}, λ is the priority coefficient, θ̄ are the target-network parameters, θ are the network parameters, S_τ is the system state, and A_τ is the system action;
S4.3, when storing the updated transition in the experience pool, prioritized experience replay is adopted, and the sampling weight of the current item is determined from the loss function as P_τ = Loss_τ^w / Σ_j Loss_j^w, where w is the priority factor of prioritized experience replay;
S4.4, the MEC server computes the user equipment priorities through the DQN model to obtain a weight vector, and selects the offloading strategy and resource configuration based on that weight vector.
6. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1-5 when the program is executed.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-5.
CN202211116361.3A 2022-09-14 2022-09-14 Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm Active CN115473896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211116361.3A CN115473896B (en) 2022-09-14 2022-09-14 Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm


Publications (2)

Publication Number Publication Date
CN115473896A CN115473896A (en) 2022-12-13
CN115473896B true CN115473896B (en) 2024-06-25

Family

ID=84332779


Country Status (1)

Country Link
CN (1) CN115473896B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117155798B (en) * 2023-03-13 2024-03-01 中国科学院沈阳自动化研究所 Cloud-edge collaborative real-time scheduling method oriented to resource limitation
CN116582873B (en) * 2023-07-13 2023-09-08 湖南省通信建设有限公司 System for optimizing offloading tasks through 5G network algorithm to reduce delay and energy consumption

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010282A (en) * 2021-03-03 2021-06-22 电子科技大学 Edge cloud collaborative serial task unloading method based on deep reinforcement learning
CN114189892A (en) * 2021-12-15 2022-03-15 北京工业大学 Cloud-edge collaborative Internet of things system resource allocation method based on block chain and collective reinforcement learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11427215B2 (en) * 2020-07-31 2022-08-30 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for generating a task offloading strategy for a vehicular edge-computing environment
CN113950066B (en) * 2021-09-10 2023-01-17 西安电子科技大学 Single server part calculation unloading method, system and equipment under mobile edge environment



Similar Documents

Publication Publication Date Title
CN115473896B (en) Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm
CN110928654B (en) Distributed online task unloading scheduling method in edge computing system
Liu et al. Latency and reliability-aware task offloading and resource allocation for mobile edge computing
Yu et al. Computation offloading for mobile edge computing: A deep learning approach
CN111405568B (en) Computing unloading and resource allocation method and device based on Q learning
CN111148134B (en) Multi-user multi-task unloading method based on mobile edge calculation
CN110798849A (en) Computing resource allocation and task unloading method for ultra-dense network edge computing
CN111918339B (en) AR task unloading and resource allocation method based on reinforcement learning in mobile edge network
Ti et al. Computation offloading leveraging computing resources from edge cloud and mobile peers
CN111524034B (en) High-reliability low-time-delay low-energy-consumption power inspection system and inspection method
CN113220356B (en) User computing task unloading method in mobile edge computing
CN113590279B (en) Task scheduling and resource allocation method for multi-core edge computing server
CN112181655A (en) Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
CN109743713B (en) Resource allocation method and device for electric power Internet of things system
CN114697333A (en) Edge calculation method for energy queue equalization
CN110780986B (en) Internet of things task scheduling method and system based on mobile edge computing
CN114025359B (en) Resource allocation and calculation unloading method, system, equipment and medium based on deep reinforcement learning
CN113747507B (en) 5G ultra-dense network-oriented computing resource management method and device
Di Pietro et al. An optimal low-complexity policy for cache-aided computation offloading
CN112423320A (en) Multi-user computing unloading method based on QoS and user behavior prediction
CN110536308A (en) A kind of multinode calculating discharging method based on game
CN116390160A (en) Cloud edge cooperation-based power task unloading method, device, equipment and medium
CN115278784A (en) Method and system for joint optimization of task bandwidth and power of wireless user based on MEC
CN114615705A (en) Single user resource allocation strategy method based on 5G network
Li et al. Real-time optimal resource allocation in multiuser mobile edge computing in digital twin applications with deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant