CN109803292B - Multi-level user mobile edge computing method based on reinforcement learning - Google Patents

Multi-level user mobile edge computing method based on reinforcement learning

Info

Publication number
CN109803292B
Authority
CN
China
Prior art keywords
secondary user
edge server
value
server
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811597091.6A
Other languages
Chinese (zh)
Other versions
CN109803292A (en)
Inventor
葛颂阳
肖亮
龚杰
陈翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Shunde Sun Yat-Sen University Research Institute
Sun Yat Sen University
SYSU CMU Shunde International Joint Research Institute
Original Assignee
Foshan Shunde Sun Yat-Sen University Research Institute
Sun Yat Sen University
SYSU CMU Shunde International Joint Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Shunde Sun Yat-Sen University Research Institute, Sun Yat Sen University, SYSU CMU Shunde International Joint Research Institute filed Critical Foshan Shunde Sun Yat-Sen University Research Institute
Priority to CN201811597091.6A priority Critical patent/CN109803292B/en
Publication of CN109803292A publication Critical patent/CN109803292A/en
Application granted granted Critical
Publication of CN109803292B publication Critical patent/CN109803292B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a multi-level user mobile edge computing method based on reinforcement learning. The method is built on prioritized sweeping under the Dyna structure, is suited to mobile edge computing wireless communication environments with scarce spectrum resources and strict delay and energy-consumption requirements, and belongs to the field of wireless communication. The method mainly comprises four steps: first, the primary user selects an edge server; second, each secondary user applies to the control center to occupy an edge server; then the control center processes the applications and allocates edge servers, and each secondary user performs partial computation offloading or fully local computation; finally, the utility of each secondary user is calculated. The method applies reinforcement learning to the mobile edge computing wireless communication network, combining the model-free advantage of Q learning with the fast updating of prioritized sweeping, so that the delay and energy-consumption requirements of each secondary user are met while the overall utility of the system is guaranteed and resource utilization is improved.

Description

Multi-level user mobile edge computing method based on reinforcement learning
Technical Field
The invention relates to the field of wireless communication, and in particular to a multi-level user mobile edge computing method based on reinforcement learning, intended for mobile edge computing wireless communication environments with scarce spectrum resources and strict delay and energy-consumption requirements.
Background
The traditional prioritized sweeping method is only suitable for deterministic environments and cannot obtain ideal results in unknown environments. The conventional model-free Q learning method, on the other hand, updates all state-action values and recomputes them many times, which is time-consuming, so the optimal strategy for selecting an edge server and determining the mobile edge offloading task amount cannot be obtained quickly. To improve the spectrum resource utilization of the whole system, researchers have proposed methods such as spectrum sharing and spectrum resource allocation based on model assumptions and known communication environments, but these methods cannot simultaneously account for the utility of each secondary user and the convergence speed toward the optimal edge server selection strategy.
Disclosure of Invention
The invention aims to solve the problem that traditional mobile edge computing methods cannot simultaneously satisfy user delay and energy-consumption requirements, system utility, and fast convergence to the optimal strategy, and provides a multi-level user mobile edge computing method based on prioritized sweeping under the Dyna structure in reinforcement learning.
To achieve this purpose, the technical solution provided by the invention is as follows:
The multi-level user mobile edge computing method based on reinforcement learning comprises the following steps:
S1, initializing the system parameters: the number of primary users N_P, the number of secondary users N_S, the number of edge servers and control nodes N_M, the transmit power P of the secondary users, the task amount Task of each secondary user, the channel capacity C of the channel between each secondary user and each edge server, and the communication urgency Em, which is set to zero; initializing the method parameters: the state of the channel corresponding to each edge server is initialized to unoccupied (value zero), all Q values are initialized to zero, the learning rate is α, the discount factor is δ, the priority function value is zero, the priority threshold is θ, and the priority queue is empty; the iteration then starts;
S2, the primary user selects and occupies edge server M_P, and the state value of that server is set to 1;
S3, each secondary user i puts forward an application M_i1 to occupy edge server resources according to an ε-greedy strategy and determines the computation offloading task amount x;
S4, the control center processes the application of each secondary user and allocates an edge server to it;
S5, secondary users that obtain edge server resources perform computation offloading, while secondary users that do not obtain occupation rights perform fully local computation;
S6, the current utility of each secondary user is calculated as the immediate reward, and the immediate reward and the successfully connected edge server are written into the priority model;
S7, the communication urgency, the Q value and the priority function are updated; if the priority function exceeds the threshold θ, the state-action pair is added to the priority queue, and the corresponding Q values are updated in priority order;
S8, judge whether the iteration termination condition is met; if so, calculate the average utility of each secondary user over the whole run; if not, return to step S2.
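For illustration only, the following minimal Python sketch fixes the order of steps S1 to S8 as one iteration loop. The function and variable names, the use of caller-supplied callables for the individual steps, and the final averaging are assumptions of this sketch and are not part of the claimed method:

    import numpy as np

    def run_iterations(num_slots, select_requests, allocate, compute_utilities, learn):
        # Hypothetical skeleton of steps S1-S8; the four callables stand in for the
        # sub-procedures detailed below and are supplied by the caller.
        per_slot_utilities = []
        for t in range(num_slots):
            requests = select_requests(t)              # S2-S3: after the primary user occupies M_P,
                                                       # each secondary user applies for a server and x
            allocation = allocate(requests)            # S4: the control center assigns edge servers
            utilities = compute_utilities(allocation)  # S5-S6: partial offloading or fully local compute
            learn(allocation, utilities)               # S7: urgency, Q-value and priority updates
            per_slot_utilities.append(utilities)
        return np.mean(per_slot_utilities, axis=0)     # S8: average utility of each secondary user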
Preferably, the step S4 specifically includes:
S41, the control center processes the application of each secondary user;
S42, if the channel state corresponding to an edge server is 1, the secondary users applying for that server cannot obtain the right of use, and the process jumps to step S44;
S43, if an edge server is applied for only once, it is occupied by the applying secondary user, i.e. the originally requested server is occupied successfully;
S44, if an edge server is applied for two or more times, the applicants are sorted by communication urgency, and the secondary user with the higher communication urgency preferentially obtains the right to use that edge server; the others then occupy the resources of servers they did not originally apply for;
S45, secondary users that have not obtained the right to use an edge server are randomly allocated edge servers that were not applied for, so that either all edge servers are occupied or every secondary user obtains server resources; if all edge servers are occupied and some secondary users still have no server resources, those secondary users can only perform fully local computation.
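A minimal sketch of the allocation logic of steps S41 to S45 is given below, assuming the applications are represented as a mapping from secondary-user id to requested server id; the function name, the data structures and the random assignment via Python's random module are illustrative assumptions, not details taken from the patent:

    import random

    def allocate_servers(requests, all_servers, occupied, urgency, rng=random):
        # Sketch of S41-S45.
        # requests   : {user_id: requested_server_id} from the epsilon-greedy step S3
        # all_servers: iterable of all edge server ids
        # occupied   : set of servers whose channel state is already 1 (e.g. the primary user's)
        # urgency    : {user_id: communication urgency Em}
        # Returns {user_id: server_id or None}; None means fully local computation.
        allocation = {user: None for user in requests}
        taken = set(occupied)

        # S42-S44: group applicants per free server; a single applicant wins directly,
        # otherwise the applicant with the highest communication urgency wins.
        per_server = {}
        for user, server in requests.items():
            if server not in taken:                        # S42: occupied servers are skipped
                per_server.setdefault(server, []).append(user)
        for server, users in per_server.items():
            winner = max(users, key=lambda u: urgency[u])  # S43/S44
            allocation[winner] = server
            taken.add(server)

        # S45: users still without a server are randomly given servers nobody applied for,
        # until either all servers are taken or every user is served.
        losers = [u for u, s in allocation.items() if s is None]
        free = [s for s in all_servers if s not in taken]
        rng.shuffle(losers)
        for user, server in zip(losers, free):
            allocation[user] = server
        return allocation   # users left with None can only compute fully locally

For example, with three servers, server 2 already occupied by the primary user and both secondary users applying for server 0, the user with the higher communication urgency keeps server 0 and the other is moved to the unrequested free server 1.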
Preferably, the step S6 specifically includes:
S61, the utility of each secondary user is determined by two parts, the computation delay and the computation energy consumption, both of which are inversely related to the utility;
S62, the utility value of a secondary user performing local computation depends on the local computation delay and the local computation energy consumption; the utility value of a secondary user performing partial computation offloading depends on the local computation delay, the local computation energy consumption, the offloading delay and the offloading energy consumption.
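One way to express S61 and S62 numerically is sketched below, assuming the utility is the negative weighted sum of delay and energy consumption (so both are inversely related to the utility) and that the local and offloaded parts of the task proceed in parallel; the weights, the linear rate model and the parameter names are assumptions of this sketch rather than values fixed by the patent:

    def secondary_user_utility(task, x, local_rate, local_power,
                               channel_capacity, tx_power,
                               w_delay=1.0, w_energy=1.0):
        # Sketch of S61-S62: x is the offloaded task amount (0 <= x <= task); x = 0 is fully local.
        local_delay = (task - x) / local_rate                   # local computation delay
        local_energy = local_power * local_delay                # local computation energy
        offload_delay = x / channel_capacity if x > 0 else 0.0  # offloading (transmission) delay
        offload_energy = tx_power * offload_delay               # offloading energy at transmit power P
        total_delay = max(local_delay, offload_delay)           # assumed: both parts overlap in time
        total_energy = local_energy + offload_energy
        return -(w_delay * total_delay + w_energy * total_energy)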
Preferably, the step S7 specifically includes:
S71, the communication urgency is updated according to whether the secondary user successfully obtains edge server resources: if the secondary user successfully occupies the originally requested server, the communication urgency remains unchanged; if the secondary user can only use an edge server it did not originally apply for, Em is increased by 1; if the secondary user obtains no server resources and can only compute fully locally, Em is increased by 2;
S72, the Q value is updated by adding to the current Q value, scaled by the learning rate α, the prediction error, i.e. the difference between the learned discounted future return and the current Q value; the priority function is updated by taking the maximum of the prediction error and the current priority function value as the new priority function value.
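Expressed as code, S72 reads as follows; the two-dimensional table layout (arrays indexed by state and action) and the variable names are illustrative assumptions:

    import numpy as np

    def update_q_and_priority(Q, priority, s, a, reward, s_next, alpha=0.8, delta=0.02):
        # Sketch of S72: the prediction error is the gap between the discounted future
        # return target and the current Q value; the Q value absorbs a fraction alpha of
        # that error, and the priority becomes the larger of |error| and its current value.
        prediction_error = reward + delta * np.max(Q[s_next]) - Q[s, a]
        Q[s, a] += alpha * prediction_error
        priority[s, a] = max(abs(prediction_error), priority[s, a])
        return prediction_error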
Compared with the prior art, the invention has the beneficial effects that:
1. Compared with most traditional mobile edge computing methods, the reinforcement-learning-based multi-level user mobile edge computing method disclosed by the invention adapts to time-varying environments, improves utility and converges quickly, greatly improves the utilization of system resources, and satisfies the delay and energy-consumption requirements of the secondary users while meeting the resource demands of the primary users; it is therefore suited to mobile edge computing wireless environments with scarce spectrum resources and strict delay and energy-consumption requirements.
2. Compared with most spectrum resource allocation methods in which the edge server is chosen by plain Q learning or at random, the reinforcement-learning-based multi-level user mobile edge computing method disclosed by the invention converges to the optimal strategy at a higher speed.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flowchart of the multi-level user mobile edge computing method based on reinforcement learning according to the present invention;
FIG. 2 is a flowchart of the steps by which the control center processes the secondary users' applications;
FIG. 3 is a flowchart of the steps for calculating a secondary user's utility value in the present invention;
FIG. 4 is a flowchart of the steps for updating the communication urgency, Q function and priority function in the present invention;
FIG. 5(a) is a graph comparing the utility of secondary user 1 under simple Q learning and under the method of the present invention;
FIG. 5(b) is a graph comparing the utility of secondary user 2 under simple Q learning and under the method of the present invention;
FIG. 5(c) is a graph comparing the energy consumption of secondary user 1 under simple Q learning and under the method of the present invention;
FIG. 5(d) is a graph comparing the energy consumption of secondary user 2 under simple Q learning and under the method of the present invention;
FIG. 5(e) is a graph comparing the time delay of secondary user 1 under simple Q learning and under the method of the present invention;
FIG. 5(f) is a graph comparing the time delay of secondary user 2 under simple Q learning and under the method of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Example one
This embodiment describes a multi-level user mobile edge computing method based on reinforcement learning. The processing flow of the method comprises the following steps:
S1, initializing the system parameters: the number of primary users N_P, the number of secondary users N_S, the number of edge servers and control nodes N_M, the transmit power P of the secondary users, the task amount Task of each secondary user, the channel capacity C of the channel between each secondary user and each edge server, and the communication urgency Em, which is set to zero; initializing the method parameters: the state of the channel corresponding to each edge server is initialized to unoccupied (value zero), all Q values are initialized to zero, the learning rate is α, the discount factor is δ, the priority function value is zero, the priority threshold is θ, and the priority queue is empty; the iteration then starts;
S2, the primary user selects and occupies edge server M_P, and the state value of that server is set to 1;
S3, each secondary user i puts forward an application M_i1 to occupy edge server resources according to an ε-greedy strategy and determines the computation offloading task amount x;
S4, the control center processes the application of each secondary user and allocates an edge server to it;
S41, secondary users that have not obtained the right to use an edge server are randomly allocated edge servers that were not applied for, so that either all edge servers are occupied or every secondary user obtains server resources; if all edge servers are occupied and some secondary users still have no server resources, those secondary users can only perform fully local computation.
S42, secondary users that obtain edge server resources perform computation offloading, while secondary users that do not obtain occupation rights perform fully local computation;
S6, the current utility of each secondary user is calculated as the immediate reward, and the immediate reward and the successfully connected edge server are written into the priority model; the utility value of a secondary user performing local computation depends on the local computation delay and the local computation energy consumption, while the utility value of a secondary user performing partial computation offloading depends on the local computation delay, the local computation energy consumption, the offloading delay and the offloading energy consumption.
S7, the communication urgency is updated: if the secondary user successfully occupies the originally requested server, the communication urgency remains unchanged; if the secondary user can only use an edge server it did not originally apply for, Em is increased by 1; if the secondary user obtains no server resources and can only compute fully locally, Em is increased by 2. The Q value is updated by adding to the current Q value, scaled by the learning rate α, the prediction error, i.e. the difference between the learned discounted future return and the current Q value. The maximum of the prediction error and the current priority function value is taken as the new priority function value; if the priority function exceeds the threshold θ, the state-action pair is added to the priority queue, and the corresponding Q values are updated in priority order;
S8, the iteration is set to 1000 time slots and 500 independent experiments. Whether the iteration termination condition is met is judged; if so, the average utility of each secondary user over the whole run is calculated; if not, the process returns to step S2.
The key of the mobile edge computing method is the use of reinforcement learning, which lets users adapt to a changing environment while selecting a mobile edge server. By using the prioritized sweeping algorithm under the Dyna structure in reinforcement learning, which combines the advantage of prioritized sweeping (preferentially updating states with higher priority) with the advantage of Q learning (gaining experience through interaction with the environment), the optimal mobile edge server selection strategy is obtained while the users' delay and energy-consumption requirements are met.
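As a concrete illustration of the prioritized sweeping step under the Dyna structure, the sketch below replays the highest-priority state-action pairs from a learned model and re-inserts predecessors whose error exceeds the threshold θ; the model representation, the predecessor search and the per-slot update budget are standard Dyna-style prioritized sweeping choices assumed here, not details spelled out in the patent:

    import heapq
    import numpy as np

    def sweep_priority_queue(Q, priority, model, queue,
                             alpha=0.8, delta=0.02, theta=0.15, max_updates=20):
        # Sketch of the priority-queue replay.
        # model : {(state, action): (reward, next_state)} learned from real interactions
        # queue : heap of (-priority, (state, action)) entries above the threshold theta
        for _ in range(max_updates):
            if not queue:
                break
            _, (s, a) = heapq.heappop(queue)                  # highest-priority pair first
            reward, s_next = model[(s, a)]
            error = reward + delta * np.max(Q[s_next]) - Q[s, a]
            Q[s, a] += alpha * error                          # replayed Q-learning update
            priority[s, a] = 0.0
            # Predecessors leading into s whose predicted error now exceeds theta
            # are (re)inserted so that high-priority states are updated first.
            for (sp, ap), (rp, snp) in model.items():
                if snp == s:
                    err = abs(rp + delta * np.max(Q[s]) - Q[sp, ap])
                    if err > theta:
                        priority[sp, ap] = max(priority[sp, ap], err)
                        heapq.heappush(queue, (-err, (sp, ap)))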
Example two
This embodiment of the invention is described in detail with reference to FIGS. 1 to 5, using a specific mobile edge computing scenario with two secondary users.
Consider the following system model: during mobile edge computing, 1 primary user and 2 secondary users choose among 3 edge servers for offloading tasks. The task amount of secondary user 1 is Task_1, the task amount of secondary user 2 is Task_2, the channel capacities between the two secondary users and the three edge servers form the matrix C, and the communication urgency is zero. The channel states corresponding to the edge servers are all zero, i.e. unoccupied. The Q values are zero, the learning rate is 0.8, the discount factor is 0.02, the priority function value is zero, the priority threshold is 0.15, and the priority queue is empty; the iteration then starts.
In each time slot, the primary user first occupies an edge server, setting the state of the corresponding channel to 1; each secondary user then uses an ε-greedy strategy to apply for an edge server and determines the offloading amount of its computation task. The control center processes each secondary user's application and allocates edge servers so that either all edge servers are occupied or every secondary user obtains server resources; if all edge servers are occupied and some secondary users still have no server resources, those secondary users can only perform fully local computation.
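The ε-greedy application step of a single secondary user can be sketched as follows, flattening the (edge server, offload amount) pair into one discrete action index; the value of ε and the discretisation of the offload amount x are assumptions of this sketch:

    import numpy as np

    def epsilon_greedy_request(q_row, num_servers, num_offload_levels,
                               epsilon=0.1, rng=None):
        # q_row: Q values of the current state, one entry per (server, offload level) pair.
        rng = rng or np.random.default_rng()
        num_actions = num_servers * num_offload_levels
        if rng.random() < epsilon:
            action = int(rng.integers(num_actions))       # explore: random (server, x) pair
        else:
            action = int(np.argmax(q_row[:num_actions]))  # exploit the current Q estimates
        server = action // num_offload_levels             # which edge server to apply for
        offload_level = action % num_offload_levels       # index of the offload amount x
        return server, offload_level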
If local computation is performed, the local delay and energy consumption are calculated; if computation offloading is performed, two parts are calculated: the delay and energy consumption of the local computation, and the delay and energy consumption of the offloaded computation. The utility value, Q value and priority function value are then updated.
In each time slot, the two secondary users, each influenced by the other's edge server selection, aim to maximize their utility values. The iteration terminates after 300 time slots, repeated over 700 independent experiments.
The simulation result in FIG. 5(a) shows that, in terms of utility value for secondary user 1, the reinforcement-learning-based mobile edge computing method proposed by the invention converges about 50% faster than simple Q learning. FIG. 5(b) shows that for secondary user 2 the method converges about 30% faster in utility value than simple Q learning. As shown in FIGS. 5(c) and 5(d), the proposed method achieves lower energy consumption for both secondary users than simple Q learning, and FIGS. 5(e) and 5(f) show that it also performs better in terms of computation delay.
In summary, compared with simple Q learning, the proposed method converges faster in terms of utility value, computation energy consumption and computation delay, while leaving the optimal value unchanged and balancing the performance of the two secondary users.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any changes, modifications, substitutions, combinations and simplifications made without departing from the spirit and principle of the present invention are equivalent replacements and fall within the scope of protection of the present invention.

Claims (3)

1. A multi-level user mobile edge computing method based on reinforcement learning, characterized by comprising the following steps:
S1, initializing the system parameters: the number of primary users N_P, the number of secondary users N_S, the number of edge servers and control nodes N_M, the transmit power P of the secondary users, the task amount Task of each secondary user, the channel capacity C of the channel between each secondary user and each edge server, and the communication urgency Em, which is set to zero; initializing the method parameters: the state of the channel corresponding to each edge server is initialized to unoccupied (value zero), all Q values are initialized to zero, the learning rate is α, the discount factor is δ, the priority function value is zero, the priority threshold is θ, and the priority queue is empty; the iteration then starts;
S2, the primary user selects and occupies edge server M_P, and the state value of that server is set to 1;
S3, each secondary user i puts forward an application M_i1 to occupy edge server resources according to an ε-greedy strategy and determines the computation offloading task amount x;
S4, the control center processes the application of each secondary user and allocates an edge server to it;
S5, secondary users that obtain edge server resources perform computation offloading, while secondary users that do not obtain occupation rights perform fully local computation;
S6, the current utility of each secondary user is calculated as the immediate reward, and the immediate reward and the successfully connected edge server are written into the priority model;
S7, the communication urgency, the Q value and the priority function are updated; if the priority function exceeds the threshold θ, the state-action pair is added to the priority queue, and the corresponding Q values are updated in priority order;
S8, judge whether the iteration termination condition is met; if so, calculate the average utility of each secondary user over the whole run; if not, return to step S2;
the step S7 specifically includes:
S71, the communication urgency is updated according to whether the secondary user successfully obtains edge server resources: if the secondary user successfully occupies the originally requested server, the communication urgency remains unchanged; if the secondary user can only use an edge server it did not originally apply for, Em is increased by 1; if the secondary user obtains no server resources and can only compute fully locally, Em is increased by 2;
S72, the Q value is updated by adding to the current Q value, scaled by the learning rate α, the prediction error, i.e. the difference between the learned discounted future return and the current Q value; the priority function is updated by taking the maximum of the prediction error and the current priority function value as the new priority function value.
2. The multi-level user mobile edge computing method based on reinforcement learning of claim 1, wherein the step S4 specifically includes:
S41, the control center processes the application of each secondary user;
S42, if the channel state corresponding to an edge server is 1, the secondary users applying for that server cannot obtain the right of use, and the process jumps to step S44;
S43, if an edge server is applied for only once, it is occupied by the applying secondary user, i.e. the originally requested server is occupied successfully;
S44, if an edge server is applied for two or more times, the applicants are sorted by communication urgency, and the secondary user with the higher communication urgency preferentially obtains the right to use that edge server; the others then occupy the resources of servers they did not originally apply for;
S45, secondary users that have not obtained the right to use an edge server are randomly allocated edge servers that were not applied for, so that either all edge servers are occupied or every secondary user obtains server resources; if all edge servers are occupied and some secondary users still have no server resources, those secondary users can only perform fully local computation.
3. The multi-level user mobile edge computing method based on reinforcement learning of claim 1, wherein the step S6 specifically includes:
S61, the utility of each secondary user is determined by two parts, the computation delay and the computation energy consumption, both of which are inversely related to the utility;
S62, the utility value of a secondary user performing local computation depends on the local computation delay and the local computation energy consumption; the utility value of a secondary user performing partial computation offloading depends on the local computation delay, the local computation energy consumption, the offloading delay and the offloading energy consumption.
CN201811597091.6A 2018-12-26 2018-12-26 Multi-level user mobile edge computing method based on reinforcement learning Active CN109803292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811597091.6A CN109803292B (en) 2018-12-26 2018-12-26 Multi-level user mobile edge computing method based on reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811597091.6A CN109803292B (en) 2018-12-26 2018-12-26 Multi-level user mobile edge computing method based on reinforcement learning

Publications (2)

Publication Number Publication Date
CN109803292A CN109803292A (en) 2019-05-24
CN109803292B (en) 2022-03-04

Family

ID=66557603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811597091.6A Active CN109803292B (en) 2018-12-26 2018-12-26 Multi-level user mobile edge computing method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN109803292B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347495B (en) * 2019-07-24 2023-04-28 张�成 Task migration method for performing mobile edge calculation by using deep reinforcement learning
CN110798849A (en) * 2019-10-10 2020-02-14 西北工业大学 Computing resource allocation and task unloading method for ultra-dense network edge computing
CN111400031B (en) * 2020-03-01 2023-08-22 南京大学 Value function-based reinforcement learning method for processing unit deployment
CN113709201B (en) * 2020-05-22 2023-05-23 华为技术有限公司 Method and communication device for computing offloading
CN112512056B (en) * 2020-11-14 2022-10-18 北京工业大学 Multi-objective optimization calculation unloading method in mobile edge calculation network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248693A (en) * 2013-05-03 2013-08-14 东南大学 Large-scale self-adaptive composite service optimization method based on multi-agent reinforced learning
CN108632860A (en) * 2018-04-17 2018-10-09 浙江工业大学 A kind of mobile edge calculations rate maximization approach based on deeply study
CN108924938A (en) * 2018-08-27 2018-11-30 南昌大学 A kind of resource allocation methods for wireless charging edge calculations network query function efficiency

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11009836B2 (en) * 2016-03-11 2021-05-18 University Of Chicago Apparatus and method for optimizing quantifiable behavior in configurable devices and systems

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248693A (en) * 2013-05-03 2013-08-14 东南大学 Large-scale self-adaptive composite service optimization method based on multi-agent reinforced learning
CN108632860A (en) * 2018-04-17 2018-10-09 浙江工业大学 A kind of mobile edge calculations rate maximization approach based on deeply study
CN108924938A (en) * 2018-08-27 2018-11-30 南昌大学 A kind of resource allocation methods for wireless charging edge calculations network query function efficiency

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An Air-Ground Integration Approach for Mobile Edge Computing in IoT"; Zhenyu Zhou et al.; IEEE Communications Magazine; 2018-08-14; full text *
"Learning-Based Rogue Edge Detection in VANETs with Ambient Radio Signals"; Xiaozhen Lu et al.; 2018 IEEE International Conference on Communications (ICC); 2018-07-31; full text *
"Security in Mobile Edge Caching with Reinforcement Learning"; Liang Xiao et al.; IEEE Wireless Communications; 2018-07-04; full text *

Also Published As

Publication number Publication date
CN109803292A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109803292B (en) Multi-level user mobile edge computing method based on reinforcement learning
CN111953759B (en) Collaborative computing task unloading and transferring method and device based on reinforcement learning
CN111953758B (en) Edge network computing unloading and task migration method and device
CN112286677B (en) Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN113296845B (en) Multi-cell task unloading algorithm based on deep reinforcement learning in edge computing environment
CN109600178B (en) Optimization method for energy consumption, time delay and minimization in edge calculation
CN111475274B (en) Cloud collaborative multi-task scheduling method and device
CN108416465B (en) Workflow optimization method in mobile cloud environment
CN113364859B (en) MEC-oriented joint computing resource allocation and unloading decision optimization method in Internet of vehicles
CN110233755B (en) Computing resource and frequency spectrum resource allocation method for fog computing in Internet of things
CN112272102B (en) Method and device for unloading and scheduling edge network service
CN113573363B (en) MEC calculation unloading and resource allocation method based on deep reinforcement learning
CN113993218A (en) Multi-agent DRL-based cooperative unloading and resource allocation method under MEC architecture
CN112732359A (en) Multi-user hybrid computing unloading method and device, electronic equipment and storage medium
CN114885418A (en) Joint optimization method, device and medium for task unloading and resource allocation in 5G ultra-dense network
CN113485826A (en) Load balancing method and system for edge server
CN104684095A (en) Resource allocation method based on genetic operation in heterogeneous network convergence scenes
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
CN113139639B (en) MOMBI-oriented smart city application multi-target computing migration method and device
Meng et al. Deep reinforcement learning based delay-sensitive task scheduling and resource management algorithm for multi-user mobile-edge computing systems
CN114980216B (en) Dependency task unloading system and method based on mobile edge calculation
CN110392377A (en) A kind of 5G super-intensive networking resources distribution method and device
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
CN115914230A (en) Adaptive mobile edge computing unloading and resource allocation method
CN113672372A (en) Multi-edge cooperative load balancing task scheduling method based on reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant