CN110430593B - Method for unloading tasks of edge computing user - Google Patents


Info

Publication number
CN110430593B
Authority
CN
China
Prior art keywords
user
task
unloading
base station
temperature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910747683.XA
Other languages
Chinese (zh)
Other versions
CN110430593A (en)
Inventor
胡洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910747683.XA
Publication of CN110430593A
Application granted
Publication of CN110430593B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/309 Measuring or estimating channel quality parameters
    • H04B17/336 Signal-to-interference ratio [SIR] or carrier-to-interference ratio [CIR]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/06 Testing, supervising or monitoring using simulated traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0231 Traffic management, e.g. flow control or congestion control based on communication conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02 Power saving arrangements
    • H04W52/0203 Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W52/0206 Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a task offloading method for edge computing users in the field of mobile communication. Task offloading from users to base stations in dense cellular networks is affected not only by environmental noise but also by co-channel interference. The invention considers user task offloading in a heterogeneous network in the presence of co-channel interference and establishes a communication model. It introduces the idea of estimating the initial temperature of the simulated annealing algorithm with a numerical estimation algorithm, using a variable-step numerical search to find an approximate initial temperature, which overcomes the excessive running time caused by a randomly chosen initial temperature. The simulated annealing algorithm then cyclically changes the users' target servers, the search over solutions is controlled by a cooling function combining a logarithm and a polynomial, and a task offloading scheme with the lowest average user offloading delay is sought. Compared with traditional methods, the method obtains an optimized scheme more quickly and is more practical.

Description

Method for unloading task of edge computing user
Technical Field
The invention belongs to the technical field of mobile communication, and relates to a task unloading method for an edge computing user.
Background
Network applications require ever more data and ever tighter real-time guarantees, and the traditional request mode in which a client connects directly to a cloud server struggles to meet the required Quality of Service (QoS). The main bottlenecks are the traffic pressure that large volumes of data place on the core-network backhaul link and the delay introduced by the excessive number of data transmission hops.
Mobile devices offer ever more intelligent services, which demand more power to support higher processing capability, and they have become indispensable in daily life, so they are used more and more frequently. Yet because mobile devices must remain light and portable, the batteries they carry are limited in size and weight and cannot provide long endurance, which limits the applications a mobile device can run.
The concept of edge computing has gradually matured: servers with computing capability, called MEC servers, are deployed at access base stations close to users, part of an enterprise's services can be moved from the core network to the edge servers, and when a mobile device requests a task it can first check whether the corresponding service is available on an edge server.
To make full use of the computing capacity of the edge servers and relieve the energy-consumption pressure on mobile devices, a mobile device can choose to upload its tasks to an MEC server for computation. The MEC server's high computing capacity shortens the processing time, but uploading the tasks takes time, and the queuing of multiple users' tasks on the MEC server also introduces delay, so a good task scheduling method is needed.
The simulated annealing algorithm is a randomized iterative algorithm for searching for an optimal solution. It can escape local optima, can be applied to nonlinear programming problems such as task scheduling, and can be improved in three ways: controlling the acceptance function for new solutions, selecting a suitable initial temperature, and adjusting the expression of the cooling function.
Disclosure of Invention
Based on the above background, the present invention aims to solve the scheduling problem between multiple user tasks and multiple base stations, and provides an improved task offloading algorithm for this nonlinear programming problem, comprising the following steps:
1. Establish the network communication model. The model comprises a number of hexagonal cells; each cell contains several microcell base stations and picocell base stations, each base station is equipped with a corresponding MEC server, base stations of the same type have MEC servers of the same capacity, and the MEC server capacity of a microcell base station is greater than that of a picocell base station. Each cell contains several users, each user device can select any MEC server in the cell as its offloading target server, and generated tasks are offloaded to that server.
2. Calculate the total task offloading delay of the user devices in a cell. First compute each user's signal-to-interference-and-noise ratio: because of cellular frequency reuse, every cell has co-channel cells whose users communicate in the same frequency range, so co-channel interference arises when users in co-channel cells transmit at the same time. After the co-channel interference of user i is obtained, the signal-to-interference-and-noise ratio of user i can be computed, and from it the user's uplink rate; the offloading delay of user i's task is then calculated, and the task offloading delays of all users are summed to obtain the total task offloading delay.
3. Take the total user task offloading delay as the initial value of the evaluation function for determining the initial temperature, and estimate the initial temperature of the simulated annealing algorithm with a numerical estimation algorithm. The numerical estimation algorithm obtains new temperatures by cyclically changing a user's target MEC server and recalculating the new total offloading delay, and the temperature change step is of variable length.
4. After the estimated initial temperature is obtained, use it as the initial temperature of the simulated annealing algorithm, set the termination temperature and the upper limit on the number of iterations, and then solve for the optimal task offloading scheme with the simulated annealing algorithm.
To realize the above task offloading method for edge computing users, the invention provides the following technical methods:
1. A method for calculating the user task offloading delay, comprising the following steps:
Step 1: calculate the co-channel interference of user i from the distance of the co-channel cells, the path loss coefficient, the transmit power of the user equipment, and so on.
Step 2: calculate the signal-to-interference-and-noise ratio of user i from the environmental noise, the user's co-channel interference, the distance between the user and the target MEC server, and so on.
Step 3: calculate the uplink rate of the user from the channel capacity between the user and the base station and the user's signal-to-interference-and-noise ratio.
Step 4: calculate the offloading delay of user i's task from the user's task content, the task queue situation of the target server, the processing capability of the target server and the user's uplink rate.
Step 5: sum the offloading delays of all user tasks to obtain the total user task offloading delay.
2. A numerical estimation algorithm, comprising the following steps:
Step 1: take the total user task offloading delay as the initial value of the evaluation function for the numerical estimation, set the number of cycles at each temperature and the trust precision, and set a difference threshold for measuring how close the acceptance probability of new solutions is to the trust precision.
Step 2: randomly replace the target MEC server of one user with another MEC server and recalculate the total user task offloading delay.
Step 3: decide whether to accept the new solution according to the relation between the new total offloading delay and the previous one; if the limit on the number of cycles has not been reached, return to Step 2 and continue the loop.
Step 4: calculate the acceptance probability of solutions at this temperature, and determine the change of the temperature and of the temperature change step according to the relation between this acceptance probability and the trust precision and the relation between the acceptance probability at the previous temperature and the trust precision.
The temperature and the temperature change step are updated according to how P_{k-1} and P_k compare with the trust precision P: depending on whether the acceptance probabilities of the two most recent rounds both lie below P, both lie above P, or lie on opposite sides of P, the temperature T is raised or lowered by ΔT and the step ΔT itself is enlarged or reduced (the exact update expressions appear as equation images in the original document). Here P_{k-1} is the acceptance probability of new solutions in round k-1, P_k is the acceptance probability of new solutions in round k, P is the trust precision, T is the temperature, and ΔT is the temperature change step.
Step 5: judge whether the difference between the acceptance probability of solutions and the trust precision is smaller than the given threshold; if it is, output the current temperature as the predicted initial temperature; if not, continue the loop.
3. A simulated annealing algorithm that guarantees the lowest average user task offloading delay. Its core idea is to start from a given solution state and repeatedly test the average user delay of new solution states, so as to obtain the solution state with the minimum average delay as the optimal task allocation scheme. It comprises the following steps:
Step 1: take the estimate of the initial temperature produced by the numerical estimation algorithm as the initial temperature, set the number of iterations at each temperature and the upper limit on the total iteration count, and set the termination temperature.
Step 2: change the target base station of one user to another base station and recalculate the total user task offloading delay.
Step 3: compare the new total user task offloading delay with the previous one; if the new delay is smaller, accept the new solution, otherwise accept the new solution with a certain probability.
Step 4: judge whether the cycle count is below the set number of iterations per temperature; if so, return to Step 2 and continue the loop, otherwise update the temperature according to the cooling function.
Step 5: judge whether the temperature has reached the set termination temperature or the cycle count has reached the upper limit on the total number of cycles; if either condition holds, terminate the loop and take the current solution state as the optimal scheduling scheme for the user tasks; otherwise return to Step 2 and continue the loop.
By modelling user communication in the presence of multiple co-channel cells, the invention derives the uplink rate of a user under co-channel interference, which is closer to real deployment conditions.
Drawings
FIG. 1 is a diagram of co-channel cells of the present invention
FIG. 2 is a schematic diagram of a user offload scenario of the present invention
FIG. 3 is an analysis schematic of the offloading process of the present invention
FIG. 4 is a schematic view of the offloading process of the present invention
Detailed Description
The steps for carrying out the present invention will be described in conjunction with the drawings of the present invention.
Step 1: as shown in Fig. 2, in the user task offloading scenario within a single cell, the cell where the user is located contains microcell base stations and picocell base stations, each base station is equipped with an MEC server, every user can connect to every base station, and the scheduling algorithm knows in advance the device transmit power of every user and the MEC server processing capability of every base station.
Step 2: each user calculates the co-channel interference it suffers. As shown in Fig. 1, for any user A in a hexagonal cell there are co-channel users in the 6 co-channel cells lying on the circle at distance D from user A, and because they use the same frequency they interfere with A. For simplicity, when computing the co-channel interference of user i the power of the 6 co-channel users is taken to be equal to the device power of user A, as follows:
I_i = 6 · UE_i · D^(-K)

where I_i is the co-channel interference of user i, UE_i represents the transmit power of user equipment i, K represents the path loss coefficient of user equipment i, and D represents the distance of the co-channel cells; all co-channel cells of a cell lie at the same distance from it, so the interference produced by each co-channel cell is equal and the factor 6 gives the total co-channel interference. After the co-channel interference is calculated, the signal-to-interference-and-noise ratio of the user equipment can be calculated, expressed as:
SINR_i = (UE_i · d^(-K)) / (6 · UE_i · D^(-K) + n)

where SINR_i is the signal-to-interference-and-noise ratio of user equipment i, UE_i represents the transmit power of user equipment i, K is the path loss coefficient between user equipment i and the offloading target base station, d is the distance between user equipment i and the offloading target base station, and n is the Gaussian white noise power in the environment.
Step 3: randomly allocate a target base station to each user and calculate the uplink rate between the user and its target base station; the uplink rate of user i can be expressed as

R_{i,j} = B · log2(1 + SINR_i)

where R_{i,j} represents the uplink rate between the i-th user equipment and the j-th base station, B represents the channel capacity between the user equipment and the base station, and SINR_i is the signal-to-interference-and-noise ratio of user equipment i.
After the uplink rate of the user is obtained, the offloading delay of each task is calculated; it consists of the task upload delay, the task queuing delay and the task processing delay, and the offloading delay of user i's task is

TE_i = C_i / R_{i,j} + Q_{i,j} + C_i / P_j

where TE_i is the task offloading delay of user i, C_i represents the task amount carried by user i's task, R_{i,j} represents the uplink rate between the i-th user equipment and the j-th base station, Q_{i,j} is the queuing delay of user i's task on the j-th MEC server, P_j is the task computing capability of the j-th MEC server, and L_i represents the completion time limit of user i's task.
Step 4: judge whether each task's offloading delay meets the user's delay requirement; if not, return to Step 3; if it does, sum the offloading delays of all user tasks to obtain the total task offloading delay. (Steps 2-4 are sketched in the code below.)
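As a concrete illustration of Steps 2-4, the Python sketch below computes the co-channel interference, the SINR, the uplink rate and the offloading delays under the path-loss and Shannon-rate forms reconstructed above. The function names, parameter names and data layout are illustrative assumptions, not identifiers taken from the patent.

```python
import math

def co_channel_interference(ue_power, D, K):
    """Total interference from the 6 co-channel users at distance D,
    each assumed to transmit with the same power as the user (Step 2)."""
    return 6 * ue_power * D ** (-K)

def sinr(ue_power, d, D, K, noise):
    """Signal-to-interference-and-noise ratio of a user at distance d
    from its offloading target base station (Step 2)."""
    return (ue_power * d ** (-K)) / (co_channel_interference(ue_power, D, K) + noise)

def uplink_rate(B, sinr_value):
    """Uplink rate between the user and its target base station (Step 3)."""
    return B * math.log2(1 + sinr_value)

def offload_delay(C, rate, queue_delay, server_capacity):
    """Upload delay + queuing delay + processing delay of one task (Step 3)."""
    return C / rate + queue_delay + C / server_capacity

def total_offload_delay(users, assignment, queues, servers, B, D, K, noise):
    """Sum the offloading delays of all users under a given assignment (Step 4).
    Returns None if any task misses its completion limit L_i."""
    total = 0.0
    for i, (power, d, C, L) in enumerate(users):   # users[i] = (UE_i, d_i, C_i, L_i)
        j = assignment[i]                          # index of the target MEC server
        r = uplink_rate(B, sinr(power, d, D, K, noise))
        delay = offload_delay(C, r, queues[(i, j)], servers[j])
        if delay > L:                              # delay requirement not met
            return None
        total += delay
    return total
```

Here `users` would hold one tuple (UE_i, d_i, C_i, L_i) per user, `assignment` would map each user to a target MEC server index, `queues` would give the queuing delay Q_{i,j}, and `servers` the computing capability P_j.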
Step 5: assign a random, large initial temperature value, take the total user task offloading delay obtained in Step 4 as the initial evaluation function value, and take the allocation of Step 3 as the initial solution state.
Step 6: randomly change the target base station of one user's task to another base station and recalculate the offloading delays of all user tasks. Compare the total user task offloading delay in the new state with the previous one: if the new total delay is smaller, accept the new allocation scheme; if the new total delay is greater than or equal to the previous value, decide whether to accept it with an acceptance probability that decreases exponentially with the delay increase and depends on the temperature T_k at the k-th iteration and on the temperature change step ΔT (the exact expression is given as an equation image in the original document). Each time a new solution is accepted, the acceptance count is incremented by one.
Step 7: if the acceptance count was incremented in the previous step, take the new total user delay as the initial evaluation function value for the next cycle; otherwise keep the current evaluation function value. If the set number of cycles has been reached, calculate the acceptance probability of new solutions in this round of cycles; if not, return to Step 6. The acceptance probability of new solutions in the round of cycles is:
P_k = N_a / NC

where P_k is the acceptance probability of the k-th round of cycles, N_a is the number of new solutions accepted in this round of cycles, and NC is the preset number of cycles per round, a constant.
Step 8: examine the relation between P_k and the set trust precision P and judge whether the difference between the acceptance probability and the trust precision is smaller than the set threshold; if not, update the values of T and ΔT and continue the loop; otherwise take the current value of T as the estimated initial temperature. (Steps 5-8 are sketched in the code below.)
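The initial-temperature estimation of Steps 5-8 can be sketched in Python as follows. The Metropolis-style acceptance test and the particular step-size update (enlarging the step while the acceptance ratio stays on one side of the target and halving it after an overshoot) are assumptions made for illustration, since the patent gives the exact update expressions only as equation images; `evaluate` and `neighbor` are hypothetical helpers that return the total offloading delay of a solution state and a state with one user reassigned to another MEC server, respectively.

```python
import math
import random

def estimate_initial_temperature(evaluate, neighbor, state,
                                 T=1.0, dT=1.0, target_p=0.8,
                                 tol=0.01, nc=100, max_rounds=50):
    """Variable-step search (Steps 5-8) for a temperature whose acceptance
    ratio of new solutions is close to the trust precision target_p."""
    prev_ratio = None
    current = state
    energy = evaluate(current)
    for _ in range(max_rounds):
        accepted = 0
        for _ in range(nc):                        # Step 6: perturb and test NC times
            candidate = neighbor(current)
            new_energy = evaluate(candidate)
            if new_energy < energy or random.random() < math.exp(-(new_energy - energy) / T):
                current, energy = candidate, new_energy
                accepted += 1
        ratio = accepted / nc                      # Step 7: P_k = N_a / NC
        if abs(ratio - target_p) < tol:            # Step 8: close enough -> stop
            return T
        if prev_ratio is not None:
            if (ratio - target_p) * (prev_ratio - target_p) < 0:
                dT /= 2                            # overshot the target: shrink the step (assumed)
            else:
                dT *= 2                            # still on the same side: enlarge the step (assumed)
        if ratio < target_p:
            T += dT                                # too few acceptances: raise the temperature
        else:
            T = max(T - dT, 1e-9)                  # too many acceptances: lower the temperature
        prev_ratio = ratio
    return T
```

The returned temperature would then be fed to the simulated annealing loop of Steps 9-13.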
Step 9: take the estimated initial temperature as the initial temperature of the simulated annealing algorithm, and set a termination temperature, the number of cycles at a single temperature, and an upper limit on the total number of iterations.
Step 10: randomly change the target base station of one user's task to another base station and recalculate the offloading delays of all user tasks.
Step 11: compare the total user task offloading delay in the new state with the previous one; if the new total delay is smaller, accept the new allocation scheme; if the new total delay is greater than or equal to the previous value, accept it with an acceptance probability of the same exponential form as in Step 6; if the new solution is accepted, it becomes the next initial solution.
step 12: and judging whether the limitation of the circulation times at the same temperature is reached, if the limitation of the circulation times at the same temperature is reached, changing the temperature according to a cooling function, returning to the step 10 again to execute circulation at a new temperature, and if the limitation of the circulation times at the same temperature is not reached, continuing returning to the step 10 to execute circulation.
The cooling process follows a cooling function combining a polynomial term and a logarithmic term in the iteration index (the exact expression appears as an equation image in the original document), where T_{k+1} is the temperature at the (k+1)-th iteration, k is the iteration number and is an integer greater than 1, and T_0 is the initial temperature of the simulated annealing algorithm.
Step 13: judge whether the temperature has reached the termination temperature or the cycle count has reached the upper limit on the total number of cycles; if neither has been reached, continue the loop; if the temperature reaches the termination temperature or the count reaches the upper limit, terminate the loop and output the current solution state as the optimal scheduling scheme. (Steps 9-13 are sketched in the code below.)
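Steps 9-13 can be sketched as the simulated annealing loop below. The cooling schedule shown, which mixes a polynomial and a logarithmic term in the iteration index, is only an assumed illustration of the schedule the text describes (the patent's exact cooling expression is an equation image); `evaluate` and `neighbor` are the same hypothetical helpers as in the previous sketch, and `T0` would be the temperature returned by `estimate_initial_temperature`.

```python
import math
import random

def simulated_annealing(evaluate, neighbor, state, T0,
                        T_end=1e-3, per_temp=100, max_iters=100000):
    """Simulated annealing over task-offloading assignments (Steps 9-13)."""
    def cooling(k):
        # Assumed schedule combining a polynomial and a logarithmic term in k.
        return T0 / (k + math.log(1 + k))

    current = best = state
    energy = best_energy = evaluate(state)
    T, k, iters = T0, 1, 0
    while T > T_end and iters < max_iters:         # Step 13: stop on temperature or count
        for _ in range(per_temp):                  # Steps 10-12: fixed cycles per temperature
            iters += 1
            candidate = neighbor(current)          # move one user to another MEC server
            new_energy = evaluate(candidate)
            delta = new_energy - energy
            if delta < 0 or random.random() < math.exp(-delta / T):
                current, energy = candidate, new_energy   # Step 11: accept the new scheme
                if energy < best_energy:
                    best, best_energy = current, energy
        k += 1
        T = cooling(k)                             # Step 12: cool and continue
    return best, best_energy
```

In use, `evaluate` would be the total offloading delay of an assignment (as in the earlier sketch) and the returned state would be the task offloading scheme with the lowest total delay found.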
The above examples are intended to illustrate the technical solution of the invention, not to limit it; those skilled in the art should understand that various modifications may be made to the invention without departing from the scope of the invention as defined in the claims.

Claims (3)

1. An edge computing user task offloading method, characterized in that: a network communication model of microcells and picocells is established, and an optimization scheme is solved by using a numerical estimation algorithm and a simulated annealing algorithm;
the calculation of the task upload time of a user in the network communication model comprises the following steps:
step 1, calculating the signal-to-interference-and-noise ratio (the ratio of the signal power to the sum of the interference and noise powers) of the user channel according to the path loss model:

SINR_i = (UE_i · d^(-K)) / (6 · UE_i · D^(-K) + n)

wherein UE_i is the transmit power of the i-th user, d is the distance between the base station and the user equipment, and K is the path loss exponent of the signal; since the commonly used cellular structure produces six co-channel cells, the interference from the users in those 6 co-channel cells that share the channel with user i is represented by the factor 6 in the co-channel interference term, D is the distance of the co-channel cells, taken approximately as the distance to the co-channel users, and n is the Gaussian white noise power in the environment;
step 2, calculating the uplink rate of the link between the user equipment and the base station according to the signal-to-interference-and-noise ratio of the user channel:

R_{i,j} = B · log2(1 + SINR_i)

wherein R_{i,j} represents the uplink rate between the i-th user equipment and the j-th base station, B represents the channel capacity between the user equipment and the base station, and SINR_i is the signal-to-interference-and-noise ratio of the uplink channel of the i-th user;
the total user task offloading delay is the sum of the task offloading delays of all devices; each mobile device has only one task waiting to be offloaded within one scheduling period, and the task of user i can be described by W_i = (C_i, L_i), where C_i represents the task amount carried by user i's task and L_i represents the completion time limit of user i's task; the task offloading delay of user i consists of the upload delay of the task, the waiting delay of the task in the task offloading queue of the MEC server, and the processing delay of the task on the MEC server, and can be expressed as:

TE_i = C_i / R_{i,j} + Q_{i,j} + C_i / P_j

wherein TE_i is the task offloading delay of user i, Q_{i,j} is the queuing delay of user i's task on the j-th MEC server, and P_j is the task computing capability of the j-th MEC server;
the task offloading queue refers to the queue formed by the tasks that multiple users upload to the MEC server for offloading;
the numerical estimation algorithm is an algorithm for predicting the initial temperature of the simulated annealing algorithm by using variable-step-size numerical estimation;
the simulated annealing algorithm takes the temperature prediction result of the numerical estimation algorithm as the initial temperature, and controls the temperature with a cooling function combining a polynomial and a logarithmic term;
the cycle process of the simulated annealing algorithm is as follows:
at the initial temperature, randomly change the target MEC server of one user to another MEC server, then recalculate the task offloading time of each user to obtain a new total offloading time; if the new total offloading time is less than the original total offloading time, accept the new solution; if the new total offloading time is greater than or equal to the original total offloading time, accept the new solution with an acceptance probability P that decreases exponentially with the increase in total offloading time and depends on the temperature T_k at the k-th iteration and on the temperature step of the current round of iterations (the exact expression appears as an equation image in the original document);
the optimization scheme is the user task offloading scheme that minimizes the total user task offloading time.
2. The edge computing user task offloading method of claim 1, wherein: each microcell in the network communication model has 6 adjacent co-channel microcells; each microcell contains microcell base stations and picocell base stations, each base station is provided with a corresponding MEC server, the user equipment in each microcell can receive the signal of any base station in that microcell, the microcell base stations are equal in power, the power of a microcell base station is greater than that of a picocell base station, and the computing power of an MEC server is proportional to the power of its base station.
3. The edge computing user task offloading method of claim 1, wherein: the cooling process is composed of a polynomial term and a logarithmic term, and the cooling function (given as an equation image in the original document) determines T_{k+1}, the temperature at the (k+1)-th iteration, from the iteration number k and from T_0, the initial temperature of the simulated annealing algorithm.
CN201910747683.XA 2019-08-17 2019-08-17 Method for unloading tasks of edge computing user Active CN110430593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910747683.XA CN110430593B (en) 2019-08-17 2019-08-17 Method for unloading tasks of edge computing user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910747683.XA CN110430593B (en) 2019-08-17 2019-08-17 Method for unloading tasks of edge computing user

Publications (2)

Publication Number Publication Date
CN110430593A CN110430593A (en) 2019-11-08
CN110430593B (en) 2022-05-13

Family

ID=68416233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910747683.XA Active CN110430593B (en) 2019-08-17 2019-08-17 Method for unloading tasks of edge computing user

Country Status (1)

Country Link
CN (1) CN110430593B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586762B (en) * 2020-04-29 2023-02-17 重庆邮电大学 Task unloading and resource allocation joint optimization method based on edge cooperation
CN112084019B (en) * 2020-08-12 2022-05-10 东南大学 Simulated annealing based calculation unloading and resource allocation method in heterogeneous MEC calculation platform
CN117014313B (en) * 2023-09-26 2023-12-19 工业云制造(四川)创新中心有限公司 Method and system for analyzing equipment data of edge cloud platform in real time

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013097413A1 (en) * 2011-12-31 2013-07-04 深圳华大基因科技服务有限公司 Method and system for constructing diploid monomer
CN107766135A (en) * 2017-09-29 2018-03-06 东南大学 Method for allocating tasks based on population and simulated annealing optimization in mobile cloudlet
CN108874525A (en) * 2018-06-22 2018-11-23 浙江大学 A kind of service request distribution method towards edge calculations environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013097413A1 (en) * 2011-12-31 2013-07-04 深圳华大基因科技服务有限公司 Method and system for constructing diploid monomer
CN107766135A (en) * 2017-09-29 2018-03-06 东南大学 Method for allocating tasks based on population and simulated annealing optimization in mobile cloudlet
CN108874525A (en) * 2018-06-22 2018-11-23 浙江大学 A kind of service request distribution method towards edge calculations environment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Joint Computation Offloading and Interference; Chenmeng Wang, F. Richard Yu, Senior Member, IEEE, Chengchao Lia; IEEE Transactions on Vehicular Technology; 20170303; Section 2 *
A New Hybrid Simulated Annealing Algorithm and Its Application; 陈惟岐; Journal of Daqing Petroleum Institute; 20070831; Section 1 *
A Bi-level Optimization Method for Heat Exchanger Networks Based on Simulated Annealing; 彭富裕; Petrochemical Technology; 20140530; Section 2 *
Research on Computation Offloading in Resource-Constrained Mobile Edge Computing Systems; 赵竑宇; China Master's Theses Full-text Database; 20190815; Chapter 3, Chapter 4, Fig. 4-1 of the description *

Also Published As

Publication number Publication date
CN110430593A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN109413724B (en) MEC-based task unloading and resource allocation scheme
US10217060B2 (en) Capacity augmentation of 3G cellular networks: a deep learning approach
Fadlullah et al. HCP: Heterogeneous computing platform for federated learning based collaborative content caching towards 6G networks
CN113612843B (en) MEC task unloading and resource allocation method based on deep reinforcement learning
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
Elbamby et al. Proactive edge computing in latency-constrained fog networks
CN110430593B (en) Method for unloading tasks of edge computing user
Labidi et al. Energy-optimal resource scheduling and computation offloading in small cell networks
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
EP1499152B1 (en) Method and apparatus for adaptive and online assignment in hierarchical overlay networks
Zhao et al. Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems
Li et al. A delay-aware caching algorithm for wireless D2D caching networks
CN108965009B (en) Load known user association method based on potential game
CN110519849B (en) Communication and computing resource joint allocation method for mobile edge computing
Ojima et al. Resource management for mobile edge computing using user mobility prediction
Sanguanpuak et al. Network slicing with mobile edge computing for micro-operator networks in beyond 5G
Wang et al. Task allocation mechanism of power internet of things based on cooperative edge computing
Hu et al. Mobility-aware offloading and resource allocation in MEC-enabled IoT networks
CN114938381A (en) D2D-MEC unloading method based on deep reinforcement learning and computer program product
Liu et al. Mobility-aware task offloading and migration schemes in scns with mobile edge computing
CN110177383B (en) Efficiency optimization method based on task scheduling and power allocation in mobile edge calculation
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
Park et al. UE throughput guaranteed small cell on/off algorithm with machine learning
Litjens The impact of mobility on UMTS network planning
Nuaymi et al. Call admission control algorithm for cellular CDMA systems based on best achievable performance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant