CN110856240A - Task offloading method and device and readable storage medium - Google Patents

Task offloading method and device and readable storage medium

Info

Publication number
CN110856240A
CN110856240A (application CN201911082350.6A; granted as CN110856240B)
Authority
CN
China
Prior art keywords
base station
task
station server
unloading
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911082350.6A
Other languages
Chinese (zh)
Other versions
CN110856240B (en)
Inventor
廖卓凡
彭景盛
陈沅涛
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology
Priority to CN201911082350.6A
Publication of CN110856240A
Application granted
Publication of CN110856240B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W52/00: Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02: Power saving arrangements
    • H04W52/0203: Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W52/0206: Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a task offloading method comprising the following steps: sending task information of the current task to the corresponding base station servers, so that each base station server calculates the optimal offloading probability of the current task from the task information; receiving the optimal offloading probability returned by each base station server, and calculating the average offloading probability from these values; judging whether the current task needs to be offloaded according to the average offloading probability; and if so, selecting an optimal base station server from among the base station servers and offloading the current task to it. Because whether the current task is offloaded is decided from the average offloading probability, and an offloaded task always goes to the optimal base station server, the whole offloading process greatly reduces the computational complexity of task offloading and improves offloading efficiency. The application also provides a task offloading apparatus and a readable storage medium with the same beneficial effects.

Description

Task offloading method and device and readable storage medium
Technical Field
The present application relates to the field of task offloading, and in particular, to a method and an apparatus for task offloading, and a readable storage medium.
Background
Mobile devices have limited battery reserves and computing power: when a computationally intensive task runs locally on a mobile device, the battery is drained quickly, significantly reducing the device's operating lifetime. Task offloading in edge computing is an effective remedy, since a mobile device can offload its computation tasks to a nearby edge server, thereby gaining more powerful computing resources and saving its own battery power.
5G base stations are deployed as dense micro base stations, so a user may be covered by several base stations at once and the coverage areas of base stations overlap; however, most existing edge-computing task-offloading work assumes a 4G macro base station, and solving the multi-base-station coverage problem directly has extremely high algorithmic complexity and is hard to compute.
Therefore, how to reduce the computational complexity of task offloading and improve offloading efficiency is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a task offloading method, a task offloading apparatus, and a readable storage medium that reduce the computational complexity of task offloading and improve offloading efficiency.
In order to solve the above technical problem, the present application provides a task offloading method, including:
sending task information of a current task to the corresponding base station servers, so that each base station server calculates the optimal offloading probability of the current task from the task information;
receiving the optimal offloading probability returned by each base station server, and calculating the average offloading probability from the optimal offloading probabilities;
judging whether the current task needs to be offloaded according to the average offloading probability;
and if so, selecting an optimal base station server from among the base station servers, and offloading the current task to the optimal base station server.
Optionally, the calculating, by the base station server, of the optimal offloading probability of the current task from the task information includes:
the base station server calculating the optimal offloading probability $q_j^*$ of the current task as the minimiser of the expected total overhead:

$$q_j^* = \arg\min_{0 \le q_j \le 1} \mathbb{E}[C_j], \qquad \mathbb{E}[C_j] = \sum_{i=1}^{x_j} \big( \beta\,\mathbb{E}[L_i] + (1-\beta)\,\mathbb{E}[E_i] \big), \qquad \lambda_j < \mu_j,$$

wherein $\mathbb{E}[C_j]$ is the expected total overhead of the tasks of all task-offloading devices served by the $j$-th base station server, $q_j^*$ is the optimal probability with which a task-offloading device offloads the current task to the $j$-th base station server, $\beta$ is a trade-off constant, $x_j$ is the total number of task-offloading devices served by the $j$-th base station server, $\mathbb{E}[L_i]$ is the delay expectation of the current task of the $i$-th task-offloading device, $\mathbb{E}[E_i]$ is the energy-consumption expectation of the current task of the $i$-th task-offloading device, $\lambda_j$ is the arrival rate of tasks at the $j$-th base station server, and $\mu_j$ is the service rate at which the $j$-th base station server processes tasks.
Optionally, the selecting an optimal base station server from among the base station servers includes:
acquiring the load of each base station server;
and determining the base station server with the lowest load as the optimal base station server.
Optionally, the judging whether the current task needs to be offloaded according to the average offloading probability includes:
generating a random number between zero and one;
judging whether the random number is smaller than the average offloading probability;
if so, confirming that the current task needs to be offloaded;
if not, confirming that the current task does not need to be offloaded.
Optionally, when the current task does not need to be offloaded, the method further includes:
executing the current task locally.
The present application further provides a task offloading apparatus, including:
a sending module, configured to send task information of a current task to the corresponding base station servers, so that each base station server calculates the optimal offloading probability of the current task from the task information;
a receiving module, configured to receive the optimal offloading probability returned by each base station server, and calculate the average offloading probability from the optimal offloading probabilities;
a judging module, configured to judge whether the current task needs to be offloaded according to the average offloading probability;
and an offloading module, configured to select an optimal base station server from among the base station servers when the current task needs to be offloaded, and offload the current task to the optimal base station server.
Optionally, the offloading module includes:
an acquisition submodule, configured to acquire the load of each base station server;
and a determining submodule, configured to determine the base station server with the lowest load as the optimal base station server.
Optionally, the judging module includes:
a generating submodule, configured to generate a random number between zero and one;
a judging submodule, configured to judge whether the random number is smaller than the average offloading probability;
a first confirming submodule, configured to confirm that the current task needs to be offloaded when the random number is smaller than the average offloading probability;
and a second confirming submodule, configured to confirm that the current task does not need to be offloaded when the random number is greater than or equal to the average offloading probability.
The present application also provides a task offloading device, including:
a memory for storing a computer program;
and a processor for implementing the steps of the task offloading method described in any of the above when executing the computer program.
The present application also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the task offloading method described in any of the above.
The task offloading method provided by the application comprises the following steps: sending task information of the current task to the corresponding base station servers, so that each base station server calculates the optimal offloading probability of the current task from the task information; receiving the optimal offloading probability returned by each base station server, and calculating the average offloading probability from these values; judging whether the current task needs to be offloaded according to the average offloading probability; and if so, selecting an optimal base station server from among the base station servers, and offloading the current task to it.
It can be seen that the technical solution calculates the average offloading probability from the optimal offloading probabilities returned by the base station servers, determines from this average whether the current task needs to be offloaded, and, if so, selects the optimal base station server from among the reachable base station servers and offloads the current task to it. This completes the whole offloading process, greatly reduces the computational complexity of task offloading, and improves offloading efficiency. The application also provides a task offloading apparatus and a readable storage medium with the same beneficial effects, which are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a task offloading method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an equipment model provided in an embodiment of the present application;
fig. 3 is a block diagram of a task offloading device provided in an embodiment of the present application;
FIG. 4 is a block diagram of another task offloading device provided in an embodiment of the present application;
fig. 5 is a structural diagram of a task offloading device according to an embodiment of the present disclosure.
Detailed Description
The core of the application is to provide a task offloading method, a task offloading apparatus, and a readable storage medium that reduce the computational complexity of task offloading and improve offloading efficiency.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Mobile devices have limited battery reserves and computing power: when a computationally intensive task runs locally on a mobile device, the battery is drained quickly, significantly reducing the device's operating lifetime. Task offloading in edge computing is an effective remedy, since a mobile device can offload its computation tasks to a nearby edge server, thereby gaining more powerful computing resources and saving its own battery power.
5G base stations are deployed as dense micro base stations, so a user may be covered by several base stations at once and the coverage areas of base stations overlap; however, most existing edge-computing task-offloading work assumes a 4G macro base station, and solving the multi-base-station coverage problem directly has extremely high algorithmic complexity and is hard to compute. The present application therefore provides a task offloading method to solve the above problems.
Referring to fig. 1, fig. 1 is a flowchart illustrating a task offloading method according to an embodiment of the present disclosure.
The method specifically comprises the following steps:
s101: sending task information of a current task to a corresponding base station server so that the base station server calculates the optimal unloading probability of the current task according to the task information;
the technical scheme provided by the application is applied to the task unloading equipment, the task information can include but is not limited to the size of input data, required calculation amount, hardware attribute of the task unloading equipment and the like, and the base station server can calculate the optimal unloading probability of the current task according to the task information;
Preferably, the base station server mentioned herein calculates the optimal offloading probability of the current task from the task information, specifically:
the base station server calculates the optimal offloading probability $q_j^*$ of the current task as the minimiser of the expected total overhead:

$$q_j^* = \arg\min_{0 \le q_j \le 1} \mathbb{E}[C_j], \qquad \mathbb{E}[C_j] = \sum_{i=1}^{x_j} \big( \beta\,\mathbb{E}[L_i] + (1-\beta)\,\mathbb{E}[E_i] \big), \qquad \lambda_j < \mu_j,$$

wherein $\mathbb{E}[C_j]$ is the expected total overhead of the tasks of all task-offloading devices served by the $j$-th base station server, $q_j^*$ is the optimal probability with which a task-offloading device offloads the current task to the $j$-th base station server, $\beta$ is a trade-off constant, $x_j$ is the total number of task-offloading devices served by the $j$-th base station server, $\mathbb{E}[L_i]$ is the delay expectation of the current task of the $i$-th task-offloading device, $\mathbb{E}[E_i]$ is the energy-consumption expectation of the current task of the $i$-th task-offloading device, $\lambda_j$ is the arrival rate of tasks at the $j$-th base station server, and $\mu_j$ is the service rate at which the $j$-th base station server processes tasks.
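The optimal offloading probability described above can be checked numerically. The sketch below is illustrative, not the patent's implementation: `expected_cost` assembles E[C_j] from the M/M/1-based delay and energy expectations derived later in this section, under the assumed parameterisation that each of the x devices generates tasks at rate `lam_dev`, so the server sees arrival rate λ_j = x·q·lam_dev; all numeric values are made up.

```python
def expected_cost(q, x, beta, L_loc, E_loc, t_tx, E_tx, lam_dev, mu):
    """Expected per-group overhead E[C_j] as a function of the offload
    probability q.  Hypothetical parameterisation: x devices each
    generate tasks at rate lam_dev, so the edge server sees an M/M/1
    arrival rate lam = x * q * lam_dev against service rate mu; the
    delay/energy mix is weighted by beta as in the patent."""
    lam = x * q * lam_dev
    if lam >= mu:                      # unstable queue: infinite cost
        return float("inf")
    t_proc = 1.0 / (mu - lam)          # M/M/1 mean sojourn time
    L = (1 - q) * L_loc + q * (t_tx + t_proc)   # delay expectation
    E = (1 - q) * E_loc + q * E_tx              # energy expectation
    return x * (beta * L + (1 - beta) * E)

def best_offload_probability(**kw):
    """Grid-search the q in [0, 1] that minimises the expected cost."""
    grid = [i / 1000 for i in range(1001)]
    return min(grid, key=lambda q: expected_cost(q, **kw))

# all numbers below are illustrative
q_star = best_offload_probability(x=10, beta=0.5, L_loc=2.0, E_loc=1.0,
                                  t_tx=0.1, E_tx=0.2, lam_dev=1.0, mu=12.0)
print(q_star)  # -> 0.989
```

Raising the service rate `mu` pushes the optimum toward q = 1 (always offload); lowering it pushes the optimum back toward local execution.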
For the following specific embodiment, please refer to fig. 2, which is a schematic diagram of the device model provided in an embodiment of the present application. The model considers the task-offloading problem of multiple users under the overlapping coverage areas of multiple base stations in a mobile cellular network; a single task cannot be split further, i.e., a task is executed either locally or on the edge server of one reachable base station. Assume that each user randomly generates one task within a time slot $\Delta$, and that task generation across all users obeys a Poisson distribution. As shown in fig. 2, $B = \{b_1, b_2, \dots, b_n\}$ denotes the set of base stations, $U = \{u_1, u_2, \dots, u_p\}$ denotes the set of task-offloading devices, and $T = \{t_1, t_2, \dots, t_p\}$ denotes the set of all computation tasks. Each base station has a coverage area whose radius is denoted $b_{j,c}$. From the distance between each task-offloading device and a base station and the base station's coverage radius, the coverage relation of every task-offloading device can be obtained: if user $u_i$ is covered by base station $b_j$, this is indicated by $cov_{i,j} = 1$, and otherwise by $cov_{i,j} = 0$. Considering that each user is covered by at least one base station, we have:

$$\sum_{j=1}^{n} cov_{i,j} \ge 1, \qquad \forall\, u_i \in U.$$
the application scenario of the model is an OFDMA (Orthogonal Frequency Division multiple access) mobile cellular network, which is an evolution of OFDM technology, and combines OFDM and FDMA technologies, and after performing parent carrier formation on a channel by using OFDM, a transmission technology for transmitting data is loaded on a part of subcarriers. The base station is connected with the user through a wireless channel, the base station is connected with the edge server through a high-speed optical fiber, data transmission delay of the base station and the server is ignored in the embodiment of the application, and data transmission rate between the user n and the base station b can be obtained through the Xiangnong-Hartley theorem
Figure BDA0002264354300000065
Wherein, WbBandwidth for base station b, N number of devices for task offloading served by base station b, pnFor task offloadingN, the transmission power is determined by an energy control algorithm between the base station and the task unloading equipment; gn,bBandwidth gain between device n and base station b for task offloading; omega is background noise;
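As a concrete illustration of the Shannon-Hartley rate above, the sketch below assumes the base station's bandwidth W_b is shared equally among its N offloading devices (one plausible reading of the model); all numeric values are illustrative.

```python
import math

def transmission_rate(W_b, N, p_n, g_nb, omega):
    """Shannon-Hartley uplink rate between device n and base station b,
    assuming bandwidth W_b is shared equally by the N offloading
    devices served by b; the SNR is p_n * g_nb / omega."""
    return (W_b / N) * math.log2(1.0 + p_n * g_nb / omega)

# 20 MHz shared by 10 devices; p_n * g_nb / omega gives an SNR of 100
r = transmission_rate(W_b=20e6, N=10, p_n=0.1, g_nb=1e-4, omega=1e-7)
print(f"{r / 1e6:.2f} Mbit/s")  # -> 13.32 Mbit/s
```

Note how the rate degrades as N grows: serving more offloading users thins each user's share of the spectrum, which feeds directly into the transmission delay used below.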
For the edge server connected to base station $b_j$, a feature pair $\langle f_j, W_j \rangle$ may be used to represent its CPU frequency and bandwidth. The base station is assumed to transmit only a small amount of result data back to the task-offloading device, so the delay of returning the result is not considered in the embodiments of the application. For a single task-offloading device $u_i \in U$, its computing power is expressed as its CPU frequency $f_i^l$. The computation task $t_i \in T$ generated by task-offloading device $u_i$ is characterised by the pair $\langle d_i, c_i \rangle$ of its input data size $d_i$ and the amount of computing resource $c_i$ it requires.
If task $t_i$ is computed locally, its delay is

$$L_i^{loc} = \frac{c_i}{f_i^l},$$

and its energy consumption is $E_i^{loc} = \alpha\, c_i$, wherein $\alpha$ is the energy consumed per CPU cycle of the task-offloading device; here the widely accepted model $\alpha = \kappa (f_i^l)^2$ is adopted, $\kappa$ being an energy-consumption factor.
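The local-execution model above can be sketched directly; κ = 1e-27 is a value often used in the edge-computing literature (an assumption here, not a figure from the patent), and all inputs are illustrative.

```python
def local_cost(c_i, f_local, kappa, beta):
    """Delay, energy, and beta-weighted cost of running a task locally:
    delay = c_i / f_local and energy = alpha * c_i, with the widely
    used model alpha = kappa * f_local**2 (energy per CPU cycle)."""
    delay = c_i / f_local            # seconds
    alpha = kappa * f_local ** 2     # Joules per cycle
    energy = alpha * c_i             # Joules
    return delay, energy, beta * delay + (1 - beta) * energy

# 1e9 CPU cycles on a 1 GHz mobile CPU, equal delay/energy weighting
delay, energy, cost = local_cost(c_i=1e9, f_local=1e9, kappa=1e-27, beta=0.5)
print(delay, energy, cost)
```

The quadratic dependence of α on the CPU frequency is why running compute-heavy tasks locally is so expensive energetically, which motivates offloading in the first place.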
A particular offloading scheme can be described by a matrix $M_{p\times(n+1)}$, wherein each row represents the offloading selection of one user task; there are p rows in total, p being the number of users. The element $m_{i,j}$ is the offloading decision of task-offloading device $u_i$ and takes a value in $\{0, 1\}$. Each row of $M_{p\times(n+1)}$ has $(1 + n)$ elements, n being the number of base stations. If $m_{i,j} = 1$ for some $j \ge 1$, the task $t_i$ of task-offloading device $u_i$ is executed on the corresponding edge server $b_j$; if $m_{i,0} = 1$, the task is executed locally. In the embodiments of the application a user task can be computed in exactly one place, i.e.

$$\sum_{j=0}^{n} m_{i,j} = 1, \qquad \forall\, i.$$
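A minimal check of the one-place-only constraint on the offloading matrix can be written as follows (`valid_offload_matrix` is an illustrative helper, not part of the patent):

```python
def valid_offload_matrix(M):
    """Check an offloading matrix M (p rows, n+1 columns): entries are
    0/1 and each row sums to 1, i.e. every task is executed in exactly
    one place (column 0 = locally, column j >= 1 = edge server b_j)."""
    return all(set(row) <= {0, 1} and sum(row) == 1 for row in M)

M = [
    [1, 0, 0],  # task 0 runs locally
    [0, 1, 0],  # task 1 is offloaded to server b1
    [0, 0, 1],  # task 2 is offloaded to server b2
]
print(valid_offload_matrix(M))  # -> True
```

A row such as `[1, 1, 0]` or `[0, 0, 0]` violates the constraint, since a task may be executed neither in two places nor nowhere.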
If the device of task uninstalls the task tiOffloading to edge server bjThen its delay consists of two parts: 1) data transfer delay, 2) task processing delay (including queuing and computation delays). The transmission delay can be expressed as:
Figure BDA0002264354300000076
to base station bjAt the edge server, assuming that the tasks offloaded to it arrive in a Poisson distribution, at base station bjConsidering an M/M/1 queue (M/M/1 queue is a model in the queuing theory, the arrival time is required in the poisson process, the service time is exponential distribution, only one server is provided, the queue length is unlimited, the number of people capable of entering the queue is unlimited), the arrival rate is determined
Figure BDA0002264354300000077
Assuming that the task processing time is exponentially distributed, the tasks are averagedTime of treatment
Figure BDA0002264354300000078
Service rate
Figure BDA0002264354300000079
So that the processing delay is averaged over all tasks offloaded to that base station
Figure BDA00022643543000000710
Its energy consumption
Figure BDA00022643543000000711
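The two-part offloading overhead above can be sketched as follows, using the M/M/1 mean sojourn time 1/(μ_j − λ_j); parameter names mirror the text and the values are illustrative.

```python
def offload_overheads(d_i, r_ij, p_i, lam_j, mu_j):
    """Delay and energy of offloading task i to edge server j.

    Transmission delay is d_i / r_ij; the server is an M/M/1 queue
    whose mean sojourn (queueing plus service) time is 1/(mu_j - lam_j)
    when lam_j < mu_j; transmission energy is p_i * d_i / r_ij.
    """
    if lam_j >= mu_j:
        raise ValueError("M/M/1 queue is unstable: need lam_j < mu_j")
    t_tx = d_i / r_ij                   # transmission delay
    t_proc = 1.0 / (mu_j - lam_j)       # mean M/M/1 sojourn time
    return t_tx + t_proc, p_i * t_tx    # (total delay, tx energy)

# an 8 Mbit task over a 16 Mbit/s link; server handles 10 tasks/s at load 8/s
delay, energy = offload_overheads(d_i=8e6, r_ij=16e6, p_i=0.1,
                                  lam_j=8.0, mu_j=10.0)
print(delay, energy)
```

The guard on λ_j < μ_j reflects the M/M/1 stability condition: as the arrival rate approaches the service rate, the expected queueing delay diverges.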
On this basis, the delay of task i can then be represented by:
the energy consumption can be expressed as:
here, a cost function is introduced to represent the total cost of the task-off device:
Figure BDA00022643543000000714
β in the formula is a parameter between [0,1] for the tradeoff between the device delay and the energy consumption for task offloading, the value of which can vary from case to case, and is close to 0 if energy conservation is emphasized and close to 1 if delay performance is emphasized, the optimization goal of the embodiment of the present application is to minimize the average overhead of all users, where equation (5) can be transformed into the following objective equation:
Since equation (6) is a mixed-integer nonlinear programming problem with no known solution of polynomial time complexity, the embodiments of the application propose an offloading-optimisation strategy. Its basic idea is to relax the zero-one selection in the original problem into an offloading probability, and to determine the final offloading decision by solving for the optimal offloading probability.
All task-offloading devices U are divided into n subgroups $\{G_1, \dots, G_n\}$, where $G_i$ contains the task-offloading devices covered by base station $b_i$; the computation tasks T are correspondingly divided into n subgroups $\{T_1, \dots, T_n\}$. Since a single user may be covered by multiple base stations, a user may appear in several subgroups. For a particular base station $b_i$ and the task-offloading devices $G_i$ within its communication range (their number being $n_i$), assume each user has only two choices: either process the computation task locally, or offload it to the edge server on the base station. To simplify the problem, $q_i$ can be used to denote the probability that a task-offloading device offloads its task to base station $b_i$.
Take a user group $G_j \subseteq U$, $1 \le j \le n$ (n being the number of base stations) as an example, and let $q_j$ denote the probability with which each task-offloading device in group $G_j$ offloads a task to edge server $b_j$, $x_j$ being the number of task-offloading devices in the group. The delay and energy consumption of each user task can then be taken in expectation. For task $t_i$, the local-computation delay expectation can be expressed as:

$$\mathbb{E}[L_i^{loc}] = (1 - q_j)\, \frac{c_i}{f_i^l},$$

and the local-computation energy expectation is

$$\mathbb{E}[E_i^{loc}] = (1 - q_j)\, \alpha\, c_i.$$

As before, the edge server is modelled as an M/M/1 queue; the task arrival rate is now

$$\lambda_j = \frac{x_j\, q_j}{\Delta},$$

the average task-processing time is $\bar{t}_j = \bar{c}_j / f_j$, and hence the service rate is $\mu_j = f_j / \bar{c}_j$. The expectation of the task-processing delay is then:

$$\mathbb{E}[T_j^{proc}] = \frac{q_j}{\mu_j - \lambda_j},$$

the expectation of the transmission delay is:

$$\mathbb{E}[T_{i,j}^{tx}] = q_j\, \frac{d_i}{r_{i,j}},$$

and the expectation of the transmission energy consumption is:

$$\mathbb{E}[E_{i,j}^{tx}] = q_j\, p_i\, \frac{d_i}{r_{i,j}}.$$

From this, the delay expectation of a user task in subgroup $G_j$ is:

$$\mathbb{E}[L_i] = (1 - q_j)\, \frac{c_i}{f_i^l} + q_j\!\left(\frac{d_i}{r_{i,j}} + \frac{1}{\mu_j - \lambda_j}\right),$$

and its energy-consumption expectation is:

$$\mathbb{E}[E_i] = (1 - q_j)\, \alpha\, c_i + q_j\, p_i\, \frac{d_i}{r_{i,j}}.$$

Likewise, the expected total overhead of all user tasks in subgroup $G_j$ is:

$$\mathbb{E}[C_j] = \sum_{i=1}^{x_j} \big(\beta\, \mathbb{E}[L_i] + (1-\beta)\, \mathbb{E}[E_i]\big).$$

For this user group, the optimisation objective function can therefore be converted into the following form:

$$\min_{q_j} \; \mathbb{E}[C_j] \qquad \text{s.t.} \;\; 0 \le q_j \le 1, \;\; \lambda_j < \mu_j. \tag{15}$$

By solving equation (15), the optimal probability with which all task-offloading devices in $G_i$ offload tasks to the server of base station $b_i$ can be obtained. Since equation (15) is a nonlinear optimisation problem, it can be solved with sequential quadratic programming (SQP), a classical mathematical algorithm for nonlinear optimisation.
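Since the relaxed per-group problem (15) has a single decision variable q_j in [0, 1], a bounded one-dimensional search is a lightweight stand-in for a full SQP solver (in the general multi-variable case an SQP implementation such as SciPy's SLSQP method would be used instead). A golden-section sketch on a toy convex stand-in for E[C_j]:

```python
def minimize_scalar_bounded(f, lo, hi, tol=1e-9):
    """Golden-section search for the minimum of a unimodal f on [lo, hi].

    A simple stand-in for the SQP step of the patent, valid because the
    relaxed per-group problem has a single variable q_j in [0, 1]."""
    phi = (5 ** 0.5 - 1) / 2          # inverse golden ratio, ~0.618
    a, b = lo, hi
    while b - a > tol:
        c = b - phi * (b - a)         # interior probe points, c < d
        d = a + phi * (b - a)
        if f(c) <= f(d):
            b = d                     # minimum lies in [a, d]
        else:
            a = c                     # minimum lies in [c, b]
    return 0.5 * (a + b)

# toy convex stand-in for E[C_j](q) with its minimum at q = 0.3
q_opt = minimize_scalar_bounded(lambda q: (q - 0.3) ** 2 + 1.0, 0.0, 1.0)
print(round(q_opt, 6))  # -> 0.3
```

Golden-section search only needs function values, no derivatives, and shrinks the bracketing interval by a constant factor per iteration, so it converges reliably whenever the objective is unimodal on the interval.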
S102: receiving the optimal offloading probability returned by each base station server, and calculating the average offloading probability from the optimal offloading probabilities;
S103: judging whether the current task needs to be offloaded according to the average offloading probability;
if yes, go to step S104;
Since a task-offloading device may be covered by several base station servers, the method calculates the average offloading probability from the optimal offloading probabilities returned by the individual base station servers, and judges from this average whether the current task needs to be offloaded;
when the current task needs to be offloaded, it can be processed by a base station server with higher efficiency and lower energy consumption, and step S104 can be executed to complete the offloading;
optionally, when the current task does not need to be offloaded, processing the current task on the task-offloading device itself is more efficient and consumes less energy, and the current task can be executed on the task-offloading device.
Optionally, the judging whether the current task needs to be offloaded according to the average offloading probability may specifically be:
generating a random number between zero and one;
judging whether the random number is smaller than the average offloading probability;
if so, confirming that the current task needs to be offloaded;
if not, confirming that the current task does not need to be offloaded.
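The decision in step S103 can be sketched as a Bernoulli draw against the average offloading probability; the seeded frequency check at the end is only a sanity test of the sketch, and the server probabilities are illustrative.

```python
import random

def average_offload_probability(per_server_probs):
    """Average of the optimal offload probabilities returned by the
    reachable base-station servers (step S102)."""
    return sum(per_server_probs) / len(per_server_probs)

def should_offload(avg_probability, rng=random):
    """Step S103: draw a uniform random number in [0, 1) and offload
    exactly when it is smaller than the average offload probability."""
    return rng.random() < avg_probability

# probabilities returned by three hypothetical reachable servers
p_avg = average_offload_probability([0.9, 0.6, 0.75])
rng = random.Random(42)  # seeded only so the sanity check is repeatable
freq = sum(should_offload(p_avg, rng) for _ in range(10_000)) / 10_000
print(p_avg, freq)  # freq should be close to p_avg
```

Over many tasks, the fraction of offloaded tasks converges to the average probability, which is exactly how the relaxed probabilistic decision approximates the original zero-one decision.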
S104: selecting the optimal base station server from among the base station servers, and offloading the current task to the optimal base station server.
Because a task-offloading device may be covered by several base station servers while the current task only needs to be offloaded to one of them for processing, the optimal base station server can be selected from among the reachable base station servers and the current task offloaded to it;
optionally, the selecting of the optimal base station server mentioned here may specifically be:
acquiring the load of each base station server;
and determining the base station server with the lowest load as the optimal base station server.
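The lowest-load selection of step S104 reduces to a minimum over the reachable servers' reported loads (the server ids and load values below are illustrative):

```python
def pick_best_server(loads):
    """Step S104: pick the reachable base-station server with the
    lowest reported load.  `loads` maps a server id to its current
    load (any comparable measure, e.g. number of queued tasks)."""
    return min(loads, key=loads.get)

best = pick_best_server({"b1": 0.82, "b2": 0.35, "b3": 0.61})
print(best)  # -> b2
```

Choosing the least-loaded server keeps the M/M/1 arrival rate at each edge server well below its service rate, which is what keeps the queueing delay small.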
In densely populated areas such as shopping malls, a communications carrier deploys multiple wireless-access micro base stations to provide computing resources for users. By running the task offloading method provided in the embodiments of the application, a task-offloading device can quickly determine to which base station offloading yields the best effect, i.e., the lowest energy consumption and delay.
Based on the above technical solution, the task offloading method provided by the application calculates the average offloading probability from the optimal offloading probability returned by each base station server, then determines from this average whether the current task needs to be offloaded; if so, it selects the optimal base station server from among the reachable base station servers and offloads the current task to it. This completes the whole offloading process, greatly reduces the computational complexity of task offloading, and improves offloading efficiency.
Referring to fig. 3, fig. 3 is a structural diagram of a task offloading device according to an embodiment of the present disclosure.
The task offloading device may include:
a sending module 100, configured to send task information of a current task to a corresponding base station server, so that the base station server calculates an optimal offloading probability of the current task according to the task information;
a receiving module 200, configured to receive the optimal offloading probability returned by each base station server, and calculate an average offloading probability according to each optimal offloading probability;
the judging module 300 is configured to judge whether the current task needs to be unloaded according to the average unloading probability;
the offloading module 400 is configured to select an optimal base station server from each base station server when the current task needs to be offloaded, and offload the current task to the optimal base station server.
Referring to fig. 4, fig. 4 is a block diagram of another task offloading device according to an embodiment of the present disclosure.
The unloading module 400 may include:
the acquisition submodule is used for acquiring the load capacity of each base station server;
and the determining submodule is used for determining the base station server with the lowest load capacity as the optimal base station server.
The determining module 300 may include:
a generation submodule for generating a random number between zero and one;
the judgment submodule is used for judging whether the random number is smaller than the average unloading probability;
the first confirming submodule is used for confirming that the current task needs to be unloaded when the random number is smaller than the average unloading probability;
and the second confirming submodule is used for confirming that the current task does not need to be unloaded when the random number is greater than or equal to the average unloading probability.
Since the embodiment of the device portion and the embodiment of the method portion correspond to each other, please refer to the description of the embodiment of the method portion for the embodiment of the device portion, which is not repeated here.
Referring to fig. 5, fig. 5 is a structural diagram of a task offloading device according to an embodiment of the present disclosure.
The task offloading device 500 may vary significantly due to configuration or performance differences and may include one or more central processing units (CPUs) 522 (e.g., one or more processors), memory 532, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 542 or data 544. The memory 532 and the storage medium 530 may be transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the device. Further, the central processor 522 may be configured to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the task offloading device 500.
The task offloading device 500 may also include one or more power supplies 525, one or more wired or wireless network interfaces 550, one or more input/output interfaces 558, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps in the task offloading method described in fig. 1 to 2 above are implemented by the task offloading device based on the structure shown in fig. 5.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above provides a detailed description of a task offloading method, device and readable storage medium provided by the present application. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method of task offloading, comprising:
sending task information of a current task to a corresponding base station server so that the base station server calculates the optimal unloading probability of the current task according to the task information;
receiving the optimal unloading probability returned by each base station server, and calculating the average unloading probability according to each optimal unloading probability;
judging whether the current task needs to be unloaded according to the average unloading probability;
and if so, selecting an optimal base station server from each base station server, and unloading the current task to the optimal base station server.
2. The method of claim 1, wherein the base station server calculating the optimal unloading probability of the current task according to the task information comprises:
the base station server calculating the optimal unloading probability of the current task according to a formula [the formula images FDA0002264354290000011 through FDA0002264354290000017 of the original publication are not reproduced here], wherein the formula involves: the total overhead expectation of the tasks of all task unloading devices in the jth base station server; the optimal unloading probability with which the ith task unloading device unloads the current task to the jth base station server; β, a compromise constant; xj, the total number of task unloading devices in the jth base station server; Li, the delay expectation of the current task of the ith task unloading device; Ei, the energy consumption expectation of the current task of the ith task unloading device; the task arrival rate of the current task at the jth base station server; and the service rate at which the jth base station server processes the current task.
3. The method of claim 1, wherein said selecting an optimal base station server from each of the base station servers comprises:
acquiring the load capacity of each base station server;
and determining the base station server with the lowest load capacity as the optimal base station server.
4. The method of claim 1, wherein determining whether the current task needs to be offloaded based on the average offload probability comprises:
generating a random number between zero and one;
judging whether the random number is smaller than the average unloading probability;
if yes, confirming that the current task needs to be unloaded;
if not, determining that the current task does not need to be unloaded.
5. The method of claim 1, further comprising, when the current task does not need to be unloaded:
and executing the current task.
6. A task offloading device, comprising:
the sending module is used for sending the task information of the current task to the corresponding base station server so that the base station server can calculate the optimal unloading probability of the current task according to the task information;
a receiving module, configured to receive the optimal offloading probability returned by each base station server, and calculate an average offloading probability according to each optimal offloading probability;
the judging module is used for judging whether the current task needs to be unloaded according to the average unloading probability;
and the unloading module is used for selecting an optimal base station server from each base station server when the current task needs to be unloaded, and unloading the current task to the optimal base station server.
7. The apparatus of claim 6, wherein the unloading module comprises:
the obtaining submodule is used for obtaining the load capacity of each base station server;
and the determining submodule is used for determining the base station server with the lowest load capacity as the optimal base station server.
8. The apparatus of claim 6, wherein the determining module comprises:
a generation submodule for generating a random number between zero and one;
the judgment submodule is used for judging whether the random number is smaller than the average unloading probability;
the first confirming submodule is used for confirming that the current task needs to be unloaded when the random number is smaller than the average unloading probability;
and the second confirming submodule is used for confirming that the current task does not need to be unloaded when the random number is greater than or equal to the average unloading probability.
9. A task off-loading device, comprising:
a memory for storing a computer program;
processor for implementing the steps of the method of task offloading as claimed in any of claims 1 to 5 when executing the computer program.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of task offloading according to any of claims 1 to 5.
CN201911082350.6A 2019-11-07 2019-11-07 Task unloading method and device and readable storage medium Active CN110856240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911082350.6A CN110856240B (en) 2019-11-07 2019-11-07 Task unloading method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911082350.6A CN110856240B (en) 2019-11-07 2019-11-07 Task unloading method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN110856240A true CN110856240A (en) 2020-02-28
CN110856240B CN110856240B (en) 2022-07-19

Family

ID=69599742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911082350.6A Active CN110856240B (en) 2019-11-07 2019-11-07 Task unloading method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN110856240B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111679864A (en) * 2020-05-19 2020-09-18 河海大学 Task unloading system and unloading method based on machine learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2073579A1 (en) * 2007-12-21 2009-06-24 Nokia Siemens Networks S.p.A. Method and systems for handling handover processes in cellular communication networks, corresponding network and computer program product
CN106900011A (en) * 2017-02-28 2017-06-27 重庆邮电大学 Task discharging method between a kind of cellular basestation based on MEC
CN108924254A (en) * 2018-08-03 2018-11-30 上海科技大学 The distributed multi-user calculating task discharging method of customer-centric
CN108958916A (en) * 2018-06-29 2018-12-07 杭州电子科技大学 Workflow unloads optimization algorithm under a kind of mobile peripheral surroundings
CN109548155A (en) * 2018-03-01 2019-03-29 重庆大学 A kind of non-equilibrium edge cloud network access of distribution and resource allocation mechanism
CN109729175A (en) * 2019-01-22 2019-05-07 中国人民解放军国防科技大学 Edge cooperative data unloading method under unstable channel condition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ELIE EL HABER: "Joint Optimization of Computational Cost and Devices Energy for Task Offloading in Multi-Tier Edge-Clouds", 《IEEE TRANSACTIONS ON COMMUNICATIONS》 *
JIN WANG: "An Energy-Efficient Off-Loading Scheme for Low Latency in Collaborative Edge Computing", 《IEEE ACCESS》 *
YU BOWEN: "Research on Collaborative Decision of Task Offloading and Base Station Association in Mobile Edge Computing", 《Journal of Computer Research and Development》 *


Also Published As

Publication number Publication date
CN110856240B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN108809695B (en) Distributed uplink unloading strategy facing mobile edge calculation
CN110418416B (en) Resource allocation method based on multi-agent reinforcement learning in mobile edge computing system
CN107995660B (en) Joint task scheduling and resource allocation method supporting D2D-edge server unloading
CN111586696B (en) Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning
CN110096362B (en) Multitask unloading method based on edge server cooperation
Zou et al. A3C-DO: A regional resource scheduling framework based on deep reinforcement learning in edge scenario
CN111258677B (en) Task unloading method for heterogeneous network edge computing
CN109343904B (en) Lyapunov optimization-based fog calculation dynamic unloading method
Zhang et al. Joint task offloading and data caching in mobile edge computing networks
CN113064665B (en) Multi-server computing unloading method based on Lyapunov optimization
CN110851197B (en) Method and system for selecting and unloading tasks of edge computing multi-server
Zhou et al. Markov approximation for task offloading and computation scaling in mobile edge computing
CN110740473A (en) management method for mobile edge calculation and edge server
CN114567895A (en) Method for realizing intelligent cooperation strategy of MEC server cluster
CN111263401A (en) Multi-user cooperative computing unloading method based on mobile edge computing
CN113286317A (en) Task scheduling method based on wireless energy supply edge network
CN110149401A (en) It is a kind of for optimizing the method and system of edge calculations task
CN110780986B (en) Internet of things task scheduling method and system based on mobile edge computing
Mazouzi et al. Maximizing mobiles energy saving through tasks optimal offloading placement in two-tier cloud
Dou et al. Mobile edge computing based task offloading and resource allocation in smart grid
CN110856240B (en) Task unloading method and device and readable storage medium
Singh et al. Profit optimization for mobile edge computing using genetic algorithm
Chen et al. Joint optimization of task caching, computation offloading and resource allocation for mobile edge computing
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
Sun et al. Computation offloading with virtual resources management in mobile edge networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant