CN111158912B - Task offloading decision method based on deep learning in cloud and fog collaborative computing environment - Google Patents


Info

Publication number
CN111158912B
CN111158912B (application CN201911392475.9A)
Authority
CN
China
Prior art keywords
decision
task
consumption
deep neural
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911392475.9A
Other languages
Chinese (zh)
Other versions
CN111158912A (en)
Inventor
张子儒
管畅
吴华明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201911392475.9A priority Critical patent/CN111158912B/en
Publication of CN111158912A publication Critical patent/CN111158912A/en
Application granted granted Critical
Publication of CN111158912B publication Critical patent/CN111158912B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/509 Offload
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a task offloading decision method based on deep learning, comprising the following steps: a randomly generated task matrix W is input into S parallel deep neural networks respectively, each network outputs a decision X, and the consumption Q(W, X) is calculated to obtain the optimal decision X1 and corresponding consumption Q1 given before the deep neural networks are trained. Data are then selected from the data set to train the S deep neural networks, and the task matrix W is input into the S trained networks to obtain the optimal decision X2 and corresponding consumption Q2 given after training. The ratio R = Q1/Q2 is calculated, and corresponding task matrices W are input into the S parallel deep neural networks until R reaches the threshold; for a new task offloading decision, the decision with the minimum consumption is selected, namely the target decision. Once the decision model based on deep neural networks is trained, decisions can be given by simple linear operations, greatly reducing the amount of computation.

Description

Task offloading decision method based on deep learning in cloud and fog collaborative computing environment
Technical Field
The invention relates to the technical field of task offloading decision making, and in particular to a task offloading decision method based on deep learning in a cloud and fog collaborative computing environment.
Background
With the continuous progress of technology and the improvement of quality of life, the popularization of mobile and Internet-of-Things devices has brought great convenience to people's lives. However, because the computing power of mobile devices is limited, their processing speed often cannot meet users' daily demands for computation-intensive applications such as face recognition and augmented reality.
To avoid the high delay and high power consumption caused by running a large amount of computation on the mobile device itself, devices often rely on a cloud server for auxiliary computation, offloading local tasks to the cloud server to shorten waiting time and prolong battery life. Although the computing capability of a cloud server is strong, as the number of mobile users grows, so does the demand on the cloud server, and the delay caused by cloud computing gradually increases. For delay-sensitive tasks, cloud computing increasingly fails to meet the practical requirements of applications. With the continuous development of wireless communication technology, the "fog computing" technique, in which users offload local tasks to nearby edge cloud devices such as data base stations and data centers, has matured. Compared with the central cloud, the computing and storage capacity of the edge cloud is relatively low, but because the edge cloud is closer to the mobile device, its communication overhead is very small, and the delay caused by network transmission can be greatly reduced. This meets future network requirements of ultra-high bandwidth, ultra-low delay, and service and user awareness, and therefore has great practical value.
Cloud computing and fog computing each have advantages and disadvantages: cloud computing has ample computing capability but higher delay, while fog computing has extremely low delay but limited computing and storage capability. Only by combining the two can cloud-fog collaborative computing exert its maximum effectiveness. Comprehensively considering the total delay and total energy consumption of all users, dynamically determining the offloading mode of each task so that the total waiting time and total energy consumption are minimized is the key to cloud-fog collaborative computing efficiency. However, the number of possible decisions grows exponentially with the numbers of users and tasks, and practical scenarios often involve large-scale offloading decision problems; traditional optimization methods such as exhaustive search or linear programming require a large amount of computation to make a decision and can hardly meet practical requirements.
In other words, existing cloud-fog collaborative offloading decision algorithms require a large amount of computation; although feasible in theory, factors such as excessive waiting time and energy consumption make them unsuitable for practical use. Moreover, existing methods must repeat the same computation for every combination of user number and task conditions, and past decisions cannot guide the offloading of new tasks, so the model cannot improve with use.
Disclosure of Invention
In view of the technical defects in the prior art, the invention provides a task offloading decision method based on deep learning in a cloud and fog collaborative computing environment. By constructing a plurality of parallel deep neural networks and adopting a multi-network deep learning method, reasonable decisions can be made in a short time.
The technical scheme adopted for realizing the purpose of the invention is as follows:
A task offloading decision method based on deep learning, comprising:

S1, inputting a randomly generated task matrix W into S parallel deep neural networks respectively, converting each output into a decision X via MSE, and calculating the consumption Q(W, X) to obtain the optimal decision X1 and corresponding consumption Q1 given before the deep neural networks are trained;

S2, randomly selecting a series of data from the data set to train the S deep neural networks and update the network weights, then inputting the task matrix W into the S trained deep neural networks to obtain the optimal decision X2 and corresponding consumption Q2 given after training;

S3, calculating R = Q1/Q2 and judging whether R reaches the threshold; if so, ending; otherwise repeating S1 to S3;

S4, for a new task offloading decision, inputting the corresponding task matrix W into the S parallel deep neural networks and selecting the decision with the minimum consumption among their outputs, namely the target decision.

Wherein the data set is formed by repeating the following step a plurality of times: randomly generating a task matrix W, inputting it into the S initialized deep neural networks respectively to obtain S decisions X, calculating the consumption Q(W, X) corresponding to each decision, and storing the minimum-consumption decision together with the task matrix W in the data set; the step is repeated until the amount of generated data reaches the set data set size.

Wherein the data set is updated by combining the task matrix W with the corresponding optimal decision X1 to generate new data that replaces the earliest-generated data in the data set.
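The replace-oldest update described above behaves like a bounded FIFO buffer. A minimal sketch (the class and method names are illustrative, not from the patent):

```python
from collections import deque

class DecisionDataset:
    """Bounded training set: once full, a new (task matrix, decision)
    pair evicts the earliest-generated entry."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)   # deque with maxlen drops the oldest item

    def add(self, task_matrix, decision):
        self.buf.append((task_matrix, decision))

    def __len__(self):
        return len(self.buf)

ds = DecisionDataset(size=3)
for i in range(5):
    ds.add(f"W{i}", f"X{i}")
# After five additions only the three most recent pairs remain.
```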
Wherein the consumption Q(W, X) is calculated as follows:

Q(W, X) = Σ_{n=1}^{N} [ a·T^(n) + (1 - a)·E^(n) ]

T^(n) = max( Tl^(n), Te^(n), Tc^(n) ),  E^(n) = Σ_{m=1}^{M} E^(n,m)

wherein a represents the weight between energy consumption and time consumption, T^(n) represents the final time consumption of user n, E^(n) represents the total energy consumption of user n, E^(n,m) represents the energy consumption of task m of user n, Tl^(n) represents the total time consumed by the local computing tasks of user n, Te^(n) the total time consumed by the edge cloud computing tasks of user n, Tc^(n) the total time consumed by the central cloud computing tasks of user n, N is the number of users, and each user n has M tasks.
Wherein two variables x1^(n,m), x2^(n,m) with values 0 or 1 represent the offloading mode of each task:

x1^(n,m) = 0: the task is executed locally;

x1^(n,m) = 1, x2^(n,m) = 0: the task is offloaded to the edge cloud;

x1^(n,m) = 1, x2^(n,m) = 1: the task is offloaded to the central cloud.
Wherein,

E^(n,m) = (1 - x1^(n,m))·El^(n,m) + x1^(n,m)·(1 - x2^(n,m))·Ee^(n,m) + x1^(n,m)·x2^(n,m)·Ec^(n,m)

wherein El^(n,m), Ee^(n,m), Ec^(n,m) represent the energy consumed by the task when computed locally, on the edge cloud, and on the central cloud, respectively.
Wherein each neural network's input layer contains N×M nodes, whose inputs are the values of the elements of the task matrix, and the output layer contains 2×N×M nodes, whose outputs form a pre-decision X*; several hidden layers lie between the input layer and the output layer;

the pre-decision X* is converted into a decision X consisting of 0s and 1s by taking the mean square error equation MSE, where the decision X minimizes

Σ_i (x_i - x_i*)²

wherein x_i* represents the output value of each node of the neural network and x_i the corresponding 0/1 component of the decision.
The invention can give a suitable offloading scheme in a very short time, so that cloud-fog collaborative computing can exert its maximum effectiveness. It thereby solves the problem that traditional offloading decision schemes usually require a large amount of computation, and that when the number of tasks increases, the high delay needed for decision making prevents cloud-fog collaborative computing from exerting its advantages.
Drawings
FIG. 1 is a flow chart of the deep-learning-based task offloading decision method of the present invention;
fig. 2 is a schematic structural diagram of the deep neural network of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention constructs a plurality of parallel deep neural networks, uses them to generate in parallel the data set required for training, and continuously updates the data set with newly generated data while training the networks on it, thereby producing a decision-generating neural network that meets practical requirements and providing the user with a higher-accuracy task offloading scheme in a shorter time.
As shown in fig. 1, the task offloading decision method based on deep learning of the invention comprises the following steps:
1. Model establishment.
Let the number of users be N, each with M tasks (M is the maximum number of tasks per user; users with fewer than M tasks are padded with tasks of size 0). The task sizes are represented by a task matrix W, whose element W^(n,m) in row n, column m is the size of the m-th task of the n-th user; the matrix W is known. Let El^(n,m), Ee^(n,m), Ec^(n,m) denote the energy consumed by the task when computed locally, on the edge cloud, and on the central cloud, respectively; Tl^(n,m), Te^(n,m), Tc^(n,m) the time required for local, edge cloud, and central cloud computation; and Tt^(n,m) the time required to upload the task data. All six quantities can be regarded as physical quantities uniquely determined by the task data size W^(n,m) and are therefore treated as known. Since the result of a task is usually much smaller than the task data, the time for transmitting the result back to the device is neglected, and only the task data upload time is considered.
For task W^(n,m), two variables x1^(n,m), x2^(n,m) with values 0 or 1 indicate the manner in which the task is offloaded. When x1^(n,m) = 0, the task runs locally. When x1^(n,m) = 1, the task is offloaded to the cloud: x2^(n,m) = 0 represents offloading to the edge cloud, and x2^(n,m) = 1 represents offloading to the central cloud.
The offloading mode can thus be expressed as follows:
(x1^(n,m), x2^(n,m)) = (0, ·): executed locally,
(x1^(n,m), x2^(n,m)) = (1, 0): offloaded to the edge cloud,
(x1^(n,m), x2^(n,m)) = (1, 1): offloaded to the central cloud.
The energy consumption of task m of user n is given by:

E^(n,m) = El^(n,m) if computed locally; Ee^(n,m) if computed on the edge cloud; Ec^(n,m) if computed on the central cloud  (1)

which, in terms of the decision variables, is:

E^(n,m) = (1 - x1^(n,m))·El^(n,m) + x1^(n,m)·(1 - x2^(n,m))·Ee^(n,m) + x1^(n,m)·x2^(n,m)·Ec^(n,m)  (2)

Thus, the total energy consumption E^(n) of user n can be expressed as:

E^(n) = Σ_{m=1}^{M} E^(n,m)  (3)

For user n, the total time consumed by its local computing tasks can be expressed as:

Tl^(n) = Σ_{m=1}^{M} (1 - x1^(n,m))·Tl^(n,m)  (4)

The total time consumed by its edge cloud computing tasks can be expressed as:

Te^(n) = Σ_{m=1}^{M} x1^(n,m)·(1 - x2^(n,m))·(Tt^(n,m) + Te^(n,m))  (5)

The total time consumed by its central cloud computing tasks can be expressed as:

Tc^(n) = Σ_{m=1}^{M} x1^(n,m)·x2^(n,m)·(Tt^(n,m) + Tc^(n,m))

Because a user can run its non-offloaded tasks locally while waiting for cloud processing, and the central cloud and the edge cloud process data independently, the user's total waiting time is the longest of these three times, i.e. the final time consumption T^(n) of user n can be represented by:

T^(n) = max(Tl^(n), Te^(n), Tc^(n))  (6)

Denoting by a the weight between energy consumption and time consumption, for a given offloading decision X the total consumption Q can be expressed as:

Q(W, X) = Σ_{n=1}^{N} [ a·T^(n) + (1 - a)·E^(n) ]  (7)
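The consumption model in equations (1) to (7) can be sketched directly in code. The following is a minimal illustration, assuming the per-task energy and time arrays (El, Ee, Ec, Tl, Te, Tc, Tt) have already been derived from the task matrix W; the function name `total_consumption` is illustrative, not from the patent:

```python
import numpy as np

def total_consumption(X1, X2, El, Ee, Ec, Tl, Te, Tc, Tt, a=0.5):
    """Total consumption Q(W, X) for N users with M tasks each.

    All array arguments are N x M; X1, X2 are the binary offloading
    decisions (x1 = 0: local; x1 = 1, x2 = 0: edge; x1 = 1, x2 = 1: central).
    The per-task quantities are themselves determined by the task sizes W.
    """
    local, edge, central = (1 - X1), X1 * (1 - X2), X1 * X2
    # Equations (2)-(3): per-user energy is the sum of per-task energies.
    E = (local * El + edge * Ee + central * Ec).sum(axis=1)
    # Equations (4)-(5): per-user time totals at the three processing sites;
    # offloaded tasks also pay the upload time Tt.
    Tl_n = (local * Tl).sum(axis=1)
    Te_n = (edge * (Tt + Te)).sum(axis=1)
    Tc_n = (central * (Tt + Tc)).sum(axis=1)
    # Equation (6): the three sites work concurrently, so the waiting
    # time is the maximum of the three totals.
    T = np.maximum.reduce([Tl_n, Te_n, Tc_n])
    # Equation (7): weighted sum of time and energy over all users.
    return float((a * T + (1 - a) * E).sum())

# One user, two tasks: task 1 runs locally, task 2 on the edge cloud.
X1, X2 = np.array([[0, 1]]), np.array([[0, 0]])
El, Ee, Ec = np.array([[2.0, 2.0]]), np.array([[1.0, 1.0]]), np.array([[0.5, 0.5]])
Tl, Te, Tc = np.array([[4.0, 4.0]]), np.array([[1.0, 1.0]]), np.array([[2.0, 2.0]])
Tt = np.array([[1.0, 1.0]])
q = total_consumption(X1, X2, El, Ee, Ec, Tl, Te, Tc, Tt, a=0.5)
```

Here T = max(4, 2, 0) = 4 and E = 2 + 1 = 3, so q = 0.5·4 + 0.5·3 = 3.5.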
2. Initializing the data set required for training.
2.1 Randomly initialize S parallel deep neural networks. Each network's input layer contains N×M nodes, whose inputs are the values of the elements of the task matrix; the output layer contains 2×N×M nodes, whose outputs form a pre-decision X*. Several hidden layers lie between the input layer and the output layer.
The data set size is set to specify the amount of data to be saved.
2.2 By the nature of neural network computation, each output node produces a fraction, which does not conform to the 0/1 representation of decisions used in the model establishment. To convert the pre-decision X* into the nearest decision X consisting of 0s and 1s, the mean square error (MSE) is used: X is the vector of 0/1 integers minimizing

Σ_i (x_i - x_i*)².
It is easy to show that when x_i* > 1/2, (1 - x_i*)² < (x_i*)², so x_i takes the value 1; and when x_i* < 1/2, (1 - x_i*)² > (x_i*)², so x_i takes the value 0. Therefore, comparing the output value of each node of the neural network with 1/2 converts the pre-decision into a decision X that satisfies the requirement.
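The comparison with 1/2 described above amounts to elementwise thresholding of the network output; a one-line sketch (the name `to_decision` is illustrative):

```python
import numpy as np

def to_decision(x_star):
    """Round a fractional pre-decision X* to the binary decision X that
    minimizes the squared error: x_i = 1 iff x_i* >= 1/2."""
    return (np.asarray(x_star) >= 0.5).astype(int)
```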
2.3 Randomly generate a task matrix W and input W into the S deep neural networks respectively; by process 2.2, S decision schemes X1, X2, X3, ..., XS are obtained. According to the formula Q(W, X), calculate the consumption corresponding to each of the S decision schemes, combine the minimum-consumption decision scheme with the task matrix, and store the pair in the data set.
2.4 Repeat process 2.3 until the amount of generated data reaches the set data set size.
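Processes 2.3 and 2.4 together form the data-set initialization loop. A sketch under stated assumptions: the untrained networks are stood in for by random decision generators, and `cheap_q` is a toy placeholder for the full consumption Q(W, X); all names and sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 3, 4, 5            # users, tasks per user, parallel networks (illustrative)
DATASET_SIZE = 16

def candidate_decisions(W, S, rng):
    """Stand-in for the S initialized networks: each proposes a binary
    decision pair (X1, X2) for the task matrix W."""
    return [(rng.integers(0, 2, W.shape), rng.integers(0, 2, W.shape))
            for _ in range(S)]

def cheap_q(W, X1, X2):
    """Toy consumption weighing task sizes by offloading mode; a
    placeholder for the model's Q(W, X)."""
    return float((W * (1 + X1 + X2)).sum())

dataset = []
while len(dataset) < DATASET_SIZE:
    W = rng.random((N, M))                        # random task matrix (2.3)
    best = min(candidate_decisions(W, S, rng),
               key=lambda d: cheap_q(W, *d))      # keep the min-consumption decision
    dataset.append((W, best))                     # repeat until the set is full (2.4)
```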
3. The neural network is trained from the data set while the data set is continually updated with new data.
3.1 Continuously generate random task matrices W and repeat process 2.3 to obtain the optimal decision X1 given by the deep neural networks and its corresponding consumption Q1; combine the task matrix with this decision to form new data, and replace the earliest-generated data in the data set with it, thereby updating the data set.
3.2 Randomly pick a series of data from the data set, train the S deep neural networks, and update the network weights.
3.3 Execute process 2.3 again on the task matrix W to obtain the optimal decision X2 given by the trained networks and its corresponding consumption Q2.
3.4 Define R = Q1/Q2. Ideally, once the deep neural networks converge, the scheme given after retraining no longer changes, i.e. Q1 = Q2 and R = 1, so R is used as an index of whether the networks have converged. Processes 3.1 to 3.3 are repeated until R tends to 1, at which point the networks can be considered trained.
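The convergence test in 3.4 can be sketched as a small helper; the tolerance value is an assumption, not from the patent:

```python
def r_metric(q1, q2):
    """R = Q1 / Q2: best consumption before vs. after a training round."""
    return q1 / q2

def converged(q1, q2, tol=0.01):
    """Training stops once R stays within tol of 1, i.e. retraining no
    longer changes the best consumption (Q1 = Q2 gives R = 1)."""
    return abs(r_metric(q1, q2) - 1.0) <= tol
```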
For a new task offloading decision, the corresponding task matrix is input into the S parallel deep neural networks, and the decision with the least consumption among their outputs is selected as the desired decision scheme.
It should be noted that deep learning is a machine learning method: by learning the inherent rules and representation levels of standard sample data, a deep neural network gives a computer analysis capability similar to a person's, so that for newly arising situations it can give a reasonable decision according to existing training results.
A traditional deep learning application usually needs only one deep neural network but requires a large amount of known data as a training basis. Because the combinations of user numbers and task numbers are numerous, data meeting all practical demands are difficult to obtain, so such an approach is theoretically feasible but hard to apply in production practice. The present method instead introduces a plurality of parallel deep neural networks and solves the lack of data by continuously cycling between data generation and training, so that the model can be applied more conveniently to different practical situations.
Most existing decision models require a large amount of computation; although theoretically feasible, the amount of computation often exceeds the practically acceptable range when the numbers of users and tasks increase. Compared with existing schemes, the computation of the present method grows more gently, so that more complex decision problems can be handled under identical computing capacity.
In the prior art, each new decision requires repeating the whole computation, and existing decision data cannot guide the generation of new decisions. The decision model based on deep neural networks, once trained, can give a decision with only simple linear operations, greatly reducing the amount of computation; the trained networks can therefore be copied directly to each cloud server or even to mobile terminals and put into use, giving high portability.
In short, the multi-network deep learning proposed by the invention constructs a plurality of parallel deep neural networks, randomly generates the data set required for deep learning when no data are available, and updates the data set while training the deep neural networks, so that the data set continuously approaches standard data and the networks continuously approach convergence, converting an unsupervised learning process into supervised learning.
The deep neural networks of the invention use the network-construction, cross-entropy, and Adam optimization functions provided by existing deep learning toolboxes. Comprehensively considering the user's energy consumption and time consumption, the method can quickly give an offloading decision scheme meeting practical requirements; after model training is finished, the computation required to give a decision is greatly reduced, and the accuracy is high.
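As a rough illustration of one of the S networks, the sketch below hand-rolls a single-hidden-layer network with sigmoid outputs, binary cross-entropy loss, and one plain gradient step. This is an assumption-laden miniature: the patent itself uses a deep learning toolbox's network construction, cross-entropy, and Adam optimizer, and the sizes N, M, H here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, H = 2, 3, 16            # users, tasks, hidden width (illustrative)

# N*M inputs (flattened task matrix) -> H hidden units -> 2*N*M sigmoid
# outputs (the pre-decision X*).
W1 = rng.normal(0, 0.1, (N * M, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 2 * N * M)); b2 = np.zeros(2 * N * M)

def forward(w):
    h = np.tanh(w @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def bce(y_hat, y):
    eps = 1e-9                                       # numerical safety
    return float(-np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps)))

# One gradient step toward a stored (task matrix, decision) pair from the data set.
w = rng.random(N * M)                                # flattened task matrix
y = rng.integers(0, 2, 2 * N * M).astype(float)      # stored binary decision
h, y_hat = forward(w)
loss_before = bce(y_hat, y)

lr = 0.5
d_out = (y_hat - y) / y.size                         # dBCE/dlogits for sigmoid + BCE
d_h = (W2 @ d_out) * (1 - h ** 2)                    # backprop through tanh
W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
W1 -= lr * np.outer(w, d_h); b1 -= lr * d_h
loss_after = bce(forward(w)[1], y)
```

A single exact-gradient step with a small learning rate lowers the loss on this one training pair; a real implementation would batch over the data set and use Adam as the patent describes.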
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications are also to be regarded as falling within the scope of the present invention.

Claims (5)

1. A task offloading decision method based on deep learning, characterized by comprising the following steps:
S1, inputting a randomly generated task matrix W into S parallel deep neural networks respectively, converting each output into a decision X via MSE, and calculating the consumption Q(W, X) to obtain the optimal decision X1 and corresponding consumption Q1 given before the deep neural networks are trained;
S2, randomly selecting a series of data from the data set to train the S deep neural networks and update the network weights, and inputting the task matrix W into the S trained deep neural networks to obtain the optimal decision X2 and corresponding consumption Q2 given after training;
S3, calculating R = Q1/Q2 and judging whether R reaches the threshold; if so, ending; otherwise repeating S1 to S3;
S4, for a new task offloading decision, inputting the corresponding task matrix W into the S parallel deep neural networks and selecting the decision with the minimum consumption among their outputs, namely the target decision;
the data set being formed by repeating the following step a plurality of times: randomly generating a task matrix W, inputting it into the S initialized deep neural networks respectively to obtain S decisions X, calculating the consumption Q(W, X) corresponding to each decision, and storing the minimum-consumption decision together with the task matrix W in the data set, the step being repeated until the amount of generated data reaches the set data set size;
the data set being updated by combining the task matrix W with the corresponding optimal decision X1 to generate new data that replaces the earliest-generated data in the data set.
2. The deep learning based task offloading decision method of claim 1, wherein the consumption Q(W, X) is calculated as follows:

Q(W, X) = Σ_{n=1}^{N} [ a·T^(n) + (1 - a)·E^(n) ]

wherein a represents the weight between energy consumption and time consumption, T^(n) = max(Tl^(n), Te^(n), Tc^(n)) represents the final time consumption of user n, E^(n) = Σ_{m=1}^{M} E^(n,m) represents the total energy consumption of user n, E^(n,m) represents the energy consumption of task m of user n, Tl^(n) the total time consumed by the local computing tasks of user n, Te^(n) the total time consumed by the edge cloud computing tasks of user n, Tc^(n) the total time consumed by the central cloud computing tasks of user n, N is the number of users, and each user n has M tasks.
3. The deep learning based task offloading decision method of claim 2, wherein two variables x1^(n,m), x2^(n,m) with values 0 or 1 represent the offloading mode of each task:
x1^(n,m) = 0: the task is executed locally;
x1^(n,m) = 1, x2^(n,m) = 0: the task is offloaded to the edge cloud;
x1^(n,m) = 1, x2^(n,m) = 1: the task is offloaded to the central cloud.
4. The deep learning based task offloading decision method of claim 3, wherein

E^(n,m) = (1 - x1^(n,m))·El^(n,m) + x1^(n,m)·(1 - x2^(n,m))·Ee^(n,m) + x1^(n,m)·x2^(n,m)·Ec^(n,m)

wherein El^(n,m), Ee^(n,m), Ec^(n,m) represent the energy consumed by the task when computed locally, on the edge cloud, and on the central cloud, respectively.
5. The deep learning based task offloading decision method of claim 1, wherein each neural network's input layer contains N×M nodes, whose inputs are the values of the elements of the task matrix, and the output layer contains 2×N×M nodes, whose outputs form a pre-decision X*, several hidden layers lying between the input layer and the output layer;
the pre-decision X* is converted into a decision X consisting of 0s and 1s by taking the mean square error MSE, where the decision X minimizes

Σ_i (x_i - x_i*)²

wherein x_i* represents the output value of each node of the neural network and x_i the corresponding 0/1 component of the decision.
CN201911392475.9A 2019-12-30 2019-12-30 Task offloading decision method based on deep learning in cloud and fog collaborative computing environment Active CN111158912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911392475.9A CN111158912B (en) 2019-12-30 2019-12-30 Task offloading decision method based on deep learning in cloud and fog collaborative computing environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911392475.9A CN111158912B (en) 2019-12-30 2019-12-30 Task unloading decision method based on deep learning in cloud and fog collaborative computing environment

Publications (2)

Publication Number Publication Date
CN111158912A CN111158912A (en) 2020-05-15
CN111158912B true CN111158912B (en) 2023-04-21

Family

ID=70558973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911392475.9A Active CN111158912B (en) 2019-12-30 2019-12-30 Task unloading decision method based on deep learning in cloud and fog collaborative computing environment

Country Status (1)

Country Link
CN (1) CN111158912B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782301B (en) * 2020-07-08 2020-12-22 北京邮电大学 Unloading action set acquisition method and device
US11954526B2 (en) 2020-07-10 2024-04-09 Guangdong University Of Petrochemical Technology Multi-queue multi-cluster task scheduling method and system
CN111831415B (en) * 2020-07-10 2024-01-26 广东石油化工学院 Multi-queue multi-cluster task scheduling method and system
CN112134916B (en) * 2020-07-21 2021-06-11 南京邮电大学 Cloud edge collaborative computing migration method based on deep reinforcement learning
CN112433843B (en) * 2020-10-21 2022-07-08 北京邮电大学 Calculation distribution optimization method based on deep reinforcement learning
CN115551105B (en) * 2022-09-15 2023-08-25 公诚管理咨询有限公司 Task scheduling method, device and storage medium based on 5G network edge calculation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257429A (en) * 2018-09-25 2019-01-22 南京大学 A computation offloading scheduling method based on deep reinforcement learning
CN110362952A (en) * 2019-07-24 2019-10-22 张�成 A fast computing task splitting method
CN110535936A (en) * 2019-08-27 2019-12-03 南京邮电大学 An energy-efficient fog computing migration method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250311B2 (en) * 2017-03-15 2022-02-15 Salesforce.Com, Inc. Deep neural network-based decision network


Also Published As

Publication number Publication date
CN111158912A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111158912B (en) Task unloading decision method based on deep learning in cloud and fog collaborative computing environment
CN113191484B (en) Federal learning client intelligent selection method and system based on deep reinforcement learning
CN110909865B (en) Federated learning method based on hierarchical tensor decomposition in edge calculation
CN110520868A (en) Distributed Reinforcement Learning
CN110928654A (en) Distributed online task unloading scheduling method in edge computing system
CN114912705A (en) Optimization method for heterogeneous model fusion in federated learning
Zou et al. Mobile device training strategies in federated learning: An evolutionary game approach
Zhu et al. A deep-reinforcement-learning-based optimization approach for real-time scheduling in cloud manufacturing
US20220374776A1 (en) Method and system for federated learning, electronic device, and computer readable medium
CN116523079A (en) Reinforced learning-based federal learning optimization method and system
CN113518007B (en) Multi-internet-of-things equipment heterogeneous model efficient mutual learning method based on federal learning
CN113283186A (en) Universal grid self-adaption method for CFD
CN113610227A (en) Efficient deep convolutional neural network pruning method
CN116050540A (en) Self-adaptive federal edge learning method based on joint bi-dimensional user scheduling
Jiang et al. Low-parameter federated learning with large language models
CN116016538A (en) Dynamic environment-oriented side collaborative reasoning task unloading optimization method and system
CN116244484B (en) Federal cross-modal retrieval method and system for unbalanced data
CN110971683B (en) Service combination method based on reinforcement learning
WO2023174189A1 (en) Method and apparatus for classifying nodes of graph network model, and device and storage medium
Wang et al. Bose: Block-wise federated learning in heterogeneous edge computing
CN113743012B (en) Cloud-edge collaborative mode task unloading optimization method under multi-user scene
CN115914230A (en) Adaptive mobile edge computing unloading and resource allocation method
CN113747500B (en) High-energy-efficiency low-delay workflow application migration method based on generation of countermeasure network in complex heterogeneous mobile edge calculation
Jia et al. Efficient federated learning with adaptive channel pruning for edge devices
CN116016223B (en) Data transmission optimization method for data center network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant