CN113407345B - Target driving calculation unloading method based on deep reinforcement learning - Google Patents
Target driving calculation unloading method based on deep reinforcement learning
- Publication number
- CN113407345B CN113407345B CN202110712564.8A CN202110712564A CN113407345B CN 113407345 B CN113407345 B CN 113407345B CN 202110712564 A CN202110712564 A CN 202110712564A CN 113407345 B CN113407345 B CN 113407345B
- Authority
- CN
- China
- Prior art keywords
- node
- task
- network
- calculation
- reinforcement learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
- G06F18/295—Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a target-driven computation offloading method based on deep reinforcement learning, applied to wireless communication fields such as 5G/6G and the Internet of Things, and addresses the low efficiency of computation offloading in the prior art caused by not distinguishing task types. A task information enhancement module based on a MoE mixture-of-experts system significantly improves the expressive power of task-information features and increases the influence of delay-sensitivity features in offloading decisions, thereby increasing the degree of distinction between different types of computing tasks. The deep reinforcement learning reward mechanism can be customized for a specific wireless network scenario and can also be adaptively adjusted according to network characteristics.
Description
Technical Field
The invention belongs to the field of wireless communication, such as 5G/6G and the Internet of Things, and particularly relates to a target-driven computation offloading technique.
Background
The development of wireless communication technologies and applications such as 5G/6G and the Internet of Things has induced two trends: (1) networks are becoming increasingly intelligent, and network devices need to perform large amounts of intelligent computation, such as image recognition and data analysis; (2) at the same time, the number and scale of network devices are growing rapidly, with lightweight devices accounting for an ever larger proportion. Together, these two trends lead to a direct consequence: heavy intelligent-computing demands pose a significant challenge to these resource-constrained lightweight devices.
To address this problem, computation offloading techniques have emerged. In computation offloading, the computing tasks of a lightweight device are transferred to suitable nodes with abundant computing resources. In this process, the lightweight device is referred to as the task node, and the resource-rich node that completes the task is referred to as a computing node.
According to the destination of the computation result, computation offloading can be divided into two modes: source-driven computation offloading and target-driven computation offloading. In source-driven computation offloading, the computation result is ultimately returned to the task node; such tasks mainly consider the allocation of the offloading proportion between local computation and the computing node, and the selection of the offloading node. In target-driven computation offloading, the computation result must be delivered to a remote target node, so in addition to the allocation of the computation proportion, a suitable offloading path toward the target node must also be selected. At present, computation offloading research in industry and academia can essentially be categorized as the source-driven model; in practice, target-driven computation offloading is widely present in wireless networks such as 5G/6G and the Internet of Things, yet related research remains scarce.
Moreover, target-driven computation offloading faces different application scenarios with different requirements. A specific wireless communication application scenario often contains different types of computing tasks, and this diversity usually implies differences in delay sensitivity: some computing tasks are urgent and highly delay-sensitive, while others are periodic or ordinary tasks with loose delay requirements but are generally energy-constrained. Therefore, when the offloading result must be transmitted to another target node rather than back to the source node, reasonably allocating resources and formulating a targeted offloading strategy for each type of computing task is of great significance to the efficient operation of computation offloading in 5G/6G, Internet of Things and similar networks.
Disclosure of Invention
To solve the above technical problem, the invention provides a target-driven computation offloading method based on deep reinforcement learning. Combining a MoE-based mixture-of-experts system with a deep reinforcement learning framework, it reasonably allocates computing resources and plans end-to-end offloading paths, so that load balance is maintained and network lifetime is prolonged while the delay requirements of all task types are met.
The technical scheme adopted by the invention is as follows: a target-driven computation offloading method based on deep reinforcement learning models the wireless communication scenario as a network comprising source nodes, target nodes, computing nodes and common nodes, where a source node is a node that issues a computing task, a target node is the destination node of the computation result, a computing node is a computing-server node, and a common node is a node that provides relay service;
the computation offloading process from a source node to a target node is modeled as a Markov decision process; starting from the source node, the current node computes the next-hop selection and the offloading strategy through a neural network trained by deep reinforcement learning, until the offloading task is completed; the input of the deep reinforcement learning network is the Markov state space, denoted the observation state, and the output is the optimal computation offloading strategy under the corresponding observation state.
The optimal computation offloading strategy specifically consists of the proportion of the computing task to be offloaded at the current node and the corresponding next-hop node; if the current node is a common node, the offloading proportion is 0.
The reward of the Markov decision process is a function of the overall task delay and the change in energy variance.
The observation state includes task-type features and common features: the task-type features are non-numerical features representing task priority or delay sensitivity, and the common features are the remaining features after the task-type features are removed.
The method further comprises processing the observation state input to the deep reinforcement learning network with a task information enhancement module, specifically: the task information enhancement module is based on a MoE mixture-of-experts system, which comprises sub-networks and a gating network; the sub-networks comprise several expert networks, each corresponding to the offloading strategy of one task type, and their input is the common input features; the input of the gating network is the task-type features, and its output is the weight of each expert network; the outputs of the expert networks, weighted and summed with the corresponding weights, form the output of the MoE mixture-of-experts system.
The method further comprises concatenating the common input features after the output of the MoE mixture-of-experts system.
The task type features are represented by One-Hot encoding.
The deep reinforcement learning network discretizes the continuous offloading-proportion action A_prop into 11 actions from 0.0 to 1.0 and, combined with the node scale N, generates an 11 × N two-dimensional discrete action space; the best action screened from this two-dimensional discrete action space gives the best next hop and offloading strategy.
The method further comprises a central server, which integrates the <S, A, R, S'> data collected by each node into global data and trains a deep learning neural network applicable to all nodes; the network parameters are then transmitted to each node;
where S represents the state space, A represents the action space, R represents the reward, and S' represents the next state space in the Markov transition process.
The method further comprises a training server, which locally simulates and records the target-driven computation offloading process based on the collected state space of the current node, learns the optimal offloading strategy offline, and broadcasts the parameters of the updated deep-reinforcement-learning neural network of the current node to the other nodes.
The invention has the following beneficial effects: to meet the target-driven computation offloading requirements of wireless communication networks such as 5G/6G and the Internet of Things, the invention provides a target-driven computation offloading mechanism based on deep reinforcement learning, so that computing resources in the wireless network can be reasonably allocated, offloading decisions can be tailored to different delay-sensitivity types, and the network lifetime can be prolonged in resource-constrained scenarios while task delay requirements are guaranteed. The method has the following advantages:
1. The task information enhancement module based on the MoE mixture-of-experts system significantly improves the expressive power of task-information features. Extensive experiments show that, compared with a neural network without the MoE module, it significantly increases the influence of delay-sensitivity features in offloading decisions, thereby increasing the degree of distinction between different types of computing tasks;
2. The reward mechanism of the deep reinforcement learning can be customized for a specific wireless network scenario and adaptively adjusted according to network characteristics: for energy-constrained scenarios it balances uniform energy distribution against task delay, and for energy-sufficient scenarios it plans computing resources to guarantee the delay of high-priority tasks;
3. The distributed computation offloading mechanism not only ensures the timeliness of offloading decisions but also reduces the burden of making them.
Drawings
FIG. 1 is a schematic diagram of a computational offload of the present invention;
FIG. 2 is a single offload decision flow chart of the present invention;
fig. 3 is a schematic diagram of a neural network according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is further described below. DRL-DDCO adopts deep reinforcement learning and, taking both task delay and network survivability into account, learns the mapping between the network environment and the benefit fed back by decisions, thereby realizing a personalized and differentiated target-driven computation offloading mechanism for different task types and network environments.
The overall technical scheme of the invention consists of a deep reinforcement learning framework oriented to the target-driven computation offloading strategy and a task information enhancement module based on a mixture-of-experts system (Mixture of Experts, MoE).
In the deep reinforcement learning framework oriented to target-driven computation offloading, a real wireless communication scenario is modeled as a network built from four types of nodes: computing-task issuing nodes (source nodes), computation-result destination nodes (target nodes), computing-server nodes (computing nodes), and common nodes that can provide relay service.
In the target-driven computation offloading mode, the transmission and the offloading of a computing task proceed cooperatively, that is, the task is offloaded while being forwarded; the forwarding node and the offloading policy are determined hop by hop, based on deep reinforcement learning, at the nodes along the path. In other words, the target-driven computation offloading mode designed by the invention realizes a working mode of offloading while forwarding and deciding hop by hop.
Specifically, in the target-driven computation offloading process, the mechanism first models the offloading process as a Markov Decision Process (MDP). On this basis, starting from the source node, the current node computes the next-hop selection and the offloading proportion through the deep-reinforcement-learned neural network, until the offloading task is completed. The input of the deep reinforcement learning network is the Markov state space, referred to simply as the observation state, and the output is the optimal next hop and offloading proportion under the corresponding observation state.
In one decision round, as shown in FIG. 1, all state transitions, offloading strategies and the corresponding rewards are stored and used to train the neural network in deep reinforcement learning that fits the mapping between states and action values. A converged neural network has the ability both to memorize and to generalize: during decision-making it can, from the current state, predict potential subsequent state transitions and search for the optimal offloading strategy. In this way, as shown in FIG. 2, the DRL-DDCO model gradually computes the optimal offloading strategy during the offloading process and corrects it according to the network environment and task information.
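As an illustration of this hop-by-hop decision loop, the sketch below collects the (S, A, R, S') transitions of one offloading round; `policy_fn` and `step_fn` are assumed interfaces standing in for the trained DDQN policy and the network environment, not parts of the patent.

```python
# Sketch of the "offload while forwarding" loop: at every hop the current observation
# is mapped to a (next_hop, offload_ratio) action, the environment applies it, and the
# transition is stored for deep reinforcement learning training.
def run_offloading_episode(initial_obs, policy_fn, step_fn):
    transitions = []                              # (S, A, R, S') tuples kept for training
    obs, done = initial_obs, False
    while not done:
        action = policy_fn(obs)                   # (next_hop, offload_ratio)
        next_obs, reward, done = step_fn(obs, action)
        transitions.append((obs, action, reward, next_obs))
        obs = next_obs
    return transitions
```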
The task information enhancement module based on the MoE mixture-of-experts system mainly strengthens the expression of delay-sensitivity features in different types of offloading tasks. By combining several expert sub-networks, the MoE system can reflect the different mappings between different task types and their corresponding decisions, and thus outputs more expressive task-information features. The decision system can then learn a unified action policy from these differentiated features and from the reward feedback of decisions, forming an intelligent offloading decision system for all task types.
The network structure of the invention is shown in FIG. 3; the main contents are as follows:
1. deep reinforcement learning framework oriented to target drive computing unloading strategy
In a wireless communication network, the computation offloading decision problem in the target-driven mode often faces difficulties such as background-traffic interference and the lack of centralized decision-making in a distributed network. To address these problems, the invention introduces the Double Deep Q-Network (DDQN) deep reinforcement learning algorithm, which enables the agent to adaptively learn the relationship between offloading decisions and the target benefit and thus formulate a reasonable offloading strategy.
(1) Reinforcement learning module
The overall model of DRL-DDCO conforms to the reinforcement learning paradigm. Before describing the reinforcement learning design, it should be noted that the target-driven computation offloading decision process can readily be shown to satisfy the Markov property; the proof is omitted in this invention.
a) Markov Decision Process (MDP)
For reinforcement learning, the target-driven computation offloading scenario first needs to be modeled as a Markov decision process, which means determining the state space (S), the action space (A), the transition probability (P) and the corresponding reward (R), i.e., the classical quadruple <S, A, P, R>. For this path-finding type of problem the transition probability P defaults to 1, because the network is assumed reliable and transmission failures or errors are outside the scope of the invention. The other main components are as follows:
S = (I_nearby, T, Topo)
A = (A_node, A_prop)
R = f(D, ΔVar)
where D denotes the delay required for the offloaded task to complete and ΔVar denotes the change in the variance of the total residual energy of the nodes neighboring the current node. The state space consists of three groups of features: (1) I_nearby denotes the locally collected network state, including the number of surrounding nodes and their computing resources and energy reserves; (2) T denotes the task information received by the node, including the data volume, the computation amount and other task characteristics; (3) Topo denotes the network topology information, including the Dijkstra distance from each node to the target node, which can be obtained through routing algorithms commonly used in networks. These three groups of features constitute a complete state space and uniquely determine the optimal offloading strategy; it can therefore be shown that the target-driven offloading process satisfies the Markov property, i.e., modeling the overall process as an MDP is valid.
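A minimal sketch of how such an observation state could be assembled is given below; all attribute names (`cpu_rate`, `dijkstra_distance_to`, and so on) are illustrative assumptions, not identifiers defined by the patent.

```python
# Illustrative construction of the observation state S = (I_nearby, T, Topo).
import numpy as np

def build_observation(node, task, target):
    i_nearby = np.array([[n.cpu_rate, n.energy] for n in node.neighbors]).ravel()
    t_info   = np.array([task.data_size, task.remaining_compute, *task.type_onehot])
    topo     = np.array([n.dijkstra_distance_to(target) for n in node.neighbors])
    return np.concatenate([i_nearby, t_info, topo])
```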
The action space consists of two sub-actions: (1) selecting the next-hop node A_node, i.e., the next short-term destination of the computing task, which may either perform computation offloading or merely forward the task as a relay node; (2) selecting the offloading proportion A_prop at the current node, with A_prop ∈ [0, 1]; if the node only acts as a relay, the offloading proportion equals 0. The offloading proportion is defined with respect to the initially required computation amount, which helps the agent distinguish between actions.
b) Reward setting
The reward is a function of the overall task delay and the change in energy variance.
For a task T_j (j a natural number), its delay D_j denotes the total time from task release, through computation, to delivery of the result to the target node; it includes not only the delay caused by computation offloading but also the data transmission delay and the signal propagation delay. The change in energy variance is the energy variance after offloading minus the energy variance before offloading: if the variance decreases and the regional energy distribution becomes more uniform, then ΔVar < 0 and the agent receives positive feedback, and vice versa. The overall reward is set as follows:
R(D_j, ΔVar) = -α · D_j · s_j - β · ΔVar
where s_j denotes the delay sensitivity of computing task T_j: the higher its value, the more urgent the task. Its specific value depends on the actual application scenario and is set according to the task urgency before training; for example, if task urgency is divided into 7 levels from 0 to 6, s_j takes the value between 0 and 6 corresponding to the task urgency. α and β are the reward coefficients of delay and variance change, respectively. Typically α + β = 1; α and β can be set from empirical values and adjusted according to the training effect during training.
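For illustration, the reward above can be computed as follows; the values α = 0.7 and β = 0.3 are arbitrary examples satisfying α + β = 1, not values prescribed by the patent.

```python
# Reward of one offloading episode: R = -alpha * D_j * s_j - beta * dVar,
# with s_j in {0, ..., 6} encoding the delay sensitivity (urgency) of the task.
def reward(delay_d, delta_var, s_j, alpha=0.7, beta=0.3):
    assert 0 <= s_j <= 6
    return -alpha * delay_d * s_j - beta * delta_var
```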
Assume that task T_j requires κ hops from the offloading source node to the final target node, denoted in order h_1, h_2, ..., h_κ, and that the delay introduced by task T_j at the h_k-th hop (1 ≤ k ≤ κ) is denoted D_j(h_k). The overall delay D_j of task T_j is then

D_j = Σ_{k=1}^{κ} D_j(h_k)

where D_j(h_k) refers to all delays generated at the h_k-th node on the offloading path and consists of four parts: transmission delay, propagation delay, offloading delay and waiting delay. The transmission delay D_j^tx(h_k) is

D_j^tx(h_k) = L_j(h_k) / v_tx(N(T_j, h_k))

where L_j(h_k) denotes the amount of data that task T_j has to transmit at the h_k-th hop, N(T_j, h_k) denotes the node corresponding to the h_k-th hop of task T_j, and the denominator v_tx(N(T_j, h_k)) is the transmission rate at that node.
Before introducing the task data amount, the yield is defined: the yield ρ_j is the ratio of the data volume of the computation result of task T_j to its initial data volume L_j^0, i.e., the compression ratio of the task data after computation is completed. On this basis, the data amount L_j(h_k) of the task at the h_k-th hop is composed of the not-yet-offloaded share of the original data together with the already-computed results compressed by the yield. Here R_j(h_k) denotes the computation amount of the task that remains uncompleted after the offloading actions taken so far, C_j^0 denotes the initial computation amount of the task, and λ denotes the remaining ratio of the data volume after offloading, so that λ · L_j^0 is the residual data volume of the task. When R_j(h_k) = 0, i.e., the computation of the task has been completed, the amount of data to be transmitted at h_k equals ρ_j · L_j^0.
The computation offloading delay D_j^off(h_k) is

D_j^off(h_k) = C_j(h_k) / f(N(T_j, h_k))

where C_j(h_k) denotes the computation amount offloaded at the h_k-th node, which is one of the decisions to be made, and f(N(T_j, h_k)) denotes the computing rate of the h_k-th hop node for task T_j.
The propagation delay D_j^prop(h_k) is

D_j^prop(h_k) = W(h_k, h_{k+1}) / v

where W(h_k, h_{k+1}) denotes the distance between the h_k-th and the h_{k+1}-th hop nodes, and v denotes the electromagnetic-wave propagation speed, which in this invention is generally taken as 2/3 of the speed of light.
Finally, the waiting delay D_j^wait(h_k): in a multi-task offloading scenario, a good offloading allocation algorithm needs to coordinate computing tasks according to how busy each node is in order to reduce the waiting delay. For this purpose the invention records the arrival time point TP. TP_j(h_k) denotes the time point at which task T_j arrives at the h_k-th node, and τ(N(T_j, h_k)) denotes the time, measured from TP_j(h_k), still needed by that node to finish offloading the computation of its previous task. When τ(N(T_j, h_k)) ≤ 0, the h_k-th node receives task T_j only after the offloading of the previous task has been completed, and the waiting delay is 0; otherwise the waiting delay equals τ(N(T_j, h_k)). After the computation of task T_j at the node is completed, the time point is updated:

TP_j(h_{k+1}) = TP_j(h_k) + D_j(h_k)
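The four per-hop delay components and the overall delay described above can be summarized in a short sketch; variable names and units are illustrative.

```python
# Per-hop delay D_j(h_k) = transmission + offloading + propagation + waiting delay.
LIGHT_SPEED = 3e8  # m/s

def hop_delay(data_bits, tx_rate, offload_ops, cpu_rate, distance_m, wait_s):
    d_tx   = data_bits / tx_rate                        # transmission delay
    d_off  = offload_ops / cpu_rate                     # computation-offloading delay
    d_prop = distance_m / (2.0 / 3.0 * LIGHT_SPEED)     # propagation delay at 2/3 c
    return d_tx + d_off + d_prop + max(0.0, wait_s)     # waiting delay clipped at 0

def total_delay(per_hop_delays):
    return sum(per_hop_delays)                          # D_j = sum over hops of D_j(h_k)
```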
The change in energy variance is given by

ΔVar(h_k) = Var_after - Var_before

where Λ denotes the set of nodes adjacent to N(T_j, h_k), H_l denotes the residual energy of node l, Var_before and Var_after are the variances of the residual energy over the node set Λ before and after the offloading action (computed with respect to the corresponding mean residual energy), and H(A_node) denotes the residual energy of the selected node. The total energy consumption of computing task T_j is specified as

E_j = Σ_{k=1}^{κ} ( E_j^off(h_k) + E_j^tx(h_k) )
where E_j^off(h_k) denotes the energy consumed by computation offloading at the h_k-th hop node. For readability, f(h_k) is used to denote the computing rate at the h_k-th hop node; the offloading energy is then expressed as

E_j^off(h_k) = ν · f(h_k)^2 · C_j(h_k)

that is, when the CPU computing rate of the h_k-th hop node is f(h_k), the energy consumed per unit of computation is ν · f(h_k)^2, and the energy consumed by offloading equals this rate multiplied by the offloaded computation amount C_j(h_k); the coefficient ν is usually set to 10^-11. The transmission energy consumption is set as follows:
The transmission energy E_j^tx(h_k) consumed at node N(T_j, h_k) equals the energy consumed per unit of transmission time multiplied by the transmission delay D_j^tx(h_k). In the formula for the transmission rate, N_0 denotes the variance of the complex Gaussian white noise, h denotes the channel gain, and W denotes the channel bandwidth.
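A sketch of the energy bookkeeping described above follows. The offloading energy uses the ν · f² · C form from the text; the Shannon-style transmission rate inside `tx_energy` is an assumption consistent with the variables W, h and N_0 named above, not a formula quoted from the patent.

```python
# Offloading energy, transmission energy, and the residual-energy variance change
# that feeds the reward.
import math
import statistics

def offload_energy(cpu_rate_f, offloaded_ops_c, nu=1e-11):
    return nu * cpu_rate_f ** 2 * offloaded_ops_c            # nu * f^2 * C

def tx_energy(tx_power_p, data_bits, bandwidth_w, channel_gain_h, noise_n0):
    # Assumed Shannon-style rate: W * log2(1 + P * h / N_0)
    rate = bandwidth_w * math.log2(1.0 + tx_power_p * channel_gain_h / noise_n0)
    return tx_power_p * data_bits / rate                     # power x transmission delay

def delta_var(energy_before, energy_after):
    # Var_after - Var_before over the residual energy of the neighboring nodes
    return statistics.pvariance(energy_after) - statistics.pvariance(energy_before)
```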
(2) DNN module
The neural network in deep reinforcement learning mainly serves to associate the large state space with actions and to predict the value of each action in a given state. Using the neural network fully exploits the deep connection between state features and action features in the target-driven offloading scenario and makes use of the network's memorization; its generalization ability also allows newly appearing computing tasks in the network to be handled, which solves the reuse problem of traditional algorithm models. The neural network structure in the decision framework is shown in the upper half of FIG. 3:
a) Input processing
The input of the neural network consists of the state-space features of the MDP, but the data needs some preprocessing before being input, because differences in scale and distribution between features affect the convergence of the training process; likewise, in the execution stage the data must be processed in the same way before being fed to the network. Numerical features such as available computing resources, task data volume and computation amount are normalized to values between 0 and 1 using their maximum and minimum values:

x' = (x - x_min) / (x_max - x_min)

For features whose absolute value is not intuitively meaningful on its own, such as residual energy, which only becomes meaningful in combination with the surrounding data, the invention applies discretization or binarization, for example marking nodes above the average energy level as 1 and nodes below the average level as 0.
b) Output processing
For the output of the neural network, the output content is that under the corresponding state, the Q value of each action is also a two-dimensional space due to the two-dimensional action space of the neural network. For complex target-driven calculation unloading scenes, the optimal action selection probably cannot perfectly follow normal distribution, so that unreasonable actions are extremely easy to obtain in the sampling process, and the method based on the dual-depth Q network (Double Deep Q Network, DDQN) is adopted to calculate the unloading proportion A of the continuous actions prop Discrete into 11 actions from 0.0 to 1.0, and in combination with the node size N, creates an 11 x N two-dimensional discrete action space. The motion space discretization has the advantages of not only increasing the anti-interference capability of the model, but also providing convenience for active motion screening.
For the training process of the decision model, the invention provides two modes: online training and offline training. Online training allows the offloading model to be adjusted in real time according to the characteristics of the scenario, but occupies more transmission resources; offline training does not require transmitting large amounts of training data, but reacts somewhat more slowly to the scenario.
For online training the invention adopts a CTDE (Centralized Training, Distributed Execution) training module, specifically:
in a complex computing offload scenario, the distributed training method (Distributed Training, DT) generates additional computing burden for each computing server, and because of the difference between computing resources and received tasks, the model convergence progress between computing servers is not uniform, and network parameters on nodes with faster convergence speed oscillate due to data provided by other lagging nodes. In other words, the problem that the convergence progress is difficult to unify in the distributed training mode may cause the overall network convergence situation to be blocked. Aiming at the situation, the invention provides a working mode of adopting Centralized Training Distributed Execution (CTDE), namely, during the working process, collected < S, A, R, S' > data sets are summarized to a certain central server by each computing node, and then the central server integrates global data and trains a neural network applicable to all computing nodes. Where S' represents the next state in the markov transition process. And finally, the network parameters are transmitted to each computing node by the central training server.
Of course, CTDE also has its own drawback: it requires propagating large amounts of data, including the transition records of every computing node and the iterated network parameters. In industrial Internet-of-Things scenarios where links are partly scarce, this may impose an additional burden on the transmission links. For this problem the invention also proposes an alternative, namely offline training. The training of the neural network then mainly takes place on an attached training server: based on the collected server information and network link information, the server locally simulates and records the target-driven offloading process, learns the optimal offloading strategy offline from the collected records, and broadcasts the parameters of the neural network to each real node once the network has converged.
The collected server information specifically refers to information about the service capability provided by a service node, typically including the node's computing power, transmission power, available energy, topology, and so on.
(4) Action space search optimization module
The Action Search Optimization (ASO) module is inspired by the tabu (forbidden) list of the ant colony algorithm: before the agent searches a redundant action space, obviously invalid action options are screened out by certain rules, and the corresponding action is then selected from the screened action set. The invalid-action screening rules are as follows:
(1) Screen out actions related to non-adjacent nodes, including nodes that are not adjacent in the original topology and nodes that have failed because their energy is exhausted; passing a computing task to such a node is obviously unreasonable, so such actions are filtered out;
(2) Screen out actions related to nodes already recorded on the offloading path, i.e., nodes that have already been traversed. Even if offloading at such a node is possible, the corresponding computation should have been done the first time the node was traversed, so the same node should not be traversed twice; this also prevents jumping back and forth;
(3) Screen out actions whose offloading amount exceeds the remaining computation of the task. The executed offloading action should match the actual computation amount, so taking the remaining computation as the criterion for offloading actions is reasonable.
Based on these three criteria, when selecting an action during training and when selecting the best action of the next state during updating, worthless actions can be removed by this screening, which reduces invalid records in the replay memory and improves its quality. The data in the replay memory is equivalent to a data set, and the quality of the data set directly determines how well the final neural network converges. Experimental results also show that, without restricting the selectable actions, the neural network hardly converges in a larger action space.
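The three screening rules can be expressed as a boolean mask over the 11 × N action grid, as in the following sketch; argument names are illustrative.

```python
# Invalid-action screening: drop non-adjacent or failed nodes (rule 1), nodes already
# on the offloading path (rule 2), and offload ratios that exceed the task's remaining
# computation (rule 3). Returns a boolean mask over the 11 x N grid.
import numpy as np

def action_mask(neighbors_alive, visited, remaining_ratio, num_ratios=11):
    n = len(neighbors_alive)
    mask = np.ones((num_ratios, n), dtype=bool)
    for j in range(n):
        if not neighbors_alive[j] or visited[j]:      # rules (1) and (2)
            mask[:, j] = False
    ratios = np.linspace(0.0, 1.0, num_ratios)
    mask[ratios > remaining_ratio, :] = False         # rule (3)
    return mask
```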
2. Task information enhancement module based on the MoE mixture-of-experts system
A mixture-of-experts system (MoE) is a neural network and also a type of hybrid network model. It is suited to problems in which different mapping relationships exist within one data set. The neural network corresponding to the MoE module is shown in the lower half of FIG. 3. The MoE model consists of two main parts: one part is a set of smaller sub-networks (Experts), referred to as expert networks, which can be specialized through training on part of the data set and thus accurately describe the mapping relationship in that part of the data; the other part is a gating network (Manager), which may be a DNN and finally outputs a probability distribution through a SoftMax layer.
The input of the gating network is the task-type features, i.e., the non-numerical features in the task that represent task priority or delay sensitivity, usually represented by One-Hot encoding; the input of the sub-networks is the common input features, i.e., all features of the task other than the task-type features, which are usually numerical. Together, the common input features and the task-type features form the input features of the neural network, namely the state features described above.
Each expert network in the sub-networks outputs a mapping result of the same dimension, and the gating network outputs the corresponding weight of each result. The difference from an ordinary neural network is that several models are trained on separated data, each model is called an expert, the gating network is used to choose which expert to use, and the actual output of the MoE is the combination of the expert outputs weighted by the gating network.
Each expert model can fit a different function (various linear or nonlinear functions), so the MoE model handles well the different mapping relationships caused by different data sources. In this invention, MoE solves the problem that the mapping between action value and input state differs across the decision processes of different task types.
Those skilled in the art will appreciate that, in this invention, the multiple expert networks of the sub-networks in the MoE model are used so that each fits the decision relationship of a single task type.
Finally, because the common features may be weakened while the task-type information is enhanced, the common input features are concatenated after the MoE output during the offloading decision to prevent loss of information. The concatenated result is used as the input of the double deep Q network.
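A compact sketch of the MoE task information enhancement module as described (gating on the task-type One-Hot feature, experts on the common features, weighted sum concatenated with the common features) is given below; PyTorch and all layer sizes are assumptions for illustration.

```python
# MoE task-information enhancement module feeding the double deep Q network.
import torch
import torch.nn as nn

class MoETaskEnhancer(nn.Module):
    def __init__(self, common_dim, type_dim, hidden_dim, num_experts):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(common_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(num_experts)])
        self.gate = nn.Sequential(nn.Linear(type_dim, num_experts), nn.Softmax(dim=-1))

    def forward(self, common_feat, type_onehot):
        weights = self.gate(type_onehot)                                      # (B, E)
        outputs = torch.stack([e(common_feat) for e in self.experts], dim=1)  # (B, E, H)
        enhanced = (weights.unsqueeze(-1) * outputs).sum(dim=1)               # weighted sum
        return torch.cat([enhanced, common_feat], dim=-1)                     # concat common features
```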
For the MoE module, the loss function is

L = Σ_i p_i · ℓ_i

where ℓ_i is the prediction loss of the i-th expert on the current sample and p_i is the proportion assigned to the i-th expert by the SoftMax layer of the gating network:

p_i = exp(z_i) / Σ_j exp(z_j)

where z_i is the gating network's output (logit) for the i-th expert.
The reason why the system lets each expert learn different parameters lies in the update gradients of this model. For an expert, the gradient of the loss with respect to its output o_i is

∂L/∂o_i = p_i · ∂ℓ_i/∂o_i

It can be seen from this gradient that, for a given sample, the larger the proportion p_i of an expert, the larger its update; this process is called "specialization". For the gating network, which controls the proportion p_i of each expert, the gradient with respect to the gating logit z_i is

∂L/∂z_i = p_i · (ℓ_i - Σ_j p_j · ℓ_j)

This gradient shows that, for a given sample, if the loss of an expert's output is higher than the weighted average loss, the corresponding p_i is reduced, which means that expert does not predict this part of the data well; conversely, if an expert predicts the mapping of this part of the data well, its output proportion p_i is increased. From these formulas, the MoE system achieves, through the differentiation of gradient update directions, the specialization of each expert and different expert combinations for different mapping relationships.
In a computation offloading scenario, different types of computing tasks also have inconsistent delay requirements. Some tasks have no delay requirement but still need to offload their computation because of their own resource constraints; other urgent tasks have very strict delay requirements and must always maintain a low task delay, similar to a forest-fire alarm task. The two could be distinguished in the same network by simply adding a One-Hot task-level feature, but the actual effect is negligible: for different task-state inputs the vast majority of the network still shares one set of parameters, so a small number of extra input features hardly changes the network output. The invention therefore proposes to use the MoE system, which can combine different expert-network outputs for different classes of tasks. Because the task features and network features each expert focuses on differ, for example, urgent computing tasks demand more computing resources whereas tasks with loose delay requirements pay more attention to the distribution of energy consumption, the MoE-based task information enhancement module enables the decision model to formulate personalized offloading strategies for different types of tasks.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the present invention, and it should be understood that the scope of the invention is not limited to these specific statements and embodiments. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of the claims of the present invention.
Claims (8)
1. A target-driven computation offloading method based on deep reinforcement learning, characterized in that a wireless communication scenario is modeled as a network comprising source nodes, target nodes, computing nodes and common nodes, wherein a source node is a node that issues a computing task, a target node is the destination node of the computation result, a computing node is a computing-server node, and a common node is a node that provides relay service;
the computation offloading process from a source node to a target node is modeled as a Markov decision process; starting from the source node, the current node computes the next-hop selection and the offloading strategy through a neural network trained by deep reinforcement learning, until the offloading task is completed; the input of the deep reinforcement learning network is the Markov state space, denoted the observation state, and the output is the optimal computation offloading strategy under the corresponding observation state;
the observation state includes task-type features and common input features: the task-type features are non-numerical features representing task priority or delay sensitivity, and the common input features are the remaining features after the task-type features are removed;
the method further comprises processing the observation state input to the deep reinforcement learning network with a task information enhancement module, specifically: the task information enhancement module is based on a MoE mixture-of-experts system, which comprises sub-networks and a gating network; the sub-networks comprise several expert networks, each corresponding to the offloading strategy of one task type, and their input is the common input features; the input of the gating network is the task-type features, and its output is the weight of each expert network; the outputs of the expert networks, weighted and summed with the corresponding weights, form the output of the MoE mixture-of-experts system.
2. The target-driven computation offloading method based on deep reinforcement learning according to claim 1, wherein the computation offloading strategy specifically consists of the proportion of the computing task to be offloaded at the current node and the corresponding next-hop node; if the current node is a common node, the offloading proportion is 0.
3. The target-driven computation offloading method based on deep reinforcement learning according to claim 2, wherein the reward of the Markov decision process is a function of the overall task delay and the change in energy variance.
4. The target-driven computation offloading method based on deep reinforcement learning according to claim 3, further comprising concatenating the common input features after the output of the MoE mixture-of-experts system.
5. The target-driven computation offloading method based on deep reinforcement learning according to claim 4, wherein the task-type features are represented by One-Hot encoding.
6. The target-driven computation offloading method based on deep reinforcement learning according to claim 5, wherein the deep reinforcement learning network discretizes the continuous offloading-proportion action A_prop into 11 actions from 0.0 to 1.0 and, combined with the node scale N, generates an 11 × N two-dimensional discrete action space; the best action screened from this two-dimensional discrete action space is the best next hop and computation offloading strategy.
7. The target-driven computation offloading method based on deep reinforcement learning according to claim 6, further comprising a central server, wherein the central server integrates the collected <S, A, R, S'> data of each computing node into global data and trains a deep learning neural network applicable to all computing nodes; the network parameters are then transmitted to each computing node;
where S represents the state space, A represents the action space, R represents the reward, and S' represents the next state space in the Markov transition process.
8. The target-driven computation offloading method based on deep reinforcement learning according to claim 7, further comprising a training server, configured to locally simulate and record the target-driven computation offloading process based on the collected state space of the current node, learn the optimal offloading strategy offline, and broadcast the parameters of the updated deep-reinforcement-learning neural network of the current node to the other nodes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110712564.8A CN113407345B (en) | 2021-06-25 | 2021-06-25 | Target driving calculation unloading method based on deep reinforcement learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110712564.8A CN113407345B (en) | 2021-06-25 | 2021-06-25 | Target driving calculation unloading method based on deep reinforcement learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113407345A CN113407345A (en) | 2021-09-17 |
CN113407345B true CN113407345B (en) | 2023-12-15 |
Family
ID=77679545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110712564.8A Active CN113407345B (en) | 2021-06-25 | 2021-06-25 | Target driving calculation unloading method based on deep reinforcement learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113407345B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114925778B (en) * | 2022-06-10 | 2024-08-09 | 安徽工业大学 | Reinforcement learning optimization method, method and device for large discrete action space |
CN115457781B (en) * | 2022-09-13 | 2023-07-11 | 内蒙古工业大学 | Intelligent traffic signal lamp control method based on multi-agent deep reinforcement learning |
CN115421929A (en) * | 2022-11-04 | 2022-12-02 | 北京大学 | MoE model training method, device, equipment and storage medium |
CN116149759B (en) * | 2023-04-20 | 2023-07-14 | 深圳市吉方工控有限公司 | UEFI (unified extensible firmware interface) drive unloading method and device, electronic equipment and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151077A (en) * | 2018-10-31 | 2019-01-04 | 电子科技大学 | One kind being based on goal-oriented calculating discharging method |
CN109257429A (en) * | 2018-09-25 | 2019-01-22 | 南京大学 | A kind of calculating unloading dispatching method based on deeply study |
CN109391681A (en) * | 2018-09-14 | 2019-02-26 | 重庆邮电大学 | V2X mobility prediction based on MEC unloads scheme with content caching |
CN111615121A (en) * | 2020-04-01 | 2020-09-01 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Ground mobile station multi-hop task calculation unloading processing method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11032735B2 (en) * | 2019-08-08 | 2021-06-08 | At&T Intellectual Property I, L.P. | Management of overload condition for 5G or other next generation wireless network |
US11977961B2 (en) * | 2019-10-17 | 2024-05-07 | Ambeent Wireless | Method and system for distribution of computational and storage capacity using a plurality of moving nodes in different localities: a new decentralized edge architecture |
-
2021
- 2021-06-25 CN CN202110712564.8A patent/CN113407345B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109391681A (en) * | 2018-09-14 | 2019-02-26 | 重庆邮电大学 | V2X mobility prediction based on MEC unloads scheme with content caching |
CN109257429A (en) * | 2018-09-25 | 2019-01-22 | 南京大学 | A kind of calculating unloading dispatching method based on deeply study |
CN109151077A (en) * | 2018-10-31 | 2019-01-04 | 电子科技大学 | One kind being based on goal-oriented calculating discharging method |
CN111615121A (en) * | 2020-04-01 | 2020-09-01 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Ground mobile station multi-hop task calculation unloading processing method |
Non-Patent Citations (3)
Title |
---|
Deep Reinforcement Learning Empowered Destination Driven Computation Offloading in IoT; Weizhong Wang et al.; 2020 IEEE 20th International Conference on Communication Technology (ICCT); pp. 834-840 *
Destination Driven Computation Offloading in Internet of Things; Yunkai Wei et al.; 2019 IEEE Global Communications Conference (GLOBECOM); pp. 1-6 *
DTN-based computation offloading algorithm for vehicular cloud computing; Li Bo; Huang Xin; Xue Duan; Hou Yanyan; Pei Yijian; Journal of Yunnan University (Natural Sciences Edition), No. 02; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113407345A (en) | 2021-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113407345B (en) | Target driving calculation unloading method based on deep reinforcement learning | |
Lei et al. | Deep reinforcement learning for autonomous internet of things: Model, applications and challenges | |
Qi et al. | Knowledge-driven service offloading decision for vehicular edge computing: A deep reinforcement learning approach | |
Lei et al. | A multi-action deep reinforcement learning framework for flexible Job-shop scheduling problem | |
Wang et al. | Adaptive and large-scale service composition based on deep reinforcement learning | |
CN114710439B (en) | Network energy consumption and throughput joint optimization routing method based on deep reinforcement learning | |
CN114896899B (en) | Multi-agent distributed decision method and system based on information interaction | |
CN117787186B (en) | Multi-target chip layout optimization method based on hierarchical reinforcement learning | |
CN112990485A (en) | Knowledge strategy selection method and device based on reinforcement learning | |
Li et al. | Multi-swarm cuckoo search algorithm with Q-learning model | |
CN116126534A (en) | Cloud resource dynamic expansion method and system | |
CN115225512B (en) | Multi-domain service chain active reconfiguration mechanism based on node load prediction | |
Yadav | E-MOGWO Algorithm for Computation Offloading in Fog Computing. | |
CN116128028A (en) | Efficient deep reinforcement learning algorithm for continuous decision space combination optimization | |
Tariq et al. | Dynamic Resource Allocation in IoT Enhanced by Digital Twins and Intelligent Reflecting Surfaces | |
Chouikhi et al. | Energy-Efficient Computation Offloading Based on Multi-Agent Deep Reinforcement Learning for Industrial Internet of Things Systems | |
Khan et al. | Communication in Multi-Agent Reinforcement Learning: A Survey | |
CN113435475A (en) | Multi-agent communication cooperation method | |
Park et al. | Learning with delayed payoffs in population games using Kullback-Leibler divergence regularization | |
Yao et al. | Performance Optimization in Serverless Edge Computing Environment using DRL-Based Function Offloading | |
Tang et al. | Joint Optimization of Vehicular Sensing and Vehicle Digital Twins Deployment for DT-Assisted IoVs | |
CN116718198B (en) | Unmanned aerial vehicle cluster path planning method and system based on time sequence knowledge graph | |
CN112464104B (en) | Implicit recommendation method and system based on network self-cooperation | |
CN118170154B (en) | Unmanned aerial vehicle cluster dynamic obstacle avoidance method based on multi-agent reinforcement learning | |
CN118784547A (en) | Route optimization method based on graph neural network and deep reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |