CN108021451A - Adaptive container migration method in a fog computing environment - Google Patents
Adaptive container migration method in a fog computing environment
- Publication number
- CN108021451A CN108021451A CN201711288967.4A CN201711288967A CN108021451A CN 108021451 A CN108021451 A CN 108021451A CN 201711288967 A CN201711288967 A CN 201711288967A CN 108021451 A CN108021451 A CN 108021451A
- Authority
- CN
- China
- Prior art keywords
- fog
- container
- computing environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present invention proposes an adaptive container migration method in a fog computing environment, comprising the following steps: establishing a container-based fog computing framework, in which containers reside on fog nodes, mobile applications reside with users, and user tasks execute inside containers; modeling the objectives of container migration in the fog computing scenario, the migration objectives including delay, power consumption, and migration overhead; defining the state space and action space, the reward function, and the Q iteration function; reducing the dimensionality of the state space with a deep neural network; and reducing the dimensionality of the action space by optimizing action selection. Finally, a prototype of the adaptive container migration system is implemented and the complete workflow is verified. The proposed adaptive container migration method plans the resources in the fog more effectively, reducing the delay between users and fog nodes as well as the energy consumption of fog nodes.
Description
Technical field
The invention belongs to the field of fog computing in computer networks and relates to fog computing, mobile edge computing, reinforcement learning, and deep reinforcement learning, and in particular to an adaptive container migration method in a fog computing environment.
Background technology
Fog computing has become a promising computing paradigm in recent years. It provides a flexible architecture that supports distributed, region-specific, domain-specific applications with cloud-like quality of service. Fog computing deploys a large number of lightweight computing and storage infrastructures (called fog nodes) near mobile users, so that mobile applications can be offloaded to suitable fog nodes, shortening the users' access delay to their applications. In addition, fog nodes are flexible and scalable and can support the mobility of mobile users.
Existing techniques that use deep reinforcement learning for container migration in fog computing environments are rare. Related research mainly targets virtual machine management in data centers, where the main approach is to live-migrate virtual machines onto a subset of nodes so that idle nodes can be shut down, thereby reducing power consumption. Migration plans are typically obtained by predicting resource demand to produce a pre-allocation scheme, or by heuristics for resource demand derived from regression analysis of historical data.
For task scheduling in fog computing scenarios, the prior art mainly considers a simple two-dimensional Markov decision process model: it models the distance between a single moving user and the fog nodes, obtains a simple state space, and decides whether to migrate a task by computing the value function of the migration within that state space.
Directly transplanting data-center virtual machine management methods to a fog computing environment raises a series of problems, including the curse of dimensionality caused by high-dimensional state and action spaces; moreover, user mobility is not considered during modeling, so the delay problem in mobile scenarios is not well solved.
Existing task scheduling methods for fog computing consider only a single user when building the state space, rather than the realistic multi-user case, and they assume that the transition probabilities between states are fixed, whereas in practice these transition probabilities are unknown.
To overcome these problems, the present invention proposes a container-based fog computing framework in which applications are placed in containers and containers are placed on fog nodes. To obtain optimal container scheduling, the invention treats the container migration problem as a stochastic optimization problem and, based on Q-learning and deep learning strategies, designs algorithms suited to the huge Markov decision process state space and action space, solving the curse of dimensionality. On this basis, the invention implements a prototype container migration system.
Summary of the invention
The present invention proposes an adaptive container migration method in a fog computing environment that plans the resources in the fog more effectively, reducing the delay between users and fog nodes as well as the energy consumption of fog nodes.
To achieve the above objectives and solve the problems noted above, we first propose a container-based fog computing framework, then model the delay, power consumption, and migration overhead of the fog computing scenario under this framework, design an adaptive container migration algorithm based on deep reinforcement learning, and finally implement a prototype of the adaptive container migration system and verify the complete workflow.
To achieve the above objectives, the present invention proposes an adaptive container migration method in a fog computing environment, comprising the following steps:
establishing a container-based fog computing framework, in which containers reside on fog nodes, mobile applications reside with users, and user tasks execute inside containers;
modeling the objectives of container migration in the fog computing scenario, the migration objectives including delay, power consumption, and migration overhead;
defining the state space and action space, the reward function, and the Q iteration function;
reducing the dimensionality of the state space with a deep neural network;
reducing the dimensionality of the action space by optimizing action selection.
Further, each fog node has position data and a total amount of computing resources, where the computing resources include CPU, memory, storage, and bandwidth resources.
Further, each container has a resource request amount and an actual resource allocation, and each mobile application has position data and a request for a container.
Further, the delay in the migration objective is calculated by the following formula:
d_total = d_net + k × d_comp,
where d_net is the overhead produced by data transmission in the network, is related to the distance between the user and the container, and is defined through the path loss; d_comp is the computation delay on the fog node, determined by the degree of violation of the fog node's service-level agreement.
Further, the power consumption of the fog nodes is defined as follows:
p_total = ∫ ( Σ_{i=1}^{m} ( p_idle + (p_max − p_idle) × u_i(t) ) ) dt,
where p_idle and p_max are the power consumption when the CPU utilization is 0% and 100%, and u_i(t) is the resource utilization of the fog node.
Further, the container migration overhead is defined as follows:
m_total = ∫ ( Σ_{i=1}^{n} ( m_mig_i × 1{ C_i.l(t) ≠ C_i.l(t−1) } ) ) dt,
where m_mig_i is the migration overhead of container C_i, including the transmission delay, and 1{·} is the Iverson bracket.
Further, the dimensionality reduction of the action space includes action exploitation: each time a state is obtained, the corresponding optimal Q value and its action are selected from the Q value list.
Further, the dimensionality reduction of the action space includes action exploration: each agent randomly selects a state, with the selection restricted; a return income is defined, and migration is encouraged when the income is positive.
Further, the dimensionality reduction of the state space stores all state information in a deep neural network, thereby reducing the state space dimension.
The adaptive container migration method in a fog computing environment proposed by the present invention has the following advantages:
(1) The method incorporates user mobility into the model. By modeling the delay between users and fog nodes, it effectively reduces the delay of user tasks in the fog computing environment and adapts well to it.
(2) The method makes no assumptions about transition probabilities. Through a model-free autonomous learning algorithm, it adaptively learns the action to take in each state and can therefore adapt to different fog computing environments.
(3) By using a deep neural network to convert the Q matrix over the state space into an approximating three-layer neural network, the method greatly reduces the dimensionality of the state space in the fog computing environment, solving the curse of dimensionality.
(4) By analyzing the specifics of fog computing, the method designs a return income function that guides action selection, effectively reducing the chance of choosing harmful actions, thereby accelerating the convergence of the whole algorithm and reducing unnecessary energy loss.
(5) The method packages applications in containers rather than traditional virtual machines, which effectively reduces migration overhead in a fog computing environment and better suits fog environments where all kinds of resources are relatively limited.
Brief description of the drawings
Fig. 1 shows the fog computing framework and user movement.
Fig. 2 shows the flow chart of the adaptive container migration method in a fog computing environment according to a preferred embodiment of the present invention.
Fig. 3 shows the average delay comparison for different ω1.
Fig. 4 shows the average energy consumption comparison for different ω1.
Fig. 5 shows the total overhead comparison for different ω1.
Fig. 6 shows the average delay comparison for different ω2.
Fig. 7 shows the average energy consumption comparison for different ω2.
Fig. 8 shows the total overhead comparison for different ω2.
Fig. 9 shows the CPU overhead comparison of containers and virtual machines under different loads.
Fig. 10 shows the migration overhead comparison of containers and virtual machines under different loads.
Embodiments
Embodiments of the present invention are described below with reference to the drawings, but the invention is not limited to the following embodiments. The advantages and features of the invention will become apparent from the following description and claims. Note that the drawings are highly simplified and not to scale; they serve only to conveniently and clearly illustrate the embodiments of the invention.
Fig. 1 shows the fog computing framework and user movement. Fig. 1 contains five layers: the user layer, the access network layer, the fog layer, the core network layer, and the cloud layer. The user layer consists of mobile users and the mobile applications they are running. A mobile application accesses the fog layer through the access network layer, incurring a certain delay. Fog nodes sit in the fog layer; containers sit on fog nodes, requesting the fog nodes' resources and causing the fog nodes to incur energy and other overheads. Fog nodes connect to the cloud layer through the core network layer. As a mobile user moves among fog nodes, the distance between the user and the requested container grows and the delay increases; at this point, whether the container should follow the mobile user and migrate becomes a decision problem. If a container is migrated, only the application itself and the runtime libraries it needs have to be moved; migrating a virtual machine, by contrast, requires moving the entire virtual machine system.
Referring to Fig. 2, which shows the flow chart of the adaptive container migration method in a fog computing environment according to a preferred embodiment of the present invention, the method comprises the following steps:
Step S100: establishing a container-based fog computing framework, in which containers reside on fog nodes, mobile applications reside with users, and user tasks execute inside containers;
Step S200: modeling the objectives of container migration in the fog computing scenario, the migration objectives including delay, power consumption, and migration overhead;
Step S300: defining the state space and action space, the reward function, and the Q iteration function;
Step S400: reducing the dimensionality of the state space with a deep neural network;
Step S500: reducing the dimensionality of the action space by optimizing action selection.
The present invention first establishes a container-based fog computing framework. Let F = {F1, F2, …, Fm}, C = {C1, C2, …, Cn}, and M = {M1, M2, …, Ml} denote the set of fog nodes, the set of containers, and the set of mobile applications, respectively. Containers reside on fog nodes, mobile applications reside with users, and user tasks execute inside containers. Each fog node has a position Fi.l and a total amount of computing resources Fi.c; the computing resources include CPU, memory, storage, and bandwidth resources, but since scheduling mainly concerns computing capability, only the CPU resource is considered here, and memory, storage, and bandwidth are assumed sufficient. Each container has a position Ci.l(t), a resource request amount Ci.r(t), and an actual resource allocation Ci.a(t). Each mobile application has a position Mi.l(t) and a container request Mi.r(t).
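As a concrete illustration of this framework, the entities F, C, and M can be written as plain Python data holders whose fields mirror the notation above; the class names and example values are illustrative, not code from the patent:

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    l: tuple   # position F_i.l
    c: float   # total computing resource F_i.c (only CPU is considered, per the text)

@dataclass
class Container:
    l: int     # C_i.l(t): index of the fog node currently hosting the container
    r: float   # C_i.r(t): resource request amount
    a: float   # C_i.a(t): actual resource allocation

@dataclass
class MobileApp:
    l: tuple   # M_i.l(t): position of the mobile user
    r: int     # M_i.r(t): index of the requested container

# F, C and M are then plain lists of these entities:
F = [FogNode(l=(0.0, 0.0), c=100.0), FogNode(l=(1.0, 1.0), c=100.0)]
C = [Container(l=0, r=20.0, a=20.0)]
M = [MobileApp(l=(0.9, 0.9), r=0)]
```

The state of the system at time t is fully described by these lists, which is what the state space defined below is built from.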
We then model the objectives of container migration in the fog computing scenario. The migration objectives comprise the following aspects:
1. Delay. The delay d_total has two parts, d_net and d_comp. d_net is the overhead produced by data transmission in the network; it depends mainly on the distance between the user and the container and is defined through the path loss, where f is the signal frequency, di(t) is the distance between the mobile application (with its mobile user) and the fog node hosting the corresponding container, hb is the height of the fog node, cm is 3 dB in the urban scenario, and ahm is defined as:
ahm = 3.2 (log10(11.75 hr))^2 − 4.97, f > 400 MHz,
where hr is the height of the user.
In addition, d_comp is the computation delay on the fog node; it can be shown that this delay is mainly determined by the degree of violation (SLAV) of the fog node's service-level agreement (SLA). From the above, d_total can be defined as:
d_total = d_net + k × d_comp.
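The full path-loss formula behind d_net does not appear above, but the stated parameters (signal frequency f, node height h_b, user height h_r, c_m = 3 dB in the urban scenario, and the a(h_r) correction) match the COST-231 Hata urban model, which the sketch below therefore assumes; treat it as an interpretation rather than the patent's exact formula:

```python
import math

def a_hm(h_r):
    """User-height correction given in the text:
    3.2 * (log10(11.75 * h_r))**2 - 4.97, valid for f > 400 MHz."""
    return 3.2 * (math.log10(11.75 * h_r)) ** 2 - 4.97

def path_loss_db(f_mhz, d_km, h_b=35.0, h_r=1.0, c_m=3.0):
    """Assumed COST-231 Hata urban path loss in dB (f in MHz, distance in km)."""
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_b)
            - a_hm(h_r)
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km)
            + c_m)

def d_total(d_net, d_comp, k=1.0):
    """Total delay exactly as defined in the text: d_total = d_net + k * d_comp."""
    return d_net + k * d_comp
```

How the path loss in dB is mapped to a network delay d_net is likewise not specified above, so only the loss itself and the combination formula are shown.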
2. power consumption.Power consumption ptotalRefer to the power consumption of all mist nodes.If mist node is in sleep pattern, then power consumption is near
Like being 0, in addition, the power consumption of mist node is defined as follows:
Wherein pidleAnd pmaxRefer to power consumption when cpu busy percentage is 0 and 100%.ui(t) be mist node the utilization of resources
Rate, is defined as follows:
3. container migration overhead.Container migration overhead is defined as follows:
Wherein mmigIt is CiMigration overhead, include propagation delay time.1 { 〃 } is Allen Iverson bracket.
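In a discrete-time simulation, the integral above reduces to a sum over time slots; the following is a minimal sketch with illustrative function names:

```python
def iverson(condition):
    """Iverson bracket 1{.}: 1 if the condition holds, else 0."""
    return 1 if condition else 0

def migration_overhead(positions, m_mig):
    """Discretized m_total = sum over t of sum_i m_mig_i * 1{C_i.l(t) != C_i.l(t-1)}.

    positions[t][i] is the fog node hosting container i in slot t;
    m_mig[i] is that container's per-migration overhead (incl. transmission delay).
    """
    total = 0.0
    for t in range(1, len(positions)):
        for i, cost in enumerate(m_mig):
            total += cost * iverson(positions[t][i] != positions[t - 1][i])
    return total
```

For example, with positions [[0, 1], [0, 2], [1, 2]] and per-container overheads [5.0, 3.0], container 1 migrates once and container 0 migrates once, for a total overhead of 8.0.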
4. problem model.So far the model of whole problem can be obtained
After modeling the above problem, the state space and action space are defined. Since the experiments mainly depend on Mi.l(t) and Ci.r(t), the system state is defined from these quantities; combined with C.l(t) and C.a(t), the state space of the system is obtained. According to the actual situation, the corresponding action space is obtained. Since the total overhead must be minimized, the reward function is defined as:
R_τ = -(d_total(τ) + ω1·p_total(τ) + ω2·m_total(τ)),
and the Q iteration function is then set.
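The reward function above plugs directly into a tabular Q iteration. The exact Q iteration function is not reproduced above, so the standard Q-learning update, with the α = 0.1 and γ = 0.9 given later in the text, is shown here as an assumed form:

```python
from collections import defaultdict

def reward(d_total, p_total, m_total, w1, w2):
    """R_tau = -(d_total(tau) + w1 * p_total(tau) + w2 * m_total(tau)), as defined above."""
    return -(d_total + w1 * p_total + w2 * m_total)

class QTable:
    """Tabular sketch of the Q iteration:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # unseen (state, action) pairs default to 0
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```

This table-based form is exactly what becomes infeasible for the huge state space, motivating the neural network approximation described next.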
Clearly, the huge state space causes the curse of dimensionality, so it is reduced with a deep neural network, and the action space is reduced by optimizing action selection. The main steps cover the following three aspects:
1. Action exploitation. A Q value list composed of optimal Q values is maintained. In the action exploitation step, each time a state is obtained, the corresponding optimal Q value and its action are selected from the Q value list.
2. action probe.In the action probe stage, each intelligent body all randomly chooses a state.In order not to allow selection
State too causes negative optimization at random, and certain limitation is made to selection.Definition return income:
WhenWhen, income is just, encouragement migrates.Migration probability:
Finally obtain action:
Wherein:
It is hereby achieved that random action selection algorithm:
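The return-income and migration-probability formulas are not reproduced above; the sketch below shows one plausible reading of the restricted exploration just described: explore with probability ε, but keep a randomly probed action only when its estimated return income is positive, otherwise fall back to exploitation (income_of is a hypothetical estimator, not an API from the patent):

```python
import random

def select_action(q_row, epsilon, income_of, rng=random):
    """q_row maps each action to its Q value for the current state."""
    greedy = max(q_row, key=q_row.get)      # action exploitation from the Q value list
    if rng.random() >= epsilon:
        return greedy
    probe = rng.choice(list(q_row))         # action exploration: random probe
    if income_of(probe) > 0:                # migration encouraged only when income is positive
        return probe
    return greedy                           # restricted: reject harmful random actions
```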
3. State space reduction with a deep neural network. All state information is stored in a deep neural network, thereby reducing the state space dimension. The training objective of the neural network is defined as:
L(θ_τ) = E[(y(τ) - Q(S_τ, A_τ; θ_τ))²],
where
y(τ) = E[(1 - α) Q(S_{τ-1}, A_{τ-1}; θ_{τ-1}) + α[R_{τ-1} + γ max Q(S_τ, A_τ; θ_{τ-1})] | S_{τ-1}, A_{τ-1}].
In addition, experience replay reduces the correlation between successive training steps. The training algorithm and the final adaptive container migration algorithm follow from the above.
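The experience replay buffer and the training target y(τ) can be sketched as follows; the replay buffer is standard, and training_target transcribes the y(τ) formula above (training the network itself, i.e. minimizing L(θ_τ), is omitted):

```python
import random
from collections import deque

class ReplayMemory:
    """Experience replay: sampling past transitions breaks the correlation
    between successive training steps, as described above."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)   # old transitions are evicted automatically

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, k):
        return random.sample(list(self.buf), min(k, len(self.buf)))

def training_target(q_prev, r_prev, max_q_next, alpha=0.1, gamma=0.9):
    """y(tau) = (1 - alpha) * Q(S_{tau-1}, A_{tau-1}; theta_{tau-1})
              + alpha * (R_{tau-1} + gamma * max_a Q(S_tau, a; theta_{tau-1}))."""
    return (1 - alpha) * q_prev + alpha * (r_prev + gamma * max_q_next)
```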
Finally, on this basis, a prototype container migration system is implemented.
The present invention is programmed in Python to simulate fog nodes, containers, and users. The fog node class includes a position initialization module, an inter-node distance module, a power consumption module, a CPU resource module, a container list module, a user list module, and a bandwidth module. The container class includes a numbering module, a CPU resource usage module, a position update module, a position module, a migration overhead module, and a size module. The user class includes a position initialization module, a position update module, a request initialization module, a request update module, a distance-to-fog-node module, and a delay module. The fog node, container, and user classes together form the environment of the whole system. In addition, the core is the Q-learning class, comprising an intelligent Agent class, a deep neural network Brain class, and a memory-replay Memory class. The Agent class includes modules to obtain the optimal action, obtain the current (state, action, reward, next state) tuple, replay memory, preprocess, and train the neural network; the Brain class includes the network structure module; and the Memory class includes storage, retrieval, and storage-format modules.
The experimental data come from a real taxi dataset of San Francisco; its latitudes range from 32.87 to 50.31 and its longitudes from -127.08 to -122.0. The region is divided and 7 fog nodes are deployed, and the movement of more than 200 users within it is considered. All mobile users are active, and 0 and 1 indicate whether they board or alight, representing the switching of application requests.
For the parameter settings, the CPU energy consumption is given by the following table:
CPU Utilization (%) | 0% | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
---|---|---|---|---|---|---|---|---|---|---|---|
HP ProLiant G4 | 86 | 89.4 | 92.6 | 96 | 99.5 | 102 | 106 | 108 | 112 | 114 | 117 |
Table 1: CPU utilization and power consumption
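Table 1 gives power only at 10% steps; intermediate utilizations can be interpolated, and the linear model used in the power formula above corresponds to interpolating straight between p_idle = 86 and p_max = 117 (watt units are assumed here; the table lists bare numbers):

```python
# Measured HP ProLiant G4 power from Table 1: utilization (%) -> power (assumed watts)
POWER_TABLE = {0: 86, 10: 89.4, 20: 92.6, 30: 96, 40: 99.5,
               50: 102, 60: 106, 70: 108, 80: 112, 90: 114, 100: 117}

def measured_power(util_pct):
    """Piecewise-linear interpolation of Table 1 for utilization in [0, 100]."""
    lo = int(util_pct // 10) * 10
    hi = min(lo + 10, 100)
    if lo == hi:
        return float(POWER_TABLE[lo])
    frac = (util_pct - lo) / 10.0
    return POWER_TABLE[lo] + frac * (POWER_TABLE[hi] - POWER_TABLE[lo])

def linear_model_power(util, p_idle=86.0, p_max=117.0):
    """The model used in the text: p = p_idle + (p_max - p_idle) * u, with u in [0, 1]."""
    return p_idle + (p_max - p_idle) * util
```

At 50% utilization the linear model gives 101.5 versus the measured 102, so the linear approximation is close for this machine.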
For the other parameters, f = 2.5 GHz (stated as 2.5 MHz in the original, but the ahm formula requires f > 400 MHz), hb = 35 m, hr = 1 m, cm = 3 dB, and
di(t) = |Mi.l(t) - Fj.l(t)|.
In addition, Xscale = 3, α = 0.1, γ = 0.9, ε = 0.9.
For comparison of the experimental results, two baseline algorithms were chosen. The algorithm of the present invention is called ODQL; discretizing traditional Q-learning yields the DBQL baseline, and an approximately greedy algorithm, Myopic, serves as the other baseline.
Comparing different ω1 yields the following results. Referring to Figs. 3 to 5, Fig. 3 shows the average delay comparison for different ω1, Fig. 4 the average energy consumption comparison, and Fig. 5 the total overhead comparison.
The invention also compares different ω2, with the following experimental results. Referring to Figs. 6 to 8, Fig. 6 shows the average delay comparison for different ω2, Fig. 7 the average energy consumption comparison, and Fig. 8 the total overhead comparison. The experimental results show that the proposed algorithm outperforms the other two algorithms.
In addition, a prototype container migration system was built. A desktop with an E5-1650 v2 @ 3.5 GHz CPU, 16.0 GB of memory, and Ubuntu 16.04 LTS serves as the fog node; a laptop with an i7-4600U @ 2.1 GHz CPU, 8.0 GB of memory, and Windows 10 simulates the user group. The Docker container engine is installed on the desktop, and through Docker an Nginx web server, a WordPress site with a MySQL database, a Ghost site with an SQLite3 database, and a static-page Docker container are installed. On the laptop, different Ubuntu containers are managed through the Docker engine; Webbench is installed in each Ubuntu container to simulate user requests, and the tc tool is used to vary the delay. Meanwhile, a virtual machine migration environment was set up on the same hardware for comparison. The comparison results are as follows. As shown in Fig. 9 and Fig. 10, Fig. 9 shows the CPU overhead comparison of containers and virtual machines under different loads, and Fig. 10 the migration overhead comparison. The experimental results show that, on identical hardware, the migration overhead of a container is much smaller than that of a virtual machine, so the proposed container adaptive migration system in a fog computing environment is highly effective.
Although the present invention is disclosed above with preferred embodiments, they do not limit the invention. Those of ordinary skill in the art to which the invention belongs can make various modifications and variations without departing from the spirit and scope of the invention. Therefore, the scope of protection of the invention is defined by the claims.
Claims (9)
1. An adaptive container migration method in a fog computing environment, characterized by comprising the following steps:
establishing a container-based fog computing framework, in which containers reside on fog nodes, mobile applications reside with users, and user tasks execute inside containers;
modeling the objectives of container migration in the fog computing scenario, the migration objectives including delay, power consumption, and migration overhead;
defining the state space and action space, the reward function, and the Q iteration function;
reducing the dimensionality of the state space with a deep neural network;
reducing the dimensionality of the action space by optimizing action selection.
2. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that each fog node has position data and a total amount of computing resources, where the computing resources include CPU, memory, storage, and bandwidth resources.
3. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that each container has a resource request amount and an actual resource allocation, and each mobile application has position data and a request for a container.
4. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the delay in the migration objective is calculated by the following formula:
d_total = d_net + k × d_comp,
where d_net is the overhead produced by data transmission in the network, is related to the distance between the user and the container, and is defined through the path loss; d_comp is the computation delay on the fog node, determined by the degree of violation of the fog node's service-level agreement.
5. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the power consumption of the fog nodes is defined as follows:
p_total = ∫ ( Σ_{i=1}^{m} ( p_idle + (p_max − p_idle) × u_i(t) ) ) dt,
where p_idle and p_max are the power consumption when the CPU utilization is 0% and 100%, and u_i(t) is the resource utilization of the fog node.
6. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the container migration overhead is defined as follows:
m_total = ∫ ( Σ_{i=1}^{n} ( m_mig_i × 1{ C_i.l(t) ≠ C_i.l(t−1) } ) ) dt,
where m_mig_i is the migration overhead of container C_i, including the transmission delay, and 1{·} is the Iverson bracket.
7. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the dimensionality reduction of the action space includes action exploitation: each time a state is obtained, the optimal Q value and the corresponding action are selected from the Q-value table.
8. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the dimensionality reduction of the action space includes action exploration: each agent randomly selects a state under a constrained selection, a reward is defined as the return, and migration is encouraged when the reward is positive.
9. The adaptive container migration method in a fog computing environment according to claim 1, characterized in that the dimensionality reduction of the state space stores all state information in a deep neural network, thereby reducing the dimension of the state space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711288967.4A CN108021451B (en) | 2017-12-07 | 2017-12-07 | Self-adaptive container migration method in fog computing environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108021451A true CN108021451A (en) | 2018-05-11 |
CN108021451B CN108021451B (en) | 2021-08-13 |
Family
ID=62079064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711288967.4A Active CN108021451B (en) | 2017-12-07 | 2017-12-07 | Self-adaptive container migration method in fog computing environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021451B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105930214A (en) * | 2016-04-22 | 2016-09-07 | 广东石油化工学院 | Q-learning-based hybrid cloud job scheduling method |
US20160359664A1 (en) * | 2015-06-08 | 2016-12-08 | Cisco Technology, Inc. | Virtualized things from physical objects for an internet of things integrated developer environment |
CN107249169A (en) * | 2017-05-31 | 2017-10-13 | 厦门大学 | Event-driven data collection method based on fog nodes in a vehicular network environment |
US20170337091A1 (en) * | 2016-05-17 | 2017-11-23 | International Business Machines Corporation | Allocating compute offload resources |
Non-Patent Citations (2)
Title |
---|
KULJEET KAUR: "Container as a service at the edge: Trade-off between energy efficiency and service availability at fog nano data centers", IEEE *
PAOLO BELLAVISTA: "Converging mobile edge computing, fog computing, and IoT quality requirements", IEEE *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109257429A (en) * | 2018-09-25 | 2019-01-22 | 南京大学 | A computation offloading scheduling method based on deep reinforcement learning |
CN109710404A (en) * | 2018-12-20 | 2019-05-03 | 上海交通大学 | Task scheduling method in distributed systems |
CN109710404B (en) * | 2018-12-20 | 2023-02-07 | 上海交通大学 | Task scheduling method in distributed system |
CN109819452A (en) * | 2018-12-29 | 2019-05-28 | 上海无线通信研究中心 | A wireless access network construction method based on fog computing virtual containers |
CN109819452B (en) * | 2018-12-29 | 2022-09-20 | 上海无线通信研究中心 | Wireless access network construction method based on fog computing virtual container |
CN109947567A (en) * | 2019-03-14 | 2019-06-28 | 深圳先进技术研究院 | A multi-agent reinforcement learning scheduling method, system and electronic device |
CN109975800B (en) * | 2019-04-01 | 2020-12-29 | 中国电子科技集团公司信息科学研究院 | Networking radar resource control method and device and computer readable storage medium |
CN109975800A (en) * | 2019-04-01 | 2019-07-05 | 中国电子科技集团公司信息科学研究院 | Networking radar resource control method and device, and computer-readable storage medium |
CN110233755A (en) * | 2019-06-03 | 2019-09-13 | 哈尔滨工程大学 | Computing resource and spectrum resource allocation method for fog computing in the Internet of Things |
CN110233755B (en) * | 2019-06-03 | 2022-02-25 | 哈尔滨工程大学 | Computing resource and frequency spectrum resource allocation method for fog computing in Internet of things |
CN110753383A (en) * | 2019-07-24 | 2020-02-04 | 北京工业大学 | Secure relay node selection method based on reinforcement learning in fog computing |
CN110441061A (en) * | 2019-08-13 | 2019-11-12 | 哈尔滨理工大学 | Planetary gear bearing life prediction method based on C-DRGAN and AD |
CN110535936B (en) * | 2019-08-27 | 2022-04-26 | 南京邮电大学 | Energy efficient fog computing migration method based on deep learning |
CN110535936A (en) * | 2019-08-27 | 2019-12-03 | 南京邮电大学 | An energy-efficient fog computing migration method based on deep learning |
CN110944375A (en) * | 2019-11-22 | 2020-03-31 | 北京交通大学 | Method for allocating resources of wireless information and energy simultaneous transmission assisted fog computing network |
CN111885137A (en) * | 2020-07-15 | 2020-11-03 | 国网河南省电力公司信息通信公司 | Edge container resource allocation method based on deep reinforcement learning |
CN111885137B (en) * | 2020-07-15 | 2022-08-02 | 国网河南省电力公司信息通信公司 | Edge container resource allocation method based on deep reinforcement learning |
CN113656170A (en) * | 2021-07-27 | 2021-11-16 | 华南理工大学 | Intelligent equipment fault diagnosis method and system based on fog computing |
Also Published As
Publication number | Publication date |
---|---|
CN108021451B (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108021451A (en) | An adaptive container migration method in a fog computing environment | |
Fan et al. | Digital twin empowered mobile edge computing for intelligent vehicular lane-changing | |
Li et al. | An end-to-end load balancer based on deep learning for vehicular network traffic control | |
Wang et al. | A deep learning based energy-efficient computational offloading method in Internet of vehicles | |
CN111585811B (en) | Virtual optical network mapping method based on multi-agent deep reinforcement learning | |
Zhang et al. | Novel edge caching approach based on multi-agent deep reinforcement learning for internet of vehicles | |
Xu et al. | Joint task offloading and resource optimization in noma-based vehicular edge computing: A game-theoretic drl approach | |
Wang et al. | Collaborative edge computing for social internet of vehicles to alleviate traffic congestion | |
Li et al. | MEC-based dynamic controller placement in SD-IoV: A deep reinforcement learning approach | |
Zhang et al. | A reinforcement learning based task offloading scheme for vehicular edge computing network | |
Jian et al. | A high-efficiency learning model for virtual machine placement in mobile edge computing | |
CN113887748A (en) | Online federal learning task allocation method and device, and federal learning method and system | |
Tian et al. | Spatio-temporal position prediction model for mobile users based on LSTM | |
Tao et al. | DRL-Driven Digital Twin Function Virtualization for Adaptive Service Response in 6G Networks | |
Li et al. | Task computation offloading for multi-access edge computing via attention communication deep reinforcement learning | |
Fu et al. | Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT | |
Yan et al. | Service caching for meteorological emergency decision-making in cloud-edge computing | |
CN114916013B (en) | Edge task unloading delay optimization method, system and medium based on vehicle track prediction | |
CN116367231A (en) | Edge computing Internet of vehicles resource management joint optimization method based on DDPG algorithm | |
Liu et al. | Resource allocation via edge cooperation in digital twin assisted Internet of Vehicle | |
Gu et al. | AI-Enhanced Cloud-Edge-Terminal Collaborative Network: Survey, Applications, and Future Directions | |
CN113572647B (en) | Block chain-edge calculation combined system based on reinforcement learning | |
Yu et al. | Deep reinforcement learning for task allocation in UAV-enabled mobile edge computing | |
CN115904731A (en) | Edge cooperative type copy placement method | |
Cui et al. | Resource-Efficient DNN Training and Inference for Heterogeneous Edge Intelligence in 6G |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||