CN108021451B - Self-adaptive container migration method in fog computing environment - Google Patents
- Publication number
- CN108021451B (application CN201711288967.4A)
- Authority
- CN
- China
- Prior art keywords
- container
- fog
- migration
- computing environment
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention provides an adaptive container migration method in a fog computing environment, comprising the following steps: establishing a container-based fog computing framework, wherein the containers run on fog nodes, the mobile applications reside with the users, and the users' tasks are executed in the containers; modeling the objectives of container migration in the fog computing scenario, the migration objectives comprising time delay, power consumption and migration overhead; setting a state space and an action space, defining a reward function and setting a Q iteration function; reducing the dimensionality of the state space through a deep neural network; and reducing the dimensionality of the action space by optimizing action selection. Finally, a prototype of the container adaptive migration system is implemented and the whole process is verified. The adaptive container migration method in the fog computing environment provided by the invention can better plan resources in fog computing, reduce the time delay between users and fog nodes, and reduce the energy consumption overhead of the fog nodes.
Description
Technical Field
The invention belongs to the field of fog computing in computer networks, relates to methods such as fog computing, mobile edge computing, reinforcement learning and deep reinforcement learning, and particularly relates to an adaptive container migration method in a fog computing environment.
Background
Fog computing has become a promising computing paradigm in recent years, providing a flexible architecture to support distributed, area-specific and domain-specific applications with cloud-like quality of service. Fog computing deploys a large amount of lightweight computing and storage infrastructure (called fog nodes) near mobile users. Mobile applications can therefore be offloaded to suitable fog nodes, shortening the users' access delay to the applications. In addition, fog nodes are flexible and scalable, and can support the mobility of mobile users.
Existing techniques for container migration in a fog computing environment using deep reinforcement learning are rare. The related research mainly targets virtual machine management in data centers, where the main approach is to consolidate virtual machines onto a subset of nodes through live migration so that idle nodes can be shut down, thereby reducing power consumption. The migration schemes are obtained mainly by predicting resource requirements to produce a pre-allocation plan, or by heuristic algorithms that estimate resource requirements through regression analysis of historical information.
For the task scheduling problem in the fog computing scenario, the prior art mainly considers a simple two-dimensional Markov decision process model: as a single user moves, the distance between the user and the fog node is modeled to obtain a simple state space, and whether a task is migrated is decided by computing a migration value function over that state space.
Directly transplanting data-center virtual machine management methods to a fog computing environment raises a series of problems, including the curse of dimensionality caused by high-dimensional state and action spaces; moreover, the mobility of mobile users is not considered in the modeling process, so the time delay problem in a mobile scenario cannot be solved well.
Existing task scheduling methods for fog computing only consider the single-user case when establishing the state space, not the realistic multi-user case. They also assume that the transition probabilities between states are fixed, whereas in practice these transition probabilities are unknown.
To overcome the above problems, the present invention proposes a container-based fog computing framework, placing applications in containers and containers on fog nodes. To achieve optimal container scheduling, the container migration problem is treated as a stochastic optimization problem, and an algorithm suitable for the large state and action spaces of the Markov decision process is designed based on Q-learning and deep learning strategies, thereby solving the curse-of-dimensionality problem. On this basis, the invention implements a prototype system for container migration.
Disclosure of Invention
The invention provides an adaptive container migration method in a fog computing environment, which can better plan resources in fog computing, reduce the time delay between users and fog nodes, and reduce the energy consumption overhead of the fog nodes.
To achieve the above purpose and solve the above problems, we first propose a container-based fog computing framework, then model the time delay, power consumption and migration overhead in the fog computing scenario under this framework, design an adaptive container migration algorithm based on deep reinforcement learning, and finally implement a prototype of the container adaptive migration system and verify the whole process.
In order to achieve the above object, the present invention provides an adaptive container migration method in a fog computing environment, including the following steps:
establishing a container-based fog computing framework, wherein the containers run on fog nodes, the mobile applications reside with the users, and the users' tasks are executed in the containers;
modeling the objectives of container migration in the fog computing scenario, the migration objectives comprising time delay, power consumption and migration overhead;
setting a state space and an action space, defining a reward function and setting a Q iteration function;
reducing the dimensionality of the state space through a deep neural network;
and reducing the dimensionality of the action space by optimizing action selection.
Furthermore, each fog node has position data and a total amount of computing resources, wherein the computing resources include CPU resources, memory resources, storage resources, and bandwidth resources.
Further, each container has a resource request amount and an actual resource allocation amount, and each mobile application has location data and request data for the container.
Further, the time delay in the migration target is calculated by the following formula:
dtotal = dnet + k × dcomp,
where dnet is the overhead generated by data transmission in the network, which is related to the distance between the user and the container and is defined by the path loss, and dcomp is the computation delay on the fog node, determined by the degree of violation of the fog node's service level agreement.
Further, the power consumption of a fog node follows a linear CPU power model:
pi(t) = pidle + (pmax - pidle) × ui(t),
where pidle and pmax refer to the power consumption when the CPU utilization is 0 and 100%, respectively, and ui(t) is the resource utilization of the fog node.
Further, the container migration overhead is defined as the total cost of the migrations performed, where mmig is the migration overhead of container Ci, including the transmission delay, and 1{·} is the Iverson bracket indicating whether the container is migrated.
Furthermore, the dimensionality reduction of the action space comprises action utilization: after the state is obtained each time, the corresponding optimal Q value and the corresponding action are selected from the Q value list.
Furthermore, the dimensionality reduction of the action space comprises action exploration: the agent randomly selects a state each time, restricts the selectable actions, and defines a return benefit, encouraging migration when the benefit is positive.
Furthermore, the dimensionality reduction of the state space stores all state information in a deep neural network, thereby reducing the dimensionality of the state space.
The self-adaptive container migration method under the fog computing environment has the following beneficial effects:
(1) The method incorporates user mobility into the model and models the time delay between the user and the fog node, thereby effectively reducing the delay of user tasks in the fog computing environment and adapting better to it.
(2) The method makes no assumptions about the transition probabilities; through a model-free autonomous learning algorithm it adaptively learns the actions to take in different states, and can thus be applied to varied fog computing environments.
(3) The method uses a deep neural network to replace the Q matrix storing the state space with an equivalent three-layer neural network, which greatly reduces the dimensionality of the state space in the fog computing environment and solves the curse-of-dimensionality problem.
(4) Based on an analysis of the specific conditions of fog computing, the method sets a return benefit function favorable to action selection, which effectively reduces the selection of detrimental actions, thereby speeding up the convergence of the whole algorithm and reducing unnecessary energy consumption.
(5) The method encapsulates applications in containers instead of traditional virtual machines, which effectively reduces the overhead incurred during migration and is better suited to the resource-constrained fog computing environment.
Drawings
FIG. 1 shows the fog computing framework and user movement.
Fig. 2 is a flow chart of an adaptive container migration method in a fog computing environment according to a preferred embodiment of the present invention.
FIG. 3 compares the average delay under different ω1 values.
FIG. 4 compares the average energy consumption under different ω1 values.
FIG. 5 compares the overhead under different ω1 values.
FIG. 6 compares the average delay under different ω2 values.
FIG. 7 compares the average energy consumption under different ω2 values.
FIG. 8 compares the overhead under different ω2 values.
FIG. 9 is a graph comparing the CPU overhead of a container and a virtual machine under different load conditions.
FIG. 10 is a graph comparing migration costs of containers and virtual machines under different load conditions.
Detailed Description
The following description will be given with reference to the accompanying drawings, but the present invention is not limited to the following embodiments. Advantages and features of the present invention will become apparent from the following description and from the claims. It is noted that the drawings are in greatly simplified form and that non-precision ratios are used for convenience and clarity only to aid in the description of the embodiments of the invention.
FIG. 1 shows the fog computing framework and user movement. Five layers are included in FIG. 1: the user layer, the access network layer, the fog layer, the core network layer and the cloud layer. The user layer includes the mobile users and the mobile applications running on them. A mobile application accesses the fog layer through the access network layer, incurring a certain time delay. The fog nodes are located in the fog layer; the containers are located on the fog nodes and request the fog nodes' resources, so the fog nodes incur overheads such as energy consumption. The fog nodes are connected to the cloud layer through the core network layer. As a mobile user moves among the fog nodes, the distance between the user and the requested container grows, increasing the time delay, so whether the container should migrate along with the mobile user becomes a decision problem. If a container is migrated, only the application it contains and the runtime libraries the application needs must be transferred, whereas migrating a virtual machine requires transferring the whole virtual machine system.
Referring to fig. 2, fig. 2 is a flow chart illustrating a method for adaptive container migration in a fog computing environment according to a preferred embodiment of the invention. The invention provides a self-adaptive container migration method in a fog computing environment, which comprises the following steps:
step S100: establishing a container-based fog computing framework, wherein the containers run on fog nodes, the mobile applications reside with the users, and the users' tasks are executed in the containers;
step S200: modeling the objectives of container migration in the fog computing scenario, the migration objectives comprising time delay, power consumption and migration overhead;
step S300: setting a state space and an action space, defining a reward function and setting a Q iteration function;
step S400: reducing the dimensionality of the state space through a deep neural network;
step S500: and reducing the dimensionality of the action space by optimizing action selection.
The invention first establishes a container-based fog computing framework. Let F = {F1, F2, …, Fm}, C = {C1, C2, …, Cn} and M = {M1, M2, …, Ml} represent the set of fog nodes, the set of containers and the set of mobile applications, respectively. The containers are located on the fog nodes, the mobile applications reside with the users, and the users' tasks are executed in the containers. Each fog node has a position Fi.l and a total amount of computing resources Fi.c; the computing resources comprise CPU, memory, storage and bandwidth resources. Since scheduling mainly concerns computing capacity, only CPU resources are considered here, and memory, storage and bandwidth resources are assumed to be sufficient. Each container has a position Ci.l(t), a resource request amount Ci.r(t) and an actual resource allocation Ci.a(t). Furthermore, each mobile application has a position Mi.l(t) and a container request Mi.r(t).
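The notation above can be sketched as simple Python data structures. This is an illustrative sketch only; the field names (`loc`, `cpu`, `req`, `alloc`) are placeholders for Fi.l, Fi.c, Ci.r(t) and Ci.a(t), not identifiers taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FogNode:
    loc: float                                   # F_i.l  - position of the fog node
    cpu: float                                   # F_i.c  - total CPU capacity
    containers: List[int] = field(default_factory=list)

@dataclass
class Container:
    loc: float                                   # C_i.l(t) - position (hosting node)
    req: float                                   # C_i.r(t) - requested CPU resources
    alloc: float                                 # C_i.a(t) - actually allocated CPU

@dataclass
class MobileApp:
    loc: float                                   # M_i.l(t) - user position
    req: float                                   # M_i.r(t) - request to the container

# A toy scenario: one fog node hosting one container, and one moving user
node = FogNode(loc=0.0, cpu=100.0)
box = Container(loc=0.0, req=20.0, alloc=20.0)
app = MobileApp(loc=3.5, req=1.0)
node.containers.append(0)
```

Only CPU capacity is modeled, mirroring the text's simplification that memory, storage and bandwidth are sufficient.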
We then model the objectives of container migration in the fog computing scenario. The migration objectives mainly comprise the following aspects:
1. Time delay. The time delay dtotal comprises two parts, dnet and dcomp. dnet is the overhead generated by data transmission in the network, mainly related to the distance between the user and the container, and can be defined by the path loss (COST-231 Hata model):
dnet = 46.3 + 33.9·log10(f) - 13.82·log10(hb) - ahm + (44.9 - 6.55·log10(hb))·log10(di(t)) + cm,
where f is the signal frequency, di(t) is the distance between the mobile application on the mobile user and the fog node hosting the corresponding container, hb is the height of the fog node, cm = 3 dB in the urban scenario, and ahm is defined by:
ahm = 3.20·(log10(11.75·hr))² - 4.97, f > 400 MHz,
where hr is the user height.
In addition, dcomp is the computation delay at the fog node, which has been shown to be determined mainly by the fog node's degree of service level agreement (SLA) violation (SLAV). The total delay dtotal can then be defined as:
dtotal = dnet + k × dcomp.
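A minimal sketch of the delay model. The COST-231 Hata form of the path loss is an assumption inferred from the parameters named above (f, di(t), hb, hr, cm and the ahm correction, which matches the COST-231 large-city formula); the patent's exact dnet expression was an image that did not survive extraction.

```python
import math

def a_hm(hr: float) -> float:
    # Mobile-antenna correction for large cities, valid for f > 400 MHz (as in the text)
    return 3.20 * (math.log10(11.75 * hr)) ** 2 - 4.97

def d_net(f_mhz: float, d_km: float, hb: float, hr: float, cm: float = 3.0) -> float:
    # Assumed COST-231 Hata path loss (dB): grows with frequency and distance,
    # shrinks with base-station (fog node) height hb; cm = 3 dB for urban areas.
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(hb)
            - a_hm(hr)
            + (44.9 - 6.55 * math.log10(hb)) * math.log10(d_km)
            + cm)

def d_total(net: float, comp: float, k: float = 1.0) -> float:
    # d_total = d_net + k * d_comp
    return net + k * comp

# Path loss grows as the user moves away from the fog node hosting its container
loss_near = d_net(f_mhz=2000.0, d_km=0.5, hb=35.0, hr=1.0)
loss_far = d_net(f_mhz=2000.0, d_km=2.0, hb=35.0, hr=1.0)
```

The increasing loss with distance is what makes "follow the user" migration attractive in the first place.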
2. Power consumption. The power consumption ptotal refers to the total power consumption of all fog nodes. If a fog node is in sleep mode, its power consumption is approximately 0; otherwise, the power consumption of a fog node follows the linear CPU power model:
pi(t) = pidle + (pmax - pidle) × ui(t),
where pidle and pmax refer to the power consumption when the CPU utilization is 0 and 100%, respectively, and ui(t) is the resource utilization of the fog node.
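The power model can be sketched in a few lines. The linear interpolation between pidle and pmax is an assumption (the original formula was an image); the default endpoints are the HP ProLiant G4 values from Table 1 below.

```python
def node_power(u: float, p_idle: float = 86.0, p_max: float = 117.0,
               sleeping: bool = False) -> float:
    """Assumed linear CPU power model: p_idle at 0% utilization, p_max at
    100%, and ~0 W when the fog node is in sleep mode (as stated in the text)."""
    if sleeping:
        return 0.0
    return p_idle + (p_max - p_idle) * u

def p_total(utilizations) -> float:
    # Total power of all awake fog nodes
    return sum(node_power(u) for u in utilizations)
```

Consolidating containers so that some nodes can sleep is exactly what drives the power term of the objective.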
3. Container migration overhead. The container migration overhead mtotal accumulates, over the containers migrated in each period, the per-container cost mmig, which is the migration overhead of Ci and includes the transmission delay; 1{·} is the Iverson bracket indicating whether a container is migrated.
4. Problem model. Combining the above objectives yields the model of the whole problem: minimizing the weighted sum of the time delay, the power consumption and the migration overhead over time.
After modeling the above problem, the state space and the action space are set. Since the decision mainly depends on Mi.l(t) and Ci.r(t), the user-side state is defined from the positions and container requests of the mobile applications; combining this with C.l(t) and C.a(t) yields the state space of the system. According to the practical situation, the action space consists of the possible placements of the containers on the fog nodes.
since a minimum of overhead is required, the reward function is defined as:
Rτ=-(dtotal(τ)+ω1ptotal(τ)+ω2mtotal(τ))
Then the Q iteration function is set:
Q(Sτ, Aτ) ← (1 - α)·Q(Sτ, Aτ) + α·[Rτ + γ·maxA Q(Sτ+1, A)].
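The reward and a tabular form of the Q iteration can be sketched as follows. The Q iteration formula itself was an image in the original; the update used here is the standard Q-learning form implied by the training target y(τ) given later in the text (same α, γ and max structure), and the dictionary-keyed Q table is an illustrative simplification.

```python
def reward(d_total: float, p_total: float, m_total: float,
           w1: float, w2: float) -> float:
    # R_tau = -(d_total + w1 * p_total + w2 * m_total)
    return -(d_total + w1 * p_total + w2 * m_total)

def q_update(Q: dict, s, a, r: float, s_next, actions,
             alpha: float = 0.1, gamma: float = 0.9) -> None:
    # Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * [r + gamma * max_a' Q(s',a')]
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)

Q = {}
r = reward(d_total=10.0, p_total=100.0, m_total=2.0, w1=0.05, w2=1.0)
q_update(Q, s="s0", a="stay", r=r, s_next="s1", actions=["stay", "migrate"])
```

With an empty table, one update moves Q(s0, stay) to α·Rτ, i.e. a fraction of the (negative) observed cost.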
obviously, a huge state space brings dimensionality disasters, so the dimensionality reduction is carried out on the state space through a deep neural network, and the dimensionality reduction of an action space is realized through the optimization of action selection. The method mainly comprises the following three steps:
1. Action utilization. A list of optimal Q values is maintained, each entry recording a state together with its optimal Q value and the corresponding action. In the action utilization step, after the state is obtained each time, the corresponding optimal Q value and action are selected from the Q value list.
2. Action exploration. In the action exploration phase, the agent randomly selects a state each time. So that the random selection does not lead to negative optimization, certain restrictions are placed on action selection: a return benefit is defined, and migration is encouraged only when the benefit is positive. From this a restricted random action selection algorithm is derived.
3. Reducing the state space with the deep neural network. All state information is stored in the deep neural network, thereby reducing the state space dimension. The training target of the neural network is defined as:
L(θτ) = E[(y(τ) - Q(Sτ, Aτ; θτ))²]
where:
y(τ) = E[(1-α)·Q(Sτ-1, Aτ-1; θτ-1) + α·[Rτ-1 + γ·maxA Q(Sτ, A; θτ-1)] | Sτ-1, Aτ-1].
furthermore, through empirical playback, the association between each training is reduced. The training algorithm is as follows:
and a final container adaptive migration algorithm:
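The loss L(θ) and experience replay can be sketched with a tiny linear function approximator in plain Python. This is an assumption-laden stand-in: the patent uses a three-layer neural network, while here `phi` is a hand-picked feature map and the gradient step is written out by hand, purely to show the shape of the replay-based training loop.

```python
import random
from collections import deque

class ReplayMemory:
    """Stores (s, a, r, s') tuples and samples uncorrelated minibatches."""
    def __init__(self, capacity=1000):
        self.buf = deque(maxlen=capacity)
    def store(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))
    def sample(self, batch_size):
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def train_step(theta, memory, actions, phi, alpha=0.1, gamma=0.9, lr=0.01):
    """One SGD step on L(theta) = E[(y - Q(s,a;theta))^2], with the target
    y = (1-alpha)*Q(s,a) + alpha*[r + gamma * max_a' Q(s',a')], as in the text.
    Q(s,a;theta) is a linear approximator theta . phi(s,a)."""
    q = lambda s, a: sum(t * x for t, x in zip(theta, phi(s, a)))
    for s, a, r, s_next in memory.sample(4):
        y = (1 - alpha) * q(s, a) + alpha * (r + gamma * max(q(s_next, b) for b in actions))
        err = y - q(s, a)
        feats = phi(s, a)
        for i in range(len(theta)):      # gradient of the squared error w.r.t. theta
            theta[i] += lr * err * feats[i]
    return theta

phi = lambda s, a: [1.0, float(s), 1.0 if a == "migrate" else 0.0]
mem = ReplayMemory()
mem.store(0.5, "stay", -17.0, 0.6)
theta = train_step([0.0, 0.0, 0.0], mem, ["stay", "migrate"], phi)
```

Sampling past transitions out of order is what breaks the temporal correlation the text refers to.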
finally, on the basis, a set of prototype system for container migration is realized.
The invention is implemented in Python, simulating the fog nodes, containers and users. The fog node class comprises a position initialization module, a fog node distance calculation module, a power consumption calculation module, a CPU resource module, a container list module, a user list module and a bandwidth module. The container class comprises a number maintenance module, a CPU resource usage module, a position update module, a position module, a migration overhead module and a size module. The user class comprises a position initialization module, a position update module, a request initialization module, a request update module, a fog node distance calculation module and a time delay module. The fog nodes, containers and users make up the environment of the entire system. The core part is the Q-learning component, comprising an Agent class for the intelligent agent, a Brain class for the deep neural network part, and a Memory class for experience replay. The Agent class comprises an optimal action acquisition module, a module for obtaining the current (state, action, reward, next state) tuple, a memory replay module, a preprocessing module and a neural network training module; the Brain class comprises the network structure module; and the Memory class comprises a memory storage module, a memory extraction module and a memory table.
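The Agent/Brain/Memory decomposition described above can be sketched as a class skeleton. All method names and bodies here are illustrative placeholders for the enumerated modules, not the patent's actual code.

```python
class Brain:
    """Deep-neural-network part: holds the network structure module."""
    def predict(self, state):
        return [0.0, 0.0]                  # placeholder Q values, one per action
    def train(self, states, targets):
        pass                               # placeholder for the training module

class Memory:
    """Replay memory: the storage, extraction and memory-table modules."""
    def __init__(self):
        self.table = []
    def store(self, sample):
        self.table.append(sample)
    def extract(self, n):
        return self.table[-n:]

class Agent:
    """Ties the modules together: optimal-action lookup, collecting the
    (state, action, reward, next state) tuple, and replay-driven training."""
    def __init__(self, brain, memory):
        self.brain, self.memory = brain, memory
    def best_action(self, state):
        q = self.brain.predict(state)
        return max(range(len(q)), key=q.__getitem__)
    def observe(self, s, a, r, s_next):
        self.memory.store((s, a, r, s_next))

agent = Agent(Brain(), Memory())
agent.observe("s0", 0, -1.0, "s1")
```

The environment (fog nodes, containers, users) would feed states and rewards into `observe`, while `best_action` drives the migration decisions.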
The experimental data come from real San Francisco taxi traces, with latitudes ranging from 32.87 to 50.31 and longitudes from -127.08 to -122.0. The area is partitioned, 7 fog nodes are deployed, and the movement of more than 200 users is considered. All mobile users are active, and 0 and 1 indicate whether a passenger is getting on or off the vehicle, signaling a switch of the requested application.
For the parameter settings, the CPU power consumption is given by the following table:
CPU Utilization (%) | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
---|---|---|---|---|---|---|---|---|---|---|---
HP ProLiant G4 | 86 | 89.4 | 92.6 | 96 | 99.5 | 102 | 106 | 108 | 112 | 114 | 117
TABLE 1: Relationship between CPU utilization and energy consumption
For the other parameters, let f = 2.5 MHz, hb = 35 m, hr = 1 m, cm = 3 dB, and
di(t)=|Mi.l(t)-Fj.l(t)|
further, Xscale=3,alpha=0.1,gamma=0.9,epsilon=0.9。
To compare the experimental results, two baseline algorithms were chosen. The proposed algorithm is abbreviated ODQL; the DBQL baseline is obtained by discretizing the traditional Q-learning algorithm, and the other baseline is an approximately greedy Myopic algorithm.
The algorithms were first compared under different ω1 values, with the following results. Referring to FIGS. 3-5, FIG. 3 compares the average delay, FIG. 4 the average energy consumption, and FIG. 5 the overhead under different ω1 values.
In addition, the algorithms were compared under different ω2 values, with the experimental results shown below. Referring to FIGS. 6-8, FIG. 6 compares the average delay, FIG. 7 the average energy consumption, and FIG. 8 the overhead under different ω2 values. The experimental results show that the proposed algorithm performs better than the other two algorithms.
In addition, a prototype of the container migration system was built. A desktop with an E5-1650v2 @ 3.5 GHz CPU, 16.0 GB of memory and Ubuntu 16.04 LTS serves as the fog node, and a notebook with an i7-4600U @ 2.1 GHz CPU, 8.0 GB of memory and Windows 10 simulates the user group. The Docker container engine is installed on the desktop, and Docker containers are set up for an Nginx web server, a WordPress site with a MySQL database, a Ghost site with an SQLite3 database, and a static page. On the notebook, different Ubuntu containers are managed through the Docker container engine; a web benchmark tool is installed in each Ubuntu container to simulate user requests, and the tc tool is used to vary the time delay. Meanwhile, a virtual machine migration environment is set up under the same hardware for comparison. The comparison results are as follows. As shown in FIGS. 9 and 10, FIG. 9 compares the CPU overhead of the container and the virtual machine under different loads, and FIG. 10 compares their migration overheads under different loads. According to the experimental results, under the same hardware conditions the migration cost of the container is far less than that of the virtual machine, so the container adaptive migration system in the fog computing environment provided by the invention is highly effective.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.
Claims (9)
1. An adaptive container migration method in a fog computing environment, comprising the steps of:
establishing a container-based fog computing framework, wherein the containers run on fog nodes, the mobile applications reside with the users, and the users' tasks are executed in the containers;
modeling migration objectives of container migration in a fog computing scenario, wherein the migration objectives comprise time delay, power consumption and migration overhead;
setting a state space and an action space, defining a return function and setting a Q iteration function;
reducing the dimension of the state space through a deep neural network;
and the dimension reduction of the action space is realized by optimizing the action selection.
2. The method of adaptive container migration in a fog computing environment of claim 1, wherein each of the fog nodes has location data and a total amount of computing resources, wherein computing resources include CPU resources, memory resources, storage resources, and bandwidth resources.
3. The method of adaptive container migration in a fog computing environment of claim 1, wherein each of the containers has a requested amount of resources and an actual allocated amount of resources, and each of the mobile applications has a location data and requested data for a container.
4. The adaptive container migration method in a fog computing environment according to claim 1, wherein the time delay in the migration target is calculated by the following formula:
dtotal=dnet+k×dcomp,
wherein dnet is the overhead generated by data transmission in the network, which is related to the distance between the user and the container and is defined by the path loss, and dcomp is the computation delay on the fog node, determined by the degree of violation of the fog node's service level agreement.
5. The adaptive container migration method in a fog computing environment as claimed in claim 1, wherein the power consumption of the fog node is defined as follows:
wherein pidle and pmax refer to the power consumption when the CPU utilization is 0 and 100%, respectively, and ui(t) is the resource utilization of the fog node.
7. The method of claim 1, wherein the dimensionality reduction of the action space comprises action utilization: each time a state is obtained, the corresponding optimal Q value and the corresponding action are selected from a Q value list.
8. The method of claim 1, wherein the dimensionality reduction of the action space comprises action exploration: the agent randomly selects a state each time, restricts the selectable actions, and defines a return benefit, encouraging migration when the benefit is positive.
9. The adaptive container migration method in the fog computing environment according to claim 1, wherein the state space is reduced in dimension by storing all state information in a deep neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711288967.4A CN108021451B (en) | 2017-12-07 | 2017-12-07 | Self-adaptive container migration method in fog computing environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711288967.4A CN108021451B (en) | 2017-12-07 | 2017-12-07 | Self-adaptive container migration method in fog computing environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108021451A CN108021451A (en) | 2018-05-11 |
CN108021451B true CN108021451B (en) | 2021-08-13 |
Family
ID=62079064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711288967.4A Active CN108021451B (en) | 2017-12-07 | 2017-12-07 | Self-adaptive container migration method in fog computing environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021451B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109257429A (en) * | 2018-09-25 | 2019-01-22 | 南京大学 | A kind of calculating unloading dispatching method based on deeply study |
CN109710404B (en) * | 2018-12-20 | 2023-02-07 | 上海交通大学 | Task scheduling method in distributed system |
CN109819452B (en) * | 2018-12-29 | 2022-09-20 | 上海无线通信研究中心 | Wireless access network construction method based on fog computing virtual container |
CN109947567B (en) * | 2019-03-14 | 2021-07-20 | 深圳先进技术研究院 | Multi-agent reinforcement learning scheduling method and system and electronic equipment |
CN109975800B (en) * | 2019-04-01 | 2020-12-29 | 中国电子科技集团公司信息科学研究院 | Networking radar resource control method and device and computer readable storage medium |
CN110233755B (en) * | 2019-06-03 | 2022-02-25 | 哈尔滨工程大学 | Computing resource and frequency spectrum resource allocation method for fog computing in Internet of things |
CN110753383A (en) * | 2019-07-24 | 2020-02-04 | 北京工业大学 | Secure relay node selection method based on reinforcement learning in fog computing
CN110441061B (en) * | 2019-08-13 | 2021-05-07 | 哈尔滨理工大学 | Planet wheel bearing service life prediction method based on C-DRGAN and AD |
CN110535936B (en) * | 2019-08-27 | 2022-04-26 | 南京邮电大学 | Energy efficient fog computing migration method based on deep learning |
CN110944375B (en) * | 2019-11-22 | 2021-01-12 | 北京交通大学 | Resource allocation method for a fog computing network assisted by simultaneous wireless information and power transfer
CN111885137B (en) * | 2020-07-15 | 2022-08-02 | 国网河南省电力公司信息通信公司 | Edge container resource allocation method based on deep reinforcement learning |
CN113656170A (en) * | 2021-07-27 | 2021-11-16 | 华南理工大学 | Intelligent equipment fault diagnosis method and system based on fog computing
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105930214A (en) * | 2016-04-22 | 2016-09-07 | 广东石油化工学院 | Q-learning-based hybrid cloud job scheduling method |
CN107249169A (en) * | 2017-05-31 | 2017-10-13 | 厦门大学 | Event-driven data collection method based on fog nodes in an in-vehicle networking environment
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160359664A1 (en) * | 2015-06-08 | 2016-12-08 | Cisco Technology, Inc. | Virtualized things from physical objects for an internet of things integrated developer environment |
US10628222B2 (en) * | 2016-05-17 | 2020-04-21 | International Business Machines Corporation | Allocating compute offload resources |
2017-12-07: CN application CN201711288967.4A filed; granted as patent CN108021451B (status: active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105930214A (en) * | 2016-04-22 | 2016-09-07 | 广东石油化工学院 | Q-learning-based hybrid cloud job scheduling method |
CN107249169A (en) * | 2017-05-31 | 2017-10-13 | 厦门大学 | Event-driven data collection method based on fog nodes in an in-vehicle networking environment
Non-Patent Citations (2)
Title |
---|
Container as a service at the edge: Trade off between energy efficiency and service availability at fog nano data centers; Kuljeet Kaur; IEEE; 2017-06-22; entire document *
Converging mobile edge computing, fog computing, and IoT quality requirements; Paolo Bellavista; IEEE; 2017-11-20; entire document *
Also Published As
Publication number | Publication date |
---|---|
CN108021451A (en) | 2018-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108021451B (en) | Self-adaptive container migration method in fog computing environment | |
Li et al. | Collaborative cache allocation and task scheduling for data-intensive applications in edge computing environment | |
CN113377540B (en) | Cluster resource scheduling method and device, electronic equipment and storage medium | |
Prem Jacob et al. | A multi-objective optimal task scheduling in cloud environment using cuckoo particle swarm optimization | |
CN108804227B (en) | Method for computation-intensive task offloading and optimal resource allocation based on mobile cloud computing
CN109788489A (en) | Base station planning method and device
Fernández-Cerero et al. | Sphere: Simulator of edge infrastructures for the optimization of performance and resources energy consumption | |
CN113992524A (en) | Network slice optimization processing method and system | |
Reddy et al. | Towards energy efficient Smart city services: A software defined resource management scheme for data centers | |
CN114546608A (en) | Task scheduling method based on edge computing
CN111176784A (en) | Virtual machine integration method based on extreme learning machine and ant colony system | |
Jian et al. | A high-efficiency learning model for virtual machine placement in mobile edge computing | |
Belcastro et al. | Edge-cloud continuum solutions for urban mobility prediction and planning | |
CN114090239B (en) | Method and device for dispatching edge resources based on model reinforcement learning | |
CN113014649B (en) | Cloud Internet of things load balancing method, device and equipment based on deep learning | |
Ke et al. | Medley deep reinforcement learning-based workload offloading and cache placement decision in UAV-enabled MEC networks | |
Tang et al. | Multi-user layer-aware online container migration in edge-assisted vehicular networks | |
Devagnanam et al. | Design and development of exponential lion algorithm for optimal allocation of cluster resources in cloud | |
Bao et al. | QoS preferences edge user allocation using reinforcement learning | |
Wu et al. | PECCO: A profit and cost‐oriented computation offloading scheme in edge‐cloud environment with improved Moth‐flame optimization | |
CN116390162A (en) | Mobile edge computing dynamic service deployment method based on deep reinforcement learning | |
CN113992520B (en) | Virtual network resource deployment method and system | |
KR102669963B1 | Computer device for edge computing queue stabilization using reinforcement learning based on Lyapunov optimization, and method of the same
Tang et al. | Edge computing energy-efficient resource scheduling based on deep reinforcement learning and imitation learning | |
Yang et al. | Energy saving strategy of cloud data computing based on convolutional neural network and policy gradient algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||