CN112835684B - Virtual machine deployment method for mobile edge computing - Google Patents

Virtual machine deployment method for mobile edge computing

Info

Publication number
CN112835684B
CN112835684B (application CN202110231766.0A)
Authority
CN
China
Prior art keywords
edge
virtual machine
update
machine deployment
server
Prior art date
Legal status
Active
Application number
CN202110231766.0A
Other languages
Chinese (zh)
Other versions
CN112835684A (en)
Inventor
简琤峰
鲍璐锟
张美玉
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110231766.0A priority Critical patent/CN112835684B/en
Publication of CN112835684A publication Critical patent/CN112835684A/en
Application granted granted Critical
Publication of CN112835684B publication Critical patent/CN112835684B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a virtual machine deployment method for mobile edge computing, which constructs a mobile edge computing environment, defines an energy consumption model for each edge server, a virtual machine set VM and an edge server queue S, carries out virtual machine deployment by using a bat model with a second-order oscillation factor introduced, deploying m virtual machines to n edge servers, trains an improved LSTM learning model with the historical data of virtual machine deployment, and performs virtual machine deployment based on the learning model. According to the invention, second-order oscillation is introduced into the bat algorithm to overcome the problem of local optimal solutions; an order transfer mechanism is used when the search is near the optimum, so that the optimal solution is continuously sought; finally, the improved LSTM model is trained with the historical deployment data obtained by the algorithm. The invention can fully utilize the historical data of edge nodes, takes into account the limited resources and decentralization of the edge end, and comprehensively considers the energy consumption and time delay problems, achieving the aim of low energy consumption and low time delay.

Description

Virtual machine deployment method for mobile edge computing
Technical Field
The invention relates to the technical field of electric digital data processing, in particular to a virtual machine deployment method for mobile edge computing.
Background
With the development of intelligent applications such as virtual reality, autonomous driving and user equipment, the requirements on high computing power and low delay of servers keep increasing; since cloud data centers are typically far from the user side, compute- and data-intensive applications cannot be handled in time, and mobile edge computing has emerged, which improves quality of service and reduces latency by offloading computing tasks to edge servers.
However, when an edge server receives a large number of storage tasks it consumes a huge amount of energy, and its time delay increases when a large number of tasks are processed, so storage or computation tasks must be allocated reasonably across the edge servers to achieve energy saving and low delay. A virtual machine is created according to the application request of the client, the amount of resources required and the operating system type specified by the client; virtualization allows several virtual machines to run on the same physical server, so virtual machines can be consolidated based on a suitable placement strategy to achieve energy saving and low delay.
Virtual machine placement has been a hotspot of cloud computing research in the past few years, with many mature theories and achievements; however, work on virtual machine deployment for edge computing is comparatively scarce, or the metrics of interest tend to be single.
According to the optimization objectives of virtual machine placement, existing approaches can be broadly divided into those targeting deployment cost, energy consumption, response time and maximum profit of the service provider; however, the existing technology still has some drawbacks:
firstly, the above strategies tend to focus on a single index and do not consider the various factors comprehensively;
secondly, although virtual machine deployment in edge computing is similar to that in cloud computing and can be treated as an NP problem, the hardware differences between the edge environment and the cloud environment are large, and the problems of node dispersion, a weak edge hardware foundation and limited resources must additionally be considered;
finally, the above strategies do not fully exploit the historical data of virtual machine deployment.
Disclosure of Invention
In order to solve the problems in the prior art that multiple indexes are not considered comprehensively, the particularity of the edge computing environment is ignored and historical data is not fully utilized, the invention provides a virtual machine deployment method for mobile edge computing which can fully utilize historical data and provides corresponding solutions for the weak resources, decentralization and other characteristics of the edge environment, thereby reducing energy consumption and time delay, and is further applied to the mobile edge computing environment.
The technical scheme adopted by the invention is a virtual machine deployment method for mobile edge computing, comprising the following steps:
step 1: constructing a mobile edge computing environment;
step 2: defining an energy consumption model P_s for each edge server in the edge computing environment;
Step 3: define a set of virtual machines VM and an edge server queue S, vm= { VM 1 ,vm 2 ,vm 3 ,...vm m },S={S 1 ,S 2 ,...S n -wherein m and n are integers greater than 0;
step 4: deploying virtual machines by using a bat model with a second-order oscillation factor introduced, and deploying m virtual machines to n edge servers;
step 5: obtaining historical data of virtual machine deployment, training an improved LSTM learning model, and carrying out virtual machine deployment based on the learning model.
Preferably, the mobile edge computing environment comprises a terminal layer, an edge layer and a cloud data layer which are sequentially matched;
the terminal layer is distributed with a plurality of mobile devices;
a base station and an edge server are distributed in cooperation with the edge layer of the mobile equipment;
the cloud data layer matched with the edge server comprises a server and a storage device.
Preferably, in the step 2,
P_s = P_idle + (P_max − P_idle) × u
wherein P_idle is the power consumed by the edge server in the idle state, P_max is the power consumed by the edge server in the full-load state, and u is the CPU utilization of the edge server, u ∈ [0, 1].
Preferably, in the step 3, the computing power of any two edge servers is different.
Preferably, the step 4 includes the steps of:
step 4.1: initializing parameters so that the bat search space is D-dimensional,
f_i = f_min + (f_max − f_min) × β
wherein i denotes the i-th individual of the population, j denotes the j-th task, f_i is the fitness function, f_min and f_max are the minimum and maximum frequencies respectively, β is a random vector obeying a uniform distribution, β ∈ [0, 1], v_i^{t+1} and v_i^t are the velocities of the bat after the update at time t+1 and before the update at time t respectively, and X_* is the current global optimal position;
setting an improved bat position update formula;
step 4.2: generating a new scheme to obtain a new solution x_new,
x_new = x_old + rand1 × A^t
wherein x_old is the previous solution, rand1 is a random number between −1 and 1, and A^t is the average loudness of all bats at this time step;
step 4.3: updating the loudness and the pulse emission rate: generate a random number rand2 on [0, 1]; if rand2 is smaller than the loudness A_i and the new fitness f_new is smaller than the previous fitness value f_i, update:
f_i = f_new
wherein α is the loudness update coefficient, α ∈ (0, 1), r_i^{t+1} and r_i^t are the pulse emission probabilities after the update at time t+1 and before the update at time t respectively, γ is an adjustment coefficient, and γ ∈ (0, 1);
step 4.4: sorting all individual fitness values to find the optimal solution;
step 4.5: if the optimal solution is not overloaded, updating X_*; otherwise, using an order transfer mechanism to perform a local search; if the maximum number of iterations has been reached, outputting the global optimum, otherwise adding 1 to the iteration count and returning to step 4.2.
Preferably, in the step 4.1, the improved bat position update formula is set,
wherein x_i^{t+1} and x_i^t are the positions of the bat after the update at time t+1 and before the update at time t respectively, τ is the disturbance amplitude, σ is a random number, τ ∈ (0, 0.1], σ ∈ [−1, 1].
Preferably, τ=0.1.
Preferably, in the step 4.5, the order transfer mechanism includes an order exchange operation and a migration operation.
Preferably, in the step 5, the improved LSTM learning model includes a basic LSTM network, and a sigmoid input gate, a forget gate and an output gate are set in cooperation with the network; said step 5 comprises the steps of:
step 5.1: constructing an improved LSTM learning model;
step 5.2: acquiring historical data of virtual machine deployment, and training an LSTM learning model by using a self-adaptive convergence function;
step 5.3: and deploying the virtual machine by using the trained learning model.
Preferably, in the step 5.2, the LSTM learning model is trained with an adaptive convergence function gard,
wherein flag is a temporary variable for counting: when flag is not equal to 0, flag is incremented by 1 and the next training iteration is carried out, and when flag accumulates to 10 it is reset to 0; k is the current number of iterations in the training; loss_before is the average of the loss functions of the previous k iterations; loss_cur is the average of the loss function of the current iteration; the loss function is defined in terms of p_i, the predicted value, r_i, the actual value, z, the batch size of the experiment, and i, the serial number within the training batch;
and if gard meets the preset condition, step 5.3 is performed.
According to the invention, second-order oscillation is introduced into the bat algorithm to solve the problem of local optimal solutions; when the search approaches the optimum, an order transfer mechanism is used to continue searching for the optimal solution; finally, the improved LSTM model is trained with the historical deployment data obtained by the algorithm.
The beneficial effects of the method are that the historical data of the edge nodes can be fully utilized, the limited resources and decentralization of the edge end are taken into account, and the energy consumption and time delay problems are considered comprehensively, achieving the aim of low energy consumption and low time delay.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a system architecture diagram of a mobile edge computing environment in accordance with the present invention;
FIG. 3 is a schematic diagram of order exchange and migration mechanisms according to the present invention;
FIG. 4 is a schematic diagram of an improved LSTM learning model structure in the present invention.
Detailed Description
The present invention will be described in detail with reference to examples and drawings, but the present invention is not limited thereto.
The invention relates to a virtual machine deployment method for mobile edge computing, which comprises the following steps.
Step 1: a mobile edge computing environment is constructed.
The mobile edge computing environment comprises a terminal layer, an edge layer and a cloud data layer which are sequentially matched and arranged;
the terminal layer is distributed with a plurality of mobile devices;
a base station and an edge server are distributed in cooperation with the edge layer of the mobile equipment;
the cloud data layer matched with the edge server comprises a server and a storage device.
In the invention, the cloud data layer comprises a large number of high-performance servers and storage devices, on which the improved LSTM learning model is deployed to learn the virtual machine deployment scheme; the edge layer is provided with base stations and a large number of edge servers and processes tasks by deploying virtual machines; the terminal layer consists of mobile devices and is not considered to have intensive computing power.
In the mobile edge computing environment, when the edge layer receives an application request submitted by the terminal layer, the cloud creates a virtual machine on the edge layer according to the learning model and assigns it to a specific edge server according to the placement strategy, so that a single edge server can process several jobs.
Step 2: defining an energy consumption model P_s for each edge server in the edge computing environment.
In the step 2 of the above-mentioned process,
P_s = P_idle + (P_max − P_idle) × u
wherein P_idle is the power consumed by the edge server in the idle state, P_max is the power consumed by the edge server in the full-load state, and u is the CPU utilization of the edge server, u ∈ [0, 1].
In the invention, the energy consumed by a server is in a linear relation with its CPU utilization, so the energy consumption of a server is determined as P_s. It is known from the related literature and data that when a server works in a low-utilization or idle state a great deal of energy is wasted, about 50%-70% of the power consumption at full load, so the power consumption of an idle server is generally set to 60% of the full-load power consumption, i.e., u = 0.6.
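A minimal Python sketch of this linear power model (the function name and the numeric values are illustrative assumptions, not taken from the patent):

def server_power(p_idle: float, p_max: float, u: float) -> float:
    # Linear power model P_s = P_idle + (P_max - P_idle) * u, with u in [0, 1].
    if not 0.0 <= u <= 1.0:
        raise ValueError("CPU utilization u must lie in [0, 1]")
    return p_idle + (p_max - p_idle) * u

# Illustrative example: a 250 W full-load server whose idle power is 60% of full load.
p_max = 250.0
p_idle = 0.6 * p_max
print(server_power(p_idle, p_max, u=0.6))  # 210.0 W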
Step 3: define a set of virtual machines VM and an edge server queue S, VM = {vm_1, vm_2, vm_3, ..., vm_m}, S = {S_1, S_2, ..., S_n}, wherein m and n are integers greater than 0.
In the step 3, the computing power of any two edge servers is different.
In the invention, step 3 simplifies the deployment problem into the problem of assigning m VMs to n edge servers by defining the virtual machine set and the edge server queue; since a VM is not allowed to be allocated across servers, the resources required by each VM must not exceed the total capacity of any single server, which is expressed as a constraint on the assignment variables,
wherein x_pq indicates whether the p-th VM is assigned to the q-th server.
In the invention, since a virtual machine works on several resource dimensions at the same time, physical resources are not over-allocated, so as to guarantee the performance of each job; considering the weight coefficient of each factor, two indexes are used, namely the number of CPUs and the memory size,
wherein VMC_p and VMM_p respectively denote the CPU number and memory size of the p-th VM, and SC_q and SM_q respectively denote the CPU number and memory size of the q-th edge server.
In the invention, m and n have no fixed size relation to each other: if the number of servers is large and the number of virtual machines is small, the allocation is easily completed; in the opposite case the allocation can still be performed, but the servers become fully loaded and some virtual machines may fail to be allocated.
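The capacity constraints above can be checked with a short sketch such as the following (the weight coefficients mentioned in the text are omitted; is_feasible, vm_cpu, vm_mem, srv_cpu and srv_mem are illustrative names):

from typing import List

def is_feasible(assign: List[int], vm_cpu: List[int], vm_mem: List[int],
                srv_cpu: List[int], srv_mem: List[int]) -> bool:
    # assign[p] = q means the p-th VM is placed on the q-th edge server, so the
    # indicator x_pq collapses to one server index per VM and no VM spans servers.
    n = len(srv_cpu)
    used_cpu = [0] * n
    used_mem = [0] * n
    for p, q in enumerate(assign):
        used_cpu[q] += vm_cpu[p]
        used_mem[q] += vm_mem[p]
    # No server's CPU count or memory size may be exceeded.
    return all(used_cpu[q] <= srv_cpu[q] and used_mem[q] <= srv_mem[q] for q in range(n))

# Three VMs on two heterogeneous servers (illustrative capacities).
print(is_feasible([0, 0, 1], vm_cpu=[2, 1, 4], vm_mem=[4, 2, 8],
                  srv_cpu=[4, 8], srv_mem=[8, 16]))  # True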
Step 4: deploying the virtual machines by using a bat model with the second-order oscillation factor introduced, and deploying m virtual machines to n edge servers.
The step 4 comprises the following steps:
step 4.1: initializing parameters so that the bat search space is D-dimensional,
f_i = f_min + (f_max − f_min) × β
wherein i denotes the i-th individual of the population, j denotes the j-th task, f_i is the fitness function, f_min and f_max are the minimum and maximum frequencies respectively, β is a random vector obeying a uniform distribution, β ∈ [0, 1], v_i^{t+1} and v_i^t are the velocities of the bat after the update at time t+1 and before the update at time t respectively, and X_* is the current global optimal position;
setting an improved bat position update formula;
in the step 4.1, the improved bat position update formula is set, wherein x_i^{t+1} and x_i^t are the positions of the bat after the update at time t+1 and before the update at time t respectively, τ is the disturbance amplitude, σ is a random number, τ ∈ (0, 0.1], σ ∈ [−1, 1];
τ = 0.1.
step 4.2: generating a new scheme to obtain a new solution x_new,
x_new = x_old + rand1 × A^t
wherein x_old is the previous solution, rand1 is a random number between −1 and 1, and A^t is the average loudness of all bats at this time step;
step 4.3: updating the loudness and the pulse emission rate: generate a random number rand2 on [0, 1]; if rand2 is smaller than the loudness A_i and the new fitness f_new is smaller than the previous fitness value f_i, update:
f_i = f_new
wherein α is the loudness update coefficient, α ∈ (0, 1), r_i^{t+1} and r_i^t are the pulse emission probabilities after the update at time t+1 and before the update at time t respectively, γ is an adjustment coefficient, and γ ∈ (0, 1);
step 4.4: sorting all individual fitness values to find the optimal solution;
step 4.5: if the optimal solution is not overloaded, updating X_*; otherwise, using an order transfer mechanism to perform a local search; if the maximum number of iterations has been reached, outputting the global optimum, otherwise adding 1 to the iteration count and returning to step 4.2.
In the step 4.5, the order transfer mechanism includes an order exchange operation and a migration operation.
In the invention, the logic of step 4 is as follows: the bat swarm algorithm generates an individual fitness value for each member of the population; after these values are sorted, the optimal solution X_* is found and it is judged whether the optimal solution is overloaded; if not, it is updated, otherwise a local search is carried out with the order transfer mechanism, and deployment is repeated after the search until the maximum number of iterations is reached. That is, each time the bat swarm algorithm is executed, the obtained optimal solution must be checked for overload to decide whether the order transfer mechanism is used, and the LSTM model is trained on the results that satisfy the maximum number of iterations.
In the present invention, the basic bat position update formula lacks a mutation mechanism, so the bat swarm algorithm can hardly escape once it is constrained by some local extremum; to avoid falling into a local optimum, a second-order oscillation factor is introduced to improve the formula, yielding the improved bat position update formula. In general, the disturbance amplitude is limited to within 10% to avoid deviating from the initial position because of an excessive disturbance amplitude, i.e., τ = 0.1.
In the invention, during the local search a new local solution is generated by random walk, rand1 ∈ (−1, 1); during the search, the bat continuously reduces its loudness and increases its pulse frequency according to the direction of the prey; once the random number is smaller than the loudness A_i and the new fitness f_new calculated by the formula is smaller than the previous fitness f_i, the loudness, the pulse emission probability and the fitness value are updated, where α = γ = 0.9. In particular, the two conditions, namely the random number being smaller than the loudness A_i and the new fitness f_new being smaller than the previous fitness f_i, must hold at the same time for the update to take place; otherwise none of the three quantities is updated.
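The overall loop of step 4 can be sketched as follows. This is an assumption-laden sketch: the patent's second-order-oscillation position update and its exact loudness and pulse-rate update formulas are not reproduced in this text, so the standard bat-algorithm updates plus a bounded τ·σ perturbation are used as stand-ins, and bat_deploy and its parameters are illustrative names. A position is one server index per VM, and fitness is any function to be minimized (for example one combining the energy model P_s with a delay term); the overload check and the order transfer mechanism of step 4.5 (sketched after the discussion of FIG. 3 below) would wrap this loop.

import math
import random

def bat_deploy(fitness, n_bats, n_vms, n_servers, max_iter,
               f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, tau=0.1):
    # Random initial deployments, zero velocities, initial loudness and pulse rates.
    pos = [[random.randrange(n_servers) for _ in range(n_vms)] for _ in range(n_bats)]
    vel = [[0.0] * n_vms for _ in range(n_bats)]
    loud = [1.0] * n_bats            # A_i
    pulse0 = [0.5] * n_bats          # initial pulse emission rate
    pulse = list(pulse0)
    fit = [fitness(p) for p in pos]
    best = min(range(n_bats), key=lambda b: fit[b])
    x_star, f_star = list(pos[best]), fit[best]

    for t in range(1, max_iter + 1):
        a_mean = sum(loud) / n_bats
        for i in range(n_bats):
            beta = random.random()
            f_i = f_min + (f_max - f_min) * beta
            cand = []
            for j in range(n_vms):
                vel[i][j] += (pos[i][j] - x_star[j]) * f_i
                sigma = random.uniform(-1.0, 1.0)   # stand-in for the oscillation term
                cand.append(int(round(pos[i][j] + vel[i][j] + tau * sigma)) % n_servers)
            if random.random() > pulse[i]:
                # Local random walk around the current best: x_new = x_old + rand1 * A^t.
                cand = [int(round(x + random.uniform(-1.0, 1.0) * a_mean)) % n_servers
                        for x in x_star]
            f_new = fitness(cand)
            rand2 = random.random()
            if rand2 < loud[i] and f_new < fit[i]:
                pos[i], fit[i] = cand, f_new
                loud[i] *= alpha                                     # loudness decreases
                pulse[i] = pulse0[i] * (1.0 - math.exp(-gamma * t))  # pulse rate rises
            if fit[i] < f_star:
                x_star, f_star = list(pos[i]), fit[i]
    return x_star, f_star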
In the invention, an order transfer mechanism is introduced to perform a local search when the search is near the optimum and a better solution is difficult to obtain; it converts infeasible solutions into feasible solutions, so that a better solution can still be found near the optimum. The order transfer mechanism involves two strategies, an order exchange operation and a migration operation, as a strategy for distribution and deployment in a cloud manufacturing environment.
In the invention, the order exchange operation refers to exchanging virtual machines between different servers and is used to relieve the unbalanced resource utilization of overloaded servers. Taking this embodiment as an example, the virtual machines are sorted according to their resource differences, and the virtual machines on each overloaded server (the larger the difference, the higher the priority) are exchanged with virtual machines on non-overloaded servers (the smaller the difference, the higher the priority) until the condition is met or there is no matching VM.
In the invention, the migration operation refers to migrating a virtual machine from an overloaded server to a non-overloaded server. Taking this embodiment as an example, the VMs of the overloaded server are moved to other non-overloaded servers until the condition is met or there is no matching VM.
In the present invention, the drawings are used as examples for illustration:
FIG. 3(a) is an example of the order exchange operation, with A, B and C representing three different servers; the dashed box represents the size of the server's resource space, and the double-headed dashed arrow marks the two servers to be exchanged; clearly, server A is overloaded while B and C are not. In FIG. 3(a), an order exchange may take place between VM1 on server A and VM7 on server B: after the exchange, if neither server A nor server B is overloaded, the exchange succeeds; otherwise, the next exchangeable VM is sought;
FIG. 3(b) is an example of the migration operation, with A, B and C representing three different servers; the dashed box represents the locations to which a VM can migrate, and the one-way solid line represents the direction of migration; clearly, server A is overloaded while B and C are not. The virtual machine VM8 on server A in FIG. 3(b) is migrated to server C, and the migration operation is performed after the order exchange operation is completed.
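The order transfer mechanism of FIG. 3 can be sketched as below (a sketch under assumptions: the greedy ordering by resource difference is simplified to a first-fit scan, and order_transfer and its helpers are illustrative names). An exchange is kept only if it leaves both servers within capacity, as described for FIG. 3(a); a migration only requires the destination to stay within capacity and is attempted after the exchanges, as described for FIG. 3(b).

def order_transfer(assign, vm_cpu, vm_mem, srv_cpu, srv_mem):
    assign = list(assign)
    servers = range(len(srv_cpu))

    def load(q):
        vms = [p for p, s in enumerate(assign) if s == q]
        return sum(vm_cpu[p] for p in vms), sum(vm_mem[p] for p in vms)

    def overloaded(q):
        cpu, mem = load(q)
        return cpu > srv_cpu[q] or mem > srv_mem[q]

    def try_move(changes):
        # changes: list of (vm index, destination server); keep the move only if
        # every destination ends up within capacity, otherwise roll it back.
        backup = list(assign)
        for p, q_new in changes:
            assign[p] = q_new
        if any(overloaded(q_new) for _, q_new in changes):
            assign[:] = backup
            return False
        return True

    for q in servers:
        # Order exchange operation: swap a VM on the overloaded server q with a VM elsewhere.
        while overloaded(q):
            swapped = False
            for p in [p for p, s in enumerate(assign) if s == q]:
                for p2, q2 in list(enumerate(assign)):
                    if q2 != q and try_move([(p, q2), (p2, q)]):
                        swapped = True
                        break
                if swapped:
                    break
            if not swapped:
                break
        # Migration operation: move VMs off q if it is still overloaded.
        while overloaded(q):
            moved = False
            for p in [p for p, s in enumerate(assign) if s == q]:
                for q2 in servers:
                    if q2 != q and try_move([(p, q2)]):
                        moved = True
                        break
                if moved:
                    break
            if not moved:
                break
    return assign

# Server 0 is overloaded (CPU 6 > 4); the exchange rebalances the placement to [1, 0, 0].
print(order_transfer([0, 0, 1], vm_cpu=[4, 2, 1], vm_mem=[2, 2, 1],
                     srv_cpu=[4, 8], srv_mem=[8, 8]))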
Step 5: obtaining historical data of virtual machine deployment, training an improved LSTM learning model, and carrying out virtual machine deployment based on the learning model.
In the step 5, the improved LSTM learning model comprises a basic LSTM network, and a sigmoid input gate, a forget gate and an output gate are set in cooperation with the network; said step 5 comprises the steps of:
step 5.1: constructing an improved LSTM learning model;
step 5.2: acquiring historical data of virtual machine deployment, and training an LSTM learning model by using a self-adaptive convergence function;
in the step 5.2, training the LSTM learning model by using the adaptive convergence function
Wherein, the flag is a temporary variable for counting, when the flag is not equal to 0, the flag is added with 1, the next training is carried out, when the flag is added up to 10, the flag is made to be 0; k is the current number of iterations in the training,loss is the average of the loss functions of k iterations before Is the average of the previous loss functions, loss cur Is the average value of the loss function for the current iteration,the loss function is
p i Is a predicted value, r i Is the actual value, z is the batch size of the experiment, i is the serial number of the training batch;
and if gard meets the preset condition, performing step 5.3.
Step 5.3: and deploying the virtual machine by using the trained learning model.
In the invention, the learning model is composed of LSTM units and is controlled by three gates, namely a sigmoid input gate, a forget gate and an output gate; the three gates control which information is kept and which is discarded, so that the state of the neural network is selectively maintained at every moment.
In the invention, the self-adaptive convergence function is used for optimizing the loss convergence of the traditional LSTM and accelerating the training speed of the model.
In the present invention, precisely speaking, loss_before is the average of the previous loss functions.
In the present invention, convergence is considered when 0.9995< gard < 1.0005.
In the present invention, the batch size in the experiment may be 80 or other reasonable value.
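The adaptive convergence check described above can be sketched as follows. This is a sketch under stated assumptions: the exact gard expression is not reproduced in this text, so the ratio of the current batch loss to the average loss of the previous ten iterations is used, consistent with the stated convergence window 0.9995 < gard < 1.0005; a mean squared error over the batch is assumed for the loss built from p_i, r_i and z; and train_step, mse_loss and the other names are illustrative.

from typing import Callable, List, Sequence

def mse_loss(pred: Sequence[float], actual: Sequence[float]) -> float:
    # Mean squared error over a batch of size z: (1/z) * sum over i of (p_i - r_i)^2.
    z = len(pred)
    return sum((p - r) ** 2 for p, r in zip(pred, actual)) / z

def train_with_adaptive_convergence(train_step: Callable[[], float],
                                    max_epochs: int = 1000, window: int = 10,
                                    low: float = 0.9995, high: float = 1.0005) -> int:
    # train_step runs one LSTM training iteration and returns its batch loss; the
    # flag counter triggers a convergence check every `window` iterations.
    history: List[float] = []
    flag = 0
    for epoch in range(max_epochs):
        loss_cur = train_step()
        history.append(loss_cur)
        flag += 1
        if flag == window:
            flag = 0
            if len(history) > window:
                loss_before = sum(history[-window - 1:-1]) / window
                if loss_before > 0:
                    gard = loss_cur / loss_before
                    if low < gard < high:   # loss has stopped changing: converged
                        return epoch
    return max_epochs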

Claims (5)

1. A virtual machine deployment method for mobile edge computing is characterized in that: the method comprises the following steps:
step 1: constructing a mobile edge computing environment;
step 2: defining an energy consumption model P_s for each edge server in the edge computing environment,
P_s = P_idle + (P_max − P_idle) × u
wherein P_idle is the power consumed by the edge server in the idle state, P_max is the power consumed by the edge server in the full-load state, and u is the CPU utilization of the edge server, u ∈ [0, 1];
Step 3: define a set of virtual machines VM and an edge server queue S, vm= { VM 1 ,vm 2 ,vm 3 ,...vm m },S={S 1 ,S 2 ,...S n -wherein m and n are integers greater than 0;
step 4: deploying virtual machines by using a bat model with a second-order oscillation factor introduced, deploying m virtual machines to n edge servers, comprising the following steps:
step 4.1: initializing parameters so that the bat search space is D-dimensional,
f_i = f_min + (f_max − f_min) × β
wherein i denotes the i-th individual of the population, j denotes the j-th task, f_i is the fitness function, f_min and f_max are the minimum and maximum frequencies respectively, β is a random vector obeying a uniform distribution, β ∈ [0, 1], v_i^{t+1} and v_i^t are the velocities of the bat after the update at time t+1 and before the update at time t respectively, and X_* is the current global optimal position;
an improved bat position update formula is set,
wherein x_i^{t+1} and x_i^t are the positions of the bat after the update at time t+1 and before the update at time t respectively, τ is the disturbance amplitude, σ is a random number, τ ∈ (0, 0.1], σ ∈ [−1, 1];
Step 4.2: generating a new scheme to obtain a new solution x new
x new =x old +rand1×A t
Wherein x is old For the above solution, randl is a random number between-1 and1, A t Is the average loudness of all bats in this time step;
step 4.3: updating the loudness and the pulse emission rate: generate a random number rand2 on [0, 1]; if rand2 is smaller than the loudness A_i and the new fitness f_new is smaller than the previous fitness value f_i, update:
f_i = f_new
wherein α is the loudness update coefficient, α ∈ (0, 1), r_i^{t+1} and r_i^t are the pulse emission probabilities after the update at time t+1 and before the update at time t respectively, γ is an adjustment coefficient, and γ ∈ (0, 1);
step 4.4: sorting all individual fitness values to find the optimal solution;
step 4.5: if the optimal solution is not overloaded, updating X_*; otherwise, using an order transfer mechanism to perform a local search; if the maximum number of iterations has been reached, outputting the global optimum, otherwise adding 1 to the iteration count and returning to step 4.2;
step 5: obtaining historical data of virtual machine deployment and training an improved LSTM learning model, wherein the improved LSTM learning model comprises a basic LSTM network with a sigmoid input gate, a forget gate and an output gate arranged in cooperation with the network, comprising the following steps:
step 5.1: constructing an improved LSTM learning model;
step 5.2: obtaining historical data of virtual machine deployment and training the LSTM learning model by using an adaptive convergence function gard,
wherein flag is a temporary variable for counting: when flag is not equal to 0, flag is incremented by 1 and the next training iteration is carried out, and when flag accumulates to 10 it is reset to 0; k is the current number of iterations in the training; loss_before is the average of the loss functions of the previous k iterations; loss_cur is the average of the loss function of the current iteration; the loss function is defined in terms of p_i, the predicted value, r_i, the actual value, z, the batch size of the experiment, and i, the serial number within the training batch; step 5.3 is performed if gard meets the preset condition;
step 5.3: and deploying the virtual machine by using the trained learning model.
2. The virtual machine deployment method for mobile edge-oriented computing of claim 1, wherein: the mobile edge computing environment comprises a terminal layer, an edge layer and a cloud data layer which are sequentially matched and arranged;
the terminal layer is distributed with a plurality of mobile devices;
a base station and an edge server are distributed in cooperation with the edge layer of the mobile equipment;
the cloud data layer matched with the edge server comprises a server and a storage device.
3. The virtual machine deployment method for mobile edge-oriented computing of claim 1, wherein: in the step 3, the computing power of any two edge servers is different.
4. The virtual machine deployment method for mobile edge-oriented computing of claim 1, wherein: τ=0.1.
5. The virtual machine deployment method for mobile edge-oriented computing of claim 1, wherein: in the step 4.5, the order transfer mechanism includes an order exchange operation and a migration operation.
CN202110231766.0A 2021-03-02 2021-03-02 Virtual machine deployment method for mobile edge computing Active CN112835684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110231766.0A CN112835684B (en) 2021-03-02 2021-03-02 Virtual machine deployment method for mobile edge computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110231766.0A CN112835684B (en) 2021-03-02 2021-03-02 Virtual machine deployment method for mobile edge computing

Publications (2)

Publication Number Publication Date
CN112835684A CN112835684A (en) 2021-05-25
CN112835684B true CN112835684B (en) 2024-03-22

Family

ID=75934352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110231766.0A Active CN112835684B (en) 2021-03-02 2021-03-02 Virtual machine deployment method for mobile edge computing

Country Status (1)

Country Link
CN (1) CN112835684B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116321300A (en) * 2022-10-17 2023-06-23 湖北文理学院 Risk-aware mobile edge computing task scheduling and resource allocation method
CN116719614A (en) * 2023-08-11 2023-09-08 中国电信股份有限公司 Virtual machine monitor selection method, device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795208A (en) * 2019-10-11 2020-02-14 南京航空航天大学 Mobile cloud computing self-adaptive virtual machine scheduling method based on improved particle swarm
CN111488208A (en) * 2020-03-22 2020-08-04 浙江工业大学 Edge cloud cooperative computing node scheduling optimization method based on variable step length bat algorithm
CN112101532A (en) * 2020-11-18 2020-12-18 天津开发区精诺瀚海数据科技有限公司 Self-adaptive multi-model driving equipment fault diagnosis method based on edge cloud cooperation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10296367B2 (en) * 2017-02-03 2019-05-21 Microsoft Technology Licensing, Llc Resource management for virtual machines in cloud computing systems

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795208A (en) * 2019-10-11 2020-02-14 南京航空航天大学 Mobile cloud computing self-adaptive virtual machine scheduling method based on improved particle swarm
CN111488208A (en) * 2020-03-22 2020-08-04 浙江工业大学 Edge cloud cooperative computing node scheduling optimization method based on variable step length bat algorithm
CN112101532A (en) * 2020-11-18 2020-12-18 天津开发区精诺瀚海数据科技有限公司 Self-adaptive multi-model driving equipment fault diagnosis method based on edge cloud cooperation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-objective optimized virtual machine placement algorithm in a cloud environment; 蔺凯青; 李志华; 郭曙杰; 李双俐; Journal of Computer Applications (Issue 12); full text *

Also Published As

Publication number Publication date
CN112835684A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN108182115B (en) Virtual machine load balancing method in cloud environment
Alkayal et al. Efficient task scheduling multi-objective particle swarm optimization in cloud computing
Mao et al. Max–min task scheduling algorithm for load balance in cloud computing
CN112835684B (en) Virtual machine deployment method for mobile edge computing
CN111722910B (en) Cloud job scheduling and resource allocation method
CN104657221A (en) Multi-queue peak-alternation scheduling model and multi-queue peak-alteration scheduling method based on task classification in cloud computing
CN112328380A (en) Task scheduling method and device based on heterogeneous computing
CN112214301B (en) Smart city-oriented dynamic calculation migration method and device based on user preference
CN113485826B (en) Load balancing method and system for edge server
CN111813506A (en) Resource sensing calculation migration method, device and medium based on particle swarm algorithm
CN107357652A (en) A kind of cloud computing method for scheduling task based on segmentation sequence and standard deviation Dynamic gene
CN113094159A (en) Data center job scheduling method, system, storage medium and computing equipment
CN111027665A (en) Cloud manufacturing scheduling method based on improved chaotic bat swarm algorithm
Hu et al. Dynamic task offloading in MEC-enabled IoT networks: A hybrid DDPG-D3QN approach
Chalack et al. Resource allocation in cloud environment using approaches based particle swarm optimization
Xu et al. A meta reinforcement learning-based virtual machine placement algorithm in mobile edge computing
CN111488208B (en) Bian Yun collaborative computing node scheduling optimization method based on variable-step-size bat algorithm
CN117032902A (en) Cloud task scheduling method for improving discrete particle swarm algorithm based on load
CN112862083A (en) Deep neural network inference method and device under edge environment
Singhrova et al. Prioritized GA-PSO algorithm for efficient resource allocation in fog computing
CN116954866A (en) Edge cloud task scheduling method and system based on deep reinforcement learning
Hao et al. Research for energy optimized resource scheduling algorithm in cloud computing base on task endurance value
Hu et al. Distributed task offloading based on multi-agent deep reinforcement learning
CN112764932A (en) Deep reinforcement learning-based calculation-intensive workload high-energy-efficiency distribution method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant