CN114741160A - Dynamic virtual machine integration method and system based on balanced energy consumption and service quality - Google Patents
Dynamic virtual machine integration method and system based on balanced energy consumption and service quality
- Publication number: CN114741160A (application CN202210407757.7A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a green energy-saving virtual resource integration system and a dynamic virtual machine integration method based on balancing energy consumption and service quality, comprising the following steps: 1) physical host load prediction: the workload of each physical host at the next moment is predicted by an algorithm combining a cubic exponential smoothing model and an Elman neural network model; 2) physical host load state detection: the current load state of each physical host is identified by a hybrid load detection algorithm; 3) virtual machine selection: the virtual machines to be migrated from unsuitable hosts are selected by a CPU- and memory-aware virtual machine selection algorithm; 4) virtual machine placement: a placement algorithm based on the resource-demand scaling amount selects, according to the resource demands of the virtual machine migration queue and the resource information of the suitably loaded hosts in the data center, suitable physical hosts onto which to place the migrated virtual machines. The invention reduces the number of virtual machine migrations, consumes less energy, and maintains a high level of service quality.
Description
Technical Field
The invention relates to a dynamic virtual machine integration method and system based on balanced energy consumption and service quality, and belongs to the technical field of dynamic virtual machine integration.
Background
With the rapid development of cloud computing technology, Infrastructure as a Service (IaaS) becomes an important Service mode, and users can rent resources including servers, networks, storage, and the like from an IaaS provider as required. A data center having functions of elastic resource supply, virtual service dynamic configuration, virtualization and management of infrastructure resources, and the like, becomes an important carrier for constructing an IaaS service.
However, over the past few years, the enormous energy consumption of cloud data centers has become a significant problem. Data center energy is wasted for various reasons, such as network equipment, low server utilization, and inefficient cooling systems. According to a 2013 Gartner report, the power consumption of a cloud data center is typically huge, equivalent to that of 25,000 households, and between 2011 and 2035 the power demand of global data centers is expected to grow by more than 66%. Statistically, data center utilization is very low: average utilization is between 12% and 18%, and Google data centers run at between 10% and 50% utilization. This wastes power, because idle servers consume on average 70% of their maximum power. Low resource utilization thus leads to a large amount of wasted energy, and compensating by enlarging data center capacity only worsens the resource waste.
Therefore, high power consumption and low utilization of cloud data centers are challenges facing cloud computing. An effective and common approach to this problem is virtual machine integration: according to the resource demands of the virtual machines, they are consolidated onto fewer servers through virtual machine migration, and part of the servers are then switched into a dormant state, reducing the energy expenditure of the data center. However, virtual machine migration increases the cost of computing resources, and large-scale migration causes additional workload, SLA violations, and considerable energy consumption. Meanwhile, services may be suspended during a migration, and long migrations further degrade the quality of service.
Disclosure of Invention
The invention provides a green energy-saving virtual resource integration system, which defines the specific flow and related attributes of virtual resource integration, and on the basis of this model provides a dynamic virtual machine integration method based on balancing energy consumption and service quality, to realize efficient integration of virtual resources.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a dynamic virtual machine integration method based on balanced energy consumption and service quality comprises the following steps:
1) physical host load prediction: predicting the working load of the physical host at the next moment based on a prediction algorithm HCESEA of a cubic exponential smoothing model and an Elman neural network model;
2) physical host load state detection: identifying the current load state of the physical host through a hybrid load detection algorithm HLDA;
3) virtual machine selection: selecting the virtual machines to be migrated from unsuitable hosts by the CPU- and memory-aware virtual machine selection algorithm CM-VMSA;
4) virtual machine placement: the virtual machine placement algorithm RDS-VMPA, based on the resource-demand scaling amount, selects suitable physical hosts onto which to place the migrated virtual machines, according to the resource demands of the virtual machine migration queue and the resource information of the suitably loaded hosts in the data center.
The method realizes the high-efficiency integration of the virtual resources; the migration frequency of the virtual machines and the use number of the servers are reduced, the service resource utilization rate of the data center is improved, and the energy expenditure is reduced, so that the requirements of energy conservation and environmental protection of the data center are met.
The method takes the possibility of potential overload into account and considers the resource-demand scaling amount of a physical host when placing virtual machines, which markedly improves the accuracy of selecting suitable hosts.
In step 1), in view of the dynamic and uncertain character of physical host load data, the application proposes a prediction algorithm HCESEA (Hybrid prediction algorithm based on the Cubic Exponential Smoothing model and the Elman Neural Network model) to predict the workload of a physical host at the next moment. The algorithm first produces a forecast with a cubic exponential smoothing model (CES), and then uses an Elman neural network model (ENN) to predict the error of the CES forecast, finally yielding an error-corrected prediction. This relieves the influence of the model parameters on overall performance: because the ENN model predicts the error of the CES model rather than the raw load, its prediction accuracy on the error series is better than on the original data set, which further improves the overall prediction performance. The models involved in HCESEA are introduced below.
Cubic exponential smoothing model CES:
The cubic exponential smoothing values are computed as follows:

S_t^{(1)} = \alpha y_t + (1 - \alpha) S_{t-1}^{(1)}
S_t^{(2)} = \alpha S_t^{(1)} + (1 - \alpha) S_{t-1}^{(2)}
S_t^{(3)} = \alpha S_t^{(2)} + (1 - \alpha) S_{t-1}^{(3)}

where \alpha is the smoothing coefficient, 0 < \alpha < 1; y_t is the observed load of period t; and S_t^{(1)}, S_t^{(2)}, S_t^{(3)} are the first, second, and third exponential smoothing values of period t.
The CES prediction model is as follows:
Y_{t+T} = A_t + B_t T + C_t T^2

where T is the number of prediction periods ahead and A_t, B_t, C_t are prediction parameters computed from the three smoothing values.
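The CES forecast above can be sketched as follows. The coefficient formulas for A_t, B_t, C_t are the standard Brown triple-smoothing equations, assumed here because the patent's formula images are not reproduced in this text:

```python
# Sketch of Brown's cubic (triple) exponential smoothing, the CES model
# described above. All three smoothers are initialised to the first
# observation, a common convention and an assumption here.

def cubic_exponential_smoothing(series, alpha, horizon=1):
    """Forecast `horizon` periods ahead of the last observation."""
    s1 = s2 = s3 = series[0]
    for y in series:
        s1 = alpha * y + (1 - alpha) * s1    # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2   # second smoothing
        s3 = alpha * s2 + (1 - alpha) * s3   # third smoothing
    # Standard Brown coefficients for Y_{t+T} = A + B*T + C*T^2
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (2 * (1 - alpha) ** 2)) * (s1 - 2 * s2 + s3)
    return a + b * horizon + c * horizon ** 2
```

On a constant load trace the forecast stays at that constant, and on a linear trend it extrapolates the trend, which is the behaviour the quadratic prediction form is designed for.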
Elman neural network model ENN:
The basic structure of the Elman network consists of 4 parts: an input layer, a hidden layer, an output layer, and a context (association) layer. Unlike an ordinary feed-forward neural network, the Elman network has an additional context layer whose input is the previous output of the hidden layer. This internal feedback mechanism enhances the network's ability to process dynamic time-series data.
Mathematical model of Elman neural network:
x(k) = f(w_1 x_c(k) + w_2 u(k-1))    (4)
x_c(k) = x(k-1)    (5)
y(k) = g(w_3 x(k))    (6)

where w_1 is the connection weight matrix between the context layer and the hidden layer, w_2 the connection weight matrix between the input layer and the hidden layer, and w_3 the connection weight matrix between the hidden layer and the output layer; x_c(k) and x(k) denote the outputs of the context layer and the hidden layer respectively, and y(k) the output of the output layer; f is the activation function, taken as the Sigmoid function; u is the input vector; and g is the transfer function of the output neurons;
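A minimal forward pass matching equations (4)-(6) can be sketched as follows; the layer sizes, random weight initialisation, and identity output transfer g are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ElmanCell:
    """One recurrent step of an Elman network (forward pass only)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(n_hidden, n_hidden))  # context -> hidden
        self.w2 = rng.normal(size=(n_hidden, n_in))      # input -> hidden
        self.w3 = rng.normal(size=(n_out, n_hidden))     # hidden -> output
        self.xc = np.zeros(n_hidden)                     # context state x_c(k)

    def step(self, u_prev):
        x = sigmoid(self.w1 @ self.xc + self.w2 @ u_prev)  # eq. (4)
        self.xc = x.copy()                                  # eq. (5)
        return self.w3 @ x                                  # eq. (6), g = identity
```

The context copy in `step` is what distinguishes the Elman cell from a plain feed-forward layer: the next call sees the previous hidden output.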
prediction algorithm HCESEA:
Let the data set L'_h = {l_{t_1}, l_{t_2}, …, l_{t_n}} be the true load of physical host h over the time periods t_1 to t_n, and let Y'_h denote the load prediction sequence of length m for host h obtained from L'_h by the cubic exponential smoothing model CES. The prediction error sequence is E_h = Y'_h − L'_h = {e_1, e_2, …, e_m}; the Elman neural network model ENN learns from E_h a corrected error sequence E'_h, and the corrected load prediction of host h is obtained by subtracting the predicted error from the CES forecast.

Let m_CES and m_ENN denote the cubic exponential smoothing model CES and the Elman neural network model ENN respectively, and x_CES, x_ENN their input vectors. At time t_{n+1}, the output of the hybrid model is:

Y* = m_CES(x_CES) − m_ENN(x_ENN).
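The hybrid combination can be sketched minimally as follows; the stand-in "models" are plain callables, an illustrative assumption:

```python
# HCESEA combination sketch: the ENN is trained on the CES residuals
# E_h = Y'_h - L'_h, and the corrected forecast is
# Y* = m_CES(x_CES) - m_ENN(x_ENN).

def residuals(ces_forecasts, true_loads):
    """E_h: the per-period CES errors the ENN model learns to predict."""
    return [y - l for y, l in zip(ces_forecasts, true_loads)]

def hcesea_forecast(m_ces, m_enn, x_ces, x_enn):
    """Error-corrected load forecast for the next period."""
    return m_ces(x_ces) - m_enn(x_enn)
```

If the CES model overshoots by a roughly constant amount, the ENN's error estimate cancels that bias in the corrected forecast.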
In step 2), since the load of each physical host in the data center changes dynamically, the loads of different physical hosts at the same moment differ. The application therefore proposes a hybrid load detection algorithm HLDA to identify the current load state of each physical host: first obtain the predicted load and the real-time load of every physical host in the data center, and then divide the load state of each host, by set thresholds, into four states: underloaded, suitably loaded (fit), potentially overloaded, and overloaded.
The specific flow of the hybrid load detection algorithm HLDA is as follows: when the load of a physical host is above the upper threshold, the host is added to the overloaded host queue; when the load is below the lower threshold, it is added to the underloaded host queue; when the load lies within the thresholds, the predicted load of the host decides: if the predicted load is above the upper threshold, the host is added to the potentially overloaded host queue, otherwise it is added to the fit host queue.
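The threshold split can be sketched as follows; the state names and threshold parameters are illustrative, since the patent's queue symbols are in images not reproduced here:

```python
# Sketch of the HLDA four-way state split: real-time load decides the
# clear-cut cases, and the predicted load disambiguates hosts that are
# currently within the thresholds.

def classify_host(current_load, predicted_load, lower, upper):
    if current_load > upper:
        return "overloaded"
    if current_load < lower:
        return "underloaded"
    # Within thresholds: look ahead using the predicted load.
    if predicted_load > upper:
        return "potentially_overloaded"
    return "fit"
```

The look-ahead branch is what separates HLDA from a purely reactive detector: a host that is fine now but predicted to exceed the upper threshold is flagged before the overload happens.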
In step 3), virtual machine selection: live migration of a virtual machine can negatively impact the performance of the applications running on it, creating SLA conflicts, and the performance degradation during migration is tied to the migration time. The application therefore proposes a CPU- and memory-aware virtual machine selection algorithm CM-VMSA (Virtual machine selection algorithm based on CPU and memory performance) to select the virtual machines to be migrated from unsuitable hosts. The algorithm reduces the migration time of the virtual machines as far as possible while also reducing the number of migrations, thereby improving service quality.
Physical hosts in three states need to migrate virtual machines: those in the underloaded host queue, the overloaded host queue, and the potentially overloaded host queue. The application performs the following operations on these three queues:

Underloaded host queue: all virtual machines on such a host are added to the virtual machine migration queue; after all of them have been migrated to other physical hosts, the node is switched into sleep mode to reduce energy consumption.

Overloaded host queue: first, all virtual machines on the host are sorted in descending order of CPU utilization; second, within that order, they are sorted in ascending order of the amount of memory they occupy; finally, the algorithm tries to migrate out each virtual machine in turn, recomputing the host load L_C in real time; once L_C < η_u (the upper threshold), the procedure stops and the virtual machines tried so far are added to the virtual machine migration queue.

Potentially overloaded host queue: the virtual machines are sorted in the same way (CPU utilization descending, then memory used ascending); each virtual machine is tried in turn and the host load L_E is recomputed by subtracting, from the predicted load, the CPU load occupied by each tried virtual machine i on host h; once L_E falls below the upper threshold, the procedure stops and the tried virtual machines are added to the migration queue.
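A minimal sketch of the CM-VMSA ordering for an overloaded host, assuming simplified VM records (CPU as a share of host load, memory in use) and a single upper threshold:

```python
# CM-VMSA sketch: sort VMs by CPU utilisation descending, break ties by
# memory ascending (smaller memory migrates faster), then migrate VMs
# one by one until the host load drops below the upper threshold eta_u.

def select_vms_to_migrate(vms, host_load, eta_u):
    """vms: list of dicts with 'cpu' (host-load share) and 'mem' keys.
    Returns (VMs chosen for migration, resulting host load)."""
    ordered = sorted(vms, key=lambda v: (-v["cpu"], v["mem"]))
    migrated = []
    for vm in ordered:
        if host_load < eta_u:
            break                 # host is back under the threshold
        host_load -= vm["cpu"]    # removing the VM frees its CPU share
        migrated.append(vm)
    return migrated, host_load
```

Picking high-CPU VMs first minimises the number of migrations, and preferring low memory among equals minimises each migration's duration, which is exactly the trade-off the algorithm description targets.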
In step 4), virtual machine placement: the virtual machine placement algorithm RDS-VMPA (Virtual machine placement algorithm based on Resource Demand Scaling) selects suitable physical hosts onto which to place the migrated virtual machines, according to the resource demands of the virtual machine migration queue and the resource information of the suitably loaded hosts in the data center. RDS-VMPA computes the future workload of every physical host that can satisfy a virtual machine's resource allocation demand, divides the hosts into a resource-demand-shrinking queue and a resource-demand-growing queue according to the prediction, sorts each queue in a specific way, and determines the target host by further screening. The specific algorithm steps are as follows:
1) calculating the resource-demand scaling amount of each physical host, obtained by subtracting the host's current workload from its predicted load; this value reflects the trend of the virtual machines' resource demand on that host;
2) calculating the unallocated resource amount of each host by subtracting its current workload from its total resource amount, and keeping the hosts for which the difference between the unallocated resource amount and the resource-demand scaling amount (i.e. the estimated remaining resource amount) is greater than zero, to form the candidate physical host list;
3) if the resource-demand scaling amount of a candidate host is negative, adding it to the resource-demand-shrinking queue and sorting that queue in descending order of the difference between the remaining resource amount and the scaling amount; if the scaling amount is positive, adding the host to the resource-demand-growing queue, computing the ratio of the scaling amount to the remaining resource amount as the safety factor SF, and sorting that queue in ascending order of SF;
4) after the two queues are generated, if one queue is empty, the head host of the other queue is directly selected as the placement host; if neither is empty, the priority factors δ of the two head-of-queue hosts are compared to determine the placement host.
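The queue construction in steps 1)-3) can be sketched as follows; the field names and the simplified single-resource model are assumptions for illustration:

```python
# RDS-VMPA candidate split sketch: hosts whose estimated remaining
# resources are positive are divided into a shrinking-demand queue
# (sorted descending by remaining headroom) and a growing-demand queue
# (sorted ascending by the safety factor SF = scaling / free).

def build_placement_queues(hosts):
    """hosts: dicts with 'total', 'load', 'predicted' resource amounts."""
    shrink, grow = [], []
    for h in hosts:
        scaling = h["predicted"] - h["load"]   # resource-demand scaling amount
        free = h["total"] - h["load"]          # unallocated resource amount
        if free - scaling <= 0:                # estimated remainder not positive
            continue                           # host is not a candidate
        if scaling < 0:
            h["rank"] = free - scaling         # headroom, sorted descending
            shrink.append(h)
        else:
            h["rank"] = scaling / free         # safety factor SF, ascending
            grow.append(h)
    shrink.sort(key=lambda h: -h["rank"])
    grow.sort(key=lambda h: h["rank"])
    return shrink, grow
```

A host whose demand is predicted to shrink is the safest target, so the shrinking queue is preferred headroom-first; among growing hosts, a smaller SF means the predicted growth consumes a smaller fraction of the free capacity.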
A virtual resource integration system based on green energy conservation comprises a global manager and more than two local managers;
a scheduling module, a data information module, a load prediction center, a load detection center, a virtual machine selection center and a virtual machine placement center are arranged in the global manager, and a monitoring unit is arranged in the local manager;
the local manager collects real-time loads of CPUs and memories on the physical host and the virtual machine through the monitoring unit and sends the data set to a data information module in the global manager;
a scheduling module in the global manager predicts the load condition of each physical host at the next moment by using a load prediction center according to a data set in the data information module;
the load detection center combines the real-time load and the expected load of the physical host to detect and divide the load state of the physical host;
the virtual machines to be migrated are selected from the unsuitable hosts through the virtual machine selection center;
a suitable host able to carry the load is selected for each virtual machine to be migrated through the virtual machine placement center, and the migration plan is sent to each local manager for execution.
Virtual resource integration model:
Definition 1: the physical hosts in a data center are denoted Q_H = {h_1, h_2, …, h_i, …, h_m}, where h_i represents the i-th physical host;
Definition 2: the virtual machines running on physical host h_i form a queue, in which v_j represents the j-th virtual machine;
Definition 3: the migration of a virtual machine can incur approximately 10% CPU overhead, which degrades the performance of the virtual machine, so every live migration can cause some SLA violations. The number of live migrations must therefore be reduced as much as possible while guaranteeing the quality of the provided service. The migration time and the performance degradation of a virtual machine are given by:

T_j^{mig} = M_j / B_j
U_j^{d} = 0.1 · ∫_{t_0}^{t_0 + T_j^{mig}} u_j(t) dt

where T_j^{mig} is the time taken by virtual machine v_j to complete its migration, M_j is the amount of memory used by v_j, B_j is the available network bandwidth, U_j^{d} is the amount of performance degradation caused by the migration, t_0 is the start time of the migration, and u_j(t) is the CPU utilization of v_j;
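Definition 3 can be sketched numerically; approximating the integral with a constant CPU utilisation over the migration window is an illustrative simplification:

```python
# Migration cost sketch: migration time is memory in use divided by
# available bandwidth, and performance degradation is 10% of the CPU
# utilisation accumulated over the migration window.

def migration_time(mem_used_mb, bandwidth_mb_s):
    """T_mig = M_j / B_j."""
    return mem_used_mb / bandwidth_mb_s

def performance_degradation(cpu_utilisation, t_mig):
    """U_d = 0.1 * integral of u_j(t) over [t0, t0 + T_mig],
    here with u_j(t) held constant."""
    return 0.1 * cpu_utilisation * t_mig
```

For example, migrating a 1000 MB VM over a 100 MB/s link takes 10 s; at 50% CPU utilisation that migration costs 0.5 CPU-seconds of degraded capacity under this model.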
Definition 4: the energy consumption of the data center is expressed as:

EC = Σ_{i=1}^{m} ∫_t P(u_{h_i}(t)) dt

where EC is the energy consumption of the data center, u_{h_i}(t) is the load of physical host h_i at time t, and P(·) is the power consumption corresponding to that load.
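Definition 4 can be sketched with a discretised integral; the linear power model below (idle power plus a load-proportional term, echoing the background's remark that idle servers draw about 70% of peak power) is an assumption, not taken from the patent text:

```python
# Data-centre energy sketch: EC = sum over hosts of power integrated
# over time, with the integral approximated by sampled utilisations.

def host_power(utilisation, p_idle=70.0, p_max=100.0):
    """Linear power model in watts: idle floor plus load-proportional term."""
    return p_idle + (p_max - p_idle) * utilisation

def datacenter_energy(load_traces, dt=1.0):
    """load_traces: one list of utilisation samples (step dt seconds)
    per host; returns total energy in joules."""
    return sum(host_power(u) * dt for trace in load_traces for u in trace)
```

The idle floor is why consolidation saves energy: two hosts at 0% load still draw 140 W under this model, while one host at moderate load and one host asleep draws far less.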
The definitions of the symbols in the application are shown in table 1.
TABLE 1 symbol definitions
For techniques not described in the present invention, reference is made to the prior art.
Aiming at the existing problems of virtual machine integration, the invention firstly constructs a virtual resource integration system based on green energy conservation, and the model defines the specific flow and relevant attributes of virtual resource integration. Secondly, on the basis of the model, the dynamic virtual machine integration method based on balanced energy consumption and service quality is provided to realize efficient integration of virtual resources. The strategy contains four parts: physical host load prediction, physical host load state detection, virtual machine selection, and virtual machine placement.
Physical host load prediction: by sensing the load information in the data center, the application proposes a hybrid prediction model HCESEA, based on a cubic exponential smoothing model and an Elman neural network model, to predict the workload of each physical host at the next moment. The model performs error prediction and correction on top of CES, so the load state of a physical host can be predicted more accurately.
Physical host load state detection: the application proposes a Hybrid Load Detection Algorithm (HLDA) to identify the current load state of each physical host and divide it into four states: underloaded, suitably loaded (fit), potentially overloaded, and overloaded. This fine-grained division of host states can reduce potential SLA violations and thus improve service quality.
Virtual machine selection: the application proposes a CPU- and memory-aware Virtual machine selection algorithm (CM-VMSA) to select the virtual machines that need to be migrated from unsuitable hosts. The algorithm reduces the migration time of the virtual machines as far as possible while also reducing the number of migrations, thereby improving service quality.
Virtual machine placement: the application proposes a Virtual machine placement algorithm based on resource demand scaling (RDS-VMPA), which selects suitable physical hosts onto which to place the migrated virtual machines according to the resource demands of the virtual machine migration queue and the resource information of the suitably loaded hosts in the data center. The host selected by the algorithm takes the host's resource-demand scaling amount into account, which effectively prevents overload caused by workload fluctuation, so virtual machines are allocated to physical hosts reasonably and the resource utilization of the data center improves.
The dynamic virtual machine integration method based on balancing energy consumption and service quality not only reduces the number of virtual machine migrations and the energy consumed, but also maintains a high level of Quality of Service (QoS), achieving a balance between energy consumption and QoS.
Drawings
FIG. 1 is a schematic diagram of a green energy-saving virtual resource integration system according to the present invention.
FIG. 2 is a diagram illustrating the ENN model of the present invention.
FIG. 3 is a structural diagram of HCESEA in accordance with the present invention.
Detailed Description
For better understanding of the present invention, the following examples are given for further illustration of the present invention, but the present invention is not limited to the following examples.
The invention provides a dynamic virtual machine integration method based on balanced energy consumption and service quality. The method firstly constructs a virtual resource integration system based on green energy conservation, and the model defines the specific flow and relevant attributes of virtual resource integration. Secondly, on the basis of the model, a dynamic virtual machine integration method based on balanced energy consumption and service quality is provided to realize efficient integration of virtual resources.
Fig. 1 is a schematic diagram of a green energy-saving-based virtual resource integration system according to the present invention. The model is composed of a global manager and a plurality of local managers. The local manager collects real-time loads of CPUs and memories on the physical host and the virtual machine through the monitoring unit and sends the data set to the data information module in the global manager. The scheduling process comprises the following steps:
step 101: firstly, a scheduling module in the global manager predicts the load condition of each physical host at the next moment by using a load prediction center according to a data set in a data information module;
step 102: the load detection center combines the real-time load and the expected load of the physical host to detect and divide the load state of the physical host;
Step 103: the virtual machines to be migrated are selected from the unsuitable hosts through the virtual machine selection center.
Step 104: a suitable host able to carry the load is selected for each virtual machine to be migrated through the virtual machine placement center, and the migration plan is sent to each local manager for execution.
The dynamic virtual machine integration method based on balancing energy consumption and service quality comprises the following steps:
1) physical host load prediction:
To handle the dynamic and uncertain nature of physical host load data, the present application proposes a prediction algorithm HCESEA (hybrid prediction algorithm based on the Cubic Exponential Smoothing model and the Elman Neural Network model) to predict the workload of a physical host at the next moment. The algorithm first predicts with a cubic exponential smoothing model (CES), then uses the Elman neural network model (ENN) shown in Fig. 2 to predict the error of the CES prediction, and finally obtains the error-corrected predicted value. This relieves the influence of the model parameters on overall performance; moreover, since the ENN predicts the error series of the CES model, which is easier to predict than the original data set, the prediction performance is further improved. The models involved in HCESEA are introduced below.
The cubic exponential smoothing values are computed as follows:

S_t^(1) = α·l_t + (1 - α)·S_(t-1)^(1)
S_t^(2) = α·S_t^(1) + (1 - α)·S_(t-1)^(2)
S_t^(3) = α·S_t^(2) + (1 - α)·S_(t-1)^(3)

wherein α is the smoothing coefficient, 0 < α < 1; S_t^(1), S_t^(2) and S_t^(3) are respectively the first, second and cubic exponential smoothing values of time period t, and l_t is the observed load of time period t.
The CES prediction model is as follows:

Y_(t+T) = A_t + B_t·T + C_t·T^2

wherein T is the number of prediction periods and A_t, B_t, C_t are prediction parameters.
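As an illustration, Brown's cubic exponential smoothing can be implemented directly from the recursions above. The closed-form expressions for A_t, B_t and C_t are the standard Brown coefficients, which this excerpt does not spell out, so they are an assumption here:

```python
def ces_forecast(series, alpha, T=1):
    """Brown's cubic (triple) exponential smoothing forecast Y_{t+T}.
    The A_t, B_t, C_t formulas below are the standard Brown
    coefficients (an assumption; the excerpt leaves them implicit)."""
    s1 = s2 = s3 = series[0]  # a common initialisation choice
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    k = alpha / (2 * (1 - alpha) ** 2)
    b = k * ((6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (2 * (1 - alpha) ** 2)) * (s1 - 2 * s2 + s3)
    return a + b * T + c * T ** 2
```

On a constant load series the forecast reproduces the constant, and on a steadily rising series it extrapolates the trend.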
Elman neural network model ENN:
The basic structure of the Elman network consists of 4 parts: an input layer, a hidden layer, an output layer, and a context (association) layer. Unlike an ordinary feedforward neural network, the Elman network adds a context layer whose input comes from the output of the hidden layer at the previous step. This internal feedback mechanism enhances the network's ability to process dynamic time-series data.
Mathematical model of Elman neural network:
x(k) = f(w1·x_c(k) + w2·u(k-1))  (4)
x_c(k) = x(k-1)  (5)
y(k) = g(w3·x(k))  (6)
wherein w1 is the connection weight matrix between the context layer and the hidden layer, w2 is the connection weight matrix between the input layer and the hidden layer, and w3 is the connection weight matrix between the hidden layer and the output layer; x_c(k) and x(k) denote the outputs of the context layer and the hidden layer respectively, y(k) denotes the output of the output layer, f is the activation function (taken as the Sigmoid function), u is the input vector, and g is the transfer function of the output neurons;
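A single forward step of Eqs. (4) to (6) can be sketched as follows. The output transfer function g is taken as the identity here, which is an assumption (the excerpt only fixes f as the Sigmoid function):

```python
import math

def elman_step(u_prev, x_prev, w1, w2, w3):
    """One forward step of the Elman network of Eqs. (4)-(6).
    The context layer x_c(k) simply stores the hidden output x(k-1);
    g is taken as the identity (an assumption)."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    xc = x_prev                                     # Eq. (5)
    n = len(x_prev)
    # Eq. (4): x(k) = f(w1 * xc(k) + w2 * u(k-1)), f = Sigmoid
    x = [sigmoid(sum(w1[i][j] * xc[j] for j in range(n))
                 + sum(w2[i][j] * u_prev[j] for j in range(len(u_prev))))
         for i in range(n)]
    # Eq. (6): y(k) = g(w3 * x(k)) with g = identity
    y = [sum(w3[o][i] * x[i] for i in range(n)) for o in range(len(w3))]
    return x, y
```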
the prediction algorithm HCESEA:
Let the data set L′_h = {l_t1, l_t2, …, l_tn} be the true load of physical host h over time periods t_1 to t_n, and let Y′_h denote the load prediction sequence of length m obtained from L′_h by the cubic exponential smoothing model CES. The error sequence can then be expressed as E_h = Y′_h - L′_h = {e_1, e_2, …, e_m}. The Elman neural network model ENN obtains from E_h a corrected error sequence E′_h, and the corrected load prediction of physical host h is expressed as Y*_h = Y′_h - E′_h. The structure of the algorithm is shown in Fig. 3.
Let m_CES and m_ENN denote the cubic exponential smoothing model CES and the Elman neural network model ENN respectively, and let x_CES and x_ENN be their input vectors. At time t_(n+1), the output of the model can be expressed as:

Y* = m_CES(x_CES) - m_ENN(x_ENN).
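Putting the two models together, the HCESEA correction Y* = m_CES(x_CES) - m_ENN(x_ENN) can be sketched structurally. The two models are passed in as callables, and the way the past error series is assembled here is a simplifying assumption:

```python
def hcesea_predict(history, ces_model, enn_error_model):
    """HCESEA hybrid prediction sketch: CES forecasts the load, the
    ENN forecasts the next CES error e = Y' - L', and the corrected
    value is Y* = m_CES - m_ENN. Both models are injected callables."""
    y_ces = ces_model(history)
    # past CES one-step errors (prediction minus observed truth)
    errors = [ces_model(history[:i + 1]) - history[i + 1]
              for i in range(len(history) - 1)]
    e_hat = enn_error_model(errors)  # ENN's predicted next error
    return y_ces - e_hat
```

With a naive "last value plus one" forecaster and a mean-error corrector, an unbiased forecaster yields zero correction.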
Fig. 3 shows the structure of HCESEA. The specific steps of the HCESEA physical host load prediction algorithm are as follows:
2) physical host load state detection:
Because the load of each physical host in the data center changes dynamically, different physical hosts can carry different loads at the same moment. For this purpose, the present application proposes a hybrid load detection algorithm (HLDA) to identify the current load state of each physical host: first the predicted load and the real-time load of each physical host in the data center are obtained, and then the load state of each physical host is divided, by set thresholds, into the following four states: underloaded, properly loaded, potentially overloaded, and overloaded.
The specific flow of the hybrid load detection algorithm HLDA is as follows: when the load of a physical host is above the upper threshold, the host is added to the overloaded host queue; when the load of a physical host is below the lower threshold, the host is added to the underloaded host queue; when the load of a physical host is within the thresholds, the decision is made in combination with its predicted load: if the predicted load is above the upper threshold, the host is added to the potentially overloaded host queue, otherwise it is added to the properly loaded host queue.
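The HLDA state division can be sketched as a small classification function. The threshold names eta_l and eta_u are illustrative, since the excerpt renders the original threshold symbols only as images:

```python
def classify_host(load, predicted_load, eta_l, eta_u):
    """HLDA state division sketch. eta_l / eta_u are the lower and
    upper load thresholds (names are illustrative assumptions)."""
    if load > eta_u:
        return "overloaded"
    if load < eta_l:
        return "underloaded"
    # within thresholds: decide with the predicted load
    return "potentially_overloaded" if predicted_load > eta_u else "properly_loaded"
```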
The specific steps of the physical host load state detection algorithm HLDA are as follows:
3) virtual machine selection: live migration of a virtual machine can negatively impact the performance of the applications running on it, thereby creating SLA violations, and the performance degradation during migration is related to the virtual machine migration time. Therefore, the present application proposes a CPU- and memory-aware virtual machine selection algorithm CM-VMSA to select the virtual machines to be migrated on improperly loaded hosts. The algorithm reduces the migration time of the virtual machines while also reducing the number of migrations, thereby improving service quality;
Physical hosts in 3 classes of states need to migrate virtual machines: those in the underloaded host queue, the overloaded host queue, and the potentially overloaded host queue. The present application performs the following operations on these 3 classes of queues:
Underloaded host queue: all virtual machines on such a physical host are added to the virtual machine migration queue; after all its virtual machines have been migrated to other physical hosts, the node switches to sleep mode to reduce energy consumption;
Overloaded host queue: first, all virtual machines on the physical host are sorted in descending order of CPU utilization; second, within equal CPU utilization, they are sorted in ascending order of occupied memory; finally, the virtual machines are tentatively migrated out one by one while the real-time load L_c of the physical host is computed; once L_c < η_u (the upper threshold), the operation stops and the tentatively migrated virtual machines are added to the virtual machine migration queue;
Potentially overloaded host queue: first, all virtual machines on the physical host are sorted in descending order of CPU utilization; second, within equal CPU utilization, they are sorted in ascending order of used memory; finally, the virtual machines are tentatively migrated out one by one while the predicted load L_E of the physical host is computed; once L_E drops below the upper threshold, the operation stops and the tentatively migrated virtual machines are added to the virtual machine migration queue.
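The handling of an overloaded host described above can be sketched as follows. The host-load model (subtracting each migrated VM's CPU share from the host load) is a simplifying assumption:

```python
def select_vms_overloaded(host_vms, host_load, eta_u):
    """CM-VMSA sketch for an overloaded host: sort VMs by CPU
    utilisation descending, then by memory ascending within equal
    CPU, and migrate until the host load L_c drops below eta_u.
    Each VM is a dict {'cpu': ..., 'mem': ...}; subtracting the
    VM's CPU share from the host load is an assumed load model."""
    order = sorted(host_vms, key=lambda vm: (-vm["cpu"], vm["mem"]))
    migrate, load = [], host_load
    for vm in order:
        if load < eta_u:
            break
        migrate.append(vm)
        load -= vm["cpu"]
    return migrate, load
```

Among equally CPU-hungry VMs, the one with less memory is migrated first, which keeps migration time short.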
The specific steps of the virtual machine selection algorithm CM-VMSA are as follows:
4) virtual machine placement:
The virtual machine placement algorithm RDS-VMPA, based on the resource demand scaling amount, selects a suitable physical host for each migrating virtual machine according to the resource requirements of the virtual machines in the migration queue and the resource information of the properly loaded hosts in the data center. RDS-VMPA computes the future workload of all physical hosts that satisfy the virtual machine's resource allocation requirements, divides them into a resource-demand-shrinking queue and a resource-demand-growing queue according to the prediction, sorts each queue in a specific way, and determines the target host by further screening. The specific algorithm steps are as follows:
1) calculate the resource demand scaling amount of each physical host, which is its predicted load minus its current workload; this value reflects the trend of the resource demands of the virtual machines on the physical host;
2) calculate the unallocated resource amount as the total resource amount of the physical host minus its current workload, and screen out the physical hosts whose estimated remaining resource amount (the unallocated resource amount minus the resource demand scaling amount) is greater than zero, forming a candidate physical host list;
3) if the resource demand scaling amount is negative, add the physical host to the resource-demand-shrinking queue, calculate the difference between the remaining resource amount and the scaling amount, and sort the queue in descending order of this difference; if the scaling amount is positive, add the physical host to the resource-demand-growing queue, calculate the ratio of the scaling amount to the remaining resource amount as the safety factor SF, and sort the queue in ascending order of this factor;
4) after the two queues are generated, if one queue is empty, directly select the head host of the other queue as the placement host; if neither is empty, compare the priority factors δ of the two head-of-queue hosts.
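The queue construction of steps 1) to 4) can be sketched as follows. Since this excerpt does not define the priority factor δ used to break ties between the two queue heads, this sketch simply prefers the shrinking queue, which is an assumption:

```python
def choose_target_host(hosts):
    """RDS-VMPA queue-construction sketch. Each host dict holds
    'capacity', 'load' (current) and 'predicted' load; the scaling
    amount is predicted - load. Hosts with positive estimated
    remaining resources are split into a shrinking queue
    (scaling < 0, sorted by remaining - scaling, descending) and a
    growing queue (scaling > 0, sorted by safety factor SF
    ascending). Preferring the shrinking queue is an assumption
    standing in for the unspecified priority factor delta."""
    shrink, grow = [], []
    for h in hosts:
        scaling = h["predicted"] - h["load"]
        remaining = h["capacity"] - h["load"]
        if remaining - scaling <= 0:  # candidate filter of step 2)
            continue
        if scaling < 0:
            shrink.append((remaining - scaling, h))
        elif scaling > 0:
            grow.append((scaling / remaining, h))  # safety factor SF
    shrink.sort(key=lambda t: -t[0])
    grow.sort(key=lambda t: t[0])
    if shrink:
        return shrink[0][1]
    return grow[0][1] if grow else None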
The specific steps of the virtual machine placement algorithm RDS-VMPA are as follows:
In summary, the dynamic virtual machine integration method based on balancing energy consumption and service quality provided by the invention is suitable for solving the virtual machine consolidation problem. The strategy not only reduces the number of virtual machine migrations and the energy consumption, but also maintains a high level of service quality, realizing a balance between energy consumption and service quality. This enables cloud service providers to reduce data center costs and improve the users' service experience, further promoting the development of cloud computing.
The CloudSim simulation platform is used to evaluate the proposed EQ-DVMCA (dynamic virtual machine consolidation algorithm based on balanced energy consumption and quality of service) and compare it against several benchmark algorithms. We use CloudSim 4.0, an event-driven simulator for modeling cloud computing infrastructure and application services; it supports virtualized resource management and modeling, energy consumption accounting, virtual machine migration, and related functions.
The experiment simulates a data center consisting of 800 heterogeneous physical hosts of two types: HP ProLiant ML110 G4 and HP ProLiant ML110 G5. The host configurations are shown in Table 2.
In the experiment we selected four types of virtual machines: high-CPU medium instances, extra-large instances, small instances, and micro instances. The virtual machine attributes are shown in Table 3.
TABLE 3 virtual machine attributes
To make the simulation results realistic, we used part of the workload data provided by the CoMon project for the simulation experiments. After the PM and VM instances are created on the CloudSim platform, the PlanetLab data set is used to generate the workload of each VM, and the VMs are then randomly deployed onto the PMs according to their resource requirements.
TABLE 4 workload information for PlanetLab dataset
Benchmark algorithm settings
Host load detection algorithms: Static Threshold (THR), Interquartile Range (IQR), Local Regression Robust (LRR), Median Absolute Deviation (MAD); virtual machine selection algorithms: Minimum Migration Time (MMT), Random Choice (RC), Maximum Correlation (MC); virtual machine placement algorithm: Power Aware Best Fit Decreasing (PABFD). By combining these algorithms, 12 different virtual machine consolidation algorithms are obtained, each comprising a host load detection algorithm, a virtual machine selection algorithm, and a virtual machine placement algorithm. The safety parameters of THR, IQR, LRR, and MAD are set to 0.8, 1.5, 1.2, and 2.5 respectively.
Performance indexes: number of Virtual Machine Migrations (VMM), SLA violation Time per Active Host (SLATAH), Performance Degradation due to Migration (PDM), Service Level Agreement Violation (SLAV), Energy Consumption (EC), and the combined energy-and-QoS evaluation index (ESV).
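For reference, the composite indexes SLAV and ESV are commonly defined in the virtual machine consolidation literature as SLAV = SLATAH × PDM and ESV = EC × SLAV. This excerpt does not state these formulas, so the sketch below is an assumption:

```python
def sla_metrics(slatah, pdm, energy):
    """Composite QoS/energy indexes as commonly defined in the VM
    consolidation literature (an assumption, not stated in this
    excerpt): SLAV = SLATAH * PDM, ESV = EC * SLAV."""
    slav = slatah * pdm
    return {"SLAV": slav, "ESV": energy * slav}
```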
The experimental results are as follows:
TABLE 10 simulation results of EQ-DVMCA and baseline algorithm under PlanetLab data set
As seen from the table above, EQ-DVMCA performs best on all 6 performance indexes. Compared with the benchmark algorithms, the proposed EQ-DVMCA not only markedly reduces the number of virtual machine migrations and the energy consumption, but also provides reliable QoS. From Table 10 it can be calculated that, compared with the benchmark algorithms, EQ-DVMCA improves the VMM index by 8.40% to 21.60%, the SLATAH index by 10.83% to 36.93%, the PDM index by 33.33% to 62.96%, the SLAV index by 47.21% to 75.80%, the EC index by 9.78% to 27.49%, and the ESV index by 51.97% to 82.24%; that is, EQ-DVMCA has significant advantages.
Claims (8)
1. A dynamic virtual machine integration method based on balancing energy consumption and service quality, characterized in that the method comprises the following steps:
1) physical host load prediction: predicting the working load of the physical host at the next moment based on a prediction algorithm HCESEA of a cubic exponential smoothing model CES and an Elman neural network model ENN;
2) physical host load state detection: identifying the current load state of the physical host through a hybrid load detection algorithm HLDA;
3) virtual machine selection: selecting the virtual machines to be migrated on improperly loaded hosts based on the CPU- and memory-aware virtual machine selection algorithm CM-VMSA;
4) virtual machine placement: a virtual machine placement algorithm RDS-VMPA based on the resource demand scaling amount selects, according to the resource requirements of the virtual machines in the migration queue and the resource information of the properly loaded hosts in the data center, a suitable physical host on which to place each migrating virtual machine.
2. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 1, wherein: in step 1), the cubic exponential smoothing model CES is used for prediction, the Elman neural network model ENN is used to predict the error of the CES prediction, and finally the error-corrected predicted value is obtained.
3. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 2, wherein: in step 1), the cubic exponential smoothing model CES:
the cubic exponential smoothing values are computed as follows:

S_t^(1) = α·l_t + (1 - α)·S_(t-1)^(1)
S_t^(2) = α·S_t^(1) + (1 - α)·S_(t-1)^(2)
S_t^(3) = α·S_t^(2) + (1 - α)·S_(t-1)^(3)

wherein α is the smoothing coefficient, 0 < α < 1; S_t^(1), S_t^(2) and S_t^(3) are respectively the first, second and cubic exponential smoothing values of time period t, and l_t is the observed load of time period t;
the CES prediction model is as follows:

Y_(t+T) = A_t + B_t·T + C_t·T^2

wherein T is the number of prediction periods and A_t, B_t, C_t are prediction parameters;
Elman neural network model ENN:
the basic structure of the Elman network consists of 4 parts, namely an input layer, a hidden layer, an output layer and an associated layer;
mathematical model of Elman neural network:
x(k) = f(w1·x_c(k) + w2·u(k-1))  (4)
x_c(k) = x(k-1)  (5)
y(k) = g(w3·x(k))  (6)
wherein w1 is the connection weight matrix between the context layer and the hidden layer, w2 is the connection weight matrix between the input layer and the hidden layer, and w3 is the connection weight matrix between the hidden layer and the output layer; x_c(k) and x(k) denote the outputs of the context layer and the hidden layer respectively, y(k) denotes the output of the output layer, f is the activation function (taken as the Sigmoid function), u is the input vector, and g is the transfer function of the output neurons;
the prediction algorithm HCESEA:
let the data set L′_h = {l_t1, l_t2, …, l_tn} be the true load of physical host h over time periods t_1 to t_n, and let Y′_h denote the load prediction sequence of length m obtained from L′_h by the cubic exponential smoothing model CES; the error sequence is E_h = Y′_h - L′_h = {e_1, e_2, …, e_m}; the Elman neural network model ENN obtains from E_h a corrected error sequence E′_h, and the corrected load prediction of physical host h is Y*_h = Y′_h - E′_h;
let m_CES and m_ENN denote the cubic exponential smoothing model CES and the Elman neural network model ENN respectively, and let x_CES and x_ENN be their input vectors; at time t_(n+1), the output of the model can be expressed as:

Y* = m_CES(x_CES) - m_ENN(x_ENN).
4. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 3, wherein: in step 2), the predicted load and the real-time load of each physical host in the data center are first obtained, and then the load state of the physical host is divided, by the set thresholds, into the following four states: underloaded, properly loaded, potentially overloaded, and overloaded.
5. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 4, wherein: in step 2), the specific flow of the hybrid load detection algorithm HLDA is as follows: when the load of a physical host is above the upper threshold, the host is added to the overloaded host queue; when the load is below the lower threshold, the host is added to the underloaded host queue; when the load is within the thresholds, the decision is made in combination with the predicted load of the physical host: if the predicted load is above the upper threshold, the host is added to the potentially overloaded host queue, otherwise it is added to the properly loaded host queue.
6. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 5, wherein: in step 3), physical hosts in 3 classes of states need to migrate virtual machines: those in the underloaded host queue, the overloaded host queue, and the potentially overloaded host queue; the following operations are performed on these 3 classes of queues:
underloaded host queue: all virtual machines on such a physical host are added to the virtual machine migration queue; after all its virtual machines have been migrated to other physical hosts, the node switches to sleep mode to reduce energy consumption;
overloaded host queue: first, all virtual machines on the physical host are sorted in descending order of CPU utilization; second, within equal CPU utilization, they are sorted in ascending order of occupied memory; finally, the virtual machines are tentatively migrated out one by one while the real-time load L_c of the physical host is computed; once L_c < η_u, the operation stops and the tentatively migrated virtual machines are added to the virtual machine migration queue;
potentially overloaded host queue: first, all virtual machines on the physical host are sorted in descending order of CPU utilization; second, within equal CPU utilization, they are sorted in ascending order of used memory; finally, the virtual machines are tentatively migrated out one by one while the predicted load L_E of the physical host is computed; once L_E drops below the upper threshold, the operation stops and the tentatively migrated virtual machines are added to the virtual machine migration queue.
7. The dynamic virtual machine integration method based on balancing energy consumption and quality of service as claimed in claim 6, wherein: in step 4), the specific steps of the RDS-VMPA algorithm based on the resource demand scaling amount are as follows:
1) calculate the resource demand scaling amount of each physical host, which is its predicted load minus its current workload; this value reflects the trend of the resource demands of the virtual machines on the physical host;
2) calculate the unallocated resource amount as the total resource amount of the physical host minus its current workload, and screen out the physical hosts whose estimated remaining resource amount (the unallocated resource amount minus the resource demand scaling amount) is greater than zero, forming a candidate physical host list;
3) if the resource demand scaling amount is negative, add the physical host to the resource-demand-shrinking queue, calculate the difference between the remaining resource amount and the scaling amount, and sort the queue in descending order of this difference; if the scaling amount is positive, add the physical host to the resource-demand-growing queue, calculate the ratio of the scaling amount to the remaining resource amount as the safety factor SF, and sort the queue in ascending order of this factor;
4) after the two queues are generated, if one queue is empty, directly select the head host of the other queue as the placement host; if neither is empty, compare the priority factors δ of the two head-of-queue hosts.
8. A green energy-saving based virtual resource integration system, characterized in that: the system comprises a global manager and two or more local managers;
a scheduling module, a data information module, a load prediction center, a load detection center, a virtual machine selection center and a virtual machine placement center are arranged in the global manager, and a monitoring unit is arranged in the local manager;
each local manager collects the real-time CPU and memory loads of the physical host and its virtual machines through a monitoring unit and sends the data set to the data information module in the global manager;
a scheduling module in the global manager predicts the load condition of each physical host at the next moment by using a load prediction center according to the data set in the data information module;
the load detection center combines the real-time load and the expected load of the physical host to detect and divide the load state of the physical host;
the virtual machine selection center selects the virtual machines to be migrated from the improperly loaded hosts;
the virtual machine placement center selects a suitable properly loaded host for each virtual machine to be migrated and sends the migration plan to each local manager for execution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210407757.7A CN114741160A (en) | 2022-04-19 | 2022-04-19 | Dynamic virtual machine integration method and system based on balanced energy consumption and service quality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114741160A true CN114741160A (en) | 2022-07-12 |
Family
ID=82282157
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114741160A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116405391A (en) * | 2023-04-10 | 2023-07-07 | 长扬科技(北京)股份有限公司 | OpenStack-based virtual machine node screening method, system and storage medium |
CN118394452A (en) * | 2024-06-24 | 2024-07-26 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Method for optimizing energy efficiency of cloud infrastructure |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||