CN106899660B - Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model - Google Patents
- Publication number
- CN106899660B (application CN201710064154.0A)
- Authority
- CN
- China
- Prior art keywords
- data center
- load
- prediction
- host
- virtual machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/12—Arrangements for remote connection or disconnection of substations or of equipment thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/142—Network analysis or design using statistical or mathematical methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model. The energy-saving process of the cloud data center is abstracted into four modules: load prediction, error checking, thermal-perception classification and virtual machine scheduling. The load prediction module forecasts the workload of the data center at the next moment, yielding the load utilization of each host. The thermal-perception classification module assigns every host a thermal state according to its predicted load utilization: hosts in hotter states run at higher utilization, and hosts in cooler states at lower utilization. To keep most hosts in a moderate thermal state, the virtual machine scheduling module migrates and consolidates the virtual machines on the hosts according to the classification result, thereby guaranteeing the service quality of the data center while reducing its energy consumption. The method also overcomes the low prediction accuracy that the traditional grey model suffers from when part of the data is missing.
Description
Technical Field
The invention belongs to the field of cloud computing energy-saving scheduling, and particularly relates to a cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model.
Background
Cloud computing, as a new technology in the information and communication industry, has gradually entered thousands of households and is favored by enterprise and individual users for its high benefit, low entry threshold and high scalability. As cloud computing matures and user demands grow, the scale of supporting facilities such as data center servers is increasing rapidly. Large cloud computing data centers containing tens of thousands of service nodes have been established around the world; they concentrate more computing and storage resources in the cloud, but also bring a series of energy consumption problems. Greenpeace predicts in its cloud computing report that by 2020 the energy consumption of the data centers of major IT operators worldwide will approach 1,963 billion kilowatt-hours, exceeding the combined current electricity consumption of Germany, France, Canada and Brazil. Meanwhile, the work report of the Chinese government lists "strengthening energy conservation, emission reduction and environmental protection" as a main task in 2008, and the Ministry of Science and Technology and the Ministry of Industry and Information Technology issued a special action plan on energy-saving and emission-reduction technologies in 2014 to let science and technology lead the easing of resource and environmental constraints. Rising energy costs increase the burden on enterprises and add to the carbon emissions that drive climate change worldwide, sounding an alarm on environmental problems.
The rapidly rising energy consumption of these cluster servers has become an important factor constraining enterprise efficiency and development, and energy-saving research in cloud computing has become a key topic at home and abroad. Existing studies start from different angles and still have shortcomings: workload monitoring and prediction for a cloud data center under traffic bursts remains an open problem, and virtual machine scheduling and consolidation strategies are still deficient.
Disclosure of Invention
In order to more intelligently and effectively reduce the power consumption of the data center and guarantee the service quality of the data center under the condition of network traffic burst, the invention provides a cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model.
The invention is realized by the following technical scheme.
The energy-saving scheduling implementation method of the cloud data center based on the rolling grey prediction model abstracts the energy-saving process of the cloud data center into four modules: load prediction, error checking, thermal-perception classification and virtual machine scheduling. The data center workload is predicted, each host is classified by state accordingly, and energy saving is achieved through a virtual machine scheduling algorithm.
Further, the load prediction specifically includes: and predicting the workload of the data center by using a rolling grey prediction model to obtain the load utilization rate of each host node of the data center at the next moment.
Further, the error checking specifically includes: checking the load prediction value against the actual workload, determining the deviation of the current prediction, learning from this deviation in the error checking module, and correcting subsequent predictions.
Further, the thermal-perception classification specifically includes: classifying all hosts in the cloud data center according to their current load prediction values, introducing the service level agreement (SLA) as a reference index to set the upper and lower bounds of the host workload threshold; a host is assigned one of four thermal states according to whether its load utilization is above the upper bound, below the lower bound (but non-zero), between the bounds, or equal to 0.
Further, the virtual machine scheduling specifically includes: scheduling virtual machines on the current hosts according to the hosts' thermal states, resolving host overload and idleness through virtual machine scheduling operations, and keeping every host of the data center in a healthy thermal state.
Further, the method is implemented modularly in a cloud data center environment on the basis of the rolling grey prediction model algorithm, intelligently monitoring the load information of the data center and various runtime performance data.
Compared with the prior art, the invention has the following advantages and technical effects:
By mining and analyzing real-time workload data of the data center, the invention builds a load prediction and evaluation model with a workload prediction algorithm based on the grey prediction model to forecast the load state of every server in the data center at the next moment, thereby avoiding server overload or idleness and coping more effectively with the now-common network traffic bursts. By analyzing the correlation between load and energy efficiency, a utilization-based thermal-perception classification model of the hosts is established; a host classification criterion based on the thermal-perception mechanism is proposed to solve the energy consumption and performance problems caused by uneven load across the hosts of the data center.
By implementing the method modularly in a cloud data center environment on the basis of the rolling grey prediction model algorithm, the load information of the data center and various runtime performance data are intelligently monitored. This lowers the energy consumption of the data center in operation, especially under traffic bursts, reduces the operation and maintenance cost of the cloud service provider, improves the SLA indexes of the data center, and safeguards the users' cloud service experience.
Drawings
Fig. 1 is a framework diagram of cloud computing intelligent resource scheduling.
Fig. 2 is a cloud data center system energy saving framework diagram.
Fig. 3 is a graph of a load prediction experiment result of the cloud data center task sequence 1.
Fig. 4 is a graph of a load prediction experiment result of the cloud data center task sequence 2.
FIG. 5 is a comparison of experimental data for the rolling grey prediction model and the ARIMA model.
FIG. 6 is a comparison graph of the mean deviation ratios of the rolling grey prediction model and the ARIMA model.
Detailed Description
In order to make the technical solutions and advantages of the present invention more apparent, the following detailed description is made with reference to the accompanying drawings, but the present invention is not limited thereto.
1. Policy framework
1.1 cloud computing resource Intelligent scheduling framework
Fig. 1 is an architecture diagram of a cloud computing platform resource intelligent scheduling framework, which is divided into a host layer, a virtual machine layer, a performance evaluation layer, a scheduling layer and a user layer from bottom to top. Wherein the scheduling layer and the evaluation layer are the core of the whole energy-saving strategy framework. Each layer will be explained below.
The host layer refers to all servers in the cloud data center, and includes all physical host nodes. The hardware devices are infrastructure of the bottom layer of the cloud environment, and provide a hardware foundation for energy-saving scheduling management.
The virtual machine layer is established on the basis of a host layer virtualization technology, and a resource pool of the virtual machine layer is formed by virtualizing a plurality of server entities, so that common computing and resource sharing in a cloud environment can be realized.
The performance evaluation layer is used for collecting and evaluating performance data such as load utilization rate, energy consumption, SLA and PUE of the cloud data center. The evaluation layer needs to perform data communication with the virtual layer to acquire information such as the utilization condition of the virtual machine resources and the running state of each virtual machine.
The scheduling layer performs initial virtual machine allocation and virtual machine migration on the hosts on the basis of the load and energy consumption data collected by the performance evaluation layer, and by scheduling the virtual machines ensures that the hosts operate within a healthy load-utilization range.
The user layer refers to all users and service requesters in the cloud computing environment, including individual users, enterprise users, and all users of cloud computing. The user layer will send a new service request to the data center all the time.
1.2 energy-saving framework of cloud data center system
Fig. 2 is a framework diagram of an energy-saving architecture of a cloud computing data center system, which is divided into four modules from top to bottom, namely a load prediction module, an error check module, a thermal perception classification module and a virtual machine scheduling module. Each of the modules will be explained below.
(1) A load prediction module: the data center handles thousands of service requests every second. The load prediction module continuously monitors the workload data of the physical machines (PMs) in the data center and, by analyzing valid historical load data, predicts the CPU utilization of each PM at a future moment. The load prediction module helps to effectively distinguish overloaded and idle servers in the current state of the data center.
(2) An error checking module: after the load prediction module completes the prediction process, the error checking module calculates the deviation between the actual value and the predicted value, and optimizes future predictions by analyzing and computing their relative error.
(3) A thermal-perception classification module: according to the workload prediction values obtained by the preceding modules, the physical machines are divided into four categories: boiling point, warm point, cool point and freezing point. Under the energy-saving framework, the Service Level Agreement (SLA) is introduced as an important reference index and used to set the upper and lower bounds of the PM workload threshold. A PM whose current load utilization is above the upper bound is called a boiling point; one whose utilization is below the lower bound (but non-zero) is called a cool point; one whose load utilization is 0 is a freezing point; and one whose load utilization lies between the bounds is a warm point. By placing each PM in its category, the thermal-perception classification module gives a clear view of, and a handle on, the load conditions inside the current data center.
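The four-way classification above can be sketched as a small helper. The concrete 0.2/0.8 bounds below are illustrative placeholders: the patent derives the thresholds from the SLA rather than fixing numeric values.

```python
def classify_host(util, lower=0.2, upper=0.8):
    """Map a predicted load utilization to one of the four thermal states.

    lower/upper are hypothetical SLA-derived workload threshold bounds.
    """
    if util == 0:
        return "freezing"   # idle host, candidate for shutdown
    if util > upper:
        return "boiling"    # overloaded, must shed VMs
    if util < lower:
        return "cool"       # underloaded (non-zero), candidate for consolidation
    return "warm"           # healthy operating range
```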
(4) A virtual machine scheduling module: the purpose of energy saving in cloud computing is to run the whole data center at high utilization while guaranteeing the users' cloud service quality to the greatest extent. As many PMs as possible should therefore operate as warm points, which means converting more boiling-point PMs to warm points and consolidating cool-point or freezing-point PMs as far as possible. The virtual machine scheduling module migrates part of the virtual machines running on boiling points to cool points, and cool points with sufficient capacity can be dynamically consolidated into one.
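A minimal sketch of one scheduling round of module (4). The first-fit placement policy and the 0.2/0.8 thresholds are assumptions for illustration; the patent does not fix a concrete placement heuristic.

```python
def schedule(hosts, lower=0.2, upper=0.8):
    """One scheduling round: move VMs off boiling hosts onto cool hosts,
    then report hosts left empty as candidates for power-off.

    `hosts` maps host id -> list of VM loads (fractions of host capacity).
    """
    util = {h: sum(vms) for h, vms in hosts.items()}
    boiling = [h for h, u in util.items() if u > upper]
    cool = [h for h, u in util.items() if 0 < u < lower]
    migrations = []
    for h in boiling:
        hosts[h].sort(reverse=True)
        while hosts[h] and sum(hosts[h]) > upper:
            vm = hosts[h].pop()  # move the smallest VM first
            target = next((c for c in cool
                           if sum(hosts[c]) + vm <= upper), None)
            if target is None:
                hosts[h].append(vm)  # no room anywhere: give the VM back, stop
                break
            hosts[target].append(vm)
            migrations.append((vm, h, target))
    powered_off = [h for h, vms in hosts.items() if not vms]
    return migrations, powered_off
```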
2. Load prediction
2.1 data center energy efficiency index
The energy efficiency indexes in the cloud data center environment proposed by the invention are richer than usual; Table 1 lists the performance index data that need to be collected. There are three main types of indicators: time, environment and performance.
TABLE 1
The time type is a significant attribute that determines the run time and scheduling time interval of the data center during the test. The environment type defines specific configuration parameters of each host and each virtual machine in the data center and the length of a currently processed cloud task list, and the indexes jointly determine the objective state of the data center in the operation process. The performance indexes mainly reflect the operation conditions of the current data center, including the utilization rate and energy consumption data of the host at each moment, the energy efficiency of the data center at a certain time interval, the proportion of the host meeting or violating the service level agreement of the data center, the number of the closed hosts in the scheduling process, the total number of the virtual machine migrations in the scheduling process and the like.
2.2 scrolling Grey prediction model
Facing traffic bursts, massive network traffic data must be analyzed and processed accurately in real time, which is very challenging. When a traffic burst occurs, boiling points and cool points appear more frequently in the data center and server overload becomes more serious. A data center workload prediction method for handling traffic bursts is presented herein.
The workload of a data center fluctuates and is uncertain, so building a short-term workload prediction model is a small-sample modeling problem. The grey model is better suited than other models to prediction with little data and poor information: it can build a residual state prediction model of the workload from a small amount of data, and is therefore suitable for short-term workload prediction in a data center.
The workload prediction method is proposed on the basis of an improved grey model; the grey prediction method is regarded as a good substitute for time-series prediction models. In the cloud environment, the historical workload values of all PMs in the data center serve as the historical data series, and the grey model describes the modeling of the whole system with a first-order differential equation.
Let x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(n)) be the original workload sequence, where n is the length of the data sequence. Its first-order accumulated generating (1-AGO) sequence is x^(1) = (x^(1)(1), x^(1)(2), …, x^(1)(n)), where x^(1)(k) = Σ_{i=1}^{k} x^(0)(i).
The grey derivative of x^(1) is defined as d(k) = x^(0)(k) = x^(1)(k) − x^(1)(k−1).
Let z^(1) be the background-value sequence generated from x^(1), i.e. z^(1)(k) = α·x^(1)(k) + (1 − α)·x^(1)(k−1), where α is a weighting coefficient, α ∈ (0, 1).
The grey differential equation of the GM(1,1) model is then defined as
d(k) + a·z^(1)(k) = b, i.e. x^(0)(k) + a·z^(1)(k) = b. (1)
In formula (1), x^(0)(k) is called the grey derivative, a the development coefficient, z^(1)(k) the whitened background value, and b the grey contribution.
Substituting k = 2, 3, …, n into equation (1) and introducing matrix-vector notation, the GM(1,1) model can be expressed as Y = B·u, where u = (a, b)^T is the vector of coefficients to be fitted, Y = (x^(0)(2), …, x^(0)(n))^T, and B is the matrix of information provided by the original sequence, whose k-th row is (−z^(1)(k), 1).
Least-squares regression gives the estimate û = (â, b̂)^T = (B^T B)^{−1} B^T Y, so the solution of the corresponding whitening model is x̂^(1)(k+1) = (x^(0)(1) − b̂/â)·e^{−â·k} + b̂/â. The predicted value on the original scale is recovered by the inverse accumulation x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k), which yields the workload prediction for the next moment.
the conventional GM (1,1) model is likely to fall into the predicament of low prediction accuracy due to the lack of data and partial values. After the rolling GM (1,1) model is adopted, the prediction model is reestablished every time the next prediction is carried out, the model can continuously adopt newer data for prediction, and the data which are too old are abandoned. The historical data of the first 100 scheduling periods is used for predicting the result of the 101 th period and is continuously updated, so that the prediction precision can be greatly improved. The value of the translation transformation constant C in the model is dynamically updated according to the historical load of the data center, and the gray model is ensured to be always in an available state in the prediction process. Selection of sequences by iterationThe maximum value Max lambda (k) and the minimum value Min lambda (k) of the medium lambda (k) ensure that Min lambda (k) and Max lambda (k) are both in the compatible spaceAnd (4) the following steps.
3. Virtual machine classification strategy based on heat perception
Traditional virtual machine scheduling strategies are mostly built on heuristic algorithms (genetic algorithms, simulated annealing, particle swarm optimization, etc.). A heuristic algorithm cannot avoid falling into a local optimum: it is essentially a greedy strategy and misses optimal solutions that do not conform to its greedy rule. The thermal-perception-based virtual machine classification scheduling strategy redistributes virtual machines among the physical nodes according to the hosts' cold and hot states, effectively reducing the number of overloaded and idle hosts in the data center and striking a balance between energy consumption and SLA.
The thermal-perception model continuously monitors the thermal state information of the physical nodes of the data center and divides the current physical nodes into four classes by their cold and hot states: boiling point, warm point, cool point and freezing point, with the upper and lower bounds of the PM workload threshold set from the SLA. A PM whose current load utilization is above the upper bound is called a boiling point; one whose utilization is below the lower bound (but non-zero) is a cool point; one whose load utilization is 0 is a freezing point; and one whose load utilization lies between the bounds is a warm point. As many PMs as possible should run as warm points, so more boiling-point PMs should be converted to warm points and cool-point or freezing-point PMs consolidated as far as possible. The virtual machine scheduling module migrates part of the virtual machines running on boiling points to cool points; cool points with sufficient capacity are dynamically consolidated, and freezing-point physical machines can be shut down to save energy.
4. Description of the experiments
For the rolling grey prediction model, the data center must first run for a period of time t to collect a certain amount of historical load data as input samples for the model to learn from. The larger t is, the more accurate the rolling grey prediction becomes; in the experiment, t is taken as 100 sampling periods.
The mean absolute deviation (MAD) and mean absolute percentage error (MAPE) of the experimental tests of the rolling grey prediction model are computed as MAD = (1/n)·Σ_{k=1}^{n} |x^(0)(k) − x̂^(0)(k)| and MAPE = (1/n)·Σ_{k=1}^{n} |x^(0)(k) − x̂^(0)(k)| / x^(0)(k) × 100%, where x^(0)(k) is the actual load utilization and x̂^(0)(k) is the prediction of the rolling grey model.
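Under the standard definitions of MAD and MAPE given above, the two error measures can be computed as:

```python
def mean_absolute_deviation(actual, predicted):
    """MAD: average of |x(0)(k) - x_hat(0)(k)| over the test periods."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mean_absolute_percentage_error(actual, predicted):
    """MAPE: average of |x(0)(k) - x_hat(0)(k)| / x(0)(k), as a percentage.
    Assumes actual values are non-zero."""
    return 100 * sum(abs(a - p) / a
                     for a, p in zip(actual, predicted)) / len(actual)
```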
Load data from 100 sampling periods of the data center are collected as the initial input sequence of the rolling grey prediction model. The rolling algorithm is then run: it fits the data sequence of the first 100 sampling periods and predicts the load of the 101st period; the mean absolute deviation and mean absolute percentage error between the actual and predicted workload of the 101st period are computed and used to adjust the rolling model, and the input data sequence is updated. The workloads of the 102nd, 103rd and subsequent periods are predicted in the same way.
Figs. 3 and 4 show the workload prediction results on two sets of cloud task sequences: the sequence of Fig. 3 contains traffic bursts at hours 1-5 and 22-24, and the sequence of Fig. 4 at hours 10 and 20. The rolling grey prediction model is compared with an autoregressive integrated moving average (ARIMA) model and the actual load values. Both sets of results show that the rolling grey model's prediction of the data center workload is closer to the actual values, and that it reacts more promptly than ARIMA under traffic bursts.
Fig. 5 shows the results after executing the cloud task sequence of Fig. 4: the mean deviation ratio of the rolling grey prediction model in cloud data center load prediction is 6.93%, against 10.35% for the ARIMA model, so the overall prediction of the rolling grey model is better. Fig. 6 is an area chart of the deviation ratios of the two models; the smaller the area enclosed by a polyline and the X axis, the smaller the prediction deviation and the higher the accuracy. As the figure shows, the area under the ARIMA deviation ratio is much larger than that of the rolling grey model, making the advantage of the rolling grey prediction model in cloud data center load prediction even more prominent.
Claims (1)
1. A cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model, characterized in that the energy-saving process of the cloud data center is abstracted into four modules: load prediction, error checking, thermal-perception classification and virtual machine scheduling; the data center workload is predicted, each host is classified by state accordingly, and energy saving is achieved through a virtual machine scheduling algorithm; the load prediction specifically comprises: predicting the workload of the data center with the rolling grey prediction model to obtain the load utilization of every host node of the data center at the next moment; the error checking specifically comprises: checking the load prediction value against the actual workload, determining the deviation of the current prediction, learning from this deviation in the error checking module, and correcting subsequent predictions; the thermal-perception classification specifically comprises: classifying all hosts in the cloud data center according to their current load prediction values, introducing the service level agreement (SLA) as a reference index to set the upper and lower bounds of the host workload threshold; a host is assigned one of four thermal states according to whether its load utilization is above the upper bound, below the lower bound, between the bounds, or equal to 0;
the cloud computing platform resource intelligent scheduling framework is divided into a host layer, a virtual machine layer, a performance evaluation layer, a scheduling layer and a user layer from bottom to top; the scheduling layer and the evaluation layer are the core of the whole energy-saving strategy framework;
the host layer refers to all servers in the cloud data center and comprises all physical host nodes;
the virtual machine layer is established on the basis of a host layer virtualization technology, and a resource pool of the virtual machine layer is formed by virtualizing a plurality of server entities, so that common computing and resource sharing in a cloud environment can be realized;
the performance evaluation layer is used for collecting and evaluating the load utilization, energy consumption, SLA and PUE performance data of the cloud data center; the evaluation layer needs to communicate with the virtual machine layer to acquire the utilization of virtual machine resources and the running state of each virtual machine;
the scheduling layer performs initial virtual machine allocation and virtual machine migration on the hosts on the basis of the load and energy consumption data collected by the performance evaluation layer, and by scheduling the virtual machines ensures that the hosts operate within a healthy load-utilization range;
the user layer refers to all users and service requesters in the cloud computing environment, and comprises individual users, enterprise users and all users of cloud computing; the user layer sends a new service request to the data center all the time;
the energy-saving architecture of the cloud computing data center system is divided into four modules from top to bottom, namely a load prediction module, an error check module, a thermal perception classification module and a virtual machine scheduling module;
(1) a load prediction module: the load prediction module continuously monitors the workload data of each physical machine (PM) in the data center and predicts the CPU utilization of each PM at a future moment by analyzing valid historical load data; the load prediction module can effectively distinguish overloaded and idle servers in the current state of the data center;
(2) an error checking module: after the load prediction module completes a prediction round, the error checking module calculates the deviation between the actual value and the predicted value, and optimizes future prediction results by analyzing the relative error between the two;
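As a concrete illustration of the error-checking step, the relative-error bookkeeping can be sketched as below; the signed-error averaging used in `corrected_prediction` is our assumption, since the text says the module learns from the deviation but does not fix a correction formula:

```python
def signed_relative_error(actual, predicted):
    """Signed relative deviation of the prediction from the measured
    load; assumes a non-zero actual load sample."""
    return (actual - predicted) / actual

def corrected_prediction(predicted, recent_errors):
    """Scale a fresh prediction by the mean signed relative error of
    recent rounds. This averaging scheme is a hypothetical stand-in
    for the module's unspecified correction rule."""
    if not recent_errors:
        return predicted
    bias = sum(recent_errors) / len(recent_errors)
    return predicted * (1.0 + bias)
```

A positive mean error (actual loads above predictions) inflates the next prediction, and vice versa.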
(3) a thermal-aware classification module: according to the workload prediction values produced by the prediction module, the physical machines are divided into four categories, namely boiling point, warming point, cooling point and freezing point; a Service Level Agreement (SLA) is introduced as an important reference index and used to set the upper and lower bounds of the PM workload threshold; a PM whose current load utilization is above the upper threshold is called a boiling point; one whose utilization is below the lower threshold (but not 0) is called a cooling point; one whose load utilization is 0 is a freezing point; and one whose load utilization lies between the upper and lower thresholds is called a warming point; by partitioning the PMs into these classes, the thermal-aware classification module characterizes and regulates the load conditions inside the current data center;
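The four-way classification can be written down directly; the 0.2/0.8 thresholds below are illustrative placeholders for the SLA-derived bounds, which are not pinned to specific numbers here:

```python
def classify_host(util, lower=0.2, upper=0.8):
    """Map a host's predicted load utilisation in [0, 1] to one of the
    four thermal states. The 0.2 / 0.8 thresholds are illustrative
    assumptions, not values fixed by the patent."""
    if util == 0:
        return "freezing point"   # idle host, candidate for power-off
    if util > upper:
        return "boiling point"    # overloaded, SLA at risk
    if util < lower:
        return "cooling point"    # underloaded, consolidation target
    return "warming point"        # healthy operating range
```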
(4) a virtual machine scheduling module; the data center workload prediction method for handling traffic emergencies comprises the following steps:
the workload prediction method is based on an improved grey model; grey prediction is regarded as a good alternative to time-series prediction models; in the cloud environment, the historical workload values of all PMs in the data center are used as the historical data series, and the grey model describes the modelling of the whole system with a first-order differential equation;
let x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(n)) be the original workload sequence, where n is the length of the data sequence; its first-order accumulated generating (1-AGO) sequence is x^(1) = (x^(1)(1), x^(1)(2), …, x^(1)(n)), where x^(1)(k) = Σ_{i=1}^{k} x^(0)(i);
the grey derivative of x^(1) is defined as
d(k) = x^(0)(k) = x^(1)(k) − x^(1)(k−1);
let z^(1) be the weighted adjacent-value sequence generated from x^(1), i.e.
z^(1)(k) = α·x^(1)(k) + (1 − α)·x^(1)(k−1),
where α is the weighting coefficient and α ∈ (0, 1);
the grey differential equation model of GM(1,1) is then defined as
d(k) + a·z^(1)(k) = b, i.e. x^(0)(k) + a·z^(1)(k) = b,   (1)
where x^(0)(k) is called the grey derivative, a the development coefficient, z^(1)(k) the whitening background value, and b the grey action quantity;
substituting k = 2, 3, …, n into equation (1) and introducing matrix-vector notation, the GM(1,1) model can be expressed as Y = B·u, where u = (a, b)^T is the vector of coefficients to be fitted, Y = (x^(0)(2), x^(0)(3), …, x^(0)(n))^T, and B is the matrix of information provided by the original sequence, whose k-th row is (−z^(1)(k), 1);
regression (least-squares) analysis gives the estimates â, b̂, i.e. û = (â, b̂)^T = (B^T·B)^(−1)·B^T·Y, and the solution of the corresponding whitening model dx^(1)/dt + a·x^(1) = b is
x̂^(1)(k+1) = (x^(0)(1) − b̂/â)·e^(−â·k) + b̂/â;
accordingly, the next workload prediction value is obtained by inverse accumulation:
x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k);
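A minimal sketch of the GM(1,1) fit-and-predict cycle described above, using NumPy least squares for the regression step (variable names are ours, and the code assumes the fitted development coefficient â is non-zero):

```python
import numpy as np

def gm11_predict(x0, alpha=0.5):
    """Fit a GM(1,1) grey model to the workload sequence x0 and return
    the predicted next value. alpha is the background-value weighting
    coefficient in z1(k) = alpha*x1(k) + (1 - alpha)*x1(k-1)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                               # 1-AGO sequence
    z1 = alpha * x1[1:] + (1.0 - alpha) * x1[:-1]    # background values
    # least-squares fit of Y = B u with u = (a, b)^T
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    # whitening-equation solution; f(k) estimates x1 at index k+1
    f = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return float(f(n) - f(n - 1))                    # x0-hat(n+1) by IAGO
```

For a near-geometric load series the fit is tight: `gm11_predict([10, 12, 14.4, 17.28])` returns roughly 20.6 against a true geometric continuation of 20.736.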
after the rolling GM (1,1) model is adopted, the prediction model is re-established each time the next prediction is made, so the model continuously uses the newest data for prediction and discards data that is too old; the value of the translation transformation constant C in the model is updated dynamically according to the historical load of the data center, ensuring that the grey model remains usable throughout the prediction process; the maximum value Max λ(k) and the minimum value Min λ(k) of the class ratio λ(k) = x^(0)(k−1)/x^(0)(k) of the sequence are selected by iteration, ensuring that both Min λ(k) and Max λ(k) lie inside the compatible interval (e^(−2/(n+1)), e^(2/(n+1)));
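The compatibility check and translation step can be sketched as follows; the additive step size used in the search is our assumption, as the text only states that C is updated dynamically from the historical load:

```python
import math

def class_ratios(x0):
    """lambda(k) = x0(k-1) / x0(k) for k = 2..n (1-based indices)."""
    return [x0[i - 1] / x0[i] for i in range(1, len(x0))]

def apply_translation(x0):
    """Shift the series by a translation constant C until every class
    ratio lies inside the compatible interval
    (e^(-2/(n+1)), e^(2/(n+1))), so GM(1,1) stays usable.
    Returns (shifted series, C). The step size is hypothetical."""
    n = len(x0)
    lo, hi = math.exp(-2.0 / (n + 1)), math.exp(2.0 / (n + 1))
    c, series = 0.0, list(x0)
    while not all(lo < r < hi for r in class_ratios(series)):
        c += max(abs(v) for v in x0)     # hypothetical step size
        series = [v + c for v in x0]     # adding C pulls ratios toward 1
    return series, c
```

In the rolling scheme, the window of the most recent n samples is shifted by C, the GM(1,1) model is re-fitted on the shifted window each round, and the prediction is shifted back by C.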
a thermal-aware virtual machine classification strategy is adopted:
the thermal-aware model continuously monitors the thermal state information of the data center's physical nodes; the virtual machine scheduling module can migrate some of the virtual machines running on boiling points to cooling points, cooling points with sufficiently large capacity can be dynamically consolidated, and physical machines at the freezing point can be shut down to save energy.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710064154.0A CN106899660B (en) | 2017-01-26 | 2017-01-26 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
PCT/CN2017/113854 WO2018137402A1 (en) | 2017-01-26 | 2017-11-30 | Cloud data centre energy-saving scheduling implementation method based on rolling grey prediction model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710064154.0A CN106899660B (en) | 2017-01-26 | 2017-01-26 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106899660A CN106899660A (en) | 2017-06-27 |
CN106899660B true CN106899660B (en) | 2021-05-14 |
Family
ID=59199276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710064154.0A Active CN106899660B (en) | 2017-01-26 | 2017-01-26 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106899660B (en) |
WO (1) | WO2018137402A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106899660B (en) * | 2017-01-26 | 2021-05-14 | 华南理工大学 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
CN108491248A (en) * | 2018-03-07 | 2018-09-04 | 山东大学 | A kind of triggering method and realization system of the dynamic migration of virtual machine based on prediction |
CN109445903B (en) * | 2018-09-12 | 2022-03-29 | 华南理工大学 | Cloud computing energy-saving scheduling implementation method based on QoS feature discovery |
CN109445906B (en) * | 2018-10-11 | 2021-07-23 | 北京理工大学 | Method for predicting quantity of virtual machine demands |
CN109324953B (en) * | 2018-10-11 | 2020-08-04 | 北京理工大学 | Virtual machine energy consumption prediction method |
CN109784504A (en) * | 2018-12-24 | 2019-05-21 | 贵州宇豪科技发展有限公司 | Data center's long-distance intelligent operation management method and system |
CN110275677B (en) | 2019-05-22 | 2022-04-12 | 华为技术有限公司 | Hard disk format conversion method and device and storage equipment |
US11461210B2 (en) | 2019-06-26 | 2022-10-04 | Kyndryl, Inc. | Real-time calculation of data center power usage effectiveness |
CN110806918A (en) * | 2019-09-24 | 2020-02-18 | 梁伟 | Virtual machine operation method and device based on deep learning neural network |
CN111191851B (en) * | 2020-01-03 | 2023-06-23 | 中国科学院信息工程研究所 | Knowledge graph-based data center energy efficiency optimization method |
CN111552553B (en) * | 2020-04-29 | 2023-03-10 | 电子科技大学 | Multi-task rapid scheduling method based on simulated annealing |
CN111752710B (en) * | 2020-06-23 | 2023-01-31 | 中国电力科学研究院有限公司 | Data center PUE dynamic optimization method, system and equipment and readable storage medium |
CN116382863B (en) * | 2023-03-19 | 2023-09-05 | 广州智捷联科技有限公司 | Intelligent energy-saving scheduling method for data center |
CN116404755A (en) * | 2023-04-18 | 2023-07-07 | 内蒙古铖品科技有限公司 | Big data processing system and method based on Internet of things |
CN116846074B (en) * | 2023-07-04 | 2024-03-19 | 深圳市利业机电设备有限公司 | Intelligent electric energy supervision method and system based on big data |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2713307A1 (en) * | 2012-09-28 | 2014-04-02 | Accenture Global Services Limited | Liveness detection |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2570922A1 (en) * | 2011-09-13 | 2013-03-20 | Alcatel Lucent | Method and system for managing an elastic server farm |
CN102833326A (en) * | 2012-08-15 | 2012-12-19 | 广东工业大学 | Grey prediction-based cloud storage load balancing method |
CN103916438B (en) * | 2013-01-06 | 2017-04-12 | 上海计算机软件技术开发中心 | Cloud testing environment scheduling method and system based on load forecast |
CN104765642B (en) * | 2015-03-24 | 2017-11-10 | 长沙理工大学 | Virtual machine deployment method and system based on dynamic prediction model under cloud environment |
CN105607948A (en) * | 2015-12-18 | 2016-05-25 | 国云科技股份有限公司 | Virtual machine migration prediction method based on SLA |
CN106020934A (en) * | 2016-05-24 | 2016-10-12 | 浪潮电子信息产业股份有限公司 | Optimized deploying method based on virtual cluster online migration |
CN106899660B (en) * | 2017-01-26 | 2021-05-14 | 华南理工大学 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
- 2017-01-26 CN CN201710064154.0A patent/CN106899660B/en active Active
- 2017-11-30 WO PCT/CN2017/113854 patent/WO2018137402A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2713307A1 (en) * | 2012-09-28 | 2014-04-02 | Accenture Global Services Limited | Liveness detection |
Non-Patent Citations (1)
Title |
---|
An adaptive power management with policy selection mechanism based on reward-punishment scheme; Fa-Gui Liu et al.; 2013 International Conference on Machine Learning and Cybernetics; 2013-07-17; full text *
Also Published As
Publication number | Publication date |
---|---|
WO2018137402A1 (en) | 2018-08-02 |
CN106899660A (en) | 2017-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106899660B (en) | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model | |
Huo et al. | Will the urbanization process influence the peak of carbon emissions in the building sector? A dynamic scenario simulation | |
WO2021063033A1 (en) | Energy consumption model training method for air conditioner and air conditioning system control method | |
WO2020206705A1 (en) | Cluster node load state prediction-based job scheduling method | |
Yi et al. | Toward efficient compute-intensive job allocation for green data centers: A deep reinforcement learning approach | |
CN103294546B (en) | The online moving method of virtual machine of multi-dimensional resource performance interference aware and system | |
WO2023103349A1 (en) | Load adjustment method, management node, and storage medium | |
CN106951059A (en) | Based on DVS and the cloud data center power-economizing method for improving ant group algorithm | |
CN104407688A (en) | Virtualized cloud platform energy consumption measurement method and system based on tree regression | |
Li et al. | Edge cloud resource expansion and shrinkage based on workload for minimizing the cost | |
CN113039506A (en) | Data center infrastructure optimization method based on causal learning | |
CN109491760A (en) | A kind of high-effect data center's Cloud Server resource autonomous management method and system | |
Li et al. | Dynamic virtual machine consolidation algorithm based on balancing energy consumption and quality of service | |
Zhang et al. | A new energy efficient VM scheduling algorithm for cloud computing based on dynamic programming | |
MirhoseiniNejad et al. | ALTM: Adaptive learning-based thermal model for temperature predictions in data centers | |
Wang et al. | Cloud workload analytics for real-time prediction of user request patterns | |
CN105933138B (en) | Space-time dimension combined cloud service credibility situation assessment and prediction method | |
Chen et al. | Power and thermal-aware virtual machine scheduling optimization in cloud data center | |
WO2021051441A1 (en) | Energy conservation system for hadoop cluster | |
Kanagaraj et al. | Uniform distribution elephant herding optimization (UDEHO) based virtual machine consolidation for energy-efficient cloud data centres | |
CN104636848A (en) | Project energy management contract system based on energy node control technology | |
CN116090893A (en) | Control method and system for comprehensive energy participation auxiliary service of multiple parks | |
Shi et al. | Three-Way Ensemble prediction for workload in the data center | |
CN210776794U (en) | Device for carrying out load analysis and prediction aiming at temperature change | |
Cardoso et al. | An efficient energy-aware mechanism for virtual machine migration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||