WO2018137402A1 - Cloud data centre energy-saving scheduling implementation method based on rolling grey prediction model - Google Patents
- Publication number
- WO2018137402A1 (PCT/CN2017/113854; CN2017113854W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data center
- load
- rolling
- cloud data
- host
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/12—Arrangements for remote connection or disconnection of substations or of equipment thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/142—Network analysis or design using statistical or mathematical methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Definitions
- the invention belongs to the field of cloud computing energy-saving scheduling, and particularly relates to a cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model.
- cloud computing, as an emerging technology in the information and communication technology industry, has gradually entered thousands of households, favored by enterprises and individual users for its high efficiency, low entry threshold and high scalability. With the gradual maturing of cloud computing and the continuous enrichment of user demand, the scale of supporting facilities such as data center servers is also growing rapidly. Large-scale cloud computing data centers with tens of thousands of service nodes have been established around the world, allowing more computing and storage resources to be kept in the cloud, but this has also triggered a series of energy consumption problems. Greenpeace stated in its cloud computing report that by 2020 the energy consumption of the data centers of major IT operators worldwide would approach 1,963 billion kWh, exceeding the current total electricity consumption of Germany, France, Canada and Brazil combined.
- the present invention provides a cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model.
- a cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model, which abstracts the energy-saving process of the cloud data center into four modules: load prediction, error checking, thermal-aware classification, and virtual machine scheduling; the data center workload is predicted, each host is classified by state accordingly, and a virtual machine scheduling algorithm is applied to achieve energy saving.
- the load prediction is specifically: predicting a workload of the data center by using a rolling gray prediction model, and obtaining load utilization of each host node in the data center at the next moment.
- the error check is specifically: performing error check on the load predicted value and the actual workload, determining a deviation value of the current prediction result, and learning based on the error check module to correct the prediction result.
- the thermal-aware classification is specifically: performing thermal-aware classification of all hosts in the cloud data center according to each host's current load prediction value, and introducing the service level agreement as a reference indicator to set the upper and lower bounds of the host workload threshold; hosts are divided into four different thermal states according to whether the load utilization is above the upper threshold, below the lower threshold (but non-zero), between the two thresholds, or equal to 0.
- the virtual machine scheduling is specifically: scheduling virtual machines according to the current thermal state of each host, solving host overload and no-load problems through virtual machine scheduling operations, and keeping the hosts in the data center in a healthy thermal state.
- by implementing the rolling gray prediction model algorithm in the cloud data center environment in modular form, the load information of the data center and the performance data at runtime are monitored intelligently.
- the present invention has the following advantages and technical effects:
- the invention mines and analyzes real-time workload data of the data center, uses a workload prediction algorithm based on the gray prediction model to establish a load prediction and evaluation model, and predicts the load state of each server in the data center at the next moment, avoiding server overload or no-load conditions and responding more effectively to the network traffic bursts that are now widespread.
- by analyzing the relationship between load and energy efficiency, a utilization-based thermal-aware classification model of the hosts is established, and a host classification standard based on the thermal-aware mechanism is proposed to solve the energy-performance problems caused by load imbalance among the hosts in the data center.
- by implementing the rolling gray prediction model algorithm in the cloud data center environment in modular form, the invention intelligently monitors the load information of the data center and runtime performance data, reduces the energy consumption of the data center during operation, especially under traffic bursts, lowers the operation and maintenance costs of the cloud service provider, and at the same time improves the data center's SLA indicators to guarantee the user's cloud service experience.
- Figure 1 is a diagram of a cloud computing intelligent resource scheduling framework.
- Figure 2 shows the energy-saving framework of the cloud data center system.
- FIG. 3 is a graph of load prediction experiment results of cloud data center task sequence 1.
- FIG. 4 is a graph of load prediction experiment results of cloud data center task sequence 2.
- Figure 5 is a comparison of the experimental data of the rolling gray prediction model and the ARIMA model.
- Figure 6 is a comparison of the average deviation ratio of the rolling gray prediction model and the ARIMA model.
- FIG. 1 is an architectural diagram of a cloud computing platform resource intelligent scheduling framework, which is divided into a host layer, a virtual machine layer, a performance evaluation layer, a scheduling layer, and a user layer from bottom to top.
- the scheduling layer and the evaluation layer are the core of the entire energy-saving strategy framework. Each layer will be explained below.
- the host tier refers to all servers in the cloud data center, including all physical host nodes. These hardware devices are the lowest infrastructure in the cloud environment, providing a hardware foundation for energy-efficient scheduling management.
- the virtual machine layer is based on the virtualization technology of the host layer. By virtualizing multiple server entities, the resource pool of the virtual machine layer is formed, which enables common computing and resource sharing in the cloud environment.
- the performance evaluation layer refers to the collection and evaluation of load data, energy consumption, SLA, PUE and other performance data of the cloud data center.
- the evaluation layer needs to communicate with the virtual layer to obtain information about the utilization of virtual machine resources and the running status of each virtual machine.
- the scheduling layer performs virtual machine initial allocation and virtual machine migration operation based on the data collected by the performance evaluation layer, such as load and energy consumption.
- the virtual machines are scheduled to ensure that each host runs at a good load-utilization level.
- the user layer refers to all users and service requesters in the cloud computing environment, including individual users, enterprise users, and all users of cloud computing.
- the user layer will always issue new service requests to the data center.
- FIG. 2 is a framework diagram of an energy-saving architecture of a cloud computing data center system, which is divided into four modules from top to bottom, namely a load prediction module, an error checking module, a thermal sensing classification module, and a virtual machine scheduling module. Each module will be explained below.
- Load prediction module: the data center processes thousands of service requests per second. The load prediction module continuously monitors the workload data of the physical machines in the data center and analyzes effective historical load data to predict the CPU utilization of each PM at a future time. It helps identify which servers are overloaded or unloaded in the current state of the data center.
- Error check module: after the load predictor completes the prediction process, the error check module calculates the deviation between the actual and predicted values, and optimizes future prediction results by analyzing their relative errors.
- Thermal-aware classification module: according to the workload prediction values obtained by the above module, we divide the physical machines into four categories, called boiling point, warm point, cool point, and freezing point.
- SLA: Service Level Agreement
- by partitioning the PMs into category regions, the thermal-aware classification module can well understand and regulate the load situation inside the current data center.
- Virtual machine scheduling module: the purpose of cloud computing energy saving is to let the entire data center operate at a higher utilization rate while maximizing the quality of the user's cloud service. We therefore want as many PMs as possible to operate at the warm point, converting boiling-point PMs into warm points and consolidating cool-point or freezing-point PMs as much as possible.
- the virtual machine scheduling module migrates some of the virtual machines running on boiling-point PMs to cool-point PMs, and cool-point PMs with sufficient capacity are dynamically consolidated into one.
- Table 1 lists the performance indicator data that needs to be obtained. There are three main types of indicators: time indicators, environment indicators, and performance indicators.
- the time type is a significant attribute that determines the running time of the data center and the scheduling time interval during the test.
- the environment type defines the specific configuration parameters of each host and virtual machine in the data center, and the length of the currently processed cloud task list. These indicators together determine the objective state of the data center operation process.
- the performance indicators mainly reflect current data center operation, including the utilization and energy consumption data of each host at various times, the energy efficiency of the data center over a given time interval, the proportions of the workload that meet and violate the service level agreement, the number of hosts shut down during scheduling, the total number of virtual machine migrations that occurred during scheduling, and so on.
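The three indicator groups of Table 1 could be captured as simple records. Every field name below is illustrative, since the patent does not specify identifiers; this is a sketch of how such monitoring data might be structured, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class TimeIndicators:
    run_time_s: float           # data center running time during the test
    schedule_interval_s: float  # scheduling time interval

@dataclass
class EnvironmentIndicators:
    host_configs: list          # per-host configuration parameters
    vm_configs: list            # per-VM configuration parameters
    task_list_length: int       # length of the cloud task list being processed

@dataclass
class PerformanceIndicators:
    host_utilization: dict      # per-host utilization over time
    energy_consumption: dict    # per-host energy data over time
    energy_efficiency: float    # data center efficiency over an interval
    sla_met_ratio: float        # share of requests meeting the SLA
    sla_violation_ratio: float  # share of requests violating the SLA
    hosts_shut_down: int        # hosts powered off during scheduling
    vm_migrations: int          # total VM migrations during scheduling

metrics = PerformanceIndicators({}, {}, 0.0, 0.98, 0.02, 3, 12)
print(metrics.sla_met_ratio)
```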
- the workload changes in the data center have certain volatility and uncertainty.
- establishing a short-term workload prediction model is a small-data modeling problem, and the gray model is better suited than other models to prediction problems with little data and poor information.
- a workload prediction model can therefore be established from only a small amount of data, which suits the construction of short-term prediction models for data center workloads.
- the workload prediction method here is based on an improved gray model; the gray prediction method is considered a good alternative to time-series prediction models.
- the gray model uses a first-order differential equation to describe the whole system. Let x(0) = (x(0)(1), x(0)(2), ..., x(0)(n)) be the original workload sequence, where n is the length of the data sequence. A one-time accumulated generating operation gives x(1)(k) = x(0)(1) + x(0)(2) + ... + x(0)(k). The grey differential equation of the GM(1,1) model is x(0)(k) + a z(1)(k) = b, where x(0)(k) is called the grey derivative, a is the development coefficient, z(1)(k) = (x(1)(k) + x(1)(k-1)) / 2 is the whitening background value, and b is the grey action quantity.
- u = (a, b)^T is the vector of fitting coefficients a and b, estimated by least squares as u = (B^T B)^(-1) B^T Y, where B is the information matrix provided by the original sequence and Y = (x(0)(2), ..., x(0)(n))^T.
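The GM(1,1) construction above can be sketched in Python. This is a minimal illustration of the standard grey model with its usual least-squares parameter estimate, not the patent's own code; the function name `gm11_predict` is ours.

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead GM(1,1) forecast for a positive sequence x0."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # 1-AGO accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])               # whitening background values
    B = np.column_stack([-z1, np.ones(n - 1)])  # information matrix
    Y = x0[1:]                                  # grey derivatives
    # least-squares fit of u = (a, b)^T
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)
    # time-response function of the accumulated series (k is 1-based)
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    # restore the forecast by inverse accumulation
    return x1_hat(n + 1) - x1_hat(n)

# Example: a roughly exponential load series; prints a value near 146
print(gm11_predict([100, 110, 121, 133.1]))
```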
- the traditional GM(1,1) model is likely to fall into low prediction accuracy due to insufficient data and the deficiency of some values.
- the prediction model will be re-established for each next prediction. The model will continue to use newer data for prediction, and the old data will be discarded.
- Using the historical data of the first 100 scheduling periods to predict the results of the 101st cycle and continuously updating them can greatly improve the accuracy of prediction.
- we dynamically update the value of the translation transformation constant C in the model based on the historical load of the data center, to ensure that the gray model remains applicable throughout the prediction process: iteratively select the maximum value Max σ(k) and the minimum value Min σ(k) of the class-ratio sequence σ(k), and ensure that both Min σ(k) and Max σ(k) lie within the admissible coverage interval.
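The availability check behind the translation constant C can be sketched as follows. The class-ratio test against the interval (e^(-2/(n+1)), e^(2/(n+1))) is the standard grey-model coverage condition; the incremental search for C is one possible heuristic, not necessarily the patent's method, and the function name is ours.

```python
import math

def translation_constant(x):
    """Find a shift C so that the class ratios sigma(k) = x(k-1)/x(k)
    of the shifted sequence all fall in the admissible interval."""
    n = len(x)
    lo, hi = math.exp(-2.0 / (n + 1)), math.exp(2.0 / (n + 1))
    step = max(abs(v) for v in x) or 1.0   # heuristic step size
    C = 0.0
    while True:
        shifted = [v + C for v in x]
        ratios = [shifted[k - 1] / shifted[k] for k in range(1, n)]
        if all(lo < r < hi for r in ratios):
            return C
        C += step   # enlarging the shift drives every ratio toward 1

print(translation_constant([5.0, 80.0, 10.0, 60.0]))  # prints 160.0
```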
- the traditional virtual machine scheduling strategy is mostly based on heuristic algorithms (genetic algorithm, simulated annealing, particle swarm optimization, etc.), and the heuristic algorithm can not avoid the problem of falling into the local optimal solution.
- the heuristic algorithm is essentially greedy: candidate solutions that do not satisfy the greedy rule are missed even when they would lead to the optimum.
- the thermal-aware virtual machine classification scheduling policy redistributes virtual machines among physical nodes according to the "hot and cold" state of each host, which can effectively reduce the number of overloaded and no-load hosts in the data center and achieve a balance between energy consumption and SLA.
- the thermal-aware model continuously monitors the thermal-state information of the physical nodes of the data center, divides the current physical nodes into four types according to their "hot and cold" state, namely boiling point, warm point, cool point, and freezing point, and uses the SLA to set the upper and lower bounds of the PM workload threshold.
- when the current load utilization of a PM is higher than the upper threshold, we call it a boiling point; when its utilization is lower than the lower threshold (but non-zero), we call it a cool point; when the load utilization is 0, it is a freezing point; and when the load utilization lies between the upper and lower thresholds, we call it a warm point.
- the goal is to bring boiling-point PMs into the warm range and to consolidate cool-point or freezing-point PMs as much as possible.
- the virtual machine scheduling module migrates some of the virtual machines running on boiling-point PMs to cool-point PMs; cool-point PMs with sufficient capacity are dynamically consolidated into one, and freezing-point physical machines are shut down to save energy.
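The four-state classification and the per-host actions it implies can be sketched as below. The threshold values 0.2 and 0.8 and all function names are illustrative assumptions, not values fixed by the patent.

```python
def classify_host(util, lower=0.2, upper=0.8):
    """Map a predicted load utilization in [0, 1] to a thermal state."""
    if util == 0.0:
        return "freezing"   # idle host: candidate for shutdown
    if util > upper:
        return "boiling"    # overloaded: migrate VMs away
    if util < lower:
        return "cool"       # underloaded: candidate for consolidation
    return "warm"           # healthy target state

def schedule_actions(predicted_utils, lower=0.2, upper=0.8):
    """Derive a per-host action from the thermal classification."""
    actions = {}
    for host, util in predicted_utils.items():
        state = classify_host(util, lower, upper)
        actions[host] = {
            "boiling": "migrate some VMs to cool-point hosts",
            "cool": "consolidate VMs, then power down if emptied",
            "freezing": "shut down to save energy",
            "warm": "no action",
        }[state]
    return actions

print(schedule_actions({"pm1": 0.95, "pm2": 0.1, "pm3": 0.5, "pm4": 0.0}))
```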
- for the rolling gray prediction model, we first let the data center run for a period of time t and obtain a certain amount of historical load data as input samples for the model. The larger t is, the more accurate the prediction results of the rolling gray prediction model.
- the experiment t takes 100 sampling period lengths.
- the algorithm predicts the 101st value from the data sequence of the first 100 sampling periods; the mean absolute deviation and mean absolute percentage error between the actual and predicted workload values at the 101st cycle are then used to adjust the rolling model and update the input data sequence, after which the workloads of the 102nd, 103rd and subsequent cycles are predicted in the same way.
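The rolling evaluation loop described above can be sketched as follows. A window mean serves here as a stand-in one-step predictor (the patent uses the rolling grey model); the error measures are the usual mean absolute deviation (MAD) and mean absolute percentage error (MAPE), and all names are ours.

```python
from collections import deque

def mad_mape(actual, predicted):
    """Mean absolute deviation and mean absolute percentage error."""
    errs = [abs(a - p) for a, p in zip(actual, predicted)]
    mad = sum(errs) / len(errs)
    mape = 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(errs)
    return mad, mape

def rolling_evaluate(history, window=100, predict=None):
    """Slide a fixed-size window over the load history: predict the next
    value, record it, then roll the window forward (oldest data discarded)."""
    predict = predict or (lambda w: sum(w) / len(w))  # stand-in predictor
    win = deque(history[:window], maxlen=window)
    preds, actuals = [], []
    for t in range(window, len(history)):
        preds.append(predict(list(win)))
        actuals.append(history[t])
        win.append(history[t])   # maxlen drops the oldest sample
    return mad_mape(actuals, preds)

# Toy example with a window of 3; prints (20.0, 45.0...)
print(rolling_evaluate([10, 20, 30, 40, 50], window=3))
```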
- Figure 3 and Figure 4 show the experimental results of workload prediction for two cloud task sequences.
- the cloud task sequence of Figure 3 contains traffic bursts at hours 1-5 and hours 22-24, while the cloud task sequence of Figure 4 contains traffic bursts at the 10th and 20th hours.
- in the experiments, the rolling grey prediction model is compared with the autoregressive integrated moving average (ARIMA) model and with the actual load values. Both sets of experimental results show that the rolling gray prediction model tracks the actual data center workload (Actual) closely; compared with the ARIMA model, the rolling gray prediction model gives more timely feedback in the case of traffic bursts.
- ARIMA: autoregressive integrated moving average
- FIG. 5 is an experimental result after running the cloud task sequence shown in FIG. 4.
- the results show that the average deviation ratio of the rolling gray prediction model in the load prediction process of the cloud data center is 6.93%, while that of the ARIMA model is 10.35%; the overall prediction performance of the rolling gray prediction model is better.
- Fig. 6 is a deviation-ratio area chart for the rolling gray prediction model and the ARIMA model after the experiments. The smaller the area enclosed by a curve and the X axis, the smaller the prediction deviation and the higher the precision. As shown, the deviation of the ARIMA model is much larger than that of the rolling gray prediction model, which highlights the advantage of the rolling gray prediction model in cloud data center load forecasting.
Abstract
Disclosed is a cloud data centre energy-saving scheduling implementation method based on a rolling grey prediction model. In the present invention, the cloud data centre energy-saving flow is abstracted into four modules: load prediction, error checking, thermal perception classification and virtual machine scheduling. The working load of the data centre at the next moment is predicted by the load prediction module to obtain the load utilization rate of each host. The thermal perception classification module divides all hosts into thermal states according to their predicted load utilization rates, where hosts in hotter states have higher utilization rates and hosts in cooler states have lower ones. So that most hosts are kept in a relatively mild thermal state, the virtual machine scheduling module performs operations such as migration and consolidation on the virtual machines of each host according to the thermal-state classification result, finally achieving the purposes of guaranteeing the service quality of the data centre and reducing its energy consumption. The present invention overcomes the problem that a traditional grey model suffers from low precision due to the deficiency of some values.
Description
The invention belongs to the field of cloud computing energy-saving scheduling, and particularly relates to a cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model.
As an emerging technology in the information and communication technology industry, cloud computing has gradually entered thousands of households, favored by enterprises and individual users for its high efficiency, low entry threshold and high scalability. With the gradual maturing of cloud computing and the continuous enrichment of user demand, the scale of supporting facilities such as data center servers is also growing rapidly. Large-scale cloud computing data centers with tens of thousands of service nodes have been established around the world, allowing more computing and storage resources to be kept in the cloud, but this has also triggered a series of energy consumption problems. Greenpeace stated in its cloud computing report that by 2020 the energy consumption of the data centers of major IT operators worldwide would approach 1,963 billion kWh, exceeding the current total electricity consumption of Germany, France, Canada and Brazil combined. Meanwhile, the Chinese government work report listed as a main task in 2008 to "intensify energy conservation, emission reduction and environmental protection", and the Ministry of Science and Technology and the Ministry of Industry and Information Technology formulated the "Special Action Plan for Energy Conservation and Emission Reduction Technology 2014-2015" in 2014, to give full play to the leading role of technology in alleviating resource and environmental constraints. Ever-rising energy costs not only increase the burden on enterprises but also increase the carbon emissions affecting climate change worldwide, sounding an alarm on environmental issues.
The rapidly climbing energy consumption of these cluster servers has become an important factor affecting enterprise efficiency and development, and research on cloud computing energy conservation has become a key task of emerging-technology research at home and abroad. Current research starts from different points and still has deficiencies; in particular, workload monitoring and prediction for cloud data centers under traffic bursts remain inadequate, and virtual machine scheduling and consolidation strategies have drawbacks.
Summary of the invention
In order to reduce the power consumption of the data center and guarantee its quality of service more intelligently and effectively under network traffic bursts, the present invention provides a cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model. The present invention is specifically achieved by the following technical solutions.
A cloud data center energy-saving scheduling implementation method based on a rolling gray prediction model, which abstracts the energy-saving process of the cloud data center into four modules: load prediction, error checking, thermal-aware classification, and virtual machine scheduling; the data center workload is predicted, each host is classified by state accordingly, and a virtual machine scheduling algorithm is applied to achieve energy saving.
Further, the load prediction is specifically: using the rolling gray prediction model to predict the workload of the data center and obtain the load utilization of each host node in the data center at the next moment.
Further, the error check is specifically: performing an error check between the load prediction value and the actual workload, determining the deviation of the current prediction result, and learning via the error check module to correct the prediction result.
Further, the thermal-aware classification is specifically: performing thermal-aware classification of all hosts in the cloud data center according to each host's current load prediction value, and introducing the service level agreement as a reference indicator to set the upper and lower bounds of the host workload threshold; hosts are divided into four different thermal states according to whether the load utilization is above the upper threshold, below the lower threshold (but non-zero), between the two thresholds, or equal to 0.
Further, the virtual machine scheduling is specifically: scheduling virtual machines according to the current thermal state of each host, solving host overload and no-load problems through virtual machine scheduling operations, and keeping the hosts in the data center in a healthy thermal state.
Further, by implementing the rolling gray prediction model algorithm in the cloud data center environment in modular form, the load information of the data center and the performance data at runtime are monitored intelligently.
Compared with the prior art, the present invention has the following advantages and technical effects:
The invention mines and analyzes real-time workload data of the data center, uses a workload prediction algorithm based on the gray prediction model to establish a load prediction and evaluation model, and predicts the load state of each server in the data center at the next moment, avoiding server overload or no-load conditions and responding more effectively to the network traffic bursts that are now widespread. By analyzing the relationship between load and energy efficiency, a utilization-based thermal-aware classification model of the hosts is established, and a host classification standard based on the thermal-aware mechanism is proposed to solve the energy-performance problems caused by load imbalance among the hosts in the data center.
By implementing the rolling gray prediction model algorithm in the cloud data center environment in modular form, the invention intelligently monitors the load information of the data center and runtime performance data, reduces the energy consumption of the data center during operation, especially under traffic bursts, lowers the operation and maintenance costs of the cloud service provider, and at the same time improves the data center's SLA indicators to guarantee the user's cloud service experience.
Figure 1 is a diagram of the cloud computing intelligent resource scheduling framework.
Figure 2 is the energy-saving framework diagram of the cloud data center system.
Figure 3 is a graph of the load prediction experiment results for cloud data center task sequence 1.
Figure 4 is a graph of the load prediction experiment results for cloud data center task sequence 2.
Figure 5 is a comparison of the experimental data of the rolling gray prediction model and the ARIMA model.
Figure 6 is a comparison of the average deviation ratio of the rolling gray prediction model and the ARIMA model.
In order to make the technical solutions and advantages of the present invention clearer, a further detailed description is given below with reference to the accompanying drawings; however, the implementation and protection of the present invention are not limited thereto.
1. Strategy framework
1.1 Cloud computing resource intelligent scheduling framework
Figure 1 is an architectural diagram of the cloud computing platform resource intelligent scheduling framework, which is divided from bottom to top into a host layer, a virtual machine layer, a performance evaluation layer, a scheduling layer, and a user layer. The scheduling layer and the evaluation layer are the core of the entire energy-saving strategy framework. Each layer is explained below.
The host layer comprises all servers in the cloud data center, i.e. all physical host nodes. These hardware devices are the lowest-level infrastructure of the cloud environment and provide the hardware foundation for energy-saving scheduling management.
The virtual machine layer is built on virtualization technology over the host layer. By virtualizing multiple physical servers, it forms a resource pool that enables shared computing and resource sharing in the cloud environment.
The performance evaluation layer collects and evaluates performance data of the cloud data center such as load utilization, energy consumption, SLA, and PUE. It communicates with the virtual machine layer to obtain information such as virtual machine resource utilization and the running state of each virtual machine.
Based on the load, energy consumption, and other data collected by the performance evaluation layer, the scheduling layer performs the initial allocation of virtual machines to hosts as well as virtual machine migration operations; by scheduling virtual machines it ensures that the hosts run at a healthy load utilization level.
The user layer comprises all users and service requesters in the cloud computing environment, including individual users, enterprise users, and all other consumers of cloud computing. The user layer continuously issues new service requests to the data center.
1.2 Cloud data center system energy-saving framework
Figure 2 is a framework diagram of the cloud computing data center system's energy-saving architecture, which is divided, from top to bottom, into four modules: a load prediction module, an error checking module, a heat-aware classification module, and a virtual machine scheduling module. Each module is described below.
(1) Load prediction module: the data center handles thousands of service requests every second. The load prediction module continuously monitors the workload data of the physical machines (PMs) in the data center and, by analyzing valid historical load data, predicts the CPU utilization of each PM at future instants. The load prediction module helps to effectively distinguish overloaded and idle servers in the data center's current state.
(2) Error checking module: after the load predictor completes a prediction round, the error checking module computes the deviation between the actual and predicted values and, by analyzing and calculating their relative errors, refines future prediction results.
(3) Heat-aware classification module: according to the workload prediction values produced by the preceding module, the physical machines are divided into four categories, called boiling points, warming points, cooling points, and freezing points. A Service Level Agreement (SLA) is introduced into this energy-saving architecture as an important reference indicator and is used to set the upper and lower bounds of the PM workload threshold. A PM whose current load utilization is above the upper threshold bound is called a boiling point; one whose utilization is below the lower threshold bound (but non-zero) is a cooling point; one whose load utilization is zero is a freezing point; and one whose load utilization lies between the two bounds is a warming point. By partitioning PMs into these categories, the heat-aware classification module gives a clear picture of, and a handle on, the current load situation inside the data center.
(4) Virtual machine scheduling module: the goal of cloud computing energy saving is to keep the entire data center running at high utilization while preserving users' cloud service quality as far as possible. We therefore want as many PMs as possible to operate as warming points: more boiling-point PMs should be converted into warming points, and cooling-point or freezing-point PMs should be consolidated wherever possible. The virtual machine scheduling module migrates some of the virtual machines running on boiling points onto cooling points, and cooling points with sufficient spare capacity are dynamically consolidated together.
2. Load prediction
2.1 Data center energy efficiency indicators
The energy efficiency indicators proposed here for the cloud data center environment are richer than usual; Table 1 lists the performance indicator data that need to be collected. There are three main indicator types: time, environment, and performance.
Table 1
The time type is a significant attribute: it determines the data center's running time during a test and the scheduling interval. The environment type defines the specific configuration parameters of each host and virtual machine in the data center, as well as the length of the cloud task list currently being processed; together, these indicators determine the objective state of the data center during operation. The performance indicators mainly reflect how the data center is currently running, including each host's utilization and energy consumption at each instant, the data center's energy efficiency over a given time interval, the proportions of hosts meeting and violating the service level agreement, the number of hosts shut down during scheduling, and the total number of virtual machine migrations performed during scheduling.
2.2 Rolling grey prediction model
In the face of traffic bursts, massive volumes of network traffic data must be analyzed and processed accurately in real time, which is very challenging. When a traffic burst occurs, boiling points and cooling points appear more frequently in the data center and server overload becomes more severe. This paper proposes a data center workload prediction method that copes with traffic bursts.
Workload variation in a data center exhibits a degree of volatility and uncertainty, and building a short-term workload prediction model is a small-sample modeling problem. The grey model is better suited than other models to prediction problems with little data and poor information: a workload state prediction model can be established from only a small amount of data, making the grey model well suited to building short-term prediction models of data center workloads.
The workload prediction method is based on an improved grey model; grey prediction is regarded as a good alternative to time series prediction models. In the cloud environment, the historical workload values of each PM in the data center serve as the historical data sequence, and the grey model uses a first-order differential equation to describe the modeling of the whole system.
Let x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(n)) be the original workload sequence, where n is the length of the data sequence. Its first-order accumulated generating (1-AGO) sequence is x^(1) = (x^(1)(1), x^(1)(2), …, x^(1)(n)), where

x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k),  k = 1, 2, …, n.
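The accumulation step and the grey derivative can be sketched in a few lines of Python (an illustrative sketch only; the function name `ago` is ours, not the patent's):

```python
def ago(x0):
    """First-order accumulated generating operation (1-AGO):
    x1[k] = x0[0] + x0[1] + ... + x0[k]."""
    x1 = []
    total = 0.0
    for v in x0:
        total += v
        x1.append(total)
    return x1

x0 = [2.0, 3.0, 4.0, 5.0]
x1 = ago(x0)  # [2.0, 5.0, 9.0, 14.0]

# The grey derivative d(k) = x1(k) - x1(k-1) recovers the original series:
d = [x1[k] - x1[k - 1] for k in range(1, len(x1))]  # [3.0, 4.0, 5.0]
```

Accumulation smooths the raw series, which is what lets the grey model fit an exponential trend to noisy, short load histories.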
The grey derivative of x^(1) is defined as

d(k) = x^(0)(k) = x^(1)(k) − x^(1)(k−1).
Let z^(1) be the adjacent-value generating sequence of x^(1), i.e.

z^(1)(k) = αx^(1)(k) + (1−α)x^(1)(k−1),

where α is the weighting coefficient, α ∈ (0, 1).
The grey differential equation model of GM(1,1) is then defined as

d(k) + az^(1)(k) = b, i.e. x^(0)(k) + az^(1)(k) = b.  (1)

In equation (1), x^(0)(k) is called the grey derivative, a the development coefficient, z^(1)(k) the whitened background value, and b the grey action quantity.
Substituting the instants k = 2, 3, …, n into equation (1) and introducing the matrix–vector notation

Y = (x^(0)(2), x^(0)(3), …, x^(0)(n))ᵀ, u = (a, b)ᵀ,

B = the (n−1)×2 matrix whose rows are (−z^(1)(k), 1) for k = 2, 3, …, n,

the GM(1,1) model can be expressed as Y = Bu, where u is the vector of fitting coefficients a and b, and B is the information matrix provided by the original sequence.
The estimates of a and b are obtained by regression (least squares), û = (BᵀB)⁻¹BᵀY, and the corresponding whitening model of the sequence is

dx^(1)/dt + ax^(1) = b.

Solving it gives the predicted value

x̂^(1)(k+1) = (x^(0)(1) − b/a)e^(−ak) + b/a,

and hence, by inverse accumulation, the workload prediction for the next instant:

x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k).
The traditional GM(1,1) model can easily suffer from low prediction accuracy when data are scarce or some values are missing. With the rolling GM(1,1) model, the prediction model is rebuilt before every new prediction: the model keeps using the newest data and discards data that are too old. Using the historical data of the previous 100 scheduling periods to predict the result of the 101st period, and continually updating in this way, greatly improves prediction accuracy. The value of the translation transformation constant C in the model is dynamically updated according to the data center's historical load, ensuring that the grey model remains usable throughout the prediction process. The maximum Maxλ(k) and minimum Minλ(k) of the class-ratio sequence λ(k) are selected iteratively, ensuring that both Minλ(k) and Maxλ(k) lie within the admissible coverage region.
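Putting the equations above together, a rolling GM(1,1) predictor can be sketched as follows. This is a minimal illustration under the standard GM(1,1) formulas, not the patent's implementation; the window size, variable names, and the omission of the translation constant C and class-ratio check are simplifying assumptions, and a must be non-zero (i.e. the series must not be exactly constant):

```python
import numpy as np

def gm11_predict(x0, alpha=0.5):
    """Fit GM(1,1) to the series x0 and predict the next value.

    x0: 1-D sequence of positive load values (the rolling window).
    alpha: background-value weighting coefficient in (0, 1).
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                            # 1-AGO sequence
    z1 = alpha * x1[1:] + (1 - alpha) * x1[:-1]   # whitened background values
    B = np.column_stack([-z1, np.ones(n - 1)])    # information matrix
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # least-squares fit of Y = Bu
    # Whitening-equation solution, then inverse AGO for the next point
    # (assumes a != 0, i.e. the window is not exactly constant).
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_last = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_last

def rolling_gm11(history, window=100):
    """Rolling prediction: refit on only the latest `window` observations,
    so old data are discarded and the model is rebuilt at every step."""
    return gm11_predict(history[-window:])
```

In the rolling scheme, after each actual load value arrives it is appended to `history` and the model is refit, which is what keeps the predictor responsive during traffic bursts.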
3. Heat-aware virtual machine classification strategy
Traditional virtual machine scheduling strategies are mostly built on heuristic algorithms (genetic algorithms, simulated annealing, particle swarm optimization, etc.), which cannot avoid becoming trapped in local optima: a heuristic is essentially a greedy strategy, so optimal solutions that do not conform to the greedy rule are missed. The heat-aware virtual machine classification scheduling strategy redistributes virtual machines among physical nodes according to the "hot or cold" state of each host, which effectively reduces the number of overloaded and idle hosts in the data center and strikes a balance between energy consumption and SLA.
The heat-aware model continuously monitors the thermal state information of the data center's physical nodes and divides them into four types according to their "hot or cold" state: boiling points, warming points, cooling points, and freezing points, using the SLA to set the upper and lower bounds of the PM workload threshold. A PM whose current load utilization is above the upper threshold bound is called a boiling point; one whose utilization is below the lower threshold bound (but non-zero) is a cooling point; one whose load utilization is zero is a freezing point; and one whose load utilization lies between the two bounds is a warming point. As many PMs as possible should operate as warming points, so more boiling-point PMs must be converted into warming points and cooling-point or freezing-point PMs consolidated wherever possible. The virtual machine scheduling module migrates some of the virtual machines running on boiling points onto cooling points; cooling points with sufficient spare capacity are dynamically consolidated together, and physical machines at the freezing point are shut down to save energy.
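The four-way classification follows directly from the threshold rules above; a sketch (illustrative only — the default bounds 0.8 and 0.2 are placeholders, not the patent's SLA-derived values):

```python
def classify_host(utilization, upper=0.8, lower=0.2):
    """Map a PM's load utilization to its heat state.

    upper/lower stand in for the SLA-derived workload threshold bounds.
    """
    if utilization == 0:
        return "freezing"   # idle host, candidate for shutdown
    if utilization > upper:
        return "boiling"    # overloaded, migrate VMs away
    if utilization < lower:
        return "cooling"    # underloaded, consolidation target
    return "warming"        # healthy operating band

states = {u: classify_host(u) for u in (0.0, 0.1, 0.5, 0.9)}
# {0.0: 'freezing', 0.1: 'cooling', 0.5: 'warming', 0.9: 'boiling'}
```

The scheduler would then iterate over boiling hosts, picking migration targets from the cooling set, and shut down freezing hosts.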
4. Experiment description
For the rolling grey prediction model, the data center is first run for a period of time t to obtain a certain amount of historical load data as input samples for the model. The larger t is, the more accurate the rolling grey prediction model's results become; in the experiments here, t is taken as 100 sampling periods.
The mean absolute deviation and mean absolute percentage error used in the experimental tests of the rolling grey prediction model are defined as

MAD = (1/n) Σₖ |x^(0)(k) − x̂^(0)(k)|,

MAPE = (1/n) Σₖ |x^(0)(k) − x̂^(0)(k)| / x^(0)(k) × 100%,

where x^(0)(k) is the actual load utilization and x̂^(0)(k) is the rolling grey prediction model's prediction.
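These two metrics are straightforward to compute from their definitions (a sketch; the function names are ours, and MAPE assumes non-zero actual values):

```python
def mad(actual, predicted):
    """Mean absolute deviation between actual and predicted loads."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values must be non-zero)."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [50.0, 40.0, 60.0]
predicted = [45.0, 44.0, 57.0]
mad(actual, predicted)   # 4.0
mape(actual, predicted)  # ~8.33
```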
Load data from 100 sampling periods of the data center are collected as the initial input sequence of the rolling grey prediction model, after which the rolling algorithm is run: it predicts the load at the 101st period from the data sequence of the first 100 sampling periods, then compares the actual and predicted workloads at the 101st period, computing their mean absolute deviation and mean absolute percentage error to adjust the rolling model and update the input data sequence. The workloads of the 102nd, 103rd, and subsequent periods are predicted in the same way.
Figures 3 and 4 show the experimental workload prediction results for two cloud task sequences. In the sequence of Figure 3, traffic bursts occur during hours 1–5 and 22–24; in the sequence of Figure 4, traffic bursts occur at hour 10 and hour 20. The experiments compare the rolling grey prediction model with the autoregressive integrated moving average (ARIMA) model and with the actual load values. Both sets of results show that the rolling grey prediction model's workload predictions track the actual values (Actual) more closely, and that it responds to traffic bursts more promptly than the ARIMA model.
图5是在运行图4所示云任务序列之后的实验结果,结果显示滚动灰色预测模型的在云数据中心的负载预测过程中的平均偏差比为6.93%,而ARIMA模型的平均偏差比为10.35%,滚动灰色预测模型的整体预测效果更优。图6为滚动灰色预测模型与ARIMA模型实验后的偏差比面积图,若折线与X轴围成的面积越小,则代表其预测偏差越小,精度越高。如图
所示ARIMA模型的偏差比面积要远大于滚动灰色预测模型,更凸显出滚动灰色预测模型在云数据中心负载预测中的优势。
FIG. 5 is an experimental result after running the cloud task sequence shown in FIG. 4. The result shows that the average deviation ratio of the rolling gray prediction model in the load prediction process of the cloud data center is 6.93%, and the average deviation ratio of the ARIMA model is 10.35. %, the overall prediction effect of the rolling gray prediction model is better. Fig. 6 is a deviation ratio area chart after the rolling gray prediction model and the ARIMA model experiment. If the area enclosed by the line and the X axis is smaller, the smaller the prediction deviation is, the higher the precision is. As shown
The deviation of the ARIMA model shown is much larger than that of the rolling gray prediction model, which highlights the advantage of the rolling gray prediction model in cloud data center load forecasting.
Claims (6)
- A cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model, characterized in that the energy-saving process of the cloud data center is abstracted into four modules: load prediction, error checking, heat-aware classification, and virtual machine scheduling; the data center workload is predicted, each host is classified by state accordingly, and energy saving is then achieved through a virtual machine scheduling algorithm.
- The cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model according to claim 1, characterized in that the load prediction specifically comprises: using the rolling grey prediction model to predict the data center's workload and obtain the load utilization of each host node in the data center at the next instant.
- The cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model according to claim 1, characterized in that the error checking specifically comprises: performing an error check between the predicted load values and the actual workload, determining the deviation of the current prediction results, and learning via the error checking module to correct the prediction results.
- The cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model according to claim 1, characterized in that the heat-aware classification specifically comprises: classifying all hosts in the cloud data center by heat state according to each host's current load prediction value, and introducing a service level agreement as the reference indicator for setting the upper and lower bounds of the host workload threshold; hosts are divided into four different heat states according to whether their load utilization is above the upper threshold bound, below the lower threshold bound, between the two bounds, or zero.
- The cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model according to claim 1, characterized in that the virtual machine scheduling specifically comprises: scheduling virtual machines according to each host's current heat state, resolving host overload and idleness through the scheduling operations, and keeping every host in the data center in a healthy heat state.
- The cloud data center energy-saving scheduling implementation method based on a rolling grey prediction model according to any one of claims 1 to 5, characterized in that the rolling grey prediction model algorithm is implemented in modular form in the cloud data center environment to intelligently monitor the data center's load information and runtime performance data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710064154.0A CN106899660B (en) | 2017-01-26 | 2017-01-26 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
CN201710064154.0 | 2017-01-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018137402A1 true WO2018137402A1 (en) | 2018-08-02 |
Family
ID=59199276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/113854 WO2018137402A1 (en) | 2017-01-26 | 2017-11-30 | Cloud data centre energy-saving scheduling implementation method based on rolling grey prediction model |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106899660B (en) |
WO (1) | WO2018137402A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784504A (en) * | 2018-12-24 | 2019-05-21 | 贵州宇豪科技发展有限公司 | Data center's long-distance intelligent operation management method and system |
CN111191851A (en) * | 2020-01-03 | 2020-05-22 | 中国科学院信息工程研究所 | Data center energy efficiency optimization method based on knowledge graph |
CN114048913A (en) * | 2021-11-19 | 2022-02-15 | 江苏科技大学 | Rescue planning method based on particle swarm algorithm and genetic algorithm mixing |
CN114443212A (en) * | 2021-12-22 | 2022-05-06 | 天翼云科技有限公司 | Thermal migration management method, device, equipment and storage medium |
US11461210B2 (en) | 2019-06-26 | 2022-10-04 | Kyndryl, Inc. | Real-time calculation of data center power usage effectiveness |
CN116382863A (en) * | 2023-03-19 | 2023-07-04 | 广州智捷联科技有限公司 | Intelligent energy-saving scheduling method for data center |
CN116404755A (en) * | 2023-04-18 | 2023-07-07 | 内蒙古铖品科技有限公司 | Big data processing system and method based on Internet of things |
CN116846074A (en) * | 2023-07-04 | 2023-10-03 | 深圳市利业机电设备有限公司 | Intelligent electric energy supervision method and system based on big data |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106899660B (en) * | 2017-01-26 | 2021-05-14 | 华南理工大学 | Cloud data center energy-saving scheduling implementation method based on rolling grey prediction model |
CN108491248A (en) * | 2018-03-07 | 2018-09-04 | 山东大学 | A kind of triggering method and realization system of the dynamic migration of virtual machine based on prediction |
CN109445903B (en) * | 2018-09-12 | 2022-03-29 | 华南理工大学 | Cloud computing energy-saving scheduling implementation method based on QoS feature discovery |
CN109445906B (en) * | 2018-10-11 | 2021-07-23 | 北京理工大学 | Method for predicting quantity of virtual machine demands |
CN109324953B (en) * | 2018-10-11 | 2020-08-04 | 北京理工大学 | Virtual machine energy consumption prediction method |
CN110275677B (en) * | 2019-05-22 | 2022-04-12 | 华为技术有限公司 | Hard disk format conversion method and device and storage equipment |
CN110806918A (en) * | 2019-09-24 | 2020-02-18 | 梁伟 | Virtual machine operation method and device based on deep learning neural network |
CN111552553B (en) * | 2020-04-29 | 2023-03-10 | 电子科技大学 | Multi-task rapid scheduling method based on simulated annealing |
CN111752710B (en) * | 2020-06-23 | 2023-01-31 | 中国电力科学研究院有限公司 | Data center PUE dynamic optimization method, system and equipment and readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102833326A (en) * | 2012-08-15 | 2012-12-19 | 广东工业大学 | Grey prediction-based cloud storage load balancing method |
EP2570922A1 (en) * | 2011-09-13 | 2013-03-20 | Alcatel Lucent | Method and system for managing an elastic server farm |
CN103916438A (en) * | 2013-01-06 | 2014-07-09 | 上海计算机软件技术开发中心 | Cloud testing environment scheduling method and system based on load forecast |
CN106020934A (en) * | 2016-05-24 | 2016-10-12 | 浪潮电子信息产业股份有限公司 | Optimized deployment method based on virtual cluster online migration |
CN106899660A (en) * | 2017-01-26 | 2017-06-27 | 华南理工大学 | Cloud data center energy-saving distribution implementation method based on trundle gray forecast model |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2713307B1 (en) * | 2012-09-28 | 2018-05-16 | Accenture Global Services Limited | Liveness detection |
CN104765642B (en) * | 2015-03-24 | 2017-11-10 | 长沙理工大学 | Virtual machine deployment method and system based on dynamic prediction model in cloud environment |
CN105607948A (en) * | 2015-12-18 | 2016-05-25 | 国云科技股份有限公司 | Virtual machine migration prediction method based on SLA |
-
2017
- 2017-01-26 CN CN201710064154.0A patent/CN106899660B/en active Active
- 2017-11-30 WO PCT/CN2017/113854 patent/WO2018137402A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2570922A1 (en) * | 2011-09-13 | 2013-03-20 | Alcatel Lucent | Method and system for managing an elastic server farm |
CN102833326A (en) * | 2012-08-15 | 2012-12-19 | 广东工业大学 | Grey prediction-based cloud storage load balancing method |
CN103916438A (en) * | 2013-01-06 | 2014-07-09 | 上海计算机软件技术开发中心 | Cloud testing environment scheduling method and system based on load forecast |
CN106020934A (en) * | 2016-05-24 | 2016-10-12 | 浪潮电子信息产业股份有限公司 | Optimized deployment method based on virtual cluster online migration |
CN106899660A (en) * | 2017-01-26 | 2017-06-27 | 华南理工大学 | Cloud data center energy-saving distribution implementation method based on trundle gray forecast model |
Non-Patent Citations (1)
Title |
---|
ZHANG, LEI ET AL.: "Multi-step Optimized GM(1,1) Model-based Short Term Resource Load Prediction in Cloud Computing", COMPUTER ENGINEERING AND APPLICATIONS, 31 May 2014 (2014-05-31), ISSN: 1002-8331 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784504A (en) * | 2018-12-24 | 2019-05-21 | 贵州宇豪科技发展有限公司 | Data center's long-distance intelligent operation management method and system |
US11461210B2 (en) | 2019-06-26 | 2022-10-04 | Kyndryl, Inc. | Real-time calculation of data center power usage effectiveness |
CN111191851A (en) * | 2020-01-03 | 2020-05-22 | 中国科学院信息工程研究所 | Data center energy efficiency optimization method based on knowledge graph |
CN111191851B (en) * | 2020-01-03 | 2023-06-23 | 中国科学院信息工程研究所 | Knowledge graph-based data center energy efficiency optimization method |
CN114048913A (en) * | 2021-11-19 | 2022-02-15 | 江苏科技大学 | Rescue planning method based on particle swarm algorithm and genetic algorithm mixing |
CN114048913B (en) * | 2021-11-19 | 2024-05-28 | 江苏科技大学 | Rescue planning method based on mixture of particle swarm algorithm and genetic algorithm |
CN114443212A (en) * | 2021-12-22 | 2022-05-06 | 天翼云科技有限公司 | Thermal migration management method, device, equipment and storage medium |
CN116382863A (en) * | 2023-03-19 | 2023-07-04 | 广州智捷联科技有限公司 | Intelligent energy-saving scheduling method for data center |
CN116382863B (en) * | 2023-03-19 | 2023-09-05 | 广州智捷联科技有限公司 | Intelligent energy-saving scheduling method for data center |
CN116404755A (en) * | 2023-04-18 | 2023-07-07 | 内蒙古铖品科技有限公司 | Big data processing system and method based on Internet of things |
CN116846074A (en) * | 2023-07-04 | 2023-10-03 | 深圳市利业机电设备有限公司 | Intelligent electric energy supervision method and system based on big data |
CN116846074B (en) * | 2023-07-04 | 2024-03-19 | 深圳市利业机电设备有限公司 | Intelligent electric energy supervision method and system based on big data |
Also Published As
Publication number | Publication date |
---|---|
CN106899660A (en) | 2017-06-27 |
CN106899660B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018137402A1 (en) | Cloud data centre energy-saving scheduling implementation method based on rolling grey prediction model | |
Zhu et al. | A three-dimensional virtual resource scheduling method for energy saving in cloud computing | |
Haghshenas et al. | Magnetic: Multi-agent machine learning-based approach for energy efficient dynamic consolidation in data centers | |
Yi et al. | Toward efficient compute-intensive job allocation for green data centers: A deep reinforcement learning approach | |
Yi et al. | Efficient compute-intensive job allocation in data centers via deep reinforcement learning | |
WO2021051441A1 (en) | Energy conservation system for hadoop cluster | |
CN104407688A (en) | Virtualized cloud platform energy consumption measurement method and system based on tree regression | |
CN109491760A (en) | A kind of high-effect data center's Cloud Server resource autonomous management method and system | |
Zhang et al. | An Energy and SLA‐Aware Resource Management Strategy in Cloud Data Centers | |
Li et al. | Dynamic virtual machine consolidation algorithm based on balancing energy consumption and quality of service | |
Zhang et al. | A new energy efficient VM scheduling algorithm for cloud computing based on dynamic programming | |
Shen et al. | Host load prediction with bi-directional long short-term memory in cloud computing | |
Chen et al. | Power and thermal-aware virtual machine scheduling optimization in cloud data center | |
Shao et al. | Energy-aware dynamic resource allocation on hadoop YARN cluster | |
Ma et al. | Virtual machine migration techniques for optimizing energy consumption in cloud data centers | |
Tian et al. | Modeling and analyzing power management policies in server farms using stochastic petri nets | |
Hou et al. | Research on optimization of GWO-BP Model for cloud server load prediction | |
Li et al. | Temperature aware power allocation: An optimization framework and case studies | |
Xiong et al. | Energy-saving optimization of application server clusters based on mixed integer linear programming | |
Ou et al. | Container Power Consumption Prediction Based on GBRT-PL for Edge Servers in Smart City | |
Wilkins et al. | Hybrid Heterogeneous Clusters Can Lower the Energy Consumption of LLM Inference Workloads | |
Ismaeel et al. | Real-time energy-conserving vm-provisioning framework for cloud-data centers | |
CN111083201A (en) | Energy-saving resource allocation method for data-driven manufacturing service in industrial Internet of things | |
Shi et al. | Three-Way Ensemble prediction for workload in the data center | |
Lin et al. | A multi-agent reinforcement learning-based method for server energy efficiency optimization combining DVFS and dynamic fan control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17893515 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 04/12/2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17893515 Country of ref document: EP Kind code of ref document: A1 |