CN109324953B - Virtual machine energy consumption prediction method - Google Patents

Virtual machine energy consumption prediction method

Info

Publication number: CN109324953B
Application number: CN201811185005.0A
Authority: CN (China)
Prior art keywords: hidden layer, layer node, output, training, value
Other languages: Chinese (zh)
Other versions: CN109324953A
Inventors: 邹伟东, 夏元清, 李慧芳, 张金会, 翟弟华, 戴荔, 刘坤
Original and current assignee: Beijing Institute of Technology BIT
Application filed by Beijing Institute of Technology BIT
Priority to CN201811185005.0A (priority and filing date: 2018-10-11)
Publication of CN109324953A, then grant and publication of CN109324953B
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/30 - Monitoring
    • G06F11/3058 - Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G06F11/3062 - Monitoring arrangements where the monitored property is the power consumption
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a virtual machine energy consumption prediction method. An acceleration term is added to the existing incremental extreme learning machine model, feeding the network training error and the compression factor back to the output of the hidden layer so that the prediction result comes closer to the output sample; this reduces the number of redundant hidden layer nodes of the incremental extreme learning machine and accelerates its network convergence. In addition, a compression factor and a progressive solution are introduced into the training process: the output weight is randomly generated, and better hidden layer node parameters, including the input weight, the threshold, the output weight and the network training error, are then calculated from the network training error, the compression factor and the input samples. This optimizes the network structure, improves the stability of the network training process, and effectively reduces the network training error.

Description

Virtual machine energy consumption prediction method
Technical Field
The invention relates to the technical field of cloud computing, in particular to a virtual machine energy consumption prediction method.
Background
With the rapid development of the internet and cloud computing, many cloud data centers, operated by cloud service providers, provide cloud services to the outside in the cloud computing service mode. At present, these cloud data centers consume a large amount of energy every day, and the energy cost has become a problem that cloud service providers cannot ignore. How to save energy and reduce consumption has therefore become a key problem that cloud service providers urgently need to solve. Under the infrastructure-as-a-service (IaaS) cloud service mode, accurately predicting the energy consumption of a Virtual Machine (VM) is of great significance for formulating scheduling strategies and migration and merging strategies for scheduling virtual machines among different Physical Machines (PM); at the same time it can reduce energy consumption, which benefits environmental protection, and it helps in formulating a reasonable pricing strategy that attracts users.
Designing the prediction model and the learning algorithm are the key problems of virtual machine energy consumption prediction research. In the prior art, a prediction model based on the traditional incremental extreme learning machine has many redundant nodes, which reduce the accuracy and efficiency of virtual machine energy consumption prediction, and its hidden layer node parameters are generated randomly, which affects the stability of the incremental extreme learning machine and causes large network training errors. Designing an efficient prediction model is therefore of great significance for virtual machine energy consumption prediction.
Disclosure of Invention
In view of the above, the invention provides a virtual machine energy consumption prediction method that introduces an acceleration term and a progressive solution into the incremental extreme learning machine model and its training process, constructing an incremental extreme learning machine based on the acceleration term and the progressive solution, so as to realize accurate prediction of virtual machine energy consumption and effectively improve prediction accuracy and efficiency.
The virtual machine energy consumption prediction method disclosed by the invention adopts an incremental extreme learning machine based on an acceleration term and a progressive solution to realize virtual machine energy consumption prediction, and specifically comprises the following steps:
step one, adopting historical data of virtual machine energy consumption to construct a training sample set, wherein the output of the sample is a virtual machine energy consumption value of a selected time point, and the input is virtual machine operation parameters of a plurality of time points before the selected time point;
step two, constructing an incremental extreme learning machine model with the acceleration term and the progressive solution introduced, as in expression (1), and training it with the training sample set:

f_Lf(x) = Σ_{i=1}^{Lf} β̂_i (H_i + α_{i-1} e_{i-1})    (1)

where i denotes the ith node in the hidden layer and Lf is the number of hidden layer nodes determined after training; H_i is the output matrix of the ith hidden layer node; α_{i-1} e_{i-1} is the acceleration term added for the ith hidden layer node, where e_{i-1} is the network training error after the (i-1)th hidden layer node was added, i.e. the difference between the output produced by the learning machine when it contains only hidden layer nodes 1 to i-1 and the ideal output given by the samples, and α_{i-1} is the compression factor determined for the ith hidden layer node, obtained by iterative calculation from the network training errors; β̂_i is the evolved value of the output weight determined for the ith hidden layer node, a linear combination of a randomly given output weight of the node and an improved value of that output weight, where the improved value is obtained by iterative calculation from the input weight and the threshold of the hidden layer node obtained in training, and the input weight and the threshold are in turn obtained by iterative calculation from the network training error and the input samples; the input weight a_i and the threshold b_i of the hidden layer node jointly form the progressive solution;
step three, inputting the running parameters of the virtual machine at a plurality of time points before the current time point into the incremental extreme learning machine trained in step two, and predicting the energy consumption value of the virtual machine at the current time point.
Further, the training process of the incremental extreme learning machine based on the acceleration term and the progressive solution comprises the following steps, which are executed every time a training sample is input:
defining the input vector of the current sample as x and the output as y;
step 1, in the network initialization stage of the extreme learning machine, setting the initial value of the hidden layer node number L to 0 and its maximum value to Lmax, and setting the network training error e_L = e_0 = y and the expected error value ε;

step 2, adding a node to the hidden layer: incrementing L by 1 and assigning L to the newly added hidden layer node as its node number, then randomly generating the output weight β_L of the newly added hidden layer node and the related parameters v_L, z_L, which satisfy 0 < v_L < z_L < 1 and v_L + z_L = 1;

step 3, calculating the output feedback matrix of the newly added hidden layer node according to formula (2):

H_L = e_{L-1} (β_L)^{-1}    (2)

where e_{L-1} was determined in the training process of the previous hidden layer node and represents the network training error when the number of hidden layer nodes is L-1;

step 4, calculating the input weight of the newly added hidden layer node according to formula (3):

a_L = H_L · x†    (3)

where x† is the Moore-Penrose generalized inverse of x;

step 5, calculating the threshold of the newly added hidden layer node according to formula (4):

b_L = rmse(H_L - a_L · x)    (4)

where rmse is the root mean square error function;

step 6, calculating the output matrix of the newly added hidden layer node according to formula (5):

H_L = u(a_L · x + b_L)    (5)

where u() may adopt a general excitation function used in neural networks, such as sine() or sig();

step 7, calculating the compression factor α_L of the newly added hidden layer node according to formula (6), obtained iteratively from the network training errors [formula (6) appears only as an image in the original];

step 8, calculating the improved value β*_L of the output weight of the newly added hidden layer node according to formula (7), which introduces the network training error e_{L-1} and the compression factor α_{L-1} [formula (7) appears only as an image in the original];

step 9, calculating the evolved value of the output weight of the newly added hidden layer node according to formula (8):

β̂_L = v_L β_L + z_L β*_L    (8)

step 10, calculating the network training error after the Lth newly added hidden layer node according to formula (9):

e_L = e_{L-1} - β̂_L (H_L + α_{L-1} e_{L-1})    (9)

where α_{L-1} e_{L-1} is the acceleration term;

step 11, judging whether L ≥ Lmax or ||e_L|| ≤ ε is satisfied; if so, the training is completed and the process ends; otherwise, returning to step 2.
Further, the unit of the time point is the day, and the virtual machine operation parameters of a time point are the average values of the virtual machine operation parameters from 0:00 to 24:00 of that day.
Further, the virtual machine operating parameters include: CPU utilization, memory utilization, number of instructions executed per unit time, and number of cache misses per unit time.
Beneficial effects:

The invention overcomes two defects of the existing method, namely the many redundant hidden layer nodes that reduce accuracy and learning efficiency, and the randomly generated hidden layer node parameters that affect the stability of the incremental extreme learning machine. It can meet the requirements of virtual machine energy consumption prediction to a certain extent and provides a new idea and a new way for predicting virtual machine energy consumption more accurately. Specifically:

1. By adding an acceleration term to the existing incremental extreme learning machine model and feeding the network training error and the compression factor back to the output of the hidden layer, the prediction result is brought closer to the output sample, the number of redundant hidden layer nodes of the incremental extreme learning machine is reduced, and its network convergence is accelerated.

2. By introducing a compression factor and a progressive solution into the training process of the existing incremental extreme learning machine model, i.e. randomly generating the output weight and then calculating better hidden layer node parameters, including the input weight, the threshold, the output weight and the network training error, from the network training error, the compression factor and the input samples, the network structure is optimized, the stability of the network training process is improved, and the network training error is effectively reduced.
Drawings
FIG. 1 is a flow chart of an incremental extreme learning machine algorithm based on an acceleration term and a progressive solution.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a virtual machine energy consumption prediction method whose basic idea is as follows: historical data of virtual machine energy consumption is input into an incremental extreme learning machine, which predicts the current energy consumption of the virtual machine. The incremental extreme learning machine itself is improved in two ways. First, an acceleration term is added to the model; it expresses the influence of the compression factor and the network training error on the prediction result, accelerates the convergence of the incremental extreme learning machine, and improves its generalization performance. Second, the random parameters used in the training process of the existing incremental extreme learning machine are optimized: an output feedback matrix and a compression factor are introduced, and the progressive solution of the hidden layer node parameters, including the input weight, the threshold, the evolved value of the output weight and the network training error, is obtained directly from the input samples and the network training error, so that the network output gradually approaches the standard output of the training samples and the stability of the incremental extreme learning machine is improved.
The prediction method comprises constructing, training and applying the incremental extreme learning machine based on the acceleration term and the progressive solution, as shown in FIG. 1. The specific steps are as follows:
the method comprises the following steps: and acquiring historical operating parameters and energy consumption data of the virtual machine to form a training sample.
The method comprises the following steps of constructing a training sample set by using historical operating parameters and energy consumption data of a virtual machine, wherein the input of a sample is the operating parameters of the virtual machine, and comprises the following steps: and outputting the CPU utilization rate, the memory utilization rate, the number of executed instructions in unit time and the number of caches lost in unit time as the energy consumption value of the virtual machine.
Acquiring historical operating parameters and energy consumption data of the virtual machine M days before a predicted time point (the unit of the time point in the embodiment is 'day'), recording the operating parameters and energy consumption values of the virtual machine from 0 hour to 24 hours every day, calculating the average value of the operating parameters and the energy consumption values of the virtual machine at the time point as the data of the time point, recording the data of M time points in total, and forming a historical data set
Figure BDA0001825989250000061
Wherein, yjThe energy consumption value of the virtual machine at the jth time point is obtained; the corresponding j time-th input sample is expressed as xj=[x1j,x2j,x3j,x4j]Wherein x is1jVirtual machine CPU utilization, x, denoted as jth time2jMemory utilization, x, expressed as the jth time3jNumber of executed instructions, x, expressed as jth time4jExpressed as the number of missing caches at the jth time.
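For illustration only (no such code appears in the patent), this sample construction can be sketched in Python as follows; the array layout and the names hourly_records and build_dataset are assumptions invented for the example:

```python
import numpy as np

def build_dataset(hourly_records):
    """Build the historical data set {(x_j, y_j), j = 1..M} from hourly records.

    hourly_records: array of shape (M, 24, 5) holding, for each of M days and
    each of 24 hours, [cpu_util, mem_util, instructions, cache_misses, energy].
    Returns X of shape (M, 4) (the daily-average operating parameters x_j)
    and y of shape (M,) (the daily-average energy consumption values y_j).
    """
    daily = hourly_records.mean(axis=1)   # average over 0:00-24:00 of each day
    X = daily[:, :4]                      # x_j = [x_1j, x_2j, x_3j, x_4j]
    y = daily[:, 4]                       # y_j
    return X, y

# Synthetic stand-in for M = 30 days of monitoring data:
rng = np.random.default_rng(0)
X, y = build_dataset(rng.random((30, 24, 5)))
```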
Step two: constructing the incremental extreme learning machine model based on the acceleration term and the progressive solution.
The incremental extreme learning machine based on the acceleration term and the progressive solution comprises three parts: an input layer, a hidden layer and an output layer. The number of input layer nodes equals the number of elements of the input vector x_j, and the number of output layer nodes equals that of the output vector y_j. The number of hidden layer nodes is L, whose value is obtained from the actual training situation; the maximum possible number of hidden layer nodes is the Lmax of the stopping condition, and the final value may be less than Lmax.
The incremental extreme learning machine model based on the acceleration term and the progressive solution is shown as expression (1):

f_Lf(x) = Σ_{i=1}^{Lf} β̂_i (H_i + α_{i-1} e_{i-1})    (1)

where i denotes the ith node in the hidden layer and Lf is the number of hidden layer nodes determined after training; H_i is the output matrix of the ith hidden layer node; α_{i-1} e_{i-1} is the acceleration term added for the ith hidden layer node, where e_{i-1} is the network training error after the (i-1)th hidden layer node was added, i.e. the difference between the output produced by the learning machine when it contains only hidden layer nodes 1 to i-1 and the ideal output given by the samples, and α_{i-1} is the compression factor determined for the ith hidden layer node, obtained by iterative calculation from the network training errors; β̂_i is the evolved value of the output weight determined for the ith hidden layer node, a linear combination of a randomly given output weight of the node and an improved value of that output weight, where the improved value is obtained by iterative calculation from the input weight, the threshold and the compression factor of the hidden layer node obtained in training, and the input weight and the threshold are in turn obtained by iterative calculation from the network training error and the input samples; the input weight a_i and the threshold b_i of the hidden layer node jointly form the progressive solution.
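Purely as an illustrative sketch (not part of the patent text), expression (1) can be evaluated for a new input once each node's input weight a_i, threshold b_i and evolved output weight β̂_i are known. Because the acceleration term α_{i-1} e_{i-1} is defined on the training error, the sketch assumes that only the activation part Σ β̂_i u(a_i·x + b_i) is evaluated for new samples; the names predict and sig are invented here:

```python
import numpy as np

def sig(t):
    # sigmoid excitation, one common choice for u()
    return 1.0 / (1.0 + np.exp(-t))

def predict(nodes, x, u=sig):
    """Evaluate the trained network on a new input vector x.

    nodes: list of (a, b, beta_hat) tuples, one per hidden layer node, where
    a is the input weight vector, b the threshold and beta_hat the evolved
    output weight of that node.
    """
    return sum(beta_hat * u(a @ x + b) for a, b, beta_hat in nodes)
```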
Step three: training the incremental extreme learning machine based on the acceleration term and the progressive solution.
In the prior-art training process of the incremental extreme learning machine, the input parameters of a newly added hidden layer node, including its input weight and threshold, are first set randomly at the start of training, and the output weight is then obtained from the error between the network training output and the ideal value. The basic idea of the training process of the present learning machine is to train in the opposite order. First, the output parameters of the newly added hidden layer node, comprising the output weight and the linear optimization parameters of the output weight, are randomly generated at the start of training. Then the input weight and the threshold are obtained by iterative calculation from the network training error and the input samples; next, the compression factor of the newly added hidden layer node is calculated iteratively from the network training errors; then the improved value of the output weight is obtained by iterative calculation from the input weight, the threshold and the compression factor; finally, the evolved value of the output weight is obtained by linear calculation from the random output weight, the linear optimization parameters and the improved value. Compared with the prior art, this training process not only obtains a better output weight but also optimized input weights and thresholds, and it reduces the fluctuation of the network training error, thereby improving the generalization performance of the network.
The learning process of the present invention includes the following steps, which are performed once every training sample is input:
Step 3.1, in the network initialization stage of the extreme learning machine, set the initial value of the hidden layer node number L to 0 and its maximum value to Lmax, and set the network training error e_L = e_0 = y and the expected error value ε. Here the input vector of the current sample is defined as x and the output as y;

Step 3.2, add a node to the hidden layer: increment L by 1 and assign L to the newly added hidden layer node as its node number, then randomly generate the output weight β_L of the newly added hidden layer node and the related parameters v_L, z_L, which satisfy 0 < v_L < z_L < 1 and v_L + z_L = 1. This provides the basis for the subsequent calculation of the output feedback matrix and the progressive solution; in the prior art, this step instead randomly generates the input weight and the threshold of the newly added hidden layer node;

Step 3.3, calculate the output feedback matrix of the newly added hidden layer node according to formula (2):

H_L = e_{L-1} (β_L)^{-1}    (2)

where e_{L-1} was determined in the training process of the previous hidden layer node and represents the network training error when the number of hidden layer nodes is L-1; the output feedback matrix is thus calculated from the randomly generated output weight and the network training error of the previous hidden layer node;

Step 3.4, calculate the input weight of the newly added hidden layer node according to formula (3):

a_L = H_L · x†    (3)

where x† is the Moore-Penrose generalized inverse of x;

Step 3.5, calculate the threshold of the newly added hidden layer node according to formula (4):

b_L = rmse(H_L - a_L · x)    (4)

where rmse is the root mean square error function;

As can be seen from formulas (3) and (4), the input weight and the threshold of the invention are obtained by operating on the output feedback matrix and the input samples, so their values change with the network training error and the input samples. They are therefore better than the randomly generated values of the prior art, which reduces the fluctuation range of the network training error and improves the stability of the network;

Step 3.6, calculate the output matrix of the newly added hidden layer node according to formula (5):

H_L = u(a_L · x + b_L)    (5)

where u() may adopt a general excitation function used in neural networks, such as the sine function or the sigmoid function u(t) = 1/(1 + e^(-t));

Step 3.7, calculate the compression factor α_L of the newly added hidden layer node according to formula (6), obtained iteratively from the network training errors [formula (6) appears only as an image in the original];

Step 3.8, calculate the improved value β*_L of the output weight of the newly added hidden layer node according to formula (7); compared with the prior art, the network training error e_{L-1} of the previous hidden layer node and the compression factor α_{L-1} are introduced into this calculation [formula (7) appears only as an image in the original];

Step 3.9, calculate the evolved value of the output weight of the newly added hidden layer node according to formula (8), a linear combination of the random value of the output weight and its improved value:

β̂_L = v_L β_L + z_L β*_L    (8)

Step 3.10, calculate the network training error after the Lth newly added hidden layer node according to formula (9):

e_L = e_{L-1} - β̂_L (H_L + α_{L-1} e_{L-1})    (9)

where α_{L-1} e_{L-1} is the acceleration term;

Step 3.11, judge whether L ≥ Lmax or ||e_L|| ≤ ε is satisfied; if so, the training is completed and the process ends; otherwise, return to step 3.2.
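For illustration only, a minimal Python sketch of steps 3.1 to 3.11 over a whole sample set (vectors of length N) is given below. Formulas (6) and (7) appear only as images in the original, so the forms used here for the compression factor and the improved output weight are assumptions chosen to stay consistent with formula (9); train_aelm and rmse are invented names, and sig is the helper defined earlier:

```python
import numpy as np

def sig(t):
    return 1.0 / (1.0 + np.exp(-t))

def rmse(v):
    # root mean square error function used in formula (4)
    return float(np.sqrt(np.mean(v ** 2)))

def train_aelm(X, y, L_max=100, eps=1e-3, u=sig, rng=None):
    """Sketch of the training loop; X: (N, d) inputs, y: (N,) targets.

    Returns a list of per-node parameters (a, b, beta_hat) usable by predict().
    """
    rng = rng or np.random.default_rng()
    e = y.astype(float).copy()              # step 3.1: e_0 = y
    alpha_prev = 0.0                        # no acceleration before node 1
    nodes = []
    for L in range(1, L_max + 1):           # step 3.2: add one hidden node
        beta_rand = rng.uniform(0.1, 1.0)   # random output weight, kept nonzero
        v = rng.uniform(0.01, 0.49)         # 0 < v_L < z_L < 1, v_L + z_L = 1
        z = 1.0 - v
        H_fb = e / beta_rand                # (2): output feedback matrix
        a = np.linalg.pinv(X) @ H_fb        # (3): a_L = H_L · x† (Moore-Penrose)
        b = rmse(H_fb - X @ a)              # (4): threshold
        H = u(X @ a + b)                    # (5): output matrix of the node
        alpha_new = (e @ H) / (e @ e + 1e-12)   # assumed form of (6)
        g = H + alpha_prev * e              # hidden output plus acceleration term
        # assumed form of (7): least-squares weight minimizing ||e_L|| in (9)
        beta_improved = (e @ g) / (g @ g + 1e-12)
        beta_hat = v * beta_rand + z * beta_improved  # (8): evolved output weight
        e = e - beta_hat * g                # (9): updated network training error
        nodes.append((a, b, beta_hat))
        alpha_prev = alpha_new
        if np.linalg.norm(e) <= eps:        # step 3.11 stopping test
            break
    return nodes
```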
The sample set is divided into two parts, one for training and one for testing; after the training of the learning machine is completed, it is tested with the test samples.
Step four: predicting the energy consumption value of the virtual machine with the incremental extreme learning machine based on the acceleration term and the progressive solution.
The incremental extreme learning machine based on the acceleration term and the progressive solution obtained by the above training can predict the energy consumption value of a virtual machine: the CPU utilization, memory utilization, number of instructions executed per unit time and number of cache misses per unit time (the cache is a high-speed buffer memory) of the virtual machine at the current time point are input into the trained incremental extreme learning machine, which predicts the energy consumption value of the virtual machine at the current time point.
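Continuing the illustrative sketches above (same invented helpers, made-up and normalized parameter values), a hypothetical end-to-end use might look as follows:

```python
# Train on the historical daily averages, then predict for a new day's
# (normalized) operating parameters [cpu_util, mem_util, instructions, misses].
nodes = train_aelm(X, y, L_max=100, eps=1e-2)
x_now = np.array([0.62, 0.48, 0.33, 0.21])   # made-up normalized parameters
print(f"predicted VM energy consumption: {predict(nodes, x_now):.4f}")
```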
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A virtual machine energy consumption prediction method is characterized in that an incremental extreme learning machine based on an acceleration term and a progressive solution is adopted to realize virtual machine energy consumption prediction, and the method specifically comprises the following steps:
step one, adopting historical data of virtual machine energy consumption to construct a training sample set, wherein the output of the sample is a virtual machine energy consumption value of a selected time point, and the input is virtual machine operation parameters of a plurality of time points before the selected time point;
step two, constructing an incremental extreme learning machine model with the acceleration term and the progressive solution introduced, as in expression (1), and training it with the training sample set:

f_Lf(x) = Σ_{i=1}^{Lf} β̂_i (H_i + α_{i-1} e_{i-1})    (1)

where i denotes the ith node in the hidden layer and Lf is the number of hidden layer nodes determined after training; H_i is the output matrix of the ith hidden layer node; α_{i-1} e_{i-1} is the acceleration term added for the ith hidden layer node, where e_{i-1} is the network training error after the (i-1)th hidden layer node was added, i.e. the difference between the output produced by the learning machine when it contains only hidden layer nodes 1 to i-1 and the ideal output given by the samples, and α_{i-1} is the compression factor determined for the ith hidden layer node, calculated from the network training error e_{i-1}; β̂_i is the evolved value of the output weight determined for the ith hidden layer node, a linear combination of a randomly given output weight of the node and an improved value of that output weight, where the improved value is obtained by iterative calculation from the input weight and the threshold of the hidden layer node obtained in training, and the input weight and the threshold are in turn obtained by iterative calculation from the network training error and the input samples; the input weight a_i and the threshold b_i of the hidden layer node jointly form the progressive solution;

step three, inputting the running parameters of the virtual machine at a plurality of time points before the current time point into the incremental extreme learning machine trained in step two, and predicting the energy consumption value of the virtual machine at the current time point.
2. The method according to claim 1, wherein the training process of the incremental extreme learning machine based on the acceleration term and the progressive solution comprises the following steps, which are performed every time a training sample is input:

defining the input vector of the current sample as x and the output as y;

step 1, in the network initialization stage of the extreme learning machine, setting the initial value of the hidden layer node number L to 0 and its maximum value to Lmax, and setting the network training error e_L = e_0 = y and the expected error value ε;

step 2, adding a node to the hidden layer: incrementing L by 1, assigning L to the newly added hidden layer node as its node number, and randomly generating the output weight β_L of the newly added hidden layer node and the related parameters v_L, z_L, which satisfy 0 < v_L < z_L < 1 and v_L + z_L = 1;

step 3, calculating the output feedback matrix of the newly added hidden layer node according to formula (2):

H_L = e_{L-1} (β_L)^{-1}    (2)

where e_{L-1} was determined in the training process of the previous hidden layer node and represents the network training error when the number of hidden layer nodes is L-1;

step 4, calculating the input weight of the newly added hidden layer node according to formula (3):

a_L = H_L · x†    (3)

where x† is the Moore-Penrose generalized inverse of x;

step 5, calculating the threshold of the newly added hidden layer node according to formula (4):

b_L = rmse(H_L - a_L · x)    (4)

where rmse is the root mean square error function;

step 6, calculating the output matrix of the newly added hidden layer node according to formula (5):

H_L = u(a_L · x + b_L)    (5)

where u() may adopt a general excitation function used in neural networks, such as sine() or sig();

step 7, calculating the compression factor α_L of the newly added hidden layer node according to formula (6), obtained iteratively from the network training errors [formula (6) appears only as an image in the original];

step 8, calculating the improved value β*_L of the output weight of the newly added hidden layer node according to formula (7), which introduces the network training error e_{L-1} and the compression factor α_{L-1} [formula (7) appears only as an image in the original];

step 9, calculating the evolved value of the output weight of the newly added hidden layer node according to formula (8):

β̂_L = v_L β_L + z_L β*_L    (8)

step 10, calculating the network training error after the Lth newly added hidden layer node according to formula (9):

e_L = e_{L-1} - β̂_L (H_L + α_{L-1} e_{L-1})    (9)

where α_{L-1} e_{L-1} is the acceleration term;

step 11, judging whether L ≥ Lmax or ||e_L|| ≤ ε is satisfied; if so, the training is completed and the process ends; otherwise, returning to step 2.
3. The method according to claim 1, wherein the unit of the time point is the day, and the virtual machine operation parameters of a time point are the average values of the virtual machine operation parameters from 0:00 to 24:00 of that day.

4. The method of claim 1, wherein the virtual machine operating parameters comprise: CPU utilization, memory utilization, number of instructions executed per unit time, and number of cache misses per unit time.
CN201811185005.0A 2018-10-11 2018-10-11 Virtual machine energy consumption prediction method Active CN109324953B (en)

Priority Applications (1)

Application Number: CN201811185005.0A
Priority Date / Filing Date: 2018-10-11 / 2018-10-11
Title: Virtual machine energy consumption prediction method


Publications (2)

Publication Number Publication Date
CN109324953A CN109324953A (en) 2019-02-12
CN109324953B (en) 2020-08-04

Family

ID=65261277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811185005.0A Active CN109324953B (en) 2018-10-11 2018-10-11 Virtual machine energy consumption prediction method

Country Status (1)

Country Link
CN (1) CN109324953B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781068B * 2019-11-07 2022-11-08 Nanjing University of Posts and Telecommunications Data center cross-layer energy consumption prediction method based on isomorphic decomposition method
CN112948115B * 2021-03-01 2022-12-06 Beijing Institute of Technology Cloud workflow scheduler pressure prediction method based on extreme learning machine

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105743699A (en) * 2016-01-27 2016-07-06 中国航空工业集团公司沈阳飞机设计研究所 Fault early warning method and system for virtual environment
CN106899660A (en) * 2017-01-26 2017-06-27 华南理工大学 Cloud data center energy-saving distribution implementation method based on trundle gray forecast model
CN107315642A (en) * 2017-06-22 2017-11-03 河南科技大学 A kind of least energy consumption computational methods in green cloud service offer
CN107911255A (en) * 2017-12-28 2018-04-13 李淑芹 A kind of power grid energy consumption processing unit based on cloud computing system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567661B * 2010-12-31 2014-03-26 Beijing Qihoo Technology Co., Ltd. Program recognition method and device based on machine learning
US9189619B2 (en) * 2012-11-13 2015-11-17 International Business Machines Corporation Runtime based application security and regulatory compliance in cloud environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on virtual machine scheduling method based on quadratic exponential smoothing prediction; Wang Bin et al.; Application Research of Computers; 2017-03-31; Vol. 34, No. 3; pp. 723-726 *

Also Published As

Publication number Publication date
CN109324953A (en) 2019-02-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant