CN103164742A - Server performance prediction method based on particle swarm optimization neural network


Info

Publication number: CN103164742A (application CN201310113116.1; granted publication CN103164742B)
Authority: CN (China)
Prior art keywords: particle, neural network, population, iteration, sample
Legal status: Granted; Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN103164742B (en)
Inventors: 程春玲, 李阳, 张登银, 张怡婷, 万腾
Applicant and current assignee: Nanjing University of Posts and Telecommunications (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Priority and filing date: 2013-04-02
Publication dates: 2013-06-19 (CN103164742A); 2016-02-17 (CN103164742B)

Abstract

Abstract

The invention discloses a server performance prediction method based on a particle swarm optimization neural network, and belongs to the technical field of computer performance management. The performance of servers in cloud computing is predicted with an improved Elman neural network. First, the number of input-layer nodes of the Elman neural network is determined according to the correlation of the sample data; second, the Elman neural network is trained with a PSO (particle swarm optimization) algorithm based on the particle swarm distribution. This PSO algorithm introduces the concept of particle aggregation degree: when the aggregation degree is high, the swarm is scattered, which preserves the diversity of the swarm and improves the optimizing ability of the algorithm. The prediction model maintains good accuracy in both short-term and long-term prediction, and the training speed of the neural network is increased.

Description

A server performance prediction method based on a particle swarm optimization neural network
Technical field
The present invention relates to a server performance prediction method, and in particular to a server performance prediction method based on a particle swarm optimization neural network that is suitable for cloud computing, belonging to the technical field of computing performance management.
Background art
As cloud platforms grow ever larger, improving the resource utilization of servers in a cloud environment has become a major issue of cloud management. To adjust the allocation and release of resources in time during resource scheduling while avoiding excessively frequent scheduling, the performance of the servers in the cloud platform must be predicted. Performance prediction has to satisfy two requirements. On the one hand, the prediction accuracy must be high; otherwise resource scheduling of the cloud platform is significantly affected and the platform may even be paralysed. On the other hand, because the software systems in the cloud platform share resources with one another, the patterns and scale with which a server system uses resources are uncertain and uncontrollable, and server performance in the cloud platform often jitters over short periods; to avoid excessively frequent resource scheduling, the resource prediction model must therefore also perform well in long-term prediction.
At present there is relatively little research on server performance prediction for cloud platforms. Huang et al. proposed a prediction method based on double exponential smoothing, which optimizes the exponential smoothing prediction algorithm by improving the traditional least-squares algorithm and can effectively improve the efficiency of exponential smoothing prediction. Ishak et al. proposed a moving-average prediction model to forecast the development trend of server performance. Classical prediction models require the sequence to show an obvious linear feature, with the historical data varying within a small range around a straight line, and they require a large amount of historical data; otherwise the accuracy of the predicted values is low. Because server performance in the cloud platform varies nonlinearly and randomly, traditional prediction methods often cannot capture its fluctuations well. Heath et al. therefore proposed a prediction method based on the M/M/1 queuing model, which mainly predicts the server load through the queuing model; however, when the server load increases rapidly, the predicted values deviate considerably from the actual values. Shi et al. proposed a Markov-chain-based prediction method, which uses statistics to compute the transition probability of every state and predicts the state at the next moment from the state at the previous moment, allowing fast prediction; however, the Markov chain method needs a large amount of resource performance data to be collected in advance, and building the transition matrix is difficult. Zhang et al. proposed a prediction model based on Kalman filtering, which adopts a state-space model of signal and noise and, using the minimum mean-square error as the criterion, updates the estimate of the state variable from the estimate at the previous moment and the current observation to obtain the current estimate; it is suitable for real-time processing and computer computation. Mao et al. proposed a prediction model based on grey prediction, which processes the raw data to find the rules of system change, generates a data sequence with strong regularity, and then builds a corresponding differential equation model to predict the future development trend; this model does not need to consider the distribution of the data, requires little historical data, is simple to implement and has relatively high short-term accuracy, but its long-term prediction is inaccurate. Because these prediction models cannot approximate nonlinear trends well, their accuracy is good in short-term prediction but insufficient in long-term prediction.
An artificial neural network (Artificial Neural Networks, abbreviated ANNs), also called a neural network (NNs) or a connection model (Connection Model), is an algorithmic mathematical model that imitates the behavioural features of animal neural networks and performs distributed parallel information processing. Depending on the complexity of the system, such a network processes information by adjusting the interconnections among a large number of internal nodes. With the development of the technology, neural networks have been widely applied in numerous fields such as signal processing, pattern recognition and optimal control.
The learning (training) process of a neural network is an iterative process, that is, a process in which the connection weights and thresholds of the network are continually revised. The particle swarm optimization algorithm (Particle Swarm Optimization, abbreviated PSO) has a good global optimizing ability, can also obtain a strong local optimizing ability through parameter adjustment, requires few parameters and is simple to program, so it is used more and more to optimize neural networks. The basic principle of optimizing a neural network with a particle swarm is that each individual in the swarm encodes all connection weights and thresholds of the neural network (the particle coding); each individual computes its fitness value through a fitness function; the swarm iterates on the principle of best fitness to seek a new optimal particle; the connection weights and thresholds contained in the optimal particle are the optimal solution and are assigned to the neural network, which then performs pattern recognition, prediction and similar work. The basic steps of building a particle-swarm-optimized neural network model are as follows (a minimal code sketch of these steps is given after the list):
1) select a suitable neural network structure, including the number of neurons in each layer, thereby determining the number of connection weights and thresholds in the neural network;
2) encode the connection weights and thresholds of the neural network into particles;
3) initialize the particle swarm;
4) compute the fitness value of each particle through the fitness function, and determine the individual extrema and the swarm extremum;
5) update the position and velocity of each particle and compute the fitness of the new particles, until the iteration termination condition is satisfied, thereby obtaining the optimal particle;
6) assign the connection weights and thresholds contained in the optimal particle to the neural network.
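For illustration only, the following minimal Python sketch shows a generic particle swarm optimizer in the shape of steps 3) to 6); the function name pso_optimize, its parameters and the fitness callback are illustrative assumptions, not part of the patent.

```python
import numpy as np

def pso_optimize(fitness, dim, n_particles=20, max_iter=100,
                 w=0.5, c1=2.0, c2=2.0, bounds=(-1.0, 1.0), seed=0):
    """Minimize `fitness` over `dim`-dimensional particle vectors (hypothetical helper)."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # step 3): initialize the swarm
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                    # individual extrema
    pbest_fit = np.array([fitness(p) for p in pos])       # step 4): fitness of each particle
    gbest = pbest[pbest_fit.argmin()].copy()              # swarm (global) extremum
    for _ in range(max_iter):                             # step 5): iterate until termination
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                  # keep particles inside the solution domain
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest                                          # step 6): weights/thresholds for the network
```

The returned vector would then be decoded back into the connection weights and thresholds of the network.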
Because the particle-swarm-optimized neural network has advantages such as low algorithmic complexity and a relatively short training time, it is natural to consider using a particle-swarm-optimized neural network model for server performance prediction. However, the optimizing ability of the standard particle swarm algorithm mainly depends on the interaction between particles: in every iteration the particles of the swarm keep approaching the optimal particle, more and more particles gather in groups, lose their own velocity and become increasingly sluggish, so that the globally optimal solution becomes hard to find. This situation severely affects the accuracy of prediction.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiency of the prior art and provide a server performance prediction method based on a particle swarm optimization neural network, which uses an improved particle-swarm-optimized neural network to predict server performance and improves the accuracy and real-time performance of the prediction model.
The present invention solves the above technical problem with the following technical solution:
A server performance prediction method based on a particle swarm optimization neural network first chooses historical data of the server performance parameter to be predicted and pre-processes it to obtain training samples; the training samples are then used to train a neural network; finally, the trained neural network is used to predict the server performance parameter at a future moment. When the neural network is trained, a particle swarm algorithm is used to optimize the connection weights and thresholds of the neural network, specifically comprising the following steps:
Step 1: encode the connection weights and thresholds to be optimized into particles, and determine the swarm size, the positions and velocities of the particles, the learning factors and the inertia weight, where the dimension of each particle is the total number of connection weights and thresholds to be optimized;
Step 2: initialize the swarm;
Step 3: iterative update:
Step 301: calculate the fitness of every particle in the current swarm;
Step 302: update the individual extremum of each particle and the global extremum of the swarm;
Step 303: calculate the aggregation degree of the current swarm; if the aggregation degree is greater than a preset aggregation threshold, keep the globally best particle and use n-1 randomly generated displacement increments, each of magnitude at most MaxSet/2, to update the positions of the other n-1 particles, i.e. all particles except the globally best one; here n is the total number of particles in the swarm and MaxSet is the modulus of the maximum solution set. The aggregation degree R_k of the swarm at the current, k-th, iteration is calculated as

R_k = Nearnum_k / n,

where Nearnum_k is the number of particles in the current swarm whose distance from the globally best particle is less than r_k, and r_k is the neighbourhood radius of the globally best particle, calculated as

r_k = (1/2) × (1 - k / Maxiter) × MaxSet,

where Maxiter is the maximum number of iterations, MaxSet is the modulus of the maximum solution set, and k is the index of the current iteration;
Step 304: update the velocity and position of each particle;
Step 305: check whether any particle in the swarm has crossed the boundary; if a particle has crossed the boundary, adjust it onto the boundary of the solution domain;
Step 306: check whether the algorithm termination condition is satisfied; if so, the algorithm ends and goes to step 4; otherwise return to step 303 and continue iterating;
Step 4: decode the global extremum of the last iteration to obtain the optimized connection weights and thresholds of the neural network.
In a preferred embodiment of the present invention, the neural network is an Elman neural network.
Further, the number of input-layer nodes of the Elman neural network is determined as follows:
Let the total number of samples in the training set be P, and let each sample be a time series X_t = {x_1, x_2, ..., x_t} of t consecutive historical data points. For each sample, compute the degree of correlation ρ_k between x_t and every other historical data point in the sample, k = 1, 2, ..., t-1:

ρ_k = Cov(x_t, x_{t-k}) / √( Var(x_t) · Var(x_{t-k}) ),

where ρ_k is the degree of correlation between x_t and the historical data point x_{t-k} that lies k sampling periods before it; Var(x_t) = E[(x_t - μ)²] measures the dispersion of x_t around the mean μ of the time series; and Cov(x_t, x_{t-k}) = E[(x_t - μ)(x_{t-k} - μ)] is the covariance of x_t and x_{t-k}. Starting from x_{t-1} and moving towards earlier data points, count the number of consecutive historical data points whose degree of correlation is greater than 0; for the i-th sample this count is m_i.

The number m of input-layer nodes of the Elman neural network is obtained as

m = (1/P) Σ_{i=1}^{P} m_i.
Compared with the prior art, the present invention has the following beneficial effects:
The present invention uses a particle-swarm-optimized neural network for server performance prediction and improves the existing particle swarm algorithm: when the swarm becomes too dense, the swarm distribution is dynamically adjusted, which avoids premature convergence to a locally optimal solution, improves convergence, and thereby improves the accuracy of server performance prediction.
Description of drawings
Fig. 1 is a structural schematic diagram of the Elman neural network;
Fig. 2 is an example of particle coding;
Fig. 3 shows the construction process of the PSO-Elman prediction model of the present invention.
Embodiment
The technical solution of the present invention is elaborated below with reference to the accompanying drawings:
The present invention addresses the defect that, in existing particle-swarm-optimized neural networks, the iterative update process of the swarm is easily trapped in a local optimum. The particle swarm optimization algorithm is improved by a particle adjustment method based on the swarm distribution. The main idea is that, in every iteration, when the swarm distribution becomes relatively dense, a random position increment is added to scatter the particles so that the swarm can escape the locally optimal solution.
To help the public understand the technical solution of the present invention, the PSO-Elman neural network prediction model is described in detail below as an example.
As shown in Fig. 1, the Elman neural network has four layers: an input layer, a hidden layer, a structural (context) layer and an output layer. The Elman network feeds the output of the hidden layer of the feed-forward model back to the structural layer, whose number of units equals that of the hidden layer. The structural layer acts as a time-delay operator of the network: it lets the hidden-layer units memorize their previous output values, which makes the network more sensitive to historical data. Suppose the input layer of the network has m nodes, the hidden layer and the structural layer each have n nodes, and the output layer y(k) has r nodes; the input of the network is then an m-dimensional vector, the hidden and structural layers are n-dimensional vectors and the output is an r-dimensional vector. The input-to-hidden connection weight matrix is m × n, the structural-to-hidden weight matrix is n × n and the hidden-to-output weight matrix is n × r. The mathematical model of the Elman network is

x(k) = f(w1 x_c(k) + w2 u(k-1))      (1)
x_c(k) = x(k-1) + α x_c(k-1)         (2)
y(k) = g(w3 x(k))                    (3)

where w1, w2 and w3 are the connection weight matrices from the structural layer to the hidden layer, from the input layer to the hidden layer, and from the hidden layer to the output layer, respectively; u(k-1) is the input vector of the input layer, x(k) is the output vector of the hidden layer, x_c(k) is the input vector of the structural layer, and y(k) is the output vector of the output layer; f(·) and g(·) are the activation functions of the hidden-layer and output-layer units, usually the sigmoid function f(z) = 1/(1 + e^(-z)); α is the self-feedback factor of the structural layer x_c(k).
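As a purely illustrative sketch of equations (1) to (3), the Python function below runs the Elman recurrence over an input sequence; the function name elman_forward, the column-vector weight shapes and the use of the sigmoid for both f and g are assumptions made for the example, not requirements of the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elman_forward(u_seq, w_in, w_ctx, w_out, alpha):
    """Run equations (1)-(3) over a sequence of inputs.
    Shapes (column-vector convention): w_in is (n, m), w_ctx is (n, n), w_out is (r, n)."""
    n = w_ctx.shape[0]
    x_prev = np.zeros(n)              # hidden output of the previous step, x(k-1)
    x_ctx = np.zeros(n)               # structural-layer state, x_c(k-1)
    outputs = []
    for u in u_seq:                   # u is the m-dimensional input u(k-1)
        x_ctx = x_prev + alpha * x_ctx            # equation (2)
        x = sigmoid(w_ctx @ x_ctx + w_in @ u)     # equation (1)
        y = sigmoid(w_out @ x)                    # equation (3)
        outputs.append(y)
        x_prev = x
    return np.array(outputs)
```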
The number of nodes (neurons) in each layer of the Elman neural network can be determined empirically or by various existing methods. To improve prediction accuracy, the present invention uses the correlation of the samples to determine the number of input-layer nodes, as follows:
Let the total number of samples in the training set be P, and let each sample be a time series X_t = {x_1, x_2, ..., x_t} of t consecutive historical data points. For each sample, compute the degree of correlation ρ_k between x_t and every other historical data point in the sample, k = 1, 2, ..., t-1:

ρ_k = Cov(x_t, x_{t-k}) / √( Var(x_t) · Var(x_{t-k}) ),

where ρ_k is the degree of correlation between x_t and the historical data point x_{t-k} that lies k sampling periods before it; Var(x_t) = E[(x_t - μ)²] measures the dispersion of x_t around the mean μ of the time series; and Cov(x_t, x_{t-k}) = E[(x_t - μ)(x_{t-k} - μ)] is the covariance of x_t and x_{t-k}. Starting from x_{t-1} and moving towards earlier data points, count the number of consecutive historical data points whose degree of correlation is greater than 0; for the i-th sample this count is m_i.

The number m of input-layer nodes of the Elman neural network is obtained as

m = (1/P) Σ_{i=1}^{P} m_i.
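A minimal Python sketch of this rule is given below; it estimates ρ_k with the sample autocorrelation of each window under a stationarity assumption (so that Var(x_t) ≈ Var(x_{t-k})), and the function name input_layer_size is illustrative.

```python
import numpy as np

def input_layer_size(samples):
    """Average, over all P samples, the number of consecutive lags k = 1, 2, ...
    whose correlation with the last point is positive; the rounded mean is m."""
    counts = []
    for x in samples:                          # each sample: 1-D array of t historical values
        x = np.asarray(x, dtype=float)
        mu = x.mean()
        var = x.var()
        m_i = 0
        for k in range(1, len(x)):
            # sample estimate of rho_k = Cov(x_t, x_{t-k}) / Var (stationarity assumed)
            cov = np.mean((x[k:] - mu) * (x[:-k] - mu))
            if var > 0 and cov / var > 0:
                m_i += 1
            else:
                break                          # stop at the first non-positive correlation
        counts.append(m_i)
    return max(1, int(round(float(np.mean(counts)))))
```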
After the structure of the neural network has been determined, the particle swarm algorithm is used, based on the training samples, to optimize the connection weights and thresholds of the neural network, specifically comprising the following steps:
Step 1: encode the connection weights and thresholds to be optimized into particles, and determine the swarm size, the positions and velocities of the particles, the learning factors and the inertia weight, where the dimension of each particle is the total number of connection weights and thresholds to be optimized.
The elements of a particle in the swarm are the self-feedback factor α and the weights of the Elman neural network. The particle coding format is shown in Fig. 2: a particle X consists of the self-feedback factor α followed by the neural network weights, where each weight matrix is expanded row by row into a vector and the resulting vectors are arranged one after another.
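The following Python sketch shows one way to realize this coding and its inverse; the names encode_particle and decode_particle and the exact concatenation order are illustrative assumptions consistent with Fig. 2 as described.

```python
import numpy as np

def encode_particle(alpha, w_in, w_ctx, w_out):
    """Flatten the self-feedback factor and the three weight matrices
    into one particle vector (row-wise expansion, as in Fig. 2)."""
    return np.concatenate([[alpha], w_in.ravel(), w_ctx.ravel(), w_out.ravel()])

def decode_particle(particle, m, n, r):
    """Inverse mapping: split a particle back into alpha and the weight matrices.
    m, n, r are the input-, hidden- and output-layer sizes."""
    alpha = particle[0]
    i = 1
    w_in = particle[i:i + n * m].reshape(n, m); i += n * m
    w_ctx = particle[i:i + n * n].reshape(n, n); i += n * n
    w_out = particle[i:i + r * n].reshape(r, n)
    return alpha, w_in, w_ctx, w_out
```

With this coding the particle length is L = 1 + n·m + n·n + r·n, which matches the 61-dimensional particle of the worked example below (m = 3, n = 6, r = 1).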
Step 2: initialize the swarm: randomly generate N particles of length L; the set of particles in the swarm is denoted A.
Step 3: iterative update:
Step 301: calculate the fitness of every particle in the current swarm. The fitness function used in the present invention is the mean squared error of the neural network:

Ft = (1/K) Σ_{i=1}^{K} (ŷ_i - y_i)²      (6)

where K is the total number of training samples, ŷ_i is the actual output of the neural network and y_i is the desired output of the neural network.
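A self-contained sketch of equation (6) as a fitness callback is given below; predict stands for any function that maps one training sample's input window to the network output (for instance a network decoded from a particle), and both names are illustrative.

```python
import numpy as np

def mse_fitness(predict, train_set):
    """Equation (6): mean squared error over the K training samples; smaller is better.
    `train_set` is a list of (input_window, desired_output) pairs."""
    errors = [float(np.sum((np.asarray(predict(u)) - np.asarray(y)) ** 2))
              for u, y in train_set]
    return float(np.mean(errors))
```

In the PSO loop this would be wrapped so that each particle is first decoded into network weights and then scored with mse_fitness.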
Step 302: update the individual extremum pbest of each particle and the global extremum gbest of the swarm.
In the standard particle swarm optimization algorithm, once the individual extremum pbest of each particle and the global extremum gbest of the swarm have been obtained, the velocity and position of every particle are updated as follows. In an n-dimensional search space, let X_i = (x_i1, x_i2, ..., x_in) be the current position of particle i, V_i = (v_i1, v_i2, ..., v_in) its current velocity, P_i = (p_i1, p_i2, ..., p_in) its individual extremum, and P_g(t) = (p_g1, p_g2, ..., p_gn) the best position experienced by any particle in the swarm, i.e. the global extremum. At the k-th iteration each particle is updated by equations (7) and (8):

v_ij(t+1) = w·v_ij(t) + c1·r1·(p_ij(t) - x_ij(t)) + c2·r2·(p_gj(t) - x_ij(t))      (7)
x_ij(t+1) = x_ij(t) + v_ij(t+1)      (8)

where i indexes the particle, j indexes the j-th dimension of the particle, w is the inertia weight that controls how strongly the historical velocity influences the current velocity and is usually taken in [0.1, 0.9], c1 and c2 are the learning factors, usually set to 2, and r1 and r2 are random numbers uniformly distributed on [0, 1].
Equation (7) shows that the velocity update consists of three parts. The first part is the inertia term: in the basic PSO algorithm the initial velocity is limited to a fixed range, and later improved variants add various forms of inertia factor to balance the global and local search abilities of the algorithm and improve its convergence speed. The second part is the cognitive component; it depends only on the particle itself and records how the particle's own past motion influences its next velocity. The third part is the social component, which captures the influence of the motion of the other particles in the swarm on this particle. If particles were updated only from their own experience, the algorithm would degenerate into many independently running particles, the effort of finding the optimal solution would increase and quality would degrade; if particles only followed the swarm, the algorithm could easily be trapped in a local optimum. Only by properly combining the particle's own experience with the swarm's experience can the optimal solution be found quickly and accurately.
As can be seen from the above velocity and position update of the PSO algorithm, in every iteration the particles of the swarm keep approaching the globally optimal solution, more and more particles gather in groups, lose their own velocity and become increasingly sluggish, so that the globally optimal solution is hard to find and premature convergence occurs. To avoid this, the present invention proposes a particle adjustment method based on the swarm distribution. The main idea is that, in every iteration, when the swarm distribution becomes relatively dense, a random position increment is added to scatter the particles so that the swarm can escape the locally optimal solution.
To measure the distribution of the particles in the swarm in each iteration, the present invention introduces the concept of aggregation degree. At the k-th iteration, first compute the distance between the i-th particle and the globally best particle,

d_i^k = || X_i - P_g ||.

Let n be the number of particles in the swarm, and let Nearnum_k be the number of particles with d_i^k ≤ r_k. The aggregation degree of the swarm is

R_k = Nearnum_k / n      (9)

In each iteration, if R_k > η, the particle with the best fitness is kept, n-1 displacement increments are generated at random, and these increments are used to update the positions of the other n-1 particles, i.e. all particles except the globally best one: the distance between each of these n-1 particles and the globally best particle is increased by a randomly generated displacement increment, and the resulting point is taken as the updated position of the particle. The magnitude of each displacement increment is at most MaxSet/2, where η is the aggregation threshold and MaxSet is the modulus of the maximum solution set. r_k is the neighbourhood radius of the optimal particle at the k-th iteration; because the random displacement increment is at most half the modulus of the maximum solution set, the neighbourhood radius of the optimal particle decreases from MaxSet/2 and is computed as in equation (10):

r_k = (1/2) × (1 - k / Maxiter) × MaxSet      (10)

where Maxiter is the maximum number of iterations and MaxSet is the modulus of the maximum solution set. Equation (10) shows that the neighbourhood radius decreases as the number of iterations increases. In the early stage of the computation the radius is larger, which prevents the particles from aggregating too quickly and the algorithm from converging prematurely to a locally optimal solution, and preserves the diversity of the swarm in its initial stage; in the later stage the radius is smaller, which favours convergence of the algorithm and prevents it from failing to converge.
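The following Python sketch illustrates this adjustment step (equations (9) and (10)); the function name scatter_if_crowded, the explicit best_idx argument and drawing each increment uniformly from [0, MaxSet/2) are assumptions made for the example.

```python
import numpy as np

def scatter_if_crowded(pos, gbest, best_idx, k, max_iter, max_set, eta=0.7, rng=None):
    """If the aggregation degree R_k exceeds eta, push every particle except the
    best one away from gbest by a random displacement increment."""
    rng = rng or np.random.default_rng()
    n, dim = pos.shape
    r_k = 0.5 * (1.0 - k / max_iter) * max_set          # equation (10): shrinking radius
    dist = np.linalg.norm(pos - gbest, axis=1)          # d_i^k for every particle
    R_k = np.sum(dist <= r_k) / n                       # equation (9): aggregation degree
    if R_k > eta:
        for i in range(n):
            if i == best_idx:                           # keep the globally best particle
                continue
            step = rng.uniform(0.0, max_set / 2.0)      # random displacement increment
            if dist[i] > 0:
                # move the particle further from gbest along its current direction
                pos[i] = gbest + (pos[i] - gbest) * (dist[i] + step) / dist[i]
            else:
                direction = rng.standard_normal(dim)
                pos[i] = gbest + direction / np.linalg.norm(direction) * step
    return pos
```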
In the particle adjustment method based on the swarm distribution proposed by the present invention, this operation is performed as step 303 described below, before the velocity and position of the particles are updated by the existing method.
Step 303: calculate the aggregation degree of the current swarm; if the aggregation degree is greater than the preset aggregation threshold, keep the globally best particle and use n-1 randomly generated displacement increments, each of magnitude at most MaxSet/2, to update the positions of the other n-1 particles, i.e. all particles except the globally best one; here n is the total number of particles in the swarm and MaxSet is the modulus of the maximum solution set. The aggregation degree R_k of the swarm at the current, k-th, iteration is calculated as

R_k = Nearnum_k / n,

where Nearnum_k is the number of particles in the current swarm whose distance from the globally best particle is less than r_k, and r_k is the neighbourhood radius of the globally best particle, calculated as

r_k = (1/2) × (1 - k / Maxiter) × MaxSet,

where Maxiter is the maximum number of iterations, MaxSet is the modulus of the maximum solution set, and k is the index of the current iteration.
Step 304: update the velocity and position of each particle.
The velocity and position of each particle are updated according to equations (7) and (8); since this is prior art, it is not repeated here.
Step 305: check whether any particle in the swarm has crossed the boundary; if a particle has crossed the boundary, adjust it onto the boundary of the solution domain.
When particle positions are updated, particles may cross the boundary. The present invention adopts a position-correction strategy: check whether the element in each dimension of each particle exceeds its limited range and, if it does, modify it to the boundary value.
Let X_i be the element of particle X in its i-th dimension and let its limited range be [D_min, D_max]. The out-of-bounds adjustment is given by equation (11):

X_i = D_max if X_i > D_max;  X_i = D_min if X_i < D_min      (11)

where D_min and D_max are the lower and upper limits of each particle element.
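Equation (11) is simply a per-dimension clamp; a minimal NumPy sketch (the helper name clamp_particle is illustrative):

```python
import numpy as np

def clamp_particle(position, d_min, d_max):
    """Equation (11): reset any out-of-range dimension to the violated boundary value."""
    return np.clip(position, d_min, d_max)
```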
Step 306: check whether the algorithm termination condition is satisfied (for example, the preset maximum number of iterations or the preset prediction accuracy has been reached); if so, the algorithm ends and goes to step 4; otherwise return to step 303 and continue iterating.
Step 4: decode the global extremum gbest of the last iteration to obtain the optimized connection weights and thresholds of the neural network.
At this point the optimized PSO-Elman prediction model is obtained, and it can be used to predict the server performance at a future moment. The construction process of the PSO-Elman prediction model is shown in Fig. 3.
Suppose the cloud environment contains M servers. From one server, 200 consecutive CPU performance records are chosen; every 20 records form one group, giving a sample set of 10 samples, where each sample is a time series {x_i^1, x_i^2, ..., x_i^10} of length 10 and i denotes the i-th sample; x_i^10 is the moment to be predicted. The number of input-layer nodes of the neural network is m, the number of hidden-layer nodes is n, and the number of output-layer nodes is r. The swarm size is set to N = 20, the maximum number of iterations to G = 100, the inertia weight to w = 0.5, the initial velocity of each particle to v = 0, the particle positions are restricted to [-1, 1], the learning factors are c1 = c2 = 0.7, the target error is 0.001 and the aggregation threshold η is 0.7. On this basis the PSO-Elman prediction model of the present invention is constructed as follows:
Step 1: initialization
1. Design the Elman network structure from the training samples: for each sample sequence, compute the correlation coefficients between its elements and x_i^10, forming the correlation-coefficient sequence {ρ_1^i, ρ_2^i, ..., ρ_9^i}. If, for example, ρ_7^i is less than 0 while ρ_8^i and ρ_9^i after it are both greater than or equal to 0, then m_i = 2. The number m of input-layer nodes is the mean of the m_i values, assumed here to be 3; the number of hidden-layer nodes is 6 and the number of output-layer nodes is 1.
2. Determine the swarm size, the positions and velocities of the particles, the learning factors c1 = c2 = 0.7 and the inertia weight w = 0.5; the dimension of each particle is 3 × 6 + 6 × 6 + 6 × 1 + 1 = 61.
3. Initialize the swarm: randomly generate N = 20 particles of length L = 61; the set of particles is A.
Step 2: iterative update
1. Calculate the fitness of every particle in swarm A according to equation (6).
2. Update the individual extremum pbest of each particle and the global extremum gbest of the swarm.
3. Calculate the aggregation degree of the swarm according to equations (10) and (9); if the aggregation degree is greater than the threshold η = 0.7, keep the optimal particle, randomly generate n-1 displacement increments with magnitudes of at most MaxSet/2, and use them to update the positions of the other n-1 particles in the swarm.
4. Update the velocity and position of each particle according to equations (7) and (8).
5. Check whether any particle in the swarm has crossed the boundary; if a particle has crossed the boundary, use equation (11) to adjust it onto the boundary of the solution domain.
6. Check whether the number of iterations has reached the maximum G = 100 or the preset prediction accuracy of 0.001 has been reached; if the termination condition is satisfied, take the globally optimal solution gbest of the last iteration as the weights and thresholds of the Elman neural network and end the algorithm; otherwise return to 3 and continue iterating.
Step 3: result output
Convert the optimal solution gbest into the weights and thresholds of each layer of the Elman network according to the numbers of neurons in the input layer, hidden layer, structural layer and output layer, build the Elman neural network with the solved weights and thresholds, use it for prediction, and the algorithm ends.
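For illustration, the Python sketch below builds fixed-length sample windows from a sequence of CPU records in the spirit of this example; the grouping parameters (window length, the split into history and target) are assumptions, since the example describes both 20-record groups and length-10 series.

```python
import numpy as np

def make_samples(cpu_series, window=10):
    """Cut a CPU-utilization series into non-overlapping windows of `window` values;
    in each window the first window-1 values are the history and the last is the target."""
    series = np.asarray(cpu_series, dtype=float)
    n_windows = len(series) // window
    histories, targets = [], []
    for i in range(n_windows):
        w = series[i * window:(i + 1) * window]
        histories.append(w[:-1])
        targets.append(w[-1])
    return histories, targets

# Example: 200 records cut into windows, as in the worked example above.
hist, targ = make_samples(np.random.rand(200), window=10)
```

The resulting (history, target) pairs would then serve as the training set for the PSO-Elman model described above.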
The prediction model proposed by the present invention maintains good accuracy in both short-term and long-term prediction and increases the training speed of the neural network.

Claims (4)

1. A server performance prediction method based on a particle swarm optimization neural network, in which historical data of the server performance parameter to be predicted is first chosen and pre-processed to obtain training samples, the training samples are then used to train a neural network, and the trained neural network is finally used to predict the server performance parameter at a future moment; characterized in that, when the neural network is trained, a particle swarm algorithm is used to optimize the connection weights and thresholds of the neural network, specifically comprising the following steps:
Step 1: encode the connection weights and thresholds to be optimized into particles, and determine the swarm size, the positions and velocities of the particles, the learning factors and the inertia weight, wherein the dimension of each particle is the total number of connection weights and thresholds to be optimized;
Step 2: initialize the swarm;
Step 3: iterative update:
Step 301: calculate the fitness of every particle in the current swarm;
Step 302: update the individual extremum of each particle and the global extremum of the swarm;
Step 303: calculate the aggregation degree of the current swarm; if the aggregation degree is greater than a preset aggregation threshold, keep the globally best particle and use n-1 randomly generated displacement increments, each of magnitude at most MaxSet/2, to update the positions of the other n-1 particles, i.e. all particles except the globally best one; wherein n is the total number of particles in the swarm and MaxSet is the modulus of the maximum solution set; the aggregation degree R_k of the swarm at the current, k-th, iteration is calculated as

R_k = Nearnum_k / n,

wherein Nearnum_k is the number of particles in the current swarm whose distance from the globally best particle is less than r_k, and r_k is the neighbourhood radius of the globally best particle, calculated as

r_k = (1/2) × (1 - k / Maxiter) × MaxSet,

wherein Maxiter is the maximum number of iterations, MaxSet is the modulus of the maximum solution set, and k is the index of the current iteration;
Step 304: update the velocity and position of each particle;
Step 305: check whether any particle in the swarm has crossed the boundary; if a particle has crossed the boundary, adjust it onto the boundary of the solution domain;
Step 306: check whether the algorithm termination condition is satisfied; if so, the algorithm ends and goes to step 4; otherwise return to step 303 and continue iterating;
Step 4: decode the global extremum of the last iteration to obtain the optimized connection weights and thresholds of the neural network.
2. The server performance prediction method based on a particle swarm optimization neural network according to claim 1, characterized in that the neural network is an Elman neural network.
3. The server performance prediction method based on a particle swarm optimization neural network according to claim 2, characterized in that the number of input-layer nodes of the Elman neural network is determined as follows:
let the total number of samples in the training set be P, and let each sample be a time series X_t = {x_1, x_2, ..., x_t} of t consecutive historical data points; for each sample, compute the degree of correlation ρ_k between x_t and every other historical data point in the sample, k = 1, 2, ..., t-1:

ρ_k = Cov(x_t, x_{t-k}) / √( Var(x_t) · Var(x_{t-k}) ),

wherein ρ_k is the degree of correlation between x_t and the historical data point x_{t-k} that lies k sampling periods before it; Var(x_t) = E[(x_t - μ)²] measures the dispersion of x_t around the mean μ of the time series; Cov(x_t, x_{t-k}) = E[(x_t - μ)(x_{t-k} - μ)] is the covariance of x_t and x_{t-k}; starting from x_{t-1} and moving towards earlier data points, count the number of consecutive historical data points whose degree of correlation is greater than 0; for the i-th sample this count is m_i;

the number m of input-layer nodes of the Elman neural network is obtained as

m = (1/P) Σ_{i=1}^{P} m_i.
4. The server performance prediction method based on a particle swarm optimization neural network according to any one of claims 1 to 3, characterized in that the fitness is the mean squared error of the neural network, obtained according to the following formula:

Ft = (1/K) Σ_{i=1}^{K} (ŷ_i - y_i)²,

wherein K is the total number of training samples, ŷ_i is the actual output of the neural network and y_i is the desired output of the neural network.
CN201310113116.1A 2013-04-02 2013-04-02 Server performance prediction method based on a particle swarm optimization neural network Expired - Fee Related CN103164742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310113116.1A 2013-04-02 2013-04-02 Server performance prediction method based on a particle swarm optimization neural network (granted as CN103164742B (en))

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310113116.1A 2013-04-02 2013-04-02 Server performance prediction method based on a particle swarm optimization neural network (granted as CN103164742B (en))

Publications (2)

Publication Number Publication Date
CN103164742A true CN103164742A (en) 2013-06-19
CN103164742B CN103164742B (en) 2016-02-17

Family

ID=48587808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310113116.1A Expired - Fee Related CN103164742B (en) Server performance prediction method based on a particle swarm optimization neural network

Country Status (1)

Country Link
CN (1) CN103164742B (en)


Also Published As

Publication number Publication date
CN103164742B (en) 2016-02-17


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination / Entry into force of request for substantive examination
C14 / GR01: Grant of patent or utility model / Patent grant
EE01: Entry into force of recordation of patent licensing contract
    Application publication date: 2013-06-19
    Assignee: Jiangsu Nanyou IOT Technology Park Ltd.
    Assignor: Nanjing Post & Telecommunication Univ.
    Contract record no.: 2016320000209
    Denomination of invention: Server performance prediction method based on particle swarm optimization neural network
    Granted publication date: 2016-02-17
    License type: Common License
    Record date: 2016-11-11
LICC: Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EC01: Cancellation of recordation of patent licensing contract
    Assignee: Jiangsu Nanyou IOT Technology Park Ltd.
    Assignor: Nanjing Post & Telecommunication Univ.
    Contract record no.: 2016320000209
    Date of cancellation: 2018-01-16
CF01: Termination of patent right due to non-payment of annual fee
    Granted publication date: 2016-02-17
    Termination date: 2019-04-02