Summary of the invention
It is an object of the invention to solve at least the above problems and to provide at least the advantages described later.
It is a further object of the present invention to provide a neural-network-based method for predicting the remaining capacity of a battery, which includes:
building a remaining-capacity prediction initial model using a neural network model;
acquiring multiple groups of voltage, current and remaining-capacity data of the battery, using the voltage and current data as input samples and the remaining-capacity data as desired outputs, and inputting them into the remaining-capacity prediction initial model; processing the weight vector W_n from the hidden layer to the output layer of the neural network in the initial model with the L1/2 regularization method to obtain a vector W_{n+1}; revising the number of hidden nodes of the neural network to m, the number of non-zero weight components in W_{n+1}; and then training the remaining-capacity prediction initial model to obtain multiple revised remaining-capacity prediction models;
selecting, from the multiple revised remaining-capacity prediction models obtained, the model with the smallest error relative to the desired output as the final remaining-capacity prediction model;
inputting the current and voltage values of the battery to be predicted into the remaining-capacity prediction model; the output data obtained is the remaining capacity of that battery.
Preferably, in the neural-network-based battery remaining-capacity prediction method, the specific procedure for processing the hidden-layer-to-output-layer weight vector W_n with the L1/2 regularization method to obtain the vector W_{n+1} is:
S1: compute the intermediate variable b:
b = W_n + (Samout − W_n·Unitout)·Unitout^T
where Samout is the output vector of the output layer of the neural network, Unitout is the output vector of the hidden layer, Unitout^T is the transpose of Unitout, and t is the arithmetic precision of the neural network;
S2: compute the intermediate variable r:
r = (96^(1/2)/(9u))·abs(b(i))^(3/2)
where u is a given value; b(i) is the i-th component of b, i = 1, 2, ..., n; and n is the number of components of W_n;
S3: compute the i-th component W_{n+1}(i) of W_{n+1}:
if abs(b(i)) > (54^(1/3)/4)·(ru)^(2/3), then
W_{n+1}(i) = (2/3)·b(i)·(1 + cos(2π/3 − (2/3)·φ(b(i)))),
where φ(b(i)) = arccos((ru/8)·(abs(b(i))/3)^(−3/2)),
and abs(b(i)) is the absolute value of b(i);
if abs(b(i)) ≤ (54^(1/3)/4)·(ru)^(2/3), then W_{n+1}(i) = 0;
S4: compute W_{n+1}(1) through W_{n+1}(n) in turn to obtain W_{n+1}.
Preferably, in the neural-network-based battery remaining-capacity prediction method, t = 0.000001 and u = 0.001.
Preferably, in the neural-network-based battery remaining-capacity prediction method, after the number of hidden nodes of the neural network is determined and before the remaining-capacity prediction initial model is trained, the method further includes: based on the principle of fuzzy control, multiplying the input sample data by K and using the resulting data as new input samples, while also multiplying the hidden-layer center-point parameters by K, where 0 < K < 1.
Preferably, in the neural-network-based battery remaining-capacity prediction method, before the input samples and desired outputs are input into the remaining-capacity prediction initial model for training, the voltage, current and remaining-capacity data of the battery are normalized; the processed voltage and current data serve as the input samples, and the processed remaining-capacity data as the desired outputs.
Preferably, in the neural-network-based battery remaining-capacity prediction method, the neural network model used to build the remaining-capacity prediction initial model is an RBF neural network model.
Preferably, in the neural-network-based battery remaining-capacity prediction method, the remaining-capacity prediction initial model uses gradient descent as the learning algorithm of the neural network.
Preferably, in the neural-network-based battery remaining-capacity prediction method, when the remaining-capacity prediction initial model is trained, a simulated annealing algorithm is introduced into the training algorithm to solve the local-optimum problem and thereby reduce the number of training iterations:
after training by gradient descent, a set of once-optimized neural network model parameters is obtained, namely the hidden-layer-to-output-layer weight vector W and the center-point parameter Center of the hidden-layer function; a small offset Δw is then added to W to obtain a new weight vector NewW = W + Δw, and a small offset Δcenter is likewise added to Center to obtain a new center-point parameter NewCenter = Center + Δcenter; gradient-descent training is then carried out with NewW and NewCenter, so that a better set of neural network model parameters can be obtained, reducing the number of training iterations needed to reach the same training precision.
The present invention first selects the RBF neural network as the basic neural network architecture according to its characteristics, then chooses a suitable number of hidden nodes in the neural network through the L1/2 regularization method, then further improves the estimation accuracy by processing the input data according to the logical idea of fuzzy control, and finally introduces a simulated annealing algorithm into the learning algorithm of the neural network (gradient descent) to remedy the defect that gradient descent used alone easily falls into a locally optimal solution. The RBF neural network finally formed therefore converges quickly, needs few training iterations, and has high estimation accuracy. Applied to a real data scenario, the technique can estimate the residual capacity of base-station storage batteries well, and can satisfy the accuracy requirement for estimating the remaining capacity of storage batteries in base stations, so that it can be applied to actual base-station storage batteries.
Further advantages, objects and features of the invention will be set forth in part in the description that follows, and in part will become apparent to those skilled in the art through study and practice of the invention.
Specific embodiment
As shown in Figure 1, the present invention provides a neural-network-based battery remaining-capacity prediction method, comprising:
building a remaining-capacity prediction initial model using a neural network model;
acquiring multiple groups of voltage, current and remaining-capacity data of the battery, using the voltage and current data as input samples and the remaining-capacity data as desired outputs, and inputting them into the remaining-capacity prediction initial model; processing the weight vector W_n from the hidden layer to the output layer of the neural network in the initial model with the L1/2 regularization method to obtain a vector W_{n+1}; revising the number of hidden nodes of the neural network to m, the number of non-zero weight components in W_{n+1}; and then training the remaining-capacity prediction initial model to obtain multiple revised remaining-capacity prediction models;
selecting, from the multiple revised remaining-capacity prediction models obtained, the model with the smallest error relative to the desired output as the final remaining-capacity prediction model;
inputting the current and voltage values of the battery to be predicted into the remaining-capacity prediction model; the output data obtained is the remaining capacity of that battery.
In this technical solution, we establish the basic framework of a neural network and, through a data (e.g. voltage, current) acquisition module and a remaining-battery-capacity test acquisition module, collect an appropriate amount of current, voltage and corresponding remaining-capacity data of base-station storage batteries, saving it as the training data and test data of the neural network. The training data is then input into the neural network: a suitable number of hidden nodes is first selected by the L1/2 regularization method, the estimation accuracy is then further improved by scaling the input data down appropriately following the logical idea of fuzzy control, and finally a simulated annealing algorithm is introduced into the training algorithm to solve the problem of easily falling into a local minimum during training. From the resulting models, the one with the smallest error relative to the test data is selected as the final neural network model.
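The final selection step (keeping the candidate with the smallest error on the test data) can be sketched as follows. This is a minimal illustration: the candidate models here are stand-in functions, and mean squared error is assumed as the error measure, which the text does not fix.

```python
import numpy as np

def mse(model, X_test, y_test):
    """Mean squared error of one candidate model on the held-out test data."""
    pred = np.array([model(x) for x in X_test])
    return float(np.mean((pred - y_test) ** 2))

# three hypothetical trained candidates standing in for the revised models
candidates = [lambda x: 0.9 * x, lambda x: 1.0 * x, lambda x: 1.2 * x]

X_test = np.array([1.0, 2.0, 3.0])   # inputs held out for testing
y_test = np.array([1.0, 2.0, 3.0])   # the second candidate matches exactly

# keep the model with the smallest test error as the final prediction model
best = min(candidates, key=lambda m: mse(m, X_test, y_test))
```

The same loop applies unchanged when the candidates are trained networks rather than toy functions.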
In the neural-network-based battery remaining-capacity prediction method, the specific procedure for processing the hidden-layer-to-output-layer weight vector W_n of the neural network in the remaining-capacity prediction initial model with the L1/2 regularization method to obtain the vector W_{n+1} is:
S1: compute the intermediate variable b:
b = W_n + (Samout − W_n·Unitout)·Unitout^T
where Samout is the output vector of the output layer of the neural network, Unitout is the output vector of the hidden layer, Unitout^T is the transpose of Unitout, and t is the arithmetic precision of the neural network;
S2: compute the intermediate variable r:
r = (96^(1/2)/(9u))·abs(b(i))^(3/2)
where u is a given value; b(i) is the i-th component of b, i = 1, 2, ..., n; and n is the number of components of W_n;
S3: compute the i-th component W_{n+1}(i) of W_{n+1}:
if abs(b(i)) > (54^(1/3)/4)·(ru)^(2/3), then
W_{n+1}(i) = (2/3)·b(i)·(1 + cos(2π/3 − (2/3)·φ(b(i)))),
where φ(b(i)) = arccos((ru/8)·(abs(b(i))/3)^(−3/2)),
and abs(b(i)) is the absolute value of b(i);
if abs(b(i)) ≤ (54^(1/3)/4)·(ru)^(2/3), then W_{n+1}(i) = 0;
S4: compute W_{n+1}(1) through W_{n+1}(n) in turn to obtain W_{n+1}.
In the neural-network-based battery remaining-capacity prediction method, t = 0.000001 and u = 0.001.
In another technical solution, when a radial basis function neural network is used to predict remaining battery capacity, there is the problem of determining the network structure (the number of hidden nodes): too many hidden nodes lead to overfitting of the neural network, while too few prevent it from learning the features of the data. Therefore, following the idea of principal component analysis, the L1/2 regularization algorithm is applied to the selection of the number of network nodes, obtaining a suitable number of nodes and solving the overfitting problem common in neural network prediction.
Suppose the initial number of hidden nodes is set relatively large, say 6, and the hidden-layer-to-output-layer weight vector W is set at random to a 6-dimensional vector with no zero components, e.g. W = (1.2, 1.6, 0.3, 0.35, 0.78, 1.8). After the neural network is trained with the hidden-node algorithm above, we obtain the final W = (1.2, 0.9, 0.3, 0, 0, 0). Some components of this vector are 0, meaning those weights are 0 and their corresponding hidden nodes can be removed. The selection of the number of hidden nodes is thus achieved, yielding a suitable count of 3 hidden nodes.
The core idea of the L1/2 regularization algorithm can be expressed by the following threshold iteration formulas:
W_i(n+1) = g(W_i(n)), if |b_i| > m;
W_i(n+1) = 0, if |b_i| ≤ m;
The whole algorithm proceeds as follows. Given the algorithm precision t (generally a small value such as 0.000001) and the maximum number of steps M; the initial iteration step n = 0; given u (generally a small value such as 0.001) and the sparsity k (the degree of sparseness, generally a fairly large integer such as 10); given the matrix Unitout (the hidden-layer output of the RBF neural network) and the vector Samout (the output of the output layer of the RBF neural network); and given at random an initial iteration point W (the hidden-layer-to-output-layer weight vector of the RBF neural network):
W_n = W;
b = W_n + (Samout − W_n·Unitout)·Unitout^T;
sort the components of the vector b in descending order, obtaining bsort = sort(b, 2, 'descend');
r = (96^(1/2)/(9u))·abs(b(k+1))^(3/2),
where b(k+1) denotes the (k+1)-th component of bsort;
Suppose W_{n+1}(i), W_n(i) and b(i) are the i-th components of their respective vectors; then each component satisfies:
if abs(b(i)) > (54^(1/3)/4)·(ru)^(2/3), then
W_{n+1}(i) = (2/3)·b(i)·(1 + cos(2π/3 − (2/3)·arccos((ru/8)·(abs(b(i))/3)^(−3/2))));
otherwise W_{n+1}(i) = 0;
This yields the initial W_n and W_{n+1}. In the above formulas, W_n denotes the weight vector from the hidden nodes of the RBF neural network to the output layer, b(i) is obtained from W_n, and n denotes the iteration count. After the above procedure has been iterated a certain number of times, W_{n+1} and W_n agree to a certain degree of approximation. By analyzing the size of each weight component of W_{n+1} (each being a hidden-node-to-output-layer weight), one finds that some weights are 0 or close to 0; the hidden nodes corresponding to these weights can then be removed, and the nodes finally retained give the suitable number of hidden nodes obtained by the L1/2 regularization method.
In the neural-network-based battery remaining-capacity prediction method, after the number of hidden nodes of the neural network is determined and before the remaining-capacity prediction initial model is trained, the method further includes: based on the principle of fuzzy control, multiplying the input sample data by K and using the resulting data as new input samples, while also multiplying the hidden-layer center-point parameters by K, where 0 < K < 1.
In another technical solution: the process of estimating the residual capacity of base-station storage batteries with a neural network is, in effect, the construction, through the network's learning of the training data, of an estimation system from the input data (the battery's voltage and current) to the output data (the remaining battery capacity), i.e. a neural-network approximation of a mathematical model between the input data and the output data. Suppose X is the input data, Y is the output data (the remaining-capacity estimate), and the RBF neural network is equivalent to a mapping f; then the output data obtained by passing the input data through the neural network is given by:
Y = f(X);
From the idea of fuzzy control logic, when two groups of data are scaled down by an equal proportion, the difference between their numerical magnitudes also shrinks, so that to some extent they become more alike, i.e.:
|X1 − X2| > K·|X1 − X2|,
where K is a positive number less than 1.
Suppose X1 is the input data used to train the neural network and X2 is the data for which the remaining battery capacity is to be estimated; from the above conclusion:
|K·X1 − K·X2| < |X1 − X2|
That is, when the neural network training data and the input data to be estimated are scaled down by an equal proportion, the difference between the data shrinks and their similarity increases; therefore, as the error on the training data is reduced, the estimation error of the battery also decreases, i.e. the estimation accuracy improves:
|{Y1 = f(X1)} − {Y2 = f(X2)}| > |{Y1 = f(K·X1)} − {Y2 = f(K·X2)}|
Here Y1 is the output of the neural network on the training data, and Y2 is the remaining battery capacity estimated with the neural network. It follows from the above formula that, under the same training error (i.e. the same precision), scaling down the training data X1 and the estimation-time input data X2 by a certain equal proportion reduces the estimation error and thus improves the estimation accuracy.
The scaling parameter a was selected experimentally. The input data, a two-dimensional vector group composed of voltage and current, is
SamIn = [12.8 12.7 12.6 12.4 12.3 12.2 12 11.9 11.8 11.6 11.5 11.4 11.2 11.1 11 12.7 12.6 12.5 12.4 12.3 12.2 12.1 11.8 11.7 11.6 11.5 11.4 11.3 11.2 11 12.8 12.7 12.6 12.5 12.4 12.3 12 11.9 11.8 11.7 11.6 11.4 11.3 11.2 11.1; 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3]·a,
while the hidden-layer center points are Center = [11 12 13; 1 2 3]·a. The test data, the input-voltage and input-current vector group fed to the trained neural network, is
testno = [11.3 11.7 12.1 12.5 12.9 11.1 11.9 12 12.8 12.9 11 11.5 12.1 12.2 12.9; 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3]·a.
After experimental selection, a suitable value a = 0.27 was obtained. The voltage and current for which the residual capacity is to be predicted are then multiplied by a before being input into the trained neural network system, which yields a suitable residual capacity.
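The scaling step can be sketched as follows; only the first few columns of the SamIn matrix are reproduced, and a = 0.27 is the experimentally selected value quoted above.

```python
import numpy as np

a = 0.27  # scaling factor selected experimentally above

# first four voltage/current columns of SamIn from the text
SamIn = np.array([[12.8, 12.7, 12.6, 12.4],
                  [ 1.0,  1.0,  1.0,  1.0]])
# hidden-layer centre points from the text
Center = np.array([[11.0, 12.0, 13.0],
                   [ 1.0,  2.0,  3.0]])

SamIn_scaled = a * SamIn      # training inputs scaled down by a
Center_scaled = a * Center    # centre points scaled by the same factor

# the fuzzy-control inequality |K*X1 - K*X2| < |X1 - X2| for K = a < 1
x1, x2 = 12.8, 11.0
gap_shrinks = abs(a * x1 - a * x2) < abs(x1 - x2)
```

Inputs presented at estimation time must be multiplied by the same factor a before being fed to the trained network, as the text specifies.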
In the neural-network-based battery remaining-capacity prediction method, before the input samples and desired outputs are input into the remaining-capacity prediction initial model for training, the voltage, current and remaining-capacity data of the battery are normalized; the processed voltage and current data serve as the input samples, and the processed remaining-capacity data as the desired outputs.
In another technical solution: before data is used for neural network training, it generally needs to be preprocessed, and whether preprocessing is needed can generally be considered from the following two points.
First point: when the neural network is trained with gradient descent (the training algorithm used by this technique) and the range of the input data is large, e.g. [0, 1000], the contour lines of the error surface are very sharp; the search for the optimal solution, which moves perpendicular to the contour lines, then proceeds in a zigzag and needs many iterations to converge, so convergence is too slow. If normalization converts the data range to [0, 5], the contours formed are relatively round, and the search perpendicular to the contours follows a nearly straight path, finding the optimal solution in fewer iterations and greatly increasing the convergence speed.
Second point: in algorithms that use the distance between samples as the criterion for judging features, such as the k-nearest-neighbor algorithm, if the value range of some feature is large, it will dominate the judgment of features with smaller value ranges, so that the relative importance of the features is blurred by their value ranges.
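As a concrete illustration of the first point, a min-max rescaling of wide-range data into [0, 5] might look like the sketch below; the patent does not specify the normalization formula, so this particular form is an assumption.

```python
import numpy as np

def rescale(x, lo=0.0, hi=5.0):
    """Min-max rescaling of a data vector into the range [lo, hi]."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

raw = np.array([0.0, 250.0, 500.0, 1000.0])  # wide-range raw data, e.g. [0, 1000]
scaled = rescale(raw)                        # now spans [0, 5]
```

The same transform would be applied consistently to training inputs, estimation inputs, and (with its own min/max) the remaining-capacity targets.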
In the neural-network-based battery remaining-capacity prediction method, the neural network model used to build the remaining-capacity prediction initial model is an RBF neural network model.
In another technical solution: over the last decade or so, research on artificial neural networks has continuously deepened and made great progress, solving many problems in fields such as prediction and estimation. The two most widely applied types of neural network are the BP neural network and the RBF neural network, and the RBF neural network has been successfully applied in fields such as function approximation, pattern classification, system modelling, pattern recognition and signal processing. The RBF neural network features a simple network structure, strong nonlinear approximation capability, fast convergence and global convergence.
Compared with the RBF neural network, the BP neural network has a more complex network structure, so the number of parameters that must be trained increases and its convergence is correspondingly slower; we therefore finally chose the RBF neural network as the basic structural model of our neural network.
The radial basis function of the j-th neuron of the RBF neural network is (in the standard Gaussian form):
h_j(x) = exp(−‖x − c_j‖² / (2·b_j²))
where c_j = [c_1j ... c_nj] is the coordinate vector of the center point of the radial basis function of the j-th hidden-layer neuron, and b = [b_1 ... b_m]^T, with b_j the width of the j-th radial basis function of the hidden layer.
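The hidden-node activation above translates directly into code; the Gaussian form shown is the usual choice for RBF networks, and the centre and width values here are illustrative.

```python
import numpy as np

def rbf(x, c, b):
    """Gaussian radial basis function with centre c and width b."""
    x, c = np.asarray(x, dtype=float), np.asarray(c, dtype=float)
    return float(np.exp(-np.linalg.norm(x - c) ** 2 / (2 * b ** 2)))

c = np.array([11.0, 1.0])          # illustrative centre (voltage, current)
near = rbf([11.0, 1.0], c, b=1.0)  # input at the centre: activation 1
far = rbf([20.0, 5.0], c, b=1.0)   # distant input: activation essentially 0
```

The near/far contrast illustrates the local response that the following comparison with the BP network relies on.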
For the BP neural network, the activation of its j-th neuron is the Sigmoid function f(net) = 1/(1 + e^(−net)), where net = p_j·x and p_j = [p_1j ... p_nj]^T. Comparing the two functions shows that the output of an RBF node is local: it is related only to inputs close to the center point, i.e. the output differs appreciably from 0 only when ‖x − c_j‖ is small, whereas a BP neuron is related to almost all inputs over a very wide range, as determined by the characteristics of its function. Because of this difference in function characteristics, when the output error is used to adjust the network nodes and weights, the RBF neural network converges much faster than the BP neural network.
In the neural-network-based battery remaining-capacity prediction method, the remaining-capacity prediction initial model uses gradient descent as the learning algorithm of the neural network.
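A minimal sketch of gradient descent on the hidden-to-output weights under a least-squares objective; the tiny Unitout matrix and target weights are synthetic illustrations, not values from the patent.

```python
import numpy as np

# hidden-layer outputs for 3 training samples of a 2-hidden-node network
Unitout = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0]])
W_true = np.array([1.0, -0.5])     # weights that generated the targets
Samout = W_true @ Unitout          # desired output-layer values

W = np.zeros(2)                    # initial hidden-to-output weights
lr = 0.5                           # learning rate
for _ in range(200):
    err = W @ Unitout - Samout                      # residual on all samples
    W -= lr * (err @ Unitout.T) / Unitout.shape[1]  # gradient-descent step
loss = float(np.mean((W @ Unitout - Samout) ** 2))
```

On this well-conditioned toy problem the loss drops essentially to zero; the local-minimum issue discussed later arises for the full nonlinear training problem, not for this linear least-squares case.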
In the neural-network-based battery remaining-capacity prediction method, when the remaining-capacity prediction initial model is trained, a simulated annealing algorithm is introduced into the training algorithm to solve the local-optimum problem and reduce the number of training iterations:
after training by gradient descent, a set of once-optimized neural network model parameters is obtained, namely the hidden-layer-to-output-layer weight vector W and the center-point parameter Center of the hidden-layer function; a small offset Δw is then added to W to obtain a new weight vector NewW = W + Δw, and a small offset Δcenter is likewise added to Center to obtain a new center-point parameter NewCenter = Center + Δcenter; gradient-descent training is then carried out with NewW and NewCenter, so that a better set of neural network model parameters can be obtained, reducing the number of training iterations needed to reach the same training precision.
In another technical solution: in the learning process of the RBF neural network we use gradient descent as the learning algorithm of the neural network, but because the initial point of the learning algorithm is chosen at random, gradient descent easily falls into a locally optimal solution. The simulated annealing algorithm derives from the theory of solid annealing; its principle is to start from some high initial temperature and, as the temperature parameter continuously decreases, to find the globally optimal solution of the objective function by random search in the solution space combined with a probabilistic jump characteristic, i.e. it can probabilistically jump out of a locally optimal solution and finally tend to the globally optimal solution. Suppose L(m) is the objective function of our RBF neural network training process; the least-squares regression we use seeks the minimum of the function L(m) by the least-squares algorithm. On the basis of this learning algorithm we introduce the simulated annealing algorithm as follows:
ΔL(m) = L(m + Δm) − L(m);
if ΔL(m) < 0, take m + Δm as the new solution; if ΔL(m) > 0, take m + Δm as the new solution with probability exp(−ΔL(m)/M).
The above step is repeated, but the value of M (initially set to a constant) is reduced by a fraction each time, and a maximum number of repetitions of the above step is set.
Let L(m) denote a locally optimal solution and L(M) the globally optimal solution. First, gradient descent is used to iterate on the objective function to obtain a locally optimal solution; simulated annealing is then introduced, and a small offset Δm is added to the parameter m to obtain L(m + Δm). If L(m + Δm) > L(m), we continue the gradient-descent search with the parameters of L(m + Δm) as the new starting point with a fairly large probability P (this probability decreases as the number of iterations increases and finally tends to 0). The process stops when P tends to 0, giving an approximate globally optimal solution (in practice the globally optimal solution is not necessarily obtained, only approached as closely as possible); this largely solves the problem of falling into a locally optimal solution that occurred previously when gradient descent was used alone.
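The acceptance rule and cooling schedule described above can be sketched as follows. The objective function, offset distribution, and cooling factor are illustrative assumptions; the Metropolis-style rule exp(−ΔL/M) is as stated in the text.

```python
import math
import random

def anneal(L, m0, M=1.0, cooling=0.95, steps=500, spread=0.5, seed=1):
    """Minimise L(m) by simulated annealing with the acceptance rule above."""
    rng = random.Random(seed)
    m, temp = m0, M
    for _ in range(steps):
        dm = rng.uniform(-spread, spread)      # small offset added to m
        dL = L(m + dm) - L(m)
        # always accept a better solution; accept a worse one with
        # probability exp(-dL/temp), where temp plays the role of M
        if dL < 0 or rng.random() < math.exp(-dL / temp):
            m = m + dm
        temp *= cooling                        # reduce M a little each round
    return m

L = lambda m: m * m        # illustrative one-dimensional objective
best = anneal(L, m0=5.0)   # should end near the minimum at m = 0
```

In the method described above, each accepted perturbation would additionally be refined by a run of gradient descent before the next annealing step.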
Although the embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments; the invention can be applied in all fields suitable for it. Additional modifications can easily be realized by those skilled in the art; therefore, without departing from the general concept defined by the claims and their equivalent scope, the invention is not limited to the specific details and the embodiments shown and described herein.