CN102622418B - Prediction device and equipment based on a BP (Back Propagation) neural network


Info

Publication number
CN102622418B
Authority
CN
China
Prior art keywords
data, unit, output, neuron, outcome
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201210039243
Other languages
Chinese (zh)
Other versions
CN102622418A (en)
Inventor
马楠 (Ma Nan)
王汕汕 (Wang Shanshan)
周林 (Zhou Lin)
沈洪 (Shen Hong)
曹国良 (Cao Guoliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN 201210039243 (CN102622418B)
Publication of CN102622418A
Application granted
Publication of CN102622418B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a prediction device based on a BP (Back Propagation) neural network. The prediction device comprises a data receiving unit, a data preprocessing unit, an initialization unit, a learning unit, an iterative prediction unit and a data recovery unit. The improved algorithm it uses can automatically detect the format of the original training data and perform sample construction and normalization on it. The prediction device is applicable to a variety of complex situations, is highly flexible, can complete prediction without auxiliary data, and restores the prediction results to the numerical range corresponding to the original training data.

Description

A prediction device and equipment based on a BP neural network
Technical field
The present invention relates generally to the field of data mining technology.
Background technology
1. Data mining technology:
With the development of storage devices and databases, storing data is no longer a problem; on the contrary, people have begun to feel flooded by large amounts of data. A scientific method is therefore urgently needed to turn massive data into knowledge and rules that are of practical significance, and data mining is the technology that arose against this background.
Data mining quietly emerged as a research field in the late 1980s. Its purpose is to discover, within large data sets, hidden information with specific regularities that is of interest to people. As the field has developed, the technology has been applied in numerous areas such as business management, government administration, scientific research and engineering development.
In general, the basic data mining process is as follows:
⑴ Problem definition: clearly define the business problem and the purpose of the data mining.
⑵ Data preparation: this comprises selecting data — extracting the target data set for mining from a large database or data warehouse — and data preprocessing, which includes checking the completeness and consistency of the data, denoising, filling in missing fields, and deleting invalid data.
⑶ Data mining: select an appropriate algorithm according to the function type and the characteristics of the data, and mine the cleaned and transformed data set.
⑷ Result interpretation: explain and evaluate the mining results, and convert them into knowledge that can ultimately be understood by the user.
⑸ Knowledge application: integrate the knowledge obtained from the analysis into the organizational structure of the operational information system.
2. Neural network algorithms:
In recent years, neural networks have been widely applied to time-series analysis and financial forecasting. This is because a neural network has a strong capability for nonlinear function approximation, overcoming the shortcomings of traditional processing methods on the data side, which has made it successful in the prediction field.
A neural network is a mathematical model for information processing whose structure resembles the synaptic structure of the brain; it is an abstraction, simplification and simulation of the human brain, reflecting its fundamental characteristics.
A neural network (Fig. 1) is also a computational model, composed of a large number of nodes (also called neurons) connected to one another with weights. Each node represents a specific output function, called the activation function. Each connection between two nodes carries a weight applied to the signal passing through that connection, which serves as the memory of the neural network. The output of the network then varies with the connection pattern, the weight values and the activation functions. The network itself is usually an approximation to some natural algorithm or function, or possibly the expression of a logical strategy.
Neural networks mainly address the classification and regression tasks of data mining; they can discover smooth, continuous nonlinear relationships between input attributes and predictable attributes.
3. The BP neural network:
The BP (Back Propagation) network was proposed in 1986 by the group of scientists headed by Rumelhart and McClelland. It is a multi-layer feed-forward network trained by the error back-propagation algorithm, and is one of the most widely used neural network models at present. A BP network can learn and store a large number of input-output mapping relationships without the mathematical equations describing these mappings needing to be known in advance. Its learning rule uses steepest descent, continually adjusting the weights and thresholds of the network by back-propagation so as to minimize the sum of squared errors of the network. The topology of a BP neural network model comprises an input layer, a hidden layer and an output layer.
Neural networks can be used for classification, clustering, prediction and so on. A neural network needs a certain amount of historical data; through training on that data, the network can learn the tacit knowledge hidden within it. For a given problem, one first identifies some of its features together with the corresponding evaluation data, and trains the neural network with these data. Although the BP network has been widely applied, it has some defects and shortcomings of its own, mainly in the following respects. First, because the learning rate is fixed, the convergence of the network is slow and a long training time is needed. For some difficult problems, the training time required by the BP algorithm may be very long, mainly because the learning rate is too small; this can be improved by using a varying or adaptive learning rate. Second, the BP algorithm can make the weights converge to a certain value, but there is no guarantee that this is the global minimum of the error surface, because gradient descent may produce a local minimum; this problem can be addressed by adding a momentum term. Third, there is still no theoretical guidance for choosing the number of hidden layers and the number of units per layer; they are generally determined empirically or by repeated experiments, so the network often has considerable redundancy, which to some extent also increases the burden of learning. Finally, the learning and memory of the network are unstable: if training samples are added, the trained network must be retrained from scratch, and the previous weights and thresholds are not retained. However, weights that already predict, classify or cluster reasonably well can be saved.
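The momentum remedy for local minima mentioned above can be sketched briefly. This is a generic illustration under our own parameter choices (a learning rate of 0.1 and a momentum coefficient of 0.9 — the patent fixes neither), not the patent's implementation:

```python
def momentum_update(w, grad, prev_delta, lr=0.1, mu=0.9):
    """One gradient step with an additional momentum term: a fraction of
    the previous weight change is carried over, which helps the descent
    roll through shallow local minima and speeds up convergence."""
    delta = lr * grad + mu * prev_delta
    return w - delta, delta

# Two successive steps on the same gradient: the second step is larger
# because the momentum term accumulates the previous change.
w, d = momentum_update(1.0, 0.5, 0.0)
w2, d2 = momentum_update(w, 0.5, d)
```

With momentum, repeated steps in a consistent direction grow in size, whereas plain gradient descent with a fixed learning rate would take identical steps.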
Reviewing the development of the various data mining prediction algorithms, we can see that the bottleneck to their popularization is that different algorithms require different training data formats and specialized parameter settings; their prediction processes often need human intervention and configuration; a large amount of auxiliary data must be provided at prediction time; and the prediction results are not intuitive enough. All of this is very unfavorable for the popularization and use of the algorithms.
Summary of the invention
The objective of the invention is to provide a brand-new, self-adaptive prediction device based on the BP neural network. It improves the BP neural network algorithm in order to adapt to various common prediction modes, such as the prediction of weather and traffic conditions. As an embodiment of the present invention, a self-adaptive prediction device based on the BP neural network is provided. The device comprises a data receiving unit, a data preprocessing unit, an initialization unit, a learning unit, an iterative prediction unit and a data recovery unit, arranged serially in that order. The data receiving unit, data preprocessing unit and initialization unit prepare the basic data and parameters so that the device runs smoothly; the learning unit trains on and learns from the raw data, finding the weight matrices; the iterative prediction unit derives the prediction results from the weight matrices; and the data recovery unit restores the normalized prediction results to their original values.
Further, the data receiving unit receives the original training data necessary for the prediction device and a prediction-duration parameter; the original training data is an N*N matrix.
The data preprocessing unit automatically normalizes the original training data received by the data receiving unit, and constructs the input and output matrices of the training samples.
The initialization unit initializes the parameters of the BP neural network, including the learning rate, the minimum expected error, the inertia coefficient, the maximum number of training iterations, and the initial values of each neuron of the hidden and output units.
In an optional embodiment, the learning unit learns from the training samples, continually adjusting the weighting coefficients of the hidden and output units during learning; the learning process ends when the error is less than the minimum expected error or when the maximum number of training iterations is reached. Its objective is to obtain the weight matrices.
The iterative prediction unit performs iterative prediction: the last training sample of the learning unit is taken as known, and the learned weighting coefficients of each unit are used to compute the first prediction; this prediction is then taken as known to extrapolate the next one, and so on, until all predictions are completed iteratively and the prediction results are obtained.
The data recovery unit restores the normalized prediction results to the numerical range consistent with the original training data.
In a specific, non-limiting embodiment, the error computation function is:
For an output neuron: Err_i = O_i(1 - O_i)(T_i - O_i),
where O_i is the output of output neuron i and T_i is the actual value of that output neuron.
For a hidden neuron: Err_i = O_i(1 - O_i) Σ_j Err_j w_ij,
where O_i is the output of hidden neuron i, the neuron feeds outputs to neurons j in the next layer, Err_j is the error of neuron j, and w_ij is the weight between the two neurons.
The weight adjustment function is: w_ij = w_ij + l * Err_j * O_i, where l is the learning rate.
In a more preferred embodiment of the present invention, the data received by the data receiving unit are weather data, or traffic state data.
The present invention also provides weather forecasting equipment comprising the above prediction device based on the BP neural network.
The method provided by the invention uses a relatively flexible training data format; the algorithm automatically performs normalization, sample construction and similar work, avoiding problems such as algorithm failure caused by chaotic data formats. In addition, prediction does not need any auxiliary data to be supplied; continuous multi-attribute prediction can be carried out from the original training data alone. Moreover, the BP neural network algorithm itself is applicable to many complicated situations and is highly flexible.
The device can automatically discriminate the pattern of the original training data, and perform sample construction and normalization on it. The prediction method uses iterative prediction and does not need auxiliary data to complete a prediction. The prediction results can be restored to the numerical range corresponding to the original training data. The BP neural network algorithm is applicable to various complicated situations, but it is strongly affected by the data format and its results are relatively difficult to interpret; this construction method for a BP neural network prediction device based on an improvement strategy can to a great extent reduce human control over the data, and allows the device to be used in many environments.
Description of drawings
Fig. 1 is a schematic diagram of the basic neural network algorithm.
Fig. 2 is the functional flow chart of the prediction device based on the BP neural network provided by the invention.
Embodiments
The present invention is described below in conjunction with specific embodiments.
In a non-limiting embodiment concerning weather forecasting, using the method of the present invention, only digitized historical data such as the maximum and minimum temperatures over a period of time need be provided in order to predict information such as the future maximum and minimum temperatures; in this process no other additional data need be supplied, and the prediction results can be presented intuitively.
Take temperature prediction as an example: temperature information with N rows and 2 columns is provided, the column attributes being the maximum and minimum temperature respectively. After receiving this raw data the device begins preprocessing, i.e. it normalizes the N-row, 2-column initial temperature data to values between 0 and 1; it then automatically constructs the input and output matrices of the training samples, taking records 1 to N-1 as the input matrix and records 2 to N as the output matrix. At this point the data preprocessing stage is complete.
The device then enters the algorithm learning phase. First the input-to-hidden-layer and hidden-to-output-layer weight matrices are randomly initialized; then the output of each hidden-layer neuron and each output-layer neuron is computed, the output error of each output neuron and hidden neuron is computed (by comparison with the real data), and the weights in the network are updated by back-propagation. The learning phase finishes when the error falls below the set expected error or the number of training iterations exceeds the set maximum; the weight matrices are recorded and the device enters the next stage.
The device enters the iterative prediction phase. First, the last record of the training samples is taken as the known condition, the weight matrices obtained in the previous step are used to compute an output, and the result is taken as known; the known data are then used to predict iteratively, obtaining the prediction results from the last training record up to the last unit of the prediction duration — at this point 1 real record and n-1 predicted records are in the same temporary result set; finally, the data in this temporary result set and the weight matrices are taken as known, and the entire prediction-result matrix is computed.
After the prediction-result matrix is obtained, the device enters the final denormalization stage, i.e. the normalized result data are restored to normal temperature values.
The data the device can handle must satisfy the following conditions:
1) all attributes take numeric values;
2) the historical data have a temporal order.
To further disclose the technical scheme of the present invention, the theoretical basis of the proposed method is introduced below:
Definition 1: the normalization function is: R(i,j) = (r(i,j) - minv(j)) / (maxv(j) - minv(j));
where R(i,j) is the normalized result of row i, column j; r(i,j) is the historical datum at row i, column j; minv(j) is the minimum of column j; and maxv(j) is the maximum of column j.
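As a concrete illustration of Definition 1, here is a minimal Python sketch. The function name is our own, and note that a constant column (maxv(j) = minv(j)) would divide by zero and would need a guard in practice:

```python
def normalize(rows):
    """Min-max normalize each column of a row-major table to [0, 1],
    per Definition 1: R(i,j) = (r(i,j) - minv(j)) / (maxv(j) - minv(j)).
    Returns the normalized rows plus the per-column minima and maxima,
    which are needed later to restore predictions to the original range."""
    cols = list(zip(*rows))
    minv = [min(c) for c in cols]
    maxv = [max(c) for c in cols]
    normed = [[(v - minv[j]) / (maxv[j] - minv[j]) for j, v in enumerate(row)]
              for row in rows]
    return normed, minv, maxv

# Example: 3 days of (maximum temperature, minimum temperature)
data = [[10.0, 2.0], [20.0, 6.0], [30.0, 4.0]]
normed, minv, maxv = normalize(data)
```

Each column is scaled independently, so attributes with very different ranges (e.g. maximum and minimum temperature) end up on the same 0-to-1 scale before training.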
Definition 2: the algorithm computes a neuron's output value according to the neuron's type:
If the current neuron is a hidden neuron, the tanh function is used: O = (e^a - e^(-a)) / (e^a + e^(-a));
If the current neuron is an output neuron, the sigmoid function is used: O = 1 / (1 + e^(-a));
where a is the input value and O is the output value.
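The two activation functions of Definition 2 can be written directly in Python (the function names are ours; the formulas are those given above):

```python
import math

def hidden_output(a):
    """tanh activation used for hidden neurons:
    O = (e^a - e^(-a)) / (e^a + e^(-a))."""
    return (math.exp(a) - math.exp(-a)) / (math.exp(a) + math.exp(-a))

def output_output(a):
    """Sigmoid activation used for output neurons: O = 1 / (1 + e^(-a))."""
    return 1.0 / (1.0 + math.exp(-a))
```

Both squash the weighted-sum input into a bounded range: tanh into (-1, 1) and the sigmoid into (0, 1), which matches the 0-to-1 normalization of the training data.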
Definition 3: the error computation function is:
If the current neuron is an output neuron: Err_i = O_i(1 - O_i)(T_i - O_i),
where O_i is the output of output neuron i and T_i is the actual value of that output neuron.
If the current neuron is a hidden neuron: Err_i = O_i(1 - O_i) Σ_j Err_j w_ij,
where O_i is the output of hidden neuron i, the neuron feeds outputs to neurons j in the next layer, Err_j is the error of neuron j, and w_ij is the weight between the two neurons.
A schematic diagram of the neural network algorithm is shown in Fig. 2.
Definition 4: the weight adjustment function of the algorithm is: w_ij = w_ij + l * Err_j * O_i, where l is the learning rate.
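Definitions 3 and 4 can be sketched as three small Python functions. The function names and the sample numbers below are our own illustrations; the formulas are those of the definitions:

```python
def output_error(o, t):
    """Error of an output neuron (Definition 3):
    Err_i = O_i(1 - O_i)(T_i - O_i)."""
    return o * (1.0 - o) * (t - o)

def hidden_error(o, downstream):
    """Error of a hidden neuron: Err_i = O_i(1 - O_i) * sum_j Err_j * w_ij,
    where `downstream` is a list of (Err_j, w_ij) pairs for the neurons j
    that this neuron feeds."""
    return o * (1.0 - o) * sum(err_j * w_ij for err_j, w_ij in downstream)

def adjust_weight(w_ij, err_j, o_i, l=0.5):
    """Weight update (Definition 4): w_ij = w_ij + l * Err_j * O_i."""
    return w_ij + l * err_j * o_i
```

Note how errors flow backwards: the output-neuron error is computed first from the target value, then propagated through the weights into the hidden-neuron error, and finally each weight moves in proportion to the downstream error and the upstream output.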
According to the above theory, in another more general embodiment, a construction method for a BP neural network prediction device based on an improvement strategy is proposed. The device is divided into six layers: a data receiving layer, a data preprocessing layer, an algorithm initialization layer, an algorithm learning layer, an iterative prediction layer and a data recovery layer.
In a non-limiting embodiment, each of the above layers is realized by a corresponding hardware or software unit, so they are also called the data receiving unit, data preprocessing unit, initialization unit, learning unit, iterative prediction unit and data recovery unit.
The data receiving layer is the layer that receives the algorithm's external parameters; the data preprocessing layer and the algorithm initialization layer prepare the basic data and parameters so that the algorithm runs smoothly; the algorithm learning layer trains on and learns from the raw data, with the purpose of finding the weight matrices; the iterative prediction layer derives the prediction results from the training data; and the data recovery layer restores the normalized prediction results to the original values.
In a more detailed embodiment, the above six layers are described, non-restrictively, as follows.
Data receiving layer: the main function of this layer is to receive the original training data necessary for the prediction device and the prediction-duration parameter. The original training data are input into the device in the primitive form of an N*N matrix (columns are attributes, and there is a temporal relationship between rows); these data need no special processing. The unit of the prediction duration should be essentially consistent with the time unit at which the original training data were collected, e.g. days, hours or minutes.
Data preprocessing layer: the main function of this layer is to automatically normalize the original training data received by the layer above and to construct the input and output matrices of the training samples; the whole process needs no human intervention or configuration, and is judged and handled automatically by the device. The device first obtains the numbers of rows and columns of the original training data, then completes the normalization work with a double loop, and afterwards uses another double loop to construct the input and output matrices of the samples.
Algorithm initialization layer: the main function of this layer is to initialize the parameters of the BP neural network algorithm, such as the learning rate, the minimum expected error, the inertia coefficient, the maximum number of training iterations, and the initial values of each neuron of the hidden and output layers.
Algorithm learning layer: the main function of this layer is to use the algorithm to learn from the training samples, continually adjusting the weighting coefficients of the hidden and output layers during learning; the learning process ends when the error is less than the minimum expected error or when the maximum number of training iterations is reached. The final purpose of this layer is to obtain the weight matrices.
Iterative prediction layer: the main function of this layer is to perform iterative prediction based on the results of the algorithm's learning and the training samples. The characteristic of this layer is that the last training sample is taken as known, and the learned weighting coefficients of each layer are used to compute the first prediction; this prediction is then taken as known to extrapolate the next one, and so on, until all predictions are completed iteratively.
Data recovery layer: the main function of this layer is to restore the normalized prediction results to the numerical range consistent with the original training data, so that the predictions have practical significance.
The implementation of the device is discussed below:
The BP neural network is a multi-layer feed-forward network trained by the error back-propagation algorithm; it can learn and store a large number of input-output mapping relationships without the mathematical equations describing these mappings needing to be known in advance. Its learning rule uses steepest descent, continually adjusting the weights and thresholds of the network by back-propagation so as to minimize the sum of squared errors of the network. The topology of the BP neural network device comprises an input layer, a hidden layer and an output layer.
Each neuron has one or more inputs but only one output. The neural network algorithm combines multiple input values by a weighted sum (each input value is multiplied by the weight associated with it, and the products are summed), and then computes the neuron's output value (activation) according to the neuron's type:
For a hidden neuron, the tanh function is used: O = (e^a - e^(-a)) / (e^a + e^(-a));
For an output neuron, the sigmoid function is used: O = 1 / (1 + e^(-a));
where a is the input value and O is the output value.
Fig. 1 shows the computation process of combination and output inside a neuron: the input values 1, 2 and 3 are first combined by the weighted-sum method, and the output result is then obtained from the tanh or sigmoid function according to the neuron's type. Finally, the output error of each output neuron and hidden neuron is computed (by comparison with the real data), and the weights in the network are updated by back-propagation until the algorithm's termination condition is satisfied.
The error computation function is:
For an output neuron: Err_i = O_i(1 - O_i)(T_i - O_i),
where O_i is the output of output neuron i and T_i is the actual value of that output neuron.
For a hidden neuron: Err_i = O_i(1 - O_i) Σ_j Err_j w_ij,
where O_i is the output of hidden neuron i, the neuron feeds outputs to neurons j in the next layer, Err_j is the error of neuron j, and w_ij is the weight between the two neurons.
The weight adjustment function is: w_ij = w_ij + l * Err_j * O_i,
where l is the learning rate.
In the improved BP neural network algorithm, the implementation steps are as follows:
Step 1: receive the original training data matrix and the prediction-duration parameter. Taking weather forecasting as an example, the raw data can be the maximum and minimum temperatures of several consecutive days, and the prediction duration is 7 days.
Step 2: initialize the data, including setting the learning rate, the expected error, the maximum number of training iterations, the inertia coefficient, and the initial output values of each neuron of the hidden and output layers; dynamically obtain the raw data matrix p0 from the raw data.
Step 3: normalize the data. After obtaining the maximum maxv(j) and minimum minv(j) of each column of training data, scale the raw data to between 0 and 1 using p(i,j) = (p0(i,j) - minv(j)) / (maxv(j) - minv(j)), where p(i,j) is the normalized result of row i, column j; p0(i,j) is the historical datum at row i, column j; minv(j) is the minimum of column j; and maxv(j) is the maximum of column j.
Step 4: obtain the input and output matrices of the training samples from the original training data.
for i = 1:count_sumC
    for j = 1:count_sumL-1
        X(i,j) = p(j,i);
        T(i,j) = p(j+1,i);
    end
end
Here count_sumC is the number of columns of the matrix, count_sumL is the number of rows, X is the input sample, T is the output sample, and p is the training sample after normalization.
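The same one-step-ahead pairing can be sketched in Python. Note that the MATLAB fragment stores the samples transposed (one attribute per row), while this sketch keeps one record per row for readability; the function name is ours:

```python
def build_samples(p):
    """Build training input/output matrices from normalized records p:
    record t is the input and record t+1 the desired output, i.e.
    X = records 1..N-1 and T = records 2..N (one-step-ahead pairing)."""
    X = [row[:] for row in p[:-1]]
    T = [row[:] for row in p[1:]]
    return X, T

# Three normalized (max, min) temperature records
p = [[0.0, 0.1], [0.5, 0.6], [1.0, 0.9]]
X, T = build_samples(p)
```

Because each target row is simply the next input row, the network is trained to map one day's attributes onto the next day's, which is exactly what the later iterative prediction step exploits.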
Step 5: randomly initialize the weight matrices wki and wij, where wki denotes the weight matrix from the input layer to the hidden layer, and wij the weight matrix from the hidden layer to the output layer.
Step 6: compute the output of each hidden-layer neuron and each output-layer neuron. The output formula of the hidden layer is O = (e^a - e^(-a)) / (e^a + e^(-a)); the output formula of the output layer is O = 1 / (1 + e^(-a)); where a is the neuron's input value.
Step 7: compute the output error of each output neuron and hidden neuron (by comparison with the real data), and update the weights in the network by back-propagation.
The error computation function is:
For an output neuron: Err_i = O_i(1 - O_i)(T_i - O_i),
where O_i is the output of output neuron i and T_i is the actual value of that output neuron.
For a hidden neuron: Err_i = O_i(1 - O_i) Σ_j Err_j w_ij,
where O_i is the output of hidden neuron i, the neuron feeds outputs to neurons j in the next layer, Err_j is the error of neuron j, and w_ij is the weight between the two neurons.
The weight adjustment function is: w_ij = w_ij + l * Err_j * O_i,
where l is the learning rate.
Step 8: repeat Step 6 until the termination condition is satisfied; the termination condition of this algorithm is that the error is less than the expected error, or the number of training iterations is greater than the set maximum.
Step 9: predict using the weight matrices obtained after training and the training-time parameters, taking the last real record as the initial input; each prediction result is then treated as the real data of the next day and used to predict again, until the prediction-days parameter is satisfied. The prediction process is the same as in Step 6.
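The feed-back loop of Step 9 can be sketched generically. Here `step` is a stand-in for any one-step-ahead model (in the patent it would be the trained network's forward pass); the decay rule used as an example is purely illustrative:

```python
def iterative_predict(last_record, step, horizon):
    """Iterative (recursive) prediction as in Step 9: the last real record
    seeds the loop, and each forecast is fed back as the next input."""
    preds = []
    current = last_record
    for _ in range(horizon):
        current = step(current)
        preds.append(current)
    return preds

# Stand-in one-step model: decay each normalized attribute toward 0.5.
step = lambda rec: [0.5 + 0.5 * (v - 0.5) for v in rec]
preds = iterative_predict([1.0, 0.0], step, 3)
```

A design consequence worth noting: because each forecast becomes the next input, any model error compounds over the horizon, which is why short prediction durations (e.g. the 7 days of Step 1) are the typical use case.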
Step 10: recover the obtained prediction-result matrix, i.e. restore the normalized values to actual values: Res(i,j) = PredictRes(j,i) * (maxv(j) - minv(j)) + minv(j). Here PredictRes(j,i) is the unrestored prediction result and Res(i,j) is the restored prediction result.
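The Step 10 restoration is the algebraic inverse of the Step 3 normalization. A minimal Python sketch (function name ours; for readability it keeps predictions one record per row rather than transposed as in the formula above):

```python
def denormalize(pred, minv, maxv):
    """Step 10 restoration: Res = PredictRes * (maxv(j) - minv(j)) + minv(j),
    applied per column to bring normalized predictions back to the
    original numerical range (e.g. real temperatures)."""
    return [[v * (maxv[j] - minv[j]) + minv[j] for j, v in enumerate(row)]
            for row in pred]

# Round trip with the per-column extrema from the normalization example:
minv, maxv = [10.0, 2.0], [30.0, 6.0]
restored = denormalize([[0.5, 1.0], [1.0, 0.0]], minv, maxv)
```

Since the same minv(j) and maxv(j) recorded during preprocessing are reused here, normalizing and then denormalizing a value returns it exactly, which is what makes the restored predictions directly comparable with the raw training data.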
Now take temperature prediction as an example: temperature information with N rows and 2 columns is provided, the column attributes being the maximum and minimum temperature respectively.
After receiving this raw data the device begins preprocessing: the N-row, 2-column initial temperature data are normalized to values between 0 and 1 using the formula p(i,j) = (p0(i,j) - minv(j)) / (maxv(j) - minv(j)); the input and output matrices of the training samples are then constructed automatically, taking records 1 to N-1 as the input matrix and records 2 to N as the output matrix. At this point the data preprocessing stage is complete.
The device enters the algorithm learning phase. First the input-to-hidden-layer and hidden-to-output-layer weight matrices are randomly initialized; then the formula O = (e^a - e^(-a)) / (e^a + e^(-a)) is used to compute the output of each hidden-layer neuron, and the formula O = 1 / (1 + e^(-a)) is used to compute the output of each output-layer neuron; afterwards the output error of each output neuron and hidden neuron is computed (by comparison with the real data), and the weights in the network are updated by back-propagation. The learning phase finishes when the error falls below the set expected error or the number of training iterations exceeds the set maximum; the weight matrices are recorded and the device enters the next stage.
The device enters the iterative prediction phase. First, the last record of the training samples is taken as the known condition, and the weight matrices obtained in the previous step are used with the formulas O = (e^a - e^(-a)) / (e^a + e^(-a)) and O = 1 / (1 + e^(-a)) to compute an output, the result being taken as known; the known data are then used to predict iteratively, obtaining the prediction results from the last training record up to the last unit of the prediction duration — at this point 1 real record and n-1 predicted records are in the same temporary result set; finally, the data in this temporary result set and the weight matrices are taken as known, and the entire prediction-result matrix is computed.
After the prediction-result matrix is obtained, the device enters the final denormalization stage: the normalized result data are restored to normal temperature values using the formula Res(i,j) = PredictRes(j,i) * (maxv(j) - minv(j)) + minv(j).
The flow chart of the improved BP neural network algorithm is shown in Fig. 2.
The technical scheme of the present invention has the following advantages:
The BP neural network algorithm itself is applicable to many complicated situations and is highly flexible.
The improved algorithm used by the device can automatically discriminate the pattern of the original training data, and perform sample construction and normalization on it.
The prediction method uses iterative prediction and does not need auxiliary data to complete a prediction.
The prediction results can be restored to the numerical range corresponding to the original training data.
The BP neural network algorithm is applicable to various complicated situations, but it is strongly affected by the data format and its results are relatively difficult to interpret; this construction method for a BP neural network prediction device based on an improvement strategy can to a great extent reduce human control over the data, and allows the device to be used in many environments.
The restored prediction results can be rendered in multiple result formats with different tools and techniques, such as line charts, pie charts and histograms, or even relationship diagrams for special domains, such as traffic flow maps and temperature-change charts.

Claims (3)

1. A weather prediction device based on a BP neural network, the device comprising a data receiving unit, a data preprocessing unit, an initialization unit, a learning unit, an iterative prediction unit and a data recovery unit, characterized in that:
the above units are arranged serially in sequence,
wherein the data receiving unit, the data preprocessing unit and the initialization unit are used to prepare the basic data and parameters so that the device runs smoothly;
the learning unit is used to train on the raw data and learn from it to find the weight matrix;
the iterative prediction unit is used to derive the prediction results from the weight matrix;
the data recovery unit is used to restore the normalized prediction results to their original values;
the data receiving unit is specifically used to receive the original training data necessary for this prediction device and a prediction-duration parameter, the original training data being an N*N matrix, specifically historical temperature data over a period of time;
the data preprocessing unit is specifically used to automatically normalize the original training data received by the data receiving unit and to build the input and output matrices of the training samples;
the initialization unit is specifically used to initialize the parameters of the BP neural network, the parameters including the learning rate, the minimum expected error, the inertia coefficient, the maximum number of training iterations, and the initial value of each neuron of the hidden and output layers;
the learning unit is specifically used to learn from the training samples, continuously adjusting the weight coefficients of the hidden and output layers through the learning process; the learning process ends when the error is less than the minimum expected error or the maximum number of training iterations is reached, its objective being to obtain the weight matrix;
the iterative prediction unit is specifically used to perform iterative prediction: taking the last training sample of the learning unit as known, it computes a first prediction with the learned hidden-layer weight coefficients, then takes this prediction as known to derive the next prediction, and so on, iterating until all predictions are completed and the prediction results are obtained;
the data recovery unit is specifically used to restore the prediction results from their normalized form to the numerical range consistent with the original training data.
2. The weather prediction device based on a BP neural network according to claim 1, characterized in that:
the error computation function is:
for an output neuron, the error Err_i = O_i * (1 - O_i) * (T_i - O_i),
where O_i is the output of output neuron i and T_i is the actual value of that output neuron;
for a hidden neuron, the error Err_i = O_i * (1 - O_i) * sum_j Err_j * w_ij,
where O_i is the output of hidden neuron i, the neuron has outputs to neurons j of the next layer, Err_j is the error of neuron j, and w_ij is the weight between these two neurons;
the weight adjustment function is: w_ij = w_ij + l * Err_j * O_i, where l is the learning rate.
3. Weather prediction equipment, characterized in that it comprises the weather prediction device based on a BP neural network according to claim 1 or 2.
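The error-computation and weight-adjustment functions of claim 2 can be sketched directly from the formulas. This is an illustrative transcription under assumed names (`err_next`, `w_row`, `l`), not code from the patent:

```python
import numpy as np

def output_error(O_i, T_i):
    """Output neuron: Err_i = O_i * (1 - O_i) * (T_i - O_i)."""
    return O_i * (1.0 - O_i) * (T_i - O_i)

def hidden_error(O_i, err_next, w_row):
    """Hidden neuron: Err_i = O_i * (1 - O_i) * sum_j Err_j * w_ij.

    err_next holds Err_j for the next layer's neurons j; w_row holds
    the weights w_ij from this neuron i to each of those neurons.
    """
    return O_i * (1.0 - O_i) * np.dot(err_next, w_row)

def adjust_weight(w_ij, err_j, O_i, l):
    """Weight update: w_ij = w_ij + l * Err_j * O_i, with learning rate l."""
    return w_ij + l * err_j * O_i
```

The factor O_i * (1 - O_i) in both error formulas is the derivative of the sigmoid activation, which is why claim 2's error terms pair with the sigmoid output formula used in the iterative prediction stage.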
CN 201210039243 2012-02-21 2012-02-21 Prediction device and equipment based on BP (Back Propagation) nerve network Expired - Fee Related CN102622418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210039243 CN102622418B (en) 2012-02-21 2012-02-21 Prediction device and equipment based on BP (Back Propagation) nerve network


Publications (2)

Publication Number Publication Date
CN102622418A CN102622418A (en) 2012-08-01
CN102622418B true CN102622418B (en) 2013-08-07

Family

ID=46562337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210039243 Expired - Fee Related CN102622418B (en) 2012-02-21 2012-02-21 Prediction device and equipment based on BP (Back Propagation) nerve network

Country Status (1)

Country Link
CN (1) CN102622418B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103967963B (en) * 2014-05-21 2016-08-17 合肥工业大学 The measuring method of DCT wet clutch temperature based on neural network prediction
CN105376506A (en) * 2014-08-27 2016-03-02 江南大学 Design of image pattern noise relevance predictor
CN105788249B (en) * 2014-12-16 2018-09-28 高德软件有限公司 A kind of traffic flow forecasting method, prediction model generation method and device
CN105989407A (en) * 2015-02-12 2016-10-05 中国人民解放军信息工程大学 Neural network based short wave median field intensity prediction system, method and device
CN105354277B (en) * 2015-10-30 2020-11-06 中国船舶重工集团公司第七0九研究所 Recommendation method and system based on recurrent neural network
CN105847080A (en) * 2016-03-18 2016-08-10 上海珍岛信息技术有限公司 Method and system for predicting network traffic
CN106355879A (en) * 2016-09-30 2017-01-25 西安翔迅科技有限责任公司 Time-space correlation-based urban traffic flow prediction method
CN106777243A (en) * 2016-12-27 2017-05-31 浪潮软件集团有限公司 Dynamic modeling of streaming data analysis
CN107506852A (en) * 2017-08-01 2017-12-22 佛山科学技术学院 A kind of tax arrear Forecasting Methodology and prediction meanss based on data mining
CN109728928B (en) * 2017-10-30 2021-05-07 腾讯科技(深圳)有限公司 Event recognition method, terminal, model generation method, server and storage medium
CN110200641A (en) * 2019-06-04 2019-09-06 清华大学 A kind of method and device based on touch screen measurement cognitive load and psychological pressure
CN110415521B (en) * 2019-07-31 2021-03-05 京东城市(北京)数字科技有限公司 Traffic data prediction method, apparatus and computer-readable storage medium
CN110470481B (en) * 2019-08-13 2020-11-24 南京信息工程大学 Engine fault diagnosis method based on BP neural network
CN110798227B (en) * 2019-09-19 2023-07-25 平安科技(深圳)有限公司 Model prediction optimization method, device, equipment and readable storage medium
CN111638034B (en) * 2020-06-09 2021-07-09 重庆大学 Strain balance temperature gradient error compensation method and system based on deep learning
CN114130525A (en) * 2021-11-29 2022-03-04 湖南柿竹园有色金属有限责任公司 Control method, device, equipment and medium for mineral processing equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101796928B (en) * 2009-07-14 2012-07-25 大连水产学院 Method for predicting effect of water quality parameters of aquaculture water on growth conditions of aquaculture living beings
CN101706335B (en) * 2009-11-11 2012-01-11 华南理工大学 Wind power forecasting method based on genetic algorithm optimization BP neural network
CN102110428B (en) * 2009-12-23 2015-05-27 新奥特(北京)视频技术有限公司 Method and device for converting color space from CMYK to RGB



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130807
