CN107180261A - Greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network - Google Patents

Greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network Download PDF

Info

Publication number
CN107180261A
CN107180261A (application CN201710426571.5A)
Authority
CN
China
Prior art keywords
neural network
greenhouse
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710426571.5A
Other languages
Chinese (zh)
Other versions
CN107180261B (en)
Inventor
任守纲
刘鑫
顾兴健
徐焕良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lanchang Automation Technology Co ltd
Original Assignee
Nanjing Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Agricultural University filed Critical Nanjing Agricultural University
Priority to CN201710426571.5A priority Critical patent/CN107180261B/en
Publication of CN107180261A publication Critical patent/CN107180261A/en
Application granted granted Critical
Publication of CN107180261B publication Critical patent/CN107180261B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/12 Computing arrangements based on biological models using genetic models
    • G06N 3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Abstract

The present invention proposes a greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network. A BP neural network is built for each prediction time step, and together these networks form a rolling BP neural network group. The method operates in two stages. First, unsupervised learning with an autoencoder provides good initial network parameters, which are then optimized with an improved local particle swarm optimization method to establish the initial BP neural network. Second, on the basis of the initial BP neural network, rolling training and prediction are carried out by feeding the output of each network as part of the input of the next network. The invention can accurately predict long-term environmental change trends inside greenhouses across different seasons and regions, and effectively improves the prediction accuracy of the greenhouse microclimate.

Description

Greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network
Technical field
The invention belongs to the field of environmental forecasting for facility agriculture, and in particular relates to a greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network.
Background technology
Efficient greenhouse production depends on a suitable greenhouse microclimate, so establishing a high-precision medium- and long-term prediction model of the greenhouse microclimate is of great significance for optimal greenhouse regulation. The threshold control methods commonly used in current greenhouses are simple and easy to apply, but their energy consumption is high and the system stability is poor. Automatic control methods based on proportional-integral-derivative (PID) controllers and model predictive control (MPC) offer high reliability and relatively low energy consumption, but they require the environmental parameters of several future periods to be predicted in advance. Greenhouse microclimate simulation models fall broadly into two classes. The first class consists of mechanism models, whose parameters are difficult to determine and which are therefore poorly suited to greenhouse production. The second class consists of empirical models, also known as system identification, whose parameters can be tuned online to meet control requirements. The most commonly used empirical model is the artificial neural network; because the BP neural network is simple and has strong fault tolerance, it is the most widely applied model in greenhouse microclimate prediction.
Domestic and foreign scholars have established microclimate simulation models based on BP neural networks for different greenhouses and achieved good results. Research shows that artificial neural networks are practical for greenhouse microclimate prediction, but most of these models can only perform single-step prediction, i.e. short-term prediction; they cannot realize medium- and long-term prediction and therefore cannot meet the requirements of optimal regulation. In addition, although modeling with a BP neural network has certain advantages, it also has shortcomings, such as being prone to local minima, depending excessively on the choice of initial weights, and having poor generalization ability, so there is still considerable room to improve the prediction accuracy of BP neural networks. Many previous researchers have not proposed improvements for these defects of the BP neural network and have only reported their best results, which are to some extent unconvincing. How to improve the prediction accuracy of the BP neural network and realize medium- and long-term prediction of the greenhouse microclimate is therefore worth further research.
Summary of the invention
The technical problem solved by the invention is to provide a greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network: a rolling BP neural network group is built according to the prediction horizon, and the output of each network is used as part of the input of the next network for rolling training and prediction, which effectively improves the prediction accuracy of the greenhouse microclimate.
The technical solution that realizes the object of the invention is as follows:
A greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network comprises the following steps:
Step 1: Establish the initial BP neural network f_1. Let the current time be t; input the greenhouse indoor temperature and humidity at time t, output the predicted greenhouse indoor temperature and humidity at time t+1, and obtain the network parameters of f_1;
Step 2: Establish the rolling BP neural network group, comprising n-1 neural networks f_n. Each neural network f_n has a training set train_X_n and a test set test_X_n, and the training set and the test set of two adjacent neural networks are separated by one time step, where train_X_n denotes the training set at time t+n-1, test_X_n denotes the test set at time t+n-1, and n ≥ 2;
Step 3: Train the f_n model using train_X_n and the network parameters combined with the gradient descent method; after training, input train_X_n into the f_n model and output the simulation result train_Y_n, and input test_X_n into the f_n model and output the prediction result test_Y_n;
Step 4: Let n = n+1 and go to step 3.
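As a purely illustrative sketch of how steps 2-4 chain the per-step datasets together (it is not the patented implementation, which additionally initializes f_1 with autoencoder pre-training and IPSO optimization as described below), the following Python code uses scikit-learn's MLPRegressor as a stand-in for each gradient-descent-trained BP network; the array layout, the `split` argument and the hidden-layer size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor   # stand-in for a BP network

def train_rolling_group(P, T_in, H_in, n_horizon, split):
    """Sketch of steps 2-4 with an assumed data layout.

    P          : (T, p) array of external parameters, one row per 15-min step
    T_in, H_in : (T,) arrays of measured indoor temperature / humidity
    n_horizon  : number of networks f_1 .. f_n in the rolling group
    split      : number of rows used for training; the remaining rows form the test set
    """
    models = []
    rows = len(P) - n_horizon
    sim_T, sim_H = T_in[:rows].copy(), H_in[:rows].copy()    # measured values feed f_1
    for n in range(1, n_horizon + 1):
        lo, hi = n - 1, n - 1 + rows                          # samples at time t+n-1
        X = np.column_stack([P[lo:hi], sim_T, sim_H])         # train_X_n / test_X_n
        Y = np.column_stack([T_in[lo + 1:hi + 1], H_in[lo + 1:hi + 1]])
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
        net.fit(X[:split], Y[:split])                         # gradient-based training
        sim = net.predict(X)                                  # train_Y_n and test_Y_n
        sim_T, sim_H = sim[:, 0], sim[:, 1]                   # indoor inputs for f_{n+1}
        models.append(net)
    return models
```

In this layout the first `split` rows of each X correspond to train_X_n and the remaining rows to test_X_n, so the simulated indoor values that feed f_{n+1} come from the same source in both the training and the test samples.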
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, step 1 specifically comprises:
Step 1-1: Pre-train on the greenhouse indoor temperature and humidity at time t with an unsupervised learning model, extract the features of the input data, and output the reconstruction;
Step 1-2: Use the features of the data as the initialization parameters of the BP neural network, perform supervised learning against target values, and optimize the weights and thresholds of the BP neural network using the improved local particle swarm optimization method combined with the genetic algorithm;
Step 1-3: Establish the initial BP neural network f_1 with the optimal weights and thresholds, and output the predicted greenhouse indoor temperature and humidity at time t+1.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, the method for reconstructing the input data in step 1-1 is specifically: the weights and thresholds {W^(1), b^(1)} between the input layer and the hidden layer are used as the encoder, with the sigmoid function as the encoding function; the weights and thresholds {W^(2), b^(2)} between the hidden layer and the output layer are used as the decoder, with the tanh function as the decoding function.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, the specific steps of step 1-2 are:
Step 1-2-1: Divide the particle swarm into two subswarms computed in parallel in an spmd parallel structure; initialize the velocities and positions of the swarm, the learning factors c_1 and c_2, and the inertia weight;
Step 1-2-2: Reset the counters of the particles whose fitness is below average and assign the global optimum to the subswarm global optimum, i.e. badNum[N] = 0 and P_Lg = P_g, where badNum records the number of times a particle's fitness has been below average, N is the number of particles, P_Lg is the subswarm global optimum, and P_g is the global optimum;
Step 1-2-3: Update the velocity and position of each particle:
$v_i(t+1) = \omega v_i(t) + c_1 r_1 (p_{avg} - x_i(t)) + c_2 r_2 (p_{Lg} - x_i(t)), \qquad x_i(t+1) = x_i(t) + v_i(t+1)$
where i = 1, 2, ..., N, t is the current iteration number, ω is the inertia weight, c_1 and c_2 are acceleration factors, r_1 and r_2 are random numbers in the interval [0, 1], v_i(t) is the previous velocity of the particle, v_i(t+1) is the updated particle velocity, p_avg is the centre of the individual extrema, p_Lg is the global optimum position of the corresponding subswarm, x_i(t) is the previous particle position, and x_i(t+1) is the updated particle position;
Step 1-2-4: Introduce the crossover operator: if the generated random number is less than the crossover probability P_C, the two subswarms perform the crossover operation x_{ik} = p_{Lg1,k}, x_{jl} = p_{Lg2,l}, where x_{ik} is the k-th element of the position of the i-th particle in the first subswarm, p_{Lg1,k} is the k-th element of the global optimum position of the first subswarm, x_{jl} is the l-th element of the position of the j-th particle in the second subswarm, p_{Lg2,l} is the l-th element of the global optimum position of the second subswarm, i, j = 1, 2, ..., N/2 with i ≠ j, k ∈ [(IN+1)·HN+1, D], l ∈ [1, (IN+1)·HN], IN is the number of input-layer neurons of the neural network, HN is the number of hidden-layer neurons, and D is the dimension of a particle; then calculate the fitness J(i) of each particle; if the random number is greater than the crossover probability P_C, no operation is performed;
Step 1-2-5: Update the local optimum P_i: if the updated particle position is better than the original particle position, take the new particle position as P_i of that particle and as the global optimum P_Lg of the current iteration, update the centre of the individual extrema P_avg, and calculate the average fitness fit_avg of each subswarm; if the updated particle position is not better than the original particle position, no operation is performed;
Step 1-2-6: Introduce the mutation operator: if J(i) < fit_avg, increment badNum(i); if badNum(i) ≥ badNumLimit, randomly re-initialize the position and velocity of the particle: x_{id} = a + (b-a)·rand, v_{id} = m + (n-m)·rand, where d = 1, 2, ..., D, a and b are the minimum and maximum positions allowed for a particle, m and n are the minimum and maximum velocities allowed for a particle, and rand is a uniform random number in [0, 1);
Step 1-2-7: Judge whether the preset number of inner iterations has been reached; if so, compare the subswarm optima of the two subswarms to obtain the global optimum; if not, go to step 1-2-3;
Step 1-2-8: Judge whether the maximum number of iterations has been reached or gbest(n) - gbest(n-4) <= 0.0001 is satisfied; if so, stop the iteration; if not, go back to step 1-2-2.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, the prediction formula of step 1 is:
$[(\hat{T}_{in})_{t+1}, (\hat{H}_{in})_{t+1}] = f_1[(P)_t, (T_{in})_t, (H_{in})_t]$
where (P)_t is the environmental parameter at time t, (T_in)_t and (H_in)_t are the measured greenhouse temperature and humidity at time t, and (T̂_in)_{t+1} and (Ĥ_in)_{t+1} are the predicted greenhouse temperature and humidity at time t+1.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, one time step in step 2 is 15 min.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, the training set train_X_n includes the greenhouse external environmental influence factors (P)_{t+n-1} at time t+n-1 and the training-set simulation result train_Y_{n-1} of neural network f_{n-1}, where train_Y_{n-1} includes the predicted greenhouse indoor temperature and humidity at time t+n-1.
Further, in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the invention, the prediction formula of the rolling BP neural network is:
$[(\hat{T}_{in})_{t+n}, (\hat{H}_{in})_{t+n}] = f_n[(P)_{t+n-1}, (\hat{T}_{in})_{t+n-1}, (\hat{H}_{in})_{t+n-1}]$
where (P)_{t+n-1} is the environmental parameter at time t+n-1, (T̂_in)_{t+n-1} and (Ĥ_in)_{t+n-1} are the predicted greenhouse temperature and humidity at time t+n-1, and (T̂_in)_{t+n} and (Ĥ_in)_{t+n} are the predicted greenhouse temperature and humidity at time t+n.
Compared with the prior art, the adoption of the above technical scheme gives the present invention the following technical effects:
1. The method of the invention can continuously predict the greenhouse microclimate for the next 6-12 hours. Experimental results show that, compared with a traditional BP single-step-ahead online rolling prediction model, the rolling BP model reduces the temperature and humidity prediction errors for the next 6 hours by more than 50%, greatly reducing the cumulative error of medium- and long-term rolling prediction. The method can accurately predict long-term environmental change trends inside greenhouses across different seasons and regions, and provides a basis for formulating reasonable microclimate regulation schemes.
2. The first stage of the method uses an improved BP neural network. Experimental results show that, compared with an initial network model built from a plain BP network, the rolling BP model proposed by the invention reduces the temperature and humidity prediction errors for the next 6 hours by 9.3%-45%, which indicates that the improved BP neural network is highly effective and genuinely improves the overall prediction accuracy of the rolling BP model.
3. The method applies an unsupervised learning model to greenhouse microclimate medium- and long-term prediction for the first time. Experimental results show that the prediction error is reduced by about 10% and the computational efficiency is improved by more than 20%.
4. The method optimizes the BP neural network with the improved local particle swarm optimization method. Compared with the standard local particle swarm optimization method, the temperature and humidity prediction errors are reduced by 10%-30%.
Brief description of the drawings
Fig. 1 is the model structure diagram of the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the present invention;
Fig. 2 is the flow chart of the improved local particle swarm optimization method of the present invention;
Fig. 3 is the learning and prediction flow chart of the n-th BP neural network in the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network of the present invention.
Embodiment
Embodiments of the present invention are described in detail below, and examples of these embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and shall not be construed as limiting the claims.
The structure of the greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network is shown in Fig. 1. The model is divided into two stages, namely establishing the initial BP neural network and establishing the rolling BP neural network group. The initial neural network of the first stage involves two steps, i.e. AE unsupervised learning and BP neural network supervised learning. The second stage builds the rolling BP neural network group: the simulation output of the f_{n-1} (n ≥ 2) model is used as part of the input of the f_n model, and after f_{n-1} has been trained, its network parameters are used as the initial network parameters of f_n. Because the prediction interval between two consecutive models is short (15 minutes), the difference between their network parameters is small, and the BP neural network has strong backward fine-tuning capability; therefore f_2 to f_n all use BP neural networks, which further reduces the error of the prediction model.
The functional model of the greenhouse microclimate is established according to the indoor and outdoor influence factors. Let the current time be t, and let the data at times t+1 to t+n be predicted continuously according to the following formulas:
$[(\hat{T}_{in})_{t+1}, (\hat{H}_{in})_{t+1}] = f_1[(P)_t, (T_{in})_t, (H_{in})_t]$
$[(\hat{T}_{in})_{t+n}, (\hat{H}_{in})_{t+n}] = f_n[(P)_{t+n-1}, (\hat{T}_{in})_{t+n-1}, (\hat{H}_{in})_{t+n-1}]$
Here f_1 denotes the initial BP neural network, which is trained with the measured data at time t and outputs the greenhouse indoor temperature and humidity at time t+1; f_n denotes the n-th (n ≥ 2) model in the rolling BP neural network, which is trained with the external environmental parameters at time t+n-1 and the indoor temperature and humidity simulated by the f_{n-1} model, and outputs the greenhouse indoor temperature and humidity at time t+n.
After the n networks have been trained, the weights and thresholds of each BP neural network are saved and medium- and long-term rolling prediction is carried out: in the prediction process, the prediction result of f_{n-1} is used as part of the input of f_n for continuous rolling prediction.
Here P denotes the external environmental parameters and the state of the equipment inside the greenhouse, comprising any number of the parameters [Tout, Hout, Ws, Sr, Fs, Vs]; (P)_t is the environmental parameter at time t, (T_in)_t and (H_in)_t are the measured greenhouse temperature and humidity at time t, and (T̂_in)_{t+1} and (Ĥ_in)_{t+1} are the predicted greenhouse temperature and humidity at time t+1; (P)_{t+n-1} is the environmental parameter at time t+n-1, (T̂_in)_{t+n-1} and (Ĥ_in)_{t+n-1} are the predicted greenhouse temperature and humidity at time t+n-1, and (T̂_in)_{t+n} and (Ĥ_in)_{t+n} are the predicted greenhouse temperature and humidity at time t+n.
The embodiment below describes in detail the process of greenhouse microclimate medium- and long-term prediction with the rolling BP neural network.
1. Building the initial BP neural network
The prediction result and the network parameters of the initial BP neural network serve as part of the input and as the initial network parameters of the second model, and therefore influence the overall prediction results of the rolling BP model. To improve prediction accuracy, the initial BP neural network first performs unsupervised learning with the unsupervised learning model AE to extract the features of the data; the feature representation learned by the AE is then used as the initialization parameters of the BP neural network, supervised learning against target values is carried out, and the network weights and thresholds are optimized with the improved local particle swarm optimization method. Because the particle swarm optimization algorithm is prone to premature convergence and poor stability, the present invention proposes an improved local particle swarm optimization method (IPSO). Finally, the test set is fed into the trained model to verify the generalization ability of the network.
(1) AE-based optimization of the initial BP neural network parameters
A three-layer autoencoder network is first established, in which the output vector has the same elements as the input vector (the output reconstructs the input). The weights and thresholds {W^(1), b^(1)} between the input layer and the hidden layer form the encoder, with the sigmoid function as the encoding function; the weights and thresholds {W^(2), b^(2)} between the hidden layer and the output layer form the decoder, with the tanh function as the decoding function. Then:
$H_i = \mathrm{sigmoid}(W^{(1)} X_i + b^{(1)})$
$Y_i = \tanh(W^{(2)} H_i + b^{(2)})$
The AE is an unsupervised learning model, i.e. the training data carry no labels and the output is a reconstruction of the input; the weight parameters of the AE are obtained by minimizing the reconstruction error, which yields the feature representation of the input data. The reconstruction error function J(θ) is
$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|Y_i - X_i\right\|^{2} + \frac{\lambda}{2}\sum_{l=1}^{n-1}\left\|W^{(l)}\right\|^{2}$
where m is the number of training samples, n is the number of network layers, and θ denotes the parameters of the neural network, including the weights and bias terms. The first term is the mean squared error between the model output and the target value, and the second term is the L2 regularization term, which reduces the amplitude of the weight changes and avoids overfitting.
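For illustration only, the following numpy sketch pre-trains such a three-layer autoencoder with a sigmoid encoder, a tanh decoder and the mean-squared reconstruction error plus L2 penalty described above; the learning rate, penalty weight, hidden size and epoch count are assumed values, not parameters fixed by the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, hidden, lr=0.01, lam=1e-4, epochs=500, seed=0):
    """Minimal AE pre-training: sigmoid encoder, tanh decoder,
    cost = mean squared reconstruction error + (lam/2)*||W||^2."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # encoder
        Y = np.tanh(H @ W2 + b2)            # decoder / reconstruction
        E = Y - X                           # reconstruction error
        # backpropagation of J = (1/m)*0.5*||Y-X||^2 + (lam/2)*(||W1||^2+||W2||^2)
        dZ2 = (E / m) * (1.0 - Y ** 2)      # derivative through tanh
        gW2 = H.T @ dZ2 + lam * W2
        gb2 = dZ2.sum(axis=0)
        dZ1 = (dZ2 @ W2.T) * H * (1.0 - H)  # derivative through sigmoid
        gW1 = X.T @ dZ1 + lam * W1
        gb1 = dZ1.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1), (W2, b2)               # initial parameters for the BP network
```

The returned {W^(1), b^(1)} and {W^(2), b^(2)} would then serve as the initial weights and thresholds of the BP network before the IPSO refinement of the next subsection.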
(2) BP neural network parameter optimization based on the IPSO algorithm
A traditional BP neural network is trained with the gradient descent method, which has strong local search ability but easily falls into local optima. Because the network has many parameters, the dimension of the particles is high, and for a given swarm size the performance of the PSO algorithm degrades as the dimension of the optimization problem increases. To improve accuracy without increasing the complexity of the algorithm, the present invention proposes the IPSO algorithm, which includes:
(A) A local particle swarm optimization method is adopted. The swarm is divided into multiple subswarms by a parallel algorithm, and the velocity of each particle is updated based on the individual optima and the subswarm global optimum, which strengthens the global search ability while improving the efficiency of the algorithm. Because the neural network structure is relatively complex and the particle dimension is high, the method of the present invention bases the particle velocity update on the centre of the individual extrema and the global extremum. The centre of the individual extrema is p_avg = [p_avg1, p_avg2, ..., p_avgD], where the d-th component p_avgd is the average of the d-th components of the personal best positions of the N particles. The improved particle velocity update formula is
$v_i(t+1) = \omega v_i(t) + c_1 r_1 (p_{avg} - x_i(t)) + c_2 r_2 (p_{Lg} - x_i(t))$
where p_Lg is the global optimum position of the corresponding subswarm.
(B) The crossover operator of the genetic algorithm is introduced. Crossover operations are performed on particle positions to increase swarm diversity and avoid premature convergence of the algorithm. During crossover the network parameters are divided into two parts: the first part comprises the parameters {W^(1), b^(1)} from the input layer to the hidden layer, and the second part comprises the parameters {W^(2), b^(2)} from the hidden layer to the output layer. Let the crossover probability be P_c. An individual x_i = [x_i1, x_i2, ..., x_iD] of the first subswarm exchanges its second-part parameters with the global optimum position p_Lg1 of the first subswarm with probability P_c; an individual x_j = [x_j1, x_j2, ..., x_jD] of the second subswarm exchanges its first-part parameters with the global optimum position p_Lg2 of the second subswarm with probability P_c. The crossover formulas are
$x_{ik} = p_{Lg1,k}, \qquad x_{jl} = p_{Lg2,l}$
where i, j = 1, 2, ..., N/2 and i ≠ j, k ∈ [(IN+1)·HN+1, D], l ∈ [1, (IN+1)·HN], IN is the number of input-layer neurons of the neural network, HN is the number of hidden-layer neurons, and D is the dimension of a particle, i.e. the total number of weight and threshold parameters of the neural network; if the number of output-layer neurons is ON, then D = IN·HN + HN·ON + HN + ON.
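To make these index ranges concrete, the short sketch below computes the particle dimension D and the two parameter segments exchanged by the crossover operator; the layer sizes are assumed for illustration, and the indices are written 0-based, whereas the patent text counts from 1.

```python
# Illustrative layer sizes (assumptions, not fixed by the patent text)
IN, HN, ON = 8, 10, 2                       # input, hidden and output neurons

D = IN * HN + HN * ON + HN + ON             # weights + thresholds = 112
cut = (IN + 1) * HN                         # boundary between the two segments
first_part = range(0, cut)                  # {W(1), b(1)}: patent indices 1 .. (IN+1)*HN
second_part = range(cut, D)                 # {W(2), b(2)}: patent indices (IN+1)*HN+1 .. D

print(D, len(first_part), len(second_part)) # 112 90 22
```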
(C) The mutation operator is introduced. If the fitness of a particle remains below the average fitness of the swarm over several iterations of the evolution, the evolution direction of that particle deviates greatly from the optimal solution and no longer suits the current search environment. The mutation operator of the genetic algorithm is therefore applied to that particle, so that a particle trapped at a local value can jump out and continue searching for the optimal solution, while the other particles continue to evolve unchanged until convergence. The mutation is performed as in the following formulas, i.e. the position and velocity of the particle are re-drawn in the same way as during initialization:
$x_{id} = a + (b-a)\cdot \mathrm{rand}, \qquad v_{id} = m + (n-m)\cdot \mathrm{rand}$
where d = 1, 2, ..., D; a and b are the minimum and maximum positions allowed for a particle, i.e. the range of the neural network parameters; m and n are the minimum and maximum velocities allowed for a particle, which determine the amplitude of the position change; and rand is a uniform random number in [0, 1).
The flow chart of the IPSO algorithm is shown in Fig. 2. The specific algorithm flow is as follows:
Step 1: Divide the swarm into two subswarms computed in parallel in an spmd parallel structure; initialize the velocities and positions of the swarm, and initialize the learning factors c_1 and c_2, the inertia weight ω and the other parameters;
Step 2: Reset to 0 the counter of each particle whose fitness is below average, i.e. badNum[N] = 0, and assign the global optimum to the subswarm global optimum, P_Lg = P_g;
Step 3: Update the positions and velocities of the particles;
Step 4: Introduce the crossover operator: if the generated random number is less than the crossover probability P_c, the two subswarms each perform the crossover operation;
Step 5: Calculate the fitness J(i) of each particle;
Step 6: Update the local optimum P_i: if the updated particle position is better than the original particle position, take the new particle position as P_i of that particle;
Step 7: (1) Update the subswarm global optimum: if the updated particle position is better than the original subswarm global optimum position, take the position of that particle as the global optimum P_Lg of the current iteration; (2) update the centre of the individual extrema P_avg;
Step 8: Calculate the average fitness fit_avg of each subswarm;
Step 9: Introduce the mutation operator: if J(i) < fit_avg, increment badNum(i); if badNum(i) >= badNumLimit, randomly re-initialize the position and velocity of the particle;
Step 10: Once the number of inner iterations is reached, the two subswarms compare their subswarm optima to obtain the global optimum; otherwise jump back to step 3;
Step 11: Stop the iteration when the maximum number of iterations is reached or gbest(n) - gbest(n-4) <= 0.0001 is satisfied (i.e. the fitness value has not changed for 5 consecutive iterations); otherwise loop back to step 2 until the termination condition is met.
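The sketch below is one possible reading of this flow (illustrative only, not the patented code): the fitness function, swarm size, parameter ranges and the sequential handling of the two subswarms — instead of an spmd parallel structure — are all assumptions, and the mutation test is written for a fitness that is an error to be minimised, so the inequality of step 9 is flipped.

```python
import numpy as np

def ipso(fitness, D, cut, n_particles=40, inner_iters=10, outer_iters=50,
         w=0.7, c1=1.5, c2=1.5, pc=0.7, bad_limit=5,
         pos_range=(-1.0, 1.0), vel_range=(-0.2, 0.2), seed=0):
    """Illustrative IPSO loop following the flow of Fig. 2.

    fitness(x) -> error to minimise for a parameter vector x of length D.
    cut = (IN+1)*HN separates the {W1,b1} and {W2,b2} segments of a particle.
    """
    rng = np.random.default_rng(seed)
    a, b = pos_range
    vmin, vmax = vel_range
    half = n_particles // 2
    X = rng.uniform(a, b, (n_particles, D))
    V = rng.uniform(vmin, vmax, (n_particles, D))
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    bad = np.zeros(n_particles, dtype=int)
    gbest_hist = []

    for _ in range(outer_iters):
        bad[:] = 0                                          # step 2: reset counters
        for s, sub in enumerate((np.arange(0, half), np.arange(half, n_particles))):
            lg = sub[np.argmin(pbest_fit[sub])]             # subswarm global best P_Lg
            for _ in range(inner_iters):                    # steps 3-9
                p_avg = pbest[sub].mean(axis=0)             # centre of individual extrema
                fit_avg = pbest_fit[sub].mean()             # step 8
                for i in sub:
                    r1, r2 = rng.random(D), rng.random(D)   # step 3: velocity / position
                    V[i] = (w * V[i] + c1 * r1 * (p_avg - X[i])
                            + c2 * r2 * (pbest[lg] - X[i]))
                    X[i] = np.clip(X[i] + V[i], a, b)
                    if rng.random() < pc:                   # step 4: crossover with P_Lg
                        seg = slice(cut, D) if s == 0 else slice(0, cut)
                        X[i, seg] = pbest[lg, seg]
                    f = fitness(X[i])                       # step 5
                    if f < pbest_fit[i]:                    # step 6: personal best P_i
                        pbest[i], pbest_fit[i] = X[i].copy(), f
                        if f < pbest_fit[lg]:               # step 7: subswarm best P_Lg
                            lg = i
                    # step 9: mutation (patent writes J(i) < fit_avg for a maximised
                    # fitness; flipped here because fitness is an error)
                    if f > fit_avg:
                        bad[i] += 1
                        if bad[i] >= bad_limit:
                            X[i] = rng.uniform(a, b, D)
                            V[i] = rng.uniform(vmin, vmax, D)
                            bad[i] = 0
        g = int(np.argmin(pbest_fit))                       # step 10: merge subswarm optima
        gbest_hist.append(pbest_fit[g])
        # step 11: stop if the best fitness has not changed over 5 outer iterations
        if len(gbest_hist) >= 5 and gbest_hist[-5] - gbest_hist[-1] <= 1e-4:
            break
    return pbest[g], pbest_fit[g]
```

In step 1-2 of the method, `fitness` would typically decode a particle into the BP network's weights and thresholds and return the training error, and `cut` = (IN+1)*HN marks the boundary between the two parameter segments used by the crossover.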
2. Building the rolling BP neural network group
The second stage of the rolling BP model establishes the rolling BP neural network group, i.e. n single-step prediction models are set up in succession. Each network model has a corresponding training set train_x_n (n ≥ 2) and test set test_x_n; train_x_n and test_x_n represent the data at time t+n-1 and contain the three parameters (P)_{t+n-1}, (T̂_in)_{t+n-1} and (Ĥ_in)_{t+n-1}. The training sets and the test sets of two adjacent network models differ by only one time step, and the data are likewise shifted down by one time step. train_y_n and test_y_n are the simulation results of the n-th network on the training set train_x_n and the test set test_x_n respectively, and contain the two parameters (T̂_in)_{t+n} and (Ĥ_in)_{t+n}.
The learning and prediction process of the n-th BP neural network is shown in Fig. 3. The f_n model is trained with the training set train_x_n at time t+n-1 (n ≥ 2), which includes the greenhouse internal and external environmental influence factors (P)_{t+n-1} at time t+n-1 and the training-set simulation result train_y_{n-1} of the f_{n-1} model. The network output is the measured data at time t+n, and the network is trained with the gradient descent method. After training, the training set train_x_n is fed into the f_n model again to obtain the training-set simulation result train_y_n, i.e. the simulated greenhouse indoor temperature and humidity at time t+n, which is used as part of train_x_{n+1} to train the f_{n+1} model. The test set test_x_n at time t+n-1 is then fed into the model to obtain the indoor temperature and humidity prediction result test_y_n at time t+n, which is used as part of test_x_{n+1} to predict the indoor temperature and humidity at time t+n+1. Training and prediction are rolled forward in this way to realize medium- and long-term prediction of the greenhouse microclimate. The purpose of training multiple networks is to keep the training set and the test set consistent in origin and thus improve the accuracy of the prediction model: the indoor temperature and humidity data in both the training samples and the test samples of the f_n (n ≥ 2) model come from the simulation results of the f_{n-1} model.
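Once the n networks have been trained and their parameters saved, the medium- and long-term forecast itself is a short recursion. The sketch below assumes a list `models` of single-step predictors with a scikit-learn-style predict method (for instance those produced by the training sketch given earlier) and arrays of external parameters over the forecast horizon; it is an illustration, not the patented implementation.

```python
import numpy as np

def rolling_forecast(models, P_future, T_t, H_t):
    """Roll the prediction forward: f_1 uses the measured indoor values at time t,
    and every later f_n re-uses the previous prediction as its indoor input.

    P_future[k] holds (P)_{t+k}, the external parameters at time t+k
    (measured or themselves forecast); one step = 15 min, so 24 networks
    cover a 6-hour horizon.
    """
    T_hat, H_hat = T_t, H_t
    forecast = []
    for k, net in enumerate(models):                          # f_1 .. f_n
        x = np.concatenate([P_future[k], [T_hat, H_hat]]).reshape(1, -1)
        T_hat, H_hat = net.predict(x)[0]                      # indoor T/H at time t+k+1
        forecast.append((T_hat, H_hat))
    return forecast                                            # predictions for t+1 .. t+n
```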
The above describes only some embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several further improvements without departing from the principles of the present invention, and such improvements shall also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network, characterized in that it comprises the following steps:
Step 1: Establish the initial BP neural network f_1. Let the current time be t; input the greenhouse indoor temperature and humidity at time t, output the predicted greenhouse indoor temperature and humidity at time t+1, and obtain the network parameters of f_1;
Step 2: Establish the rolling BP neural network group, comprising n-1 neural networks f_n. Each neural network f_n has a training set train_X_n and a test set test_X_n, and the training set and the test set of two adjacent neural networks are separated by one time step, where train_X_n denotes the training set at time t+n-1, test_X_n denotes the test set at time t+n-1, and n ≥ 2;
Step 3: Train the f_n model using train_X_n and the network parameters combined with the gradient descent method; after training, input train_X_n into the f_n model and output the simulation result train_Y_n, and input test_X_n into the f_n model and output the prediction result test_Y_n;
Step 4: Let n = n+1 and go to step 3.
2. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 1, characterized in that step 1 specifically comprises:
Step 1-1: Pre-train on the greenhouse indoor temperature and humidity at time t with an unsupervised learning model, extract the features of the input data, and output the reconstruction;
Step 1-2: Use the features of the data as the initialization parameters of the BP neural network, perform supervised learning against target values, and optimize the weights and thresholds of the BP neural network using the improved local particle swarm optimization method combined with the genetic algorithm;
Step 1-3: Establish the initial BP neural network f_1 with the optimal weights and thresholds, and output the predicted greenhouse indoor temperature and humidity at time t+1.
3. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 2, characterized in that the method for reconstructing the input data in step 1-1 is specifically: the weights and thresholds {W^(1), b^(1)} between the input layer and the hidden layer are used as the encoder, with the sigmoid function as the encoding function; the weights and thresholds {W^(2), b^(2)} between the hidden layer and the output layer are used as the decoder, with the tanh function as the decoding function.
4. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 2, characterized in that the specific steps of step 1-2 are:
Step 1-2-1: Divide the particle swarm into two subswarms computed in parallel in an spmd parallel structure; initialize the velocities and positions of the swarm, the learning factors c_1 and c_2, and the inertia weight;
Step 1-2-2: Reset the counters of the particles whose fitness is below average and assign the global optimum to the subswarm global optimum, i.e. badNum[N] = 0 and P_Lg = P_g, where badNum records the number of times a particle's fitness has been below average, N is the number of particles, P_Lg is the subswarm global optimum, and P_g is the global optimum;
Step 1-2-3: Update the velocity and position of each particle:
$v_i(t+1) = \omega v_i(t) + c_1 r_1 (p_{avg} - x_i(t)) + c_2 r_2 (p_{Lg} - x_i(t)), \qquad x_i(t+1) = x_i(t) + v_i(t+1)$
where i = 1, 2, ..., N, t is the current iteration number, ω is the inertia weight, c_1 and c_2 are acceleration factors, r_1 and r_2 are random numbers in the interval [0, 1], v_i(t) is the previous velocity of the particle, v_i(t+1) is the updated particle velocity, p_avg is the centre of the individual extrema, p_Lg is the global optimum position of the corresponding subswarm, x_i(t) is the previous particle position, and x_i(t+1) is the updated particle position;
Step 1-2-4: Introduce the crossover operator: if the generated random number is less than the crossover probability P_C, the two subswarms perform the crossover operation x_{ik} = p_{Lg1,k}, x_{jl} = p_{Lg2,l}, where x_{ik} is the k-th element of the position of the i-th particle in the first subswarm, p_{Lg1,k} is the k-th element of the global optimum position of the first subswarm, x_{jl} is the l-th element of the position of the j-th particle in the second subswarm, p_{Lg2,l} is the l-th element of the global optimum position of the second subswarm, i, j = 1, 2, ..., N/2 with i ≠ j, k ∈ [(IN+1)·HN+1, D], l ∈ [1, (IN+1)·HN], IN is the number of input-layer neurons of the neural network, HN is the number of hidden-layer neurons, and D is the dimension of a particle; then calculate the fitness J(i) of each particle; if the random number is greater than the crossover probability P_C, no operation is performed;
Step 1-2-5: Update the local optimum P_i: if the updated particle position is better than the original particle position, take the new particle position as P_i of that particle and as the global optimum P_Lg of the current iteration, update the centre of the individual extrema P_avg, and calculate the average fitness fit_avg of each subswarm; if the updated particle position is not better than the original particle position, no operation is performed;
Step 1-2-6: Introduce the mutation operator: if J(i) < fit_avg, increment badNum(i); if badNum(i) ≥ badNumLimit, randomly re-initialize the position and velocity of the particle: x_{id} = a + (b-a)·rand, v_{id} = m + (n-m)·rand, where d = 1, 2, ..., D, a and b are the minimum and maximum positions allowed for a particle, m and n are the minimum and maximum velocities allowed for a particle, and rand is a uniform random number in [0, 1);
Step 1-2-7: Judge whether the preset number of inner iterations has been reached; if so, compare the subswarm optima of the two subswarms to obtain the global optimum; if not, go to step 1-2-3;
Step 1-2-8: Judge whether the maximum number of iterations has been reached or gbest(n) - gbest(n-4) <= 0.0001 is satisfied; if so, stop the iteration; if not, go back to step 1-2-2.
5. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 1, characterized in that the prediction formula of step 1 is:
$[(\hat{T}_{in})_{t+1}, (\hat{H}_{in})_{t+1}] = f_1[(P)_t, (T_{in})_t, (H_{in})_t]$
where (P)_t is the environmental parameter at time t, (T_in)_t and (H_in)_t are the measured greenhouse temperature and humidity at time t, and (T̂_in)_{t+1} and (Ĥ_in)_{t+1} are the predicted greenhouse temperature and humidity at time t+1.
6. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 1, characterized in that one time step in step 2 is 15 min.
7. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 1, characterized in that the training set train_X_n includes the greenhouse external environmental influence factors (P)_{t+n-1} at time t+n-1 and the training-set simulation result train_Y_{n-1} of neural network f_{n-1}, where train_Y_{n-1} includes the predicted greenhouse indoor temperature and humidity at time t+n-1.
8. The greenhouse microclimate medium- and long-term prediction method based on a rolling BP neural network according to claim 1, characterized in that the prediction formula of the rolling BP neural network is:
$[(\hat{T}_{in})_{t+n}, (\hat{H}_{in})_{t+n}] = f_n[(P)_{t+n-1}, (\hat{T}_{in})_{t+n-1}, (\hat{H}_{in})_{t+n-1}]$
where (P)_{t+n-1} is the environmental parameter at time t+n-1, (T̂_in)_{t+n-1} and (Ĥ_in)_{t+n-1} are the predicted greenhouse temperature and humidity at time t+n-1, and (T̂_in)_{t+n} and (Ĥ_in)_{t+n} are the predicted greenhouse temperature and humidity at time t+n.
CN201710426571.5A 2017-06-08 2017-06-08 Greenhouse microclimate medium-long term prediction method based on rolling BP neural network Active CN107180261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710426571.5A CN107180261B (en) 2017-06-08 2017-06-08 Greenhouse microclimate medium-long term prediction method based on rolling BP neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710426571.5A CN107180261B (en) 2017-06-08 2017-06-08 Greenhouse microclimate medium-long term prediction method based on rolling BP neural network

Publications (2)

Publication Number Publication Date
CN107180261A true CN107180261A (en) 2017-09-19
CN107180261B CN107180261B (en) 2020-03-17

Family

ID=59835322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710426571.5A Active CN107180261B (en) 2017-06-08 2017-06-08 Greenhouse microclimate medium-long term prediction method based on rolling BP neural network

Country Status (1)

Country Link
CN (1) CN107180261B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0980034A1 (en) * 1998-08-13 2000-02-16 C.S.E.M. Centre Suisse D'electronique Et De Microtechnique Sa Building heating control system
CN103105246A (en) * 2012-12-31 2013-05-15 北京京鹏环球科技股份有限公司 Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm
CN103235620A (en) * 2013-04-19 2013-08-07 河北农业大学 Greenhouse environment intelligent control method based on global variable prediction model
CN104715282A (en) * 2015-02-13 2015-06-17 浙江工业大学 Data prediction method based on improved PSO-BP neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
朱春侠 et al.: "Application of BP neural network to humidity prediction in solar greenhouses" (BP神经网络在日光温室湿度预测中的应用), 《农机化研究》 *
杜世强: "A new algorithm for optimizing neural network weights and its application" (一种新的优化神经网络权值算法及其应用), 《西北民族大学学报(自然科学版)》 *
杨德平 et al.: "Economic Forecasting Methods and MATLAB Implementation" (《经济预测方法及MATLAB实现》), 31 March 2012, 北京:机械工业出版社 *
谷悦 et al.: "Short-term wind speed prediction for wind farms based on a BP neural network group structure" (基于BP神经网络群结构的风电场短期风速预测), 《农村电气化》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909149A (en) * 2017-10-26 2018-04-13 西北农林科技大学 A kind of Temperature in Greenhouse Forecasting Methodology based on Genetic BP Neutral Network
CN110414045A (en) * 2019-06-18 2019-11-05 东华大学 Short-term wind speed forecasting method based on VMD-GRU
CN110414045B (en) * 2019-06-18 2023-08-11 东华大学 Short-term wind speed prediction method based on VMD-GRU
CN115358475A (en) * 2022-08-29 2022-11-18 河南农业大学 Disaster prediction method and system based on support vector machine and gray BP neural network

Also Published As

Publication number Publication date
CN107180261B (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN110458443A Smart home energy management method and system based on deep learning
CN105913151A Photovoltaic power station power generation prediction method based on adaptive mutation particle swarm and BP network
CN109146121A Power prediction method under limited-production and shutdown conditions based on a PSO-BP model
CN112733462B (en) Ultra-short-term wind power plant power prediction method combining meteorological factors
CN108448610A Short-term wind power prediction method based on deep learning
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN101315544A (en) Greenhouse intelligent control method
CN104636985A (en) Method for predicting radio disturbance of electric transmission line by using improved BP (back propagation) neural network
CN110222883A Power system load prediction method based on wind-driven-optimization BP neural network
CN103793887B (en) Short-term electric load on-line prediction method based on self-adaptive enhancement algorithm
CN103235620A (en) Greenhouse environment intelligent control method based on global variable prediction model
CN107180261A (en) Based on the Greenhouse grape medium- and long-term forecasting method for rolling BP neural network
CN105447509A (en) Short-term power prediction method for photovoltaic power generation system
CN107466816A Irrigation method based on dynamic multilayer extreme learning machine
Yue et al. The prediction of greenhouse temperature and humidity based on LM-RBF network
CN106447133A (en) Short-term electric load prediction method based on deep self-encoding network
CN107121926A Industrial robot reliability modeling method based on deep learning
CN110276472A Offshore wind farm power ultra-short-term prediction method based on LSTM deep learning network
CN108399470A Indoor PM2.5 prediction method based on multi-instance genetic neural networks
CN105160441A (en) Real-time power load forecasting method based on integrated network of incremental transfinite vector regression machine
CN107400935A Melt-spinning process adjustment method based on improved ELM
CN116522795A (en) Comprehensive energy system simulation method and system based on digital twin model
CN113705922B (en) Improved ultra-short-term wind power prediction algorithm and model building method
Tao et al. On comparing six optimization algorithms for network-based wind speed forecasting
CN111401659A (en) Ultra-short-term or short-term photovoltaic power generation power prediction method based on case reasoning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230327

Address after: 201906 8312, building 8, No. 4361, Hutai Road, Baoshan District, Shanghai

Patentee after: SHANGHAI LANCHANG AUTOMATION TECHNOLOGY CO.,LTD.

Address before: Weigang Xuanwu District of Nanjing Jiangsu province 210095 No. 1

Patentee before: NANJING AGRICULTURAL University

TR01 Transfer of patent right