CN108717573A - Dynamic process neural network model identification method - Google Patents

Dynamic process neural network model identification method

Info

Publication number
CN108717573A
CN108717573A
Authority
CN
China
Prior art keywords
neural network
model
training
sample
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810486684.9A
Other languages
Chinese (zh)
Inventor
雎刚
邵恩泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201810486684.9A
Publication of CN108717573A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a dynamic process neural network model identification method. The method first collects process input/output data and constructs model training samples; it then determines the structure of a BP neural network model. The conventional error sum of squares J_e is used as the model identification accuracy index, while the neural network training index P is defined as the sum of J_e and a sum-of-squares term penalizing the difference between the model output change and the sample output change over adjacent sampling periods. Finally, the neural network is trained with the objective of minimizing the training index P, iteratively correcting the network weight coefficients, and training stops once the model identification accuracy index J_e falls below a preset value. Because of the training index P, compared with the conventional BP neural network identification method based on the error-sum-of-squares index J_e alone, the data-fitting ability and generalization ability of the identified model are improved at the same identification accuracy, effectively improving model quality.

Description

Dynamic process neural network model identification method
Technical field
The invention belongs to the field of automatic control and neural network model identification, and in particular relates to a dynamic process neural network model identification method.
Background technology
A neural network is a general-purpose function approximator and is widely used in dynamic process modeling; among neural networks, the BP (backpropagation) network is the most widely applied. The conventional neural network model identification method is based on the following error-sum-of-squares performance index:
J_e = \sum_{k=1}^{N} [y(k) - y_m(k)]^2
where k is the sampling instant, N is the number of samples, y(k) is the actual process output at time k, y_m(k) is the neural network model output at time k, and J_e is the error sum of squares between the model output and the actual process output.
For the BP neural network identification method based on the error-sum-of-squares performance index, it is difficult to reconcile the model's data-fitting ability (identification accuracy) with its generalization ability. If the required identification accuracy is set low, the model fits the data poorly, to say nothing of generalizing; if it is set high, the model fits the training data well but overfits, so its generalization ability is poor. The reason is that the identification method based on the error-sum-of-squares performance index considers only the accuracy at the sample points.
Summary of the invention
Object of the invention: To solve the above problems of the prior art, the present invention proposes a dynamic process neural network model identification method that can effectively reconcile the contradiction between the model's data-fitting ability and its generalization ability.
Technical solution: A dynamic process neural network model identification method comprises the following steps:
(1) Collect process input/output data and construct model training samples;
(2) Determine the neural network structure;
(3) Determine the model identification accuracy index function and the model training objective function;
(4) Train the neural network parameters with the training samples of step (1).
Further, the specific method of step (1) is as follows:
Sample the historical process data over a continuous period with a fixed sampling period T to obtain the process input data sequence u(k) and output data sequence y(k); apply time delays to the variables u and y to construct the following data-pair samples:
{ [u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n)]; y(k) },  k = 1, 2, ..., N
where k is the sampling instant, N is the number of samples, m is the process input order, n is the process output order, m ≤ n, and T is taken as 1-5 seconds;
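As an illustration only (a minimal Python sketch, not part of the patent text), the data pairs can be constructed as follows, assuming u and y are equal-length NumPy arrays:

```python
import numpy as np

def build_samples(u, y, m, n):
    """Build data pairs X(k) = [u(k-1..k-m), y(k-1..k-n)] with target y(k).

    Returns X of shape (N, m + n) and targets of shape (N,),
    for k = max(m, n), ..., len(y) - 1 (zero-based indexing).
    """
    start = max(m, n)                      # first k with a full delay window
    X, t = [], []
    for k in range(start, len(y)):
        row = [u[k - i] for i in range(1, m + 1)] \
            + [y[k - i] for i in range(1, n + 1)]
        X.append(row)
        t.append(y[k])
    return np.asarray(X), np.asarray(t)
```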
Further, step (2) specifically comprises constructing a 3-layer BP neural network with I input-layer nodes, J hidden-layer nodes, and 1 output-layer node, where I = m + n; the connection weight between input node i and hidden node j is w1_{j,i}, and the connection weight between hidden node j and the output node is w2_j,
i = 1, 2, ..., I, j = 1, 2, ..., J;
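As an illustration of this structure (a sketch assuming, per the y_m formula given later, a sigmoid hidden layer and a linear output node):

```python
import numpy as np

def forward(X, w1, w2):
    """Forward pass of the 3-layer BP network.

    X: (N, I) input vectors; w1: (J, I) weights w1[j, i]; w2: (J,) weights w2[j].
    Returns hidden outputs O of shape (N, J) and model outputs y_m of shape (N,).
    """
    O = 1.0 / (1.0 + np.exp(-(X @ w1.T)))  # O_j(k) = f(sum_i w1[j,i] * x_i(k))
    ym = O @ w2                            # y_m(k) = sum_j w2[j] * O_j(k)
    return O, ym
```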
Further, step (3) specifically comprises the following.
Define the vector:
X(k) = [u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n)]
With the elements x_i(k) (i = 1, 2, ..., I) of the vector X(k) as the inputs of the neural network and y(k) as the output of the neural network, the model identification accuracy index function is:
J_e = \sum_{k=1}^{N} [y(k) - y_m(k)]^2
The model training objective function is:
P = J_e + λ J_rate
wherein:
J_rate = \sum_{k=2}^{N} \{[y_m(k) - y_m(k-1)] - [y(k) - y(k-1)]\}^2
In the above two objective functions, J_e is the error sum of squares between the neural network model output and the corresponding sample output, J_rate is the error sum of squares between the model output change and the sample output change over adjacent sampling periods, λ is the weight coefficient of J_rate, set through the constant β, and y_m is the neural network model output, calculated as:
y_m(k) = \sum_{j=1}^{J} w2_j O_j(k),   O_j(k) = f\left(\sum_{i=1}^{I} w1_{j,i} x_i(k)\right)
where O_j is the output of hidden node j and f(·) is the sigmoid function;
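As a small illustrative sketch (not the patent's code), the two indices and the training objective can be computed as follows, treating λ as a given weight:

```python
import numpy as np

def training_index(y, ym, lam):
    """Return Je, Jrate and P = Je + lam * Jrate.

    y: (N,) sample outputs; ym: (N,) model outputs; lam: weight of Jrate
    (the patent sets it through a constant beta; beta = 0.01 in the embodiment).
    """
    Je = np.sum((y - ym) ** 2)            # error sum of squares
    r = np.diff(ym) - np.diff(y)          # adjacent-period output-change mismatch
    Jrate = np.sum(r ** 2)
    return Je, Jrate, Je + lam * Jrate
```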
Further, step (4) uses the training samples of step (1) to train the network weights w1_{j,i} and w2_j with the objective of minimizing the training index P, so that the final model identification accuracy index J_e meets the requirement. The specific steps are as follows:
(41) Initialize the connection weights w1_{j,i} and w2_j, and set the learning rate η and the model identification accuracy ε;
(42) Compute the corrections Δw1_{j,i} and Δw2_j to the connection weights by gradient descent on the training index:
Δw1_{j,i} = -η ∂P/∂w1_{j,i}   and   Δw2_j = -η ∂P/∂w2_j;
(43) Correct the connection weights w1_{j,i} and w2_j:
w1_{j,i} = w1_{j,i} + Δw1_{j,i}
w2_j = w2_j + Δw2_j
(44) Compute the identification accuracy index J_e with the weights w1_{j,i} and w2_j corrected in step (43); if J_e < ε, training ends; otherwise, return to step (42).
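Putting steps (41) through (44) together, below is a minimal NumPy sketch of the training loop; it is an illustration, not the patent's code. The gradient of P is derived under the sigmoid-hidden/linear-output structure described above, λ is treated as a fixed weight (the embodiment's β = 0.01 is reused here as a stand-in), and the defaults mirror the embodiment (J = 80, η = 0.001, ε = 1, weights initialized in [0, 1]):

```python
import numpy as np

def train(X, y, J=80, eta=1e-3, lam=0.01, eps=1.0, max_iter=100000, seed=0):
    """Minimize P = Je + lam * Jrate by gradient descent; stop when Je < eps."""
    rng = np.random.default_rng(seed)
    N, I = X.shape
    w1 = rng.uniform(0.0, 1.0, (J, I))          # input -> hidden weights w1[j, i]
    w2 = rng.uniform(0.0, 1.0, J)               # hidden -> output weights w2[j]
    Je = np.inf
    for _ in range(max_iter):
        O = 1.0 / (1.0 + np.exp(-(X @ w1.T)))   # hidden outputs O_j(k)
        ym = O @ w2                             # model outputs y_m(k)
        e = y - ym
        Je = np.sum(e ** 2)                     # identification accuracy index
        if Je < eps:                            # step (44) stopping condition
            break
        # dP/dym: -2 e(k) from Je, plus the Jrate increment-penalty terms
        r = np.diff(ym) - np.diff(y)            # model vs sample output change
        g = -2.0 * e
        g[1:] += 2.0 * lam * r
        g[:-1] -= 2.0 * lam * r
        grad_w2 = O.T @ g                       # dP/dw2_j
        back = (g[:, None] * w2) * O * (1.0 - O)
        grad_w1 = back.T @ X                    # dP/dw1_{j,i}
        w2 -= eta * grad_w2                     # step (43): w = w - eta * grad
        w1 -= eta * grad_w1
    return w1, w2, Je
```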
Advantageous effects: Compared with the conventional BP neural network identification method based on the error-sum-of-squares performance index, the present invention adds to the conventional error-sum-of-squares index a sum-of-squares term penalizing the difference between the model output change and the sample output change over adjacent sampling periods, uses the sum as the network training index, and takes the conventional error-sum-of-squares index falling below a preset value as the stopping condition for training, thereby effectively reconciling the model's data-fitting ability with its generalization ability. Moreover, at the same identification accuracy, the present invention greatly improves the data-fitting ability and generalization ability of the identified model, effectively improving model quality.
Description of the drawings
Fig. 1(a) compares the model output and the sample output on the training samples for the conventional identification method;
Fig. 1(b) compares the model output and the sample output on the test samples for the conventional identification method;
Fig. 2(a) compares the model output and the sample output on the training samples for the method of the present invention;
Fig. 2(b) compares the model output and the sample output on the test samples for the method of the present invention.
Specific embodiments
In order to better describe the technical solution disclosed by the invention in detail, a further elaboration is given below with reference to the accompanying drawings and a specific embodiment.
A known transfer function is used to simulate the process to be identified. The method and its steps are further elaborated below with reference to the accompanying drawings.
Step 1: Sample the historical process data over a continuous period with a fixed sampling period T to obtain the process input data sequence u(k) and output data sequence y(k); apply time delays to u and y to construct the following data-pair samples:
{ [u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n)]; y(k) },  k = 1, 2, ..., N
where k is the sampling instant, N is the number of samples, m is the process input order, n is the process output order, m ≤ n, and T is taken as 1-5 seconds;
To simulate the process input/output data, a random input signal with amplitude between 0 and 2 is applied at the input of the process transfer function, and 1000 groups of process input/output data are collected with a sampling period of 1 second; the first 800 groups serve as training sample data and the remaining 200 groups as test sample data. In this embodiment n = 4, m = 4, and N = 800.
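For illustration only: the transfer function used in the embodiment is not reproduced in this text, so the sketch below substitutes a hypothetical stable second-order discrete process purely to demonstrate the data generation and the 800/200 split; every coefficient in simulate_process is invented, and build_samples is the sketch given earlier.

```python
import numpy as np

def simulate_process(u):
    """Hypothetical stand-in for the identified process (coefficients invented)."""
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = 1.2 * y[k-1] - 0.4 * y[k-2] + 0.1 * u[k-1] + 0.05 * u[k-2]
    return y

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 2.0, 1000)        # random input, amplitude in [0, 2], T = 1 s
y = simulate_process(u)
X, t = build_samples(u, y, m=4, n=4)   # delayed data pairs, I = m + n = 8
X_train, t_train = X[:800], t[:800]    # first 800 groups: training samples
X_test,  t_test  = X[800:], t[800:]    # remaining groups: test samples
```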
Step 2: Construct a 3-layer BP neural network with I input-layer nodes, J hidden-layer nodes, and 1 output-layer node, where I = m + n; the connection weight between input node i and hidden node j is w1_{j,i}, and the connection weight between hidden node j and the output node is w2_j, i = 1, 2, ..., I, j = 1, 2, ..., J. In this embodiment I = 8 and J = 80.
Step 3: Define the vector
X(k) = [u(k-1), u(k-2), ..., u(k-4), y(k-1), y(k-2), ..., y(k-4)]
With the elements x_i(k) (i = 1, 2, ..., 8) of the vector X(k) as the inputs of the neural network and y(k) as the output of the neural network, the model identification accuracy index function is:
J_e = \sum_{k=1}^{N} [y(k) - y_m(k)]^2
The model training objective function is:
P = J_e + λ J_rate
wherein:
J_rate = \sum_{k=2}^{N} \{[y_m(k) - y_m(k-1)] - [y(k) - y(k-1)]\}^2
In the above indices, J_rate is the error sum of squares between the model output change and the sample output change over adjacent sampling periods, λ is the weight coefficient of J_rate, set through the constant β, with β = 0.01, and y_m is the neural network model output, calculated as:
y_m(k) = \sum_{j=1}^{J} w2_j O_j(k),   O_j(k) = f\left(\sum_{i=1}^{I} w1_{j,i} x_i(k)\right)
where O_j is the output of hidden node j and f(·) is the sigmoid function;
Step 4: Use the training samples of step 1 and train the neural network with the objective of minimizing the training index P, iteratively correcting the network weight coefficients w1_{j,i} and w2_j so that the final model identification accuracy index J_e meets the requirement. The specific steps are as follows:
(41) Initialize the connection weights w1_{j,i} and w2_j, and set the learning rate η and the model identification accuracy ε;
In this embodiment, w1_{j,i} and w2_j are initialized to random numbers between 0 and 1, η = 0.001, and ε = 1;
(42) Compute the corrections Δw1_{j,i} and Δw2_j to the connection weights by gradient descent on the training index:
Δw1_{j,i} = -η ∂P/∂w1_{j,i}   and   Δw2_j = -η ∂P/∂w2_j;
(43) Correct the connection weights w1_{j,i} and w2_j:
w1_{j,i} = w1_{j,i} + Δw1_{j,i}
w2_j = w2_j + Δw2_j
(44) Compute the identification accuracy index J_e with the weights w1_{j,i} and w2_j corrected in step (43); if J_e < ε, training ends; otherwise, go to step (42).
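An illustrative end-to-end run tying the sketches above together (train, build_samples, and the data split are from those sketches, not from the patent):

```python
import numpy as np

# Train on the first 800 delayed samples, then check generalization on the rest.
w1, w2, Je_train = train(X_train, t_train, J=80, eta=1e-3, lam=0.01, eps=1.0)

O_test = 1.0 / (1.0 + np.exp(-(X_test @ w1.T)))   # hidden outputs on test set
Je_test = np.sum((t_test - O_test @ w2) ** 2)     # test identification error
print(f"training Je = {Je_train:.3f}, test Je = {Je_test:.3f}")
```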
Figs. 1 and 2 show the simulation results of this embodiment; the result data are listed in Table 1 below, which compares the identification results of the conventional identification method and the method of the present invention:
Table 1
It can be seen that, while the training error (i.e., identification accuracy) J_e is 1 for both methods, the test error J_e of the method of the present invention is much smaller than that of the conventional identification method, showing that at the same identification accuracy the model established by the present method has better generalization ability. Figs. 1 and 2 show that, although the identification accuracy of the two methods is the same, the training-sample and test-sample output curves of the network identified by the conventional method are jagged, whereas the output curves of the method of the present invention are very smooth, indicating better data-fitting and generalization ability. The reason for these results is that the method of the present invention accounts for J_rate, the error sum of squares between the model output change and the sample output change over adjacent sampling periods. Therefore, compared with the conventional BP neural network identification method based on the error-sum-of-squares performance index, at the same identification accuracy the method of the present invention significantly improves the data-fitting ability and generalization ability of the identified model, effectively improving model quality and yielding the remarkable effects recorded in the advantageous-effects section above.

Claims (5)

1. A dynamic process neural network model identification method, characterized by comprising the following steps:
(1) Collect process input/output data and construct model training samples;
(2) Determine the neural network structure;
(3) Determine the model identification accuracy index function and the model training objective function;
(4) Train the neural network parameters with the training samples of step (1).
2. The dynamic process neural network model identification method according to claim 1, characterized in that step (1) is specifically as follows:
Sample the historical process data over a continuous period with a fixed sampling period T to obtain the process input data sequence u(k) and output data sequence y(k); apply time delays to the variables u and y to construct the neural network training data-pair samples, expressed as follows:
{ [u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n)]; y(k) },  k = 1, 2, ..., N
where u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n) are the input variable samples of the neural network, y(k) is the corresponding neural network output variable sample, k is the sampling instant, N is the number of samples, m is the process input order, n is the process output order, m ≤ n, and T is taken as 1-5 seconds.
3. The dynamic process neural network model identification method according to claim 1, characterized in that step (2) comprises constructing a 3-layer BP neural network with I input-layer nodes, J hidden-layer nodes, and 1 output-layer node, where I = m + n; the connection weight between input node i and hidden node j is w1_{j,i}, and the connection weight between hidden node j and the output node is w2_j, i = 1, 2, ..., I, j = 1, 2, ..., J.
4. The dynamic process neural network model identification method according to claim 1, characterized in that step (3) comprises the following steps:
Define the vector X(k) = [u(k-1), u(k-2), ..., u(k-m), y(k-1), y(k-2), ..., y(k-n)]; with the elements x_i(k) (i = 1, 2, ..., I) of the vector X(k) as the inputs of the neural network and y(k) as the output of the neural network, obtain the model identification accuracy index function, whose expression is as follows:
J_e = \sum_{k=1}^{N} [y(k) - y_m(k)]^2
The model training objective function is:
P = J_e + λ J_rate
wherein:
J_rate = \sum_{k=2}^{N} \{[y_m(k) - y_m(k-1)] - [y(k) - y(k-1)]\}^2
In the above two objective functions, J_e is the error sum of squares between the neural network model output and the corresponding sample output, J_rate is the error sum of squares between the model output change and the sample output change over adjacent sampling periods, λ is the weight coefficient of J_rate, set through the constant β, and y_m is the neural network model output, calculated as follows:
y_m(k) = \sum_{j=1}^{J} w2_j O_j(k),   O_j(k) = f\left(\sum_{i=1}^{I} w1_{j,i} x_i(k)\right)
In the above formula, O_j is the output of hidden node j and f(·) is the sigmoid function.
5. The dynamic process neural network model identification method according to claim 1, characterized in that step (4) uses the training samples of step (1) and trains the neural network with the objective of minimizing the training index P, iteratively correcting the network weight coefficients w1_{j,i} and w2_j so that the final model identification accuracy index J_e meets the requirement; the specific steps are as follows:
(41) Initialize the connection weights w1_{j,i} and w2_j, and set the learning rate η and the model identification accuracy ε;
(42) Compute the corrections Δw1_{j,i} and Δw2_j to the connection weights by gradient descent on the training index, with the following calculation formulas:
Δw1_{j,i} = -η ∂P/∂w1_{j,i},   Δw2_j = -η ∂P/∂w2_j
(43) Correct the connection weights w1_{j,i} and w2_j:
w1_{j,i} = w1_{j,i} + Δw1_{j,i}
w2_j = w2_j + Δw2_j
(44) Compute the identification accuracy index J_e with the weights w1_{j,i} and w2_j corrected in step (43); if J_e < ε, training ends; otherwise, return to step (42).
CN201810486684.9A 2018-05-21 2018-05-21 Dynamic process neural network model identification method Pending CN108717573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810486684.9A CN108717573A (en) 2018-05-21 2018-05-21 Dynamic process neural network model identification method


Publications (1)

Publication Number Publication Date
CN108717573A 2018-10-30

Family

ID=63900095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810486684.9A Pending CN108717573A (en) 2018-05-21 2018-05-21 Dynamic process neural network model identification method

Country Status (1)

Country Link
CN (1) CN108717573A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113939777A (en) * 2020-05-13 2022-01-14 东芝三菱电机产业系统株式会社 Physical model identification system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181030