CN101846970A - Electric heating furnace device - Google Patents

Electric heating furnace device

Info

Publication number
CN101846970A
CN101846970A
Authority
CN
China
Prior art keywords
function
electric heating
heating furnace
training
furnace device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910048009A
Other languages
Chinese (zh)
Inventor
程明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd filed Critical SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority to CN200910048009A priority Critical patent/CN101846970A/en
Publication of CN101846970A publication Critical patent/CN101846970A/en
Pending legal-status Critical Current

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses an electric heating furnace device and relates to an intelligent control method, in particular to an improved neural network control method. Aiming at the nonlinearity of the controlled object of the electric heating furnace device, the invention provides an optimization method for the device that uses an improved neural network learning method. The improved neural network adopts an improved weight correction method.

Description

Electric heating furnace device
Technical field
The present invention relates to an intelligent control method, in particular to an improved neural network control method applied to the retrofitting of an electric heating furnace device.
Background art
In industrial control systems, electric heating furnace devices are generally controlled with the traditional PID method. This method gives good control performance under the specific operating conditions for which it is tuned, but because the controller parameters are not easy to adjust, good control performance cannot be maintained when the operating conditions change. For process temperature control, the complexity of actual operating conditions makes it difficult to establish an accurate mathematical model. Because neural networks have learning ability and approximate nonlinearities well, neural-network-based controllers, including neural-network-based decoupling controllers, have already received some attention in both theory and practice. Since the learning ability of the neural network strongly influences the decoupling performance of the whole decoupling controller, the present invention proposes an improved network learning method.
The basic principle of the BP learning algorithm is steepest-descent gradient search; its central idea is to adjust the weights so that the total network error is minimized. A gradient search is used to minimize the mean squared error between the actual network output and the desired output. The network learning procedure is a process of back-propagating the error and correcting the weight coefficients.
In general, the larger the learning rate, the more violently the weights change. In the early stage of training a large learning rate helps the error decrease quickly, but at a later stage a large learning rate can cause oscillation: the energy function rises and falls abruptly, or simply stops decreasing. Slow convergence and strong dependence on the convergence parameters are therefore obvious shortcomings of the BP algorithm. Many improvement schemes have been proposed; the following is an algorithm that takes both convergence speed and parameter robustness into account. The learning-rate trade-off itself is illustrated in the small sketch below.
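To make the learning-rate trade-off concrete, here is a minimal sketch (not taken from the patent; the quadratic error surface, step count and learning rates are illustrative assumptions) of plain gradient descent with a small and a large fixed learning rate:

```python
import numpy as np

def gradient_descent(Q, b, w0, eta, steps=50):
    """Minimise E(w) = 0.5*w^T Q w + b^T w with a fixed learning rate eta."""
    w = w0.copy()
    for _ in range(steps):
        g = Q @ w + b          # gradient of the quadratic error
        w = w - eta * g        # fixed-rate BP-style weight update
    return w

Q = np.array([[3.0, 0.0], [0.0, 1.0]])   # illustrative error surface, minimum at w = [1, 1]
b = np.array([-3.0, -1.0])
w_small = gradient_descent(Q, b, np.zeros(2), eta=0.05)  # converges, but slowly
w_large = gradient_descent(Q, b, np.zeros(2), eta=0.9)   # too large: oscillates and diverges
```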
Summary of the invention
The present invention uses the following improved network learning method and proposes an optimization method for the heating system of the electric heating furnace device.
The main steps of the BP network calculation are:
(a). Set the initial values of the weights and thresholds of every layer (p = 1, 2, ..., Q), where p is the layer index and Q is the total number of layers.
(b). Input a training sample (I_q, d_q) (q = 1, 2, ..., M), where M is the number of input/output sample pairs, and compute the output and the weight correction for each sample.
(c). Compute the actual output of each layer, x_p = f(s_p) = f(W_p x_{p-1}), where f(·) is the activation function.
(e). If the network output is inconsistent with the desired output of the sample pair, the error signal is propagated backwards from the output layer, and the weight coefficients are corrected continuously during this propagation until the desired output is obtained at the output-layer neurons. After the weight adjustment for one sample is finished, the next sample pair is fed in and learned in the same way, until one training pass over the samples is complete. A minimal sketch of this forward/backward pass is given below.
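The following sketch illustrates steps (a) to (e), assuming a fully connected network with a sigmoid activation and a squared-error objective; the layer sizes, learning rate and sample data are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(weights, x0):
    """Step (c): x_p = f(s_p) = f(W_p x_{p-1}) for every layer p."""
    xs = [x0]
    for W in weights:
        xs.append(sigmoid(W @ xs[-1]))
    return xs

def backprop_step(weights, x0, d, eta=0.1):
    """Step (e): back-propagate the output error and correct every W_p."""
    xs = forward(weights, x0)
    delta = (xs[-1] - d) * xs[-1] * (1.0 - xs[-1])              # output-layer error signal
    for p in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, xs[p])                            # error gradient for layer p
        delta = (weights[p].T @ delta) * xs[p] * (1.0 - xs[p])   # error signal passed back
        weights[p] -= eta * grad                                 # weight correction
    return weights

# Step (b): one training pass over a toy sample set (I_q, d_q).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]  # a 3-4-2 network
samples = [(rng.random(3), rng.random(2)) for _ in range(5)]
for I_q, d_q in samples:
    weights = backprop_step(weights, I_q, d_q)
```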
The conjugate gradient method is now applied to the weight correction.
Consider the quadratic performance function
E(w) = (1/2) w^T Q w + b^T w + c
Its gradient is g = ∇E(w) = Qw + b, and its second-order gradient is the Hessian matrix, ∇²E(w) = Q.
The change in the gradient is therefore
Δg[k] = g[k+1] − g[k] = (Qw[k+1] + b) − (Qw[k] + b) = QΔw[k] = α[k]Qp[k]
where α[k] is the learning rate that minimizes the performance function E(w) along the search direction p[k] at step k.
For the quadratic performance function, the optimal learning rate is determined by
α*[k] = −(g^T[k] p[k]) / (p^T[k] Q p[k])
Thus, by the conjugacy condition, and because the learning rate is a scalar, α[k] p^T[k] Q p[j] = Δg^T[k] p[j] = 0. The conjugacy condition therefore reduces to orthogonality between the search direction p[j] and the gradient change Δg[k], and no longer involves the Hessian matrix explicitly.
The initial search direction p[0] can be arbitrary and is usually taken as the steepest-descent direction. The first iteration direction p[1] only needs to be orthogonal to Δg[0], and each subsequent direction p[k] only needs to be orthogonal to the sequence of gradient changes Δg[0], Δg[1], ..., Δg[k−1]. A concise way to achieve this is the iteration p[k+1] = β[k+1] p[k] − g[k+1]
where β[k] = ([1 + w(k−1)] g^T[k] g[k]) / ([1 − w(k−1)] g^T[k−1] g[k−1])
E{w(k) + p^2[k] p[k−1]} is minimized at α[k] = α*[k], and the weights are updated as w(k+1) = w(k) + p^2[k] p[k−1]
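For reference, the following sketch implements the classical conjugate-direction scheme that the derivation above builds on, using the exact line search α*[k] = −g^T[k]p[k] / (p^T[k]Qp[k]) and the update p[k+1] = β[k+1]p[k] − g[k+1]. As an assumption made only to keep the sketch self-contained, β is computed with the standard Fletcher-Reeves ratio g^T[k+1]g[k+1] / (g^T[k]g[k]); the patent's modified β[k] additionally weights this ratio by the factors [1 + w(k−1)] and [1 − w(k−1)].

```python
import numpy as np

def conjugate_gradient(Q, b, w0, tol=1e-8, max_iter=100):
    """Minimise E(w) = 0.5*w^T Q w + b^T w + c along conjugate directions."""
    w = w0.copy()
    g = Q @ w + b                              # gradient g[0] = Qw + b
    p = -g                                     # p[0]: steepest-descent direction
    for _ in range(max_iter):
        alpha = -(g @ p) / (p @ (Q @ p))       # optimal learning rate along p[k]
        w = w + alpha * p                      # weight correction
        g_new = Q @ w + b
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves stand-in for beta[k]
        p = beta * p - g_new                   # p[k+1] = beta[k+1] p[k] - g[k+1]
        g = g_new
    return w

Q = np.array([[3.0, 1.0], [1.0, 2.0]])         # illustrative positive-definite Hessian
b = np.array([-1.0, -2.0])
w_min = conjugate_gradient(Q, b, np.zeros(2))  # exact minimiser reached in at most 2 steps
```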
Description of drawings
Fig. 1 is a structural diagram of the improved neural network used in this method.
Embodiment
The present invention uses the improved network learning method to provide a retrofit method for an electric heating furnace device, in which the improved neural network is realized by the following steps:
(a). Set the initial values of the weights and thresholds of every layer (p = 1, 2, ..., Q), where p is the layer index and Q is the total number of layers.
(b). Input a training sample (I_q, d_q) (q = 1, 2, ..., M), where M is the number of input/output sample pairs, and carry out steps (c)–(e) for each sample.
(c). Compute the actual output of each layer, x_p = f(s_p) = f(W_p x_{p-1}), where f(·) is the activation function.
(d). Compute the gradient g[k] and the gradient change Δg[k].
(e). Correct the weights: w(k+1) = w(k) + p^2[k] p[k−1]
where p[k] is a function of the sequences w(k), β[k] and g[k], for example p[k+1] = β[k+1] p[k] − g[k+1]
(f). When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index
E(w) = (1/2) w^T Q w + b^T w + c
(g). If the performance index meets the accuracy requirement, i.e. E ≤ ε, training ends; otherwise return to (b) and continue with the next training cycle. Here ε is a small positive number chosen according to the actual conditions.
β[k] is computed as follows: β[k] = ([1 + w(k−1)] g^T[k] g[k]) / ([1 − w(k−1)] g^T[k−1] g[k−1])
The activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warping function based on the sigmoid function, and so on; some of these choices are sketched below.
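By way of illustration only, two of the listed activation choices, together with one possible sigmoid-based warping (its exact form is an assumption, since the patent does not fix it), can be written as:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))      # unipolar sigmoid, output in (0, 1)

def bipolar(s):
    return np.tanh(s)                    # bipolar function, output in (-1, 1)

def warped_sigmoid(s, gain=2.0):
    return sigmoid(gain * s)             # an assumed sigmoid-based warping with adjustable gain
```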
The weight correction further specifies that, after every several iterations, the search direction is reset to the gradient direction, and iteration then continues from step (e). The resulting training-cycle control flow, including this periodic restart, is sketched below.
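Putting steps (b) to (g) and the restart rule together, the sketch below shows the training-cycle control flow on the quadratic performance index E(w) = (1/2)w^T Q w + b^T w + c used above. The restart interval, the tolerance ε and the Fletcher-Reeves form of β[k] are illustrative assumptions (the patent's β[k] carries the extra [1 ± w(k−1)] factors), chosen only to keep the sketch self-contained and runnable.

```python
import numpy as np

def train(Q, b, c, w0, eps=1e-6, restart_every=5, max_cycles=200):
    """Conjugate-direction training with E <= eps stopping and periodic restart."""
    w = w0.copy()
    g = Q @ w + b                               # step (d): gradient g[k]
    p = -g                                      # initial search direction
    E = 0.5 * w @ (Q @ w) + b @ w + c
    for cycle in range(1, max_cycles + 1):
        alpha = -(g @ p) / (p @ (Q @ p))        # optimal learning rate along p[k]
        w = w + alpha * p                       # step (e): weight correction
        g_new = Q @ w + b
        E = 0.5 * w @ (Q @ w) + b @ w + c       # step (f): performance index E(w)
        if E <= eps:                            # step (g): accuracy requirement met
            break
        if cycle % restart_every == 0:
            p = -g_new                          # reset search direction to the gradient
        else:
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves stand-in for beta[k]
            p = beta * p - g_new                # p[k+1] = beta[k+1] p[k] - g[k+1]
        g = g_new
    return w, E

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
c = 0.0                                         # illustrative constant term of E(w)
w_opt, E_final = train(Q, b, c, np.zeros(2))
```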

Claims (4)

1. An electric heating furnace device, characterized in that:
the device uses the following improved network learning method as a system method for the electric heating furnace device.
The improved network learning method proceeds as follows:
(a). Set the initial values of the weights and thresholds of every layer (p = 1, 2, ..., Q), where p is the layer index and Q is the total number of layers.
(b). Input a training sample (I_q, d_q) (q = 1, 2, ..., M), where M is the number of input/output sample pairs, and carry out steps (c)–(e) for each sample.
(c). Compute the actual output of each layer, x_p = f(s_p) = f(W_p x_{p-1}), where f(·) is the activation function.
(d). Compute the gradient g[k] and the gradient change Δg[k].
(e). Correct the weights: w(k+1) = w(k) + p^2[k] p[k−1]
where p[k] is a function of the sequences w(k), β[k] and g[k], for example p[k+1] = β[k+1] p[k] − g[k+1]
(f). When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index
E(w) = (1/2) w^T Q w + b^T w + c
(g). If the performance index meets the accuracy requirement, i.e. E ≤ ε, training ends; otherwise return to (b) and continue with the next training cycle. Here ε is a small positive number chosen according to the actual conditions.
2. The electric heating furnace device according to claim 1, wherein the activation function is characterized in that:
the activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warping function based on the sigmoid function, and so on.
3. The electric heating furnace device according to claim 1, wherein the weight correction is characterized in that:
after every several iterations, the search direction is reset to the gradient direction, and iteration then continues from step (e).
4. The electric heating furnace device according to claim 1, wherein β[k] is characterized in that:
β[k] = ([1 + w(k−1)] g^T[k] g[k]) / ([1 − w(k−1)] g^T[k−1] g[k−1])
CN200910048009A 2009-03-23 2009-03-23 Electric heating furnace device Pending CN101846970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910048009A CN101846970A (en) 2009-03-23 2009-03-23 Electric heating furnace device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910048009A CN101846970A (en) 2009-03-23 2009-03-23 Electric heating furnace device

Publications (1)

Publication Number Publication Date
CN101846970A true CN101846970A (en) 2010-09-29

Family

ID=42771609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910048009A Pending CN101846970A (en) 2009-03-23 2009-03-23 Electric heating furnace device

Country Status (1)

Country Link
CN (1) CN101846970A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372729A (en) * 2016-08-31 2017-02-01 广州瑞基信息科技有限公司 Depth learning method and device for mental analysis
CN106372729B (en) * 2016-08-31 2020-05-12 广州瑞基信息科技有限公司 Deep learning method and device for psychological analysis
CN108088087A (en) * 2017-11-17 2018-05-29 深圳和而泰数据资源与云技术有限公司 A kind of apparatus control method, device, electronic equipment and storage medium
CN108088087B (en) * 2017-11-17 2020-07-21 深圳和而泰数据资源与云技术有限公司 Equipment control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN102096373B (en) Microwave drying PID (proportion integration differentiation) control method based on increment improved BP (back propagation) neural network
CN107272403A (en) A kind of PID controller parameter setting algorithm based on improvement particle cluster algorithm
CN104698842B (en) A kind of LPV model nonlinear forecast Control Algorithms based on interior point method
CN104317195B (en) Improved extreme learning machine-based nonlinear inverse model control method
CN103489038A (en) Photovoltaic ultra-short-term power prediction method based on LM-BP neural network
CN102968055A (en) Fuzzy PID (Proportion Integration Differentiation) controller based on genetic algorithm and control method thereof
CN102682345A (en) Traffic flow prediction method based on quick learning neural network with double optimal learning rates
CN104765350A (en) Cement decomposing furnace control method and system based on combined model predicting control technology
CN102510059A (en) Super short-term wind power forecasting method based on back propagation (BP) neural network
CN104375478A (en) Method and device for online predicting and optimizing product quality in steel rolling production process
CN111522229A (en) Parameter self-tuning MIMO different-factor offset format model-free control method
CN111522233A (en) Parameter self-tuning MIMO different-factor full-format model-free control method
CN105159071A (en) Method for estimating economic performance of industrial model prediction control system in iterative learning strategy
CN104517035A (en) Planar array antenna active scattering directional diagram predication method
CN102645894A (en) Fuzzy adaptive dynamic programming method
CN105469142A (en) Neural network increment-type feedforward algorithm based on sample increment driving
CN105676645A (en) Double-loop water tank liquid level prediction control method based on function type weight RBF-ARX model
CN101846970A (en) Electric heating furnace device
CN108181809B (en) System error-based parameter self-tuning method for MISO (multiple input single output) compact-format model-free controller
CN109782586A (en) The tight format non-model control method of the different factor of the MISO of parameter self-tuning
CN108153151A (en) Methods of self-tuning of the MIMO full format Non-Model Controller based on systematic error
CN101900991A (en) Composite PID (Proportion Integration Differentiation) neural network control method based on nonlinear dynamic factor
CN106371321A (en) PID control method for fuzzy network optimization of coking-furnace hearth pressure system
CN101846971A (en) Ladle furnace optimization method
CN101844154A (en) Reforming method of band steel rolling process

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100929