CN101846969A - Temper mill - Google Patents

Temper mill

Info

Publication number
CN101846969A
CN101846969A CN200910048008A
Authority
CN
China
Prior art keywords
function
training
controller
weights
technical characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910048008A
Other languages
Chinese (zh)
Inventor
程明 (Cheng Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd filed Critical SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority to CN200910048008A priority Critical patent/CN101846969A/en
Publication of CN101846969A publication Critical patent/CN101846969A/en
Pending legal-status Critical Current

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a temper mill, relating to an intelligent control method and in particular to an improved neural network control method. Aiming at the nonlinearity and coupling of the temper mill's controlled variables, the invention provides an optimization method for the temper mill that combines an improved neural network learning method with a diagonal matrix decoupling method, where the improved neural network uses an improved weight-correction method.

Description

Temper mill
Technical field
The present invention relates to an intelligent control method, in particular an improved neural network control method applied to the retrofit of a temper mill.
Background art
The temper mill is key equipment for producing high-value-added steel strip and sheet such as automobile plate, tinplate and color-coated plate. The key to guaranteeing the product quality of the temper mill's strip section is keeping the rolling speed, front tension and back tension stable during rolling; otherwise light and dark chatter marks may appear on the strip surface, and excessive tension fluctuation can even break the strip and interrupt rolling. The electrical drive systems of the coiler, work roll and uncoiler are designed separately, without considering the coupling between the subsystems during normal rolling after the strip has been threaded. In fact the work-roll speed, front tension and back tension are mutually coupled and influence one another, and the front and back tensions can sometimes produce strong coupled vibrations that leave chatter marks on the rolled piece's surface. Because the learning ability of the neural network strongly affects the decoupling performance of the whole decoupling controller, the present invention proposes an improved network learning method.
The basic principle of the BP learning algorithm is steepest-descent gradient search: the weights are adjusted to minimize the total network error, i.e., the mean squared error between the network's actual output and the desired output. The learning procedure back-propagates the error and corrects the weight coefficients accordingly.
In general, the larger the learning rate, the more drastic the weight changes. In the early phase of training a large learning rate speeds up the decrease of the error, but at a later stage it can cause oscillation, with the energy function rising and falling abruptly or failing to descend at all. Slow convergence and sensitivity to the convergence parameters are therefore the obvious deficiencies of the BP algorithm. Many improvement schemes have been proposed; the following is an algorithm that balances convergence speed and parameter robustness.
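The learning-rate trade-off described here can be seen on a one-dimensional quadratic error surface. The rates and the curvature constant below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Gradient descent on a 1-D quadratic E(w) = 0.5*q*w^2, whose gradient is q*w.
# A small learning rate converges slowly but stably; a large one oscillates
# around the minimum or diverges, as described in the text above.
def descend(lr, q=4.0, w0=1.0, steps=20):
    w = w0
    history = [w]
    for _ in range(steps):
        w -= lr * q * w          # w <- w - lr * dE/dw
        history.append(w)
    return history

slow = descend(lr=0.05)          # contraction factor (1 - 0.2) per step: slow but stable
fast = descend(lr=0.45)          # factor (1 - 1.8) = -0.8: converges while oscillating in sign
bad  = descend(lr=0.6)           # factor (1 - 2.4) = -1.4: |factor| > 1, diverges

print(abs(slow[-1]), abs(fast[-1]), abs(bad[-1]))
```

The per-step multiplier is (1 − lr·q), so the boundary between convergence and divergence sits at lr = 2/q; this is the "certain phase" beyond which a large learning rate causes oscillation.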
Summary of the invention
The present invention combines the improved network learning method below with a diagonal matrix decoupling method. In one controller, the decoupling part applies a PID control method to the cross-coupling channel while the control part applies the improved neural network method to the corresponding main channel; in the other controller, the decoupling part applies the improved neural network method to the cross-coupling channel while the control part applies a PID method to the corresponding main channel. This yields a set of temper mill retrofit methods. The diagonal matrix decoupling method and the PID control method are classical, so only the improved network learning method is described.
The main steps of the BP network computation:
(a) Set the initial values of the weights and thresholds, w_ij^p(0) and θ_j^p(0), p = 1, 2, ..., Q, where p is the layer index and Q is the total number of layers.
(b) Input the training samples (I^q, d^q), q = 1, 2, ..., M, where M is the number of input-output pairs; for each sample compute the output and the weight correction.
(c) Compute the actual output of each layer, x^p = f(s^p) = f(w^p x^{p−1}), where f(·) is the activation function.
If the output is inconsistent with the desired output of the sample pair, the error signal is propagated backwards from the output end, and the weight coefficients are corrected continuously during this propagation until the desired output is obtained at the output-layer neurons. After the weights have been adjusted for one sample, the next sample pair is fed in for similar learning, until one round of training is complete.
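A minimal per-sample sketch of the forward pass and error back-propagation in steps (a)–(c) above. The layer sizes, the sigmoid activation, the single training pair and the learning rate are all illustrative assumptions:

```python
import numpy as np

# Per-sample BP sketch: forward pass x^p = f(w^p x^{p-1}), then back-propagate
# the output error and correct the weights, as described in the text above.
rng = np.random.default_rng(0)

def f(s):                                  # sigmoid activation function
    return 1.0 / (1.0 + np.exp(-s))

W1 = rng.normal(scale=0.5, size=(3, 2))    # hidden-layer weights (assumed sizes)
W2 = rng.normal(scale=0.5, size=(1, 3))    # output-layer weights
lr = 0.5                                   # assumed learning rate

def train_step(x, d):
    global W1, W2
    h = f(W1 @ x)                          # hidden-layer actual output
    y = f(W2 @ h)                          # network actual output
    e = y - d                              # error against desired output d
    # back-propagation: delta terms use f'(s) = f(s)(1 - f(s)) for the sigmoid
    delta2 = e * y * (1 - y)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    W2 -= lr * np.outer(delta2, h)         # correct output weights
    W1 -= lr * np.outer(delta1, x)         # correct hidden weights
    return float(0.5 * e @ e)              # squared-error for this sample

x, d = np.array([1.0, 0.0]), np.array([1.0])
errors = [train_step(x, d) for _ in range(200)]
print(errors[0], errors[-1])
```

Repeating `train_step` over every pair in a sample set would constitute one training cycle in the sense of the text.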
The weights are corrected below using the conjugate gradient method:
Consider the quadratic performance function

E(w) = (1/2)wᵀQw + bᵀw + c

Its gradient is

g = ∇E(w) = Qw + b

Its second-order gradient is the Hessian matrix

H = ∇²E(w) = Q

So the change of the gradient over one step is

Δg[k] = g[k+1] − g[k] = (Qw[k+1] + b) − (Qw[k] + b) = QΔw[k] = α[k]Hp[k]

where α[k] is the learning rate that minimizes the performance function E(w) by searching along the direction p[k] at step k.
For the quadratic performance function, the optimum learning rate is determined by exact line search along p[k], α[k] = −gᵀ[k]p[k] / (pᵀ[k]Hp[k]). According to the conjugate condition pᵀ[k]Hp[j] = 0 (j ≠ k), and because the learning rate is a scalar,

α[k]pᵀ[k]Hp[j] = Δgᵀ[k]p[j] = 0

The conjugate condition thus reduces to orthogonality between the search direction p[j] and the gradient change Δg[k], with no dependence on the Hessian matrix.
The initial search direction p[0] may be arbitrary (usually the steepest-descent direction is used). The first iteration direction p[1] need only be orthogonal to Δg[0], and each subsequent direction p[k] need only be orthogonal to the sequence of gradient changes Δg[0], Δg[1], ..., Δg[k−1]. A concise way to achieve this is the iteration

p[k+1] = β[k+1]p[k] − g[k+1]
where

β[k] = ( [1 − P(k−1)] gᵀ[k]Δg[k−1] ) / ( [1 + P(k−1)] pᵀ[k−1]Δg[k−1] )
The learning rate α*[k] is obtained by exact line search along p(k),

E( w(k) + α[k]p(k) ) | α[k]=α*[k] = min

and the weights are updated as

w(k+1) = w(k) + α*[k]p(k)
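The conjugate-direction update derived above can be sketched on a small quadratic performance function. Note that this sketch uses the standard Hestenes–Stiefel form of β, gᵀ[k+1]Δg[k] / (pᵀ[k]Δg[k]), not the patent's corrected β with the [1 ∓ P(k−1)] factors; Q, b and the step count are illustrative assumptions:

```python
import numpy as np

# Conjugate-gradient minimization of E(w) = 0.5 w^T Q w + b^T w, matching the
# derivation above: exact line search along p, then p[k+1] = beta p[k] - g[k+1].
Q = np.array([[4.0, 1.0], [1.0, 3.0]])    # assumed positive-definite Hessian
b = np.array([-1.0, -2.0])

w = np.zeros(2)
g = Q @ w + b                              # gradient g = Qw + b
p = -g                                     # p[0]: steepest-descent direction
for _ in range(2):                         # n conjugate steps solve an n-dim quadratic
    alpha = -(g @ p) / (p @ Q @ p)         # exact line search along p
    w = w + alpha * p                      # weight update w(k+1) = w(k) + alpha p(k)
    g_new = Q @ w + b
    dg = g_new - g                         # gradient change, orthogonal to the next p
    beta = (g_new @ dg) / (p @ dg)         # standard Hestenes-Stiefel beta (assumption)
    p = beta * p - g_new                   # p[k+1] = beta[k+1] p[k] - g[k+1]
    g = g_new

print(w, np.linalg.norm(Q @ w + b))
```

On a quadratic in n dimensions, n such conjugate steps with exact line search drive the gradient to zero, which is why the method converges much faster than plain steepest descent.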
Description of drawings
Fig. 1 is the structural diagram of this control method
Fig. 2 is the structural diagram of the improved neural network in this method
Embodiment
The present invention combines the improved network learning method with the diagonal matrix decoupling method to propose a set of temper mill retrofit methods, where the improved neural network is realized by the following steps:
(a) Set the initial values of the weights and thresholds, w_ij^p(0) and θ_j^p(0), p = 1, 2, ..., Q, where p is the layer index and Q is the total number of layers.
(b) Input a training sample (I^q, d^q), q = 1, 2, ..., M, where M is the number of input-output pairs, and carry out steps (c)–(e) for each sample.
(c) Compute the actual output of each layer, x^p = f(s^p) = f(w^p x^{p−1}), where f(·) is the activation function.
(d) Compute the gradient g[k] and the gradient change Δg[k].
(e) Correct the weights: w(k+1) = w(k) + α*[k]p(k),
where p[k] is a function of the sequences w(k), β[k] and g[k], e.g. p[k+1] = β[k+1]p[k] − g[k+1].
(f) When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index

E(w) = (1/2)wᵀQw + bᵀw + c
(g) If the performance index meets the accuracy requirement, i.e. E ≤ ε, training ends; otherwise go to (b) and continue with the next training cycle. ε is a small positive number chosen according to the actual conditions.
where β[k] is computed as follows:

β[k] = ( [1 − P(k−1)] gᵀ[k]Δg[k−1] ) / ( [1 + P(k−1)] pᵀ[k−1]Δg[k−1] )
The activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warped function based on the sigmoid function, etc.
The weight correction specifically means that, after every few iterations, the search direction is reset to the gradient direction, and iteration then resumes from step (e).
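As a rough sketch of the loop in steps (a)–(g) together with the reset rule just described, the following minimizes an assumed quadratic performance index. The matrix Q, vector b, reset interval, tolerance and initial weights are illustrative assumptions, and the standard Hestenes–Stiefel β stands in for the patent's corrected form:

```python
import numpy as np

# Improved learning loop sketch: conjugate-direction weight correction with an
# accuracy test for stopping, and the search direction periodically reset to
# the plain gradient direction, as the text above specifies.
Q = np.array([[5.0, 2.0, 0.0],            # assumed positive-definite Hessian
              [2.0, 4.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, -2.0, 0.5])

def train(eps=1e-10, reset_every=3, max_cycles=100):
    w = np.ones(3)                         # (a) initial weights (assumed)
    g = Q @ w + b
    p = -g                                 # start from the steepest-descent direction
    for k in range(1, max_cycles + 1):
        alpha = -(g @ p) / (p @ Q @ p)     # exact line search along p
        w = w + alpha * p                  # (e) weight correction
        g_new = Q @ w + b                  # (d) new gradient
        dg = g_new - g                     # gradient change
        if k % reset_every == 0 or abs(p @ dg) < 1e-30:
            p = -g_new                     # periodic reset to the gradient direction
        else:
            beta = (g_new @ dg) / (p @ dg)
            p = beta * p - g_new           # p[k+1] = beta[k+1] p[k] - g[k+1]
        g = g_new
        if np.linalg.norm(g) ** 2 <= eps:  # (g) accuracy test (gradient norm
            return w, k                    #     stands in for E <= eps here)
    return w, max_cycles

w, cycles = train()
print(cycles, np.linalg.norm(Q @ w + b))
```

The periodic reset discards stale conjugacy information; in a non-quadratic setting (a real neural network) it also guards against the search direction drifting away from a descent direction.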

Claims (4)

1. The technical characteristics of the temper mill are:
The invention combines the improved network learning method below with a diagonal matrix decoupling method. In one controller, the decoupling part applies a PID control method to the cross-coupling channel while the control part applies the improved neural network method to the corresponding main channel; in the other controller, the decoupling part applies the improved neural network method to the cross-coupling channel while the control part applies a PID method to the corresponding main channel, yielding a set of temper mill retrofit methods. The diagonal matrix decoupling method and the PID control method are classical; only the improved network learning method is described.
The flow of the improved network learning method is as follows:
(a) Set the initial values of the weights and thresholds, w_ij^p(0) and θ_j^p(0), p = 1, 2, ..., Q, where p is the layer index and Q is the total number of layers.
(b) Input a training sample (I^q, d^q), q = 1, 2, ..., M, where M is the number of input-output pairs, and carry out steps (c)–(e) for each sample.
(c) Compute the actual output of each layer, x^p = f(s^p) = f(w^p x^{p−1}), where f(·) is the activation function.
(d) Compute the gradient g[k] and the gradient change Δg[k].
(e) Correct the weights:
w(k+1) = w(k) + α*[k]p(k)
where p[k] is a function of the sequences w(k), β[k] and g[k], e.g. p[k+1] = β[k+1]p[k] − g[k+1].
(f) When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index.
(g) If the performance index meets the accuracy requirement, i.e. E ≤ ε, training ends; otherwise go to (b) and continue with the next training cycle. ε is a small positive number chosen according to the actual conditions.
2. According to claim 1, the technical characteristic of the activation function is:
The activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warped function based on the sigmoid function, etc.
3. According to claim 1, the technical characteristic of the weight correction is:
The weight correction specifically means that, after every few iterations, the search direction is reset to the gradient direction, and iteration then resumes from step (e).
4. According to claim 1, the technical characteristic of β[k] is:
β[k] = ( [1 − P(k−1)] gᵀ[k]Δg[k−1] ) / ( [1 + P(k−1)] pᵀ[k−1]Δg[k−1] )
CN200910048008A 2009-03-23 2009-03-23 Temper mill Pending CN101846969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910048008A CN101846969A (en) 2009-03-23 2009-03-23 Temper mill

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910048008A CN101846969A (en) 2009-03-23 2009-03-23 Temper mill

Publications (1)

Publication Number Publication Date
CN101846969A true CN101846969A (en) 2010-09-29

Family

ID=42771608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910048008A Pending CN101846969A (en) 2009-03-23 2009-03-23 Temper mill

Country Status (1)

Country Link
CN (1) CN101846969A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106862284A * 2017-03-24 2017-06-20 燕山大学 A pattern recognition method for cold-rolled sheet signals


Similar Documents

Publication Publication Date Title
CN108637020B (en) Self-adaptive variation PSO-BP neural network strip steel convexity prediction method
KR101149927B1 (en) Rolling load prediction learning method for hot plate rolling
CN101346676B (en) Method and device for tuning and control
CN101168173B (en) Device and method for controlling winding temperature
CN101391268B (en) Reverse optimization method of steel plate rolling and cooling controlling-process temperature institution
CN104375478A (en) Method and device for online predicting and optimizing product quality in steel rolling production process
CN105303252A (en) Multi-stage nerve network model training method based on genetic algorithm
CN103745101A (en) Improved neural network algorithm based forecasting method of set value of rolling force of medium plate
CN108537366B (en) Reservoir scheduling method based on optimal convolution bidimensionalization
CN1091008C (en) Interlinked control method for plate-band rolling course based on coordination law of plate shape and plate thickness
CN111522233A (en) Parameter self-tuning MIMO different-factor full-format model-free control method
CN104317195A (en) Improved extreme learning machine-based nonlinear inverse model control method
CN102172641A (en) Device and method for controlling winding temperature
CN111290282A (en) Predictive control method for thermal power generating unit coordination system
CN101846969A (en) Temper mill
CN105652666A (en) Large die forging press beam feeding speed predictive control method based on BP neural networks
Chi et al. Comparison of two multi-step ahead forecasting mechanisms for wind speed based on machine learning models
CN101844154A (en) Reforming method of band steel rolling process
CN101900991A (en) Composite PID (Proportion Integration Differentiation) neural network control method based on nonlinear dynamic factor
CN101846970A (en) Electric heating furnace device
CN100552574C (en) Machine group loading forecast control method based on flow model
CN109146007B (en) Solid waste intelligent treatment method based on dynamic deep belief network
CN102662324A (en) Non-linear model predication control method of tank reactor based on on-line support vector machine
CN101846972A (en) Biaxial scanning mirror
CN101846971A (en) Ladle furnace optimization method

Legal Events

Date Code Title Description
DD01 Delivery of document by public notice

Addressee: Wang Wei

Document name: Notification of Passing Preliminary Examination of the Application for Invention

C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100929