CN101846971A - Ladle furnace optimization method - Google Patents

Ladle furnace optimization method Download PDF

Info

Publication number
CN101846971A
CN101846971A (Application CN200910048011A)
Authority
CN
China
Prior art keywords
function
training
weights
ladle furnace
optimization method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910048011A
Other languages
Chinese (zh)
Inventor
程明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd filed Critical SHANGHAI DUFENG INTELLIGENT TECHNOLOGY Co Ltd
Priority to CN200910048011A priority Critical patent/CN101846971A/en
Publication of CN101846971A publication Critical patent/CN101846971A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 10/00 — Technologies related to metal processing
    • Y02P 10/25 — Process efficiency

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a ladle furnace optimization method, relating to an intelligent control method, in particular to an improved neural network control method. The method is proposed to address the nonlinear and coupled characteristics of the controlled object in the ladle furnace, and combines an improved neural network learning method with a diagonal matrix decoupling method, where the improved neural network adopts an improved weight-modification scheme.

Description

Ladle furnace optimization method
Technical field
The present invention relates to an intelligent control method, in particular to an improved neural network control method, applied to the optimization of a ladle furnace.
Background technology
A ladle furnace (Ladle Furnace, abbreviated LF furnace) is a secondary refining electric arc furnace that heats with an electric arc and stirs with argon gas. The electrode lifting system is the key component of the whole LF furnace. The electrode regulating system adjusts the electrode positions in real time to keep the arc length constant, reduce the fluctuation of the arc current, and hold the arc voltage-to-current ratio constant, so that the power input is stable. At the same time, by selecting an optimized power-supply curve, the electrode regulating system can maximize the power input. The LF furnace is a very complicated three-phase system that is nonlinear, time-varying, multivariable, and input-output coupled, and the hydraulic transmission system that drives the electrode lifting is a nonlinear system with large inertia, pure lag, and dead-zone characteristics. Because the learning ability of the neural network strongly influences the decoupling performance of the whole decoupling controller, the present invention proposes an improved network learning method.
The basic principle of the BP learning algorithm is the steepest-descent gradient method; its central idea is to adjust the weights so as to minimize the total network error. A gradient search is used to minimize the mean-square error between the actual network output and the expected output. Network learning is a process of back-propagating the error to modify the weight coefficients.
In general, the larger the learning rate, the more violent the change of the weights. In the early stage of training a large learning rate helps the error drop quickly, but at a later stage a large learning rate may cause oscillation: the energy function rises and falls abruptly, or rises and never comes back down. Slow convergence and dependence on the convergence parameter are therefore obvious deficiencies of the BP algorithm. Many improvement schemes have been proposed; the following is an algorithm that balances convergence speed and parameter robustness.
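The learning-rate trade-off described above can be illustrated on a one-dimensional quadratic energy function (an illustrative sketch, not part of the patent; the constants `q`, `w0`, and the two learning rates are our choices):

```python
def descend(lr, q=2.0, w0=1.0, steps=20):
    """Plain gradient descent on E(w) = 0.5*q*w**2: w <- w - lr * grad E(w)."""
    w = w0
    traj = []
    for _ in range(steps):
        w = w - lr * q * w          # grad E(w) = q*w
        traj.append(abs(w))
    return traj

steady = descend(lr=0.1)    # contraction factor |1 - lr*q| = 0.8: smooth decay
unstable = descend(lr=1.1)  # factor |1 - lr*q| = 1.2: the error oscillates in sign and grows
```

The iterate is multiplied by `(1 - lr*q)` each step, so any learning rate with `|1 - lr*q| >= 1` makes the error rise rather than fall, which is exactly the oscillation the text warns about.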
Summary of the invention
The present invention combines the improved network learning method described below with a diagonal matrix decoupling method to propose a ladle furnace optimization method. The diagonal matrix decoupling method is a classic method, so only the improved network learning method is described.
The main steps of the BP network computation:
(a) Set the initial values of the weights and thresholds, $w_{ij}^p(0)$, $\theta_j^p(0)$, $p = 1, 2, \dots, Q$, where $p$ is the layer index and $Q$ is the total number of layers.
(b) Input the training samples $(I^q, d^q)$, $q = 1, 2, \dots, M$, where $M$ is the number of input-output pairs; for each sample compute the output and the weight correction.
(c) Compute the actual output of each layer, $x^p = f(s^p) = f(w^p x^{p-1})$, where $f(\cdot)$ is the activation function.
If the output is inconsistent with the desired output of the sample pair, the error signal is back-propagated from the output end, and during this propagation the weight coefficients are continually revised until the desired output is obtained at the output-layer neurons. After the network weights have been adjusted for one sample, the next sample pair is presented for similar learning, until one training pass is complete.
The conjugate gradient method is used below for the weight correction.
Consider the quadratic performance function
$$E(w) = \tfrac{1}{2} w^T Q w + b^T w + c.$$
Its gradient is $g = \nabla E(w) = Qw + b$, and its second-order gradient is the Hessian matrix $H = \nabla^2 E(w) = Q$. The change of the gradient over one step is therefore
$$\Delta g[k] = g[k+1] - g[k] = (Qw[k+1] + b) - (Qw[k] + b) = Q\,\Delta w[k] = \alpha[k]\,H p[k],$$
where $\alpha[k]$ is the learning rate that minimizes the performance function $E(w)$ along the search direction $p[k]$.
For the quadratic performance function, the optimal learning rate is determined by
$$\alpha[k] = -\frac{g^T[k]\,p[k]}{p^T[k]\,H\,p[k]}.$$
According to the conjugacy condition, and because the learning rate is a scalar, $\alpha[k]\,p^T[k] H p[j] = \Delta g^T[k]\,p[j] = 0$. The conjugacy condition thus becomes the requirement that the search direction be orthogonal to the change of the gradient, independent of the Hessian matrix.
The initial search direction $p[0]$ is arbitrary and is usually taken as the steepest-descent direction; each subsequent direction $p[k]$ need only be orthogonal to the gradient-change sequence $\Delta g[0], \Delta g[1], \dots, \Delta g[k-1]$. A concise way to achieve this is the iteration
$$p[k+1] = -g[k+1] + \beta[k+1]\,p[k],$$
where
$$\beta[k] = \frac{g^T[k]\,\Delta g[k-1]}{g^T[k-1]\,g[k-1]}.$$
The learning rate $\alpha^*[k]$ is chosen by the line search
$$\alpha^*[k] = \arg\min_{\alpha} E\big(w(k) + \alpha\,p[k]\big),$$
and the weights are updated as
$$w(k+1) = w(k) + \alpha^*[k]\,p[k].$$
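The direction update and exact line search above can be sketched on a small quadratic performance function (the matrix `Q`, vector `b`, and iteration budget are our assumptions, chosen so the example is self-contained):

```python
import numpy as np

# E(w) = 0.5 w^T Q w + b^T w, with Q symmetric positive definite
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 2.0])

def grad(w):
    return Q @ w + b

w = np.zeros(2)
g = grad(w)
p = -g                                    # initial direction: steepest descent
for k in range(10):
    alpha = -(g @ p) / (p @ Q @ p)        # exact line minimizer (quadratic case)
    w = w + alpha * p
    g_new = grad(w)
    beta = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere form of beta[k]
    p = -g_new + beta * p                 # p[k+1] = -g[k+1] + beta[k+1] p[k]
    g = g_new
    if np.linalg.norm(g) < 1e-10:         # gradient vanished: at the minimum
        break
```

For an $n$-dimensional quadratic with exact line searches, conjugate directions reach the minimizer in at most $n$ steps; here $n = 2$.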
Description of drawings
Fig. 1 is the structure diagram of the control method.
Fig. 2 is the structure diagram of the improved neural network in the method.
Embodiment
The present invention combines the improved network learning method with a diagonal matrix decoupling method to propose a ladle furnace optimization method, where the improved neural network is realized according to the following steps:
(a) Set the initial values of the weights and thresholds, $w_{ij}^p(0)$, $\theta_j^p(0)$, $p = 1, 2, \dots, Q$, where $p$ is the layer index and $Q$ is the total number of layers.
(b) Input the training samples $(I^q, d^q)$, $q = 1, 2, \dots, M$, where $M$ is the number of input-output pairs, and carry out steps (c)–(e) for each sample.
(c) Compute the actual output of each layer, $x^p = f(s^p) = f(w^p x^{p-1})$, where $f(\cdot)$ is the activation function.
(d) Compute the gradient $g[k]$ and the gradient change $\Delta g[k]$.
(e) Revise the weights,
$$w(k+1) = w(k) + \alpha^*[k]\,p[k],$$
where $p[k]$ is a function of the sequences $w(k)$, $\beta[k]$, and $g[k]$, e.g. $p[k+1] = -g[k+1] + \beta[k+1]\,p[k]$.
(f) When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index
$$E(w) = \tfrac{1}{2} w^T Q w + b^T w + c.$$
(g) If the performance index meets the accuracy requirement, i.e. $E \le \varepsilon$, training ends; otherwise go to (b) and continue with the next training cycle. $\varepsilon$ is a small positive number chosen according to the actual conditions.
The computation of $\beta[k]$ is as follows:
$$\beta[k] = \frac{g^T[k]\,\Delta g[k-1]}{g^T[k-1]\,g[k-1]}.$$
The activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warped function based on the sigmoid function, etc.
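The activation choices listed above can be written as one-line sketches (the saturation limits of the piecewise function and the `gain` of the warped sigmoid are our illustrative assumptions):

```python
import math

def sigmoid(s):                 # unipolar sigmoid, range (0, 1)
    return 1.0 / (1.0 + math.exp(-s))

def bipolar(s):                 # bipolar sigmoid (tanh), range (-1, 1)
    return math.tanh(s)

def piecewise(s):               # saturating piecewise-linear function
    return max(-1.0, min(1.0, s))

def warped(s, gain=2.0):        # a warped function based on the sigmoid
    return sigmoid(gain * s)
```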
Revising the weights specifically includes resetting the search direction to the gradient direction after every several iterations, and then iterating again from step (e).
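Steps (a)–(g) can be sketched end to end on a toy problem. This is our simplification, not the patent's controller: a single sigmoid neuron fitting an AND-gate data set, with the restart interval, backtracking line search, tolerance, and iteration budget all chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0., 0., 0., 1.])           # AND target, learnable by one neuron

def f(s):                                # sigmoid activation (step (c))
    return 1.0 / (1.0 + np.exp(-s))

def forward(w):                          # w[:2] are weights, w[2] the threshold
    return f(X @ w[:2] + w[2])

def E(w):                                # performance index (step (f))
    return 0.5 * np.sum((forward(w) - d) ** 2)

def grad(w):                             # gradient of E (step (d))
    x = forward(w)
    delta = (x - d) * x * (1 - x)
    return np.array([delta @ X[:, 0], delta @ X[:, 1], delta.sum()])

w = rng.normal(scale=0.5, size=3)        # step (a): initial weights/threshold
g = grad(w)
p = -g                                   # first direction: steepest descent
eps, restart = 1e-2, 5
for k in range(1, 5001):                 # training cycles (steps (b)-(g))
    alpha = 1.0
    while E(w + alpha * p) > E(w) and alpha > 1e-8:
        alpha *= 0.5                     # backtracking line search for alpha*[k]
    w = w + alpha * p                    # step (e): weight revision
    g_new = grad(w)
    if k % restart == 0:                 # periodic reset to the gradient direction
        p = -g_new
    else:
        beta = g_new @ (g_new - g) / (g @ g + 1e-12)
        p = -g_new + beta * p            # p[k+1] = -g[k+1] + beta[k+1] p[k]
    g = g_new
    if E(w) <= eps:                      # step (g): stopping criterion
        break
```

The periodic reset implements the restart rule described above; without it, accumulated conjugate directions can degrade once the quadratic approximation of $E$ no longer holds.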

Claims (4)

1. A ladle furnace optimization method, characterized in that:
the method combines the following improved network learning method with a diagonal matrix decoupling method;
the improved network learning method proceeds in the following manner:
(a) Set the initial values of the weights and thresholds, $w_{ij}^p(0)$, $\theta_j^p(0)$, $p = 1, 2, \dots, Q$, where $p$ is the layer index and $Q$ is the total number of layers.
(b) Input the training samples $(I^q, d^q)$, $q = 1, 2, \dots, M$, where $M$ is the number of input-output pairs, and carry out steps (c)–(e) for each sample.
(c) Compute the actual output of each layer,
$$x^p = f(s^p) = f(w^p x^{p-1}),$$
where $f(\cdot)$ is the activation function.
(d) Compute the gradient $g[k]$ and the gradient change $\Delta g[k]$.
(e) Revise the weights,
$$w(k+1) = w(k) + \alpha^*[k]\,p[k],$$
where $p[k]$ is a function of the sequences $w(k)$, $\beta[k]$, and $g[k]$, e.g. $p[k+1] = -g[k+1] + \beta[k+1]\,p[k]$.
(f) When all samples in the sample set have gone through steps (c)–(e), one training cycle is complete; compute the performance index
$$E(w) = \tfrac{1}{2} w^T Q w + b^T w + c.$$
(g) If the performance index meets the accuracy requirement, i.e. $E \le \varepsilon$, training ends; otherwise go to (b) and continue with the next training cycle. $\varepsilon$ is a small positive number chosen according to the actual conditions.
2. The method according to claim 1, characterized in that the activation function may be a trigonometric function, a bipolar function, a piecewise function, a sigmoid function, a warped function based on the sigmoid function, etc.
3. The method according to claim 1, characterized in that revising the weights specifically includes resetting the search direction to the gradient direction after every several iterations, and then iterating again from step (e).
4. The method according to claim 1, characterized in that
$$\beta[k] = \frac{g^T[k]\,\Delta g[k-1]}{g^T[k-1]\,g[k-1]}.$$
CN200910048011A 2009-03-23 2009-03-23 Ladle furnace optimization method Pending CN101846971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910048011A CN101846971A (en) 2009-03-23 2009-03-23 Ladle furnace optimization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910048011A CN101846971A (en) 2009-03-23 2009-03-23 Ladle furnace optimization method

Publications (1)

Publication Number Publication Date
CN101846971A true CN101846971A (en) 2010-09-29

Family

ID=42771610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910048011A Pending CN101846971A (en) 2009-03-23 2009-03-23 Ladle furnace optimization method

Country Status (1)

Country Link
CN (1) CN101846971A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019097A (en) * 2012-11-29 2013-04-03 北京和隆优化控制技术有限公司 Optimal control system for steel rolling heating furnace
CN103019097B (en) * 2012-11-29 2015-03-25 北京和隆优化科技股份有限公司 Optimal control system for steel rolling heating furnace
US11475180B2 (en) 2018-03-09 2022-10-18 Tata Consultancy Services Limited System and method for determination of air entrapment in ladles

Similar Documents

Publication Publication Date Title
CN106873379A (en) A kind of sewage disposal method for optimally controlling based on iteration ADP algorithms
CN104698842B (en) A kind of LPV model nonlinear forecast Control Algorithms based on interior point method
CN102968055A (en) Fuzzy PID (Proportion Integration Differentiation) controller based on genetic algorithm and control method thereof
CN111522233B (en) Parameter self-tuning MIMO different factor full-format model-free control method
CN108181802A (en) A kind of controllable PID controller parameter optimization setting method of performance
CN110347192B (en) Glass furnace temperature intelligent prediction control method based on attention mechanism and self-encoder
CN111553118B (en) Multi-dimensional continuous optimization variable global optimization method based on reinforcement learning
CN111522229A (en) Parameter self-tuning MIMO different-factor offset format model-free control method
CN204595644U (en) Based on the aluminum-bar heating furnace temperature of combustion automaton of neural network
CN106292785A (en) Aluminum-bar heating furnace ignition temperature automaton based on neutral net
CN104050505A (en) Multilayer-perceptron training method based on bee colony algorithm with learning factor
CN110097929A (en) A kind of blast furnace molten iron silicon content on-line prediction method
CN103995466B (en) Interval prediction control modeling and optimizing method based on soft constraints
CN109359320B (en) Blast furnace index prediction method based on multiple sampling rate autoregressive distribution hysteresis model
CN105159071A (en) Method for estimating economic performance of industrial model prediction control system in iterative learning strategy
CN106462117A (en) Controlling a target system
CN102520617B (en) Prediction control method for unminimized partial decoupling model in oil refining industrial process
CN104950945A (en) Self-adaptive temperature optimization control method under all working conditions of cement calcination decomposing furnace
CN110399697B (en) Aircraft control distribution method based on improved genetic learning particle swarm algorithm
CN103605284B (en) The cracking waste plastics stove hearth pressure control method that dynamic matrix control is optimized
CN101846971A (en) Ladle furnace optimization method
CN117272782B (en) Hot rolled steel mechanical property prediction method based on self-adaptive multi-branch depth separation
CN105700357A (en) Boiler combustion system control method based on multivariable PID-PFC
CN111522235B (en) MIMO different factor tight format model-free control method with self-setting parameters
CN101900991A (en) Composite PID (Proportion Integration Differentiation) neural network control method based on nonlinear dynamic factor

Legal Events

Date Code Title Description
DD01 Delivery of document by public notice

Addressee: Wang Wei

Document name: Notification of Passing Preliminary Examination of the Application for Invention

C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100929