CN103971163A - Adaptive learning rate wavelet neural network control method based on normalization lowest mean square adaptive filtering
- Publication numbers: CN103971163A (application), CN103971163B (grant)
- Application number: CN201410195894.4A
- Authority: CN (China)
- Prior art keywords: function, wavelet, learning rate, output, neuron
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification (landscape): Feedback Control In General (AREA)
Abstract
The invention relates to the field of wavelet neural network optimization, and in particular to an adaptive-learning-rate wavelet neural network control method based on normalized least-mean-square (NLMS) adaptive filtering. The method comprises the following steps: a control system model is built; all weights of the wavelet network are normalized layer by layer; the wavelet-neuron weights are optimized; the error signal and the training cost are computed; the derivative of the activation function is segmented with a step function; fuzzy rules for fitting the derivative are formulated; the membership function is determined; the proportion of each fuzzy rule in the derivative value is determined; the fuzzy system output yields a linearized form of the activation function; the induced local field and the output of every neuron are determined; each local gradient function is solved; the output layer adjusts its learning rate adaptively; the range of the output-layer learning rate is bounded; the hidden-layer learning rate is adjusted; the synaptic weights are trained; the tracking control signal is output; and closed-loop feedback control is completed. The method increases the rate of convergence and reduces computational complexity.
Description
Technical field
The present invention relates to the field of wavelet neural network optimization, and in particular to an adaptive-learning-rate wavelet neural network control method based on normalized least-mean-square (NLMS) adaptive filtering.
Background technology
Complex systems exhibit many uncertainties, and the nonlinear functions inside such systems are difficult to model, so tracking control based on the system's structure is infeasible. An artificial neural network is a network of interconnected artificial neurons that abstracts and simplifies the human brain at the level of micro-structure and function. It can be regarded as a massively parallel processor built from simple processing units, with a natural capacity to store experiential knowledge and make it available. The network resembles the brain in that its knowledge is learned from the external environment, while the connection weights between neurons store the acquired knowledge. Although each processing unit performs only a simple function, the concurrent activity of a large number of such units gives the network rich functionality at high speed. Moreover, the adaptive ability of neural networks opens a new path for solving nonlinear, uncertain, and ill-defined systems, so artificial neural networks are now widely applied in nonlinear system identification and analysis, control systems, computing, and many other fields. Among training methods for feedforward multilayer networks, the BP algorithm is the most widely used, but it has defects: the error it minimizes is a complex nonlinear function of a high-dimensional weight vector, so it is easily trapped in local minima. The wavelet neural network (wavelet network for short) is a newer network built on wavelet analysis theory: wavelet functions replace conventional activation functions in the hidden layer, and an affine transformation links the wavelet transform to the network coefficients. It combines the advantages of wavelet analysis and neural networks, has strong learning and generalization ability, offers both static nonlinear mapping and dynamic processing in control, and is increasingly adopted in nonlinear, strongly coupled complex systems. However, wavelet networks still track slowly in complex systems or large-scale networks. The root cause is the stochastic-gradient algorithm ubiquitously used for iterative weight updates. Research on this problem is mainly reflected in least-mean-square (LMS) and normalized least-mean-square (NLMS) adaptive filtering, but such results are rare in the neural network field. Following the theory of Simon Haykin, LMS and NLMS can be viewed as single linear neurons designed as multiple-input single-output models of an unknown dynamic system. A wavelet network is therefore an adaptive filter with a more complex topology, and if the wavelet network is suitably modified, NLMS or LMS can be applied to it.
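For orientation, the contrast between the two filter updates can be sketched in a few lines of Python; this sketch is not part of the patent, and the step sizes mu and mu0 and the guard constant sigma are illustrative values:

```python
import numpy as np

def lms_update(w, x, d, mu=0.05):
    """One LMS step: fixed step size mu for a single linear neuron."""
    e = d - w @ x                   # error between desired and filter output
    return w + mu * e * x, e

def nlms_update(w, x, d, mu0=0.5, sigma=1e-3):
    """One NLMS step: the step size is normalized by the input energy,
    so high-energy inputs no longer amplify the gradient noise."""
    e = d - w @ x
    mu = mu0 / (sigma + x @ x)      # sigma > 0 guards against tiny inputs
    return w + mu * e * x, e

w = np.zeros(3)
x, d = np.array([0.4, -1.2, 2.0]), 0.7
w, e = nlms_update(w, x, d)
```

It is exactly this input-energy normalization that the invention transplants from the single linear neuron into the learning rate of a multilayer wavelet network.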
The limitation of LMS and NLMS in weight derivation is that they apply only to linear structures, whereas the nonlinear activation functions of a wavelet network make the derivation very complicated. This problem has long concerned researchers working on hardware implementations of neural networks; in "A Low-Complexity Fuzzy Activation Function for Artificial Neural Networks", Emilio Soria-Olivas uses fuzzy theory to perform local linear regression on neural network activation functions. Compared with traditional fuzzy algorithms, the T-S fuzzy model proposed by Takagi and Sugeno in 1985 has rule consequents that are linear functions of the input variables, so each rule carries more information and fewer rules suffice for control; this means a nonlinear function can be regression-fitted by a relatively simple method. The present invention therefore applies the T-S fuzzy model to local linear regression fitting of the wavelet network's activation functions; the result fits the original function with high precision and thereby overcomes the computational difficulty caused by the complex nonlinear activation functions of the wavelet network. In addition, a wavelet network has multiple inputs, multiple outputs, and a multilayer topology, so its adaptive learning rate can no longer be regulated independently as in an NLMS filter. The invention therefore proposes an NLMS-based adaptive-learning-rate wavelet network control method that reduces the system error in the initial control stage and improves the convergence speed and stability of the control process.
Summary of the invention
The object of the present invention is to provide an adaptive-learning-rate wavelet neural network control method based on normalized least-mean-square adaptive filtering that reduces the system error, improves the convergence and stability of the control process, reduces computational complexity, removes the redundancy caused by a fixed learning rate, avoids divergence, and improves the tracking efficiency of wavelet networks in complex-system control.
The object of the present invention is achieved as follows:
(1) build the control system model: a wavelet network tunes the parameters of an enhanced PID controller; the wavelet network is a multilayer feedback network with multiple inputs and multiple outputs (MIMO), each neuron's function is an activation function, and the state space of the neural network is:

where W_k is the weight space, U_k the network input, Z_k the network output, φ_k the weight-update function, and ψ(W_k, U_k) a parameterized nonlinear function; each weight in the weight space W_k of the wavelet network is generated as a random number uniformly distributed on the interval [-1, 1];
(2) take random numbers uniformly distributed on [-1, 1] as initial weights, and normalize all weights of the wavelet network layer by layer;
(3) optimize the wavelet-neuron weights: centered on the neurons whose excitation function is a wavelet, associate the weights of the two adjacent network layers with the wavelet function type and the neuron count; if the excitation function in layer J is a wavelet function and layers I and K precede and follow layer J, then W_LM and W_MN are the two normalized weight matrices among the three layers, and their association with the wavelet function type and neuron count is:

where K_J is a constant;
(4) introduce the training sample set {x(n), norm(n)}: input the vectors x(1), x(2), …, x(n) in turn, record the network outputs z(1), z(2), …, z(n), and solve the error signal e(n) and training cost ε(n):

e(n) = norm(n) - z(n)
(5) segment the derivative of the activation function with a step function: divide the function into M segments and fit the activation function; the slope of each segment of the activation function corresponds to the value of its derivative;
(6) formulate the fuzzy rules that fit the derivative: the input variables of the T-S model are:

the output is the derivative value k(n), and the fuzzy rules take the form:

where the fuzzy sets of the i-th rule appear in the rule antecedent, b_m denotes the left boundary of the m-th segment (m = 1, 2, …, M), and p_i, q_i and r_i are constants of the fuzzy sets;
(7) determine the membership function: adopt Gaussian functions as membership functions; the degree of membership of each input variable x_j is:

where the two parameters in the formula are the center and the width of the membership function, respectively;
(8) determine the proportion of each fuzzy rule in the derivative value: for the input x = [x_1, x_2], the firing strength μ_i and the normalized activation of each fuzzy rule are:
(9) output the T-S fuzzy system and display the activation function in linearized form: the linear form of the activation function is:

where s(x) is the linearized activation function, a and b are the left and right boundaries of the function, θ_1 and θ_2 are the boundary thresholds, k(n) and d(n) are linear-region coefficients, and λ is a constant coefficient;
(10) determine each neuron's induced local field and output: the induced local field and the output signal of neuron j are, respectively:

where v_j(n) is the induced local field, w_ij are the weights, x_i(n) is the output of the upstream-layer neurons, W_ij and X_i(n) are the vectors formed by w_ij and x_i(n), respectively, I is the number of upstream-layer neurons, and the activation function is that of layer j;
(11) solve each local gradient function δ_j(n); the local gradient δ_j(n) is:

after the activation function is linearized in step (9), the linearized gradient δ_jL(n) is expressed as:
(12) output-layer adaptive learning-rate adjustment: the adaptive learning rate is:

where k_k(n) = s'_k(v_k(n)) = constant c_k, and σ_v (0 < σ_v < 1);
(13) determine the range of the output-layer learning rate by a threshold restriction:
(14) adjust the hidden-layer learning rate: every neuron of the hidden layer adopts the same learning rate:

where μ_j denotes the learning rate of the j-th hidden-layer neuron and K is the number of output-layer neurons;
(15) train the synaptic weights: introduce the adaptive learning rates of steps (12) and (14), while the local gradient keeps the nonlinear δ(n):

w(n+1)=w(n)+Δw(n)
Δw(n)=μ(n)δ(n)x(n);
(16) increase the cycle count by 1 and return to step (10) until the stopping criterion is met; output the tracking control signal;
(17) input the control signal to the actuator, fuse it with the system computation, output the controlled parameter value, compare it with the desired value, and complete closed-loop feedback control (a condensed sketch of this loop follows).
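A condensed Python sketch of the loop formed by steps (1)-(17). Since the patent's formula images are not reproduced in this text, the network size (2-8-1), the reference signal, the cost ε(n) = ½e²(n), the NLMS-style rate μ = μ0/(σ_v + ‖x‖²k²), the clamp bounds, and the averaged hidden-layer rate are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def morlet(v):                 # hidden-layer wavelet excitation (step (1))
    return np.cos(1.75 * v) * np.exp(-0.5 * v * v)

def morlet_d(v):               # its derivative, used in the hidden gradient
    return (-1.75 * np.sin(1.75 * v) - v * np.cos(1.75 * v)) * np.exp(-0.5 * v * v)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# steps (1)-(3): uniform random weights on [-1, 1], normalized layer by layer
W1 = rng.uniform(-1.0, 1.0, (8, 2)); W1 /= np.linalg.norm(W1)
W2 = rng.uniform(-1.0, 1.0, (1, 8)); W2 /= np.linalg.norm(W2)

mu0, sigma_v, mu_lo, mu_hi = 0.5, 0.1, 1e-4, 1.0      # assumed constants

for n in range(1, 501):                     # steps (4)-(16): training loop
    x = rng.uniform(-1.0, 1.0, 2)           # training input x(n)
    d = np.sin(np.pi * x[0])                # assumed reference norm(n)
    v1 = W1 @ x; h = morlet(v1)             # hidden local field and output
    v2 = W2 @ h; z = sigmoid(v2)            # output neuron (step (10))
    e = d - z                               # e(n) = norm(n) - z(n) (step (4))
    k = z * (1.0 - z)                       # sigmoid derivative; the patent
                                            # replaces this with its T-S fit
    delta2 = e * k                          # output local gradient (step (11))
    # steps (12)-(13): NLMS-style adaptive output rate, then threshold clamp
    mu2 = np.clip(mu0 / (sigma_v + (h @ h) * k * k), mu_lo, mu_hi)
    mu1 = float(np.mean(mu2))               # step (14): shared hidden rate
    delta1 = (W2.T @ delta2) * morlet_d(v1) # hidden local gradient
    W2 += mu2 * np.outer(delta2, h)         # step (15): Δw = μ(n)δ(n)x(n)
    W1 += mu1 * np.outer(delta1, x)
```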
Beneficial effects of the present invention:

The invention adjusts the learning rate adaptively in real time, which accelerates convergence, reduces computational complexity, and yields smoother iteration curves. Compared with a fixed learning rate, it improves learning efficiency. Compared with adaptive filtering based on the LMS algorithm, regulating the learning rate with NLMS-based ideas is more targeted, so the algorithm converges faster; at the same time the redundancy of earlier learning-rate control methods is removed, divergence is avoided, and the tracking efficiency of the control system is improved.
Brief description of the drawings
Fig. 1: structure of the wavelet network with NLMS-based adaptive learning rate;
Fig. 2: flow chart of the technical scheme;
Fig. 3: wavelet-network tracking control principle for a complex system;
Fig. 4: activation functions and their derivatives;
Fig. 5: fitting result of the activation-function derivative after T-S fuzzy reasoning;
Fig. 6: input signal-flow graph of neuron j;
Fig. 7: tracking-control fitting curve and error signal for nonlinear function (1);
Fig. 8: tracking-control fitting curve and error signal for nonlinear function (2).
Embodiment
The present invention is described further below with reference to the accompanying drawings.
Regulating the learning rate with NLMS-based ideas is more targeted: the method updates the learning rate in real time during weight updating, thereby reducing the system error, improving the convergence and stability of the control process, reducing computational complexity, removing the redundancy caused by a fixed learning rate, avoiding divergence, and improving the tracking efficiency of the wavelet network in complex-system control. The adaptive learning-rate method of the invention runs on a wavelet-network online learning platform, and the embodiment mainly comprises the following key steps:
Step 1: Build the control system model. A wavelet network tunes the parameters of an enhanced PID controller; the wavelet network is a MIMO multilayer feedback network, each neuron's function is an activation function (e.g., a sigmoid function), the wavelet basis is a continuously differentiable function (e.g., the Morlet function), and a loop stopping criterion is set. The state-space model of the neural network can be expressed as:

where W_k is the weight space, U_k the network input, Z_k the network output, φ_k the weight-update function, and ψ(W_k, U_k) a parameterized nonlinear function. Let the weight space of the wavelet network be W_k; each weight in the weight space is generated as a random number uniformly distributed on [-1, 1].
Step 2: On the basis of step 1, take random numbers uniformly distributed on [-1, 1] as initial weights and normalize all weights of the wavelet network layer by layer.
Step 3: Optimize the wavelet-neuron weights. Centered on the neurons whose excitation function is a wavelet, associate the weights of the two adjacent network layers with the wavelet function type and the neuron count. If the excitation function in layer J is a wavelet function and layers I and K precede and follow layer J, then W_LM and W_MN are the two normalized weight matrices among the three layers, and their association with the wavelet function type and neuron count is:

where K_J is a constant determined by the wavelet function; different wavelet functions have different constants.
Step 4: Introduce the training sample set {x(n), norm(n)}. Input the vectors x(1), x(2), …, x(n) in turn and record the network outputs z(1), z(2), …, z(n). Solve the error signal e(n) and the training cost function ε(n):

e(n)=norm(n)-z(n)
Step 5: Segment the derivative of the activation function with a step function. According to the variation of the derivative, divide the function into M segments and fit the activation function. Because the slope of each segment of the activation function corresponds to the value of its derivative, the segmentation of the slope can be converted into the segmentation of the derivative.
Step 6: Formulate the fuzzy rules that fit the derivative. The T-S model is a multiple-input single-output model with input variables:

the output is the derivative value k(n), and the fuzzy rules take the following form:

where the fuzzy sets of the i-th rule appear in the rule antecedent, b_m denotes the left boundary of the m-th segment (m = 1, 2, …, M), and p_i, q_i and r_i are constants related to the fuzzy sets.
Step 7: Determine the membership function. The invention adopts Gaussian functions as membership functions; the degree of membership of each input variable x_j is:

where the two parameters in the formula are the center and the width of the membership function, respectively.
Step 8: Determine the proportion of each fuzzy rule in the derivative value. For the input x = [x_1, x_2], the firing strength μ_i and the normalized activation of each fuzzy rule are:
Step 9: Output of the T-S fuzzy system and linearization of the activation function. The linear form of the activation function can be described as:

where s(x) is the linearized activation function, a and b are the left and right boundaries of the function, θ_1 and θ_2 are the boundary thresholds, k(n) and d(n) are linear-region coefficients, and λ is a constant coefficient.
Step 10: Solve each neuron's induced local field and output. The induced local field and the output signal of neuron j are, respectively:

where v_j(n) is the induced local field, w_ij are the weights, x_i(n) is the output of the upstream-layer neurons, W_ij and X_i(n) are the vectors formed by w_ij and x_i(n), respectively, I is the number of upstream-layer neurons, and the activation function is that of layer j.
Step 11: Solve each local gradient function δ_j(n). The local gradient δ_j(n) can be expressed as:

after the activation function is linearized in step 9, the linearized gradient δ_jL(n) is expressed as:
Step 12: Output-layer adaptive learning-rate adjustment. To strengthen the learning efficiency of the wavelet-network weights, the adaptive learning rate proposed by the invention is:

where k_k(n) = s'_k(v_k(n)) = constant c_k. When the input x(n) is small, the normalizing term can also become very small, which may cause numerical difficulty; the constant σ_v (0 < σ_v < 1) is therefore adopted to overcome this problem.
Step 13: Range of the output-layer learning rate. To guarantee the validity of μ_k(n+1), the invention imposes a threshold restriction on it:
Step 14: Hidden-layer learning-rate adjustment. On the basis of the output-layer adaptation of step 12, every neuron of the hidden layer adopts the same learning rate:

where μ_j denotes the learning rate of the j-th hidden-layer neuron and K is the number of output-layer neurons.
Step 15: Training of the synaptic weights. The adaptive learning rates of steps 12 and 14 are introduced into the synaptic weight adjustment, but to keep the original advantage of the activation function, the local gradient still adopts the nonlinear δ(n):

w(n+1)=w(n)+Δw(n)
Δw(n)=μ(n)δ(n)x(n)
Step 16: Increase the cycle count by 1 and return to step 10 until the stopping criterion is met; output the tracking control signal.
Step 17: Input the control signal to the actuator and fuse it with the system computation; under a certain external disturbance, output the controlled parameter value, compare it with the desired value, and complete one cycle of closed-loop feedback control.
The invention proposes a T-S fuzzified activation function and an adaptive learning-rate control method for wavelet neural networks, establishing an NLMS-based adaptive learning-rate regulation method for wavelet networks. Its implementation comprises key elements such as building the control model and the wavelet-network model, T-S fuzzy reasoning on the activation-function derivative, and constructing the adaptive learning-rate models of the output layer and the hidden layer. The learning-rate regulation method runs on a neural-network online learning platform; Fig. 1 shows the system structure of the wavelet network. The specific implementation of the technical scheme (shown in Fig. 2) is detailed below and mainly comprises the following key elements:
Step 1: Build the control system model (Fig. 3). A wavelet network tunes the parameters of an enhanced PID controller to realize tracking control of the complex system. The wavelet network adopts a multiple-input single-output (MISO) multilayer feedback structure, each neuron's function is an activation function (e.g., a sigmoid or logistic function), the wavelet basis is a continuously differentiable function (e.g., the Morlet function), and a loop stopping criterion is set. The state-space model of the wavelet network can be expressed as:

where W_k is the weight space, U_k the network input, Z_k the network output, φ_k the weight-update function, and ψ(W_k, U_k) a parameterized nonlinear function. Let the weight space of the wavelet network be W_k; each weight in the weight space is generated as a random number uniformly distributed on [-1, 1].
Step 2: On the basis of step 1, take random numbers uniformly distributed on [-1, 1] as initial weights and normalize all weights of the wavelet network layer by layer. For example, let W_MN be the weight matrix between layer M and layer N composed of elements w_mn (m = 1…M; n = 1…N); the normalized weight matrix W_MN is:
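A minimal sketch of one possible reading of this per-layer unitization, scaling each layer's weight matrix to unit Frobenius norm; per-row or per-element scaling would be equally plausible readings, since the patent's formula appears only in a figure:

```python
import numpy as np

def unitize_by_layer(weight_matrices):
    """Scale every layer's weight matrix to unit Frobenius norm (assumed
    interpretation of the patent's layer-wise 'unitization')."""
    return [W / np.linalg.norm(W) for W in weight_matrices]

rng = np.random.default_rng(1)
layers = [rng.uniform(-1, 1, (8, 2)),    # weights into the hidden layer
          rng.uniform(-1, 1, (1, 8))]    # weights out of the hidden layer
layers = unitize_by_layer(layers)
print([float(np.linalg.norm(W)) for W in layers])   # each norm is 1.0
```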
Step 3: Optimize the wavelet-neuron weights. This step is centered on the neurons whose excitation function is a wavelet, and associates the weights of the two adjacent network layers with the wavelet function type and the neuron count. Let the excitation function in some hidden layer be a wavelet function; the invention adopts, for example, the Morlet function. If the excitation function in layer J is a wavelet function and layers I and K precede and follow layer J, then W_LM and W_MN are the two normalized weight matrices among the three layers, and their association with the wavelet function type and neuron count is:

where K_J is a constant determined by the wavelet function; different wavelet functions have different constants.
Step 4: Introduce the training sample set {x(n), norm(n)}. Input the vectors x(1), x(2), …, x(n) in turn and record the network outputs z(1), z(2), …, z(n). Solve the error signal e(n) and the training cost function ε(n):

e(n)=norm(n)-z(n)
Step 5: Segment the derivative of the activation function with a step function. The basic idea of the piecewise-linear approximation strategy is to approach the activation function's derivative with a series of line segments; the invention uses a step-function approximation, dividing the function into M segments according to the variation of the derivative and thereby fitting the activation function. Because the slope of each segment corresponds to the value of the derivative, the segmentation of the slope can be converted into the segmentation of the derivative.
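A short sketch of this step-function segmentation, fitting the sigmoid derivative with M constant segments; the interval [-6, 6] and M = 12 are illustrative choices, not values from the patent:

```python
import numpy as np

def sigmoid_d(v):
    s = 1.0 / (1.0 + np.exp(-v))
    return s * (1.0 - s)

def staircase_fit(f, a=-6.0, b=6.0, M=12):
    """Approximate f on [a, b] by M constant segments (a step function);
    each segment takes f's value at its midpoint."""
    edges = np.linspace(a, b, M + 1)           # segment borders b_1..b_{M+1}
    mids = 0.5 * (edges[:-1] + edges[1:])
    return edges, f(mids)

def staircase_eval(v, edges, vals):
    """Look up the segment containing v and return its constant value."""
    idx = np.clip(np.searchsorted(edges, v) - 1, 0, len(vals) - 1)
    return vals[idx]

edges, vals = staircase_fit(sigmoid_d)
print(staircase_eval(np.array([-1.0, 0.0, 2.5]), edges, vals))
```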
Step 6: Formulate the fuzzy rules that fit the derivative. Let the T-S fuzzy model be a multiple-input single-output system with two crisp input variables:

the crisp output of the fuzzy system is the derivative value k(n), and the fuzzy rules take the following form:

where the fuzzy sets of the i-th rule appear in the rule antecedent; the first input is the neuron input signal, i.e., the input signal of the activation function, whose computation is described in a later step. Let the derivative be divided into M segments, with b_m denoting the left boundary of the m-th segment (m = 1, 2, …, M). p_i, q_i and r_i are constants related to the fuzzy sets; they reflect the inherent characteristics of the function.
Step 7: Determine the membership function. The invention adopts Gaussian functions as membership functions. According to the rules already set, the degree of membership of each input variable x_j in the input x = [x_1, x_2] is:

where the two parameters in the formula are the center and the width of the membership function, respectively.
Step 8: Determine the proportion of each fuzzy rule in the derivative value. Fuzzy reasoning yields the firing strength μ_i of each rule for the input x = [x_1, x_2], and normalization yields the activation of each rule, expressed as:
Step 9: Output of the T-S fuzzy system and linearization of the activation function. From steps 6 to 8, the derivative is obtained as:

The activation function adopted here is bounded and continuously differentiable, and its linear form can be described as:

where s(·) is the linearized activation function. As the formula shows, the activation function is divided into three parts: the left boundary, the right boundary, and the linear fitting region; a and b are the left and right boundaries of the function, θ_1 and θ_2 are the boundary thresholds, k(n) and d(n) are linear-region coefficients, and λ is a constant coefficient. Fig. 4 shows the wavelet function of the hidden layer and the sigmoid function of the output layer together with their derivatives, and Fig. 5 shows the fitting effect on the derivative after T-S fuzzy reasoning.
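A compact sketch of the T-S inference chain of steps 6-9 for a single input variable (the patent uses two inputs); the rule centers, the Gaussian widths, and the finite-difference consequents p_i, r_i are all assumptions for illustration:

```python
import numpy as np

def sigmoid_d(v):
    s = 1.0 / (1.0 + np.exp(-v))
    return s * (1.0 - s)

# assumed rule base: one rule per region, Gaussian antecedents (step 7)
centers = np.linspace(-6.0, 6.0, 9)
width = 1.0
# first-order consequents k_i(v) = p_i * v + r_i fitted locally (step 6)
p = (sigmoid_d(centers + 0.1) - sigmoid_d(centers - 0.1)) / 0.2
r = sigmoid_d(centers) - p * centers

def ts_derivative(v):
    """T-S output: memberships -> normalized firing strengths (step 8)
    -> blended linear consequents (step 9)."""
    mu = np.exp(-(((v - centers) / width) ** 2))   # Gaussian memberships
    w = mu / mu.sum()                              # normalized activations
    return float(np.sum(w * (p * v + r)))          # weighted linear rules

for v in (-2.0, 0.0, 1.5):
    print(v, ts_derivative(v), float(sigmoid_d(v)))  # fit vs. true value
```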
Step 10: Solve each neuron's induced local field and output. The induced local field of the input layer is the input vector itself, and input neurons contain no activation function. Fig. 6 shows, for iteration n, the input signal-flow graph of a neuron j outside the input layer; the functional signal of its induced local field comes from the outputs of its upstream-layer neurons and the weight vector between those outputs and neuron j:

The output signal of neuron j is:

where v_j(n) is the induced local field, w_ij are the weights, x_i(n) is the output of the upstream-layer neurons, W_ij and X_i(n) are the vectors formed by w_ij and x_i(n), respectively, I is the number of upstream-layer neurons, and the activation function is that of layer j.
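The computation of this step is small enough to sketch directly; the weights and the Morlet activation below are illustrative values, not values from the patent:

```python
import numpy as np

def neuron_forward(x_prev, w_j, phi):
    """Induced local field v_j(n) = w_j . x(n) over the I upstream outputs,
    then the neuron output y_j = phi(v_j) for the layer's activation phi."""
    v_j = float(w_j @ x_prev)
    return v_j, phi(v_j)

morlet = lambda v: np.cos(1.75 * v) * np.exp(-0.5 * v * v)
v, y = neuron_forward(np.array([0.2, -0.5, 0.8]),
                      np.array([0.1, 0.4, -0.3]),
                      morlet)
```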
Step 11: Solve each local gradient function δ_j(n). The local gradient δ_j(n) can be expressed as:

after the activation function is linearized in step 9, the linearized gradient δ_jL(n) is expressed as:
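A sketch of the two local-gradient forms in conventional back-propagation notation (after Haykin); the patent's exact expressions live in its formula images, so this follows the standard definitions, with the T-S linearization replacing s'(v_k) by the fitted value k(n):

```python
import numpy as np

def delta_output(e_k, k_n):
    """Output-layer local gradient; after linearization the derivative
    s'(v_k) is the locally constant fitted value k(n)."""
    return e_k * k_n

def delta_hidden(phi_prime_vj, deltas_out, w_out_col):
    """Hidden-layer local gradient: the neuron's own activation derivative
    times the weight-propagated sum of output-layer gradients."""
    return phi_prime_vj * float(deltas_out @ w_out_col)

d_out = delta_output(e_k=0.3, k_n=0.21)
d_hid = delta_hidden(0.8, np.array([d_out]), np.array([0.5]))
```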
Step 12: Output-layer adaptive learning-rate adjustment. On the basis of steps 9-11, the weight-update process in the network is as follows:

where c_j(n) and the fitted derivative are constants, z_i is the input signal of neuron j, and e(n) is the error signal. The weight-coefficient update formula of the LMS algorithm is:

w(n+1)=w(n)+μx(n)e(n)

Combining this with the application principle of LMS, the linearized neural-network weight adjustment and the LMS weight-coefficient update have approximately the same structure, and the LMS algorithm is a special form of the BP network. Because the T-S fuzzy reasoning proposed in the invention linearizes the activation function locally with high precision, it resolves the inability of LMS to operate inside a nonlinear recursive function. An LMS filter, however, suffers from large gradient noise when the input is large. To overcome this difficulty, the normalized LMS filter (NLMS filter) can be used. NLMS is an extension of LMS and, by the above analysis, shares LMS's restriction to linear recursive functions; the invention therefore adopts the step-size transformation idea of the NLMS filter and, combined with the T-S fuzzy reasoning steps, adjusts the learning rate adaptively as follows:

where k_k(n) = s'_k(v_k(n)) = constant c_k. Because the weight-update process of a wavelet network is more complex than that of an adaptive filter, especially the update of the hidden layer, μ_k in the formula denotes the learning rate of the k-th output-layer neuron, and the hidden-layer learning rate needs further discussion. When the input x(n) is small, the normalizing term can also become very small, which may cause numerical difficulty; the constant σ_v (0 < σ_v < 1) is therefore adopted to overcome this problem.
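One plausible NLMS-style output-layer rate, with the threshold restriction of step 13 folded in as a clamp; the exact formula and every constant here are assumptions, since the patent's expression appears only in a figure:

```python
import numpy as np

def output_layer_rate(x, k_n, mu0=0.5, sigma_v=0.1, mu_lo=1e-4, mu_hi=1.0):
    """Assumed NLMS-style rate for output neuron k: the fixed step mu0 is
    normalized by the neuron's input energy scaled by k(n)^2; sigma_v
    (0 < sigma_v < 1) guards against near-zero inputs, and the result is
    clamped to [mu_lo, mu_hi] as in step 13."""
    return float(np.clip(mu0 / (sigma_v + (x @ x) * k_n * k_n), mu_lo, mu_hi))

mu_k = output_layer_rate(np.array([0.2, -0.5, 0.8]), k_n=0.21)
```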
Step 13: Range of the output-layer adaptive learning rate. To guarantee the validity of μ_k(n+1), the invention imposes a threshold restriction on it:
Step 14: Hidden-layer learning-rate adjustment. The hidden-layer weight update is based on the output-layer update, and by the update rule every output-layer neuron contributes equally to the hidden-layer update. Therefore, on the basis of the output-layer adaptation of step 12, every hidden-layer neuron j adopts the same learning rate:

where μ_j denotes the learning rate of the j-th hidden-layer neuron and K is the number of output-layer neurons.
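Under the assumption that "equal contribution" of the K output neurons means averaging their rates, the shared hidden-layer rate can be sketched as follows; the patent's actual formula is in a figure and may differ:

```python
import numpy as np

def hidden_layer_rate(output_rates):
    """Shared hidden-layer rate derived from the K output-layer rates
    (assumed here to be their mean)."""
    return float(np.mean(output_rates))

mu_j = hidden_layer_rate([0.12, 0.30, 0.21])   # K = 3 output neurons
```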
Step 15: Training of the synaptic weights. The adaptive learning rates of steps 12 and 14 are introduced into the synaptic weight adjustment, but to keep the original advantage of the activation function, the local gradient still adopts the nonlinear δ(n), specifically:

w(n+1)=w(n)+Δw(n)
Δw(n)=μ(n)δ(n)x(n)
Step 16: Increase the cycle count by 1 and return to step 10 until the stopping criterion is met; output the tracking control signal.
Step 17: Input the control signal to the actuator and fuse it with the system computation; under a certain external disturbance, output the controlled parameter value, compare it with the desired value, and complete one cycle of closed-loop feedback control. As shown in Figs. 7 and 8, the network is trained to fit the nonlinear functions norm = a_1 sin(b_1 π n) + c_1 log_d n and norm = a_2 cos(b_2 π n), respectively, and comparative simulations are run with a fixed learning rate and with the variable learning-rate method of the invention.
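The two benchmark references can be generated as below and fed to the training loop once with a fixed rate and once with the adaptive rate; the coefficients a_1, b_1, c_1, d, a_2, b_2 are unspecified in this text and chosen arbitrarily here:

```python
import numpy as np

a1, b1, c1, d = 1.0, 0.05, 0.5, 10.0    # assumed coefficients
a2, b2 = 1.0, 0.08

n = np.arange(1, 201)
norm1 = a1 * np.sin(b1 * np.pi * n) + c1 * np.log(n) / np.log(d)  # function (1)
norm2 = a2 * np.cos(b2 * np.pi * n)                               # function (2)
# use norm1/norm2 as the reference signal of the training sketch above and
# compare the error curves e(n) for fixed versus adaptive learning rates
```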
Claims (1)
1. An adaptive-learning-rate wavelet neural network control method based on normalized least-mean-square adaptive filtering, characterized by comprising the following steps:
(1) build the control system model: a wavelet network tunes the parameters of an enhanced PID controller; the wavelet network is a MIMO multilayer feedback network, each neuron's function is an activation function, and the state space of the neural network is:

where W_k is the weight space, U_k the network input, Z_k the network output, φ_k the weight-update function, and ψ(W_k, U_k) a parameterized nonlinear function; each weight in the weight space W_k of the wavelet network is generated as a random number uniformly distributed on the interval [-1, 1];
(2) take random numbers uniformly distributed on [-1, 1] as initial weights, and normalize all weights of the wavelet network layer by layer;
(3) optimize the wavelet-neuron weights: centered on the neurons whose excitation function is a wavelet, associate the weights of the two adjacent network layers with the wavelet function type and the neuron count; if the excitation function in layer J is a wavelet function and layers I and K precede and follow layer J, then W_LM and W_MN are the two normalized weight matrices among the three layers, and their association with the wavelet function type and neuron count is:

where K_J is a constant;
(4) introduce the training sample set {x(n), norm(n)}: input the vectors x(1), x(2), …, x(n) in turn, record the network outputs z(1), z(2), …, z(n), and solve the error signal e(n) and training cost ε(n):

e(n)=norm(n)-z(n)
(5) segment the derivative of the activation function with a step function: divide the function into M segments and fit the activation function; the slope of each segment of the activation function corresponds to the value of its derivative;
(6) formulate the fuzzy rules that fit the derivative: the input variables of the T-S model are:

the output is the derivative value k(n), and the fuzzy rules take the form:

where the fuzzy sets of the i-th rule appear in the rule antecedent, b_m denotes the left boundary of the m-th segment (m = 1, 2, …, M), and p_i, q_i and r_i are constants of the fuzzy sets;
(7) determine the membership function: adopt Gaussian functions as membership functions; the degree of membership of each input variable x_j is:

where the two parameters in the formula are the center and the width of the membership function, respectively;
(8) determine the proportion of each fuzzy rule in the derivative value: for the input x = [x_1, x_2], the firing strength μ_i and the normalized activation of each fuzzy rule are:
(9) output the T-S fuzzy system and display the activation function in linearized form: the linear form of the activation function is:

where s(x) is the linearized activation function, a and b are the left and right boundaries of the function, θ_1 and θ_2 are the boundary thresholds, k(n) and d(n) are linear-region coefficients, and λ is a constant coefficient;
(10) determine each neuron's induced local field and output: the induced local field and the output signal of neuron j are, respectively:

where v_j(n) is the induced local field, w_ij are the weights, x_i(n) is the output of the upstream-layer neurons, W_ij and X_i(n) are the vectors formed by w_ij and x_i(n), respectively, I is the number of upstream-layer neurons, and the activation function is that of layer j;
(11) solve each local gradient function δ_j(n); the local gradient δ_j(n) is:

after the activation function is linearized in step (9), the linearized gradient δ_jL(n) is expressed as:
(12) output-layer adaptive learning-rate adjustment: the adaptive learning rate is:

where k_k(n) = s'_k(v_k(n)) = constant c_k, and σ_v (0 < σ_v < 1);
(13) determine the range of the output-layer learning rate by a threshold restriction:
(14) adjust the hidden-layer learning rate: every neuron of the hidden layer adopts the same learning rate:

where μ_j denotes the learning rate of the j-th hidden-layer neuron and K is the number of output-layer neurons;
(15) train the synaptic weights: introduce the adaptive learning rates of steps (12) and (14), while the local gradient keeps the nonlinear δ(n):

w(n+1)=w(n)+Δw(n)
Δw(n)=μ(n)δ(n)x(n);
(16) increase the cycle count by 1 and return to step (10) until the stopping criterion is met; output the tracking control signal;
(17) input the control signal to the actuator, fuse it with the system computation, output the controlled parameter value, compare it with the desired value, and complete closed-loop feedback control.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201410195894.4A | 2014-05-09 | 2014-05-09 | Adaptive learning rate wavelet neural network control method based on normalization lowest mean square adaptive filtering
Publications (2)

Publication Number | Publication Date
---|---
CN103971163A | 2014-08-06
CN103971163B | 2017-02-15
Family ID: 51240630
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020054694A1 (en) * | 1999-03-26 | 2002-05-09 | George J. Vachtsevanos | Method and apparatus for analyzing an image to direct and identify patterns |
CN1805319A (en) * | 2005-01-10 | 2006-07-19 | 乐金电子(中国)研究开发中心有限公司 | Adaptive array antenna of broadband CDMA frequency divided duplex uplink receiver |
CN101902416A (en) * | 2010-06-30 | 2010-12-01 | 南京信息工程大学 | Feedback blind equalization method of dynamic wavelet neural network based on fuzzy control |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104198893A (en) * | 2014-09-24 | 2014-12-10 | 中国科学院电工研究所 | Self-adapting fault current detection method |
CN104198893B (en) * | 2014-09-24 | 2017-03-15 | 中国科学院电工研究所 | Adaptive failure electric current detecting method |
WO2016062044A1 (en) * | 2014-10-24 | 2016-04-28 | 华为技术有限公司 | Model parameter training method, device and system |
CN111353589B (en) * | 2016-01-20 | 2024-03-01 | 中科寒武纪科技股份有限公司 | Apparatus and method for performing artificial neural network forward operations |
CN111353589A (en) * | 2016-01-20 | 2020-06-30 | 中科寒武纪科技股份有限公司 | Apparatus and method for performing artificial neural network forward operations |
CN111310904B (en) * | 2016-04-29 | 2024-03-08 | 中科寒武纪科技股份有限公司 | Apparatus and method for performing convolutional neural network training |
CN111310904A (en) * | 2016-04-29 | 2020-06-19 | 中科寒武纪科技股份有限公司 | Apparatus and method for performing convolutional neural network training |
CN106059532A (en) * | 2016-06-02 | 2016-10-26 | 国网山东省电力公司济宁供电公司 | Multifunctional self-adaptive filter based on wavelet neural network and filtering method |
CN106059532B (en) * | 2016-06-02 | 2018-10-02 | 国网山东省电力公司济宁供电公司 | A kind of multifunctional adaptive filter and filtering method based on wavelet neural network |
CN109359120A (en) * | 2018-11-09 | 2019-02-19 | 阿里巴巴集团控股有限公司 | Data-updating method, device and equipment in a kind of model training |
CN111489412B (en) * | 2019-01-25 | 2024-02-09 | 辉达公司 | Semantic image synthesis for generating substantially realistic images using neural networks |
CN111489412A (en) * | 2019-01-25 | 2020-08-04 | 辉达公司 | Semantic image synthesis for generating substantially realistic images using neural networks |
CN109886392B (en) * | 2019-02-25 | 2021-04-27 | 深圳市商汤科技有限公司 | Data processing method and device, electronic equipment and storage medium |
CN109886392A (en) * | 2019-02-25 | 2019-06-14 | 深圳市商汤科技有限公司 | Data processing method and device, electronic equipment and storage medium |
CN110782017B (en) * | 2019-10-25 | 2022-11-22 | 北京百度网讯科技有限公司 | Method and device for adaptively adjusting learning rate |
CN110782017A (en) * | 2019-10-25 | 2020-02-11 | 北京百度网讯科技有限公司 | Method and device for adaptively adjusting learning rate |
CN110866608B (en) * | 2019-10-31 | 2022-06-07 | 同济大学 | Self-adaptive learning rate calculation method |
CN110866608A (en) * | 2019-10-31 | 2020-03-06 | 同济大学 | Self-adaptive learning rate calculation method |
Also Published As
Publication number | Publication date |
---|---|
CN103971163B (en) | 2017-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103971163A (en) | Adaptive learning rate wavelet neural network control method based on normalization lowest mean square adaptive filtering | |
Wang et al. | A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks | |
Chai et al. | Mamdani model based adaptive neural fuzzy inference system and its application | |
CA2414707C (en) | Computer method and apparatus for constraining a non-linear approximator of an empirical process | |
Asaad et al. | Back Propagation Neural Network (BPNN) and sigmoid activation function in multi-layer networks | |
Wang et al. | Fixed-time synchronization for complex-valued BAM neural networks with time-varying delays via pinning control and adaptive pinning control | |
CN103399487A (en) | Nonlinear MIMO (multiple input multiple output) system-based decoupling control method and device | |
Zhou et al. | Adaptive NN control for nonlinear systems with uncertainty based on dynamic surface control | |
Yang et al. | Synchronization for fractional-order reaction–diffusion competitive neural networks with leakage and discrete delays | |
CN103926832A (en) | Method for self-adaptively adjusting learning rate by tracking and controlling neural network | |
Li et al. | Control of discrete chaotic systems based on echo state network modeling with an adaptive noise canceler | |
CN107255920A (en) | PID control method and apparatus and system based on network optimization algorithm | |
Wang et al. | Application of artificial neural networks in chemical process control | |
CN104050508B (en) | Self-adaptive wavelet kernel neural network tracking control method based on KLMS | |
Leng et al. | A hybrid learning algorithm with a similarity-based pruning strategy for self-adaptive neuro-fuzzy systems | |
Uppal et al. | Neuro-fuzzy based fault diagnosis applied to an electro-pneumatic valve | |
Zhao et al. | Research progress of chemical process control and optimization based on neural network | |
Savran | An adaptive recurrent fuzzy system for nonlinear identification | |
Johnson et al. | Adaptive control using combined online and background learning neural network | |
Hasan et al. | Design and Implemetation of a Neural Control System and Performance Characterization with PID Controller for Water Level Control | |
Dawy et al. | The most general intelligent architectures of the hybrid neuro-fuzzy models | |
Treesatayapun | Fuzzy rules emulated network and its application on nonlinear control systems | |
Ahmadi et al. | A Higher Order Online Lyapunov-Based Emotional Learning for Rough-Neural Identifiers | |
Chu et al. | Neural network based recursive terminal sliding mode control and its application to active power filters | |
Gong et al. | Research of oil pump control based on fuzzy neural network PID algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |