CN114063444A - Parameter setting method of fractional order PID controller based on RBF neural network - Google Patents
- Publication number: CN114063444A
- Application number: CN202111579779.3A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B11/00—Automatic controllers
- G05B11/01—Automatic controllers electric
- G05B11/36—Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
- G05B11/42—Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential for obtaining a characteristic which is both proportional and time-dependent, e.g. P. I., P. I. D.
Abstract
A parameter setting method of a fractional order PID controller based on an RBF neural network comprises the following steps: S1, initializing each network parameter; S2, sampling the given input r(k) and the system output y(k) to obtain the system control error e(k); S3, constructing a dynamic RBF neural network on line and adjusting its parameters to obtain the output y_m(k) of the neural network identifier and the Jacobian identification information of the controlled object; S4, adjusting the proportional coefficient, integral coefficient, differential coefficient, integral order λ and differential order μ of the fractional order PID controller by the gradient descent method according to the system error function; S5, calculating the controller output u(k) from the time domain form of the fractional order PID controller; S6, setting k = k + 1 and performing the next sampling control. The design not only approaches the optimal solution of the system automatically in a self-learning mode, but also effectively improves the control efficiency.
Description
Technical Field
The invention relates to a parameter setting method of a fractional order PID controller based on an RBF neural network, and is particularly suitable for parameter tuning in the field of automatic control.
Background
In recent years, with the development of fractional calculus theory, experiments have shown that, in the field of automatic control, the dynamic and static performance of a PID controller based on fractional calculus is better than that of an integer-order PID controller. The fractional order PID controller generalizes the traditional PID controller to the fractional order domain; it has two additional parameters, the integral order λ and the differential order μ, and therefore a more flexible adjustment range. However, with more parameters, parameter tuning of the fractional order PID controller becomes more difficult.
In actual field applications, for example, adopting a fractional order PID controller in a missile flight control system greatly improves control quality and enhances the missile's quick-response capability, penetration capability and attack precision. In a vehicle steer-by-wire system, a fractional order PID controller likewise exhibits better robustness. Finally, for the motor, a strongly coupled, nonlinear, high-order complex control system, the fractional order PID controller also provides accurate and flexible control. The application of fractional order PID controllers to these fields is thus a research hotspot and an inevitable trend, with broad prospects in practice.
Disclosure of Invention
The invention aims to overcome the difficulty of tuning a fractional order PID controller in the prior art, and provides a parameter setting method for a fractional order PID controller based on an RBF neural network operating in a self-learning mode. In order to achieve the above purpose, the technical solution of the invention is as follows:
a parameter setting method of a fractional order PID controller based on an RBF neural network comprises the following steps:
s1, establishing an RBF neural network model, determining the number of input layer neurons as n, the number of hidden layer neurons as m and the number of output neurons as 1;
S2, initializing each parameter of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α, and the initial values of the fractional order PID controller parameters: K_P(0), K_I(0), K_D(0), λ(0), μ(0);
The input of the RBF neural network is X = [x_1, x_2, …, x_i, …, x_n] (i = 1, 2, …, n);
The Gaussian activations of the hidden layer neurons are H = [h_1, h_2, …, h_j, …, h_m]^T (j = 1, 2, …, m);
Wherein the Gaussian function inside the jth hidden layer neuron is:

h_j = exp(−‖X − C_j‖² / (2 b_j²))   (1)

in the formula: C_j is the center vector inside the jth hidden layer neuron of the RBF neural network, denoted C_j = [c_j1, c_j2, …, c_ji, …, c_jn]^T, where c_ji is the center point of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, …, n), and b_j is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, m);
the base width vector of the whole RBF neural network is:

B = [b_1, b_2, …, b_m]^T   (2)

the weight vector of the RBF neural network is:

W = [w_1, w_2, …, w_j, …, w_m]^T   (3)

in the formula: w_j is the weight coefficient connecting each hidden layer neuron to the output (j = 1, 2, …, m);
S3, running the dynamic RBF neural network on line to obtain the output y_m(k) of the neural network identifier:

y_m(k) = w_1 h_1 + w_2 h_2 + … + w_m h_m   (4)

in the formula: w_1, w_2, …, w_j, …, w_m are the weight coefficients connecting each hidden layer neuron to the output, and h_1, h_2, …, h_m are the Gaussian functions inside each hidden layer neuron;
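Equations (1)–(4) amount to a plain radial-basis forward pass. A minimal Python sketch (the function name and the toy dimensions in the check are illustrative, not from the patent):

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Compute hidden-layer Gaussian activations (1) and identifier output (4).

    x        : list of n inputs            (X in the text)
    centers  : m lists of n center points  (the C_j vectors)
    widths   : m base width parameters     (b_j, equation (2))
    weights  : m output weights            (w_j, equation (3))
    """
    h = []
    for c_j, b_j in zip(centers, widths):
        # h_j = exp(-||X - C_j||^2 / (2 * b_j^2)), the Gaussian of eq. (1)
        dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c_j))
        h.append(math.exp(-dist_sq / (2.0 * b_j ** 2)))
    # y_m(k) = w_1 h_1 + ... + w_m h_m, equation (4)
    y_m = sum(w_j * h_j for w_j, h_j in zip(weights, h))
    return h, y_m

# toy check: with X exactly at a neuron's center, that activation is 1
h, y_m = rbf_forward([0.0, 0.0], [[0.0, 0.0], [1.0, 1.0]], [1.0, 1.0], [0.5, 0.5])
```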
s4, sampling to obtain system input r (k) and output y (k) of the controlled system, adjusting each parameter of the dynamic RBF neural network and calculating Jacobian identification information of the controlled object:
the performance index function of the identifier is:

E_m(k) = (1/2) [y(k) − y_m(k)]²   (5)

in the formula: y(k) is the output of the overall system and y_m(k) is the output of the neural network identifier;
applying the gradient descent method to this function, the update formulas for the output weights, the hidden layer neuron centers and the base width parameters can be derived respectively;
updating of the weight coefficients:

w_j(k) = w_j(k−1) + η[y(k) − y_m(k)]h_j + α[w_j(k−1) − w_j(k−2)]   (6)

in the formula: η is the learning rate, α is the momentum factor, w_j(k) is the weight coefficient of the jth neuron at time k, and w_j(k−1), w_j(k−2) are the weight coefficients of that neuron at the previous time and the time before that;
updating the base width parameter:

Δb_j = [y(k) − y_m(k)] w_j h_j ‖X − C_j‖² / b_j³   (7)

b_j(k) = b_j(k−1) + ηΔb_j + α[b_j(k−1) − b_j(k−2)]   (8)

in the formula: b_j(k) is the base width parameter of the jth hidden layer neuron at time k;
updating the center points:

Δc_ji = [y(k) − y_m(k)] w_j h_j (x_i − c_ji) / b_j²   (9)

c_ji(k) = c_ji(k−1) + ηΔc_ji + α[c_ji(k−1) − c_ji(k−2)]   (10)

in the formula: c_ji(k) is the center point of the ith input in the jth hidden layer neuron at time k;
the Jacobian information (i.e. the sensitivity of the object output to the control input) is computed as:

∂y(k)/∂u(k) ≈ ∂y_m(k)/∂u(k) = Σ_{j=1}^{m} w_j h_j (c_j1 − u(k)) / b_j²   (11)

in the formula: y(k) is the output of the system, u(k) is the output of the controller (taken as the first network input x_1), and y_m(k) is the output of the neural network identifier;
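The identifier updates (6), (8), (10) and the Jacobian estimate (11) can be sketched as follows. The gradient expressions for b_j and c_ji follow the usual RBF-identifier derivation and are an assumption here, and the momentum terms for b and c are omitted for brevity:

```python
import math

def update_identifier(x, centers, widths, w, w_prev, err, eta, alpha):
    """One update step of the RBF identifier, eqs. (6), (8), (10).

    err = y(k) - y_m(k); eta is the learning rate, alpha the momentum factor.
    Returns updated (centers, widths, weights) plus the hidden activations.
    """
    h = []
    for c_j, b_j in zip(centers, widths):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c_j))
        h.append(math.exp(-d2 / (2.0 * b_j ** 2)))
    new_w, new_b, new_c = [], [], []
    for j in range(len(w)):
        # eq. (6): weight update with momentum
        new_w.append(w[j] + eta * err * h[j] + alpha * (w[j] - w_prev[j]))
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, centers[j]))
        # assumed gradient: delta_b_j = err * w_j * h_j * ||X - C_j||^2 / b_j^3
        new_b.append(widths[j] + eta * err * w[j] * h[j] * d2 / widths[j] ** 3)
        # assumed gradient: delta_c_ji = err * w_j * h_j * (x_i - c_ji) / b_j^2
        new_c.append([centers[j][i]
                      + eta * err * w[j] * h[j] * (x[i] - centers[j][i]) / widths[j] ** 2
                      for i in range(len(x))])
    return new_c, new_b, new_w, h

def jacobian(u, centers, widths, w, h):
    """Eq. (11): dy/du ~ dy_m/du = sum_j w_j h_j (c_j1 - u) / b_j^2, u = x_1."""
    return sum(w[j] * h[j] * (centers[j][0] - u) / widths[j] ** 2
               for j in range(len(w)))
```

With zero identification error the parameters stay put, which is a quick sanity check on the update directions.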
s5, calculating the output u (k) of the controller through the time domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is:

u(t) = K_P e(t) + K_I D^(−λ) e(t) + K_D D^(μ) e(t)   (12)

in the formula: u(t) is the time domain output of the controller, K_P is the proportional coefficient, e(t) is the time domain feedback error of the system, D^(−λ) is the λ-order integral operator, K_I is the integral coefficient, K_D is the differential coefficient, and D^(μ) is the μ-order differential operator;
equation (12) can be discretized directly from the fractional derivative and integral as defined by Grunwald–Letnikov:

u(k) = K_P e(k) + K_I p^λ Σ_{l=0}^{k} q_l e(k−l) + K_D p^(−μ) Σ_{l=0}^{k} d_l e(k−l)   (13)

in the formula: p is the time step, q_l and d_l are binomial coefficients, e(k) = r(k) − y(k), and e(k−l) is the system error at time k−l; wherein the coefficients satisfy the recursions:

q_0 = 1,  q_l = (1 − (1 − λ)/l) q_{l−1}

d_0 = 1,  d_l = (1 − (1 + μ)/l) d_{l−1}
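The discretized control law (13) with recursive binomial coefficients can be sketched as below (function names are illustrative). With λ = μ = 1 it reduces to an ordinary discrete PID — rectangular integration and a backward difference — which is a convenient sanity check:

```python
def binom_coeffs(order, n):
    """GL binomial coefficients: c_0 = 1, c_l = (1 - (order + 1)/l) * c_{l-1}.

    Use order = -lam for the integral coefficients q_l and order = mu for the
    derivative coefficients d_l of equation (13).
    """
    c = [1.0]
    for l in range(1, n + 1):
        c.append((1.0 - (order + 1.0) / l) * c[-1])
    return c

def fopid_output(errors, Kp, Ki, Kd, lam, mu, p):
    """Eq. (13): u(k) = Kp e(k) + Ki p^lam sum(q_l e(k-l)) + Kd p^-mu sum(d_l e(k-l)).

    errors = [e(k), e(k-1), ..., e(0)], newest first; p is the sampling step.
    """
    n = len(errors) - 1
    q = binom_coeffs(-lam, n)  # integral-term coefficients q_l
    d = binom_coeffs(mu, n)    # derivative-term coefficients d_l
    integral = sum(q[l] * errors[l] for l in range(n + 1))
    derivative = sum(d[l] * errors[l] for l in range(n + 1))
    return Kp * errors[0] + Ki * p ** lam * integral + Kd * p ** -mu * derivative
```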
S6, adjusting the proportional coefficient K_P, integral coefficient K_I, differential coefficient K_D, integral order λ and differential order μ of the fractional order PID controller by the gradient descent method according to the system error function;
the system error function is:

E(k) = (1/2) [r(k) − y(k)]² = (1/2) e²(k)

in the formula: y(k) is the output of the system and r(k) is the given input of the system;
the change ΔK_P of the proportional coefficient at time k is:

ΔK_P = −η ∂E/∂K_P = η e(k) · (∂y(k)/∂u(k)) · e(k)

in the formula: E is the system error function, e(k) is the system error at time k, and ∂y(k)/∂u(k) is the Jacobian identification information calculated by formula (11); the same applies to the formulas below;
the change ΔK_I of the integral coefficient at time k is:

ΔK_I = −η ∂E/∂K_I = η e(k) · (∂y(k)/∂u(k)) · p^λ Σ_{l=0}^{k} q_l e(k−l)
the change ΔK_D of the differential coefficient at time k is:

ΔK_D = −η ∂E/∂K_D = η e(k) · (∂y(k)/∂u(k)) · p^(−μ) Σ_{l=0}^{k} d_l e(k−l)
the change Δλ of the integration order at time k is:

Δλ = −η ∂E/∂λ = η e(k) · (∂y(k)/∂u(k)) · ∂u(k)/∂λ

where ∂u(k)/∂λ is obtained by differentiating the integral term of the discretized control law (13) with respect to λ;
the change Δμ of the differentiation order at time k is:

Δμ = −η ∂E/∂μ = η e(k) · (∂y(k)/∂u(k)) · ∂u(k)/∂μ

where ∂u(k)/∂μ is obtained by differentiating the derivative term of the discretized control law (13) with respect to μ;
the parameters of the fractional order PID are:
KP(k+1)=KP(k)+ΔKP
KI(k+1)=KI(k)+ΔKI
KD(k+1)=KD(k)+ΔKD (24)
λ(k+1)=λ(k)+Δλ
μ(k+1)=μ(k)+Δμ
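The five increments of S6 and the updates (24) can be sketched together. The sign convention is plain gradient descent on E(k); the ∂u/∂λ and ∂u/∂μ terms are passed in rather than derived, since their closed forms depend on the chosen discretization (the function and argument names are illustrative):

```python
def tune_fopid(params, e_k, jac, du_dKi, du_dKd, du_dlam, du_dmu, eta):
    """One gradient-descent step on E(k) for (K_P, K_I, K_D, lam, mu).

    e_k   : system error r(k) - y(k)
    jac   : Jacobian dy/du from the identifier, eq. (11)
    du_d* : partial derivatives of u(k) w.r.t. each parameter, taken from the
            discretized control law (du/dK_P = e(k) is handled inline)
    Each increment is Delta = -eta * dE/dparam = eta * e(k) * jac * du/dparam.
    """
    Kp, Ki, Kd, lam, mu = params
    g = eta * e_k * jac
    return (Kp + g * e_k,
            Ki + g * du_dKi,
            Kd + g * du_dKd,
            lam + g * du_dlam,
            mu + g * du_dmu)
```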
S7, set k = k + 1 and return to S3 to perform the next sampling control.
Compared with the prior art, the invention has the beneficial effects that:
1. The parameter setting method of the RBF neural network-based fractional order PID controller automatically approximates the optimal solution of the fractional order PID in a self-learning mode, and the RBF neural network has stronger local approximation capability and learns faster than a common network. The five parameters of the fractional order PID can be tuned automatically, which greatly improves the control efficiency.
2. The RBF neural network in this parameter setting method can approximate a nonlinear function with arbitrary precision and trains faster than a general neural network; it is particularly well suited to the nonlinear, strongly coupled, high-order complex control system of a permanent magnet synchronous motor. Compared with the traditional PID, the fractional order PID based on the RBF neural network has a fast response speed and high control precision, can effectively compensate time delay, and has strong robustness and self-adaptive capability.
Drawings
FIG. 1 is a structural block diagram of the RBF neural network self-tuning PI^λD^μ controller of the present invention.
Fig. 2 is a diagram of the RBF neural network architecture.
Fig. 3 is a flowchart of the algorithm of embodiment 2 of the present invention.
Fig. 4 is a control configuration block diagram of embodiment 2 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following description and embodiments in conjunction with the accompanying drawings.
Referring to fig. 1 to 2, a parameter tuning method of a fractional order PID controller based on an RBF neural network is characterized in that:
The parameter setting method comprises steps S1 through S7, identical to those set forth in the Disclosure of the Invention above.
The principle of the invention is illustrated as follows:
Jacobian identification information: the Jacobian matrix carries the sensitivity information of the object output to changes in the control input, and can be calculated directly from the identifier.
Grunwald–Letnikov: a definition of the fractional calculus from which a fractional differential operator can be discretized directly; the definition is as follows:

D^β f(t) = lim_{h→0} h^(−β) Σ_{l=0}^{[(t−a)/h]} (−1)^l C(β, l) f(t − lh)

where D^β is the β-order differential operator applied to the function f(t), h is the step size, a is the lower terminal, and C(β, l) is the binomial coefficient.
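A quick numeric sanity check of the GL definition (illustrative code, not from the patent): truncating the limit at a finite step h gives a computable sum, and for β = 1 it collapses to the ordinary backward difference.

```python
import math

def gl_derivative(f, t, beta, h, a=0.0):
    """GL approximation: D^beta f(t) ~ h^-beta * sum_l (-1)^l C(beta, l) f(t - l*h)."""
    n = int(round((t - a) / h))
    coeff, total = 1.0, 0.0
    for l in range(n + 1):
        total += coeff * f(t - l * h)
        coeff *= -(beta - l) / (l + 1)  # advance (-1)^l C(beta, l) recursively
    return total / h ** beta

# beta = 1 on f(t) = t recovers the ordinary first derivative
d1 = gl_derivative(lambda t: t, 1.0, 1.0, 0.01)
```

For a fractional order, e.g. β = 0.5 on f(t) = 1, the sum approaches t^(−1/2)/Γ(1/2) = 1/√π as h → 0, illustrating that fractional derivatives of a constant are nonzero.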
Example 1:
Steps S1 through S7 are carried out exactly as set forth in the Disclosure of the Invention above.
Example 2:
Referring to fig. 3 and 4, which show the vector control structure of a permanent magnet synchronous motor: the drive system compares the given rotating speed v* with the actual rotating speed v to obtain a deviation, and this deviation is passed through the RBF fractional order PID neural network speed loop controller to obtain the current reference i_q*. This reference is compared with the actual current i_q of the system to produce the voltage reference u_q; together with the output u_d of the other current loop, the voltage values of the control system in the α-β coordinate system are obtained through coordinate transformation. A space vector modulation (SVPWM) module outputs trigger signals to the inverter, the inverter outputs three-phase voltage to drive the motor directly, and the actual current values are obtained from the inverter output voltage through coordinate transformation, so that the system forms a closed loop. A permanent magnet synchronous motor vector control parameter setting method using the fractional order PID controller based on the RBF neural network comprises the following steps:
S1, establishing an RBF neural network model, determining the number of input layer neurons as 3, the number of hidden layer neurons as 6, and the number of output neurons as 1;
S2, initializing the parameters of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α, and the initial values of the fractional order PID controller parameters: K_P(0), K_I(0), K_D(0), λ(0), μ(0);
The input of the RBF neural network is X = [v(k), v(k-1), i_q(k)]^T, where v(k) is the output speed of the motor and i_q(k) is the q-axis current of the motor, which is also the output of the fractional order PID controller;
The vector of Gaussian functions inside the hidden layer neurons is H = [h_1, h_2, …, h_6]^T (j = 1, 2, …, 6);
wherein the Gaussian function inside the jth hidden layer neuron is:
in the formula: C_j is the center vector inside the jth hidden layer neuron of the RBF neural network, denoted C_j = [c_j1, c_j2, c_j3]^T, where c_ji is the center of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, 3), and b_j is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, 6);
the base width vector of the whole RBF neural network is:
B = [b_1, b_2, …, b_6]^T (2)
the weight vector of the RBF neural network is:
W = [w_1, w_2, …, w_6]^T (3)
in the formula: w_j is the weight coefficient from the jth hidden layer neuron to the output (j = 1, 2, …, 6);
S3, running the dynamic RBF neural network online to obtain the output v_m(k) of the neural network identifier;
v_m(k) = w_1h_1 + w_2h_2 + … + w_6h_6 (4)
in the formula: w_1, w_2, …, w_6 are the weight coefficients from each hidden layer neuron to the output, and h_1, h_2, …, h_6 are the Gaussian functions inside each hidden layer neuron;
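The forward pass of S3, together with the hidden-layer Gaussian of equation (1), can be sketched in Python. The function name is illustrative, and the Gaussian form h_j = exp(−‖X − C_j‖²/(2b_j²)) is the standard RBF form assumed here, not reproduced verbatim from the patent figures:

```python
import numpy as np

def rbf_forward(X, C, B, W):
    """Forward pass of the RBF identifier.
    X: input vector (n,); C: centers (m, n); B: base widths (m,); W: weights (m,).
    Returns (h, v_m): hidden-layer activations and the identifier output v_m(k)."""
    # Assumed Gaussian form for Eq. (1): h_j = exp(-||X - C_j||^2 / (2 b_j^2))
    h = np.exp(-np.sum((X - C) ** 2, axis=1) / (2.0 * B ** 2))
    # Eq. (4): v_m(k) = w_1 h_1 + ... + w_6 h_6
    v_m = float(W @ h)
    return h, v_m
```

Here X would be [v(k), v(k-1), i_q(k)] with m = 6 hidden neurons, matching step S1.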
S4, sampling to obtain the given input speed v*(k) and the actual output speed v(k) of the controlled system, adjusting the parameters of the dynamic RBF neural network, and calculating the Jacobian identification information of the controlled object:
the performance indicator function of the identifier is:
J(k) = (1/2)[v(k) − v_m(k)]^2 (5)
in the formula: v(k) is the actual output speed of the system, and v_m(k) is the output of the neural network identifier;
applying the gradient descent method to this function, the update formulas for the output weights, the hidden layer neuron centers, and the base width parameters can be obtained respectively;
updating of the weight coefficients:
w_j(k) = w_j(k-1) + η[v(k) − v_m(k)]h_j + α[w_j(k-1) − w_j(k-2)] (6)
in the formula: η is the learning rate, α is the momentum factor, w_j(k) is the weight coefficient of the jth neuron at time k, and w_j(k-1), w_j(k-2) are its values at the previous two time steps;
updating the base width parameter:
Δb_j = [v(k) − v_m(k)] w_j h_j ‖X − C_j‖^2 / b_j^3 (7)
b_j(k) = b_j(k-1) + ηΔb_j + α[b_j(k-1) − b_j(k-2)] (8)
in the formula: b_j(k) is the base width parameter of the jth hidden layer neuron at time k;
updating the central point:
Δc_ji = [v(k) − v_m(k)] w_j (x_i − c_ji) / b_j^2 (9)
c_ji(k) = c_ji(k-1) + ηΔc_ji + α[c_ji(k-1) − c_ji(k-2)] (10)
in the formula: c_ji(k) is the center of the ith input in the jth hidden layer neuron at time k;
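The three identifier updates of equations (6), (8), and (10) can be sketched together. The gradient terms for the base widths and centers below use the standard RBF gradient forms (the patent's formula images for Δb_j and Δc_ji are not reproduced in this text), so treat them as assumptions; the function name is illustrative:

```python
import numpy as np

def rbf_update(X, h, e_m, C, B, W, C_prev, B_prev, W_prev, eta=0.25, alpha=0.05):
    """One identifier update step. e_m = v(k) - v_m(k) is the identification error;
    C_prev, B_prev, W_prev hold the previous-step values used by the momentum terms."""
    dW = e_m * h                                                  # gradient term of Eq. (6)
    dB = e_m * W * h * np.sum((X - C) ** 2, axis=1) / B ** 3      # assumed Delta b_j
    dC = (e_m * W * h / B ** 2)[:, None] * (X - C)                # assumed Delta c_ji
    W_new = W + eta * dW + alpha * (W - W_prev)                   # Eq. (6)
    B_new = B + eta * dB + alpha * (B - B_prev)                   # Eq. (8)
    C_new = C + eta * dC + alpha * (C - C_prev)                   # Eq. (10)
    return C_new, B_new, W_new
```

With zero identification error and no pending momentum, the parameters stay fixed, which is the expected fixed point of the update laws.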
the Jacobian information (i.e., the sensitivity of the object output to the control input) is calculated as:
∂v(k)/∂i_q(k) ≈ ∂v_m(k)/∂i_q(k) (11)
in the formula: v(k) is the output speed of the system, i_q(k) is the output of the controller, and v_m(k) is the output of the neural network identifier;
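Equation (11) is evaluated through the identifier. A sketch, assuming the common expansion ∂v_m/∂i_q = Σ_j w_j h_j (c_j3 − i_q)/b_j² in which the third input column of C corresponds to i_q(k) (this expansion is an assumption, since the formula image is not reproduced in the text):

```python
import numpy as np

def rbf_jacobian(u, h, C, B, W, input_index=2):
    """Approximate the Jacobian dv(k)/di_q(k) by dv_m(k)/di_q(k), Eq. (11).
    u: current control input i_q(k); input_index: the column of C holding that input."""
    return float(np.sum(W * h * (C[:, input_index] - u) / B ** 2))
```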
S5, calculating the controller output i_q(k) from the time domain form of the fractional order PID controller;
The time domain expression of the fractional order PID controller is as follows:
in the formula: i_q(t) is the time domain output of the controller, K_P is the proportional coefficient, e(t) is the time domain feedback error of the system, D^{-λ} is the λ-order integral operator, K_I is the integral coefficient, K_D is the differential coefficient, and D^{μ} is the μ-order differential operator;
equation (12) can be discretized directly using the Grünwald–Letnikov definition of fractional derivatives and integrals:
in the formula: p is the time step, q_l and d_l are binomial coefficients, e(k) = v*(k) − v(k), and e(k−l) is the systematic error at time k−l; wherein:
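A minimal sketch of the Grünwald–Letnikov discretization of equation (12): the coefficients q_l and d_l follow the standard recurrence w_0 = 1, w_l = (1 − (α + 1)/l)·w_{l−1}, with α = −λ for the integral part and α = μ for the derivative part. The function names and the handling of the sampling period T are assumptions, not the patent's exact equation (13):

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Gruenwald-Letnikov coefficients w_0..w_{n-1} for an operator of order alpha."""
    w = np.empty(n)
    w[0] = 1.0
    for l in range(1, n):
        w[l] = (1.0 - (alpha + 1.0) / l) * w[l - 1]
    return w

def fopid_output(errors, Kp, Ki, Kd, lam, mu, T):
    """Discrete fractional order PID output u(k).
    errors = [e(k), e(k-1), ...] is the error history; T is the sampling period."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    q = gl_coeffs(-lam, n)   # q_l: lambda-order integral weights
    d = gl_coeffs(mu, n)     # d_l: mu-order derivative weights
    return Kp * e[0] + Ki * T ** lam * (q @ e) + Kd * T ** (-mu) * (d @ e)
```

For λ = μ = 1 this collapses to the familiar integer-order PID: the q_l are all 1 (rectangular integration) and the d_l reduce to the first difference e(k) − e(k−1).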
S6, adjusting the proportional coefficient K_P, integral coefficient K_I, differential coefficient K_D, integration order λ, and differentiation order μ of the fractional order PID controller by the gradient descent method according to the system error function;
the systematic error function is:
E(k) = (1/2)e(k)^2 = (1/2)[v*(k) − v(k)]^2
in the formula: v(k) is the speed output of the system, and v*(k) is the given input of the system;
the change ΔK_P of the proportional coefficient at time k is:
in the formula: E is the systematic error function, E(k) is the systematic error at time k, and ∂v(k)/∂i_q(k) is the Jacobian identification information calculated by equation (11); the same applies in the formulas below;
the change ΔK_I of the integral coefficient at time k is:
the change ΔK_D of the differential coefficient at time k is:
the change Δλ of the integration order at time k is:
the change Δμ of the differentiation order at time k is:
The parameters of the fractional order PID are:
K_P(k+1) = K_P(k) + ΔK_P
K_I(k+1) = K_I(k) + ΔK_I
K_D(k+1) = K_D(k) + ΔK_D (24)
λ(k+1) = λ(k) + Δλ
μ(k+1) = μ(k) + Δμ
S7, set k = k + 1 and return to S3 for the next sampling control.
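The gain adjustments of S6 follow ΔK = −η·∂E/∂K; with E(k) = ½e(k)² and the chain rule through the Jacobian of equation (11), one plausible sketch is below. The exact update formulas for ΔK_P, ΔK_I, ΔK_D are formula images not reproduced in this text, and the order updates Δλ, Δμ are omitted here, so everything in this sketch is an assumption in the spirit of the description:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Gruenwald-Letnikov coefficients: w_0 = 1, w_l = (1 - (alpha + 1)/l) w_{l-1}."""
    w = np.empty(n)
    w[0] = 1.0
    for l in range(1, n):
        w[l] = (1.0 - (alpha + 1.0) / l) * w[l - 1]
    return w

def tune_gains(Kp, Ki, Kd, e_hist, jac, lam, mu, T, eta=0.2):
    """One S6 gradient step for the three gains.
    e_hist = [e(k), e(k-1), ...]; jac approximates dv(k)/di_q(k) from Eq. (11)."""
    e = np.asarray(e_hist, dtype=float)
    n = len(e)
    ek = e[0]
    # Delta_K = eta * e(k) * jac * (du/dK), with du/dK read off the discrete FOPID law
    Kp += eta * ek * jac * ek                                     # du/dK_P = e(k)
    Ki += eta * ek * jac * (T ** lam * (gl_coeffs(-lam, n) @ e))  # du/dK_I
    Kd += eta * ek * jac * (T ** (-mu) * (gl_coeffs(mu, n) @ e))  # du/dK_D
    return Kp, Ki, Kd
```

With a positive Jacobian and a positive error, all three gains increase, which is the direction that reduces E(k) under the assumed chain rule.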
Claims (1)
1. A parameter setting method of a fractional order PID controller based on an RBF neural network is characterized in that:
the parameter setting method comprises the following steps:
S1, establishing an RBF neural network model, determining the number of input layer neurons as n, the number of hidden layer neurons as m, and the number of output neurons as 1;
S2, initializing the parameters of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α, and the initial values of the fractional order PID controller parameters: K_P(0), K_I(0), K_D(0), λ(0), μ(0);
The input of the RBF neural network is X = [x_1, x_2, …, x_i, …, x_n]^T (i = 1, 2, …, n);
The vector of Gaussian functions inside the hidden layer neurons is H = [h_1, h_2, …, h_j, …, h_m]^T (j = 1, 2, …, m);
wherein the Gaussian function inside the jth hidden layer neuron is:
h_j = exp(−‖X − C_j‖^2 / (2b_j^2)) (1)
in the formula: C_j is the center vector inside the jth hidden layer neuron of the RBF neural network, denoted C_j = [c_j1, c_j2, …, c_ji, …, c_jn]^T, where c_ji is the center of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, …, n), and b_j is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, m);
the base width vector of the whole RBF neural network is:
B = [b_1, b_2, …, b_m]^T (2)
the weight vector of the RBF neural network is:
W = [w_1, w_2, …, w_j, …, w_m]^T (3)
in the formula: w_j is the weight coefficient from the jth hidden layer neuron to the output (j = 1, 2, …, m);
S3, running the dynamic RBF neural network online to obtain the output y_m(k) of the neural network identifier;
y_m(k) = w_1h_1 + w_2h_2 + … + w_mh_m (4)
in the formula: w_1, w_2, …, w_j, …, w_m are the weight coefficients from each hidden layer neuron to the output, and h_1, h_2, …, h_m are the Gaussian functions inside each hidden layer neuron;
S4, sampling to obtain the system input r(k) and the output y(k) of the controlled system, adjusting the parameters of the dynamic RBF neural network, and calculating the Jacobian identification information of the controlled object:
the performance indicator function of the identifier is:
J(k) = (1/2)[y(k) − y_m(k)]^2 (5)
in the formula: y(k) is the output of the overall system, and y_m(k) is the output of the neural network identifier;
applying the gradient descent method to this function, the update formulas for the output weights, the hidden layer neuron centers, and the base width parameters can be obtained respectively;
updating of the weight coefficients:
w_j(k) = w_j(k-1) + η[y(k) − y_m(k)]h_j + α[w_j(k-1) − w_j(k-2)] (6)
in the formula: η is the learning rate, α is the momentum factor, w_j(k) is the weight coefficient of the jth neuron at time k, and w_j(k-1), w_j(k-2) are its values at the previous two time steps;
updating the base width parameter:
Δb_j = [y(k) − y_m(k)] w_j h_j ‖X − C_j‖^2 / b_j^3 (7)
b_j(k) = b_j(k-1) + ηΔb_j + α[b_j(k-1) − b_j(k-2)] (8)
in the formula: b_j(k) is the base width parameter of the jth hidden layer neuron at time k;
updating the central point:
Δc_ji = [y(k) − y_m(k)] w_j (x_i − c_ji) / b_j^2 (9)
c_ji(k) = c_ji(k-1) + ηΔc_ji + α[c_ji(k-1) − c_ji(k-2)] (10)
in the formula: c_ji(k) is the center of the ith input in the jth hidden layer neuron at time k;
the Jacobian information (i.e., the sensitivity of the object output to the control input) is calculated as:
∂y(k)/∂u(k) ≈ ∂y_m(k)/∂u(k) (11)
in the formula: y(k) is the output of the system, u(k) is the output of the controller, and y_m(k) is the output of the neural network identifier;
S5, calculating the controller output u(k) from the time domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is as follows:
in the formula: u(t) is the time domain output of the controller, K_P is the proportional coefficient, e(t) is the time domain feedback error of the system, D^{-λ} is the λ-order integral operator, K_I is the integral coefficient, K_D is the differential coefficient, and D^{μ} is the μ-order differential operator;
equation (12) can be discretized directly using the Grünwald–Letnikov definition of fractional derivatives and integrals:
in the formula: p is the time step, q_l and d_l are binomial coefficients, e(k) = r(k) − y(k), and e(k−l) is the systematic error at time k−l; wherein:
S6, adjusting the proportional coefficient K_P, integral coefficient K_I, differential coefficient K_D, integration order λ, and differentiation order μ of the fractional order PID controller by the gradient descent method according to the system error function;
the systematic error function is:
E(k) = (1/2)e(k)^2 = (1/2)[r(k) − y(k)]^2
in the formula: y(k) is the output of the system, and r(k) is the given input of the system;
the change ΔK_P of the proportional coefficient at time k is:
in the formula: E is the systematic error function, E(k) is the systematic error at time k, and ∂y(k)/∂u(k) is the Jacobian identification information calculated by equation (11); the same applies in the formulas below;
the change ΔK_I of the integral coefficient at time k is:
the change ΔK_D of the differential coefficient at time k is:
the change Δλ of the integration order at time k is:
the change Δμ of the differentiation order at time k is:
the parameters of the fractional order PID are:
S7, set k = k + 1 and return to S3 for the next sampling control.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111579779.3A CN114063444A (en) | 2021-12-22 | 2021-12-22 | Parameter setting method of fractional order PID controller based on RBF neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114063444A true CN114063444A (en) | 2022-02-18 |
Family
ID=80230184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111579779.3A Pending CN114063444A (en) | 2021-12-22 | 2021-12-22 | Parameter setting method of fractional order PID controller based on RBF neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114063444A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN114759815A (en) * | 2022-04-08 | 2022-07-15 | 西安石油大学 | Self-adaptive control method of quasi-Z-source inverter |
CN114759815B (en) * | 2022-04-08 | 2024-05-03 | 西安石油大学 | Self-adaptive control method for quasi-Z source inverter |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bortoff et al. | Pseudolinearization of the acrobot using spline functions | |
Zhou et al. | Deep convolutional neural network based fractional-order terminal sliding-mode control for robotic manipulators | |
Lin et al. | Intelligent sliding-mode position control using recurrent wavelet fuzzy neural network for electrical power steering system | |
CN109921707B (en) | Switched reluctance hub motor position-free prediction control method | |
Chang et al. | Adaptive control of hypersonic vehicles based on characteristic models with fuzzy neural network estimators | |
CN113722877A (en) | Method for online prediction of temperature field distribution change during lithium battery discharge | |
CN108390597A (en) | Permanent magnet synchronous motor nonlinear predictive controller design with disturbance observer | |
CN112398397A (en) | Linear active disturbance rejection permanent magnet synchronous motor control method based on model assistance | |
Li et al. | Robust control for permanent magnet in-wheel motor in electric vehicles using adaptive fuzzy neural network with inverse system decoupling | |
CN114063444A (en) | Parameter setting method of fractional order PID controller based on RBF neural network | |
CN115632584A (en) | Loss optimization control method for embedded permanent magnet synchronous motor | |
Sun et al. | A new method of fault estimation and tolerant control for fuzzy systems against time-varying delay | |
CN115890668A (en) | Distributed optimization learning control method and system for robot joint module | |
Hammoud et al. | Learning-based model predictive current control for synchronous machines: An LSTM approach | |
CN113110430A (en) | Model-free fixed-time accurate trajectory tracking control method for unmanned ship | |
CN112564557A (en) | Control method, device and equipment of permanent magnet synchronous motor and storage medium | |
CN106533285B (en) | Permanent magnet DC motor method for controlling number of revolution based on Kriging model | |
Acikgoz et al. | Long short-term memory network-based speed estimation model of an asynchronous motor | |
CN111055920B (en) | Construction method of multi-model corner controller of automobile EPS (electric power steering) system | |
CN113176731B (en) | Dual-neural-network self-learning IPMSM active disturbance rejection control method | |
Romasevych et al. | Identification and optimal control of a dynamical system via ANN-based approaches | |
CN117097220B (en) | EMPC current loop control method and system applied to electric forklift induction motor | |
CN115114964B (en) | Sensor intermittent fault diagnosis method based on data driving | |
Wang et al. | Fuzzy state observer based command-filtered adaptive control of uncertain nonlinear systems | |
CN116015119B (en) | Permanent magnet synchronous motor current control method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||