CN114063444A - Parameter setting method of fractional order PID controller based on RBF neural network - Google Patents


Publication number
CN114063444A
CN114063444A
Authority
CN
China
Prior art keywords
neural network
output
formula
hidden layer
rbf neural
Prior art date
Legal status
Pending
Application number
CN202111579779.3A
Other languages
Chinese (zh)
Inventor
胡红明
杨皓东
刘勤
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202111579779.3A
Publication of CN114063444A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B11/00: Automatic controllers
    • G05B11/01: Automatic controllers electric
    • G05B11/36: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
    • G05B11/42: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential, for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)

Abstract

A parameter setting method of a fractional order PID controller based on an RBF neural network comprises the following steps: S1, initializing each network parameter; S2, sampling the given input r(k) and the system output y(k) to obtain the system control error e(k); S3, constructing the dynamic RBF neural network on line and adjusting each of its parameters to obtain the output y_m(k) of the neural network identifier and the Jacobian identification information of the controlled object; S4, adjusting the proportional coefficient, integral coefficient, differential coefficient, integral order λ and differential order μ of the fractional order PID controller by the gradient descent method according to the system error function; S5, calculating the controller output u(k) through the time domain form of the fractional order PID controller; S6, setting k = k + 1 and performing the next sampling control. The design automatically approaches the optimal solution of the system in a self-learning mode and effectively improves control efficiency.

Description

Parameter setting method of fractional order PID controller based on RBF neural network
Technical Field
The invention relates to a parameter setting method for a fractional order PID controller based on an RBF neural network, and is particularly suitable for parameter setting in the field of automatic control.
Background
In recent years, with the development of fractional calculus theory, experiments have shown that PID controllers based on fractional calculus achieve better dynamic and static performance than integer-order PID controllers in the field of automatic control. The fractional order PID controller generalizes the traditional PID controller to the fractional order domain and has two additional parameters, the integral order λ and the differential order μ, giving it a more flexible adjustment range. However, with the additional parameters, parameter setting of the fractional order PID controller becomes more difficult.
In actual field applications the benefits are clear. In a missile flight control system, a fractional order PID controller greatly improves control quality and enhances the missile's quick response capability, penetration capability and attack precision. In a vehicle steer-by-wire system, a fractional order PID controller also exhibits better robustness. For a motor, a strongly coupled, nonlinear, high-order complex control system, a fractional order PID controller likewise provides accurate and flexible control. The application of fractional order PID controllers in various fields is thus a hotspot and an inevitable trend of future development, with broad prospects in practical application.
Disclosure of Invention
The invention aims to overcome the difficulty of setting the parameters of a fractional order PID controller in the prior art, and provides a parameter setting method for a fractional order PID controller based on an RBF neural network operating in a self-learning mode. To achieve this purpose, the technical solution of the invention is as follows:
a parameter setting method of a fractional order PID controller based on an RBF neural network comprises the following steps:
s1, establishing an RBF neural network model, determining the number of input layer neurons as n, the number of hidden layer neurons as m and the number of output neurons as 1;
S2, initializing each parameter of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α and the initial parameter values of the fractional order PID controller: K_P(0), K_I(0), K_D(0), λ(0), μ(0);
The input of the RBF neural network is X = [x_1, x_2, …, x_i, …, x_n]^T (i = 1, 2, …, n);
The vector of Gaussian functions of the hidden layer neurons is H = [h_1, h_2, …, h_j, …, h_m]^T (j = 1, 2, …, m);
wherein the Gaussian function of the jth hidden layer neuron is:
h_j = exp(−‖X − C_j‖² / (2b_j²)) (1)
in the formula: C_j is the center vector of the jth hidden layer neuron of the RBF neural network, written C_j = [c_j1, c_j2, …, c_ji, …, c_jn]^T, where c_ji is the center of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, …, n), and b_j is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, m);
the base width vector of the whole RBF neural network is:
B = [b_1, b_2, …, b_m]^T (2)
the weight vector of the RBF neural network is:
W = [w_1, w_2, …, w_j, …, w_m]^T (3)
in the formula: w_j is the weight coefficient from the jth hidden layer neuron to the output (j = 1, 2, …, m);
S3, running the dynamic RBF neural network on line to obtain the output y_m(k) of the neural network identifier:
y_m(k) = w_1h_1 + w_2h_2 + … + w_mh_m (4)
in the formula: w_1, w_2, …, w_j, …, w_m are the weight coefficients from each hidden layer neuron to the output, and h_1, h_2, …, h_m are the Gaussian functions of the hidden layer neurons;
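A minimal sketch of the identifier forward pass described by formulas (1) and (4); the function name and the data layout (plain Python lists) are illustrative assumptions:

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Forward pass of the RBF identifier, a sketch of eqs. (1) and (4).

    x       : input vector, length n
    centers : list of m center vectors C_j, each of length n
    widths  : list of m base width parameters b_j
    weights : list of m output weights w_j
    Returns (y_m, h): the identifier output and the hidden activations.
    """
    h = []
    for c_j, b_j in zip(centers, widths):
        # h_j = exp(-||x - C_j||^2 / (2 b_j^2))   (Gaussian basis, eq. 1)
        sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, c_j))
        h.append(math.exp(-sq_dist / (2.0 * b_j ** 2)))
    # y_m(k) = w_1 h_1 + ... + w_m h_m            (eq. 4)
    y_m = sum(w_j * h_j for w_j, h_j in zip(weights, h))
    return y_m, h
```

When the input coincides with every center, each Gaussian evaluates to 1 and the output reduces to the sum of the weights, which is a quick sanity check.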
S4, sampling to obtain the system input r(k) and the output y(k) of the controlled system, adjusting each parameter of the dynamic RBF neural network and calculating the Jacobian identification information of the controlled object:
the performance index function of the identifier is:
J(k) = (1/2)[y(k) − y_m(k)]² (5)
in the formula: y(k) is the output of the overall system and y_m(k) is the output of the neural network identifier;
Applying the gradient descent method to this function, the update formulas for the output weights, the centers of the hidden layer neurons and the base width parameters can be derived respectively;
updating of the weight coefficients:
w_j(k) = w_j(k−1) + η[y(k) − y_m(k)]h_j + α[w_j(k−1) − w_j(k−2)] (6)
in the formula: η is the learning rate, α is the momentum factor, w_j(k) is the weight coefficient of the jth neuron at time k, and w_j(k−1), w_j(k−2) are the weight coefficients of that neuron at the previous one and two time steps;
updating of the base width parameters:
Δb_j = [y(k) − y_m(k)]w_jh_j‖X − C_j‖² / b_j³ (7)
b_j(k) = b_j(k−1) + ηΔb_j + α[b_j(k−1) − b_j(k−2)] (8)
in the formula: b_j(k) is the base width parameter of the jth hidden layer neuron at time k;
updating of the centers:
Δc_ji = [y(k) − y_m(k)]w_jh_j(x_i − c_ji) / b_j² (9)
c_ji(k) = c_ji(k−1) + ηΔc_ji + α[c_ji(k−1) − c_ji(k−2)] (10)
in the formula: c_ji(k) is the center of the ith input in the jth hidden layer neuron at time k;
the Jacobian information (i.e. the sensitivity of the object output to the control input) is calculated as:
∂y(k)/∂u(k) ≈ ∂y_m(k)/∂u(k) = Σ_{j=1}^{m} w_jh_j(c_j1 − u(k)) / b_j² (11)
in the formula: y(k) is the output of the system, u(k) is the output of the controller (taken as the first input of the network), and y_m(k) is the output of the neural network identifier;
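The identifier updates (6) to (10) and the Jacobian estimate (11) can be sketched as one routine. The assumption that the control input u(k) enters the network as x[0], and all names, are illustrative:

```python
def update_identifier(x, y, y_m, h, W, B, C, W_prev, B_prev, C_prev,
                      eta=0.25, alpha=0.05):
    """One momentum gradient-descent step of the RBF identifier (eqs. 6-10)
    plus the Jacobian estimate of eq. (11); a sketch, not the full method.

    W, B, C         : current weights, base widths, centers (C is m x n)
    W_prev, B_prev, C_prev : the same parameters one sampling step earlier,
                      used by the momentum terms alpha*(p(k-1) - p(k-2)).
    x[0] is assumed to carry the control input u(k).
    """
    err = y - y_m
    W_new, B_new, C_new = [], [], []
    jac = 0.0
    for j in range(len(W)):
        sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, C[j]))
        # eq. (6): weight update with momentum
        W_new.append(W[j] + eta * err * h[j] + alpha * (W[j] - W_prev[j]))
        # eqs. (7)-(8): base width update
        db = err * W[j] * h[j] * sq_dist / B[j] ** 3
        B_new.append(B[j] + eta * db + alpha * (B[j] - B_prev[j]))
        # eqs. (9)-(10): center update, component by component
        row = []
        for i in range(len(x)):
            dc = err * W[j] * h[j] * (x[i] - C[j][i]) / B[j] ** 2
            row.append(C[j][i] + eta * dc + alpha * (C[j][i] - C_prev[j][i]))
        C_new.append(row)
        # eq. (11): dy/du approximated by dy_m/du with u(k) = x[0]
        jac += W[j] * h[j] * (C[j][0] - x[0]) / B[j] ** 2
    return W_new, B_new, C_new, jac
```

When the identification error y − y_m is zero and the momentum history is flat, the parameters are left unchanged, as the update formulas require.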
S5, calculating the output u(k) of the controller through the time domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is:
u(t) = K_P e(t) + K_I D^{−λ} e(t) + K_D D^{μ} e(t) (12)
in the formula: u(t) is the time domain output of the controller, K_P is the proportional coefficient, e(t) is the time domain feedback error of the system, D^{−λ} is the λ-order integral operator, K_I is the integral coefficient, K_D is the differential coefficient, and D^{μ} is the μ-order differential operator;
Equation (12) can be discretized directly using the Grunwald-Letnikov definition of fractional derivatives and integrals:
u(k) = K_P e(k) + K_I p^λ Σ_{l=0}^{k} q_l e(k−l) + K_D p^{−μ} Σ_{l=0}^{k} d_l e(k−l) (13)
in the formula: p is the time step, q_l and d_l are binomial coefficients, e(k) = r(k) − y(k), and e(k−l) is the system error at time k−l; wherein:
q_0 = 1, q_l = (1 − (1 − λ)/l)q_{l−1} (14)
d_0 = 1, d_l = (1 − (1 + μ)/l)d_{l−1} (15)
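A minimal sketch of the discretized controller output of formula (13) together with the coefficient recursions (14) and (15); the function names, the sampling-step default and the sign convention passed to the recursion are illustrative assumptions:

```python
def gl_coeffs(order, k):
    """Binomial coefficients of the Grunwald-Letnikov expansion
    (recursions (14)-(15)): c_0 = 1, c_l = (1 - (order + 1)/l) * c_{l-1}."""
    c = [1.0]
    for l in range(1, k + 1):
        c.append((1.0 - (order + 1.0) / l) * c[-1])
    return c


def fopid_output(errors, Kp, Ki, Kd, lam, mu, p=0.01):
    """Discretized fractional order PID output u(k) per formula (13).

    errors = [e(0), ..., e(k)] is the error history; p is the time step.
    The integral term uses coefficients of order -lam (so it integrates),
    the derivative term coefficients of order mu.
    """
    k = len(errors) - 1
    q = gl_coeffs(-lam, k)   # q_l for the lambda-order integral term
    d = gl_coeffs(mu, k)     # d_l for the mu-order derivative term
    integ = p ** lam * sum(q[l] * errors[k - l] for l in range(k + 1))
    deriv = p ** (-mu) * sum(d[l] * errors[k - l] for l in range(k + 1))
    return Kp * errors[k] + Ki * integ + Kd * deriv
```

With λ = μ = 1 the two sums reduce to the ordinary rectangular integral and backward difference, which is a quick consistency check against the integer-order PID.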
S6, adjusting the proportional coefficient K_P, integral coefficient K_I, differential coefficient K_D, integral order λ and differential order μ of the fractional order PID controller by the gradient descent method according to the system error function;
the system error function is:
E(k) = (1/2)[r(k) − y(k)]² (16)
in the formula: y(k) is the output of the system and r(k) is the given input of the system;
For convenience of expression, the λ-order integral term and the μ-order differential term in formula (13) are abbreviated as:
I(k) = p^λ Σ_{l=0}^{k} q_l e(k−l) (17)
D(k) = p^{−μ} Σ_{l=0}^{k} d_l e(k−l) (18)
the change ΔK_P of the proportional coefficient at time k is:
ΔK_P = −η ∂E/∂K_P = η e(k)(∂y(k)/∂u(k))e(k) (19)
in the formula: E is the system error function, e(k) is the system error at time k, and ∂y(k)/∂u(k) is the Jacobian identification information calculated by formula (11); the same applies to the following formulas;
the change ΔK_I of the integral coefficient at time k is:
ΔK_I = η e(k)(∂y(k)/∂u(k)) p^λ Σ_{l=0}^{k} q_l e(k−l) (20)
the change ΔK_D of the differential coefficient at time k is:
ΔK_D = η e(k)(∂y(k)/∂u(k)) p^{−μ} Σ_{l=0}^{k} d_l e(k−l) (21)
the change Δλ of the integral order at time k is:
Δλ = η e(k)(∂y(k)/∂u(k)) K_I ∂[p^λ Σ_{l=0}^{k} q_l e(k−l)]/∂λ (22)
the change Δμ of the differential order at time k is:
Δμ = η e(k)(∂y(k)/∂u(k)) K_D ∂[p^{−μ} Σ_{l=0}^{k} d_l e(k−l)]/∂μ (23)
the updated parameters of the fractional order PID are:
K_P(k+1) = K_P(k) + ΔK_P
K_I(k+1) = K_I(k) + ΔK_I
K_D(k+1) = K_D(k) + ΔK_D (24)
λ(k+1) = λ(k) + Δλ
μ(k+1) = μ(k) + Δμ
S7, set k = k + 1 and return to S3 for the next sampling control.
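The gradient-descent tuning of the five controller parameters in S6 can be sketched as follows, assuming the standard form Δθ = η·e(k)·(∂y/∂u)·(∂u/∂θ); the function name and the explicitly passed partial derivatives are illustrative:

```python
def update_fopid_gains(e_k, jac, I_k, D_k, dI_dlam, dD_dmu,
                       Kp, Ki, Kd, lam, mu, eta=0.2):
    """One tuning step for the five FOPID parameters (eqs. 19-24); a sketch.

    e_k      : system error r(k) - y(k)
    jac      : Jacobian dy/du from eq. (11)
    I_k, D_k : fractional integral / derivative terms of eq. (13)
    dI_dlam, dD_dmu : their partials w.r.t. the orders, supplied by the caller
    """
    dKp = eta * e_k * jac * e_k            # eq. (19): du/dKp = e(k)
    dKi = eta * e_k * jac * I_k            # eq. (20): du/dKi = I(k)
    dKd = eta * e_k * jac * D_k            # eq. (21): du/dKd = D(k)
    dlam = eta * e_k * jac * Ki * dI_dlam  # eq. (22)
    dmu = eta * e_k * jac * Kd * dD_dmu    # eq. (23)
    # eq. (24): additive parameter updates
    return Kp + dKp, Ki + dKi, Kd + dKd, lam + dlam, mu + dmu
```

All five updates share the common factor η·e(k)·(∂y/∂u), so the sign of the Jacobian estimate decides the direction in which every parameter moves.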
Compared with the prior art, the invention has the following beneficial effects:
1. The parameter setting method of the RBF neural network-based fractional order PID controller automatically approaches the optimal solution of the fractional order PID in a self-learning mode, and the RBF neural network has stronger local approximation capability and higher learning efficiency than ordinary networks. The five parameters of the fractional order PID are set automatically, which greatly improves control efficiency.
2. The RBF neural network in the method approximates nonlinear functions with arbitrary precision and trains faster than general neural networks, which is a natural advantage for a nonlinear, strongly coupled, high-order complex control system such as a permanent magnet synchronous motor. Compared with the traditional PID, the fractional order PID based on the RBF neural network offers fast response, high control precision, effective compensation of time delay, and strong robustness and adaptive capability.
Drawings
FIG. 1 is a structural block diagram of the RBF neural network self-tuning PI^λD^μ controller of the present invention.
Fig. 2 is a diagram of the RBF neural network architecture.
Fig. 3 is a flowchart of the algorithm of embodiment 2 of the present invention.
Fig. 4 is a control configuration block diagram of embodiment 2 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following description and embodiments in conjunction with the accompanying drawings.
Referring to fig. 1 to 2, a parameter tuning method of a fractional order PID controller based on an RBF neural network is characterized in that:
The parameter setting method comprises steps S1 to S7 exactly as set forth in the disclosure of the invention above.
The principle of the invention is illustrated as follows:
Jacobian identification information: the Jacobian matrix carries the sensitivity of the object output to changes in the control input, and can be calculated directly from the neural network identifier.
Grunwald-Letnikov: a definition of fractional calculus under which a fractional differential operator can be discretized directly. The β-order differential operator of a function f(t) is defined as:
D^β f(t) = lim_{h→0} h^{−β} Σ_{l=0}^{[(t−a)/h]} (−1)^l C(β, l) f(t − lh)
where C(β, l) = β(β − 1)…(β − l + 1) / l! is the generalized binomial coefficient.
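The generalized binomial weights (−1)^l C(β, l) in the Grunwald-Letnikov definition can be generated by the simple recursion c_l = (1 − (β + 1)/l)·c_{l−1}, c_0 = 1; a minimal sketch (the function name is illustrative):

```python
def gl_binomial(beta, l):
    """The weight (-1)^l * C(beta, l) of the Grunwald-Letnikov expansion,
    computed by the recursion c_l = (1 - (beta + 1)/l) * c_{l-1}, c_0 = 1."""
    c = 1.0
    for i in range(1, l + 1):
        c *= 1.0 - (beta + 1.0) / i
    return c
```

For an integer order such as β = 2 the weights reproduce the finite-difference stencil 1, −2, 1 and then vanish, which confirms the recursion against the classical definition.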
Example 1:
s1, establishing an RBF neural network model, determining the number of input layer neurons as n, the number of hidden layer neurons as m and the number of output neurons as 1;
s2 method for initializing RBF neural networkDetermining parameters, namely determining a base width radius vector B, a central vector C, an initial weight vector W of a hidden layer corresponding to an output layer, the learning efficiency eta of the network, a momentum factor alpha and an initial parameter value of a fractional order PID controller: kP(0)、KI(0)、KD(0)、λ(0)、μ(0);
The input of the RBF neural network is X ═ X1,x2,…xi,…xn](i=1,2…n);
The Gaussian function inside hidden layer neurons is H ═ H1,h2,…hj,…hm]T(j=1,2,…m);
Wherein the Gaussian function inside the jth hidden layer neuron is:
Figure BDA0003426716160000124
in the formula: cjThe central vector inside the jth hidden layer neuron of the RBF neural network is marked as Cj=[cj1,cj2,…,cji,…,cjn]TWherein: cjiIndicating that the jth hidden layer neuron corresponds to the central point of the ith input (i ═ 1,2, … n), bjA base width parameter for the jth hidden layer neuron (j ═ 1,2, … m);
the base width vector of the whole RBF neural network is as follows:
B=[b1,b2,…,bm]T (2)
the weight vector of the RBF neural network is as follows:
W=[w1,w2,…,wj,…,wm]T (3)
in the formula: w is ajA weight coefficient representing that the output corresponds to each hidden layer neuron, (j ═ 1,2, … m);
s3, running the dynamic RBF neural network on line to obtain the output y of the neural network identifierm(k);
ym(k)=w1h1+w2h2+…+wmhm (4)
In the formula: w is a1,w2,…,wj,…,wmWeight coefficients, h, corresponding outputs for each hidden layer neuron1,h2,…hmA gaussian function for the interior of each hidden layer neuron;
s4, sampling to obtain system input r (k) and output y (k) of the controlled system, adjusting each parameter of the dynamic RBF neural network and calculating Jacobian identification information of the controlled object:
the performance indicator function of the identifier is:
Figure BDA0003426716160000131
in the formula: y (k) is the output of the overall system, ym(k) Is the output of the neural network identifier;
for the function, a gradient descending method is applied, and an output weight, a central point of a hidden layer neuron and an updating formula of a base width parameter can be respectively calculated;
updating of the weight coefficients:
wj(k)=wj(k-1)+η[y(k)-ym(k)]hj+α[wj(k-1)-wj(k-2)] (6)
in the formula: eta is learning efficiency, alpha is momentum factor, wj(k) Weight coefficient, w, of the jth neuron representing time kj(k-1),wj(k-2) weight coefficients representing a previous time and previous two times of the neuron;
updating the base width parameter:
Figure BDA0003426716160000132
bj(k)=bj(k-1)+ηΔbj+α[bj(k-1)-bj(k-2)] (8)
in the formula: bj(k) For the jth of the hidden layerA base width parameter at neuron k time;
updating the central point:
Figure BDA0003426716160000141
cji(k)=cji(k-1)+ηΔcji+α[cji(k-1)-cji(k-2)] (10)
in the formula: c. Cji(k) Representing the jth hidden S1, establishing an RBF neural network model, determining the number of input layer neurons as n, the number of hidden layer neurons as m and the number of output neurons as 1;
s2, initializing each parameter of the RBF neural network, namely determining a base width radius vector B, a center vector C, an initial weight vector W of a hidden layer corresponding to an output layer, the learning efficiency eta of the network, a momentum factor alpha and a parameter initial value of a fractional PID controller: kP(0)、KI(0)、KD(0)、λ(0)、μ(0);
The input of the RBF neural network is X ═ X1,x2,…xi,…xn](i=1,2…n);
The Gaussian function inside hidden layer neurons is H ═ H1,h2,…hj,…hm]T(j=1,2,…m);
Wherein the Gaussian function inside the jth hidden layer neuron is:
Figure BDA0003426716160000142
in the formula: cjThe central vector inside the jth hidden layer neuron of the RBF neural network is marked as Cj=[cj1,cj2,…,cji,…,cjn]TWherein: cjiIndicating that the jth hidden layer neuron corresponds to the central point of the ith input (i ═ 1,2, … n), bjA base width parameter for the jth hidden layer neuron (j ═ 1,2, … m);
the base width vector of the whole RBF neural network is as follows:
B=[b1,b2,…,bm]T (2)
the weight vector of the RBF neural network is as follows:
W=[w1,w2,…,wj,…,wm]T (3)
in the formula: w is ajA weight coefficient representing that the output corresponds to each hidden layer neuron, (j ═ 1,2, … m);
s3, running the dynamic RBF neural network on line to obtain the output y of the neural network identifierm(k);
ym(k)=w1h1+w2h2+…+wmhm (4)
In the formula: w is a1,w2,…,wj,…,wmWeight coefficients, h, corresponding outputs for each hidden layer neuron1,h2,…hmA gaussian function for the interior of each hidden layer neuron;
s4, sampling to obtain system input r (k) and output y (k) of the controlled system, adjusting each parameter of the dynamic RBF neural network and calculating Jacobian identification information of the controlled object:
the performance indicator function of the identifier is:
Figure BDA0003426716160000151
in the formula: y (k) is the output of the overall system, ym(k) Is the output of the neural network identifier;
for the function, a gradient descending method is applied, and an output weight, a central point of a hidden layer neuron and an updating formula of a base width parameter can be respectively calculated;
updating of the weight coefficients:
wj(k)=wj(k-1)+η[y(k)-ym(k)]hj+α[wj(k-1)-wj(k-2)] (6)
in the formula: eta is learning efficiency, alpha is momentum factor, wj(k) Weight coefficient, w, of the jth neuron representing time kj(k-1),wj(k-2) weight coefficients representing a previous time and previous two times of the neuron;
updating the base width parameter:
Figure BDA0003426716160000152
bj(k)=bj(k-1)+ηΔbj+α[bj(k-1)-bj(k-2)] (8)
in the formula: bj(k) A base width parameter of the j-th neuron at the time k of the hidden layer;
updating the central point:
Figure BDA0003426716160000153
cji(k)=cji(k-1)+ηΔcji+α[cji(k-1)-cji(k-2)] (10)
in the formula: c. Cji(k) Representing the central point of the ith input in the jth hidden layer neuron at the time k;
the Jacobian array (i.e. sensitivity information of the output of the object to the control input) algorithm is as follows:
Figure BDA0003426716160000161
in the formula: y (k) is the output of the system, u (k) is the output of the controller, ym(k) Is the output of the neural network identifier;
s5, calculating the output u (k) of the controller through the time domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is as follows:
Figure BDA0003426716160000162
in the formula: u (t) is the time domain output of the controller, KPE (t) is the time domain feedback error of the system,
Figure BDA0003426716160000163
integral operator of order λ, KIIs the integral coefficient, KDIn order to be the differential coefficient,
Figure BDA0003426716160000164
is a mu-order differential operator;
equation (12) can be discretized directly from fractional derivatives and integrals as defined by Grunwald-Letnikov:
Figure BDA0003426716160000165
in the formula: p is the time step, qlAnd dlIs a binomial coefficient, e (k) r (k) y (k), and e (k-l) represents the system error at the k-l moment; wherein:
Figure BDA0003426716160000166
Figure BDA0003426716160000167
s6, adjusting the proportionality coefficient K of the fractional order PID controller by the gradient descent method again according to the system error functionPIntegral coefficient KICoefficient of differentiation KDAnd an integration order λ and a differentiation order μ;
the systematic error function is:
Figure BDA0003426716160000168
in the formula: y (k) is the output of the system, r (k) is a given input to the system;
for the sake of convenience of expression, in formula (13)
Figure BDA0003426716160000171
And
Figure BDA0003426716160000172
the definition is as follows:
Figure BDA0003426716160000173
Figure BDA0003426716160000174
the change ΔKP of the proportional coefficient at time k is:
ΔKP = −η·∂E/∂KP = η·e(k)·(∂y(k)/∂u(k))·e(k) (19)
in the formula: E is the system error function, e(k) is the system error at time k, and ∂y(k)/∂u(k) is the Jacobian identification information calculated by formula (11); the same applies in the formulas below;
the change ΔKI of the integral coefficient at time k is:
ΔKI = −η·∂E/∂KI = η·e(k)·(∂y(k)/∂u(k))·EI(k) (20)
the change ΔKD of the differential coefficient at time k is:
ΔKD = −η·∂E/∂KD = η·e(k)·(∂y(k)/∂u(k))·ED(k) (21)
the change Δλ of the integral order at time k is:
Δλ = −η·∂E/∂λ = η·e(k)·(∂y(k)/∂u(k))·KI·(∂EI(k)/∂λ) (22)
the change Δμ of the differential order at time k is:
Δμ = −η·∂E/∂μ = η·e(k)·(∂y(k)/∂u(k))·KD·(∂ED(k)/∂μ) (23)
the parameters of the fractional order PID are:
KP(k+1)=KP(k)+ΔKP
KI(k+1)=KI(k)+ΔKI
KD(k+1)=KD(k)+ΔKD (24)
λ(k+1)=λ(k)+Δλ
μ(k+1)=μ(k)+Δμ
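The five updates of formulas (19)-(24) can be sketched in code. This is a hedged illustration, not the patent's implementation: the order sensitivities ∂EI/∂λ and ∂ED/∂μ, which the derivation leaves symbolic, are approximated here by central finite differences, and the names `gl_coeffs`, `fractional_sum` and `update_fopid` are assumptions for illustration.

```python
import numpy as np

def gl_coeffs(order, n):
    # c_0 = 1, c_l = (1 - (order + 1)/l) * c_{l-1}, formulas (14)-(15)
    c = np.empty(n + 1)
    c[0] = 1.0
    for l in range(1, n + 1):
        c[l] = (1.0 - (order + 1.0) / l) * c[l - 1]
    return c

def fractional_sum(err_hist, order, p):
    # p**(-order) * sum_l c_l e(k-l); order = -lam gives EI(k), order = mu gives ED(k)
    e = np.asarray(err_hist, dtype=float)
    return p ** (-order) * float(gl_coeffs(order, len(e) - 1) @ e)

def update_fopid(params, jac, err_hist, p, eta=0.2, h=1e-6):
    """One gradient step on E(k) = e(k)^2/2 per formulas (19)-(24).
    jac is dy/du from formula (11); err_hist[0] = e(k)."""
    Kp, Ki, Kd, lam, mu = params
    e_k = err_hist[0]
    EI = fractional_sum(err_hist, -lam, p)   # formula (17)
    ED = fractional_sum(err_hist, mu, p)     # formula (18)
    # central finite differences for dEI/dlam and dED/dmu (implementation choice)
    dEI = (fractional_sum(err_hist, -(lam + h), p)
           - fractional_sum(err_hist, -(lam - h), p)) / (2 * h)
    dED = (fractional_sum(err_hist, mu + h, p)
           - fractional_sum(err_hist, mu - h, p)) / (2 * h)
    g = eta * e_k * jac
    return (Kp + g * e_k,        # formula (19)
            Ki + g * EI,         # formula (20)
            Kd + g * ED,         # formula (21)
            lam + g * Ki * dEI,  # formula (22)
            mu + g * Kd * dED)   # formula (23)
```

The returned tuple is exactly the right-hand side of formula (24): each parameter at time k plus its increment.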
S7, setting k = k + 1 and returning to S3 for the next sampling control.
Example 2:
referring to fig. 3 and fig. 4, which show the vector control structure of a permanent magnet synchronous motor: the drive system compares the given rotating speed v* with the actual rotating speed v to obtain a deviation; the deviation is passed through the RBF fractional order PID neural network speed-loop controller to obtain the current reference iq*, which is compared with the actual current iq of the system to give the voltage reference uq; uq, together with the output ud of the d-axis loop, is converted by coordinate transformation into the voltage values of the control system in the α-β coordinate system; a trigger signal is output to the inverter through the space vector modulation (SVPWM) module; the inverter outputs three-phase voltage to directly control the motor; and the actual current values are obtained through coordinate transformation from the inverter output, so that the system forms a closed loop. A permanent magnet synchronous motor vector control parameter setting method of a fractional order PID controller based on an RBF neural network comprises the following steps:
S1, establishing an RBF neural network model, and determining that the number of input layer neurons is 3, the number of hidden layer neurons is 6, and the number of output neurons is 1;
S2, initializing each parameter of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α, and the initial values of the fractional order PID controller parameters: KP(0), KI(0), KD(0), λ(0), μ(0);
the input of the RBF neural network is X = [v(k), v(k−1), iq(k)]^T, where v(k) is the output speed of the motor and iq(k) is the q-axis current of the motor, which is also the output of the fractional order PID controller;
the vector of Gaussian functions of the hidden layer neurons is H = [h1, h2, …, h6]^T,
wherein the Gaussian function of the jth hidden layer neuron is:
hj = exp(−‖X − Cj‖^2 / (2bj^2)) (1)
in the formula: Cj is the center vector of the jth hidden layer neuron of the RBF neural network, denoted Cj = [cj1, cj2, cj3]^T, wherein cji is the center of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, 3), and bj is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, 6);
the base width vector of the whole RBF neural network is as follows:
B=[b1,b2,…,b6]T (2)
the weight vector of the RBF neural network is as follows:
W=[w1,w2,…,w6]T (3)
in the formula: wj is the weight coefficient from the jth hidden layer neuron to the output (j = 1, 2, …, 6);
S3, running the dynamic RBF neural network on line to obtain the output vm(k) of the neural network identifier;
vm(k)=w1h1+w2h2+…+w6h6 (4)
in the formula: w1, w2, …, w6 are the weight coefficients from each hidden layer neuron to the output, and h1, h2, …, h6 are the Gaussian functions of the hidden layer neurons;
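Formulas (1)-(4) amount to a standard RBF forward pass. A minimal sketch follows; the class name `RBFIdentifier` and the random initialization are illustrative assumptions (the patent initializes B, C and W in step S2):

```python
import numpy as np

class RBFIdentifier:
    """Minimal RBF network identifier: 3 inputs, 6 hidden Gaussian
    neurons, 1 output, matching formulas (1)-(4)."""
    def __init__(self, n_in=3, n_hidden=6, seed=0):
        rng = np.random.default_rng(seed)
        self.C = rng.uniform(-1, 1, (n_hidden, n_in))  # centers C_j
        self.B = np.ones(n_hidden)                     # base widths b_j
        self.W = rng.uniform(-0.5, 0.5, n_hidden)      # output weights w_j

    def hidden(self, x):
        # h_j = exp(-||X - C_j||^2 / (2 b_j^2)), formula (1)
        d2 = np.sum((self.C - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.B ** 2))

    def forward(self, x):
        # v_m(k) = sum_j w_j h_j, formula (4)
        return float(self.W @ self.hidden(x))
```

Here the input vector would be x = [v(k), v(k−1), iq(k)]; each Gaussian activation lies in (0, 1] and peaks when the input coincides with that neuron's center.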
S4, sampling to obtain the given rotating speed v*(k) and the actual output rotating speed v(k) of the controlled system, adjusting the parameters of the dynamic RBF neural network, and calculating the Jacobian identification information of the controlled object:
the performance index function of the identifier is:
J = (1/2)[v(k) − vm(k)]^2 (5)
in the formula: v(k) is the actual output speed of the system and vm(k) is the output of the neural network identifier;
applying the gradient descent method to this function, the update formulas for the output weights, the hidden layer neuron centers and the base width parameters can be derived respectively;
updating of the weight coefficients:
wj(k)=wj(k-1)+η[v(k)-vm(k)]hj+α[wj(k-1)-wj(k-2)] (6)
in the formula: η is the learning rate, α is the momentum factor, wj(k) is the weight coefficient of the jth neuron at time k, and wj(k−1), wj(k−2) are the weight coefficients of the neuron at the previous time and the time before that;
updating the base width parameter:
Δbj = [v(k) − vm(k)]·wj·hj·‖X − Cj‖^2 / bj^3 (7)
bj(k)=bj(k-1)+ηΔbj+α[bj(k-1)-bj(k-2)] (8)
in the formula: bj(k) is the base width parameter of the jth hidden layer neuron at time k;
updating the center point:
Δcji = [v(k) − vm(k)]·wj·hj·(xi − cji) / bj^2 (9)
cji(k)=cji(k-1)+ηΔcji+α[cji(k-1)-cji(k-2)] (10)
in the formula: cji(k) is the center of the ith input in the jth hidden layer neuron at time k;
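The identifier updates of formulas (6)-(10) can be collected into one online training step. A hedged sketch (the function name `identifier_step` and the default η, α values are assumptions; the momentum terms need the parameter values from the previous step):

```python
import numpy as np

def identifier_step(x, y, W, B, C, Wp, Bp, Cp, eta=0.25, alpha=0.05):
    """One online update of the RBF identifier per formulas (5)-(10).
    x: input vector; y: measured plant output; W, B, C: current weights,
    base widths, centers; Wp, Bp, Cp: values from the previous step
    (momentum terms). Returns updated (W, B, C) and the prediction ym."""
    d2 = np.sum((C - x) ** 2, axis=1)
    h = np.exp(-d2 / (2 * B ** 2))                       # formula (1)
    ym = float(W @ h)                                    # formula (4)
    err = y - ym
    Wn = W + eta * err * h + alpha * (W - Wp)            # formula (6)
    dB = err * W * h * d2 / B ** 3                       # formula (7)
    Bn = B + eta * dB + alpha * (B - Bp)                 # formula (8)
    dC = (err * W * h / B ** 2)[:, None] * (x - C)       # formula (9)
    Cn = C + eta * dC + alpha * (C - Cp)                 # formula (10)
    return Wn, Bn, Cn, ym
```

Iterating this step drives the identifier output ym toward the sampled plant output, which is what step S4 relies on before the Jacobian is extracted.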
the Jacobian information (i.e. the sensitivity of the plant output to the control input) is calculated as:
∂v(k)/∂iq(k) ≈ ∂vm(k)/∂iq(k) = Σ(j=1..6) wj·hj·(cj3 − iq(k))/bj^2 (11)
in the formula: v(k) is the output speed of the system, iq(k) is the output of the controller (the third input of the network), and vm(k) is the output of the neural network identifier;
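Formula (11) differentiates the identifier output with respect to the control input, here the third network input. A sketch with illustrative names (`gaussian_hidden`, `jacobian`, and the `u_index` parameter marking where iq(k) sits in the input vector are assumptions, not from the source):

```python
import numpy as np

def gaussian_hidden(x, C, B):
    # h_j of formula (1)
    return np.exp(-np.sum((C - x) ** 2, axis=1) / (2 * B ** 2))

def jacobian(x, W, B, C, u_index=2):
    """Formula (11): d(vm)/d(iq) = sum_j wj*hj*(c_{j,u_index} - u)/bj^2,
    evaluated at the current input x, whose u_index-th entry is iq(k)."""
    h = gaussian_hidden(x, C, B)
    return float(np.sum(W * h * (C[:, u_index] - x[u_index]) / B ** 2))
```

A quick finite-difference check on the identifier output confirms the analytic form of formula (11).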
S5, calculating the controller output iq(k) from the time-domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is as follows:
iq(t) = KP·e(t) + KI·D^(−λ)e(t) + KD·D^μ·e(t) (12)
in the formula: iq(t) is the time-domain output of the controller, KP is the proportional coefficient, e(t) is the time-domain feedback error of the system, D^(−λ) is the λ-order integral operator, KI is the integral coefficient, KD is the differential coefficient, and D^μ is the μ-order differential operator;
equation (12) can be discretized directly from the fractional derivative and integral as defined by Grunwald-Letnikov:
iq(k) = KP·e(k) + KI·p^λ·Σ(l=0..k) ql·e(k−l) + KD·p^(−μ)·Σ(l=0..k) dl·e(k−l) (13)
in the formula: p is the sampling time step, ql and dl are binomial coefficients, e(k) = v*(k) − v(k), and e(k−l) is the system error at time k−l; wherein:
q0 = 1, ql = (1 − (1−λ)/l)·q(l−1), l = 1, 2, …, k (14)
d0 = 1, dl = (1 − (1+μ)/l)·d(l−1), l = 1, 2, …, k (15)
S6, adjusting, again by the gradient descent method according to the system error function, the proportional coefficient KP, the integral coefficient KI, the differential coefficient KD, the integral order λ and the differential order μ of the fractional order PID controller;
the system error function is:
E(k) = (1/2)[v*(k) − v(k)]^2 (16)
in the formula: v(k) is the speed output of the system and v*(k) is the given input of the system;
for convenience of expression, the integral and differential terms in formula (13) are abbreviated as EI(k) and ED(k), defined as follows:
EI(k) = p^λ·Σ(l=0..k) ql·e(k−l) (17)
ED(k) = p^(−μ)·Σ(l=0..k) dl·e(k−l) (18)
the change ΔKP of the proportional coefficient at time k is:
ΔKP = −η·∂E/∂KP = η·e(k)·(∂v(k)/∂iq(k))·e(k) (19)
in the formula: E is the system error function, e(k) is the system error at time k, and ∂v(k)/∂iq(k) is the Jacobian identification information calculated by formula (11); the same applies in the formulas below;
the change ΔKI of the integral coefficient at time k is:
ΔKI = −η·∂E/∂KI = η·e(k)·(∂v(k)/∂iq(k))·EI(k) (20)
the change ΔKD of the differential coefficient at time k is:
ΔKD = −η·∂E/∂KD = η·e(k)·(∂v(k)/∂iq(k))·ED(k) (21)
the change Δλ of the integral order at time k is:
Δλ = −η·∂E/∂λ = η·e(k)·(∂v(k)/∂iq(k))·KI·(∂EI(k)/∂λ) (22)
the change Δμ of the differential order at time k is:
Δμ = −η·∂E/∂μ = η·e(k)·(∂v(k)/∂iq(k))·KD·(∂ED(k)/∂μ) (23)
The parameters of the fractional order PID are:
KP(k+1)=KP(k)+ΔKP
KI(k+1)=KI(k)+ΔKI
KD(k+1)=KD(k)+ΔKD (24)
λ(k+1)=λ(k)+Δλ
μ(k+1)=μ(k)+Δμ
S7, setting k = k + 1 and returning to S3 for the next sampling control.
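As a closed-loop sanity check of steps S3-S7, the following sketch runs the discrete controller of formula (13) in its integer-order limit (λ = 1, KD = 0, where all ql = 1) against a hypothetical first-order plant v(k+1) = 0.9·v(k) + 0.1·u(k) standing in for the speed loop. The plant model, gains and step count are assumptions for illustration only, not the motor model of the patent:

```python
def simulate(steps=200, p=0.1, Kp=2.0, Ki=5.0, vref=1.0):
    """Drive the toy plant toward vref with the lambda = 1 (ordinary
    integral) special case of formula (13); Kd = 0 for simplicity."""
    v, acc = 0.0, 0.0
    for _ in range(steps):
        e = vref - v                 # e(k) = v*(k) - v(k)
        acc += e                     # ql = 1 for all l when lambda = 1
        u = Kp * e + Ki * p * acc    # formula (13), lambda = 1, Kd = 0
        v = 0.9 * v + 0.1 * u        # hypothetical plant
    return v
```

With these gains the loop is stable and the speed settles at the reference, confirming the discretization feeds back with the correct sign.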

Claims (1)

1. A parameter setting method of a fractional order PID controller based on an RBF neural network is characterized in that:
the parameter setting method comprises the following steps:
S1, establishing an RBF neural network model, and determining that the number of input layer neurons is n, the number of hidden layer neurons is m, and the number of output neurons is 1;
S2, initializing each parameter of the RBF neural network, namely determining the base width vector B, the center vector C, the initial weight vector W from the hidden layer to the output layer, the learning rate η of the network, the momentum factor α, and the initial values of the fractional order PID controller parameters: KP(0), KI(0), KD(0), λ(0), μ(0);
the input of the RBF neural network is X = [x1, x2, …, xi, …, xn]^T (i = 1, 2, …, n);
the vector of Gaussian functions of the hidden layer neurons is H = [h1, h2, …, hj, …, hm]^T (j = 1, 2, …, m);
wherein the Gaussian function of the jth hidden layer neuron is:
hj = exp(−‖X − Cj‖^2 / (2bj^2)) (1)
in the formula: Cj is the center vector of the jth hidden layer neuron of the RBF neural network, denoted Cj = [cj1, cj2, …, cji, …, cjn]^T, wherein cji is the center of the jth hidden layer neuron corresponding to the ith input (i = 1, 2, …, n), and bj is the base width parameter of the jth hidden layer neuron (j = 1, 2, …, m);
the base width vector of the whole RBF neural network is as follows:
B=[b1,b2,…,bm]T (2)
the weight vector of the RBF neural network is as follows:
W=[w1,w2,…,wj,…,wm]T (3)
in the formula: wj is the weight coefficient from the jth hidden layer neuron to the output (j = 1, 2, …, m);
S3, running the dynamic RBF neural network on line to obtain the output ym(k) of the neural network identifier;
ym(k)=w1h1+w2h2+…+wmhm (4)
in the formula: w1, w2, …, wm are the weight coefficients from each hidden layer neuron to the output, and h1, h2, …, hm are the Gaussian functions of the hidden layer neurons;
S4, sampling to obtain the system input r(k) and output y(k) of the controlled system, adjusting the parameters of the dynamic RBF neural network, and calculating the Jacobian identification information of the controlled object:
the performance index function of the identifier is:
J = (1/2)[y(k) − ym(k)]^2 (5)
in the formula: y(k) is the output of the overall system and ym(k) is the output of the neural network identifier;
applying the gradient descent method to this function, the update formulas for the output weights, the hidden layer neuron centers and the base width parameters can be derived respectively;
updating of the weight coefficients:
wj(k)=wj(k-1)+η[y(k)-ym(k)]hj+α[wj(k-1)-wj(k-2)] (6)
in the formula: η is the learning rate, α is the momentum factor, wj(k) is the weight coefficient of the jth neuron at time k, and wj(k−1), wj(k−2) are the weight coefficients of the neuron at the previous time and the time before that;
updating the base width parameter:
Δbj = [y(k) − ym(k)]·wj·hj·‖X − Cj‖^2 / bj^3 (7)
bj(k)=bj(k-1)+ηΔbj+α[bj(k-1)-bj(k-2)] (8)
in the formula: bj(k) is the base width parameter of the jth hidden layer neuron at time k;
updating the center point:
Δcji = [y(k) − ym(k)]·wj·hj·(xi − cji) / bj^2 (9)
cji(k)=cji(k-1)+ηΔcji+α[cji(k-1)-cji(k-2)] (10)
in the formula: cji(k) is the center of the ith input in the jth hidden layer neuron at time k;
the Jacobian information (i.e. the sensitivity of the plant output to the control input) is calculated as:
∂y(k)/∂u(k) ≈ ∂ym(k)/∂u(k) = Σ(j=1..m) wj·hj·(cji − u(k))/bj^2, where the ith network input satisfies xi = u(k) (11)
in the formula: y(k) is the output of the system, u(k) is the output of the controller, and ym(k) is the output of the neural network identifier;
S5, calculating the controller output u(k) from the time-domain form of the fractional order PID controller;
the time domain expression of the fractional order PID controller is as follows:
u(t) = KP·e(t) + KI·D^(−λ)e(t) + KD·D^μ·e(t) (12)
in the formula: u(t) is the time-domain output of the controller, KP is the proportional coefficient, e(t) is the time-domain feedback error of the system, D^(−λ) is the λ-order integral operator, KI is the integral coefficient, KD is the differential coefficient, and D^μ is the μ-order differential operator;
equation (12) can be discretized directly from the fractional derivative and integral as defined by Grunwald-Letnikov:
u(k) = KP·e(k) + KI·p^λ·Σ(l=0..k) ql·e(k−l) + KD·p^(−μ)·Σ(l=0..k) dl·e(k−l) (13)
in the formula: p is the sampling time step, ql and dl are binomial coefficients, e(k) = r(k) − y(k), and e(k−l) is the system error at time k−l; wherein:
q0 = 1, ql = (1 − (1−λ)/l)·q(l−1), l = 1, 2, …, k (14)
d0 = 1, dl = (1 − (1+μ)/l)·d(l−1), l = 1, 2, …, k (15)
S6, adjusting, again by the gradient descent method according to the system error function, the proportional coefficient KP, the integral coefficient KI, the differential coefficient KD, the integral order λ and the differential order μ of the fractional order PID controller;
the system error function is:
E(k) = (1/2)[r(k) − y(k)]^2 (16)
in the formula: y(k) is the output of the system and r(k) is the given input of the system;
for convenience of expression, the integral and differential terms in formula (13) are abbreviated as EI(k) and ED(k), defined as follows:
EI(k) = p^λ·Σ(l=0..k) ql·e(k−l) (17)
ED(k) = p^(−μ)·Σ(l=0..k) dl·e(k−l) (18)
the change ΔKP of the proportional coefficient at time k is:
ΔKP = −η·∂E/∂KP = η·e(k)·(∂y(k)/∂u(k))·e(k) (19)
in the formula: E is the system error function, e(k) is the system error at time k, and ∂y(k)/∂u(k) is the Jacobian identification information calculated by formula (11); the same applies in the formulas below;
the change ΔKI of the integral coefficient at time k is:
ΔKI = −η·∂E/∂KI = η·e(k)·(∂y(k)/∂u(k))·EI(k) (20)
the change ΔKD of the differential coefficient at time k is:
ΔKD = −η·∂E/∂KD = η·e(k)·(∂y(k)/∂u(k))·ED(k) (21)
the change Δλ of the integral order at time k is:
Δλ = −η·∂E/∂λ = η·e(k)·(∂y(k)/∂u(k))·KI·(∂EI(k)/∂λ) (22)
the change Δμ of the differential order at time k is:
Δμ = −η·∂E/∂μ = η·e(k)·(∂y(k)/∂u(k))·KD·(∂ED(k)/∂μ) (23)
the parameters of the fractional order PID are:
KP(k+1)=KP(k)+ΔKP
KI(k+1)=KI(k)+ΔKI
KD(k+1)=KD(k)+ΔKD (24)
λ(k+1)=λ(k)+Δλ
μ(k+1)=μ(k)+Δμ
S7, setting k = k + 1 and returning to S3 for the next sampling control.
CN202111579779.3A 2021-12-22 2021-12-22 Parameter setting method of fractional order PID controller based on RBF neural network Pending CN114063444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111579779.3A CN114063444A (en) 2021-12-22 2021-12-22 Parameter setting method of fractional order PID controller based on RBF neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111579779.3A CN114063444A (en) 2021-12-22 2021-12-22 Parameter setting method of fractional order PID controller based on RBF neural network

Publications (1)

Publication Number Publication Date
CN114063444A true CN114063444A (en) 2022-02-18

Family

ID=80230184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111579779.3A Pending CN114063444A (en) 2021-12-22 2021-12-22 Parameter setting method of fractional order PID controller based on RBF neural network

Country Status (1)

Country Link
CN (1) CN114063444A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114759815A (en) * 2022-04-08 2022-07-15 西安石油大学 Self-adaptive control method of quasi-Z-source inverter
CN114759815B (en) * 2022-04-08 2024-05-03 西安石油大学 Self-adaptive control method for quasi-Z source inverter

Similar Documents

Publication Publication Date Title
Bortoff et al. Pseudolinearization of the acrobot using spline functions
Zhou et al. Deep convolutional neural network based fractional-order terminal sliding-mode control for robotic manipulators
Lin et al. Intelligent sliding-mode position control using recurrent wavelet fuzzy neural network for electrical power steering system
CN109921707B (en) Switched reluctance hub motor position-free prediction control method
Chang et al. Adaptive control of hypersonic vehicles based on characteristic models with fuzzy neural network estimators
CN113722877A (en) Method for online prediction of temperature field distribution change during lithium battery discharge
CN108390597A (en) Permanent magnet synchronous motor nonlinear predictive controller design with disturbance observer
CN112398397A (en) Linear active disturbance rejection permanent magnet synchronous motor control method based on model assistance
Li et al. Robust control for permanent magnet in-wheel motor in electric vehicles using adaptive fuzzy neural network with inverse system decoupling
CN114063444A (en) Parameter setting method of fractional order PID controller based on RBF neural network
CN115632584A (en) Loss optimization control method for embedded permanent magnet synchronous motor
Sun et al. A new method of fault estimation and tolerant control for fuzzy systems against time-varying delay
CN115890668A (en) Distributed optimization learning control method and system for robot joint module
Hammoud et al. Learning-based model predictive current control for synchronous machines: An LSTM approach
CN113110430A (en) Model-free fixed-time accurate trajectory tracking control method for unmanned ship
CN112564557A (en) Control method, device and equipment of permanent magnet synchronous motor and storage medium
CN106533285B (en) Permanent magnet DC motor method for controlling number of revolution based on Kriging model
Acikgoz et al. Long short-term memory network-based speed estimation model of an asynchronous motor
CN111055920B (en) Construction method of multi-model corner controller of automobile EPS (electric power steering) system
CN113176731B (en) Dual-neural-network self-learning IPMSM active disturbance rejection control method
Romasevych et al. Identification and optimal control of a dynamical system via ANN-based approaches
CN117097220B (en) EMPC current loop control method and system applied to electric forklift induction motor
CN115114964B (en) Sensor intermittent fault diagnosis method based on data driving
Wang et al. Fuzzy state observer based command-filtered adaptive control of uncertain nonlinear systems
CN116015119B (en) Permanent magnet synchronous motor current control method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination