CN117031967B - Iterative learning active disturbance rejection control method - Google Patents

Iterative learning active disturbance rejection control method

Info

Publication number
CN117031967B
CN117031967B (application CN202311300391.4A)
Authority
CN
China
Prior art keywords
iterative learning
ileso
algorithm
state
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311300391.4A
Other languages
Chinese (zh)
Other versions
CN117031967A (en)
Inventor
李向阳 (Li Xiangyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202311300391.4A priority Critical patent/CN117031967B/en
Publication of CN117031967A publication Critical patent/CN117031967A/en
Application granted granted Critical
Publication of CN117031967B publication Critical patent/CN117031967B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B 13/04: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators
    • G05B 13/042: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses an iterative learning active disturbance rejection control method in which an iterative learning extended state observer is introduced into an active disturbance rejection controller: it replaces the controller's original extended state observer and performs control calculation together with the controller's original tracking differentiator and state error feedback device. The iterative learning extended state observer comprises an iterative learning extended state observer algorithm and a real-time moving data window; the algorithm is run over the real-time moving data window to perform iterative learning, and the state variables and extended state variable of the iterative learning process are output to the state error feedback device, achieving low-delay estimation of the total disturbance of the controlled system. The real-time moving data window stores sampled data of a preset length together with the state variables and extended state variable of the iterative learning process. The invention obtains a good estimate of the total disturbance of the system and reduces the initial-value peaking phenomenon by dynamically adjusting the number of iterative learning passes.

Description

Iterative learning active disturbance rejection control method
Technical Field
The invention relates to the technical field of active disturbance rejection control, in particular to an iterative learning active disturbance rejection control method.
Background
The active disturbance rejection control (Active Disturbance Rejection Control, ADRC) method is a general-purpose control method proposed by the renowned Chinese scholar Jingqing Han. An active disturbance rejection controller consists of a tracking differentiator (Tracking Differentiator, TD), an extended state observer (Extended State Observer, ESO) and a state error feedback device (State Error Feedback, SEF); the ESO is the core of active disturbance rejection control.
The choice of ESO bandwidth strongly affects both the disturbance rejection of active disturbance rejection control and the filtering of random noise. Extensive practical experience shows that if the ESO bandwidth is set large, a severe initial peaking phenomenon can occur and the filtering of measurement noise deteriorates; if it is set small, a good estimate of the total disturbance is hard to obtain. How to select the ESO bandwidth dynamically has long been a difficulty in ADRC applications. In addition, the sampling period cannot be made arbitrarily short because of hardware speed limits such as those of the analog-to-digital converter. Reducing the sampling frequency of digital control while preserving the ESO's estimation performance for the total disturbance is therefore one of the key technologies for low-cost automation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an iterative learning active disturbance rejection control method that makes full use of historical data. The most recently measured data form a real-time moving data window: whenever the latest measurement arrives, the earliest measurement is removed. An iterative learning ILESO algorithm is then run on this window to identify the total disturbance of the system iteratively and obtain its real-time estimate. When a given learning-error requirement is met, or the next sampling instant is about to arrive, the iterative learning of the current sampling period ends and the next sample is processed; repeating this process, the iterative learning ESO obtains a good estimate of the total disturbance of the system, and combining it with the TD and SEF realizes an active disturbance rejection control method based on the iterative learning ESO. By dynamically adjusting the number of iterative learning passes of the iterative ESO, a time-varying ESO bandwidth is obtained and the initial-value peaking phenomenon of the ESO can be greatly reduced or eliminated.
The aim of the invention is achieved by the following technical scheme: in the iterative learning active disturbance rejection control method, an iterative learning extended state observer ILESO is introduced into the active disturbance rejection controller ADRC, the iterative learning extended state observer ILESO replaces the original extended state observer ESO in the active disturbance rejection controller ADRC, and control calculation is performed together with the controller's original tracking differentiator TD and state error feedback device SEF. The iterative learning extended state observer ILESO comprises an ILESO algorithm and a real-time moving data window RT-MDW; the ILESO algorithm is run over the real-time moving data window RT-MDW to perform iterative learning, and the state variables and extended state variable of the iterative learning process are output to the state error feedback device SEF, achieving low-delay estimation of the total disturbance of the controlled system; meanwhile, the real-time moving data window RT-MDW stores sampled data of a preset length together with the state variables and extended state variable of the iterative learning process.
Further, the iterative learning extended state observer ILESO performs the steps of:
A set of data pairs (u(l), y(l)) over a sampling interval is defined to form the real-time moving data window RT-MDW of formula (1):

( u(k-N_W), y(k-N_W); u(k-N_W+1), y(k-N_W+1); …; u(k), y(k) )   (1);

where N_W is the length of the real-time moving data window RT-MDW and k is the sampling count. The sampled data in the RT-MDW are reused repeatedly by the iterative learning method, and the relevant variables are re-indexed according to the window size: u_l = u(k-N_W+l) and y_l = y(k-N_W+l) for l = 0, …, N_W, while the window-indexed observer variables (written here as z^j_{i,l}, i = 1, …, n+1) represent the ILESO state estimates x̂_1, …, x̂_n and the expanded-state estimate f̂.
The ILESO performs a forward iterative learning mechanism; the forward nonlinear ILESO algorithm is given by formula (2) and the forward linear ILESO algorithm by formula (3), where j is the iteration count within the real-time moving data window and N_C is the maximum number of iterative learning passes. Iterative learning ends when the iteration count reaches the maximum number of iterative learning passes N_C; alternatively, the absolute value of the estimation error |e| can be used to end the iterative learning process, i.e. when |e| ≤ ε_0, where ε_0 is a preset positive number, the iterative learning of the current sampling period is exited;
(2): the forward nonlinear ILESO recursion (reproduced only as a formula image in the original text);

in formula (2), N_W is the length of the data window; n is the order of the controlled system; h is the sampling period; sign() is the sign function; β_1, β_2, …, β_n and β_{n+1} are positive constants that stabilize the ILESO, tuned by the bandwidth method; the iterative learning filter coefficient, feedforward coefficient and feedback coefficient (written here as q, γ and ρ; their admissible ranges are likewise given only as images) shape the learning update. Formula (2) also gives the definition of the fal() function, which in the standard ADRC form is

fal(e, α, δ) = e / δ^(1-α) for |e| ≤ δ;  fal(e, α, δ) = |e|^α · sign(e) for |e| > δ;

where e is the estimation error, δ is the demarcation point between the linear and nonlinear segments of the fal() function, and α is the exponent in the fal() function, representing the degree of nonlinearity: when α = 1 the fal() function is a linear function, and the farther α is from 1, the greater the degree of nonlinearity;
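For reference, a minimal Python sketch of the standard fal() function, together with a common bandwidth-method gain parameterization (Gao's β_i = C(n+1, i)·ω_o^i; the patent only states that the β_i are tuned by the bandwidth method, so this exact parameterization is an assumption), is:

    import math

    def fal(e: float, alpha: float, delta: float) -> float:
        # Standard ADRC fal() function: linear inside |e| <= delta,
        # power-law |e|**alpha outside; continuous at |e| = delta.
        if abs(e) <= delta:
            return e / (delta ** (1.0 - alpha))
        return (abs(e) ** alpha) * math.copysign(1.0, e)

    def bandwidth_gains(n: int, omega_o: float) -> list:
        # Assumed bandwidth-method parameterization: beta_i = C(n+1, i) * omega_o**i,
        # i = 1..n+1, which places all observer poles at -omega_o.
        return [math.comb(n + 1, i) * omega_o ** i for i in range(1, n + 2)]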
When α = 1, the linear ILESO algorithm is obtained:

(3): the forward linear ILESO recursion (reproduced only as a formula image in the original text);
The ILESO also performs a reverse iterative learning mechanism; the reverse nonlinear ILESO algorithm is given by formula (4) and the reverse linear ILESO algorithm by formula (5):

(4): the reverse nonlinear ILESO recursion (reproduced only as a formula image in the original text);

the reverse linear ILESO algorithm is the linear form of the reverse nonlinear ILESO algorithm at α = 1:

(5): the reverse linear ILESO recursion (reproduced only as a formula image in the original text);
The forward and reverse ILESO algorithms are performed alternately: a forward pass is executed, then a reverse pass, iterating in alternation; the ILESO output obtained at the current sampling instant is taken as the estimate x̂ of the controlled system state and the estimate f̂ of its expanded state, as in formula (6) (in the window notation introduced above):

x̂_i(k) = z^j_{i,N_W} (i = 1, …, n),  f̂(k) = z^j_{n+1,N_W}   (6).
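Because formulas (2) to (5) survive only as images, the following Python sketch (reusing the fal() helper above) illustrates one plausible reading of the alternating window learning: each pass runs a classical discrete ESO recursion across the window and blends the result with the previous iterate through a filter coefficient q. The parameter names q, betas, b0, h, alpha, delta and the blending rule are assumptions, not the patent's verbatim formulas:

    def eso_step(zl, u_l, y_l, p, sgn):
        # One Euler step of the classical discrete ESO recursion from window
        # position l (sgn = +1: forward in time, sgn = -1: reverse in time).
        n = len(p["betas"]) - 1
        e = zl[0] - y_l
        out = [0.0] * (n + 1)
        for i in range(n + 1):
            drift = zl[i + 1] if i < n else 0.0           # chain structure z_i' = z_{i+1}
            drive = p["b0"] * u_l if i == n - 1 else 0.0  # control enters the n-th state
            out[i] = zl[i] + sgn * p["h"] * (
                drift + drive - p["betas"][i] * fal(e, p["alpha"], p["delta"]))
        return out

    def forward_pass(z, u, y, p):
        # Sweep the window left to right, blending each candidate with the
        # previous iterate via the filter coefficient q (assumed blending rule).
        for l in range(len(u) - 1):
            cand = eso_step(z[l], u[l], y[l], p, +1.0)
            z[l + 1] = [p["q"] * old + (1.0 - p["q"]) * new
                        for old, new in zip(z[l + 1], cand)]

    def reverse_pass(z, u, y, p):
        # Sweep the window right to left with the time direction negated.
        for l in range(len(u) - 1, 0, -1):
            cand = eso_step(z[l], u[l - 1], y[l], p, -1.0)
            z[l - 1] = [p["q"] * old + (1.0 - p["q"]) * new
                        for old, new in zip(z[l - 1], cand)]

    def ileso_window_update(z, u, y, p, N_C, eps0):
        # Alternate forward/reverse passes until |e| <= eps0 or j exceeds N_C,
        # then read the estimates off the newest window element (formula (6)).
        j = 0
        forward_pass(z, u, y, p); j += 1
        while abs(z[-1][0] - y[-1]) > eps0 and j <= N_C:
            reverse_pass(z, u, y, p); j += 1
            forward_pass(z, u, y, p); j += 1
        return z[-1][:-1], z[-1][-1]   # x-hat estimates, expanded-state estimate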
further, the state error feedback device SEF performs the steps of:
When the linear state error feedback device SEF is used, the control amount u(k) of the ADRC is calculated as formula (7), which in the standard linear SEF form reads

u(k) = [ k_1·(v_1(k) - x̂_1(k)) + … + k_n·(v_n(k) - x̂_n(k)) - f̂(k) ] / b̂_0   (7);

where k_1, k_2, …, k_n are positive constants that stabilize the closed-loop system, tuned by the bandwidth method; k is the sampling count; x̂_i(k) are the estimates of the controlled system states; f̂(k) is the estimate of the expanded state; b̂_0 is the estimate of the control gain; and v_i(k), i = 1, …, n, are the derivatives of the input r calculated by the tracking differentiator TD.
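A direct transcription of this control law in Python, assuming the TD outputs v and the ILESO estimates are already available:

    def sef_control(v, xhat, fhat, kgains, b0_hat):
        # u(k) = ( sum_i k_i * (v_i(k) - xhat_i(k)) - fhat(k) ) / b0_hat
        u0 = sum(k_i * (v_i - x_i) for k_i, v_i, x_i in zip(kgains, v, xhat))
        return (u0 - fhat) / b0_hat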
Further, the method comprises the steps of:
S200, initializing the working parameters of the iterative learning active disturbance rejection control;
S201, obtaining each derivative v_i of the input r through the tracking differentiator TD, i = 1, …, n;
S202, inputting the sampled data pair (u(k), y(k));
S203, forming the real-time moving data window RT-MDW from the sampled data pair input in step S202 and the historical data, and dynamically adjusting the maximum number of iterative learning passes of the iterative learning ILESO algorithm;
S204, performing forward ILESO algorithm learning once and adding one to the iterative learning count j;
S205, performing error judgment of the iterative learning effect; if the preset error requirement is met, ending the iterative learning of this sampling period and entering step S206; if the preset error requirement is not met, entering step S208;
S206, outputting the final estimated state x̂(k) and expanded state f̂(k) of the ILESO algorithm at this sampling instant;
S207, after calculating the control output according to the state error feedback device SEF of the ADRC, entering the next sampling control and returning to step S201;
S208, judging whether the iterative learning count j is greater than the set maximum number of iterative learning passes N_C; if j is greater than N_C, entering step S206; if j is not greater than N_C, entering S209;
S209, performing reverse ILESO algorithm learning once, adding one to the iterative learning count j, and entering S210;
S210, performing forward ILESO algorithm learning once, adding one to the iterative learning count j, and re-entering S205 to complete the error judgment of the iterative learning effect again.
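Putting steps S200 to S210 together, a hedged Python skeleton of one sampling period might read as follows; td_step() and read_sample() are placeholder names for details the patent leaves open, dynamic_nc() is the assumed ramp sketched after formula (8), window is an RT-MDW helper like the MovingDataWindow sketched in embodiment 2, and ileso_window_update() and sef_control() come from the sketches above:

    def adrc_sampling_period(window, r, k, p, N_CM, T_C, eps0):
        # One pass of steps S201..S207 (with S208..S210 inside the window update).
        v = td_step(r, p)                      # S201: TD derivatives v_1..v_n (placeholder)
        u_k, y_k = read_sample()               # S202: latest data pair (placeholder)
        window.push(u_k, y_k)                  # S203: slide the RT-MDW
        N_C = dynamic_nc(k, N_CM, T_C)         # S203: capped ramp, see formula (8) sketch
        xhat, fhat = ileso_window_update(       # S204-S206 and S208-S210
            window.trajectory(), window.u(), window.y(), p, N_C, eps0)
        return sef_control(v, xhat, fhat, p["kgains"], p["b0"])   # S207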
Further, in step S200, the operating parameters include the parameters of the tracking differentiator TD, the extended state observer ESO and the state error feedback device SEF, the length N_W of the real-time moving data window, and the maximum number of iterative learning passes N_C.
Further, the step S203 includes the steps of:
The sampled data pair (u(k), y(k)) input in step S202 and the historical data (u_l, y_l), l = 0, …, N_W, constitute the real-time moving data window RT-MDW; when there are not enough data, the missing (u_l, y_l) are replaced by 0. The window index l in the interval [0, N_W] corresponds to measurement data sampled at the instants [k-N_W, k]; as time advances, the sampling count k increases, new measurement data pairs (u(k), y(k)) keep entering, and the data from N_W sampling periods earlier are withdrawn, forming a data window that moves in real time. In addition, the real-time moving data window RT-MDW stores the states x̂ and expanded state f̂ of the iterative learning process.

N_C is adjusted dynamically: as the sampling count k increases, N_C is increased, indirectly realizing variable-gain control of the ILESO, as in formula (8):

N_C = min( INT(·), N_CM )   (8) (the argument of INT is reproduced only as a formula image in the original text);

where k is the sampling count, N_CM is the maximum number of iterative learning passes allowed by the time between two samplings, and T_C denotes a process time constant, taken as the number of sampling periods corresponding to the inertia time of a first-order equivalent model of the system; INT is the rounding function and min is the minimum function. The length N_W of the real-time moving data window RT-MDW is adjusted according to T_C.
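Because the exact argument of INT in formula (8) is not recoverable from the text, the following sketch assumes a simple linear ramp in k scaled by T_C and capped at N_CM; it illustrates the monotone, saturating schedule the text describes rather than the patent's literal formula:

    def dynamic_nc(k: int, N_CM: int, T_C: int) -> int:
        # Assumed ramp: grow with the sampling count k, saturate at N_CM.
        return min(int(N_CM * k / T_C), N_CM)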
Further, the step S205 includes the steps of:
performing error judgment of the iterative learning effect: if |e| ≤ ε_0 is satisfied, ending the iterative learning of this sampling period and entering step S206; if the absolute value of the estimation error does not satisfy |e| ≤ ε_0, entering step S208; where |e| is the absolute value of the estimation error and ε_0 is a preset positive number.
Further, the step S206 includes the steps of:
outputting the final estimated state x̂(k) and expanded state f̂(k) of the ILESO algorithm at this sampling instant, wherein:

x̂_i(k) = z^j_{i,N_W} (i = 1, …, n),  f̂(k) = z^j_{n+1,N_W}   (9);

in formula (9), k is the sampling count, x̂ is the ILESO state estimate and f̂ is the ILESO expanded-state estimate.
A non-transitory computer readable medium storing instructions which, when executed by a processor, perform the steps of the iterative learning active disturbance rejection control method described above.
A computing device includes a processor and a memory for storing a program executable by the processor, wherein the processor implements the iterative learning active disturbance rejection control method described above when executing the program stored by the memory.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention provides the structure and algorithm of an iterative learning extended state observer (ILESO), which introduces an iterative learning mechanism on the basis of the classical ESO, forms a period of real-time sampled data into a real-time moving data window, improves the convergence rate of the ILESO through forward and reverse iterative learning along the time axis, reduces phase lag, and realizes low-delay estimation of the total disturbance;
2. The invention can dynamically adjust the number of iterative learning passes according to the ILESO's error on the system output, thereby indirectly and adaptively adjusting the ESO bandwidth or observer gain and overcoming the influence of large-range uncertainty of unknown disturbances on ESO performance; the method does not need high gain, which reduces the initial peaking phenomenon;
3. The ILESO can use a smaller bandwidth than the classical time-domain ESO and filters measurement noise better, resolving the classical ESO's difficulty in simultaneously achieving good filtering of measurement noise and good estimation of the total disturbance;
4. The length of the real-time moving data window can be adjusted according to the time constant of the controlled system; computing power permitting, increasing the length of the data window makes the estimate of the total disturbance smoother, which can reduce frequent back-and-forth actions of the control system's actuator and prolong its service life.
Drawings
Fig. 1 is a schematic diagram of a conventional ADRC structure.
Fig. 2 is a schematic structural diagram of the present invention.
Fig. 3 is a flow chart of the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples.
Example 1
The iterative learning active disturbance rejection control method provided by this embodiment introduces an iterative learning extended state observer ILESO into the active disturbance rejection controller ADRC: the iterative learning extended state observer ILESO replaces the original extended state observer ESO in the active disturbance rejection controller ADRC, and control calculation is performed together with the controller's original tracking differentiator TD and state error feedback device SEF. The iterative learning extended state observer ILESO comprises an ILESO algorithm and a real-time moving data window RT-MDW; the ILESO algorithm is run over the real-time moving data window RT-MDW to perform iterative learning, and the state variables and extended state variable of the iterative learning process are output to the state error feedback device SEF, achieving low-delay estimation of the total disturbance of the controlled system; meanwhile, the real-time moving data window RT-MDW stores sampled data of a preset length together with the state variables and extended state variable of the iterative learning process.
Referring to fig. 1, the controlled system of the prior-art ADRC is described by formula (1), which in the standard ADRC form reads

x^(n)(t) = f( x(t), ẋ(t), …, x^(n-1)(t), w(t), t ) + b·u(t),  y(t) = x(t)   (1)

where b is the control gain, n is the order of the system, x, ẋ, …, x^(n-1) are the system states, y is the system output, and f(·) and w are the internal disturbance and the external disturbance of the system, respectively. Formula (1) is rewritten as the expanded state equation (2):

ẋ_i = x_{i+1} (i = 1, …, n-1);  ẋ_n = x_{n+1} + b_0·u;  ẋ_{n+1} = ḟ_total;  y = x_1   (2)

where x_{n+1} = f_total = f(·) + (b - b_0)·u represents the total disturbance of the system, which is the expanded state of the system, and b_0 is a rough estimate of the control gain b.
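For concreteness, in the common second-order case (n = 2) the expansion (2) reads:

ẋ_1 = x_2;  ẋ_2 = x_3 + b_0·u;  ẋ_3 = ḟ_total;  y = x_1;  with x_3 = f_total = f(x_1, x_2, w, t) + (b - b_0)·u.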
TD in fig. 1 is the tracking differentiator, used to obtain the differential signals v_1, v_2, …, v_n of each order of the input r. According to ADRC theory, the discrete-time nonlinear ESO (3) and linear ESO (4) take the standard forms

e(k) = z_1(k) - y(k);
z_i(k+1) = z_i(k) + h·( z_{i+1}(k) - β_i·fal(e(k), α, δ) ),  i = 1, …, n-1;
z_n(k+1) = z_n(k) + h·( z_{n+1}(k) - β_n·fal(e(k), α, δ) + b_0·u(k) );
z_{n+1}(k+1) = z_{n+1}(k) - h·β_{n+1}·fal(e(k), α, δ)   (3)

where h is the sampling period; sign() is the sign function used inside fal(); β_1, β_2, …, β_n and β_{n+1} are positive constants that stabilize the ESO and can be tuned by the bandwidth method; z_1, z_2, …, z_n and z_{n+1} are the estimates of the states and of the expanded state of the ESO; and δ and α are the demarcation point and exponent of the fal() function. When α = 1, the nonlinear ESO of formula (3) becomes the linear ESO of formula (4):

e(k) = z_1(k) - y(k);
z_i(k+1) = z_i(k) + h·( z_{i+1}(k) - β_i·e(k) ),  i = 1, …, n-1;
z_n(k+1) = z_n(k) + h·( z_{n+1}(k) - β_n·e(k) + b_0·u(k) );
z_{n+1}(k+1) = z_{n+1}(k) - h·β_{n+1}·e(k)   (4)
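A minimal Python implementation of one step of the linear ESO (4), under the reconstruction above, might be:

    def linear_eso_step(z, u, y, betas, b0, h):
        # One discrete step of the classical linear ESO of formula (4);
        # z = [z_1, ..., z_{n+1}], n = len(betas) - 1.
        n = len(betas) - 1
        e = z[0] - y
        z_next = list(z)
        for i in range(n + 1):
            drift = z[i + 1] if i < n else 0.0
            drive = b0 * u if i == n - 1 else 0.0
            z_next[i] = z[i] + h * (drift + drive - betas[i] * e)
        return z_next

For n = 2, betas = bandwidth_gains(2, omega_o) from the earlier sketch gives [3·ω_o, 3·ω_o², ω_o³], the usual placement of all observer poles at -ω_o.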
SEF in fig. 1 is the state error feedback control; when a linear SEF is adopted, the control amount of the ADRC is, in the standard form,

u(k) = [ k_1·(v_1(k) - z_1(k)) + … + k_n·(v_n(k) - z_n(k)) - z_{n+1}(k) ] / b̂_0   (5)

where k_1, k_2, …, k_n are positive constants that stabilize the closed-loop system and may be tuned by the bandwidth method.
In ADRC, the ESO uses only the data pair (u(k), y(k)) in each cycle, and the data are discarded once used. Because past historical data are not fully utilized, the ESO gains β_i (i = 1, …, n+1) tend to be large and exhibit severe initial peaking, which makes it difficult to accommodate widely varying total disturbances.
Thus, referring to fig. 2, the iterative learning extended state observer ILESO employed in the present embodiment performs the following steps:
A set of data pairs (u(l), y(l)) over a sampling interval is defined to form the real-time moving data window RT-MDW of formula (6):

( u(k-N_W), y(k-N_W); u(k-N_W+1), y(k-N_W+1); …; u(k), y(k) )   (6);

where N_W is the length of the real-time moving data window RT-MDW and k is the sampling count. The sampled data in the RT-MDW are reused repeatedly by the iterative learning method; to embody the iterative learning character, the window subscript does not grow with the sampling count, and the relevant variables are re-indexed according to the window size: u_l = u(k-N_W+l) and y_l = y(k-N_W+l) for l = 0, …, N_W, while the window-indexed observer variables (written here as z^j_{i,l}) represent the ILESO state estimates x̂ and the expanded-state estimate f̂.
The ILESO performs a forward iterative learning mechanism; the forward nonlinear ILESO algorithm is given by formula (7) and the forward linear ILESO algorithm by formula (8), where j is the iteration count within the real-time moving data window and N_C is the maximum number of iterative learning passes. Iterative learning ends when the iteration count reaches the maximum number of iterative learning passes N_C; alternatively, the absolute value of the estimation error |e| can be used to end the iterative learning process, i.e. when |e| ≤ ε_0, where ε_0 is a preset positive number, the iterative learning of the current sampling period is exited;
(7): the forward nonlinear ILESO recursion (reproduced only as a formula image in the original text);

in formula (7), N_W is the length of the data window; n is the order of the controlled system; h is the sampling period; sign() is the sign function; β_1, β_2, …, β_n and β_{n+1} are positive constants that stabilize the ILESO, tuned by the bandwidth method; the iterative learning filter coefficient, feedforward coefficient and feedback coefficient (written here as q, γ and ρ; their admissible ranges are likewise given only as images) shape the learning update. Formula (7) also gives the definition of the fal() function, which in the standard ADRC form is

fal(e, α, δ) = e / δ^(1-α) for |e| ≤ δ;  fal(e, α, δ) = |e|^α · sign(e) for |e| > δ;

where e is the estimation error, δ is the demarcation point between the linear and nonlinear segments of the fal() function, and α is the exponent in the fal() function, representing the degree of nonlinearity: when α = 1 the fal() function is a linear function, and the farther α is from 1, the greater the degree of nonlinearity;
When α = 1, the linear ILESO algorithm is obtained:

(8): the forward linear ILESO recursion (reproduced only as a formula image in the original text);
The ILESO also performs a reverse iterative learning mechanism; the reverse nonlinear ILESO algorithm is given by formula (9) and the reverse linear ILESO algorithm by formula (10):

(9): the reverse nonlinear ILESO recursion (reproduced only as a formula image in the original text);

the reverse linear ILESO algorithm is the linear form of the reverse nonlinear ILESO algorithm at α = 1:

(10): the reverse linear ILESO recursion (reproduced only as a formula image in the original text);
The forward and reverse ILESO algorithms are performed alternately: a forward pass is executed, then a reverse pass, iterating in alternation; the ILESO output obtained at the current sampling instant is taken as the estimate x̂ of the controlled system state and the estimate f̂ of its expanded state, as in formula (11), and is fed back into formula (5) to realize iterative learning active disturbance rejection control:

x̂_i(k) = z^j_{i,N_W} (i = 1, …, n),  f̂(k) = z^j_{n+1,N_W}   (11).
Through formulas (7) to (10), the ESO, which originally learned from a single forward data point, is modified to learn forward and backward over a real-time moving data window composed of multiple data points, with repeated iterative learning inside the window. This leaves the ESO's learning mechanism unchanged while solving the practical problems that the classical ESO's bandwidth is difficult to determine, its initial peaking phenomenon is severe, it is strongly affected by system output noise, and its state and expanded-state estimates have difficulty adapting to large-range changes of the total disturbance.
Example 2
Referring to fig. 3, the iterative learning active disturbance rejection control method provided by this embodiment includes the following steps:

Step 200, initializing the working parameters of the iterative learning active disturbance rejection control, where the working parameters comprise the tracking differentiator TD parameters, the SEF parameters k_i (i = 1, …, n), and the ESO parameters β_i (i = 1, …, n+1), α_i (i = 1, …, n+1) and δ; the TD parameters may be selected as those of a classical linear or nonlinear TD. Furthermore, the length N_W of the real-time moving data window (Real-Time Moving Data Window, RT-MDW) and the maximum number of iterative learning passes N_C are initialized;

Step 201, obtaining each derivative v_i of the input r through the tracking differentiator TD (i = 1, …, n);

Step 202, inputting the sampled data pair (u(k), y(k)); at the initial instant, u and y are all 0;

Step 203, forming the real-time moving data window RT-MDW from the sampled data pair input in Step 202 and the historical data, and dynamically adjusting the maximum number of iterative learning passes of the iterative learning ILESO algorithm (a bookkeeping sketch is given after Step 210), including the following steps:

the sampled data pair (u(k), y(k)) input in Step 202 and the historical data (u_l, y_l), l = 0, …, N_W, constitute the real-time moving data window RT-MDW; when there are not enough data, the missing (u_l, y_l) are replaced by 0. The window index l in the interval [0, N_W] corresponds to measurement data sampled at the instants [k-N_W, k]; as time advances, the sampling count k increases, new measurement data pairs (u(k), y(k)) keep entering, and the data from N_W sampling periods earlier are withdrawn, forming a data window that moves in real time. In addition, the real-time moving data window RT-MDW stores the states x̂ and expanded state f̂ of the iterative learning process.

N_C is adjusted dynamically: as the sampling count k increases, N_C is increased, indirectly realizing variable-gain control of the ILESO, as in formula (12):

N_C = min( INT(·), N_CM )   (12) (the argument of INT is reproduced only as a formula image in the original text);

where k is the sampling count; N_CM is the maximum number of iterative learning passes allowed by the time between two samplings, determined by the computing power of the computer and the order of the system; T_C represents a process time constant, determined during commissioning and taken as the number of sampling periods corresponding to the inertia time of a first-order equivalent model of the system; INT is the rounding function and min is the minimum function. The length N_W of the real-time moving data window RT-MDW is adjusted according to T_C; computing power permitting, increasing the length of the data window makes the estimate of the total disturbance smoother, which can reduce frequent back-and-forth actions of the control system's actuator and prolong its service life.

Step 204, performing forward ILESO algorithm learning once and adding one to the iterative learning count j;

Step 205, performing error judgment of the iterative learning effect; if |e| ≤ ε_0 is satisfied, ending the iterative learning of this sampling period and entering Step 206; if the absolute value of the estimation error does not satisfy |e| ≤ ε_0, entering Step 208; where |e| is the absolute value of the estimation error and ε_0 is a preset positive number;

Step 206, outputting the final estimated state x̂(k) and expanded state f̂(k) of the ILESO algorithm at this sampling instant;

Step 207, after calculating the control output according to the state error feedback device SEF of the active disturbance rejection controller ADRC, entering the next sampling control and returning to Step 201;

Step 208, judging whether the iterative learning count j is greater than the set maximum number of iterative learning passes N_C; if j is greater than N_C, entering Step 206; if j is not greater than N_C, entering Step 209;

Step 209, performing reverse ILESO algorithm learning once, adding one to the iterative learning count j, and entering Step 210;

Step 210, performing forward ILESO algorithm learning once, adding one to the iterative learning count j, and re-entering Step 205 to complete the error judgment of the iterative learning effect again.
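A hedged sketch of the RT-MDW bookkeeping in Step 203 (a fixed-length window that admits the newest pair, evicts the oldest, zero-pads until full, and carries the iterative-learning trajectory between samples; the class name and method names are illustrative, not from the patent) could be:

    from collections import deque

    class MovingDataWindow:
        def __init__(self, n_w: int, n_states: int):
            # Zero-padded until N_W + 1 real samples have arrived (per Step 203).
            self._u = deque([0.0] * (n_w + 1), maxlen=n_w + 1)
            self._y = deque([0.0] * (n_w + 1), maxlen=n_w + 1)
            # Iterative-learning trajectory z[l][i], kept between sampling periods.
            self.z = [[0.0] * n_states for _ in range(n_w + 1)]

        def push(self, u_k: float, y_k: float) -> None:
            self._u.append(u_k)   # newest enters, oldest is evicted automatically
            self._y.append(y_k)
            self.z = self.z[1:] + [self.z[-1][:]]  # shift the trajectory with the window

        def u(self): return list(self._u)
        def y(self): return list(self._y)
        def trajectory(self): return self.z

For an n-th order system it would be created as window = MovingDataWindow(n_w=N_W, n_states=n + 1), matching the window used in the sampling-period skeleton of the Disclosure section.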
Example 3
This embodiment discloses a non-transitory computer-readable medium storing instructions that, when executed by a processor, perform the steps of the iterative learning active disturbance rejection control method according to embodiment 1 or embodiment 2.
The non-transitory computer readable medium in this embodiment may be a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a USB flash disk, a removable hard disk, or the like.
Example 4
The embodiment discloses a computing device, which includes a processor and a memory for storing a program executable by the processor, where the processor implements the iterative learning active disturbance rejection control method described in embodiment 1 or embodiment 2 when executing the program stored by the memory.
The computing device described in this embodiment may be a desktop computer, a notebook computer, a smartphone, a PDA handheld terminal, a tablet computer, a programmable logic controller (PLC), or another terminal device with processor functionality.
The above embodiments are only preferred embodiments of the present invention and are not intended to limit its scope of protection; therefore, any changes made according to the shapes and principles of the present invention shall be covered within the protection scope of the present invention.

Claims (9)

1. An iterative learning active disturbance rejection control method, characterized in that: an iterative learning extended state observer ILESO is introduced into an active disturbance rejection controller ADRC, the iterative learning extended state observer ILESO replaces the original extended state observer ESO in the active disturbance rejection controller ADRC, and control calculation is performed together with the original tracking differentiator TD and state error feedback device SEF in the active disturbance rejection controller ADRC; the iterative learning extended state observer ILESO comprises an ILESO algorithm and a real-time moving data window RT-MDW, the ILESO algorithm is run over the real-time moving data window RT-MDW to perform iterative learning, and the state variables and extended state variable of the iterative learning process are output to the state error feedback device SEF, so that low-delay estimation of the total disturbance of the controlled system is achieved; meanwhile, the real-time moving data window RT-MDW stores sampled data of a preset length together with the state variables and extended state variable of the iterative learning process;
the iterative learning extended state observer ILESO performs the steps of:
a set of data pairs (u(l), y(l)) over a sampling interval is defined to form the real-time moving data window RT-MDW of formula (1):

( u(k-N_W), y(k-N_W); u(k-N_W+1), y(k-N_W+1); …; u(k), y(k) )   (1);

where N_W is the length of the real-time moving data window RT-MDW and k is the sampling count; the sampled data in the RT-MDW are reused repeatedly by the iterative learning method, and the relevant variables are re-indexed according to the window size: u_l = u(k-N_W+l) and y_l = y(k-N_W+l) for l = 0, …, N_W, while the window-indexed observer variables (written here as z^j_{i,l}) represent the ILESO state estimates x̂ and the expanded-state estimate f̂;
the ILESO performs a forward iterative learning mechanism; the forward nonlinear ILESO algorithm is given by formula (2) and the forward linear ILESO algorithm by formula (3), where j is the iteration count within the real-time moving data window and N_C is the maximum number of iterative learning passes; iterative learning ends when the iteration count reaches the maximum number of iterative learning passes N_C; alternatively, the absolute value of the estimation error |e| is used to end the iterative learning process, i.e. when |e| ≤ ε_0, where ε_0 is a preset positive number, the iterative learning of the current sampling period is exited;
(2): the forward nonlinear ILESO recursion (reproduced only as a formula image in the original text);

in formula (2), N_W is the length of the data window; n is the order of the controlled system; h is the sampling period; sign() is the sign function; β_1, β_2, …, β_n and β_{n+1} are positive constants that stabilize the ILESO, tuned by the bandwidth method; the iterative learning filter coefficient, feedforward coefficient and feedback coefficient (written here as q, γ and ρ; their admissible ranges are likewise given only as images) shape the learning update; formula (2) also gives the definition of the fal() function, which in the standard ADRC form is

fal(e, α, δ) = e / δ^(1-α) for |e| ≤ δ;  fal(e, α, δ) = |e|^α · sign(e) for |e| > δ;

where e is the estimation error, δ is the demarcation point between the linear and nonlinear segments of the fal() function, and α is the exponent in the fal() function, representing the degree of nonlinearity: when α = 1 the fal() function is a linear function, and the farther α is from 1, the greater the degree of nonlinearity;
when α = 1, the linear ILESO algorithm is obtained:

(3): the forward linear ILESO recursion (reproduced only as a formula image in the original text);
the ILESO also performs a reverse iterative learning mechanism; the reverse nonlinear ILESO algorithm is given by formula (4) and the reverse linear ILESO algorithm by formula (5):

(4): the reverse nonlinear ILESO recursion (reproduced only as a formula image in the original text);

the reverse linear ILESO algorithm is the linear form of the reverse nonlinear ILESO algorithm at α = 1:

(5): the reverse linear ILESO recursion (reproduced only as a formula image in the original text);
the forward and reverse ILESO algorithms are performed alternately: a forward pass is executed, then a reverse pass, iterating in alternation; the ILESO output obtained at the current sampling instant is taken as the estimate x̂ of the controlled system state and the estimate f̂ of its expanded state, as in formula (6):

x̂_i(k) = z^j_{i,N_W} (i = 1, …, n),  f̂(k) = z^j_{n+1,N_W}   (6).
2. the iterative learning active disturbance rejection control method according to claim 1, wherein the state error feedback SEF performs the steps of:
when the linear state error feedback device SEF is used, the control amount u(k) of the ADRC is calculated as formula (7), which in the standard linear SEF form reads

u(k) = [ k_1·(v_1(k) - x̂_1(k)) + … + k_n·(v_n(k) - x̂_n(k)) - f̂(k) ] / b̂_0   (7);

where k_1, k_2, …, k_n are positive constants that stabilize the closed-loop system, tuned by the bandwidth method; k is the sampling count; x̂_i(k) are the estimates of the controlled system states; f̂(k) is the estimate of the expanded state; b̂_0 is the estimate of the control gain; and v_i(k), i = 1, …, n, are the derivatives of the input r calculated by the tracking differentiator TD.
3. The iterative learning active disturbance rejection control method of claim 1, comprising the steps of:
S200, initializing the working parameters of the iterative learning active disturbance rejection control;
S201, obtaining each derivative v_i of the input r through the tracking differentiator TD, i = 1, …, n;
S202, inputting the sampled data pair (u(k), y(k));
S203, forming the real-time moving data window RT-MDW from the sampled data pair input in step S202 and the historical data, and dynamically adjusting the maximum number of iterative learning passes of the iterative learning ILESO algorithm;
S204, performing forward ILESO algorithm learning once and adding one to the iterative learning count j;
S205, performing error judgment of the iterative learning effect; if the preset error requirement is met, ending the iterative learning of this sampling period and entering step S206; if the preset error requirement is not met, entering step S208;
S206, outputting the final estimated state x̂(k) and expanded state f̂(k) of the ILESO algorithm at this sampling instant;
S207, after calculating the control output according to the state error feedback device SEF of the ADRC, entering the next sampling control and returning to step S201;
S208, judging whether the iterative learning count j is greater than the set maximum number of iterative learning passes N_C; if j is greater than N_C, entering step S206; if j is not greater than N_C, entering S209;
S209, performing reverse ILESO algorithm learning once, adding one to the iterative learning count j, and entering S210;
S210, performing forward ILESO algorithm learning once, adding one to the iterative learning count j, and re-entering S205 to complete the error judgment of the iterative learning effect again.
4. The iterative learning active disturbance rejection control method according to claim 3, wherein in step S200 the operating parameters include the parameters of the tracking differentiator TD, the extended state observer ESO and the state error feedback device SEF, the length N_W of the real-time moving data window, and the maximum number of iterative learning passes N_C.
5. The iterative learning active disturbance rejection control method according to claim 3, wherein the step S203 comprises the steps of:
the sampled data pair (u(k), y(k)) input in step S202 and the historical data (u_l, y_l), l = 0, …, N_W, constitute the real-time moving data window RT-MDW; when there are not enough data, the missing (u_l, y_l) are replaced by 0; the window index l in the interval [0, N_W] corresponds to measurement data sampled at the instants [k-N_W, k]; as time advances, the sampling count k increases, new measurement data pairs (u(k), y(k)) keep entering, and the data from N_W sampling periods earlier are withdrawn, forming a data window that moves in real time; in addition, the real-time moving data window RT-MDW stores the states x̂ and expanded state f̂ of the iterative learning process;

N_C is adjusted dynamically: as the sampling count k increases, N_C is increased, indirectly realizing variable-gain control of the ILESO, as in formula (8):

N_C = min( INT(·), N_CM )   (8) (the argument of INT is reproduced only as a formula image in the original text);

where k is the sampling count, N_CM is the maximum number of iterative learning passes allowed by the time between two samplings, and T_C denotes a process time constant, taken as the number of sampling periods corresponding to the inertia time of a first-order equivalent model of the system; INT is the rounding function and min is the minimum function; the length N_W of the real-time moving data window RT-MDW is adjusted according to T_C.
6. The iterative learning active disturbance rejection control method according to claim 3, wherein the step S205 comprises the steps of:
performing error judgment of the iterative learning effect: if |e| ≤ ε_0 is satisfied, ending the iterative learning of this sampling period and entering step S206; if the absolute value of the estimation error does not satisfy |e| ≤ ε_0, entering step S208; where |e| is the absolute value of the estimation error and ε_0 is a preset positive number.
7. The iterative learning active disturbance rejection control method according to claim 3, wherein the step S206 comprises the steps of:
outputting the final estimated state x̂(k) and expanded state f̂(k) of the ILESO algorithm at this sampling instant, wherein:

x̂_i(k) = z^j_{i,N_W} (i = 1, …, n),  f̂(k) = z^j_{n+1,N_W}   (9);

in formula (9), k is the sampling count, x̂ is the ILESO state estimate and f̂ is the ILESO expanded-state estimate.
8. A non-transitory computer readable medium storing instructions which, when executed by a processor, perform the steps of the iterative learning active disturbance rejection control method according to any one of claims 1 to 7.
9. A computing device comprising a processor and a memory for storing a program executable by the processor, wherein the processor implements the iterative learning active disturbance rejection control method of any one of claims 1-7 when executing the program stored in the memory.
CN202311300391.4A 2023-10-10 2023-10-10 Iterative learning active disturbance rejection control method Active CN117031967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311300391.4A CN117031967B (en) 2023-10-10 2023-10-10 Iterative learning active disturbance rejection control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311300391.4A CN117031967B (en) 2023-10-10 2023-10-10 Iterative learning active disturbance rejection control method

Publications (2)

Publication Number Publication Date
CN117031967A (en) 2023-11-10
CN117031967B (en) 2024-01-23

Family

ID=88632293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311300391.4A Active CN117031967B (en) 2023-10-10 2023-10-10 Iterative learning active disturbance rejection control method

Country Status (1)

Country Link
CN (1) CN117031967B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106208824A (en) * 2016-07-22 2016-12-07 浙江工业大学 A kind of multi-motor synchronous control method based on active disturbance rejection iterative learning
CN107991867A (en) * 2017-11-28 2018-05-04 浙江工业大学 A kind of iterative learning profile errors control method of the networking multi-shaft motion control system based on automatic disturbance rejection controller
CN109143863A (en) * 2018-09-13 2019-01-04 武汉科技大学 The quick self study of nonlinear system improves ADRC control method
CN111897324A (en) * 2020-06-24 2020-11-06 安徽工程大学 Unmanned ship course control system based on FA-LADRC
CN113241973A (en) * 2021-06-17 2021-08-10 吉林大学 Trajectory tracking control method for linear motor by iterative learning control of S-shaped filter
CN116443100A (en) * 2023-05-16 2023-07-18 中国第一汽车股份有限公司 Angle control method, device, equipment and medium based on linear active disturbance rejection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985740B (en) * 2021-12-30 2022-05-06 中国科学院空天信息创新研究院 Stability control method and device based on particle active disturbance rejection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106208824A (en) * 2016-07-22 2016-12-07 浙江工业大学 A kind of multi-motor synchronous control method based on active disturbance rejection iterative learning
CN107991867A (en) * 2017-11-28 2018-05-04 浙江工业大学 A kind of iterative learning profile errors control method of the networking multi-shaft motion control system based on automatic disturbance rejection controller
CN109143863A (en) * 2018-09-13 2019-01-04 武汉科技大学 The quick self study of nonlinear system improves ADRC control method
CN111897324A (en) * 2020-06-24 2020-11-06 安徽工程大学 Unmanned ship course control system based on FA-LADRC
CN113241973A (en) * 2021-06-17 2021-08-10 吉林大学 Trajectory tracking control method for linear motor by iterative learning control of S-shaped filter
CN116443100A (en) * 2023-05-16 2023-07-18 中国第一汽车股份有限公司 Angle control method, device, equipment and medium based on linear active disturbance rejection

Also Published As

Publication number Publication date
CN117031967A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
Chen et al. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion
Ding et al. Adaptive digital control of Hammerstein nonlinear systems with limited output sampling
CN110908351B (en) Support vector machine-fused SCR denitration system disturbance suppression prediction control method
CN110320795B (en) Method for realizing any linear controller by adopting active disturbance rejection control structure
CN114942659B (en) Kiln temperature control method, system, device and storage medium
CN109298636A (en) A kind of improved integral sliding mode control method
Kofman Quantized-state control: a method for discrete event control of continuous systems
CN117031967B (en) Iterative learning active disturbance rejection control method
Mi et al. Event-triggered MPC design for distributed systems with network communications
Baumann et al. Event-triggered pulse control with model learning (if necessary)
CN108594643B (en) Performance-guaranteed control method for all-state limited strict feedback system
CN111413938B (en) SCR denitration system disturbance inhibition prediction control method based on converted ammonia injection amount
Li et al. Online sparse identification for regression models
Wang et al. An adaptive outlier-robust Kalman filter based on sliding window and Pearson type VII distribution modeling
CN114598611B (en) Input design method and system for event-driven identification of binary-valued FIR (finite Impulse response) system
CN110366232B (en) Sensor transmission energy control method for remote state estimation
CN112636719A (en) ILC system input signal filtering method under data loss and channel noise interference
Zeng et al. Nonlinear sampled-data systems with a generalized hold polynomial-function for fast sampling rates
Hu et al. Simulation and analysis of LQG controller in antenna control system
Chen et al. Insufficient initial condition of fractional order derivative definitions
CN115795283B (en) Differential signal extraction method based on iterative learning tracking differentiator
CN103064286A (en) Control method of industrial process and equipment
Guo et al. Optimal preiod input design in fir system identification with binary-valued observations and event-triggered communication
Yin Deep Learning for Partial Differential Equations (PDEs)
Yue et al. A Novel State Estimator Based on Algebraic Parametric Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant