JPH05165504A - Learning control system by incremental value operation - Google Patents

Learning control system by incremental value operation

Info

Publication number
JPH05165504A
JPH05165504A · JP35478991A
Authority
JP
Japan
Prior art keywords
value
incremental value
command
deviation
time
Prior art date
Legal status
Granted
Application number
JP35478991A
Other languages
Japanese (ja)
Other versions
JP3152251B2 (en)
Inventor
Yuji Nakamura
裕司 中村
Kazuhiro Tsuruta
和寛 鶴田
Current Assignee
Yaskawa Electric Corp
Original Assignee
Yaskawa Electric Corp
Priority date
Filing date
Publication date
Application filed by Yaskawa Electric Corp filed Critical Yaskawa Electric Corp
Priority to JP35478991A priority Critical patent/JP3152251B2/en
Publication of JPH05165504A publication Critical patent/JPH05165504A/en
Application granted granted Critical
Publication of JP3152251B2 publication Critical patent/JP3152251B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Abstract

PURPOSE: To reduce memory requirements and shorten computation time by inputting the increment of the target command and the increment of the controlled object's output, and outputting a correction command (or its increment) to the controlled system. CONSTITUTION: At time i of the k-th trial, the increment Δe_k(i) of the tracking deviation and its integrated value e_k(i) are obtained from the difference between the increment Δr(i) of the target command and the increment Δy_k(i) of the controlled object's output. The deviation increments up to M steps in the future, {Δe_k(i+1), ..., Δe_k(i+M)}, are then predicted from Δe_k(i), the deviation increments of the previous trial, the past and present correction-amount increments, and information on the dynamic characteristics of the controlled object. Next, the present correction-amount increment Δσ_k(i) is determined so as to minimize the evaluation function shown in expression I, and the correction-command increment is obtained as Δu_k(i) = Δu_{k-1}(i) + Δσ_k(i). In this way the required memory is reduced and the computation time is shortened.

Description

【発明の詳細な説明】Detailed Description of the Invention

【0001】[0001]

【産業上の利用分野】 Field of the Invention: The present invention relates to a control system for machine tools, robots, and other equipment that performs repetitive operations.

【0002】[0002]

【従来の技術】 Description of the Related Art: As a method for designing a learning control system for a repetitive target value, there is the method the present applicant proposed in Japanese Patent Laid-Open No. 1-237701. In the first invention of that publication, the operation for the same target value is repeated; future deviations are predicted from the deviations of the current and previous trials, the increments of the past and present correction amounts, and the step response of the controlled object; the weighted sum of squares of the predicted deviations is taken as the evaluation function; and the control input is corrected so that this evaluation function is minimized. The output therefore eventually matches the target value, and highly accurate tracking is achieved. As a way of reducing the number of step-response sampling points required by this design method, the applicant proposed in Japanese Patent Application No. 3-173091 a method that samples only the first few points of the step response and thereafter approximates its differences as decaying at a constant ratio. Furthermore, in Japanese Patent Application No. 2-196940 the applicant proposed, for a motor position control system, inserting a learning controller inside the position loop: its input is the position tracking deviation (or a constant multiple thereof); future deviations are predicted; the correction command u(i) is corrected so that the weighted sum of squares of the predicted deviations is minimized; and the result is input to the speed controller as a corrected speed command.

【0003】[0003]

【発明が解決しようとする課題】 Problems to Be Solved by the Invention: However, because these methods use the values of the target command, the controlled object's output, the deviation, and the correction command as they are, the bit length of the data becomes long, causing problems such as increased memory usage and longer computation time. The object of the present invention is therefore to reduce the required memory and to shorten the computation time.
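The bit-length argument above can be made concrete with a small illustration (not from the patent itself): a slowly varying absolute trajectory needs wide words, while its per-sample increments fit in far fewer bits. The trajectory and word-size helper below are hypothetical examples chosen only to show the effect.

```python
def bits_needed(values):
    """Bits required to store the largest magnitude in two's complement."""
    return max(abs(int(v)) for v in values).bit_length() + 1

# A ramp command sampled over one trial: absolute values grow large,
# while the increments Δr(i) stay tiny.
r = [10 * i for i in range(1000)]                  # absolute command values
dr = [r[i] - r[i - 1] for i in range(1, len(r))]   # increments Δr(i)

print(bits_needed(r), bits_needed(dr))  # prints: 15 5
```

Storing increments instead of absolute values thus shrinks every stored sample, which is exactly the memory and arithmetic saving the invention targets.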

【0004】[0004]

【課題を解決するための手段】 Means for Solving the Problems: To solve the above problems, the present invention provides a control system that inputs the increment of the target command and the increment of the controlled object's output, and outputs a correction command (or its increment) to the controlled object, so that the output follows a target command that repeats the same pattern. At time i of the k-th trial, the increment Δe_k(i) of the tracking deviation and its integrated value e_k(i) are obtained from the difference between the target-command increment Δr(i) and the output increment Δy_k(i). The deviation increments up to M steps in the future, {Δe_k(i+1), Δe_k(i+2), ..., Δe_k(i+M)}, are then predicted from Δe_k(i), the deviation increments of the previous trial, the correction-amount increments of past and present times, and information on the dynamic characteristics of the controlled object; and the evaluation function

【0005】[0005]

【数2】 [Equation 2]

【0006】 is minimized by determining the present correction-amount increment Δσ_k(i), and the correction-command increment is obtained as Δu_k(i) = Δu_{k-1}(i) + Δσ_k(i). This is the characterizing feature of the invention.
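The evaluation function (Equation 2) appears on the published page only as an image. A plausible form, inferred from the weighted-sum-of-squares criterion of the prior-art method this invention extends and from the constants q_m and B named later in the text, is shown below. This is an assumption for the reader's orientation, not the patent's own equation:

```latex
J = \sum_{m=1}^{M} q_m \, e_k(i+m)^2 \;+\; B \, \Delta\sigma_k(i)^2,
\qquad
e_k(i+m) = e_k(i) + \sum_{j=1}^{m} \Delta e_k(i+j)
```

Here the predicted future deviations e_k(i+m) are reconstructed from the current integrated deviation e_k(i) and the predicted deviation increments, so the criterion can be evaluated entirely from incremental quantities.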

【0007】[0007]

【作用】 Operation: By the above means, a learning control system is constructed that operates with less memory and a shorter computation time.
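The per-sample bookkeeping described in the means section can be sketched in code. This is an illustrative sketch only: `solve_dsigma` is a hypothetical stand-in for the minimisation of the evaluation function (equation (1)), which the published page shows only as an image.

```python
def learning_step(dr_i, dy_i, e_prev, du_prev_trial_i, solve_dsigma):
    """One sample of the incremental learning-control update.

    dr_i            : increment of the target command, Δr(i)
    dy_i            : increment of the plant output, Δy_k(i)
    e_prev          : integrated deviation at the previous sample, e_k(i-1)
    du_prev_trial_i : correction-command increment at time i of the
                      previous trial, Δu_{k-1}(i)
    solve_dsigma    : callable returning Δσ_k(i); stands in for the
                      minimisation of the evaluation function (eq. (1))
    """
    de_i = dr_i - dy_i                 # deviation increment Δe_k(i)
    e_i = e_prev + de_i                # integrated deviation e_k(i)
    dsigma_i = solve_dsigma(de_i, e_i)
    du_i = du_prev_trial_i + dsigma_i  # Δu_k(i) = Δu_{k-1}(i) + Δσ_k(i)
    return de_i, e_i, du_i

# With a zero correction the command increment simply repeats the previous trial:
de, e, du = learning_step(1.0, 0.25, 0.0, 0.5, lambda de_i, e_i: 0.0)
print(de, e, du)  # prints: 0.75 0.75 0.5
```

Note that only increments and one running integral are stored per sample, which is where the memory saving comes from.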

【0008】[0008]

【実施例】 Embodiment: The present invention can be applied to any of the methods described in the related-art section, or to other methods. Here a specific embodiment, applied to the first invention of Japanese Patent Laid-Open No. 1-237701, is described with reference to FIG. 1. In the figure, 1 is a command generator, which generates the increment Δr(i) of the target command value r(i) at the current time i. 2 is a subtracter, which outputs the deviation increment Δe_k(i) between the target-command increment Δr(i) and the output increment Δy_k(i). 3 is a memory storing the constants Q, A_1, A_2, ..., A_M, B, g_1, g_2, ..., g_N. 4 is a memory storing the deviation increments from time i of the previous trial up to the current time i. 5 is a memory storing the correction-amount increments from N samples in the past up to the current time i. 6 is a memory storing the correction-command increments from time i of the previous trial up to the current time i. 7 is an integrator that obtains the current deviation e_k(i). 8 is an arithmetic unit which, by the calculation

【0009】[0009]

【数3】 [Equation 3]

【0010】 obtains the correction-amount increment Δσ_k(i) at time i. 9 is an adder that adds the current correction-amount increment Δσ_k(i) to the correction-command increment Δu_{k-1}(i) of time i at the previous trial, and outputs the current correction-command increment Δu_k(i). 11 is a motor and its position control system; it receives the correction-command increment Δu_k(i) as the increment of the target position command and outputs the actual motor position y_k(i). 10 is a differentiator that obtains the increment Δy_k(i) of the motor position. Equation (1) is now derived. At time i of the k-th trial, the deviation e_k(i+m), m steps in the future, is predicted by the following equation.

【0011】[0011]

【数4】 [Equation 4]

【0012】 Here h_j (j = 1, 2, ..., N) is the difference h_j = H_j − H_{j−1} of the sampled values H_j of the output response of the motor position control system to a unit step applied as the target position command, and N is chosen large enough that the response settles sufficiently, i.e. h_n = 0 for n > N. From equation (2), the deviation increment Δe_k(i+m), m steps in the future, is given by

【0013】[0013]

【数5】 [Equation 5]

【0014】 and is predicted accordingly. Here it is assumed that the future correction-amount increments satisfy Δσ_k(i+1) = Δσ_k(i+2) = ... = 0. Now consider the evaluation function

【0015】[0015]

【数6】 [Equation 6]

【0016】 The present correction-amount increment Δσ_k(i) is determined so that this evaluation function is minimized. From ∂J/∂Δσ_k(i) = 0, the Δσ_k(i) that minimizes the evaluation function (4) is given by equation (1) above, where the constants q_m, Q, A_m, B, and g_n are given by the following equations.

【0017】[0017]

【数7】 [Equation 7]

【0018】 When the controlled object takes the command itself rather than the command increment as its input, the integral u_k(i) of the adder-9 output Δu_k(i) is computed and supplied to the controlled object. Although the invention has been described here as applied to the first invention of Japanese Patent Laid-Open No. 1-237701, it can also be applied to the other methods described in the related-art section, or to other methods, by predicting the future deviation increments from the deviation increments and the correction-amount increments as in equation (3), and determining the present correction-amount increment so as to minimize the evaluation function (4).
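Two auxiliary computations mentioned in this section can be sketched directly: extracting the step-response differences h_j from a measured unit-step response H (paragraph [0012]), and integrating the correction-command increments when the plant expects an absolute command (paragraph [0018]). The settling threshold `tol` and the example response values are assumptions for illustration, not values from the patent.

```python
def step_response_diffs(H, tol=1e-9):
    """h_j = H_j - H_{j-1}; truncate at N where the response has settled,
    so that h_n = 0 for n > N (all trailing differences below tol)."""
    h = [H[0]] + [H[j] - H[j - 1] for j in range(1, len(H))]
    N = len(h)
    while N > 1 and abs(h[N - 1]) <= tol:
        N -= 1
    return h[:N]

def integrate_increments(du, u0=0.0):
    """Recover the absolute correction command u_k(i) from its increments
    Δu_k(i), for plants that take a command rather than an increment."""
    u, out = u0, []
    for d in du:
        u += d
        out.append(u)
    return out

H = [0.5, 0.8, 0.95, 1.0, 1.0, 1.0]  # a settled unit-step response (assumed)
print([round(x, 2) for x in step_response_diffs(H)])  # prints: [0.5, 0.3, 0.15, 0.05]
print(integrate_increments([1.0, 0.5, -0.25]))        # prints: [1.0, 1.5, 1.25]
```

Truncating the h_j sequence at settling keeps the prediction memory proportional to N rather than to the trial length.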

【0019】[0019]

【発明の効果】 Effects of the Invention: As described above, the present invention realizes a learning control system that operates with less memory and a shorter computation time.

【図面の簡単な説明】[Brief description of drawings]

【図1】 FIG. 1 is a diagram showing a specific embodiment of the present invention.

【符号の説明】[Explanation of symbols]

1 Command generator
2 Subtracter
3 Memory storing the constants Q, q_1, q_2, ..., q_M, g_1, g_2, ..., g_N
4 Memory storing the deviation increments
5 Memory storing the correction-amount increments
6 Memory storing the correction-command increments
7 Integrator
8 Arithmetic unit
9 Adder
10 Differentiator
11 Motor and its position control system

Claims (2)

【特許請求の範囲】 [Claims]

【請求項1】 [Claim 1] In a control system that inputs the increment of the target command and the increment of the controlled object's output, and outputs the increment of a correction command to the controlled object, so that the output of the controlled object follows a target command that repeats the same pattern: at time i of the k-th trial, the increment Δe_k(i) of the tracking deviation and its integrated value e_k(i) are obtained from the difference between the target-command increment Δr(i) and the output increment Δy_k(i); the deviation increments up to M steps in the future, {Δe_k(i+1), Δe_k(i+2), ..., Δe_k(i+M)}, are predicted from Δe_k(i), the deviation increments of the previous trial, the correction-amount increments of past and present times, and information on the dynamic characteristics of the controlled object; the present correction-amount increment Δσ_k(i) is determined so that the evaluation function 【数1】 [Equation 1] is minimized; and the correction-command increment Δu_k(i) is obtained by Δu_k(i) = Δu_{k-1}(i) + Δσ_k(i) and output to the controlled object. A learning control system characterized by the above.
【請求項2】 [Claim 2] The learning control system according to claim 1, wherein, in a control system that inputs the increment of the target command and the increment of the controlled object's output and outputs the correction command itself to the controlled object, the correction command u_k(i) is obtained by integrating the correction-command increment Δu_k(i) obtained in claim 1 and is output to the controlled object.
JP35478991A 1991-12-18 1991-12-18 Learning control method by increment value calculation Expired - Fee Related JP3152251B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP35478991A JP3152251B2 (en) 1991-12-18 1991-12-18 Learning control method by increment value calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP35478991A JP3152251B2 (en) 1991-12-18 1991-12-18 Learning control method by increment value calculation

Publications (2)

Publication Number Publication Date
JPH05165504A (en) 1993-07-02
JP3152251B2 JP3152251B2 (en) 2001-04-03

Family

ID=18439918

Family Applications (1)

Application Number Title Priority Date Filing Date
JP35478991A Expired - Fee Related JP3152251B2 (en) 1991-12-18 1991-12-18 Learning control method by increment value calculation

Country Status (1)

Country Link
JP (1) JP3152251B2 (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995002854A1 (en) * 1993-07-14 1995-01-26 Kabushiki Kaisha Yaskawa Denki Prediction controller
US5726878A (en) * 1993-07-14 1998-03-10 Kabushiki Kaisha Yaskawa Denki Prediction controller

Also Published As

Publication number Publication date
JP3152251B2 (en) 2001-04-03

Similar Documents

Publication Publication Date Title
Calafiore et al. Robot dynamic calibration: Optimal excitation trajectories and experimental parameter estimation
JP3516232B2 (en) Method and apparatus for implementing feedback control that optimally and automatically rejects disturbances
JP4697139B2 (en) Servo control device
KR970003823B1 (en) Control system that best follows periodical setpoint value
JPH10133703A (en) Adaptive robust controller
Izadbakhsh et al. Robust adaptive control of robot manipulators using Bernstein polynomials as universal approximator
US6825631B1 (en) Prediction controlling device
Ciliz Adaptive control of robot manipulators with neural network based compensation of frictional uncertainties
EP0709754B1 (en) Prediction controller
KR100267362B1 (en) Preview control apparatus
JPH05165504A (en) Learning control system by incremental value operation
JP3109605B2 (en) Learning control method
JP3152250B2 (en) Preview control method by incremental value
JP3196907B2 (en) Learning controller for systems with dead time for output detection
JP2921056B2 (en) Learning control device by correcting speed command
JP3039573B2 (en) Learning control method
JPH04369002A (en) Predictive learning control system based upon approximate step response
JP3039814B2 (en) Learning control method
JPH0830979B2 (en) Control method that optimally follows the periodic target value
JP3036654B2 (en) Learning control method
JP3256950B2 (en) Optimal preview learning control device
JPH06314106A (en) Learning controller
JP3191836B2 (en) Learning control device
JPH05119828A (en) Foreknowledge control system using step response
JPH0527829A (en) Prescience control system

Legal Events

Date Code Title Description
FPAY Renewal fee payment (prs date is renewal date of database)

Year of fee payment: 8

Free format text: PAYMENT UNTIL: 20090126

FPAY Renewal fee payment (prs date is renewal date of database)

Year of fee payment: 9

Free format text: PAYMENT UNTIL: 20100126

LAPS Cancellation because of no payment of annual fees