JPH09196622A - Method for detecting position - Google Patents

Method for detecting position

Info

Publication number
JPH09196622A
Authority
JP
Japan
Prior art keywords
angle
points
robot
positions
measurement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP8029899A
Other languages
Japanese (ja)
Other versions
JP3562096B2 (en)
Inventor
Shigeo Okamizu
茂生 岡水
Yoshiaki Kimura
美昭 木村
Toshiharu Sakamoto
俊治 坂本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazda Motor Corp
Original Assignee
Mazda Motor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazda Motor Corp filed Critical Mazda Motor Corp
Priority to JP02989996A priority Critical patent/JP3562096B2/en
Publication of JPH09196622A publication Critical patent/JPH09196622A/en
Application granted granted Critical
Publication of JP3562096B2 publication Critical patent/JP3562096B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To markedly improve correction accuracy, independent of robot control accuracy and resolution accuracy, by a method in which the distance between a visual sensing means and an object is changed and the positions and angle of two points are measured three times, thereby forming a correction method for the position and angle. SOLUTION: In a method for detecting the position of an object by image processing, the positions of two points on an object 1, and the angle relative to a reference line of the line passing through the two points, are obtained by image recognition using a visual sensing means 2. The visual sensing means 2 is then moved so that the obtained positions and angle coincide with reference positions and a reference angle. At this moved position, the positions and angle of the two points are detected again. Next, the distance β between the visual sensing means 2 and the object 1 is changed, and the positions and angle of the two points are detected once more, whereby the positions and angle are corrected.

Description

【Detailed Description of the Invention】

【0001】[0001]

【発明の属する技術分野】この発明は、例えば車両用ド
アやその他の部品をロボットを用いて組付けもしくは取
外しするような場合に用いられる画像処理による物体の
位置検出方法に関する。
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method for detecting the position of an object by image processing, which is used when, for example, a vehicle door or other parts are assembled or removed by using a robot.

【0002】[0002]

【Description of the Related Art】Conventionally, one method of detecting the position of an object to be measured by image processing (a position detection method) is described in JP-A-2-243914. As shown in FIG. 12, a master table 92 fixedly carries an object to be measured 91; a CCD camera 93, serving as visual means, is fixed in position relative to the master table 92; and an image processing device 94 processes the data captured by the CCD camera 93. Master position data of a measurement point 96 relative to a reference point 95 on the master table 92 is stored in memory within the image processing device 94. With this data stored, the object 91 is set at a predetermined position on the master table 92, and the image processing device 94 calculates the positional relationship between the measurement point 96 of the object 91, as imaged by the CCD camera 93, and the reference point 95 of the master table 92.

[0003] At this time, the respective reference points of the master position data (X0, Y0) of the measurement point 96 and of the actually measured point (X1, Y1) are brought into coincidence in the arithmetic unit of the image processing device 94, and the errors ΔX = X1 − X0 and ΔY = Y1 − Y0 between the master reference position and the measured position are computed. The position coordinates of the measurement point are thereby calculated, and the position is detected and corrected; this is the so-called coordinate position detection method.

[0004] In this conventional method, however, the correction is executed from a single image capture, and no angle information relating to the CCD camera 93 is read. The error is consequently as large as about ±2.5 mm, so the method cannot be used for position detection requiring high-accuracy correction of, for example, about ±0.5 mm.

[0005] In general, the closer the CCD camera used as the image processing and visual means is brought to the measured object, the better the resolution accuracy becomes. When the robot is driven in response to a position correction signal from the CCD camera, however, an angle component must be considered in addition to the X-axis and Y-axis components; gravity acts on the robot while it is driven; the robot has manufacturing errors; and the computing capacity on the robot side is limited. A motion deviation therefore occurs, sufficiently high accuracy cannot be secured, and the robot moves to a position and angle different from those commanded.

【0006】[0006]

【Problems to be Solved by the Invention】The invention according to claim 1 obtains the positions of two points on an object and the angle, relative to a reference line, of the line connecting those two points by image recognition with visual means (primary measurement); then moves the visual means so that the obtained position and angle coincide with a reference position and a reference angle, and re-detects the positions and angle of the two points at the moved position (secondary measurement); and then changes the distance between the visual means and the object and detects the positions and angle of the two points once more (tertiary measurement), thereby forming a method of correcting the position and angle. Its object is to provide a position detection method in which these three detections, primary through tertiary, greatly improve correction accuracy without being affected by robot control accuracy or resolution accuracy.

[0007] The invention according to claim 2 has, in addition to the object of claim 1, the object of providing a position detection method in which the change in the distance between the visual means and the object is set as an approach of the visual means toward the object, thereby improving the resolution (resolution accuracy) of the visual means.

[0008] The invention according to claim 3 has, in addition to the object of claim 2, the object of providing a position detection method in which the visual means is mounted on a robot, so that the control system and position detection function the robot inherently possesses can be used effectively to drive the visual means.

[0009] The invention according to claim 4 has, in addition to the object of claim 3, the object of providing a position detection method in which the correction of position and angle corrects the control error of the robot system, so that by correcting this control error the robot arm can be controlled to an appropriate position and angle with high accuracy.

【0010】[0010]

【Means for Solving the Problems】The invention according to claim 1 is a method for detecting the position of an object by image processing, in which the positions of two points on the object and the angle, relative to a reference line, of the line connecting the two points are obtained by image recognition with visual means (primary measurement); the visual means is then moved so that the obtained position and angle coincide with a reference position and a reference angle, and the positions and angle of the two points are re-detected at the moved position (secondary measurement); the distance between the visual means and the object is then changed and the positions and angle of the two points are detected once more (tertiary measurement), whereby the position and angle are corrected. A procedural sketch of this sequence is given below.

[0011] The invention according to claim 2 is the position detection method of claim 1 in which the change in the distance between the visual means and the object brings the visual means closer to the object.

[0012] The invention according to claim 3 is the position detection method of claim 2 in which the visual means is mounted on a robot.

[0013] The invention according to claim 4 is the position detection method of claim 3 in which the correction of the position and angle corrects a control error of the robot system.

【0014】[0014]

【Operation and Effects of the Invention】According to the invention of claim 1, as shown in the claim-correspondence diagrams of FIGS. 1 to 7, in this method of detecting the position of an object 1 by image processing, the positions of two points a and b on the object 1 (see the points on the image shown in FIG. 2; the two points are predetermined by the robot's teaching points) and the angle θ of the line c connecting them relative to a reference line d (specifically, the Y-axis reference line of the visual means) are obtained by image recognition with visual means 2 (primary measurement). The visual means 2 is then moved, as shown by the phantom line α in FIG. 1, so that the obtained position a = (x, y) and angle θ become the reference position (0, 0) and the reference angle of 0 degrees, and at this moved position (see the phantom line α in FIG. 1) the positions and angle of two points e and f (see the points on the images shown in FIGS. 3 and 5) are re-detected (secondary measurement). A sketch of how θ can be computed from the two image points follows this paragraph.
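The patent leaves the image-side computation implicit. As a minimal sketch, assuming image coordinates for the two taught points and a signed angle measured from the Y-axis reference line d, the primary-measurement quantities could be computed as follows (the function name and angle convention are illustrative assumptions):

```python
import math

def primary_measurement(ax, ay, bx, by):
    """Given image coordinates of the two taught points a and b, return the
    position of a and the angle theta (degrees) of the line c = a-b measured
    from the Y-axis reference line d. Sign convention is illustrative."""
    theta = math.degrees(math.atan2(bx - ax, by - ay))  # 0 when a-b is parallel to the Y axis
    return (ax, ay), theta

# If a sits at the image origin and b lies straight up the Y axis, the
# measurement already matches the reference (0, 0) and 0 degrees:
pos, theta = primary_measurement(0.0, 0.0, 0.0, 50.0)
print(pos, theta)  # (0.0, 0.0) 0.0
```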

[0015] At this time, if the primary measurement was correct, e = (0, 0) and θ2 = 0 as shown in FIG. 3; if the primary measurement was not correct, e = (x2, y2) and θ = θ2 (where θ2 ≠ 0) as shown in FIGS. 5 and 7. Next, the distance between the visual means 2 and the object 1 is changed, either toward the object as shown by the phantom line β in FIG. 1 or away from it (not shown), and the positions and angle of two points g and h are detected again (tertiary measurement).

[0016] In this tertiary measurement, if the earlier primary measurement was correct, g = (0, 0) and θ3 = 0 as shown in FIG. 4; if it was not correct, g = (x3, y3) and θ = θ3 (where θ3 ≠ 0) as shown in FIGS. 6 and 7. The position and angle are then corrected on the basis of the secondary- and tertiary-measurement position and angle data (x2, y2), θ2, (x3, y3) and θ3.

[0017] In this way, correction based on the detection data from the three detections in total, primary, secondary and tertiary (correction of the position on the X axis, the position on the Y axis, the axis deviation and the angular deviation), is executed, so correction accuracy can be greatly improved without being affected by robot control accuracy or resolution accuracy.

[0018] According to the invention of claim 2, in addition to the effect of claim 1, the change in the distance between the visual means and the object brings the visual means closer to the object, so the resolution (resolution accuracy) of the visual means is improved.

[0019] According to the invention of claim 3, in addition to the effect of claim 2, the visual means is mounted on the robot, so the control system and position detection function the robot inherently possesses can be used effectively to drive the visual means.

[0020] According to the invention of claim 4, in addition to the effect of claim 3, the correction of position and angle corrects the control error of the robot system, so by correcting this control error the robot arm can be controlled to an appropriate position and angle with high accuracy.

【0021】[0021]

【Embodiment】An embodiment of the present invention is described in detail below with reference to the drawings. FIG. 8 shows a position detection apparatus used in the position detection method of the present invention. The position detection apparatus 11 has a six-axis robot 13 mounted on a robot base 12, a robot hand 16 attached to the tip of the robot arm 14 of the robot 13 via a fastening portion 15, and a plurality of nut runners 17 provided on the robot hand 16. A CCD camera 2 serving as the visual means is attached via a bracket (not shown) to the portion where the nut runners 17 are arranged, and the apparatus is configured so that the image recognition of the CCD camera 2 obtains the positions of two points a and b on the object 1 (the object to be measured; see FIG. 2) and the angle θ (see FIG. 2) of the line c connecting the two points relative to the reference line d.

[0022] A position detection method using the position detection apparatus 11 configured in this way is described in detail below. The description that follows shows how the correction value is calculated after the primary, secondary and tertiary measurements described above (each measurement is identical to what was described in the section on the operation and effects of the invention), recast as a calculation performed on the robot side.

[0023] In FIG. 9, p1 denotes the command position at the primary measurement, p2 the command position at the secondary measurement, and p3 the command position (target position) at the tertiary measurement; r1, r2 and r3 denote the captured positions at the primary, secondary and tertiary measurements, respectively. The components of the command positions p1, p2 and p3 can be expressed by the following [Equation 1].

【0024】[0024]

【数1】 p1 = (x01, y01, z01, tx01, ty01, tz01)^T, p2 = (x02, y02, z02, tx02, ty02, tz02)^T, p3 = (x03, y03, z03, tx03, ty03, tz03)^T

[0025] In [Equation 1] above, x, y and z denote the X, Y and Z axes of the three-dimensional robot base coordinate system, the suffixes "01", "02" and "03" denote the command values corresponding to the command positions p1, p2 and p3, and tx, ty and tz denote the angles about the respective axes; ty in particular corresponds to the angle θ referred to in the claims. The components of the captured positions r1, r2 and r3 can be expressed by the following [Equation 2].

【0026】[0026]

【数2】 r1 = (x1, y1, z1, tx1, ty1, tz1)^T, r2 = (x2, y2, z2, tx2, ty2, tz2)^T, r3 = (x3, y3, z3, tx3, ty3, tz3)^T

[0027] In [Equation 2] above, x, y and z denote, as in [Equation 1], the X, Y and Z axes of the three-dimensional robot base coordinate system, and tx, ty and tz likewise denote the angles about the respective axes; the suffixes "1", "2" and "3" denote the actually realized values corresponding to the captured positions r1, r2 and r3, and again ty corresponds to the angle θ in the claims. The motion deviation vectors of the robot 13 with respect to the command values p1, p2 and p3 are then as shown in the following [Equation 3], restated in code just after it.

【0028】[0028]

【数3】 (motion deviation i) = 0ri − 0pi (i = 1, 2, 3)
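Since [Equation 1] through [Equation 3] appear only as equation images in the original, the following Python sketch restates them under the reconstruction above: a pose is a six-component vector (x, y, z, tx, ty, tz) in robot base coordinates, and the motion deviation at measurement i is the captured pose minus the commanded pose. All names are illustrative:

```python
import numpy as np

def pose(x, y, z, tx, ty, tz):
    """Six-component pose in robot base coordinates ([Equation 1]/[Equation 2]);
    ty plays the role of the angle theta in the claims."""
    return np.array([x, y, z, tx, ty, tz], dtype=float)

def motion_deviation(p_cmd, r_actual):
    """[Equation 3] as reconstructed above: deviation i = 0ri - 0pi."""
    return r_actual - p_cmd
```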

[0029] The final motion correction value is the inverse vector of the deviation described above, and would ideally be minus motion deviation 3. In practice, however, even when the robot 13 is operated with that corrected command value, a deviation arises again owing to factors such as gravity and manufacturing error. Therefore the arithmetic mean vector of the two obtained motion deviation vectors (the accurate motion deviation 2 and motion deviation 3), that is, the midpoint of deviations 2 and 3, is taken and used approximately as the correction value. This correction vector can be expressed by the following [Equation 4].

【0030】[0030]

【数4】 (correction vector) = (r2p2 + r3p3) / 2, where ripi denotes the vector from ri to pi

[0031] Here, expressing in components the vector r2p2, which corresponds to deviation correction 2 in FIG. 10, gives the following [Equation 5].

【0032】[0032]

【数5】 r2p2 = 0p2 − 0r2 = (x02 − x2, y02 − y2, z02 − z2, tx02 − tx2, ty02 − ty2, tz02 − tz2)^T

[0033] Similarly, expressing in components the vector r3p3, which corresponds to deviation correction 3 in FIG. 10, gives the following [Equation 6].

【0034】[0034]

【数6】 r3p3 = 0p3 − 0r3 = (x03 − x3, y03 − y3, z03 − z3, tx03 − tx3, ty03 − ty3, tz03 − tz3)^T

[0035] In [Equation 5] and [Equation 6] above, the "0" prefixed to "p" and "r" denotes zero, the origin of the robot base coordinate system. Substituting [Equation 5] and [Equation 6] into [Equation 4] above and rearranging the vector expression gives the correction vector shown in the following [Equation 7].

【0036】[0036]

【数7】 (correction vector) = ((0p2 − 0r2) + (0p3 − 0r3)) / 2

[0037] Accordingly, the final command value (dest) for the robot 13 shown in FIG. 11 is the vector 0r3 plus the correction vector above, and can therefore be expressed by the following [Equation 8].

【0038】[0038]

【数8】 0dest = 0r3 + ((0p2 − 0r2) + (0p3 − 0r3)) / 2

[0039] That is, without the correction the robot would move from the coordinate origin 0 toward r3, whereas with the correction described above it can be made to move from the coordinate origin 0 toward the target position p3, as the sketch below illustrates.
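Putting [Equation 4] through [Equation 8] together, a minimal Python sketch of the correction calculation, continuing the conventions of the earlier sketch (hypothetical names, six-component poses, made-up example numbers), might look like this:

```python
import numpy as np

def correction_vector(p2, r2, p3, r3):
    """[Equation 7]: arithmetic mean of the two deviation corrections
    r2->p2 and r3->p3, each being 0p - 0r."""
    return ((p2 - r2) + (p3 - r3)) / 2.0

def final_command(p2, r2, p3, r3):
    """[Equation 8]: dest = 0r3 + correction vector."""
    return r3 + correction_vector(p2, r2, p3, r3)

# Made-up example: the robot overshoots by a similar amount at the 2nd
# and 3rd measurements (components are x, y, z, tx, ty, tz).
p2 = np.array([100.0, 50.0, 300.0, 0.0, 0.0, 0.0])
r2 = np.array([100.4, 50.3, 300.0, 0.0, 0.2, 0.0])
p3 = np.array([100.0, 50.0, 250.0, 0.0, 0.0, 0.0])
r3 = np.array([100.5, 50.2, 250.0, 0.0, 0.3, 0.0])
print(final_command(p2, r2, p3, r3))
# -> [100.05  49.95 250.    0.    0.05   0.  ], i.e. close to p3
```

With the two deviations averaged, the commanded destination lands near the target p3 rather than at the systematically shifted r3.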

[0040] In short, according to the position detection method of the present invention, the positions of two points on the object 1 and the angle of the line connecting them relative to the reference line (see ty1 in [Equation 2]) are obtained by image recognition with the visual means (see the CCD camera 2) (primary measurement); the visual means (see the CCD camera 2) is then moved so that the obtained position and angle become the reference position and the reference angle, and the positions and angle of the two points (see ty2 in [Equation 2]) are re-detected at this moved position (secondary measurement).

[0041] Next, the distance between the visual means (see the CCD camera 2) and the object 1 is changed, and the positions and angle of the two points (see ty3 in [Equation 2]) are detected again (tertiary measurement). The position and angle are then corrected on the basis of the secondary- and tertiary-measurement position and angle data (see the correction vector of [Equation 7] and the final command value dest of [Equation 8]).

[0042] In this way, the secondary and tertiary measurements are executed in addition to the conventional primary-only measurement, and correction based on the detection data from the three detections in total (correction of the position on the X axis, the position on the Y axis, the axis deviation and the angular deviation) is carried out, so correction accuracy can be greatly improved without being affected by robot control accuracy or the resolution accuracy of the CCD camera 2.

[0043] Furthermore, the change in the distance between the visual means (see the CCD camera 2) and the object 1 brings the visual means closer to the object 1, so the resolution (resolution accuracy) of the visual means is improved.

[0044] In addition, since the visual means (see the CCD camera 2) is mounted on the robot 13, the control system and position detection function the robot 13 inherently possesses can be used effectively to drive the visual means. Moreover, the correction of position and angle described above corrects the control error of the robot system, so by correcting this control error the robot arm 14 (including, in the embodiment, the nut runners 17 at its tip) can be controlled to an appropriate position and angle with high accuracy.

[0045] In the correspondence between the configuration of the present invention and the embodiment described above, the visual means of the invention corresponds to the CCD camera 2 of the embodiment, and likewise the robot corresponds to the six-axis robot 13; however, the present invention is not limited to the configuration of the embodiment described above.

【Brief Description of the Drawings】

FIG. 1 is a claim-correspondence diagram showing the position detection method of the present invention.

FIG. 2 is a claim-correspondence diagram showing the primary measurement.

FIG. 3 is a claim-correspondence diagram showing the secondary measurement when the primary measurement is correct.

FIG. 4 is a claim-correspondence diagram showing the tertiary measurement when the primary measurement is correct.

FIG. 5 is a claim-correspondence diagram showing the secondary measurement when the primary measurement is incorrect.

FIG. 6 is a claim-correspondence diagram showing the tertiary measurement when the primary measurement is incorrect.

FIG. 7 is an explanatory diagram for the case where the primary measurement is incorrect.

FIG. 8 is an explanatory diagram of the position detection apparatus used in the position detection method of the present invention.

FIG. 9 is an explanatory diagram showing the motion deviation.

FIG. 10 is an explanatory diagram showing the deviation correction.

FIG. 11 is an explanatory diagram showing the final command value.

FIG. 12 is an explanatory diagram showing a conventional position detection method.

【Explanation of Symbols】

1 ... object; 2 ... CCD camera; 13 ... robot; a, b ... two points; c ... line connecting the two points; d ... reference line; e, f ... two points; g, h ... two points; θ, θ2, θ3 ... angles

Claims (4)

【Claims】

1. A method for detecting the position of an object by image processing, comprising: obtaining, by image recognition with visual means, the positions of two points on the object and the angle, relative to a reference line, of the line connecting the two points; then moving the visual means so that the obtained position and angle coincide with a reference position and a reference angle; re-detecting the positions and angle of the two points at the moved position; and then changing the distance between the visual means and the object and detecting the positions and angle of the two points once more, thereby correcting the position and angle.

2. The position detection method according to claim 1, wherein the change in the distance between the visual means and the object brings the visual means closer to the object.

3. The position detection method according to claim 2, wherein the visual means is mounted on a robot.

4. The position detection method according to claim 3, wherein the correction of the position and angle corrects a control error of the robot system.
JP02989996A 1996-01-23 1996-01-23 Position detection method Expired - Fee Related JP3562096B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP02989996A JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP02989996A JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Publications (2)

Publication Number Publication Date
JPH09196622A true JPH09196622A (en) 1997-07-31
JP3562096B2 JP3562096B2 (en) 2004-09-08

Family

ID=12288835

Family Applications (1)

Application Number Title Priority Date Filing Date
JP02989996A Expired - Fee Related JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Country Status (1)

Country Link
JP (1) JP3562096B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100752989B1 (en) * 2006-06-02 2007-08-30 주식회사 유진엠에스 Device capable of measuring 2-dimensional and 3-dimensional images
CN108839024A (en) * 2018-06-29 2018-11-20 易思维(杭州)科技有限公司 A kind of visual guide method suitable for the automatic loading process of arrangements for automotive doors
CN109059769A (en) * 2018-08-31 2018-12-21 中国科学院力学研究所 A kind of contactless current collecting bow lifting bow armed lever positional relationship measurement method
JP2019014410A (en) * 2017-07-10 2019-01-31 新明工業株式会社 Toe adjustment robot

Also Published As

Publication number Publication date
JP3562096B2 (en) 2004-09-08

Similar Documents

Publication Publication Date Title
EP1215017B1 (en) Robot teaching apparatus
JPH03136780A (en) Mechanism error correcting method for scalar type robot
JP2003117861A (en) Position correcting system of robot
US11554494B2 (en) Device for acquiring a position and orientation of an end effector of a robot
JP2001050741A (en) Calibration method and apparatus for robot
JP3466340B2 (en) A 3D position and orientation calibration method for a self-contained traveling robot
CN114012719A (en) Zero calibration method and system for six-axis robot
JPH09196622A (en) Method for detecting position
JP3169174B2 (en) Teaching Data Correction Method for Work Path Following Robot Manipulator
JP2000055664A (en) Articulated robot system with function of measuring attitude, method and system for certifying measuring precision of gyro by use of turntable for calibration reference, and device and method for calibrating turntable formed of n-axes
JP3511551B2 (en) Robot arm state detection method and detection system
JPH03161223A (en) Fitting of work
JPS61133409A (en) Automatic correction system of robot constant
JP2012006125A (en) Method for correcting coordinate value of horizontal articulated robot
JP2003121112A (en) Location detecting apparatus
Blank et al. High precision PSD guided robot localization: Design, mapping, and position control
JP6628170B1 (en) Measurement system and measurement method
JP2919135B2 (en) Robot position correction method
JPH04269194A (en) Plane measuring method
JPH0774964B2 (en) Robot positioning error correction method
TW202036194A (en) System for calibrating map data configured for mobile platform
JPH0260474B2 (en)
WO2021210540A1 (en) Coordinate system setting system and position/orientation measurement system
JP2985336B2 (en) Work line correction control method for industrial robots
JPH0477806A (en) Detecting device for position error of robot

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040419

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20040511

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20040524

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090611

Year of fee payment: 5

LAPS Cancellation because of no payment of annual fees