JP3562096B2 - Position detection method - Google Patents

Position detection method

Info

Publication number
JP3562096B2
JP3562096B2 (application JP02989996A)
Authority
JP
Japan
Prior art keywords
angle
visual means
robot
measurement
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP02989996A
Other languages
Japanese (ja)
Other versions
JPH09196622A (en)
Inventor
茂生 岡水
美昭 木村
俊治 坂本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazda Motor Corp
Original Assignee
Mazda Motor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazda Motor Corp filed Critical Mazda Motor Corp
Priority to JP02989996A
Publication of JPH09196622A
Application granted
Publication of JP3562096B2
Anticipated expiration
Legal status: Expired - Fee Related

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a method of detecting the position of an object by image processing, used, for example, when a robot assembles or removes a vehicle door or other parts.
[0002]
[Prior art]
Conventionally, as a method of detecting the position of an object to be measured by image processing (a position detection method), there is, for example, the method described in JP-A-2-243914.
That is, as shown in FIG. 12, the system comprises a master table 92 on which an object to be measured 91 is fixedly mounted, a CCD camera 93 as visual means fixed in position relative to the master table 92, and an image processing device 94 that processes the data captured by the CCD camera 93. Master position data of a measurement point 96 relative to a reference point 95 on the master table 92 is stored in a memory in the image processing device 94. The object 91 is then set at a predetermined position on the master table 92, and the positional relationship between the measurement point 96 of the object 91 and the reference point 95 of the master table 92, as imaged by the CCD camera 93, is computed by the image processing device 94.
[0003]
At this time, an arithmetic unit in the image processing device 94 aligns the reference points of the master position data (X0, Y0) of the measurement point 96 and the actually measured point (X1, Y1), computes the errors ΔX = X1 − X0 and ΔY = Y1 − Y0 between the master reference position and the measured position, and from these calculates the position coordinates of the measurement point, thereby detecting and correcting the position. This is a so-called coordinate position detection method.
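The single-reading offset computation described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent, and the function name is hypothetical.

```python
# Hypothetical sketch of the prior-art single-shot correction: the measured
# point (X1, Y1) is compared with the stored master data (X0, Y0), and the
# offset (dX, dY) is applied once, with no angle information taken into account.

def single_shot_offset(master_xy, measured_xy):
    """Return (dX, dY) = measured - master, the one-time correction."""
    x0, y0 = master_xy
    x1, y1 = measured_xy
    return (x1 - x0, y1 - y0)

print(single_shot_offset((10.0, 20.0), (12.5, 19.0)))  # (2.5, -1.0)
```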
[0004]
However, in this conventional method the correction is executed from only a single reading, and since no angle information concerning the CCD camera 93 is read, the error is as large as about ±2.5 mm. The method therefore cannot be used for position detection that requires a high-accuracy correction of, for example, about ±0.5 mm.
[0005]
In general, the closer the CCD camera serving as the image processing and visual means is brought to the object to be measured, the better its resolution. When a robot is driven in response to a position correction signal from the CCD camera, however, an angular component must be considered in addition to the X-axis and Y-axis components. Moreover, gravity acts on the robot while it is driven, the robot has manufacturing errors, and the computing capacity on the robot side is limited. As a result, motion deviations occur, sufficiently high accuracy cannot be secured, and the robot moves to a position and angle different from the commanded ones.
[0006]
[Problems to be solved by the invention]
The invention according to claim 1 obtains, by image recognition of a visual means, the positions of two points on an object and the angle of the line connecting the two points with respect to a reference line (primary measurement); then moves the visual means so that the obtained position and angle coincide with a reference position and a reference angle, and re-detects the positions and angle of the two points at the moved position (secondary measurement); and then detects the positions and angle of the two points once more after changing the distance between the visual means and the object (tertiary measurement), thereby correcting the position and angle. An object of the invention is thus to provide a position detection method in which these three detections, primary through tertiary, greatly improve the correction accuracy regardless of the robot control accuracy and the resolution accuracy.
[0007]
According to the invention of claim 2, in addition to the object of claim 1, the change in the distance between the visual means and the object is set to be an approach of the visual means toward the object. An object is to provide a position detection method that thereby improves the resolution (resolution accuracy) of the visual means.
[0008]
According to the invention of claim 3, in addition to the object of claim 2, the visual means is mounted on a robot. An object is to provide a position detection method that can drive the visual means by effectively using the control system and position detection function the robot inherently has.
[0009]
According to the invention of claim 4, in addition to the object of claim 3, the correction of the position and angle corrects a control error of the robot system. An object is to provide a position detection method that, by correcting this control error, can control the robot arm to an appropriate position and angle with high accuracy.
[0010]
[Means for Solving the Problems]
The invention according to claim 1 is a method of detecting the position of an object by image processing, characterized in that: the positions of two points on the object and the angle of the line connecting the two points with respect to a reference line are obtained by image recognition of a visual means (primary measurement); the visual means is then moved so that the obtained position and angle coincide with a reference position and a reference angle, and the positions and angle of the two points are re-detected at the moved position (secondary measurement); and the positions and angle of the two points are then detected once more after changing the distance between the visual means and the object (tertiary measurement), thereby correcting the position and angle.
[0011]
The invention of claim 2 is characterized in that, in addition to the configuration of claim 1, the change in the distance between the visual means and the object brings the visual means closer to the object.
[0012]
The invention of claim 3 is characterized in that, in addition to the configuration of claim 2, the visual means is mounted on a robot.
[0013]
The invention of claim 4 is characterized in that, in addition to the configuration of claim 3, the correction of the position and angle corrects a control error of the robot system.
[0014]
[Function and effects of the invention]
According to the invention of claim 1, as shown in the claim-correspondence diagrams of FIGS. 1 to 7, in this method of detecting the position of an object 1 by image processing, image recognition by a visual means 2 obtains the positions of two points a and b on the object 1 (see the points on the image in FIG. 2; the two points are predetermined by the teaching points of the robot) and the angle θ of the line c connecting them with respect to a reference line d (specifically, the Y-axis reference line of the visual means) (primary measurement). The visual means 2 is then moved, as shown by the virtual line α in FIG. 1, so that the obtained position a = (x, y) and angle θ become the reference position (0, 0) and the reference angle 0°, and the positions and angle of two points e and f (see the points on the images in FIGS. 3 and 5) are re-detected at this moved position (secondary measurement).
[0015]
At this time, when the primary measurement is correct, e = (0, 0) and θ2 = 0 as shown in FIG. 3; when it is not, e = (x2, y2) and θ = θ2 (where θ2 ≠ 0) as shown in FIGS. 5 and 7.
Next, the distance between the visual means 2 and the object 1 is changed, either approaching as indicated by the virtual line β in FIG. 1 or receding (not shown), and the positions and angle of two points g and h are detected again (tertiary measurement).
[0016]
In this tertiary measurement, when the earlier primary measurement is correct, g = (0, 0) and θ3 = 0 as shown in FIG. 4; when it is not, g = (x3, y3) and θ = θ3 (where θ3 ≠ 0) as shown in FIGS. 6 and 7.
The position and angle are then corrected based on the position and angle data of the secondary and tertiary measurements, namely (x2, y2), θ2, (x3, y3), and θ3.
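One measurement step (the position of a point and the angle of the line a-b relative to the Y-axis reference line) can be sketched as follows. The geometry is an assumption based on the description above, and all names are illustrative rather than taken from the patent.

```python
import math

# Illustrative sketch of one measurement: given the image coordinates of the
# two taught points, return the position of the first point and the angle of
# the line joining them relative to the Y-axis reference line of the visual
# means. Zero position and zero angle mean the measurement matches the
# reference, as in the "primary measurement correct" case.

def measure(a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    theta = math.atan2(dx, dy)  # 0 when the line a-b lies along +Y
    return a, theta

pos, theta = measure((0.0, 0.0), (0.0, 1.0))
# the case e = (0, 0), theta2 = 0: the measured pose matches the reference
```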
[0017]
In this way, a total of three detections (primary, secondary, and tertiary measurement) are performed, and the correction based on the detection data (correction of the position on the X-axis, the position on the Y-axis, the axis deviation, and the angular deviation) is executed; the correction accuracy can therefore be greatly improved regardless of the robot control accuracy and the resolution accuracy.
[0018]
According to the invention of claim 2, in addition to the effect of claim 1, the change in the distance between the visual means and the object brings the visual means closer to the object, so the resolution (resolution accuracy) of the visual means can be improved.
[0019]
According to the invention of claim 3, in addition to the effect of claim 2, since the visual means is mounted on the robot, the visual means can be driven by effectively using the control system and position detection function the robot inherently has.
[0020]
According to the invention of claim 4, in addition to the effect of claim 3, the above correction of position and angle corrects a control error of the robot system, so correcting this control error allows the robot arm to be controlled to an appropriate position and angle with high accuracy.
[0021]
[Embodiment]
An embodiment of the present invention will be described below in detail with reference to the drawings.
FIG. 8 shows a position detection device used in the position detection method of the present invention. The position detection device 11 comprises a six-axis robot 13 mounted on a robot base 12, a robot hand 16 attached via a fastening portion 15 to the tip of a robot arm 14 of the robot 13, and a plurality of nut runners 17 provided on the robot hand 16. A CCD camera 2 serving as the visual means is attached via a bracket (not shown) to the portion where the nut runners 17 are arranged, and image recognition by the CCD camera 2 obtains the positions of two points a and b on the object 1 (the object to be measured) (see FIG. 2) and the angle θ of the line c connecting the two points with respect to a reference line d (see FIG. 2).
[0022]
A position detection method using the position detection device 11 configured as described above is detailed below. The following description shows how the correction value is calculated (as computed on the robot) after the primary, secondary, and tertiary measurements, which are the same as those described in the section on the function and effects of the invention.
[0023]
In FIG. 9, p1, p2, and p3 denote the command positions at the primary, secondary, and tertiary measurements (p3 being the target position), and r1, r2, and r3 denote the captured positions at the primary, secondary, and tertiary measurements, respectively.
Here, the components of the command positions p1, p2, and p3 can be represented by the following [Equation 1].
[0024]
(Equation 1)
Figure 0003562096
[0025]
In the above [Equation 1], x, y, and z denote the X-, Y-, and Z-axes of the three-dimensional robot base coordinates; the subscripts "01", "02", and "03" denote the command values corresponding to the command positions p1, p2, and p3; and tx, ty, and tz denote the angles about the respective axes, ty in particular being the angle θ referred to in the claims.
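The six pose components of [Equation 1] can be represented as a simple record; this is an illustrative sketch, with names chosen here rather than taken from the patent.

```python
from dataclasses import dataclass

# Illustrative pose record for the command positions p1, p2, p3 of
# [Equation 1]: three translations along the robot-base axes and three
# rotations about them, ty corresponding to the claimed angle theta.

@dataclass
class Pose:
    x: float
    y: float
    z: float
    tx: float
    ty: float  # the angle theta referred to in the claims
    tz: float

p3 = Pose(x=10.0, y=0.0, z=3.0, tx=0.0, ty=2.0, tz=0.0)
print(p3.ty)  # 2.0
```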
In addition, the components of the above-described capturing positions r1, r2, and r3 can be represented by the following [Equation 2].
[0026]
(Equation 2)
Figure 0003562096
[0027]
In the above [Equation 2], x, y, and z denote, as in [Equation 1], the X-, Y-, and Z-axes of the three-dimensional robot base coordinates, and tx, ty, and tz likewise denote the angles about the respective axes; the subscripts "1", "2", and "3" denote the actual values corresponding to the positions r1, r2, and r3, ty in particular being the angle θ referred to in the claims.
The motion deviation vectors of the robot 13 with respect to the command positions p1, p2, and p3 are as shown in the following [Equation 3].
[0028]
(Equation 3)
Figure 0003562096
[0029]
The final motion correction value is the inverse vector of this deviation and should, in principle, be minus motion deviation 3. However, even when the robot 13 is driven with that correction command value, a deviation arises again owing to factors such as gravity and manufacturing errors. Therefore, the arithmetic mean vector of the two obtained motion deviation vectors (the accurate motion deviations 2 and 3), i.e., the midpoint of the two deviations, is taken as an approximate correction value. This correction vector can be expressed by the following [Equation 4].
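The correction of [Equation 4], the arithmetic mean of the two deviation vectors, can be sketched numerically as below. The 6-component pose tuples (x, y, z, tx, ty, tz) follow [Equation 1] and [Equation 2], but the function names and numbers are illustrative, not from the patent.

```python
# Sketch of [Equation 4]: correction = ((p2 - r2) + (p3 - r3)) / 2,
# the arithmetic mean of the two residual motion-deviation vectors
# measured at the secondary and tertiary stages.

def deviation(p, r):
    """Residual vector from the reached pose r back to the commanded pose p."""
    return tuple(pc - rc for pc, rc in zip(p, r))

def correction_vector(p2, r2, p3, r3):
    d2 = deviation(p2, r2)
    d3 = deviation(p3, r3)
    return tuple((a + b) / 2 for a, b in zip(d2, d3))

# Poses as (x, y, z, tx, ty, tz); the robot undershoots by similar amounts
# at both stages, so the mean is a reasonable approximate correction.
p2, r2 = (10, 0, 5, 0, 2.0, 0), (9.6, 0.2, 5, 0, 1.8, 0)
p3, r3 = (10, 0, 3, 0, 2.0, 0), (9.5, 0.1, 3, 0, 1.7, 0)
print(correction_vector(p2, r2, p3, r3))
```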
[0030]
(Equation 4)
Figure 0003562096
[0031]
Here, expressing the vector r2p2, which corresponds to deviation correction 2 in FIG. 10, in components gives the following [Equation 5].
[0032]
(Equation 5)
Figure 0003562096
[0033]
Likewise, expressing the vector r3p3, which corresponds to deviation correction 3 in FIG. 10, in components gives the following [Equation 6].
[0034]
(Equation 6)
Figure 0003562096
[0035]
In the above [Equation 5] and [Equation 6], the "0" prefixed to "p" and "r" denotes zero, i.e., the origin of the robot base coordinates. Substituting [Equation 5] and [Equation 6] into [Equation 4] and rearranging the vector expression yields the correction vector shown in the following [Equation 7].
[0036]
(Equation 7)
Figure 0003562096
[0037]
Therefore, the final command value (dest) to the robot 13 shown in FIG. 11 is the vector or3 plus the above correction vector, and can be expressed by the following [Equation 8].
[0038]
(Equation 8)
Figure 0003562096
[0039]
That is, without the correction the robot would move from the coordinate origin 0 toward r3, whereas with the above correction it can be moved from the coordinate origin 0 toward p3, the target position.
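The final command of [Equation 8] can be sketched the same way; this is a hedged illustration with made-up numbers, not the patent's code.

```python
# Sketch of [Equation 8]: dest = (o -> r3) + correction. Driving the robot
# to dest compensates the expected deviation, so the arm ends up near the
# target p3 instead of repeating the shortfall toward r3.

def final_command(r3, correction):
    return tuple(rc + cc for rc, cc in zip(r3, correction))

r3 = (9.5, 0.1, 3.0)           # pose actually reached at the third stage
correction = (0.5, -0.1, 0.0)  # mean deviation from the two measurements
print(final_command(r3, correction))  # (10.0, 0.0, 3.0)
```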
[0040]
In short, according to the position detection method of the present invention, image recognition by the visual means (the CCD camera 2) obtains the positions of two points on the object 1 and the angle of the line connecting them with respect to a reference line (see ty1 in Equation 2) (primary measurement); the visual means is then moved so that the obtained position and angle become the reference position and reference angle, and the positions and angle of the two points (see ty2 in Equation 2) are re-detected at this moved position (secondary measurement).
[0041]
Next, the distance between the visual means (the CCD camera 2) and the object 1 is changed, and the positions and angle of the two points (see ty3 in Equation 2) are detected again (tertiary measurement).
The position and angle are then corrected based on the position and angle data of the secondary and tertiary measurements (see the correction vector of Equation 7 and the final command value dest of Equation 8).
[0042]
In this way, secondary and tertiary measurements are executed in addition to the conventional primary-only measurement, and the correction based on the data of the three detections (correction of the position on the X-axis, the position on the Y-axis, the axis deviation, and the angular deviation) is executed; the correction accuracy can therefore be greatly improved regardless of the robot control accuracy and the resolution accuracy of the CCD camera 2.
[0043]
Further, since the change in the distance between the visual means (the CCD camera 2) and the object 1 brings the visual means closer to the object 1, the resolution (resolution accuracy) of the visual means can be improved.
[0044]
Further, since the visual means (see the CCD camera 2) is attached to the robot 13, there is an effect that the above-mentioned visual means can be driven by effectively utilizing the control system and the position detecting function inherent in the robot 13 side.
In addition, since the above-described correction of the position and angle corrects the control error of the robot system, the robot arm 14 (including, in the embodiment, the nut runners 17 at its tip) can be controlled to an appropriate position and angle with high accuracy.
[0045]
In the correspondence between the configuration of the present invention and the embodiment described above, the visual means of the invention corresponds to the CCD camera 2 of the embodiment, and likewise the robot corresponds to the six-axis robot 13; the present invention, however, is not limited to the configuration of the above embodiment.
[Brief description of the drawings]
FIG. 1 is a claim correspondence diagram showing a position detection method of the present invention.
FIG. 2 is a claim correspondence diagram showing primary measurement.
FIG. 3 is a claim correspondence diagram showing secondary measurement when primary measurement is correct.
FIG. 4 is a claim correspondence diagram showing tertiary measurement when primary measurement is correct.
FIG. 5 is a claim correspondence diagram illustrating secondary measurement when primary measurement is incorrect.
FIG. 6 is a claim correspondence diagram showing tertiary measurement when primary measurement is incorrect.
FIG. 7 is an explanatory diagram when primary measurement is incorrect.
FIG. 8 is an explanatory diagram of a position detecting device used in the position detecting method of the present invention.
FIG. 9 is an explanatory diagram showing an operation shift.
FIG. 10 is an explanatory diagram showing shift correction.
FIG. 11 is an explanatory diagram showing a final command value.
FIG. 12 is an explanatory diagram showing a conventional position detection method.
[Explanation of symbols]
1 … object; 2 … CCD camera; 13 … robot; a, b … two points; c … line connecting the two points; d … reference line; e, f … two points; g, h … two points; θ, θ2, θ3 … angles

Claims (4)

1. A method of detecting the position of an object by image processing, wherein:
the positions of two points on the object and the angle of the line connecting the two points with respect to a reference line are obtained by image recognition of a visual means;
the visual means is then moved so that the obtained position and angle become a reference position and a reference angle, and the positions and angle of the two points are re-detected at the moved position; and
the positions and angle of the two points are then detected again after changing the distance between the visual means and the object, thereby correcting the position and angle.
2. The position detection method according to claim 1, wherein the change in the distance between the visual means and the object brings the visual means closer to the object.
3. The position detection method according to claim 2, wherein the visual means is mounted on a robot.
4. The position detection method according to claim 3, wherein the correction of the position and angle corrects a control error of a robot system.
JP02989996A 1996-01-23 1996-01-23 Position detection method Expired - Fee Related JP3562096B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP02989996A JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP02989996A JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Publications (2)

Publication Number Publication Date
JPH09196622A JPH09196622A (en) 1997-07-31
JP3562096B2 true JP3562096B2 (en) 2004-09-08

Family

ID=12288835

Family Applications (1)

Application Number Title Priority Date Filing Date
JP02989996A Expired - Fee Related JP3562096B2 (en) 1996-01-23 1996-01-23 Position detection method

Country Status (1)

Country Link
JP (1) JP3562096B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100752989B1 (en) * 2006-06-02 2007-08-30 주식회사 유진엠에스 Device capable of measuring 2-dimensional and 3-dimensional images
JP6914515B2 (en) * 2017-07-10 2021-08-04 新明工業株式会社 Toe adjustment robot
CN108839024A (en) * 2018-06-29 2018-11-20 易思维(杭州)科技有限公司 A kind of visual guide method suitable for the automatic loading process of arrangements for automotive doors
CN109059769B (en) * 2018-08-31 2020-08-28 中国科学院力学研究所 Non-contact pantograph lifting pantograph arm rod position relation measuring method

Also Published As

Publication number Publication date
JPH09196622A (en) 1997-07-31

Similar Documents

Publication Publication Date Title
EP1215017B1 (en) Robot teaching apparatus
KR970007039B1 (en) Detection position correction system
EP0493612B1 (en) Method of calibrating visual sensor
US7532949B2 (en) Measuring system
EP3542969B1 (en) Working-position correcting method and working robot
JPH0435885A (en) Calibration method for visual sensor
US11230011B2 (en) Robot system calibration
JP3644991B2 (en) Coordinate system coupling method in robot-sensor system
KR20080088165A (en) Robot calibration method
US11554494B2 (en) Device for acquiring a position and orientation of an end effector of a robot
JPH11156764A (en) Locomotive robot device
JP3562096B2 (en) Position detection method
JP3466340B2 (en) A 3D position and orientation calibration method for a self-contained traveling robot
JP6912529B2 (en) How to correct the visual guidance robot arm
JP3754340B2 (en) Position detection device
JP3511551B2 (en) Robot arm state detection method and detection system
JPH1011146A (en) Device for correcting stop posture of mobile object
JP2016203282A (en) Robot with mechanism for changing end effector attitude
US11826919B2 (en) Work coordinate generation device
WO2020184575A1 (en) Measurement system and measurement method
JPH04269194A (en) Plane measuring method
Nilsson et al. Combining a stable 2-D vision camera and an ultrasonic range detector for 3-D position estimation
JPH0731536B2 (en) Teaching data correction robot
JP4519295B2 (en) Method for measuring workpiece misalignment
JP3541980B2 (en) Calibration method for robot with visual sensor

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040419

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20040511

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20040524

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090611

Year of fee payment: 5

LAPS Cancellation because of no payment of annual fees