JPH09167234A - Three-dimensional recognition method and device using CAD information - Google Patents
Info
- Publication number
- JPH09167234A (application JP7327408A)
- Authority
- JP
- Japan
- Prior art keywords
- cad
- dimensional
- camera
- contour line
- feature amount
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Description
[0001]
[Technical Field of the Invention] The present invention relates to a method and an apparatus for three-dimensionally recognizing a target object by means of a computer, for example one that controls an assembly robot on an automated production line.
[0002]
[Prior Art] To cut production costs and improve safety, machining and assembly production lines are being extensively automated with robots. So-called CAM, in which a robot is controlled on the basis of CAD information prepared in advance (CAD data, for example), is already widespread in machining steps such as grinding and cutting a given material. In the assembly step that puts such machined parts together, however, the computer controlling the robot must be made to recognize the workpiece three-dimensionally, so automation has not yet been fully achieved there. Nevertheless, thanks to the rapid progress of computer technology, it is becoming possible for a camera to capture an object and for a computer to recognize the object three-dimensionally from that image and control a robot.
[0003] Conventional methods of recognizing an object three-dimensionally generally fix in advance the direction from which the object is captured, and determine the type of the object by matching its image data, as seen from that direction, against graphic data of reference objects for the same direction stored in the computer; to obtain images of the object to match against the reference objects, cameras photographing the object from multiple angles were required.
[0004]
[Problems to Be Solved by the Invention] Because conventional three-dimensional recognition methods must capture the object from multiple directions, they require either several cameras or a device for changing the orientation of a single camera relative to the object, which raises the cost of automating a production line. Moreover, on production lines aimed at high-mix low-volume production there are many objects of similar appearance, so such conventional methods risk misrecognition.
[0005] We therefore decided to develop a new three-dimensional recognition method, with the goal of controlling robots more accurately and at higher speed while restraining the cost of production-line automation.
[0006]
[Means for Solving the Problems] What we developed as a result of this study is a three-dimensional recognition method using CAD information: from the CAD information of a plurality of objects, a two-dimensional figure is derived for each object by projecting it onto a plane orthogonal to a predetermined direction; CAD graphic features are computed from that figure, such as its centroid, its contour, the normal direction of the contour at each point arrayed along it, and the distance from the centroid to each such point; the same kinds of features (camera graphic features) are computed from the two-dimensional image of a specific target object (hereinafter, the specific object) captured by a camera from the predetermined direction; and the CAD graphic features are compared with the camera graphic features to determine the type, pose, and so on of the specific object captured by the camera.
[0007] The CAD information is preferably three-dimensional CAD data, but in some cases three-dimensional CAD data may first be built up from the two-dimensional CAD data making up a three-view drawing and then used. The file format of the CAD data is not restricted, but considering portability the DXF format is preferable.
[0008] First, the three-dimensional figure of an object given by CAD data is converted into a two-dimensional figure seen from an arbitrary viewpoint. In general, a three-dimensional object settles into a posture that is stable under gravity (hereinafter, a stable posture), so the image data of a specific object captured by a fixed camera is limited to that object in a stable posture. The angle of horizontal rotation can be treated as an offset, a phase difference, at calculation time. Two-dimensional figures converted from the three-dimensional figure therefore need to be created only for the kinds of image data the camera can capture. The three-dimensional figure seen from an arbitrary viewpoint can be obtained by translating the CAD data of the object corresponding to the specific object so that the center of the three-dimensional figure coincides with the origin of the three-dimensional space, and then moving each vertex (x, y, z) to coordinates (X, Y, Z) by a rotation Rx about the X axis (Equation 1), a rotation Ry about the Y axis (Equation 2), and a rotation Rz about the Z axis (Equation 3).
[0009] [Equation 1]
[0010] [Equation 2]
[0011] [Equation 3]
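The matrix images for Equations 1-3 are not reproduced in this text. As a hedged reconstruction under the usual conventions (the signs and the composition order Rz Ry Rx are assumptions), they would be the standard rotation matrices:

```latex
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
 = R_z R_y R_x \begin{pmatrix} x \\ y \\ z \end{pmatrix},\qquad
R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix},\quad
R_y = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix},\quad
R_z = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix}
```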
[0012] To convert the rotated three-dimensional figure into a two-dimensional figure, let D be the distance between the viewpoint E and the virtual screen onto which the three-dimensional figure is projected, and transform the three-dimensional coordinates (X, Y, Z) into two-dimensional coordinates (MX, MY) (Equations 4 and 5).
[0013] [Equation 4]
[0014] [Equation 5]
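The images for Equations 4 and 5 are likewise missing. A plausible pinhole-projection form consistent with the description, assuming the viewpoint E lies on the Z axis at distance D from the virtual screen (the sign convention is an assumption), is:

```latex
MX = \frac{D\,X}{D - Z}, \qquad MY = \frac{D\,Y}{D - Z}
```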
[0015] In the present invention, the two-dimensional figure obtained in this way is filled in and binarized, and features are extracted. First, from the binarized two-dimensional figure, the centroid (Gx, Gy) (Equations 6 and 7) and the points (DXj, DYj) (j = 1, 2, ..., Nc) representing the contour are obtained by contour-tracking processing ("Image Processing", 1983, pp. 187-188, Hiroshi Ozaki, Keiji Taniguchi, Hideo Ogawa). Besides the centroid and the contour, the features include the distance from the centroid of the object to the contour (CLj, Equation 8) and the normal direction of the contour (Vj, Equation 9).
[0016] [Equation 6]
[0017] [Equation 7]
[0018] [Equation 8]
[0019] [Equation 9]
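As a compact sketch of this feature extraction (Equations 6-9), assuming the contour points have already been obtained by a contour-tracking pass, the computation could look like the following; the sign convention for the normal Vj is an assumption:

```python
import numpy as np

def cad_figure_features(mask, contour):
    """Sketch of the feature extraction in [0015].

    mask    : 2-D boolean array, the filled, binarized figure
    contour : (Nc, 2) array of ordered contour points (DXj, DYj)
    """
    ys, xs = np.nonzero(mask)
    gx, gy = xs.mean(), ys.mean()            # centroid (Gx, Gy), Eqs. 6-7

    d = contour - np.array([gx, gy])
    cl = np.hypot(d[:, 0], d[:, 1])          # centroid-to-contour distances CLj, Eq. 8

    # Contour normal Vj, Eq. 9: perpendicular to the local tangent,
    # estimated by central differences (sign convention assumed).
    t = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    v = np.arctan2(t[:, 0], -t[:, 1])
    return (gx, gy), cl, v
```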
[0020] The specific object captured by the camera is binarized, and the distances ILn (n = 1, 2, ..., dr) from the centroid of the binarized image data of the specific object to its contour are compared with the centroid-to-contour distances CLj derived from the CAD data to judge identity. Here, ILn from the image data and CLj from the CAD data do not necessarily agree in scale or in number of data points; for the scale, CLj is multiplied by a proportionality constant K (Equation 10), and for the number of data points, ILn is turned into a continuous polyline by linear interpolation between ILn and ILn+1, this polyline is re-divided by Nc (the number of CLj), and the result, MLj, is used for the matching.
[0021] [Equation 10]
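A minimal sketch of this scale and sample-count alignment, assuming K is the ratio of the two maxima (consistent with the description in [0040]), might be:

```python
import numpy as np

def align_scale_and_count(il, cl):
    """Sketch of [0020]: bring the camera distances ILn and the CAD
    distances CLj to a common scale and sample count."""
    k = il.max() / cl.max()                  # proportionality constant K (Eq. 10, assumed)
    cl_scaled = k * cl

    # Interpolate ILn linearly into a continuous polyline, then
    # re-divide it into Nc samples to obtain MLj.
    nc = len(cl)
    src = np.linspace(0.0, 1.0, num=len(il))
    dst = np.linspace(0.0, 1.0, num=nc)
    ml = np.interp(dst, src, il)
    return cl_scaled, ml
```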
[0022] The image data captured by the camera and the two-dimensional figure of the CAD-data three-dimensional figure seen from an arbitrary viewpoint are compared as follows: a matching degree Pmax (Equation 11) is obtained from the differences between the centroid-to-point distances CLj and MLj when each local maximum of CLj, {CLj1, CLj2, ..., CLjm}, is aligned in turn with the maximum max{MLj} (1 ≤ j ≤ Nc), and a matching degree Pmin (Equation 11) is likewise obtained when each local minimum of CLj, {CLk1, CLk2, ..., CLkt}, is aligned in turn with the minimum min{MLj} (1 ≤ j ≤ Nc). The candidate for which both are smallest can be judged the most identical, and in that case it can be specified which CAD-data object the specific object captured by the camera is and from which direction that object is being viewed.
[0023] [Equation 11]
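A sketch of the matching-degree computation, assuming Equation 11 is a mean absolute difference taken after circularly aligning the two profiles (the exact residual form is not reproduced in this text):

```python
import numpy as np

def matching_degrees(cl_scaled, ml):
    """Sketch of [0022]/Equation 11 under an assumed residual form."""
    nc = len(ml)

    def p(shift):
        return np.abs(np.roll(cl_scaled, shift) - ml).mean()

    def local_extrema(sign):
        return [j for j in range(nc)
                if sign * cl_scaled[j] >= sign * cl_scaled[j - 1]
                and sign * cl_scaled[j] >= sign * cl_scaled[(j + 1) % nc]]

    # Align each local maximum of CLj with max{MLj} ...
    jmax = int(np.argmax(ml))
    p_max = min(p(jmax - j) for j in local_extrema(+1))
    # ... and each local minimum of CLj with min{MLj}.
    jmin = int(np.argmin(ml))
    p_min = min(p(jmin - j) for j in local_extrema(-1))
    return p_max, p_min
```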
[0024] The recognition method of the present invention imagines inside the computer a virtual space built from CAD information, and judges which object in that virtual space, viewed from which direction, yields a two-dimensional image that matches the image data captured by the camera; even one piece of CAD information requires a considerable amount of data. To distinguish a plurality of objects, that amount of data is multiplied by the number of objects, and at present it is difficult to carry out the recognition computation in real time. Processing can therefore be sped up by computing the features from the CAD information in advance and building a database. Changing the scale of the camera image data and matching the number of data points presuppose the use of such a database and are meant to keep the amount of data requiring computation small. If, for example, faster computers appear in the future, the two-dimensional figures from the CAD data can also be computed and compared on each occasion.
[0025] For a specific object whose type and direction have been recognized in this way, grip directions are derived from the CAD graphic features by the following methods: (1) the normal direction H (Equation 12) of the line joining the point (DXj1, DYj1) at a local maximum CLj1 of the distances CLj from the centroid of the two-dimensional figure to the points arrayed along the contour and the point (DXj2, DYj2) at the next local maximum CLj2 excluding the first; (2) the direction H (Equation 13) joining the point (DXk1, DYk1) at a local minimum CLk1 of the distances CLj and the point (DXk2, DYk2) at the next local minimum CLk2 excluding the first; (3) the normal direction Va of the longest of the contour segments along which the contour normal Vj at successive points hardly changes; or (4) one direction Vbw of a pair of local maxima {Vb1, Vb2, ..., Vbu} of the frequency distribution of the contour normals Vj that are offset from each other by about π. Each of these is taken as the grip direction of the specific object recognized by the respective method. Further, a direction parallel to a grip direction on which two or more of these agree and passing through the centroid is taken as the grip position; when the grip directions all differ, a direction parallel to the grip direction of (4) and passing through the centroid is judged to be the grip position, and in this way the direction and position for a robot hand to grip the object are recognized.
[0026] [Equation 12]
[0027] [Equation 13]
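As an illustration of methods (1) and (2) above, here is a sketch that joins the two leading extrema of CLj and, for method (1), takes the normal of the joining direction; reading Equations 12-13 as atan2 of the joining vector is an assumption:

```python
import numpy as np

def grip_direction_from_extrema(cl, contour, use_maxima=True):
    """Sketch of methods (1) and (2) in [0025]; assumes at least two
    local extrema exist in the distance profile CLj."""
    n = len(cl)
    s = 1.0 if use_maxima else -1.0
    extrema = [j for j in range(n)
               if s * cl[j] >= s * cl[j - 1] and s * cl[j] >= s * cl[(j + 1) % n]]
    extrema.sort(key=lambda j: s * cl[j], reverse=True)
    j1, j2 = extrema[:2]                     # the extremum and the next one

    dx = contour[j2][0] - contour[j1][0]
    dy = contour[j2][1] - contour[j1][1]
    h = np.arctan2(dy, dx)                   # direction joining the two points
    if use_maxima:
        h += np.pi / 2.0                     # method (1) takes its normal
    return h                                 # method (2) uses it directly
```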
[0028] Recognizing a specific object is usually the prerequisite for moving some device or machine (in some cases recognition alone is sufficient), and the recognition method above uses the CAD graphic features to determine, for example, the direction and position for gripping the recognized specific object with a robot hand. The grip direction serves as data indicating the orientation of the specific object, and can also be used to control a turntable or the like that rotates the specific object horizontally into a fixed orientation. Since the grip direction and position can be derived from the CAD graphic features, it is preferable to compute them in advance and build them into the database, for the same reason as above. For the grip direction, comparing the results of the several computations and taking a majority vote makes gripping by the robot hand or the like more stable; if the results disagree, the direction based on the contour normals Vj computed as CAD graphic features is adopted for the grip position, as the one most appropriate for gripping by a robot hand or the like.
[0029] Most robot hands in wide use today close and open a pair of fingers facing each other; to grasp a specific object with such a hand, it is preferable that, within the contour of the object, two opposing sides be parallel and long. Computing the grip position does not always yield one suited to gripping by the robot hand, however, so first several candidate grip directions meeting the above condition (two opposing sides parallel and long) are computed from the CAD graphic features, then the most favorable grip direction is judged and a concrete grip position is decided.
[0030] Method (1) judges that the point (DXj1, DYj1) at the local maximum CLj1 of the distances CLj from the centroid of the two-dimensional figure to the points arrayed along the contour, and the point (DXj2, DYj2) at the next local maximum CLj2 excluding the first, are each points where the contour bends sharply outward, and that between the points (DXj1, DYj1) and (DXj2, DYj2) there lies the side along which the distance CLj to the centroid changes least, that is, a smooth side; the normal direction of that side is determined to be the grip direction of the robot hand. In the case of a trapezoid, for example, two adjacent vertices out of the four are selected, with the result that the normal direction of the side having these two points as its ends is judged to be the grip direction H. Method (2) assumes that between the points (DXk1, DYk1) and (DXk2, DYk2) indicated by the two local minima there lies the side segment along which the distance CLj to the centroid changes least, and determines the direction joining these points to be the grip direction H of the robot hand.
[0031] In method (3), a contour segment whose points have nearly unchanging normal directions Vj can be judged to be roughly a straight line or a smooth curve; treating such contour segments as sides of the object orthogonal to the grip direction, the normal direction Va of the longest of these sides is determined to be the grip direction. Method (4) relies on the normal directions Vj at points on a side being the most numerous: the local maxima of the frequency distribution of the contour normals Vj at the points arrayed along the contour of the two-dimensional figure are examined, pairs of normal directions among these maxima offset by about π are taken to lie on two opposing sides, and one of them, Vbw, is judged to be the grip direction.
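A sketch of method (4), the histogram of contour normals; the bin count and the tolerance around π are illustrative assumptions:

```python
import numpy as np

def grip_direction_histogram(v, bins=64, tol=0.2):
    """Sketch of method (4) in [0025]/[0031]: find a pair of peaks in
    the histogram of contour normals Vj that are about pi apart."""
    hist, edges = np.histogram(np.mod(v, 2 * np.pi), bins=bins,
                               range=(0.0, 2 * np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks = [i for i in range(bins)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[(i + 1) % bins]]

    # Return one direction Vbw of the first peak pair separated by about pi.
    for i in peaks:
        for j in peaks:
            if abs(abs(centers[i] - centers[j]) - np.pi) < tol:
                return centers[i]
    return None
```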
[0032] If the object has a simple shape, the grip directions computed by the above methods should coincide, and if at least two of them agree, the direction parallel to the agreeing grip direction and passing through the centroid can be certified as the grip position allowing the most stable grip by a robot hand or the like. If none of them agree, the grip direction from method (4), obtained by joining at least two points in an opposing relationship, should permit the more stable grip by a robot hand or the like, so the direction parallel to this grip direction and passing through the centroid is judged to be the grip position.
[0033] The three-dimensional recognition method above can be configured as a three-dimensional recognition device consisting of a camera and a computer. For example, the computer comprises: CAD-information handling means that creates and stores CAD information for each of a plurality of objects; CAD-graphic-feature extraction means that derives, from the CAD information, the two-dimensional figure of each object projected onto a plane orthogonal to a predetermined direction and computes CAD graphic features such as the centroid of the figure, its contour, the normal direction of the contour at each point arrayed along it, and the distance from the centroid to each such point; image-data acquisition means that obtains a two-dimensional image of a specific object captured by the camera from the predetermined direction; camera-graphic-feature extraction means that computes, from the two-dimensional image, camera graphic features corresponding to the CAD graphic features; and discrimination means that compares the CAD graphic features with the camera graphic features to determine the type, pose, and so on of the specific object captured by the camera.
[0034] Further, if grip-determination means is added to the computer, which computes the grip direction and grip position of an object from the CAD graphic features of the two-dimensional figure derived from the CAD information of the object matched to the type, pose, and so on of the specific object captured by the camera, a three-dimensional recognition device can be configured that goes as far as, for example, gripping with a robot hand the specific object the camera has captured.
[0035] Depending on the shape, type, and number of objects to be discriminated, and given the marked recent improvement in CPU performance, a personal computer may be used in addition to an office computer, a minicomputer, or the like. One computer may be assigned to each means for distributed processing, or, since execution of the means follows a more or less sequential flow, a single computer may handle everything centrally. When several computers are used, they are preferably connected by a network.
[0036]
[Embodiments of the Invention] The configuration of a three-dimensional recognition device using the three-dimensional recognition method of the present invention is described below with reference to the drawings. Fig. 1 is a block diagram of a three-dimensional recognition device in which a camera 3 is attached to the robot hand 2 of a robot 1; while the camera 3 captures a specific object 4 from above, a computer 5 determines which of the objects 6, 7, 8 it is and at the same time operates the robot hand 2 according to the grip-position data of the object 7 that the computer 5 has identified. The computer 5 also serves as the means for creating and storing the CAD data of the objects 6, 7, 8; inside the computer 5, CAD graphic features are extracted from the three-dimensional CAD data to build a database 9.
[0037] If the grip positions of the objects 6, 7, 8 are computed beforehand and attached to this database 9, the robot hand 2 can be operated immediately after the specific object 4 is identified. Further, when enough time can be spent building the database 9 in advance, the objects can be grouped by their CAD graphic features into objects 8 that are easy to discriminate and objects 6, 7 that are hard to discriminate (both a fallen state and an upright state are possible); if during actual discrimination the target is recognized as belonging to the hard-to-discriminate group of objects 6, 7, the robot hand 2 can be moved so that the camera 3 also captures the specific object 4 from diagonally above, raising the certainty of recognition. As for the horizontal rotation of the specific object 4, in matching the CAD data features against the camera image features, the phase difference between the directions toward the centroid of the minimum and the local minimum, or of the maximum and the local maximum, can be treated as an offset value.
[0038] The CAD graphic features obtained from the CAD data are the centroid, the contour, and the centroid-to-contour distance. From the centroid-to-contour distance, the local maxima and minima and the frequency characteristics obtained by applying an FFT to the distance data can further be derived. This FFT is an outstanding feature in that it captures the character of an object without being misled by appearance such as the object's size or shape. Since computing the CAD graphic features and computing the camera graphic features from the image data are the same processing, both means can be realized by the same device, in this example the computer 5.
[0039] If the CAD graphic features and the camera graphic features are compared in the order of the flowchart shown in Fig. 2, the specific object can be recognized efficiently. In step 1, the agreement between the FFT results of the CAD graphic features and of the camera graphic features is examined; if they clearly differ, the specific object 4 captured by the camera 3 and the objects 6, 8 whose CAD graphic features were retrieved for comparison inside the computer 5 are excluded as requiring no further comparison. FFT agreement is judged by whether the frequency F1 at the maximum of the FFT values FCLq computed from CLj of the CAD data and the frequency F2 at the maximum of the FFT values FILq of ILn of the image data are approximately equal.
[0040] In step 2 the scale of the CAD graphic features and of the camera graphic features is made the same, and in step 3 the CAD graphic features are re-divided so that their number of data points matches that of the camera graphic features, putting the two in a comparable state. When the centroid-to-contour distance is chosen as the feature, the ratio can be obtained by comparing the largest value of each feature, and the data counts are matched by linearly interpolating the CAD graphic features into a continuous polyline and re-dividing it by the number of camera-graphic-feature data points.
[0041] In step 4 the CAD graphic features are compared with the camera graphic features. When the centroid-to-contour distance is chosen as the feature, first the minimum of the camera graphic features is detected and the matching degree Pmin (Equation 11) is computed while aligning each local minimum of the CAD graphic features with this minimum in turn; next the maximum of the camera graphic features is detected and the matching degree Pmax (Equation 11) is computed while aligning each local maximum of the CAD graphic features with this maximum in turn; and the lowest of the matching degrees Pmin, Pmax is taken as the matching degree P between the specific object 4 captured by the camera 3 and the object 7 being compared inside the computer 5.
[0042] Steps 1 through 4 above are carried out for each of the two-dimensional figures of the objects 6, 7, 8; if the CAD graphic features of the object 7 being compared belong to a hard-to-discriminate group, the comparison is extended to the CAD graphic features of the two-dimensional figure of the object 7 seen obliquely, raising the certainty of recognizing the specific object 4.
[0043] Once it has been recognized which object 6, 7, 8 the specific object 4 captured by the camera 3 is and from which direction it is viewed, the robot hand 2 is next operated, using the grip-position data attached to the CAD graphic features of the recognized object 7, to grip the specific object 4. Since the grip-position data are computed on the assumption that the specific object 4 follows the reference coordinates, the horizontal-rotation offset value found above is added or subtracted so that the robot hand 2 is rotated horizontally to match the grip direction. A specific object recognized in an upright state has already been grasped three-dimensionally, so it is preferable first to tip it over by touching it lightly with the robot hand, then to fix the grip position in the fallen state and grasp it with the robot hand.
[0044]
[Example] An example based on the above embodiment is described. The computer 5 is a personal computer with a 60 MHz Pentium (a registered trademark of Intel Corporation), the robot 1 is a general-purpose six-axis articulated industrial robot, and the computer 5 and the robot 1 are connected via RS-232C. The camera 3 is mounted on the robot hand 2 of the robot 1, and its image data are taken into the computer 5 through a dedicated image board (see Fig. 1). The CAD data of the object shown in Fig. 3 were created inside the computer 5 in the DXF file format, and from these CAD data a vertex-coordinate file (Table 1) and a face-definition file (Table 2) were created and saved.
[0045] [Table 1]
[0046] [Table 2]
[0047] From the CAD data above, three-dimensional figures rotated in three-dimensional space (perspective views 1, 2, 3) are obtained as seen in Fig. 4, Fig. 5, and Fig. 6, and by filling the interior bounded by the outermost contour, a two-dimensional surface model is obtained, for example that of Fig. 5 (Fig. 7). The CAD graphic features are computed for this two-dimensional surface model. The CAD graphic features in this example are (1) the centroid, (2) the contour, (3) the centroid-to-contour distance, (4) the local maxima and minima of the centroid-to-contour distance, and (5) the FFT data of the centroid-to-contour distance. Fig. 8 is a distribution chart of (3), the centroid-to-contour distance (CLj), and Fig. 9 is a frequency-characteristic chart of (5), the FFT data of the centroid-to-contour distance.
[0048] The CAD graphic features are assembled into the database 9 for the CAD data of each object. Table 3 lists the database 9 (see Fig. 1) of CAD graphic features of the two-dimensional surface models obtained from Fig. 4 (corresponding to leftmost number 1), Fig. 5 (leftmost number 2), and Fig. 6 (leftmost number 3). The name in the second column from the right is the group name: "none" means a group that need only be matched against image data of the specific object 4 seen from above, while "type_a" and "type_b" mean groups that must be matched against image data of the specific object 4 seen from two directions, from above and from diagonally above (a and b are groupings by the appearance of the figure, and may be extended to c, d, ...); these data are classified and stored by group.
[0049] [Table 3]
[0050] After a specific object 4 is placed below the robot hand 2, the camera 3 captures it; a two-dimensional surface model is obtained from this image data by binarization, and the camera image features are computed by the procedure above. These camera image features are then matched against the database 9 of CAD graphic features. In this example, the description starts from the stage after a rough agreement on the FFT data has been obtained. First, the CAD data features (CLj) are multiplied by the proportionality constant K to match the width of the distribution of the camera image features (ILn). Next, as seen in the contour distribution chart of Fig. 10, the number of camera image features is matched to the number Nc of CAD data features by linear interpolation. In this state, if the specific object 4 and the object 7 are the same, the superposed distribution charts nearly coincide (see Fig. 11); if they differ, the charts diverge (see Fig. 12). This comparison is carried out inside the computer 5 as the computation of the matching degree P.
[0051] The object 7 whose CAD data features yield the lowest matching degree P is the specific object 4 captured by the camera 3; from the kind of the CAD data it is judged what stable posture the specific object 4 has taken; the shift between the directions toward the centroid of the local minimum and the minimum, or of the local maximum and the maximum, of the CAD data features and the camera image features is recognized as the phase difference between the object 7 and the specific object 4; and the robot hand 2 rotates its grip direction horizontally by this phase difference and grasps the specific object 4 on the basis of the grip position computed in advance. For operating the robot hand 2, existing operating means using the computer 5 were employed.
[0052] Determining which object the specific object captured by the camera is can be done by computing and comparing along the above procedure, but the actual orientation of the specific object is not always an easy one to recognize. In some cases it is convenient to move a part sent in on a belt conveyor, or to knock over an upright specific object, in order to recognize it. In this example, the direction of the specific object has already been identified by the time the computer recognizes it by the above procedure, so a routine is added by which, for example, the robot hand knocks over an upright specific object to make it easier to grip. This, too, is an advantage stemming from the fact that, because the specific object can be recognized three-dimensionally using three-dimensional CAD data, recognition of the specific object in the height direction is possible.
[0053]
[Effects of the Invention] This method uses CAD information in the assembly step just as in the machining steps where CAD information has conventionally been used, and its distinctive merit is that it achieves consistent data management from design through machining to assembly. Moreover, because the CAD information used to recognize the specific object is three-dimensional data, the specific object can be recognized three-dimensionally, permitting more flexible operation of robot hands and the like. This amounts to making conventional industrial robots more intelligent.
[0054] Robots used on factory production lines and the like demand considerable speed, so at present it is convenient to store the CAD graphic features and grip-position data computed in advance from the CAD information in a database and, at the stage of recognizing a specific object, compare this database with the camera graphic features of the specific object. If, however, the processing power of computers improves further in the future, it will become possible to recognize a specific object immediately while transferring the CAD information to the computer in real time, opening up wider uses.
[0055] At the present stage, the invention contributes greatly to unmanned factory operation through teleoperation. In recent years computer networks have drawn much attention, and CAD information is often exchanged over networks of personal computers. If, by the method and device of the invention, the CAD information of an object is sent from a remote site to the computer operating a robot, and the robot can then recognize and grip a specific object three-dimensionally on the basis of those data, unmanned factory operation by teleoperation becomes attainable.
[Fig. 1] Block diagram of the three-dimensional recognition device that operates the robot hand.
[Fig. 2] Flowchart showing the order in which the features are compared.
[Fig. 3] Perspective view of the object used in the example.
[Fig. 4] Perspective view 1 of a three-dimensional figure obtained by rotating the CAD figure of the object of Fig. 3.
[Fig. 5] Perspective view 2 of a three-dimensional figure obtained by rotating the CAD figure of the object of Fig. 3.
[Fig. 6] Perspective view 3 of a three-dimensional figure obtained by rotating the CAD figure of the object of Fig. 3.
[Fig. 7] Surface model obtained from the three-dimensional figure of Fig. 5.
[Fig. 8] Distribution chart of the centroid-to-contour distance, a CAD graphic feature.
[Fig. 9] Frequency-characteristic chart of the FFT data of the centroid-to-contour distance, a CAD graphic feature.
[Fig. 10] Distribution chart of the contours while the CAD graphic features and the camera image features are being matched in size and number.
[Fig. 11] Distribution chart of the contours in the state of Fig. 10 when they nearly coincide.
[Fig. 12] Distribution chart of the contours in the state of Fig. 10 when they differ and are shifted.
1 Robot, 2 Robot hand, 3 Camera, 4 Specific object, 5 Computer, 6 Object, 7 Object, 8 Object, 9 Database
Continuation of front page
(72) Inventor: Yoshiomi Sozawa, 1-161 Niwase, Okayama-shi, Okayama
(72) Inventor: Takao Uchida, 419-4 Miyamae, Kurashiki-shi, Okayama
Claims (8)
1. A three-dimensional recognition method using CAD information, wherein: from the CAD information of a plurality of objects, two-dimensional figures are derived by projecting each of the objects onto a plane orthogonal to a predetermined direction; CAD graphic features are computed, such as the centroid of each two-dimensional figure, its contour, the normal direction of the contour at each point arrayed along the contour, and the distance from the centroid to each point arrayed along the contour; camera graphic features corresponding to the CAD graphic features are computed from a two-dimensional image obtained from a specific object captured by a camera from the predetermined direction; and the CAD graphic features and the camera graphic features are compared to determine the type, pose, and so on of the specific object captured by the camera.
2. A three-dimensional recognition method using CAD information, wherein, among the CAD graphic features of claim 1, the normal direction of the line joining the two points indicated by a local maximum of the distances from the centroid of the two-dimensional figure to the points arrayed along the contour and by the next local maximum excluding that local maximum is determined as the direction for a robot hand to grip the object.
3. A three-dimensional recognition method using CAD information, wherein, among the CAD graphic features of claim 1, the direction joining the two points indicated by a local minimum of the distances from the centroid of the two-dimensional figure to the points arrayed along the contour and by the next local minimum excluding that local minimum is determined as the direction for a robot hand to grip the object.
4. A three-dimensional recognition method using CAD information, wherein, among the CAD graphic features of claim 1, the normal direction of the longest of the contour segments along which the normal direction to the contour at the points arrayed along the contour of the two-dimensional figure hardly changes is determined as the direction for a robot hand to grip the object.
5. A three-dimensional recognition method using CAD information, wherein, among the CAD graphic features of claim 1, the local maxima of the frequency distribution of the normal directions to the contour at the points arrayed along the contour of the two-dimensional figure are examined, and one of a pair of normal directions among these maxima offset by about π is determined as the direction for a robot hand to grip the object.
6. A three-dimensional recognition method using CAD information, wherein, when the specific object of claim 1 has a plurality of grip directions, a direction parallel to a grip direction on which two or more of the grip directions agree and passing through the centroid is determined as the grip position; and when the grip directions all differ, among the CAD graphic features of claim 1, the local maxima of the frequency distribution of the normal directions to the contour at the points arrayed along the contour of the two-dimensional figure are examined, one of a pair of normal directions among these maxima offset by about π is taken as the direction for a robot hand to grip the object, and a direction parallel to this grip direction and passing through the centroid is determined as the grip position.
7. A three-dimensional recognition device using CAD information, comprising a camera and a computer, the computer comprising: CAD-information handling means for creating and storing the CAD information of each of a plurality of objects; CAD-graphic-feature extraction means for deriving, from the CAD information, two-dimensional figures obtained by projecting each of the objects onto a plane orthogonal to a predetermined direction, and for computing CAD graphic features such as the centroid of each two-dimensional figure, its contour, the normal direction of the contour at each point arrayed along the contour, and the distance from the centroid to each point arrayed along the contour; image-data acquisition means for obtaining a two-dimensional image from a specific object captured by the camera from the predetermined direction; camera-graphic-feature extraction means for computing, from the two-dimensional image, camera graphic features corresponding to the CAD graphic features; and discrimination means for comparing the CAD graphic features with the camera graphic features to determine the type, pose, and so on of the specific object captured by the camera.
8. A three-dimensional recognition device using CAD information according to claim 7, wherein grip-determination means is added to the computer for computing the grip direction and grip position of an object from the CAD graphic features of the two-dimensional figure derived from the CAD information of the object matched to the type, pose, and so on of the specific object captured by the camera.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP07327408A | 1995-12-15 | 1995-12-15 | 3D recognition method and apparatus using CAD information
Publications (2)
Publication Number | Publication Date |
---|---|
JPH09167234A true JPH09167234A (en) | 1997-06-24 |
JP3101674B2 JP3101674B2 (en) | 2000-10-23 |
Family
ID=18198828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP07327408A Expired - Fee Related JP3101674B2 (en) | 1995-12-15 | 1995-12-15 | 3D recognition method and apparatus using CAD information |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP3101674B2 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11344322A (en) * | 1998-06-01 | 1999-12-14 | Daihatsu Motor Co Ltd | Work posture discrimination device |
JP2009523623A (en) * | 2006-01-23 | 2009-06-25 | ジェローム グロボア, | Method and apparatus for automatic workpiece gripping |
JP2008275391A (en) * | 2007-04-26 | 2008-11-13 | Canon Inc | Position attitude measurement device and method |
US8639025B2 (en) | 2007-04-26 | 2014-01-28 | Canon Kabushiki Kaisha | Measurement apparatus and control method |
JP2010117754A (en) * | 2008-11-11 | 2010-05-27 | Seiko Epson Corp | Workpiece attitude recognition method and workpiece attitude recognition system |
CN109804221A (en) * | 2017-02-22 | 2019-05-24 | 日本电产三协株式会社 | Edge detection device and alignment device |
WO2018155010A1 (en) * | 2017-02-22 | 2018-08-30 | 日本電産サンキョー株式会社 | Edge detection device and alignment device |
JP2018142267A (en) * | 2017-02-28 | 2018-09-13 | 三菱重工業株式会社 | Object determination device, object determination method, program and data structure of feature quantity string |
WO2019187179A1 (en) * | 2018-03-26 | 2019-10-03 | オオクマ電子株式会社 | Injection solution container recognition system and injection solution container recognition method |
JP2019166259A (en) * | 2018-03-26 | 2019-10-03 | オオクマ電子株式会社 | Container recognition system for parenteral solution |
WO2021096320A1 (en) * | 2019-11-15 | 2021-05-20 | 주식회사 씨메스 | Method and apparatus for calibrating position of robot using 3d scanner |
KR20210059664A (en) * | 2019-11-15 | 2021-05-25 | 주식회사 씨메스 | Method and Apparatus for Position Calibation for Robot Using 3D Scanner |
WO2021118702A1 (en) * | 2019-12-12 | 2021-06-17 | Mujin, Inc. | Method and computing system for performing motion planning based on image information generated by a camera |
JP2022503406A (en) * | 2019-12-12 | 2022-01-12 | 株式会社Mujin | A method and calculation system for executing motion planning based on the image information generated by the camera. |
US11717971B2 (en) | 2019-12-12 | 2023-08-08 | Mujin, Inc. | Method and computing system for performing motion planning based on image information generated by a camera |
Also Published As
Publication number | Publication date |
---|---|
JP3101674B2 (en) | 2000-10-23 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| S111 | Request for change of ownership or part of ownership | Free format text: JAPANESE INTERMEDIATE CODE: R313115
| R350 | Written notification of registration of transfer | Free format text: JAPANESE INTERMEDIATE CODE: R350
| R250 | Receipt of annual fees | Free format text: JAPANESE INTERMEDIATE CODE: R250
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20090825 Year of fee payment: 9
| S531 | Written request for registration of change of domicile | Free format text: JAPANESE INTERMEDIATE CODE: R313532
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20100825 Year of fee payment: 10
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20110825 Year of fee payment: 11
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20120825 Year of fee payment: 12
| S111 | Request for change of ownership or part of ownership | Free format text: JAPANESE INTERMEDIATE CODE: R313117
| R350 | Written notification of registration of transfer | Free format text: JAPANESE INTERMEDIATE CODE: R350
| LAPS | Cancellation because of no payment of annual fees |