JPS58114172A - Three-dimensional visual input device - Google Patents
Info
- Publication number
- JPS58114172A (application JP21053581A)
- Authority
- JP
- Japan
- Prior art keywords
- camera device
- circuit
- robot
- output
- dimensional visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Image Input (AREA)
Abstract
Description
Detailed Description of the Invention

(1) Technical Field of the Invention

The present invention relates to a three-dimensional visual input device, and more particularly to a three-dimensional visual input device using two camera devices that is suitable for robot handling work and the like.
(2) Background of the Technology

When an object is recognized as a three-dimensional image with a camera device, a conventional autofocus camera device never focuses on the background or the like instead of the subject, because a human operator selects the subject by looking through the viewfinder. When an object is handled by a robot or the like, however, the position at which the object is placed must be recognized correctly so that accurate position information can be given to the robot.
For this purpose, methods have been proposed in which the object is viewed from above by a camera device and its position is recognized by a surveying technique such as triangulation. Although the plane on which the object to be grasped by the robot is placed is set in advance, when the object is placed with no restriction on its depth or lateral position, or when a plurality of objects are arranged on the same plane, the object-recognition method described above cannot obtain correct object-position information. A three-dimensional visual input device capable of correctly recognizing object-position information has therefore been desired.
(3) Prior Art and Its Problems

Fig. 1 shows a conventional method of determining the position of an object with a camera device for position recognition. If the object 2 to be recognized is placed on a mounting table 3, the position and height of the object 2 can be obtained by the principle of triangulation by viewing the object 2 from above with a camera device 1 arranged at a predetermined height H above the mounting table 3.
With the surveying method described above, however, the object must be placed within the fixed field of view of the camera device, that is, at a predetermined distance L from the position of the camera device.
The object 2 on the mounting table is not always placed at a fixed position, however. When the object 2 is placed at the position shown by the broken line, for example, it lies outside the field of view of the camera device 1 or at a position where it is out of focus, so that its position cannot be measured.
(4) Object of the Invention

In view of the conventional drawbacks described above, an object of the present invention is to supply position information from a first camera device, arranged along the axis of an object placed on a mounting table, to a second camera device, arranged in a direction orthogonal to the optical axis of the first camera device, so that the second camera device is automatically focused, and to obtain high-quality three-dimensional information from the second camera device for controlling a robot or the like.
(5) Constitution of the Invention

According to the present invention, this object is achieved by providing a three-dimensional visual input device comprising a first camera device arranged in the vertical direction of an object and a second camera device arranged in the horizontal direction of the object in order to recognize the shape of the object, the second camera device having a variable-focus mechanism whose focus can be controlled externally, wherein position data obtained by measuring the object position through pattern recognition of the object image by the first camera device are input to the second camera device, the second camera device is focused on the object, and the three-dimensional shape and position are recognized from the image thus obtained.
(6) Embodiment of the Invention

An embodiment of the present invention will now be described with reference to Figs. 2 to 5.
Fig. 2 is a perspective view showing the arrangement of the three-dimensional visual input device of the present invention, in which an object 2 placed on a mounting table 3 is grasped and handled by a robot 4. The first camera device 1V is arranged in the Z-axis direction of the mounting table 3, that is, along the axis of the object 2, so that it is focused on the upper surface of the mounting table 3. The second camera device 1H is arranged in the X-axis direction, that is, in a horizontal plane orthogonal to the Z axis.
In the three-dimensional visual input device constituted as described above, in order to measure the object position in the X-Y coordinate system 5 set within the field of view by image-processing the plan-view image of the object 2 on the mounting table 3 obtained from the first camera device 1V, the output of the first camera device 1V is applied, as shown in Fig. 3, to a horizontal-direction pattern measuring circuit 6, whose output is applied to a judgment circuit 8. The output of the second camera device 1H is applied to a vertical-direction pattern measuring circuit 7, whose output is also applied to the judgment circuit 8.
On the basis of the result of the judgment circuit 8, the second camera device 1H is correctly focused on the object 2 through a focus control circuit 9.
The judgment output of the judgment circuit 8 gives the horizontal position and height of the object 2. By supplying the robot 4, through a robot control circuit 10, with the X-Y-Z coordinates at which the robot 4 is to be positioned according to this output, the robot 4 grasps and handles the object 2.
Although Fig. 2 shows the case where a single object 2 is placed on the mounting table 3, when a plurality of objects 2a, 2b, 2c and 2d are placed on the mounting table 3 as shown in Fig. 4, the objects may be focused on and measured in order starting from the object nearest the second camera device 1H, that is, the object 2d having the largest X coordinate, and handled in that order by the robot 4.
Fig. 5 shows the case where a plurality of objects 2a, 2b and 2c are arranged in a row in the X-coordinate direction at the point -y1 of the Y coordinate with respect to the second camera device 1H, that is, at 2a (-y1, +x1), 2b (-y1, +x2) and 2c (-y1, +x3). In this case, the object 2c nearest the second camera device 1H is measured and handled first, and the objects 2b and 2a are then measured and handled in that order.
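The handling order described for Figs. 4 and 5 amounts to sorting the objects by descending X coordinate, nearest to the second camera device 1H first. A minimal sketch, with assumed names and coordinates:

```python
# Assumed object coordinates (name -> (y, x)), illustrating the Fig. 5 row;
# the object with the largest x is nearest the second camera device 1H.
objects = {"2a": (-1, 1), "2b": (-1, 2), "2c": (-1, 3)}

# Measure and handle in order of decreasing x (nearest to 1H first),
# so each object is in focus before the ones behind it are measured.
handling_order = sorted(objects, key=lambda name: objects[name][1], reverse=True)
print(handling_order)  # ['2c', '2b', '2a']
```

Handling the nearest object first also clears it out of the side camera's line of sight before the objects behind it are measured.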
(7) Effects of the Invention

As described above in detail, the three-dimensional visual input device of the present invention has the feature that the information required for positioning a robot with respect to an object can be provided, with a simple construction, by accurately measuring the object image.
Fig. 1 is an explanatory view for explaining a conventional object recognition method; Fig. 2 is a perspective view showing the constitution of the three-dimensional visual input device of the present invention; Fig. 3 is a system diagram of the three-dimensional visual input device of the present invention; and Figs. 4 and 5 are perspective views showing objects placed on the mounting table of the present invention.

1, 1H, 1V ... camera device; 2, 2a, 2b, 2c, 2d ... object; 3 ... mounting table; 4 ... robot; 5 ... X-Y coordinates; 6 ... horizontal-direction pattern measuring circuit; 7 ... vertical-direction pattern measuring circuit; 8 ... judgment circuit; 9 ... focus control circuit; 10 ... robot control circuit.
Claims (1)

A three-dimensional visual input device comprising a first camera device arranged in the vertical direction of an object and a second camera device arranged in the horizontal direction of the object in order to recognize the shape of the object, the second camera device having a variable-focus mechanism whose focus can be controlled externally, wherein position data obtained by measuring the object position through pattern recognition of the object image by the first camera device are input to the second camera device, and the three-dimensional shape and position are recognized from an image obtained by focusing the second camera device on the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP21053581A JPS58114172A (en) | 1981-12-26 | 1981-12-26 | Three-dimensional visual input device |
Publications (1)
Publication Number | Publication Date |
---|---|
JPS58114172A true JPS58114172A (en) | 1983-07-07 |
Family
ID=16590963
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP21053581A Pending JPS58114172A (en) | 1981-12-26 | 1981-12-26 | Three-dimensional visual input device |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPS58114172A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6258380A (en) * | 1985-07-17 | 1987-03-14 | レコグニシヨン システムズ インコ−ポレ−テツド | Identifier |
JPH0327477A (en) * | 1990-05-09 | 1991-02-05 | Canon Inc | Body information processing method |