JPH11338532A - Teaching device - Google Patents

Teaching device

Info

Publication number
JPH11338532A
Authority
JP
Japan
Prior art keywords
teaching
camera
shape model
work
virtual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP14080198A
Other languages
Japanese (ja)
Inventor
Yuji Hosoda
祐司 細田
Makoto Hattori
誠 服部
Takao Shimura
孝夫 志村
Nobuhiko Sugawara
宣彦 菅原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP14080198A priority Critical patent/JPH11338532A/en
Publication of JPH11338532A publication Critical patent/JPH11338532A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

PROBLEM TO BE SOLVED: To enable precise teaching operations for a work target with little shape variation by designating a part of a figure serving as a teaching index on a shape model displayed on a screen, and by calculating and recording, as teaching data, the three-dimensional position on the surface of the shape model corresponding to the designated screen coordinates. SOLUTION: A teaching mark 15a is moved on the screen of a display device 15 by operating an input device 17 so as to point on or near a figure 10 serving as the teaching index. Based on the screen coordinates of the indicated teaching point on the display device 15, a teaching position calculator 18 computes the two-dimensional coordinates of the teaching point on a virtual image. It then computes, by inverse transformation, the three-dimensional coordinates of the uniquely determined corresponding point on the surface of a shape model 8 for those two-dimensional coordinates. Repeating this operation generates a series of teaching points on the surface of the shape model 8 that describes the work trajectory, and these points are registered as teaching data in a teaching data recorder 19.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a teaching device for teaching the positioning of an automatic working device such as a manipulator, and more particularly to a teaching device effective for teaching the positioning of an automatic working device used for remote work.

[0002]

2. Description of the Related Art

In teaching the positioning of an automatic working device such as a manipulator used for remote work, images of both the work target and the automatic working device have conventionally been monitored with remote TV cameras, and the work position has been taught online by moving the automatic working device toward a predetermined work goal on the work target. In the field of FA machining and assembly, on the other hand, an offline teaching method in which teaching points are designated on a shape model, such as CAD data, having a shape similar to the work target has become widespread as a means of positioning teaching. As a teaching method that combines the work efficiency of offline teaching with the reliability of confirming the work on real images of the work target, a technique has been disclosed in which a stereoscopic real image and a three-dimensional shape model image of the work target are displayed superimposed and teaching is performed on the display screen (Matsui et al., "Integrated Remote Robot Control Method Using a Multimedia Display," Journal of the Robotics Society of Japan, Vol. 6, No. 4, pp. 33-41, 1988, etc.).

[0003]

Problems to Be Solved by the Invention

With the first approach, online teaching under the monitoring of remote TV cameras, the operator must check monitoring images from multiple directions in order to grasp the relative position of the automatic working device and the work target reliably while carefully moving the device toward the target, so teaching requires a large amount of work time. Furthermore, when the work target has little shape variation, such as a large flat wall surface, no reference for setting the position of a teaching point can be identified on the monitoring screen, which makes the teaching work difficult.

[0004] As for the second approach, offline teaching, a shape model that faithfully reproduces the shape of the actual work target is required; when the work content varies widely with conditions at the work site, producing precise shape models costs a great deal. Furthermore, as the shape model becomes more precise and its data scale grows, the system cost rises and the computer graphics rendering speed drops, which lowers the efficiency of the teaching work.

[0005] As for the third approach, in which a stereoscopic real image and a three-dimensional shape model image of the work target are displayed superimposed and teaching is performed on the display screen, both the real image and the model image are stereoscopic, so calibrating the coordinates and dimensions of each becomes complicated and the system cost is high.

[0006] An object of the present invention is to provide a teaching device that enables highly reliable offline teaching using a low-accuracy shape model, that allows precise teaching work to be carried out efficiently on work targets with little shape variation, and that has a low system cost and allows easy calibration of the shape model.

[0007]

Means for Solving the Problems

1) The object of the present invention is achieved by constructing a teaching device from: at least one monocular monitoring camera that captures an image of the work target; camera positioning means for arbitrarily setting the position and viewing direction of the monitoring camera with respect to the work target; a model database that stores data of a shape model whose shape is roughly similar to that of the work target; means for drawing a teaching-index figure on the surface of the shape model; virtual image generation means that views the shape model from a relative position and viewing direction equivalent to those of the monitoring camera with respect to the work target and generates a two-dimensional virtual image of the shape model in which at least the angle of view among the imaging characteristics of the monitoring camera is made equivalent; a display device that presents to the operator the virtual image generated by the virtual image generation means superimposed on the real image of the work target captured by the monitoring camera; and means for designating a part of the teaching-index figure, or its vicinity, on the shape model displayed on the screen of the display device, computing the three-dimensional position on the surface of the shape model corresponding to the designated screen coordinates, and recording it as teaching data.

[0008] 2) The object of the present invention is also achieved by configuring the teaching device of means 1) so that the camera positioning means comprises a camera positioning mechanism that manipulates the position and orientation of the monitoring camera and an operating device with which the operator gives positioning command information to the camera positioning mechanism; a work simulator is provided that presents to the operator images of the shape model and of a computer model of the camera positioning mechanism; and the field of view of the virtual image generated by the virtual image generation means is updated on the basis of the motion information of the camera positioning mechanism operated through the operating device, while the position and shape of the computer model of the camera positioning mechanism relative to the shape model on the work simulator are updated as well.

[0009] 3) The object is further achieved by configuring the teaching device of means 1) so that, as the teaching-index figure, any one or a combination of a grid figure, marks indicating candidate positions of teaching points, a line drawing of the work trajectory on the surface of the shape model, comment information about the figure, and the like is used.

[0010]

DESCRIPTION OF THE PREFERRED EMBODIMENT

An embodiment of the present invention will be described below with reference to FIGS. 1 and 2.

[0011] FIG. 1 is a configuration diagram of this embodiment, and FIG. 2 is an explanatory diagram showing its operation. In FIGS. 1 and 2, the same components are given the same reference numerals.

[0012] The configuration of this embodiment will be described with reference to FIG. 1.

[0013] The teaching device 3 of this embodiment generates and outputs, to a work arm control device 2 that controls the position and orientation of a work arm 1 serving as the automatic working device, a sequence of teaching point coordinates for forming a work trajectory. It comprises: a monocular TV camera 5 that photographs a work target 4; a camera arm 6 that manipulates the position and viewing direction of the TV camera 5; a camera position/view control device 7 that controls the TV camera 5 and the camera arm 6; a shape model 8 describing a shape roughly similar to the work target 4; a model database 9 that stores and manages the computer data of the shape model 8; a teaching index generator 11 that draws a teaching-index figure 10 on the surface of the shape model 8; a virtual image generator 12 that views the shape model 8, on which the teaching-index figure 10 generated by the teaching index generator 11 has been drawn, from a relative position and viewing direction equivalent to those of the TV camera 5 with respect to the work target 4 and generates a two-dimensional virtual image of the shape model 8 under imaging conditions, such as the angle of view, equivalent to those of the TV camera 5; an image synthesizer 13 that generates a composite image in which the virtual image of the shape model 8 generated by the virtual image generator 12 is overlapped on the real image of the work target 4 captured by the TV camera 5; a display device 15 that presents the composite image to an operator 14; an operating device 16 with which the operator 14 gives operation information to the camera position/view control device 7; an input device 17, such as a mouse, for indicating a teaching position on the screen of the display device 15; a teaching position calculator 18 that displays the teaching position designated by operating the input device 17 as a teaching mark 15a on the display device 15 and computes, from the two-dimensional coordinates of the teaching position designated by the teaching mark 15a on the screen of the display device 15, the three-dimensional coordinates of the corresponding point on the surface of the shape model 8 that is uniquely determined under the virtual image generation conditions of the virtual image generator 12; a teaching data recorder 19 that holds the computation results of the teaching position calculator 18 in time series as teaching point coordinates for forming the work trajectory of the work arm 1; and a work simulator 22 that presents to the operator 14 a bird's-eye image of the shape model 8 together with an image of a camera arm model 20 that moves in conjunction with the motion information of the camera arm 6, and that further presents a virtual playback motion of a work arm model 21 relative to the shape model 8 based on the sequence of teaching point coordinates accumulated in the teaching data recorder 19.
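The component list above can be pictured as a small set of software objects. The following Python sketch shows one possible arrangement for illustration only; the class and field names are hypothetical and are not taken from the embodiment.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class CameraState:
    """Pose and intrinsics shared by the real TV camera 5 and the camera model 23."""
    pose: np.ndarray          # 4x4 homogeneous transform, work-target base -> camera
    fov_deg: float            # angle of view reported by the camera position/view control device 7
    image_size: tuple = (640, 480)

@dataclass
class ShapeModel:
    """Roughly similar shape model 8 plus the teaching-index figure 10 drawn on it."""
    vertices: np.ndarray      # (N, 3) points in the model base frame 8a
    faces: np.ndarray         # (M, 3) vertex indices
    index_figure: list = field(default_factory=list)   # polylines of the teaching index

@dataclass
class TeachingDevice:
    """Ties together the elements numbered 5-22 in the embodiment."""
    camera: CameraState
    model: ShapeModel
    teaching_points: list = field(default_factory=list)  # output of calculator 18, held by recorder 19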

[0014] Next, the operation of this embodiment will be described.

[0015] The coordinate system 5a of the TV camera 5, referenced to the base coordinate system 4a of the work target 4, is computed from the position/orientation transformation between the base coordinate system 4a and the base coordinate system 6a of the camera arm 6 and the position transformation from the base coordinate system 6a to the coordinate system 5a. The latter transformation changes as the camera arm 6 moves. The virtual image generator 12 contains a camera model 23 that simulates the TV camera 5; the coordinate system 23a of the camera model 23, referenced to the base coordinate system 8a of the shape model 8, is computed from the position/orientation transformation between the base coordinate system 8a and the base coordinate system 23b, which is the origin of motion of the camera model 23, and the position transformation from the base coordinate system 23b to the coordinate system 23a. Here, the transformation between the base coordinate system 8a and the base coordinate system 23b is calibrated in advance, at the initial stage of the work, so as to coincide with the transformation between the base coordinate system 4a and the base coordinate system 6a. Under this condition, the transformation from the base coordinate system 23b to the coordinate system 23a is updated continuously so as to coincide with the transformation from the base coordinate system 6a to the coordinate system 5a obtained from the camera position/view control device 7.
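In matrix terms, the bookkeeping in this paragraph reduces to composing homogeneous transforms and keeping the model-side chain synchronized with the real-side chain. A minimal Python sketch under that assumption (frame names follow the reference numerals above; the function names are illustrative):

import numpy as np

def compose(*transforms):
    """Chain 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for t in transforms:
        out = out @ t
    return out

# Real side: work-target base 4a -> camera-arm base 6a -> TV camera frame 5a.
T_4a_6a = np.eye(4)        # fixed; measured during setup
T_6a_5a = np.eye(4)        # reported by controller 7; changes as camera arm 6 moves
T_4a_5a = compose(T_4a_6a, T_6a_5a)

# Model side: shape-model base 8a -> camera-model base 23b -> camera model frame 23a.
# Calibrated once so that T_8a_23b equals T_4a_6a; T_23b_23a then tracks T_6a_5a.
T_8a_23b = T_4a_6a.copy()

def update_camera_model(T_6a_5a_now):
    """Keep the camera model's pose consistent with the real TV camera."""
    T_23b_23a = T_6a_5a_now
    return compose(T_8a_23b, T_23b_23a)   # pose of camera model 23 in frame 8a

update_camera_model would be called whenever the camera position/view control device 7 reports a new arm pose, which corresponds to the sequential update described above.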

[0016] With this arrangement, the position and viewing direction of the camera model 23 with respect to the shape model 8 are always kept identical to the position and viewing direction of the TV camera 5 with respect to the work target 4. The camera model 23 then converts the three-dimensional shape data of the shape model 8 into two-dimensional image data on its imaging plane (not shown), based on its own position and viewing direction and on imaging conditions such as the angle of view of the TV camera 5 obtained from the camera position/view control device 7. As a result, the virtual image generator 12 produces a virtual image of the shape model 8 equivalent to an image of the work target 4 photographed by the TV camera 5.
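The conversion from three-dimensional shape data to two-dimensional image data on the camera model's imaging plane is essentially a perspective projection driven by the camera pose and angle of view. A sketch assuming an ideal pinhole camera with square pixels (the embodiment does not specify the camera model's internals, so the details here are assumptions):

import numpy as np

def project_points(points_model, T_model_to_cam, fov_deg, image_size):
    """Project (N, 3) shape-model points into pixel coordinates of the virtual image.

    T_model_to_cam: 4x4 transform from the model base frame 8a into the camera model frame 23a.
    Returns pixel coordinates and depths; the caller discards points with non-positive depth.
    """
    w, h = image_size
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels from the angle of view

    pts_h = np.hstack([points_model, np.ones((len(points_model), 1))])
    pts_cam = (T_model_to_cam @ pts_h.T).T[:, :3]

    u = f * pts_cam[:, 0] / pts_cam[:, 2] + w / 2.0
    v = f * pts_cam[:, 1] / pts_cam[:, 2] + h / 2.0
    return np.stack([u, v], axis=1), pts_cam[:, 2]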

[0017] To operate the camera arm 6, the operator moves the TV camera 5 to an arbitrary position and viewing direction with the operating device 16, proceeding while checking two images. One is the screen of the work simulator 22: the bird's-eye image of the camera arm model 20 relative to the shape model 8 shown there represents the mutual positional relationship of the actual work target 4 and the camera arm 6, and is used to confirm the global position of the TV camera 5. The other is the composite image presented on the display device 15, whose detailed information is used to judge whether the intended teaching position has been reached.

[0018] The teaching work is carried out in the following procedure.

[0019] As shown in FIG. 2, the display device shows a composite image 15d obtained by combining the real image 15b of the work target 4 captured by the TV camera 5 with the virtual image 15c of the shape model 8 generated by the virtual image generator 12. The virtual image 15c renders only the contour lines 15h of the surfaces of the shape model 8 visible from the monitoring direction and the teaching-index figure 10; everything else is rendered transparent. The composite image 15d is therefore the real image 15b with the contour lines 15h and the teaching-index figure 10 overwritten on it.
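Since the virtual image 15c carries only the contour lines 15h and the teaching-index figure 10 and is transparent elsewhere, the compositing performed by the image synthesizer 13 amounts to overwriting those pixels onto the real image. A minimal sketch, assuming the virtual rendering is an RGBA array whose alpha channel is nonzero only where lines were drawn:

import numpy as np

def composite(real_image, virtual_rgba):
    """Overlay the line-only virtual image on the real TV image (HxWx3 and HxWx4 arrays)."""
    out = real_image.copy()
    mask = virtual_rgba[..., 3] > 0            # drawn contour / teaching-index pixels
    out[mask] = virtual_rgba[..., :3][mask]    # overwrite; the background stays the real image
    return out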

[0020] With this screen layout, the teaching work can proceed while the operator checks detailed structures 15e whose representation was omitted from the shape model 8 and changed portions 15f of the work target 4 that did not exist when the shape model 8 was created. Moreover, because teaching points can be defined with reference to the teaching-index figure 10, teaching point positions can be set easily even where the real image 15b shows no distinguishing structural features.

[0021] In this embodiment a grid figure is used as the example of the teaching-index figure 10. If, for instance, the spacing of the vertical and horizontal grid lines is drawn at a prescribed scale, set coordinates for teaching points can be defined on the work target 4. Further, if a line image 15g of the work trajectory is drawn in advance, as in this embodiment, teaching points can be set along it. Comment information such as numerical coordinate values and identification names of work trajectories can also be drawn as part of the teaching-index figure 10.
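As one concrete, hypothetical way to generate such a grid-type teaching index, the sketch below lays out vertical and horizontal lines at a prescribed pitch over a rectangular, planar patch of the model surface; on a curved model the lines would instead follow the surface itself.

import numpy as np

def grid_index_figure(origin, u_axis, v_axis, width, height, pitch):
    """Return grid polylines (pairs of 3D points) on a planar patch of the shape model.

    origin, u_axis, v_axis: 3D origin and unit in-plane axes of the patch (model frame 8a).
    width, height, pitch: patch extent and the prescribed grid spacing, in the same units.
    """
    lines = []
    for u in np.arange(0.0, width + 1e-9, pitch):       # vertical lines
        lines.append([origin + u * u_axis, origin + u * u_axis + height * v_axis])
    for v in np.arange(0.0, height + 1e-9, pitch):      # horizontal lines
        lines.append([origin + v * v_axis, origin + v * v_axis + width * u_axis])
    return lines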

[0022] A teaching point is designated by moving the teaching mark 15a on the screen of the display device 15 with the input device 17 and pointing on or near the teaching-index figure 10. From the screen coordinates of the designated point on the display device 15, the teaching position calculator 18 computes the two-dimensional coordinates of the teaching point on the virtual image 15c. It then computes, by inverse transformation, the three-dimensional coordinates of the corresponding point on the surface of the shape model 8 that is uniquely determined for those two-dimensional coordinates, using the imaging conditions obtained from the virtual image generator 12, namely the position, viewing direction, and angle of view of the camera model 23. Repeating this operation generates a series of teaching points on the surface of the shape model 8 that describes the work trajectory; the series is registered in the teaching data recorder 19, and the teaching work is complete. This process teaches the position coordinates of the hand of the work arm 1. The hand orientation can be determined, for example, automatically from a rule tied to the teaching point's position information, such as taking the normal direction of the model surface near the teaching position, or by projecting the work arm model 21 onto the composite image 15d in the same way as the shape model 8 and setting the orientation through virtual operation while checking for interference with the work target 4.
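The inverse transformation described here, from a designated pixel back to a unique point on the model surface, can be realized by casting the pixel's viewing ray from the camera model and intersecting it with the shape model. A sketch for a triangulated model, reusing the pinhole assumptions above (a Moller-Trumbore ray/triangle test; the nearest hit is taken as the teaching point; the function names are illustrative):

import numpy as np

def pixel_to_ray(u, v, fov_deg, image_size, T_cam_in_model):
    """Ray origin and direction, in the model frame 8a, for a pixel of the virtual image."""
    w, h = image_size
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    d_cam = np.array([(u - w / 2.0) / f, (v - h / 2.0) / f, 1.0])
    R, t = T_cam_in_model[:3, :3], T_cam_in_model[:3, 3]
    d = R @ d_cam
    return t, d / np.linalg.norm(d)

def ray_triangle(o, d, a, b, c, eps=1e-9):
    """Return the hit distance along the ray, or None (Moller-Trumbore)."""
    e1, e2 = b - a, c - a
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = o - a
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def teaching_point(u, v, vertices, faces, fov_deg, image_size, T_cam_in_model):
    """3D teaching point on the shape model surface for a designated pixel, or None if no hit."""
    o, d = pixel_to_ray(u, v, fov_deg, image_size, T_cam_in_model)
    hits = [ray_triangle(o, d, *vertices[f]) for f in faces]
    hits = [t for t in hits if t is not None]
    return o + min(hits) * d if hits else None

Each accepted hit would then be appended to the teaching point sequence held by the teaching data recorder 19.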

[0023] After the teaching work is completed, the work arm model 21 is run on the work simulator using the teaching data registered in the teaching data recorder 19 to check for interference and other problems during the work, and the work is then finally carried out by the work arm 1.
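The playback check can be pictured as stepping the work arm model through the recorded teaching points and testing for collisions at each pose. A hypothetical sketch (move_tip_to and check_interference stand in for whatever motion and collision tests the simulator provides; they are not part of the disclosure):

def playback_check(teaching_points, work_arm_model, check_interference):
    """Drive the work arm model through the taught points; report any interfering points."""
    problems = []
    for i, point in enumerate(teaching_points):
        work_arm_model.move_tip_to(point)          # hypothetical simulator call
        if check_interference(work_arm_model):     # user-supplied collision test
            problems.append((i, point))
    return problems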

[0024] According to the embodiment described above, a virtual image of the shape model, rendered as if photographed by the TV camera, can be superimposed on the real image of the work target taken by the monocular TV camera and presented to the operator, and the teaching-index figure displayed on the virtual image gives the operator a reference for setting teaching positions. The three-dimensional position on the shape model surface corresponding to a teaching point is obtained automatically merely by indicating the point on the two-dimensional screen of the display device. In addition, the field of view of the real image can be set while the motion of the camera arm and the imaging range of the TV camera are checked globally on the work simulator, and a virtual image equivalent to the real image is generated in step with it.

[0025]

Effects of the Invention

According to the present invention, work instruction information can be obtained on a remote monitoring image just as if work instruction information such as a work trajectory were drawn directly on the surface of the work target, and offline teaching is possible on a virtual image overlapped on the real image of the work target. The invention is therefore effective in improving the efficiency and reliability of remote work in which it is difficult for workers to approach the work target for setup, such as preventive maintenance of nuclear power plants, space robots, and medical teleoperation. Furthermore, because offline teaching is performed on a virtual image overlapped on the real image of the work target, a low-accuracy shape model used for offline teaching can be complemented by the real image, which speeds up the computer graphics and reduces both the system cost and the cost of producing the shape model.

[0026] Furthermore, since a three-dimensional position on the surface of the work target is taught automatically simply by designating a teaching position on a monocular two-dimensional image, the operation is simple and the efficiency of the teaching work is improved. In addition, since the teaching system is based on monocular images, calibrating the position and field of view of the TV camera with respect to the work target is easier than in a teaching system based on stereoscopic vision, the system configuration is simpler, and the system cost can be reduced.

[Brief Description of the Drawings]

FIG. 1 is a configuration diagram of a teaching device according to an embodiment of the present invention.

FIG. 2 is an explanatory diagram showing the operation of the device of FIG. 1.

[Description of Reference Numerals]

1... work arm, 2... work arm control device, 3... teaching device, 4... work target, 5... TV camera, 6... camera arm, 7... camera position/view control device, 8... shape model, 9... model database, 10... teaching-index figure, 11... teaching index generator, 12... virtual image generator, 13... image synthesizer, 14... operator, 15... display device, 16... operating device, 17... input device, 18... teaching position calculator, 19... teaching data recorder, 20... camera arm model, 21... work arm model, 22... work simulator, 23... camera model.

Continuation of front page: (72) Inventor: Nobuhiko Sugawara, 3-1-1 Sachimachi, Hitachi-shi, Ibaraki, within the Hitachi Works of Hitachi, Ltd.

Claims (3)

[Claims]

[Claim 1] A teaching device for teaching the positioning of an automatic working device such as a manipulator, comprising: at least one monocular monitoring camera that captures an image of a work target; camera positioning means for arbitrarily setting the position and viewing direction of the monitoring camera with respect to the work target; a model database that stores data of a shape model having a shape roughly similar to that of the work target; means for drawing a teaching-index figure on the surface of the shape model; virtual image generation means that views the shape model from a relative position and viewing direction equivalent to those of the monitoring camera with respect to the work target and generates a two-dimensional virtual image of the shape model in which at least the angle of view among the imaging conditions of the monitoring camera is made equivalent; a display device that presents to an operator the virtual image generated by the virtual image generation means superimposed on the real image of the work target captured by the monitoring camera; and means for designating a part of the teaching-index figure, or its vicinity, on the shape model displayed on the screen of the display device, computing the three-dimensional position on the surface of the shape model corresponding to the designated screen coordinates, and recording it as teaching data.
[Claim 2] The teaching device according to claim 1, wherein the camera positioning means comprises a camera positioning mechanism that manipulates the position and orientation of the monitoring camera and an operating device with which the operator gives positioning command information to the camera positioning mechanism; a work simulator is provided that presents to the operator images of the shape model and of a computer model of the camera positioning mechanism; and the field of view of the virtual image generated by the virtual image generation means is updated based on motion information of the camera positioning mechanism operated through the operating device, while the position and orientation of the computer model of the camera positioning mechanism relative to the shape model on the work simulator are updated as well.
[Claim 3] The teaching device according to claim 1, wherein, as the teaching-index figure, any one or a combination of a grid figure, marks indicating candidate positions of teaching points, a line drawing of a work trajectory on the surface of the shape model, comment information about the figure, and the like is used.
JP14080198A 1998-05-22 1998-05-22 Teaching device Pending JPH11338532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP14080198A JPH11338532A (en) 1998-05-22 1998-05-22 Teaching device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP14080198A JPH11338532A (en) 1998-05-22 1998-05-22 Teaching device

Publications (1)

Publication Number Publication Date
JPH11338532A true JPH11338532A (en) 1999-12-10

Family

ID=15277066

Family Applications (1)

Application Number Title Priority Date Filing Date
JP14080198A Pending JPH11338532A (en) 1998-05-22 1998-05-22 Teaching device

Country Status (1)

Country Link
JP (1) JPH11338532A (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002222007A (en) * 2000-09-26 2002-08-09 Faro Technol Inc Method and medium for manufacture, measurement, and analysis for computer aid
JP2002283258A (en) * 2001-03-22 2002-10-03 Fujie:Kk Program list display system
JP4680516B2 (en) * 2003-02-11 2011-05-11 クーカ・ロボター・ゲゼルシャフト・ミット・ベシュレンクテル・ハフツング Method for fading in robot information to real-world images, and apparatus for visualizing robot information into real-world images
JP2004243516A (en) * 2003-02-11 2004-09-02 Kuka Roboter Gmbh Method for fading-in information created by computer into image of real environment, and device for visualizing information created by computer to image of real environment
JP2008080466A (en) * 2006-09-28 2008-04-10 Daihen Corp Teaching method of carrier robot
JP2009266221A (en) * 2008-04-21 2009-11-12 Mori Seiki Co Ltd Machining simulation method and machining simulation apparatus
JP2009269155A (en) * 2008-05-09 2009-11-19 Yamatake Corp Teaching device and teaching method
JP2010061662A (en) * 2008-09-05 2010-03-18 Mori Seiki Co Ltd Machining status monitoring method and machining status monitoring apparatus
JP2010061661A (en) * 2008-09-05 2010-03-18 Mori Seiki Co Ltd Machining status monitoring method and machining status monitoring apparatus
JP2012171024A (en) * 2011-02-17 2012-09-10 Japan Science & Technology Agency Robot system
JP2016522089A (en) * 2013-03-15 2016-07-28 カーネギー メロン ユニバーシティ Controlled autonomous robot system for complex surface inspection and processing
CN109275046A (en) * 2018-08-21 2019-01-25 华中师范大学 A kind of teaching data mask method based on double video acquisitions
CN109275046B (en) * 2018-08-21 2021-06-18 华中师范大学 Teaching data labeling method based on double video acquisition

Similar Documents

Publication Publication Date Title
US11440179B2 (en) System and method for robot teaching based on RGB-D images and teach pendant
CN100594517C (en) Method and device for determining optical overlaps with AR objects
CN108422435B (en) Remote monitoring and control system based on augmented reality
Yew et al. Immersive augmented reality environment for the teleoperation of maintenance robots
US20200078951A1 (en) Robot system equipped with video display apparatus that displays image of virtual object in superimposed fashion on real image of robot
US10888998B2 (en) Method and device for verifying one or more safety volumes for a movable mechanical unit
JP3343682B2 (en) Robot operation teaching device and operation teaching method
CN102848389B (en) Realization method for mechanical arm calibrating and tracking system based on visual motion capture
US20160158937A1 (en) Robot system having augmented reality-compatible display
EP1435737A1 (en) An augmented reality system and method
CN106142092A (en) A kind of method robot being carried out teaching based on stereovision technique
Buss et al. Development of a multi-modal multi-user telepresence and teleaction system
JPH06317090A (en) Three-dimensional display device
Fang et al. Robot path and end-effector orientation planning using augmented reality
JPH11338532A (en) Teaching device
Rastogi et al. Telerobotic control with stereoscopic augmented reality
Lawson et al. Augmented reality as a tool to aid the telerobotic exploration and characterization of remote environments
JPS59229619A (en) Work instructing system of robot and its using
CN112958974A (en) Interactive automatic welding system based on three-dimensional vision
Cooke et al. Interactive graphical model building using telepresence and virtual reality
AU2015345061B2 (en) A method of controlling a subsea platform, a system and a computer program product
Guan et al. A novel robot teaching system based on augmented reality
JPS6097409A (en) Operation teaching method of robot
JP2003271993A (en) Monitor image processing method, image monitoring system, and maintenance work system
Ibari et al. An application of augmented reality (ar) in the manipulation of fanuc 200ic robot