JPH09304013A - Three-dimensional position and attitude detector of plane - Google Patents

Three-dimensional position and attitude detector of plane

Info

Publication number
JPH09304013A
JPH09304013A JP11725596A
Authority
JP
Japan
Prior art keywords
plane
marks
mark
height
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP11725596A
Other languages
Japanese (ja)
Inventor
Hiroyuki Hagiwara
裕之 萩原
Masaaki Nakazawa
正明 中沢
Hitoshi Wada
均 和田
Takeshi Mizuno
剛 水野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IHI Shibaura Machinery Corp
Original Assignee
IHI Shibaura Machinery Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by IHI Shibaura Machinery Corp filed Critical IHI Shibaura Machinery Corp
Priority to JP11725596A priority Critical patent/JPH09304013A/en
Publication of JPH09304013A publication Critical patent/JPH09304013A/en
Withdrawn legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

PROBLEM TO BE SOLVED: To enable the three-dimensional position and attitude of a plane to be detected by a fast and inexpensive device. SOLUTION: The height in the Z-axis direction of each of three marks 6a to 6c provided on a plane 5 is obtained based on the output from a height detecting sensor 9, and the position of each mark in the X-Y axis direction is obtained based on the image data of a camera 8 that has photographed the marks 6a to 6c and on preset teaching data. The three-dimensional position and attitude of the plane are then obtained by arithmetic processing based on these values. Since the detection by the height detecting sensor 9 and the imaging by the camera 8 can be performed in a short time, the three-dimensional position and attitude are detected in a short time. Furthermore, since it is structurally sufficient to provide the height detecting sensor 9, the camera 8, and a means for performing arithmetic processing based on the Z-axis height and X-Y position of each mark 6a to 6c, the three-dimensional position and attitude of the plane 5 can be detected by an inexpensive device.

Description

Detailed Description of the Invention

[0001]

[Field of the Invention] The present invention relates to a method of detecting the three-dimensional position and attitude of a plane.

[0002]

[Description of the Related Art] In an automated system using a robot, for example when a robot performs work such as placing a core into a main mold, the three-dimensional position and attitude of the upper surface of the main mold must be detected and the robot controlled according to the detection result. As a method of detecting the three-dimensional position and attitude of such a plane, the light-section method has conventionally been known. In the light-section method, the plane to be measured is irradiated with slit light, and the three-dimensional position and attitude of the plane are detected thereby.

[0003]

[Problems to Be Solved by the Invention] However, the light-section method takes time to read, and its equipment is expensive.

[0004] Therefore, when the work of placing a core into a main mold is performed with a robot, detecting the three-dimensional position and attitude of the upper surface of the main mold by the light-section method makes the work efficiency very low and the equipment cost enormous.

[0005]

[Means for Solving the Problems] According to the invention of claim 1, the heights in the Z-axis direction of three marks provided on a plane are obtained based on the output from a height detecting sensor, the position of each mark in the X-Y axis direction is obtained based on image data from a camera that has photographed the marks and on preset teaching data for the marks, and the three-dimensional position and attitude of the plane are obtained by arithmetic processing based on the Z-axis height and X-Y axis position of each mark. The detection by the height detecting sensor and the imaging by the camera can therefore be performed in a short time, so the three-dimensional position and attitude are detected in a short time. Moreover, structurally it suffices to provide the height detecting sensor, the camera, and a means for performing the arithmetic processing based on the Z-axis height and X-Y axis position of each mark, so the three-dimensional position and attitude of the plane can be detected by an inexpensive device.

[0006]

[Embodiment of the Invention] An embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a schematic diagram showing the hand portion 3 of a robot 2 gripping a core 1, and a main mold 4 into which the core 1 is to be placed. Three round-hole-shaped marks 6a, 6b and 6c are formed on the main mold upper surface 5, which is the plane whose three-dimensional position and attitude are to be detected.

[0007] FIG. 2 is a block diagram showing the control system of the robot 2. The robot 2 is arranged near the stop position of the core 1 conveyed on a core conveying line (not shown) and the stop position of the main mold 4 conveyed on a main mold conveying line (not shown). Attached to the hand portion 3 of the robot 2 are a core detecting CCD camera 7 for determining the position of the core 1 stopped on the core conveying line, a mark detecting CCD camera 8 serving as the camera for determining the horizontal (X-Y axis direction) positions of the marks 6a to 6c, and a laser sensor 9 serving as the height detecting sensor for determining the vertical (Z-axis direction) heights of the marks 6a to 6c.

[0008] A robot controller 10 that drives the robot 2 is connected to the robot 2. The core detecting CCD camera 7 and the mark detecting CCD camera 8 are connected to an image processing device 11 that detects the horizontal positions of the core 1 and the marks 6a to 6c. The robot controller 10, the image processing device 11, and the laser sensor 9 are connected to an FA (Factory Automation) computer 12.

[0009] The FA computer 12 performs various calculations. First, based on the position data of the core 1 from the image processing device 11, it calculates the correction amount the robot 2 requires in order to grip the core 1 with the hand portion 3. Second, based on the horizontal position data of the marks 6a to 6c from the image processing device 11 and the vertical height data of the marks 6a to 6c from the laser sensor 9, it calculates the three-dimensional position of each of the marks 6a to 6c, and from these three-dimensional positions it calculates the three-dimensional position and attitude of the main mold upper surface 5. Third, it calculates the correction amount the robot 2 requires in order to place the core 1 gripped by the hand portion 3 into the main mold 4.

[0010] With this configuration, when the robot 2 is used to place the core 1 into the main mold 4, the position of the core 1 is first obtained based on the image data of the core 1 photographed by the core detecting CCD camera 7 and on preset teaching data for the core 1, and the robot 2 is position-controlled based on that result so that the hand portion 3 grips the core 1.

[0011] After the hand portion 3 grips the core 1, the three-dimensional position and attitude of the main mold upper surface 5 are calculated from the X-Y axis positions of the marks 6a to 6c, obtained based on the image data of the marks photographed by the mark detecting CCD camera 8 and on preset teaching data for the marks, and from the Z-axis heights of the marks 6a to 6c, obtained based on the output of the laser sensor 9. The robot 2 is then position-controlled based on the calculation result so that the gripped core 1 is placed into the main mold 4.

[0012] The process of detecting the three-dimensional position and attitude of the main mold upper surface 5 will be described with reference to the flowchart of FIG. 3. First, a counter n identifying the marks 6a to 6c is cleared to "0" and then incremented by 1. Next, the hand portion 3 is moved to a position where the laser sensor 9 can detect the Z-axis (vertical) height of the n-th mark, the mark 6a, on the main mold upper surface 5 (step S1). The output of the laser sensor 9, obtained by receiving the laser beam emitted from the sensor and reflected by the main mold upper surface 5, is input to the FA computer 12, and the height of the mark 6a is calculated from this output (step S2). From this result, the Z-axis correction amount needed to make the distance between the mark 6a and the mark detecting CCD camera 8 equal to the focal length of that camera is calculated (step S3), and the hand portion 3 is raised or lowered according to the correction amount (step S4).
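As a minimal sketch (the function name, millimetre units, and sign convention are assumptions for illustration, not code from the patent), the correction of steps S3 and S4 reduces to the difference between the measured camera-to-mark distance and the camera's focal distance:

```python
def z_correction_mm(camera_to_mark_mm: float, focal_distance_mm: float) -> float:
    """Step S3: Z-axis correction that brings the mark to the mark-detecting
    CCD camera's focal distance. A positive value means the camera is
    currently too far from the mark by that amount."""
    return camera_to_mark_mm - focal_distance_mm

# Mark measured 310 mm from the camera, focal distance 300 mm:
print(z_correction_mm(310.0, 300.0))  # 10.0
```

Step S4 would then command the robot controller to move the hand portion 3 by this amount before the mark is photographed.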

[0013] Next, the hand portion 3 is moved to a position where the mark detecting CCD camera 8 can detect the horizontal (X-Y axis direction) position of the n-th mark 6a (step S5), and the FA computer 12 compares the image data of the mark 6a photographed by the mark detecting CCD camera 8 with the teaching data for that mark to calculate the X-Y axis (horizontal) position of the mark 6a (step S6). The three-dimensional position of the mark 6a is then calculated from the results of step S2 and step S6 (step S7).

[0014] Steps S1 to S7 are carried out for the other marks 6b and 6c as well, and the three-dimensional position and attitude of the main mold upper surface 5 are calculated from the three-dimensional position data of the three marks 6a to 6c (step S8). This calculation is a well-known method performed on the data of three points on a plane.
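The patent does not spell out the "well-known" plane calculation of step S8. One common choice, sketched below under that assumption (the function name is invented for illustration), derives the plane's attitude from the unit normal of the triangle formed by the three mark positions:

```python
import math

def plane_from_marks(p1, p2, p3):
    """Return (unit_normal, centroid) of the plane through the three
    3-D mark positions produced by steps S1-S7."""
    v1 = tuple(b - a for a, b in zip(p1, p2))  # in-plane vector p1 -> p2
    v2 = tuple(b - a for a, b in zip(p1, p3))  # in-plane vector p1 -> p3
    n = (v1[1] * v2[2] - v1[2] * v2[1],        # cross product v1 x v2
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0])
    norm = math.sqrt(sum(c * c for c in n))
    if norm == 0.0:
        raise ValueError("marks are collinear; they do not define a plane")
    unit_normal = tuple(c / norm for c in n)
    centroid = tuple((a + b + c) / 3.0 for a, b, c in zip(p1, p2, p3))
    return unit_normal, centroid

# A perfectly horizontal mold surface: the normal points straight up along Z.
normal, centroid = plane_from_marks((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(normal)  # (0.0, 0.0, 1.0)
```

Two independent in-plane vectors exist whenever the three marks are not collinear, which is why three marks are the minimum needed to fix both position and attitude.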

[0015] The calculation of the three-dimensional positions of the marks 6a to 6c will now be explained, taking the single mark 6a as an example. Let TH1 be the output value of the laser sensor 9 at teaching time, and (TX1, TY1) the output value of the mark detecting CCD camera 8 at teaching time. Let H1 be the output value of the laser sensor 9 at actual detection time, and (IX1, IY1) the output value of the mark detecting CCD camera 8 at actual detection time. The change in position (X1, Y1, Z1) of the mark 6a from teaching time to actual detection time is then as follows.

[0016]

X1 = IX1 - TX1
Y1 = IY1 - TY1
Z1 = H1 - TH1

From this position change, the three-dimensional position of the mark 6a in the coordinate system taken as the reference at teaching time can be calculated.
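The arithmetic of paragraph [0016] can be written directly (a sketch; the function name and argument conventions are ours, not the patent's):

```python
def mark_displacement(teach_xy, teach_h, detected_xy, detected_h):
    """Change in position (X1, Y1, Z1) of a mark between teaching time
    (TX1, TY1, TH1) and actual detection time (IX1, IY1, H1)."""
    x1 = detected_xy[0] - teach_xy[0]   # X1 = IX1 - TX1
    y1 = detected_xy[1] - teach_xy[1]   # Y1 = IY1 - TY1
    z1 = detected_h - teach_h           # Z1 = H1 - TH1
    return (x1, y1, z1)

# The mark moved +2.5 in X, -1.0 in Y and rose 0.5 relative to teaching.
print(mark_displacement((10.0, 20.0), 5.0, (12.5, 19.0), 5.5))  # (2.5, -1.0, 0.5)
```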

[0017] According to this embodiment, because the three-dimensional position and attitude of the main mold upper surface 5 are detected, the core 1 can be placed into the main mold 4 accurately even when the main mold upper surface 5 is tilted from the horizontal, by controlling the orientation of the hand portion 3 gripping the core 1 according to that tilt.

[0018] Although this embodiment uses the laser sensor 9 as the height detecting sensor, an ultrasonic sensor may be used instead. In that case, the ultrasonic wave emitted from the ultrasonic sensor is reflected by the main mold upper surface back into the sensor.

[0019]

[Effects of the Invention] According to the invention of claim 1, the heights in the Z-axis direction of three marks provided on a plane are obtained based on the output from a height detecting sensor, the position of each mark in the X-Y axis direction is obtained based on image data from a camera that has photographed the marks and on preset teaching data for the marks, and the three-dimensional position and attitude of the plane are obtained by arithmetic processing based on the Z-axis height and X-Y axis position of each mark. The detection by the height detecting sensor and the imaging by the camera can therefore be performed in a short time, so the three-dimensional position and attitude are detected in a short time. Moreover, structurally it suffices to provide the height detecting sensor, the camera, and a means for performing the arithmetic processing based on the Z-axis height and X-Y axis position of each mark, so the three-dimensional position and attitude of the plane can be detected by an inexpensive device.

[Brief Description of the Drawings]

[FIG. 1] A schematic diagram showing a core gripped by the hand portion of a robot and the main mold that receives the core, according to an embodiment of the present invention.

[FIG. 2] A block diagram showing the control system of the robot.

[FIG. 3] A flowchart illustrating the process of detecting the three-dimensional position and attitude of the main mold upper surface.

[Explanation of Reference Numerals]

5: plane; 6a to 6c: marks; 8: camera; 9: height detecting sensor

Continuation of front page: (72) Inventor Tsuyoshi Mizuno, 1-1-1 Ishishiba, Matsumoto City, Nagano Prefecture, within the Matsumoto Factory of Ishikawajima Shibaura Machinery Co., Ltd.

Claims (1)

[Claims]

[Claim 1] A method of detecting the three-dimensional position and attitude of a plane, characterized in that the heights in the Z-axis direction of three marks provided on the plane are obtained based on the output from a height detecting sensor, the position of each mark in the X-Y axis direction is obtained based on image data from a camera that has photographed the marks and on preset teaching data for the marks, and the three-dimensional position and attitude of the plane are obtained by arithmetic processing based on the Z-axis height and the X-Y axis position of each mark.
JP11725596A 1996-05-13 1996-05-13 Three-dimensional position and attitude detector of plane Withdrawn JPH09304013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP11725596A JPH09304013A (en) 1996-05-13 1996-05-13 Three-dimensional position and attitude detector of plane

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP11725596A JPH09304013A (en) 1996-05-13 1996-05-13 Three-dimensional position and attitude detector of plane

Publications (1)

Publication Number Publication Date
JPH09304013A true JPH09304013A (en) 1997-11-28

Family

ID=14707242

Family Applications (1)

Application Number Title Priority Date Filing Date
JP11725596A Withdrawn JPH09304013A (en) 1996-05-13 1996-05-13 Three-dimentional position and attitude detector of plane

Country Status (1)

Country Link
JP (1) JPH09304013A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352212B2 (en) 2009-11-18 2013-01-08 Hexagon Metrology, Inc. Manipulable aid for dimensional metrology
US10573021B2 (en) 2016-09-28 2020-02-25 Honda Motor Co., Ltd. Position and attitude estimation method and position and attitude estimation system


Similar Documents

Publication Publication Date Title
JP6407812B2 (en) Machine tool control system capable of obtaining workpiece origin and workpiece origin setting method
EP1190818B1 (en) Position-orientation recognition device
JP2008021092A (en) Simulation apparatus of robot system
JP2008296330A (en) Robot simulation device
JP2007021634A (en) Automatic machining method for workpiece and automatic machining system for workpiece
CN109732601B (en) Method and device for automatically calibrating pose of robot to be perpendicular to optical axis of camera
CN109835706B (en) Workpiece configuration system
JPS60189517A (en) Position controller
JPH09304013A (en) 1997-11-28 Three-dimensional position and attitude detector of plane
CN110039520B (en) Teaching and processing system based on image contrast
JPH03161223A (en) Fitting of work
JPH0871973A (en) Method of instructing robot for stocker
US11978645B2 (en) Laser processing apparatus
JP2708195B2 (en) Teaching method and apparatus for three-dimensional laser beam machine
KR100926272B1 (en) Method of the auto calibration for the laser vision system using X-Y stage
JPS5994539A (en) Positioning device for material to be worked in machine tool
JPH06214622A (en) Work position sensor
JP2521939B2 (en) Processing equipment
JP2523420B2 (en) Image processing method in optical measuring device
JPH10118976A (en) Image inputting position setting method and setting device
JPH0934552A (en) Mobile object controller, position detection device, mobile object device and their control method
WO2022163580A1 (en) Processing method and processing device for generating cross-sectional image from three-dimensional position information acquired by visual sensor
JPH0212710B2 (en)
JPH0914939A (en) Shape measuring system for target to be photographed
JPH06185995A (en) Visual inspecting device for substrate

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20030805