JPH03160303A - Detecting method of multiple hole - Google Patents

Detecting method of multiple hole

Info

Publication number
JPH03160303A
Authority
JP
Japan
Prior art keywords
hole
plane
multiple holes
points
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP30036889A
Other languages
Japanese (ja)
Other versions
JPH0663733B2 (en)
Inventor
Gohei Iijima
飯島 剛平
Sadahiro Taneda
定博 種子田
Takao Kanamaru
孝夫 金丸
Arata Hiramatsu
平松 新
Yasuo Nakano
康夫 中野
Sumihiro Ueda
上田 澄広
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kawasaki Heavy Industries Ltd
Original Assignee
Kawasaki Heavy Industries Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kawasaki Heavy Industries Ltd filed Critical Kawasaki Heavy Industries Ltd
Priority to JP1300368A priority Critical patent/JPH0663733B2/en
Publication of JPH03160303A publication Critical patent/JPH03160303A/en
Publication of JPH0663733B2 publication Critical patent/JPH0663733B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Automatic Assembly (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

PURPOSE: To detect the center position of each hole correctly by irradiating mutually intersecting slit beams across the multiple hole, detecting the three-dimensional coordinates of the end points of the slit light at each hole, and discriminating the end points belonging to each hole on the basis of those coordinate values. CONSTITUTION: A member 102 held by a working end 101 of an industrial robot 100 has a nut 106 fixed to a hole 104 formed in a flat plate 103. The nut 106 has a screw hole 105 of smaller diameter than the hole 104. Slit beams 111 and 112, crossing each other at right angles, are projected onto the member 102 from a slit light source 110 so as to pass over the holes 104 and 105. The scene is imaged by a two-dimensional sensor 107 such as a television camera, and the three-dimensional coordinates of the four end points A-D of the hole 104 and of the four end points E-H of the hole 105 are detected. A processing circuit 109 then computes the center position of each of the holes 104 and 105 from these coordinate values, so that the centers are detected correctly.

Description

【発明の詳細な説明】 産業上の利用分野 本発明は、機械部品などの部材に同心状にかつ軸線方向
に段差状に径が小さくなる多重孔の検出をするための方
法に関する。
DETAILED DESCRIPTION OF THE INVENTION Field of the Invention The present invention relates to a method for detecting a multiple hole that is formed concentrically in a member such as a machine part and whose diameter decreases stepwise in the axial direction.

従来の技術 機械部品などでは、孔に臨んで同心にナットが固定され
ているn4造があり、また軸線方向に順に半径が小さく
なっている複数の孔が形戊されている構造があり、これ
らの段差状の多重孔のうち、最も径の小さい孔にボルト
を螺合し、またビンを挿通する必要が生じる.先行技術
では、そのような多重孔を有する部材をテレビカメラな
どの2次元センサによって撮像して、中心側にある径が
最も小さい孔を検出するようにしている。
Prior Art In machine parts and the like there are structures in which a nut is fixed concentrically so as to face a hole, and structures in which a plurality of holes are formed whose radii become successively smaller in the axial direction. It becomes necessary to screw a bolt into, or pass a pin through, the hole of smallest diameter among these stepped multiple holes. In the prior art, a member having such a multiple hole is imaged by a two-dimensional sensor such as a television camera, and the innermost hole with the smallest diameter is detected.

発明が解決すべき課題 このような先行技術では、最外郭の孔の輪郭は比較的捕
らえやすく、その最外郭の孔の中心位置の検出は比較的
容易に可能であるけれども、さらに小さい径を有する内
側の孔の検出に関しては、光を多重孔の軸線に対して傾
斜した角度から照射したとき、外側の孔の段差状の周縁
の影が生じて、内側の孔の明確な輪郭を捕らえることが
一般的に困難である.また、多重孔は同心円状に配置さ
れるべきで゛あるが、実際のワークには心ずれがあり、
最外部の孔の中心を検出しても、内側の孔の中心と一致
しない.したがって内側の孔の中心を正確に求めること
が、先行技術では困難であった.本発明の目的は、多重
孔の各孔を正確に検出することができるようにした多重
孔の検出方法を提供することである。
Problems to be Solved by the Invention In such prior art the outline of the outermost hole is relatively easy to capture, so its center position can be detected comparatively easily. For the inner holes of smaller diameter, however, when the light is projected at an angle oblique to the axis of the multiple hole, the stepped rim of the outer hole casts a shadow, and it is generally difficult to capture a clear outline of the inner hole. Moreover, although the holes of a multiple hole should be concentric, an actual workpiece has some eccentricity, so even if the center of the outermost hole is detected it does not coincide with the center of the inner hole. It was therefore difficult in the prior art to determine the center of the inner hole accurately. An object of the present invention is to provide a multiple-hole detecting method with which each hole of a multiple hole can be detected accurately.

課題を解決するための手段 本発明は、同心状にかつ軸線方向に段差状に径が小さく
なるように形戒された多重孔を有する部材の多重孔の検
出方法において、 多重孔を有する前記部材の一方表面に、多重孔を横切る
相互に交差したスリット光を照射し、各孔の周縁にある
スリット光の端点の3次元座標を検出し、 端点の前記軸線方向の座標値に基づいて、各孔毎の端点
を判別することを特徴とする多重孔の検出方法である. また本発明は、判別された各孔毎の端点の座標に基づい
て、各孔の中心位置を演算して求めることを特徴とする
Means for Solving the Problems The present invention is a method for detecting a multiple hole of a member in which the holes are formed concentrically with diameters decreasing stepwise in the axial direction, characterized in that mutually intersecting slit light is projected onto one surface of the member so as to cross the multiple hole, the three-dimensional coordinates of the end points of the slit light lying on the periphery of each hole are detected, and the end points belonging to each hole are discriminated on the basis of the coordinate values of the end points in the axial direction. The invention is further characterized in that the center position of each hole is calculated from the coordinates of the end points discriminated for that hole.

作  用 本発明に従えば、多重孔の段差状の各孔を横切るように
してスリット光を照射し、このスリット光は相互に交差
しており、たとえば十字状であり、これによって各孔の
周縁にあるスリット光の端点の3次元座標を、3次元セ
ンサによって検出する.こうして求めた各孔の端点の3
次元座標のうち、多重孔の軸線方向、すなわち奥行き方
向の座標値を相互に比較することによって、各端点がど
の孔の端点てあるかを判別することができる。たとえば
スリット光の端点を撮像する3次元センサの軸線と多重
孔の軸線とがほぼ平行であるとき、多重孔の各縁の軸線
方向の座標値は各孔毎にほぼ同一値である.したがって
軸線方向にほぼ同一値を有する3つの端点は同一孔の端
点てあるものと判別することができる. このような判別された各孔毎の端点の座標に基づいて、
各孔の中心位置を演算して求めることができる. スリット光は前述のように十字状であってもよく、ある
いはまたT字状であってもよく、その他の形状であって
もよい。
Operation According to the present invention, slit light is projected so as to cross each stepped hole of the multiple hole. The slit beams intersect each other, for example in a cross shape, and the three-dimensional coordinates of the end points of the slit light on the periphery of each hole are detected by a three-dimensional sensor. By comparing, among the three-dimensional coordinates of the end points obtained in this way, the coordinate values in the axial direction of the multiple hole, that is, in the depth direction, it is possible to determine to which hole each end point belongs. For example, when the axis of the three-dimensional sensor that images the end points of the slit light is approximately parallel to the axis of the multiple hole, the axial coordinate values of the end points on the rim of one hole are approximately equal; end points having approximately the same axial coordinate value can therefore be judged to be end points of the same hole. Based on the coordinates of the end points discriminated for each hole in this way, the center position of each hole can be calculated. The slit light may be cross-shaped as described above, or it may be T-shaped or have some other shape.
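The grouping by axial (depth) coordinate described above can be sketched as follows. This is a minimal illustration, assuming each detected end point is already available as an (x, y, z) triple in the sensor coordinate system with z measured along the axis of the multiple hole; the tolerance value is an assumption to be tuned per setup, not a figure from the patent.

```python
def group_endpoints_by_depth(points, tol=0.5):
    """Group slit-light end points whose axial (z) coordinates are nearly
    equal; each group is taken to belong to one hole of the multiple hole."""
    pts = sorted(points, key=lambda p: p[2])        # sort by depth
    groups, current = [], [pts[0]]
    for p in pts[1:]:
        if abs(p[2] - current[0][2]) <= tol:        # same step of the bore
            current.append(p)
        else:                                       # depth jumped: next hole
            groups.append(current)
            current = [p]
    groups.append(current)
    return groups                                   # e.g. [[A, B, C, D], [E, F, G, H]]
```

With the cross-shaped slit light of the embodiment described below, the group nearest the sensor would contain the four end points A to D of the outer hole and the deeper group the end points E to H of the threaded hole.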

実施例 第1図は本発明の一実施例の構成を示す断面図である.
産業用ロボット100の作業端101によって把持され
た部材102は平板103に形成された孔104に同心
にその孔104よりも小径のねじ孔105を有するナッ
ト106が溶接などによって固定されている.第2図は
この孔104とねじ孔105とを示す正面図である.テ
レビカメラなどによって実現される2次元センサ107
は、部材102の第1図における左方の表面を撮像する
.前記孔104とねじ孔105とは多重孔108を構成
し、これらの孔104.105は同心状であり、しかも
それらの軸線方向に段差状に、第1図の左方から右方に
径が小さくなるように段差状に形成されている.すなわ
ち孔104の径に比べて、ねじ孔105の径が小さい.
センサ107の光軸は、たとえばこの実施例では、多重
孔108の軸線とほぼ平行にある.センサ107からの
出力は、マイクロコンピュータなどによって実現される
処理回路109に与えられる.部材102の前記一方表
面側、すなわちセンサ107が配置された側には、スリ
ット光源110が配置される.このスリット光源110
によって、相互に90度で交差する十字状のスリット光
が照射され、このスリット光が照射された状態は第3図
に示されている.平板103の表面には*(第3図の上
下方向)に延びるスリット光111が照射され、また、
横(第3図の左右方向〉に延びるスリット光112が照
射される。さらにねじ孔105を有するナット106に
は、スリット光111が参照符111a.lllbで示
されるように照射され、同様にしてスリット光112は
ナット106上で参照符112a,112bで示される
ように照射゛される. 第4図を参照して処理回路109の動作を説明する.ス
テップn1からステップn2に移り、スリット光源11
0によって十字状のスリッ1・光111,112を部材
102の一方表面に照射してこれらのスリット光111
.112が多重孔108を横切る状態とする。スリット
光111.112が孔104.1.05に掛からず、こ
れらの孔l04,105を横切らなかったときには、処
理回路109はステップn3においてロボット100を
動作させて、孔104,105に対するスリット光11
1.112の掛かり具合いをセンサ107で検出し、ロ
ボット100の作業端101による部材102の移動方
向を決定し、スリット光111.112が孔104,1
05を横切るようになるまで、移動指令信号を与え、こ
うして部材102をロボット100によって移動する.
こうしてスリット光111,112が孔104,105
を横切る状態とした後、ステップn4において、各孔1
04の4つの端点A〜Dおよび孔105の端点E〜Hの
3次元座標を検出する。このセンサl○7の座標系では
、センサ107の光軸方向はZ軸方向であり、第1の紙
面内でZ軸方向に垂直な方向をY軸方向とし、第1図の
紙面に垂直方向をX軸方向とする。
Embodiment FIG. 1 is a sectional view showing the configuration of one embodiment of the present invention. A member 102 gripped by a working end 101 of an industrial robot 100 consists of a flat plate 103 in which a hole 104 is formed and a nut 106, fixed by welding or the like concentrically with the hole 104 and having a threaded hole 105 of smaller diameter than the hole 104. FIG. 2 is a front view showing the hole 104 and the screw hole 105. A two-dimensional sensor 107, realized by a television camera or the like, images the left-hand surface of the member 102 in FIG. 1. The hole 104 and the threaded hole 105 constitute a multiple hole 108; the holes 104 and 105 are concentric and are formed as steps along their common axis, the diameter decreasing from left to right in FIG. 1, i.e. the diameter of the screw hole 105 is smaller than that of the hole 104. In this embodiment the optical axis of the sensor 107 is, for example, approximately parallel to the axis of the multiple hole 108. The output of the sensor 107 is given to a processing circuit 109 realized by a microcomputer or the like. A slit light source 110 is arranged on the one-surface side of the member 102, that is, on the side where the sensor 107 is placed. The slit light source 110 projects cross-shaped slit beams intersecting each other at 90 degrees; the irradiated state is shown in FIG. 3. A slit beam 111 extending vertically (the up-down direction in FIG. 3) and a slit beam 112 extending horizontally (the left-right direction in FIG. 3) are projected onto the surface of the flat plate 103. On the nut 106 having the screw hole 105, the slit beam 111 appears as the segments indicated by reference numerals 111a and 111b, and likewise the slit beam 112 appears on the nut 106 as the segments indicated by reference numerals 112a and 112b. The operation of the processing circuit 109 will now be explained with reference to FIG. 4. Proceeding from step n1 to step n2, the cross-shaped slit beams 111 and 112 from the slit light source 110 are projected onto the one surface of the member 102 so that they cross the multiple hole 108. If the slit beams 111 and 112 do not fall on the holes 104 and 105 and do not cross them, the processing circuit 109 operates the robot 100 in step n3: how the slit beams 111 and 112 lie with respect to the holes 104 and 105 is detected by the sensor 107, the direction in which the working end 101 of the robot 100 should move the member 102 is determined, and movement command signals are given until the slit beams 111 and 112 come to cross the holes 104 and 105; the member 102 is thus moved by the robot 100. After the slit beams 111 and 112 have been brought to cross the holes 104 and 105, the three-dimensional coordinates of the four end points A to D of the hole 104 and of the four end points E to H of the hole 105 are detected in step n4. In the coordinate system of the sensor 107, the direction of the optical axis of the sensor 107 is the Z-axis direction, the direction perpendicular to the Z axis within the plane of the paper of FIG. 1 is the Y-axis direction, and the direction perpendicular to the paper of FIG. 1 is the X-axis direction.
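The control flow of steps n1 to n4 might be organized as in the sketch below. The callables capture_image, slit_crosses_holes, nudge_direction, move_member and detect_endpoints are hypothetical stand-ins for the interfaces of the sensor 107, the processing circuit 109 and the robot 100, which the patent does not specify at this level of detail.

```python
def acquire_hole_endpoints(capture_image, slit_crosses_holes, nudge_direction,
                           move_member, detect_endpoints, max_tries=20):
    """Steps n2-n4 of FIG. 4: reposition the member until the crossed slit
    beams 111 and 112 lie across both holes, then read the end points."""
    for _ in range(max_tries):
        image = capture_image()                 # image from the 2-D sensor 107
        if slit_crosses_holes(image):           # step n2: do the beams cross 104 and 105?
            # step n4: 3-D coordinates of A-D (hole 104) and E-H (hole 105)
            return detect_endpoints(image)
        direction = nudge_direction(image)      # step n3: judge how the beams miss
        move_member(direction)                  # robot 100 shifts the member 102
    raise RuntimeError("slit light did not come to cross the multiple hole")
```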

そこでステップn5では、センサ107の光軸方向、す
なわち多重孔108の軸線方向、すなわち前述のZ軸方
向の各端点A〜D.E〜Hの座標値を相互に比較する.
平板103の孔104における端点A〜Dの軸線方向の
座標値は同一またはごく近似している。またナット10
6の孔105における端点E〜Hの軸線方向の座標値は
同一またはごく近似している。また端点A〜Dと端点E
〜Hとの軸線方向の座標値は、平板103の厚みt1だ
けずれている。したがってこれらの端点A〜Dと端点E
〜Hとの軸線方向の座標値が同一または近似しているか
どうかを判別することによって、孔104の端点A−D
と孔105の端点E〜Hとを判別することができる. ステップn6では、こうして求めた端点A〜Dおよび端
点E〜Hの座標値に基づいて、各孔104,1.05毎
の中心位置を演算して求めることができる.こうして孔
104,105の中心位置を正確に検出することができ
るようになる.第5rXiは53次元センサ107によ
って多重孔108を3次元的に捕えるための構成を示す
斜視図である。以下の説明では、特に孔104に関連し
て行うけれども、もう1つの孔105に関連しても同様
である。部材102の平面である表面に臨んで真円の孔
104が形成されている。この孔104には、複数(こ
の実施例では2)のスリット光が照射される。なお、カ
メラとスリット光を照射する2台の投光器3.4は一体
化されている.スリット光は参照符5.6でそれぞれ示
される平面である.部材102の表面にある光切断線は
孔104において欠落しており、これらの端点を参照符
A,B,C,Dでそれぞれ示す.部材102の表面は、
工業用テレビカメラ7によって撮像される.このカメラ
7は、電荷蓄積素子(略称CCD)の撮像面8と、部材
102の表面を撮像面8に結像するレンズ9とを含む。
In step n5, therefore, the coordinate values of the end points A to D and E to H in the direction of the optical axis of the sensor 107, i.e. the axial direction of the multiple hole 108 (the Z-axis direction mentioned above), are compared with one another. The axial coordinate values of the end points A to D on the hole 104 of the flat plate 103 are identical or very nearly so, and likewise the axial coordinate values of the end points E to H on the hole 105 of the nut 106 are identical or very nearly so, whereas the axial coordinate values of the end points A to D and of the end points E to H differ by the thickness t1 of the flat plate 103. Consequently, by judging whether the axial coordinate values are the same or nearly the same, the end points A to D of the hole 104 and the end points E to H of the hole 105 can be distinguished (a small sketch of this check follows this passage). In step n6 the center position of each of the holes 104 and 105 is then calculated from the coordinate values of the end points A to D and E to H obtained in this way, so that the center positions of the holes 104 and 105 can be detected accurately. FIG. 5 is a perspective view showing a configuration for capturing the multiple hole 108 three-dimensionally with the three-dimensional sensor 107. The following description refers mainly to the hole 104, but the same applies to the other hole 105. A truly circular hole 104 opens onto the planar surface of the member 102, and a plurality of slit beams (two in this embodiment) are projected onto it. The camera and the two projectors 3 and 4 that emit the slit light are integrated into one unit. The slit beams form the planes indicated by reference numerals 5 and 6. The light-section lines on the surface of the member 102 are interrupted at the hole 104, and their end points are denoted by reference characters A, B, C and D. The surface of the member 102 is imaged by an industrial television camera 7, which comprises an imaging surface 8 of a charge storage element (abbreviated CCD) and a lens 9 that forms an image of the surface of the member 102 on the imaging surface 8.
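Returning to step n5: the thickness-based discrimination can be written as a small check, continuing the grouping sketch given earlier. The tolerance value and the assumption that the Z coordinate grows with distance from the sensor 107 are mine, not the patent's.

```python
def identify_holes(group_a, group_b, t1, tol=0.3):
    """Step n5: given two groups of end points already separated by depth,
    decide which belongs to the outer hole 104 and which to the screw hole 105,
    whose rim lies deeper by roughly the plate thickness t1."""
    za = sum(p[2] for p in group_a) / len(group_a)
    zb = sum(p[2] for p in group_b) / len(group_b)
    outer, inner = (group_a, group_b) if za < zb else (group_b, group_a)
    step = abs(za - zb)
    if abs(step - t1) > tol:    # the depth step should be about the plate thickness
        raise ValueError("unexpected depth step %.2f, expected about %.2f" % (step, t1))
    return outer, inner         # end points of hole 104, end points of hole 105
```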

部材102の3次元座標系をX,Y,Zで示し、カメラ
7のカメラ座標系(CCDの撮像面上に設定される座標
系)をXc,Ycで示す.カメラ7からの出力は、処理
回路10に与えられる. 第6図は、第5図に示されるセンサ107の電気的構成
を示すブロック図である。投光器3.4は駆動回路11
.12によって駆動される。処理回路10に備えられて
いるテレビカメラコントロール13は、カメラ7の電荷
蓄積素子にラインl4を介して同期信号を与え、これに
よって電荷蓄積素子から得られる映像信号はライン15
を介して処理回路10のアナログ/デジタル変換回路1
6に与えられてデジタル値に変換される.こうして得ら
れるアナログ/デジタル変換回路16からの出力は、し
きい値設定器17からの弁別レベルであるしきい値と、
比較器l8において比較されて、ライン19からは2値
化信号が得られる.この2値化信号は、フレームメモリ
20にストアされる.メモリ20の内容は、バス21を
介して処理手段22に与えられ、また通信コントローラ
23を介して外部の処理回路109とデータの転送を行
うことができる。このような基本的な構或を有する本発
明の一実施例において、まず孔104の中心位置の計測
を行い(後述のI章〜■章)、次に部材102の平面で
ある表面の傾き、すなわち姿勢角を計測しく後述の■章
〉、さらにまた、その部材102の一表面とカメラ7と
の間の距離を計測する(後述の■章). まず孔104の円の中心位置の計測原理を説明する.処
理回路10では、第7図のステップu1からステップu
2に移り、交差する2本゛のスリット光5.6を、孔1
04を含む平面に対して投光し、孔104の縁で欠落す
る4つの端点A,B,C,Dの3次元位置を計測する。
The three-dimensional coordinate system of the member 102 is denoted X, Y, Z, and the camera coordinate system of the camera 7 (the coordinate system set on the imaging surface of the CCD) is denoted Xc, Yc. The output of the camera 7 is given to a processing circuit 10. FIG. 6 is a block diagram showing the electrical configuration of the sensor 107 of FIG. 5. The projectors 3 and 4 are driven by drive circuits 11 and 12. A television camera control 13 provided in the processing circuit 10 supplies a synchronizing signal to the charge storage element of the camera 7 via a line 14; the video signal obtained from the charge storage element is given via a line 15 to an analog/digital conversion circuit 16 of the processing circuit 10 and converted into digital values. The output of the analog/digital conversion circuit 16 is compared in a comparator 18 with a threshold value, i.e. a discrimination level, from a threshold setter 17, and a binarized signal is obtained on a line 19. This binarized signal is stored in a frame memory 20. The contents of the memory 20 are supplied via a bus 21 to a processing means 22, and data can be exchanged with the external processing circuit 109 via a communication controller 23. In one embodiment of the invention having this basic configuration, first the center position of the hole 104 is measured (chapters I and II below), then the inclination of the planar surface of the member 102, that is, its attitude angle, is measured (chapter III below), and further the distance between the one surface of the member 102 and the camera 7 is measured (chapter IV below). First, the principle of measuring the center position of the circle of the hole 104 is explained. In the processing circuit 10, the procedure goes from step u1 to step u2 in FIG. 7: the two intersecting slit beams 5 and 6 are projected onto the plane containing the hole 104, and the three-dimensional positions of the four end points A, B, C and D, at which the light-section lines are interrupted at the edge of the hole 104, are measured.
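A simplified sketch of the binarization stage of FIG. 6 and of locating, in camera coordinates, the two points at which a light-section line is interrupted by the hole. Treating the image as a NumPy array and scanning a single row of the horizontal light-section line are assumptions about how the processing means 22 could operate, not a description taken from the patent.

```python
import numpy as np

def binarize(image, threshold=128):
    """A/D-converted image -> binary image, as done by comparator 18 in FIG. 6."""
    return (np.asarray(image) >= threshold).astype(np.uint8)

def line_break_endpoints(binary, row):
    """Camera coordinates (Xc, Yc) of the two pixels where the bright
    light-section line in image row `row` is interrupted (e.g. the points
    where the horizontal slit beam meets the edge of the hole 104)."""
    cols = np.flatnonzero(binary[row])          # lit pixels along this row
    if cols.size < 2:
        return None
    gaps = np.flatnonzero(np.diff(cols) > 1)    # a gap: the line vanishes over the hole
    if gaps.size == 0:
        return None
    g = gaps[0]
    return (int(cols[g]), row), (int(cols[g + 1]), row)   # last lit pixel before / first after
```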

1、スリット光5.6による点A,B,C,Dの3次元
位置の計測方法。
I. Method of measuring the three-dimensional positions of the points A, B, C and D by means of the slit beams 5 and 6.

第8図に示されるようにスリット光の投光器3と、カメ
ラ7とを配置し、スリット光平面5上のl点P(このP
は、前述の、A,B,C,Dを代表して表す)の物体座
標系での座標を(x, yZ〉、点Pの撮像面8上の像
の座標をカメラ座標系でQ (Xc,Yc)とする.カ
メラ7の透視変換を第1式に示す. またスリット光平面5の方程式を第2式に示す6a*X
+b*Y+Z=d            ・++ (
2)したがって、Pの物体座標系における座標(x,Y
,Z)は第1式および第2式を連立させて解くことによ
って求まる.基本的には、スリット光平面5,6上にあ
るすべての点の3次元座標を求めることができる. 第1式と第2式から成る連立方程式を解く前に、係数(
CIl〜C34,h,a,b,d)を予め求めておく。
The slit-light projector 3 and the camera 7 are arranged as shown in FIG. 8. Let the coordinates of a point P on the slit-light plane 5 (P stands for any of the points A, B, C and D mentioned above) be (X, Y, Z) in the object coordinate system, and let the coordinates of the image Q of the point P on the imaging surface 8 be (Xc, Yc) in the camera coordinate system. The perspective transformation of the camera 7 is given by the first equation, and the equation of the slit-light plane 5 is
a*X + b*Y + Z = d   ... (2)
The coordinates (X, Y, Z) of P in the object coordinate system are therefore obtained by solving the first and second equations simultaneously; in principle the three-dimensional coordinates of every point on the slit-light planes 5 and 6 can be obtained in this way. Before solving the simultaneous equations consisting of the first and second equations, the coefficients (C11 to C34, h, a, b, d) are determined in advance.
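The perspective transformation referred to as the first equation appears only as an image in the original publication. Written in the standard homogeneous form that is consistent with equations (3-1) and (3-2) given below, it would read as follows; this is a reconstruction, not a quotation of the patent's formula.

```latex
% Reconstruction of equation (1): perspective transformation of camera 7
\begin{aligned}
h\,X_c &= C_{11}X + C_{12}Y + C_{13}Z + C_{14}\\
h\,Y_c &= C_{21}X + C_{22}Y + C_{23}Z + C_{24}\\
h      &= C_{31}X + C_{32}Y + C_{33}Z + C_{34}
\end{aligned}
```

Eliminating h from these three relations yields exactly equations (3-1) and (3-2).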

以下にその方法を示す. (1)カメラパラメータのキヤリプレーションについて
. 第1式のCIl〜C,l4をカメラパラメータと称する
.カメラパラメータとは、レンズ9の焦点距離、レンズ
9の主点の位置、レンズ9と受光面すなわち撮像面8と
の距離などに依存して決定される値である.これらの値
を実測することは困難であるので、次の手法で求める. 第1式を展開し、係数hを消去すると、C1 1 本x
+c1 2本y+c1 )本Z+CI 4−C31 本
Xc*X−Csz*Xc*Y−C*x本Xc*Z−Cs
<DC= 0・・・(3−1) C2+ 車X+C22本Y+C2s’lZ+c24−C
Il*Xc*X−C=2車Xc*Y−C33*Xc*Z
−Cl 4本Xc= 0・・・(3−2) となる。したがって、同一平面上にない6点の既知の3
次元座標と、それぞれに対応するカメラ座標を第3−1
式および第3−2式に代入し、12元連立方程式を解く
ことによって12個の未知数( C + +〜C.4)
が求まる。ここではカメラパラメータの算出の精度を向
上するために、3次元座標が既知のn点(n>6)の計
測を行い、最小2乗法によって求める. 第3式から、係数CIl〜C)4に関する次の12元2
0連立方程式が得られる. E*G=F                ・・・(
4)?=[Xc.Yc.・・・・・・・・・・・・・・
・・・・・・・・XClIYC,l]t・・・(6〉G
= [C.CI2 Cps Cl4 C21 C2■C
23 C2− C3,C32 C33] ’   ・・
・(7−1)ただし、 C3−=1                 ・・・
(7−2 )最小2乗法により G= (Et*E) −’*Et*F        
   ・・・(8〉を計算すると、Gが求まる. (2)スリット光の平面の方程式の係数の算出.スリッ
ト光の平面上の既知の3点の3次元位置を第2式に代入
すれば、a,b,dに関する3元連立方程式が得られる
ので、これを解けばa,b,dを算出できる.ここでは
精度を上げるために、既知のn点(n>3>の3次元座
標を第2式に代入し、次の3元n連立方程式を最小2乗
法で解く.これを J*K=L                    
     ・・・(10)と置けば、 K= (J’* J ) −’ * Jt* l,  
        ・・・(11)より求まる. (3)特徴点の3次元座標の算出. 前述の方法でC.〜C34.a,b,dを求めておけば
、特徴点の3次元座標は第2式と第3式を連立して、次
式を解くことで求まる。
The procedure is as follows. (1) Calibration of the camera parameters. The coefficients C11 to C34 in the first equation are called the camera parameters. They are values determined by the focal length of the lens 9, the position of the principal point of the lens 9, the distance between the lens 9 and the light-receiving surface, i.e. the imaging surface 8, and so on. Since it is difficult to measure these values directly, they are obtained as follows. Expanding the first equation and eliminating the coefficient h gives
C11*X + C12*Y + C13*Z + C14 - C31*Xc*X - C32*Xc*Y - C33*Xc*Z - C34*Xc = 0   ... (3-1)
C21*X + C22*Y + C23*Z + C24 - C31*Yc*X - C32*Yc*Y - C33*Yc*Z - C34*Yc = 0   ... (3-2)
Therefore, by substituting the known three-dimensional coordinates of six points that do not lie in one plane, together with the corresponding camera coordinates, into equations (3-1) and (3-2) and solving the resulting system of twelve equations, the twelve unknowns (C11 to C34) are obtained. Here, to improve the accuracy of the camera-parameter calculation, n points (n > 6) with known three-dimensional coordinates are measured and the parameters are obtained by the method of least squares. From the third equation the following system of 2n equations in the coefficients C11 to C34 is obtained:
E*G = F   ... (4)
F = [Xc1 Yc1 ... Xcn Ycn]^t   ... (6)
G = [C11 C12 C13 C14 C21 C22 C23 C24 C31 C32 C33]^t   ... (7-1)
where
C34 = 1   ... (7-2)
By the method of least squares,
G = (E^t*E)^-1 * E^t*F   ... (8)
and G is obtained.
(2) Calculation of the coefficients of the equation of the slit-light plane. If the three-dimensional positions of three known points on the slit-light plane are substituted into the second equation, a system of three equations in a, b and d is obtained, and solving it gives a, b and d. Here, to improve the accuracy, the three-dimensional coordinates of n known points (n > 3) are substituted into the second equation and the resulting system of n equations in three unknowns is solved by the method of least squares. Writing it as
J*K = L   ... (10)
the solution is obtained from
K = (J^t*J)^-1 * J^t*L   ... (11)
(3) Calculation of the three-dimensional coordinates of a feature point. Once C11 to C34 and a, b, d have been obtained by the above procedures, the three-dimensional coordinates of a feature point are obtained by combining the second and third equations and solving the following equation.
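Before moving on to equation (12), the two least-squares steps just described can be sketched compactly with NumPy. The row layout of the coefficient matrices (the matrices numbered (5) and (9) in the original are not reproduced in this text) follows the standard direct-linear-transform formulation with C34 fixed to 1, so it should be read as an assumption consistent with equations (3-1), (3-2), (7-2), (8) and (11), not as the patent's literal matrices.

```python
import numpy as np

def calibrate_camera(world_pts, image_pts):
    """Least-squares estimate of the camera parameters C11..C33 (C34 = 1)
    from n >= 6 points with known 3-D coordinates (equations (3)-(8))."""
    rows, rhs = [], []
    for (X, Y, Z), (Xc, Yc) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -Xc*X, -Xc*Y, -Xc*Z])   # from eq. (3-1)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -Yc*X, -Yc*Y, -Yc*Z])   # from eq. (3-2)
        rhs += [Xc, Yc]
    E, F = np.asarray(rows, float), np.asarray(rhs, float)
    G, *_ = np.linalg.lstsq(E, F, rcond=None)       # equation (8)
    return np.append(G, 1.0).reshape(3, 4)          # C34 = 1, equation (7-2)

def fit_slit_plane(points_on_plane):
    """Least-squares fit of a*X + b*Y + Z = d (equation (2)) from n >= 3
    known points on the slit-light plane (equations (10), (11))."""
    P = np.asarray(points_on_plane, float)
    J = np.column_stack([P[:, 0], P[:, 1], -np.ones(len(P))])
    (a, b, d), *_ = np.linalg.lstsq(J, -P[:, 2], rcond=None)
    return a, b, d
```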

M*N=R ・・・〈12) ただし、 N=[X   Y   Z]’           
      ・・・〈14)R=[C+4C3<*Xc
  C24  C34*Vc  d]t・=(15−1
)ただし、 C34=1                 ・・・
(15−2)第12式より、 N=M引*R               ・・・(
l6〉■、点A,B,C,Dを通る円の中心の計測方法
M*N = R   ... (12)
where
N = [X Y Z]^t   ... (14)
R = [C14 - C34*Xc   C24 - C34*Yc   d]^t   ... (15-1)
with
C34 = 1   ... (15-2)
From equation (12),
N = M^-1 * R   ... (16)
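With the calibration results above, the 3 x 3 system M*N = R can be assembled and solved as follows. The explicit entries of M (the matrix numbered (13) in the original is not reproduced here) are derived from equations (2), (3-1) and (3-2) with C34 = 1, so they are a reconstruction rather than the patent's own listing.

```python
import numpy as np

def point_from_slit(C, plane, Xc, Yc):
    """Equations (12)-(16): the 3-D point (X, Y, Z) that projects to camera
    coordinates (Xc, Yc) and lies on the slit-light plane a*X + b*Y + Z = d.
    C is the 3x4 camera-parameter matrix with C[2, 3] = C34 = 1."""
    a, b, d = plane
    M = np.array([
        [C[2, 0]*Xc - C[0, 0], C[2, 1]*Xc - C[0, 1], C[2, 2]*Xc - C[0, 2]],
        [C[2, 0]*Yc - C[1, 0], C[2, 1]*Yc - C[1, 1], C[2, 2]*Yc - C[1, 2]],
        [a,                    b,                    1.0                ],
    ])
    R = np.array([C[0, 3] - C[2, 3]*Xc,
                  C[1, 3] - C[2, 3]*Yc,
                  d])
    return np.linalg.solve(M, R)                 # equation (16): N = M^-1 * R
```

Calling this once for each interrupted-line end point yields the object coordinates of the points A to D and E to H used in the rest of the method.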

点A,B,C,Dを通る円の中心は、 <la>4点A,B,C,Dを通る平面上にある. (2a〉各点A,B,C,Dからの距離が等しい. という2つの条件1a,2aから求まる.(1)部材1
02の表面である4点A,B,CDを含む平面P13、
すなわち第9図の紙面の方程式の係数の算出(第7図の
ステップu3).4点A,B,C,Dを含む平面の法線
ベクトル成分は、Z成分が大き<:、X,Y成分および
距離が小さいので、平面の方程式を次式で表す。
II. Method of measuring the center of the circle passing through the points A, B, C and D.

The center of the circle passing through the points A, B, C and D is found from two conditions: (1a) it lies on the plane passing through the four points A, B, C and D, and (2a) it is equidistant from each of the points A, B, C and D. (1) Calculation of the coefficients of the equation of the plane P13 containing the four points A, B, C and D, i.e. the surface of the member 102 (the plane of the paper in FIG. 9; step u3 in FIG. 7). Since the normal vector of the plane containing the four points A, B, C and D has a large Z component and small X and Y components, the equation of the plane is expressed as follows.

a+*X+b+’k3/+Z=d+         
        ++ (17)4点A,B,C,Dは
、この平面上の点であるので、 これにより、最小2乗法でa,b,dを算出し、あるい
はまた3点A,B,Cの場合には、1行の戒分を無視し
て逆行列でa l + b l + d l を算出す
る. (2)各点A,B,C,Dのうちの2点からの距離が等
しい平面の方程式の係数の算出.各点からの距離が等し
い点(x,y,z)は次式で表すことができる. (x−x+) ”+ (y−y+) ”+ (z−zt
) 2=S    − (19)(i=1〜4) 精度よく算出するために、互いに距離の大きい2点を用
いて算出する.ここでは点A,Bと点C,Dのベアを用
いる. 第9図の平面図を参照して、点A,Bから等しい距離に
ある点は次式になる(第7図のステップu4)。
a1*X + b1*Y + Z = d1   ... (17)
Since the four points A, B, C and D lie on this plane, a1, b1 and d1 can be calculated by the method of least squares; alternatively, using only the three points A, B and C, a1, b1 and d1 can be calculated with an inverse matrix by ignoring one of the rows. (2) Calculation of the coefficients of the equations of the planes equidistant from two of the points A, B, C and D. A point (x, y, z) equidistant from each point can be expressed as
(x - xi)^2 + (y - yi)^2 + (z - zi)^2 = S   ... (19)   (i = 1 to 4)
To calculate accurately, two points that are far apart from each other are used; here the pairs A, B and C, D are used. Referring to the plan view of FIG. 9, the points equidistant from the points A and B satisfy the following equation (step u4 in FIG. 7).

−2*Xl *X+Xl”−2*y+ *l’+7+2
−2*z1*Z+Zl”=−2*X2*X+X2”−2
*g*y+y22−2*z2*z十z22      
・・− (20−1)2(x+  x2) *κ+20
’+  y2)*y+2(z,−z2)Lz=(X+2
X2”)+(y+’ y2’)+(21”−22”) 
       − (20−2)この第20−2式を、 a2*x+bz*3/十C2*z=d2       
     −・<20−3)と置く。第20−3式は、
平面Pllの式である.点C,Dも同様に算出する(第
7図のステップu5〉. 2(xt−x<)*x+2(ys−y<)* 3/+2
(Z3−Z.)* Z=(Xs” x<2)+(y32
3’42)+(Z32Z42〉−(21−1)これを a3*x十b,*y+(3*z=di        
      ・・・(21−2)と置く.第21−2式
は、平面P12の式である。
-2*x1*x + x1^2 - 2*y1*y + y1^2 - 2*z1*z + z1^2 = -2*x2*x + x2^2 - 2*y2*y + y2^2 - 2*z2*z + z2^2   ... (20-1)
2*(x1 - x2)*x + 2*(y1 - y2)*y + 2*(z1 - z2)*z = (x1^2 - x2^2) + (y1^2 - y2^2) + (z1^2 - z2^2)   ... (20-2)
Writing equation (20-2) as
a2*x + b2*y + c2*z = d2   ... (20-3)
equation (20-3) is the equation of the plane P11. The points C and D are treated in the same way (step u5 in FIG. 7):
2*(x3 - x4)*x + 2*(y3 - y4)*y + 2*(z3 - z4)*z = (x3^2 - x4^2) + (y3^2 - y4^2) + (z3^2 - z4^2)   ... (21-1)
Writing this as
a3*x + b3*y + c3*z = d3   ... (21-2)
equation (21-2) is the equation of the plane P12.

(3)円の中心Oの算出. 第17式、第20−3式、第21−2式の3平面の交点
が円の中心である.したがって、円の中心座標(xcl
,yel.zcl)は次式の連立方程式を解くことで求
まる(第7図のステップU6)。
(3) Calculation of the center O of the circle. The point of intersection of the three planes of equations (17), (20-3) and (21-2) is the center of the circle. The center coordinates (xc1, yc1, zc1) of the circle are therefore found by solving the following simultaneous equations (step u6 in FIG. 7).
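Chapter II can be put together as a single routine: fit the plane P13 through the four end points (equation (17)), form the perpendicular-bisector planes P11 and P12 of the pairs A,B and C,D (equations (20-3) and (21-2)), and intersect the three planes (step u6). The variable names and the use of NumPy are mine, not the patent's.

```python
import numpy as np

def hole_center(A, B, C, D):
    """Center of the circle through the four slit-light end points of one hole
    (chapter II, steps u3-u6 of FIG. 7)."""
    P = np.asarray([A, B, C, D], float)

    # (1) plane P13 through the points:  a1*X + b1*Y + Z = d1   (equation (17))
    J = np.column_stack([P[:, 0], P[:, 1], -np.ones(4)])
    (a1, b1, d1), *_ = np.linalg.lstsq(J, -P[:, 2], rcond=None)

    # (2) perpendicular-bisector planes of the pairs (A, B) and (C, D)
    def bisector(p, q):                          # equations (20-2) / (21-1)
        return 2.0 * (p - q), float(np.dot(p, p) - np.dot(q, q))
    n2, d2 = bisector(P[0], P[1])                # plane P11, equation (20-3)
    n3, d3 = bisector(P[2], P[3])                # plane P12, equation (21-2)

    # (3) center = intersection of the three planes (step u6)
    Mat = np.vstack([[a1, b1, 1.0], n2, n3])
    rhs = np.array([d1, d2, d3])
    return np.linalg.solve(Mat, rhs)             # (xc1, yc1, zc1)
```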

これより ■、平面P13のX軸まわりの姿勢角αおよびY軸まわ
りの姿勢角βの計測。
From this the center coordinates are obtained. III. Measurement of the attitude angle α of the plane P13 about the X axis and the attitude angle β about the Y axis.

第10図(1)において、平面P13aがX軸まわりに
+Δαだけ角変位して平面P13bの姿勢となったとき
、スリット光5の平面P13a上の光切断線26は、平
面P13b上では光切断線27のとおりとなる.カメラ
7の撮像面8において、α=Oの光切断線26の像は参
照符26aで示され、その回転後の光切断線27の像は
参照符27aで示される。また第11図(1)で示され
るように、平面P13cがY軸まわりに角度Δβだけ角
変位して平面P13dとなったときには、平面P13c
上の光切断線28は平面P13d上で光切断線29とな
る.したがってカメラ7の撮像面8において、光切断線
28の像28aは光切断線2つの像29aとなる.こう
して撮像面8上の像27a.29aによって、平面P1
3a,P13bの相互の角度Δαと平面P13c,P1
3dの角度十Δβを演算して求めることができる.第1
0図および第11図にΔα.Δβの定義を示し、さらに
第12図〜第14図を参照して平面の傾きを求める手法
について具体的に述べる.(1)第13図に示される対
象面P13のX軸まわりの姿勢角αと、その対象面P1
3のY軸まわりの姿勢角βとを求めるにあたり、まず■
カメラ7の撮像面8上の水平スリット光の光切断線30
の方程式を予め求めておき、この光切断線30の方程式
と、■予め求.めておいた前述のカメラパラメータCI
l〜C34とから、■光切断線30とレンズ9の主点を
通る平面P14の方程式を求める(第12図のステップ
ml,m2),また■スリット光の平面P15の方程式
を予め求めておく(第12図のステップm3). (2)前のバラグラフ(1)で示した方程式■,■,■
と、カメラパラメータ■とによって、平面?14,P1
5の各平面の法線ベクトルを求め、その法線ベクトルを
P,,P■とし、平面P14,P15の交a31の方向
ベクトルを 1 .=(1. L, us)           
 −” (24)とすると、l,とPa,P’sとは直
交するので、P2−14=O            
   ・・・(25)P3・14=0        
       ・・・(26)これにより,e,が求め
られる(第12図のステツプm4), (3)同様にして第14図から、光切断線32とカメラ
パラメータより平面P16の方程式を求め、平面P17
の方程式も求めておけば、平面P16,Pl7の法線ベ
クトルをそれぞれPs,Pt、交線33の方向ベクトル
を 1 1=<5m.  1, ul)         
           ”・(27)としで、 P11m=0               ・・・(
28)P1・Is二〇               
・・・(29)これにより、l.が求められる(第12
図のステップm7). この第14図において、平面P16はレンズ9の主点を
通る平面であり、P17は投光器3のスリット光がなす
平面を示している。
In FIG. 10(1), when the plane P13a is angularly displaced about the X axis by +Δα into the attitude of the plane P13b, the light-section line 26 of the slit light 5 on the plane P13a becomes the light-section line 27 on the plane P13b. On the imaging surface 8 of the camera 7, the image of the light-section line 26 for α = 0 is indicated by reference numeral 26a, and the image of the light-section line 27 after the rotation is indicated by reference numeral 27a. Likewise, as shown in FIG. 11(1), when the plane P13c is angularly displaced about the Y axis by the angle Δβ into the plane P13d, the light-section line 28 on the plane P13c becomes the light-section line 29 on the plane P13d, so that on the imaging surface 8 of the camera 7 the image 28a of the light-section line 28 becomes the image 29a of the light-section line 29. From the images 27a and 29a on the imaging surface 8, the angle Δα between the planes P13a and P13b and the angle +Δβ between the planes P13c and P13d can be calculated. FIGS. 10 and 11 show the definitions of Δα and Δβ; the procedure for obtaining the inclination of the plane is described concretely with reference to FIGS. 12 to 14. (1) To obtain the attitude angle α of the target plane P13 shown in FIG. 13 about the X axis and the attitude angle β of the target plane P13 about the Y axis, first (i) the equation of the light-section line 30 of the horizontal slit light on the imaging surface 8 of the camera 7 is obtained in advance; from this equation of the light-section line 30 and (ii) the previously obtained camera parameters C11 to C34, (iii) the equation of the plane P14 passing through the light-section line 30 and the principal point of the lens 9 is obtained (steps m1 and m2 in FIG. 12); and (iv) the equation of the slit-light plane P15 is also obtained in advance (step m3 in FIG. 12). (2) From the equations (i), (iii) and (iv) of the preceding paragraph (1) and the camera parameters (ii), the normal vectors of the planes P14 and P15 are obtained. Denoting these normal vectors by P2 and P3 and the direction vector of the line of intersection 31 of the planes P14 and P15 by
l4 = (s4, t4, u4)   ... (24)
then, since l4 is orthogonal to both P2 and P3,
P2 . l4 = 0   ... (25)
P3 . l4 = 0   ... (26)
from which l4 is obtained (step m4 in FIG. 12). (3) Similarly, from FIG. 14, the equation of the plane P16 is obtained from the light-section line 32 and the camera parameters, and the equation of the plane P17 is also obtained. Denoting the normal vectors of the planes P16 and P17 by P5 and P6 respectively and the direction vector of the line of intersection 33 by
l1 = (s1, t1, u1)   ... (27)
then
P5 . l1 = 0   ... (28)
P6 . l1 = 0   ... (29)
from which l1 is obtained (step m7 in FIG. 12). In FIG. 14, the plane P16 is the plane passing through the principal point of the lens 9, and P17 is the plane formed by the slit light of the projector 3.

(4)対象面P13の法線ベクトル Po= (so. jo+ 1 >         
   −(30)はl 4+ 1 .に直交するから、 P0・14=0               ・・・
(31)P6−1 .=O             
  −(32)これにより、P.が求められる(第12
図のステツプm8). センサの撮像面8の対象面P13に対する姿勢角α(X
軸まわりの回転角)、β(Y軸まわりの回転角〉は次式
で求められる(第12図のステツブm9〉. α一jan” (to)              
      ・・・(33)β一jan−’ (so)
                   − (34)
■、距離の計測方法。
(4) The normal vector of the target plane P13,
P0 = (s0, t0, 1)   ... (30)
is orthogonal to both l4 and l1, so that
P0 . l4 = 0   ... (31)
P0 . l1 = 0   ... (32)
from which P0 is obtained (step m8 in FIG. 12). The attitude angles α (rotation angle about the X axis) and β (rotation angle about the Y axis) of the imaging surface 8 of the sensor relative to the target plane P13 are then given by
α = tan^-1(t0)   ... (33)
β = tan^-1(s0)   ... (34)
(step m9 in FIG. 12).
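Equations (24) to (34) reduce to cross products: the direction of the line of intersection of two planes is orthogonal to both of their normal vectors, and the normal of the target plane is orthogonal to both intersection-line directions. A sketch under that reading, with the normal scaled so that its Z component is 1 as in equation (30):

```python
import numpy as np

def line_direction(n_plane_a, n_plane_b):
    """Direction vector of the line of intersection of two planes
    (equations (25)-(26) and (28)-(29): orthogonal to both normals)."""
    return np.cross(n_plane_a, n_plane_b)

def surface_attitude(l4, l1):
    """Attitude angles of the target plane P13 from the two light-section
    line directions l4 and l1 (equations (30)-(34))."""
    P0 = np.cross(l4, l1)      # orthogonal to both directions
    P0 = P0 / P0[2]            # scale so that P0 = (s0, t0, 1), equation (30)
    s0, t0, _ = P0
    alpha = np.arctan(t0)      # rotation about the X axis, equation (33)
    beta = np.arctan(s0)       # rotation about the Y axis, equation (34)
    return alpha, beta
```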
IV. Method of measuring the distance.

第15図に示されるように、平面P13eと力メラ7の
撮像面8との間の距離を計測する際、この平面P13e
がP13fおよびP13gで示すように検出可能な範囲
で変位すると、第15図(2)で示されるように撮像面
8上では、投光器3のスリット光5の光切断4134,
35.36は像34a,35a,36aとなって検出さ
れる。このようにして撮像面8上の像34a,35a,
36aを検出することによって、平面P13e,P13
f,P13gの距離を計測することができる.この手法
を第16図および第17図を参照してさらに具体的に説
明する. (1)カメラ7の光軸の方程式は、 x=y=0                ・・・(
35)であって、その撮像面8と対象面P13との距離
dは、カメラ7のレンズ9の光軸と対象面P13の交点
のZ座標と定義する. 対象面P13上の1点の座標を求めれば、対象面P13
の法線ベクトルとから対象面P13の平面の方程式が決
定できる.その点は、平面P14P15,P17の交点
として得られ、その点を、(Xo,3’o, Zo )
とすると、対象面P13の方程式は、 5。(X XO) +to (y−yo) + 1・(
z zo)=0  ・・・(36〉となる(第16図の
ステップrl.r2).(2)距ldは、 d=soXotoYo+Zo  ’         
  ”’ (37)として求められる(第16図のステ
ップr3,r4〉. ■,面位置の計測。
As shown in FIG. 15, when the distance between the plane P13e and the imaging surface 8 of the camera 7 is measured, if the plane P13e is displaced within the detectable range to the positions shown as P13f and P13g, the light-section lines 34, 35 and 36 of the slit light 5 of the projector 3 appear on the imaging surface 8 as the images 34a, 35a and 36a, as shown in FIG. 15(2). By detecting the images 34a, 35a and 36a on the imaging surface 8 in this way, the distances of the planes P13e, P13f and P13g can be measured. This procedure is described more concretely with reference to FIGS. 16 and 17. (1) The equation of the optical axis of the camera 7 is
x = y = 0   ... (35)
and the distance d between its imaging surface 8 and the target plane P13 is defined as the Z coordinate of the point of intersection of the optical axis of the lens 9 of the camera 7 with the target plane P13. If the coordinates of one point on the target plane P13 are found, the equation of the target plane P13 can be determined from that point and the normal vector of the target plane P13. Such a point is obtained as the point of intersection of the planes P14, P15 and P17; denoting it by (X0, Y0, Z0), the equation of the target plane P13 is
s0*(X - X0) + t0*(Y - Y0) + 1*(Z - Z0) = 0   ... (36)
(steps r1 and r2 in FIG. 16). (2) The distance d is then obtained as
d = s0*X0 + t0*Y0 + Z0   ... (37)
(steps r3 and r4 in FIG. 16).

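Chapter IV amounts to one short function; here (X0, Y0, Z0) is the measured point of intersection of the planes P14, P15 and P17, and (s0, t0, 1) is the normal vector of the target plane obtained in the previous chapter.

```python
def camera_to_surface_distance(s0, t0, point_on_plane):
    """Distance d along the camera optical axis (x = y = 0) to the target
    plane s0*(X - X0) + t0*(Y - Y0) + (Z - Z0) = 0 (equations (35)-(37))."""
    X0, Y0, Z0 = point_on_plane
    return s0 * X0 + t0 * Y0 + Z0      # equation (37)
```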
第18図を参照して、面13の位置計測にあたっては、
単一の投光器4からのスリット光6を投光し、カメラ7
の光軸37は、物体座標系のX−Y平面に垂直であるも
のとする。このとき、計測対象となる平面P13とスリ
ット光平面6の交線38上の1点3つの3次元位置を計
測し、そのZ軸戒分を面位置{すなわち高さ}とする.
本発明は、孔104の中心位置を計測することができる
だけではなく、その孔104の面積およびその他の物理
量を広く演算して求めることが可能であり、そのような
改変は当業者に容易である.本発明の他の実施例として
、孔104,105は3以上、軸線方向に順次的に径が
小さくなるように形戒されていてもよい. 発明の効果 以上のように本発明によれば、多重孔を有する部材の段
差状となっている孔が臨む一方表面にその多重几を横切
る相互に交差したスリット光を照射し、各孔の周縁にあ
るスリット光の端点の3次元座標を検出し、端点の前記
軸線方向の座標値に基づいて各孔毎の端点を判別するよ
うにしたので、多重孔を構成する各孔毎の検出を容易に
かつ確実に行うことができるようになる.
V. Measurement of the surface position.

Referring to FIG. 18, in measuring the position of the surface 13, the slit light 6 is projected from the single projector 4, and the optical axis 37 of the camera 7 is assumed to be perpendicular to the X-Y plane of the object coordinate system. The three-dimensional position of a point on the line of intersection 38 of the plane P13 to be measured with the slit-light plane 6 is then measured, and its Z component is taken as the surface position, that is, the height. The present invention can not only measure the center position of the hole 104 but can also be extended to compute the area of the hole 104 and various other physical quantities; such modifications are easy for those skilled in the art. In another embodiment of the invention, three or more holes 104, 105, ... may be formed with diameters decreasing successively in the axial direction.

Effects of the Invention As described above, according to the present invention, mutually intersecting slit light crossing the multiple hole is projected onto the one surface of a member having a stepped multiple hole, i.e. the surface which the holes face; the three-dimensional coordinates of the end points of the slit light on the periphery of each hole are detected; and the end points belonging to each hole are discriminated on the basis of their coordinate values in the axial direction. Detection of each of the holes constituting the multiple hole can therefore be performed easily and reliably.

【図面の簡単な説明】[Brief explanation of the drawing]

第1図は本発明の一実施例の全体の構成を示す断面図、
第2図は部材102のセンサ107側から見た正面冴、
第3図は部材102の多重孔108にスリット光源11
0からスリット光111,112を照射した状態を示す
正面図、第4図は処理回路109の動作を説明するため
のフローチャート、第5図は本発明の一実施例の3次元
センサ107の簡略化した斜視図、第6図は第5図に示
される3次元センサ107の電気的itを示すブロック
図、第7図は孔104の中心位置の算出手順を示すフロ
ーチャート、第8図はスリット光5による点Pの3次元
位置計測の手法を示す斜視図、第9図は平面P13の平
面図、第10図は平面の姿勢角αの定義を示す図、第1
1図は平面の姿勢角βの定義を示す簡略化した図、第1
2図は姿勢角α,βを計測する手順を示すフローチャー
ト、第13図および第14図は対象面P13のX軸まわ
りの姿勢角αとYlillまわりの姿勢角βを計測する
ための手法を示す簡略化した図、第15図は平面P13
e,P13f,P13gの距離の計測原理を示す簡略化
した図、第16図は平面の距離dの算出手順を示すフロ
ーチャート、第17図は距離dの計測を行うための楕戊
を簡略化して示す図、第18図は本発明のさらに他の実
施例の面13の位置計測を行う原理を示す簡略化した図
である。 100・・・ロボット、102・・・部材、107・・
・2次元センサ、108・・・多重孔、109・・・処
理回路、111.112・・・スリット光、A〜D,E
〜H・・・端点
[Brief Description of the Drawings] FIG. 1 is a sectional view showing the overall configuration of one embodiment of the present invention; FIG. 2 is a front view of the member 102 seen from the sensor 107 side; FIG. 3 is a front view showing the state in which the slit beams 111 and 112 from the slit light source 110 are projected onto the multiple hole 108 of the member 102; FIG. 4 is a flowchart explaining the operation of the processing circuit 109; FIG. 5 is a simplified perspective view of the three-dimensional sensor 107 of one embodiment of the invention; FIG. 6 is a block diagram showing the electrical configuration of the three-dimensional sensor 107 of FIG. 5; FIG. 7 is a flowchart showing the procedure for calculating the center position of the hole 104; FIG. 8 is a perspective view showing the method of measuring the three-dimensional position of the point P with the slit light 5; FIG. 9 is a plan view of the plane P13; FIG. 10 is a diagram showing the definition of the attitude angle α of a plane; FIG. 11 is a simplified diagram showing the definition of the attitude angle β of a plane; FIG. 12 is a flowchart showing the procedure for measuring the attitude angles α and β; FIGS. 13 and 14 are simplified diagrams showing the method of measuring the attitude angle α about the X axis and the attitude angle β about the Y axis of the target plane P13; FIG. 15 is a simplified diagram showing the principle of measuring the distances of the planes P13e, P13f and P13g; FIG. 16 is a flowchart showing the procedure for calculating the distance d to a plane; FIG. 17 is a simplified diagram showing the configuration used for measuring the distance d; and FIG. 18 is a simplified diagram showing the principle of measuring the position of the surface 13 in still another embodiment of the present invention. 100: robot; 102: member; 107: two-dimensional sensor; 108: multiple hole; 109: processing circuit; 111, 112: slit light; A to D, E to H: end points.

Claims (2)

【特許請求の範囲】[Claims] (1)同心状にかつ軸線方向に段差状に径が小さくなる
ように形成された多重孔を有する部材の多重孔の検出方
法において、 多重孔を有する前記部材の一方表面に、多重孔を横切る
相互に交差したスリット光を照射し、各孔の周縁にある
スリット光の端点の3次元座標を検出し、 端点の前記軸線方向の座標値に基づいて、各孔毎の端点
を判別することを特徴とする多重孔の検出方法。
(1) A method for detecting a multiple hole of a member having a multiple hole formed concentrically with diameters decreasing stepwise in the axial direction, characterized in that mutually intersecting slit light crossing the multiple hole is projected onto one surface of the member having the multiple hole, the three-dimensional coordinates of the end points of the slit light on the periphery of each hole are detected, and the end points belonging to each hole are discriminated on the basis of the coordinate values of the end points in the axial direction.
(2)判別された各孔毎の端点の座標に基づいて、各孔
の中心位置を演算して求めることを特徴とする特許請求
の範囲第1項記載の多重孔の検出方法。
(2) The method for detecting a multiple hole according to claim 1, characterized in that the center position of each hole is calculated on the basis of the coordinates of the end points discriminated for that hole.
JP1300368A 1989-11-17 1989-11-17 Method for detecting multiple holes Expired - Fee Related JPH0663733B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1300368A JPH0663733B2 (en) 1989-11-17 1989-11-17 Method for detecting multiple holes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1300368A JPH0663733B2 (en) 1989-11-17 1989-11-17 Method for detecting multiple holes

Publications (2)

Publication Number Publication Date
JPH03160303A true JPH03160303A (en) 1991-07-10
JPH0663733B2 JPH0663733B2 (en) 1994-08-22

Family

ID=17883937

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1300368A Expired - Fee Related JPH0663733B2 (en) 1989-11-17 1989-11-17 Method for detecting multiple holes

Country Status (1)

Country Link
JP (1) JPH0663733B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040033929A (en) * 2002-10-16 2004-04-28 한국생산기술연구원 Scanner and method for measuring a hole using the same
KR100750084B1 (en) * 2006-07-21 2007-08-22 (주)동서전자 Apparatus for inspecting badness of screw hole in inner case of lcd monitor
WO2012111510A1 (en) * 2011-02-16 2012-08-23 三菱重工業株式会社 Position detection device
JP2013120066A (en) * 2011-12-06 2013-06-17 Toyota Motor East Japan Inc Three-dimensional measuring method, three-dimensional measuring apparatus and three-dimensional measuring program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5635005A (en) * 1979-08-29 1981-04-07 Mitsubishi Electric Corp Detector for center position of object to be measured
JPS56132506A (en) * 1980-03-22 1981-10-16 Ando Electric Co Ltd Measuring device for center position of hole

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5635005A (en) * 1979-08-29 1981-04-07 Mitsubishi Electric Corp Detector for center position of object to be measured
JPS56132506A (en) * 1980-03-22 1981-10-16 Ando Electric Co Ltd Measuring device for center position of hole

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040033929A (en) * 2002-10-16 2004-04-28 한국생산기술연구원 Scanner and method for measuring a hole using the same
KR100750084B1 (en) * 2006-07-21 2007-08-22 (주)동서전자 Apparatus for inspecting badness of screw hole in inner case of lcd monitor
WO2012111510A1 (en) * 2011-02-16 2012-08-23 三菱重工業株式会社 Position detection device
JP2012168107A (en) * 2011-02-16 2012-09-06 Mitsubishi Heavy Ind Ltd Position detection apparatus
JP2013120066A (en) * 2011-12-06 2013-06-17 Toyota Motor East Japan Inc Three-dimensional measuring method, three-dimensional measuring apparatus and three-dimensional measuring program

Also Published As

Publication number Publication date
JPH0663733B2 (en) 1994-08-22

Similar Documents

Publication Publication Date Title
CA2555159C (en) Method for determining the position of an object in a space
US6175647B1 (en) Method and system for three-dimensional spatial position detection of surface points
Nitzan Three-dimensional vision structure for robot applications
US7177459B1 (en) Robot system having image processing function
US7502504B2 (en) Three-dimensional visual sensor
JPH02143309A (en) Operation method and apparatus
JP7353757B2 (en) Methods for measuring artifacts
JP2001148025A5 (en)
JPS6332306A (en) Non-contact three-dimensional automatic dimension measuring method
JPH1163952A (en) Three-dimensional shape measuring target and method for measuring inclination of collimation face
JPH03161223A (en) Fitting of work
JPH03160303A (en) Detecting method of multiple hole
JP2567923B2 (en) Distance measurement method
Krause et al. Remission based improvement of extrinsic parameter calibration of camera and laser scanner
EP1210619B1 (en) Apparatus and method for determining the angular orientation of an object
JP2913370B2 (en) Optical position measurement method
Kotthauser et al. Vision-based autonomous robot control for pick and place operations
JP2630844B2 (en) 3D shape and size measurement device
Tsukiyama Measuring the distance and orientation of a planar surface using nonstructured lighting-3-D measurement system for indoor mobile robots
JPH0642943A (en) Method of measuring inclination angle of camera
JPH08110206A (en) Method and apparatus for detecting position and posture
JPH10105719A (en) Optical measurement method for hole position
JPS6180008A (en) Shape measuring apparatus
Yuan et al. A Target-based Calibration Method for LiDAR-Visual-Thermal Multi-Sensor System
JPH05329793A (en) Visual sensor

Legal Events

Date Code Title Description
R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees