JP2795790B2 - Sensor coordinate correction method for three-dimensional measuring device - Google Patents

Sensor coordinate correction method for three-dimensional measuring device

Info

Publication number
JP2795790B2
Authority
JP
Japan
Prior art keywords
sensors
dimensional
imaging
light
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP14268393A
Other languages
Japanese (ja)
Other versions
JPH06331325A (en)
Inventor
三男 磯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Zosen Corp
Original Assignee
Hitachi Zosen Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Zosen Corp filed Critical Hitachi Zosen Corp
Priority to JP14268393A priority Critical patent/JP2795790B2/en
Publication of JPH06331325A publication Critical patent/JPH06331325A/en
Application granted granted Critical
Publication of JP2795790B2 publication Critical patent/JP2795790B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

[Field of Industrial Application] The present invention relates to a sensor coordinate correction method for an optical three-dimensional measuring device that measures the three-dimensional position of a solid surface without contact.

[0002]

[Prior Art] Conventionally, an optical three-dimensional measuring device of this kind images the slit-light-illuminated portion of a solid surface with two two-dimensional imaging sensors such as CCDs, and measures the three-dimensional position of the slit-light-illuminated portion by computation based on their imaging outputs.

[0003] Next, an example of a conventional device will be described with reference to FIGS. 2 to 6, which also correspond to an embodiment of the present invention. FIG. 2 shows the measuring mechanism of the device. In the figure, 1 is a base, 2 is a horizontal support table mounted on the base 1, 3 is a solid placed on the support table 2, 4 is a sensor-supporting column erected at one corner of the base 1, 5 is a non-contact measuring unit attached and fixed to the upper end of the column 4 via a fixing member 6, 7 is the light projecting means of the measuring unit 5, and 8a and 8b are two two-dimensional imaging sensors provided to the left and right of the light projecting means 7, each consisting of a CCD area image sensor or the like.

[0004] The light projecting means 7 and the two sensors 8a, 8b of the measuring unit 5 are configured as shown in FIG. 3. In the figure, 9 is a light source such as a xenon lamp with a linear slit, and 10 is a rotatable reflecting mirror that forms the light projecting means 7 together with the light source 9 and projects the linear slit light emitted from the light source 9, perpendicular to the support table 2, onto the surface of the solid 3 so that it can be swept in the left-right direction. 11a and 11b are the imaging sections of the imaging sensors 8a and 8b, formed by arranging CCDs in a two-dimensional matrix of M rows and N columns. 12a and 12b are condensing lenses, which focus the light reflected from the surface of the solid 3 onto the imaging sections 11a and 11b of the sensors 8a and 8b.

[0005] At the time of measurement, the light projecting means 7 irradiates the solid 3 with a vertical slit light sufficiently longer than the height of the solid 3. The imaging fields of view of the sensors 8a and 8b are set so that both sensors photograph the slit-light-illuminated portion of the solid 3 with overlap. Based on this setting, as shown in FIGS. 4(a) and 4(b), slit light images Sa and Sb running in the vertical direction of the screen (Z-axis direction), corresponding for example to the vertical direction of the solid 3, are formed on the imaging surfaces Fa and Fb of the imaging sections 11a and 11b of the two sensors, and the portions of the imaging surfaces Fa and Fb occupied by the slit light images Sa and Sb become high in brightness.

[0006] The left-right direction (X-axis direction) of the imaging surfaces Fa and Fb corresponds to the horizontal direction of the solid 3, and the light-receiving output of each row of pixels of the imaging sections 11a and 11b parallel to the X-axis direction forms the imaging output of one scanning line of the sensors 8a and 8b. In FIG. 4(a), A1, ..., Am, Am+2, ..., An denote the first to Nth scanning lines of the imaging section 11a, and in FIG. 4(b), B1, ..., Bm, Bm+1, Bm+2, ..., Bn denote the first to Nth scanning lines of the imaging section 11b.

[0007] The imaging outputs of the scanning lines of the two imaging sections 11a and 11b are read out sequentially at the same timing and supplied to the electronic computer 13 of FIG. 5. This computer 13 is configured as shown in FIG. 6. In the figure, 14 is a clock circuit that generates a clock signal, and 15a and 15b are two signal processing circuits that take in, in synchronization with the clock signal, the imaging output of each scanning line output in sequence from the imaging sections 11a and 11b of the sensors 8a and 8b, and binarize it at a predetermined slice level to form digital image signals in which only the portions of the slit light images Sa and Sb are at high level.

[0008] 16a and 16b are two counters which, in synchronization with the clock signal, count the distance on the imaging surfaces Fa and Fb from the left-end reference point of every scanning line of the imaging sections 11a and 11b to the slit light images Sa and Sb, and output X-axis distance data for every scanning line. 17 is an arithmetic circuit which executes the four arithmetic operations described later on the basis of the two-dimensional coordinate values of the slit light on the imaging surfaces Fa and Fb, obtained from the X-axis distance data of every scanning line output from the counters 16a and 16b and from the Z-axis distance data of every scanning line determined from the scanning line number and the scanning line width, together with the settings of imaging conditions such as the imaging magnification and installation spacing of the imaging sensors 8a and 8b, and thereby calculates and measures without contact the three-dimensional position of the slit-light-illuminated portion of the solid 3 on every scanning line.
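As a rough illustration of what the signal processing circuits 15a, 15b and the counters 16a, 16b accomplish together (binarizing each scanning line at a slice level and measuring the distance from the left-end reference point to the slit light image), a minimal Python sketch might look as follows. The function and variable names are hypothetical, and the centroid rule is an assumption, since the patent only specifies binarization and counting.

```python
import numpy as np

def slit_positions(frame: np.ndarray, slice_level: float) -> list:
    """For each scanning line (row) of an imaging surface, return the distance in
    pixels from the left-end reference point to the slit light image, or None if
    no pixel on that line exceeds the slice level."""
    positions = []
    for row in frame:                    # one row of CCD pixels = one scanning line
        binary = row >= slice_level      # binarize at the predetermined slice level
        hits = np.flatnonzero(binary)    # pixels belonging to the slit light image
        # Take the centroid of the above-threshold pixels as the slit position.
        positions.append(float(hits.mean()) if hits.size else None)
    return positions
```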

[0009] 18 is a storage unit that stores the coordinate values of the three-dimensional positions calculated by the arithmetic circuit 17, 19 is a display condition setting unit, and 20 is a recognition circuit which, on the basis of the conditions set in the setting unit 19, identifies the dimensions, surface state, shape, and so on of the solid 3 from the coordinate values in the storage unit 18, and outputs the coordinate values of the storage unit 18 and a display signal of the identification result to the display means 21 of FIG. 5 for on-screen display.

[0010] Next, the measurement computation of the arithmetic circuit 17 will be described. This measurement computation is performed, for example, in the same manner as the computation described in the specifications and drawings attached to the applicant's earlier applications (Japanese Patent Application Nos. 60-4408 and 60-4409).

[0011] That is, when the imaging fields of view of the two sensors 8a and 8b are completely equal and the left edge of the imaging field of view coincides with the Z axis, light from a point G(x, y, z) on the slit-light-illuminated portion S of the solid 3 forms images through the center points P(a, 0, z) and Q(b, 0, z) of the lenses 12a and 12b of the two sensors 8a and 8b, as shown in FIG. 7. If the line segments connecting the respective image-forming points with the center point P or Q then cross the XZ plane containing an arbitrary point c on the Y axis, which is perpendicular to the X and Z axes, at points U(d, c, z) and V(e, c, z), the point G is obtained as the intersection of the line segment passing through the points P and U and the line segment passing through the points Q and V.

[0012] Then, based on the coordinate values of the points P, Q, U, and V, the X- and Y-axis components x and y of the point G are obtained from the following equations (1) and (2).

[0013]

(Equation 1)

[0014]

(Equation 2)
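Equations (1) and (2) appear only as images in the original publication and are not reproduced in this text. From the geometry stated in paragraph [0011] (the point G is the intersection of the lines PU and QV, with P(a, 0, z), Q(b, 0, z), U(d, c, z), and V(e, c, z)), they plausibly take the following form; this is a reconstruction under that assumption, not a quotation of the original formulas.

```latex
% Reconstruction (assumption): G as the intersection of line PU with line QV,
% using P(a, 0, z), Q(b, 0, z), U(d, c, z), V(e, c, z) from paragraph [0011].
x = a + \frac{(d - a)(b - a)}{(b - a) + (d - e)}
  = \frac{a(d - e) + d(b - a)}{(b - a) + (d - e)} \quad \text{(1)}
\qquad
y = \frac{c(b - a)}{(b - a) + (d - e)} \quad \text{(2)}
```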

[0015] In these two equations, b - a is the spacing L between the imaging sections 11a and 11b of the sensors 8a and 8b, and d and e are the X-axis positions of the point G on the imaging surfaces Fa and Fb, which are determined by the magnification of the lenses 12a and 12b and the mounting positions of the sensors 8a and 8b. The values d and e are obtained as distance data from the left-end reference point d0 of the imaging surfaces Fa and Fb. Further, a, b, and c are constants set by the mounting positions of the sensors 8a and 8b, the magnification of the lenses 12a and 12b, and so on.

[0016] If the constants a and c in the X- and Y-axis directions, which are based on the magnification of the lenses 12a and 12b, the positions of the sensors 8a and 8b, and so on, are written Kx and Ky, the X- and Y-axis components (coordinate values) x and y of the point G are obtained from the following equations (3) and (4).

[0017]

(Equation 3)

[0018]

(Equation 4)
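Equations (3) and (4) are likewise not reproduced as text. Substituting Kx for a, Ky for c, and L for b - a in the reconstruction above gives the plausible form below; again an assumption rather than a quotation of the patent's own formulas.

```latex
% Reconstruction (assumption): equations (1) and (2) with a -> Kx, c -> Ky, b - a -> L.
x = \frac{K_x (d - e) + d\,L}{L + (d - e)} \quad \text{(3)}
\qquad
y = \frac{K_y\,L}{L + (d - e)} \quad \text{(4)}
```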

[0019] Further, the Z-axis component (coordinate value) z of the point G is obtained from the following equation (5), based on the scanning line number r of the point G and a coefficient Kz determined by the number of scanning lines, the scanning line width, and the magnification of the lenses 12a and 12b.

[0020]

(Equation 5)
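Equation (5) is also an image in the original. Given the statement in paragraph [0019] that z is determined from the scanning line number r and the coefficient Kz alone, the simplest consistent form, assumed here, is:

```latex
% Reconstruction (assumption): z from the scanning line number r and coefficient Kz.
z = K_z\, r \quad \text{(5)}
```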

[0021] In equations (3) and (4), Kx, Ky, and L are constants, as is Kz in equation (5); d and e are obtained as the X-axis distance data determined by the counters 16a and 16b, and r is obtained by counting the clock signal. The arithmetic circuit 17 therefore calculates the three-dimensional position (x, y, z) of the point G by executing the four arithmetic operations of equations (3) to (5) on the basis of the position information of the slit light irradiation position S on the imaging surfaces Fa and Fb, formed by the distance data of the counters 16a and 16b and the count data of the clock signal, and on the basis of Kx, Ky, Kz, and L as the setting data of the imaging conditions.
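Taken together with the reconstructed equations (3) to (5) above, the computation performed by the arithmetic circuit 17 for one point reduces to a few arithmetic operations. The following Python sketch illustrates it; the equation forms are the reconstructions assumed above, and the function and parameter names are hypothetical rather than taken from the patent.

```python
def point_3d(d: float, e: float, r: int,
             Kx: float, Ky: float, Kz: float, L: float) -> tuple:
    """Compute the 3-D position (x, y, z) of a slit-light point from the X-axis
    distance data d and e of the two sensors and the scanning line number r,
    using the reconstructed forms of equations (3) to (5)."""
    denom = L + (d - e)                    # denominator shared by equations (3) and (4)
    x = (Kx * (d - e) + d * L) / denom     # equation (3), reconstructed
    y = (Ky * L) / denom                   # equation (4), reconstructed
    z = Kz * r                             # equation (5), reconstructed
    return x, y, z
```

In the device described here, d and e would come from the counters 16a and 16b and r from the clock-signal count, as stated above.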

[0022] This calculation of the three-dimensional position is further applied to each point of the slit-light-illuminated portion on every scanning line, so that the three-dimensional position of the slit-light-illuminated portion of the solid 3 is calculated and measured. The reflecting mirror 10 of the light projecting means 7 is rotated under the control of a control means (not shown), and this rotation moves the irradiation position of the slit light of the light projecting means 7 in the X-axis direction within the overlapping field of view of the two sensors 8a and 8b.

[0023] Based on this movement of the irradiation position, the arithmetic circuit 17 calculates and measures the three-dimensional position of each point on the surface of the solid 3 within the overlapping field of view of the sensors 8a and 8b. The coordinate values of the three-dimensional position of each point on the solid surface obtained by this measurement are supplied to the recognition circuit 20 via the storage unit 18. On the basis of the conditions set in the setting unit 19, the circuit 20 identifies the dimensions, surface state, shape, and so on of the solid 3 from the supplied coordinate values of the three-dimensional position of each point, and a display signal of the identification result is supplied to the display means 21 and displayed on the screen.

[0024] When measurement is to be performed over the entire surface of the solid 3, the device is formed, for example as described in the specifications and drawings attached to the above-mentioned Japanese Patent Application Nos. 60-4408 and 60-4409, by erecting a plurality of columns corresponding to the column 4 around the base 1 and providing each column with a non-contact measuring unit similar to the measuring unit 5.

[0025] There is also a device in which the slit light is formed as a linear light horizontal (parallel) to the support table 2. This type of optical three-dimensional measuring device can change the measuring position by moving the irradiation position of the slit light without moving the measuring unit 5, and moreover can measure with simple four arithmetic operations, so it has the advantage of measuring in a shorter time than conventional devices using other non-contact methods.

[0026] In addition, since the slit-light-illuminated portion of the solid 3 is imaged from two directions by the pair of sensors 8a and 8b, accurate measurement can be performed even when the solid 3 is small or when its surface has irregularities.

[0027] Furthermore, since the measurement is performed without contact, the solid 3 can easily be measured even if it is flexible and easily deformed, such as rubber. In addition, since linear slit light is used, weak light with a low energy density is sufficient, and the solid 3 is not distorted by illumination heat, as it could be if ordinary illumination were used.

[0028]

[Problems to be Solved by the Invention] In the conventional optical three-dimensional measuring device described above, if the two-dimensional coordinate axes (X axis, Z axis) of the imaging surfaces Fa and Fb with respect to the irradiation light (slit light) are shifted between the sensors 8a and 8b because of mounting position errors (misalignment) of the imaging sensors 8a and 8b, the computations of equations (3) to (5) are carried out with position information containing errors due to this shift, and a measurement error results. To suppress the shift of the coordinate axes between the sensors 8a and 8b as far as possible and improve the measurement accuracy, the mounting positions of the sensors 8a and 8b have conventionally required complicated and highly skilled adjustment work, and this adjustment is moreover limited by mechanical accuracy and the like.

[0029] Consequently, there is the problem that the adjustment work cannot be simplified and high-accuracy measurement cannot be obtained. An object of the present invention is to enable high-accuracy measurement while omitting the mechanical adjustment of the mounting positions of the imaging sensors.

[0030]

[Means for Solving the Problems] To achieve the above object, in the sensor coordinate correction method for a three-dimensional measuring device according to the present invention, spot lights for marker points irradiated onto two points of the solid surface are photographed by both two-dimensional imaging sensors; a shift between the two-dimensional coordinate axes of the imaging surfaces of the two sensors is detected from the difference, between the sensors, of the position information of the two spot lights on the imaging surfaces of the two sensors; on the basis of the result of this detection, correction values for the coordinate axes of the other of the two sensors that make the coordinate axes of the two sensors coincide are obtained with reference to the coordinate axes of one of the two sensors; and with these correction values, the position information of the other of the two sensors at the time of measurement is corrected, whereby the shift of the coordinate axes of the two sensors is corrected by computation.

[0031]

[Operation] In the sensor coordinate correction method for a three-dimensional measuring device according to the present invention configured as described above, the shift of the two-dimensional coordinate axes of the imaging surfaces, corresponding to the mounting position error of the two sensors, is detected from the difference, between the sensors, of the position information of the two spot lights for marker points on the imaging surfaces of the two imaging sensors. Further, from the result of this detection, correction values for the coordinate axes of the other sensor are obtained with reference to the coordinate axes of one of the sensors.

[0032] Then, with these correction values, the position information of the other of the two sensors at the time of measurement is corrected, the shift of the coordinate axes is corrected by computation, and the three-dimensional position of the solid surface is measured without contact. Therefore, the conventional complicated and highly skilled mechanical adjustment of the mounting positions of the two sensors can be omitted, and high-accuracy measurement can be performed without the limits imposed by mechanical accuracy and the like.

[0033]

[Embodiment] One embodiment will be described with reference to FIGS. 1 to 9. First, in the measuring mechanism of FIGS. 2 and 3, the difference from the conventional device is that, when the mounting position error of the sensors 8a and 8b is adjusted, a spot light forming filter 23 in which through holes 22a and 22b are formed as shown in FIG. 8(a), or a spot light forming filter 25 in which a through hole 24 is formed as shown in FIG. 8(b), is interposed in the optical path of the slit light between the light source 9 of the light projecting means 7 and the reflecting mirror 10 by a lever operation or the like.

[0034] Since the through hole 24 of the filter 25 serves the same purpose as the through holes 22a and 22b of the filter 23, the filter 25 is moved at right angles to the optical path, for example by manual operation, to change the position of the through hole 24. When the filter 23 or 25 is interposed between the light source 9 and the reflecting mirror 10, the slit light of the light source 9 is converted into spot lights at two marker points in the vertical direction (Z-axis direction) of the solid 3 within the overlapping field of view of the imaging sensors 8a and 8b, and these two spot lights are irradiated onto the solid 3.

[0035] These two spot lights are further photographed by the imaging sensors 8a and 8b, and the imaging output of each scanning line of the two sensors 8a and 8b is supplied to the electronic computer 13 of FIGS. 5 and 6. At this time, because of the mounting position error of the two sensors 8a and 8b, the positions of the two spot lights on the imaging surfaces Fa and Fb of the sensors 8a and 8b are shifted, for example, as shown in FIGS. 1(a) and 1(b).

[0036] In FIG. 1, P1a and P2a denote the spot light images on the imaging surface Fa, and P1b and P2b denote the spot light images on the imaging surface Fb corresponding to the spot light images P1a and P2a. When the filter 25 is used, the spot light images P1a, P1b and the spot light images P2a, P2b are photographed separately and then combined.

[0037] As is clear from a comparison of FIGS. 1(a) and 1(b), because of the shift of the X and Z axes resulting from the mounting position error of the sensors 8a and 8b, the spot light images P2a and P2b, for example, are both located on the Mth scanning lines Am and Bm, whereas the spot light image P1a is located on the (M-1)th scanning line Am-1 and the spot light image P1b on the (M-2)th scanning line Bm-2. Next, the electronic computer 13 differs from the conventional one in that it has sensor coordinate adjusting means in addition to measurement processing means similar to the conventional ones.

[0038] When a command for coordinate adjustment processing is given to the control terminal 26 of FIGS. 5 and 6 at the time of adjusting the mounting position error of the sensors 8a and 8b, for example in conjunction with the movement of the filter 23 or 25, this command is supplied to the arithmetic circuit 17 as shown in FIG. 6. With this supply, the arithmetic circuit 17 shifts from the normal measurement processing to the sensor coordinate adjustment processing.

[0039] The imaging outputs of the scanning lines of the imaging surfaces Fa and Fb of FIGS. 1(a) and 1(b) obtained at the time of this adjustment are processed by the signal processing circuits 15a and 15b and the counters 16a and 16b of the electronic computer 13 in the same manner as in normal measurement. By this processing, the counter 16a outputs the X-axis distance data of the spot light images P1a and P2a to the arithmetic circuit 17, and the counter 16b outputs the X-axis distance data of the spot light images P1b and P2b to the arithmetic circuit 17.

[0040] The arithmetic circuit 17 then obtains the position information of the spot lights at the two marker points on the imaging surfaces Fa and Fb, formed by the X-axis distance data of the counters 16a and 16b and the count data (scanning lines) of the clock signal of the clock circuit 14, and detects the difference between the sensors as the shift of the X and Z axes caused by the mounting position error of the sensors 8a and 8b. Further, on the basis of the result of this detection, taking for example the coordinate axes of the sensor 8a as a reference, it obtains two-dimensional correction values for the coordinate axes of the sensor 8b such that the coordinate axes of the sensor 8b coincide with this reference and the position information of the two spot lights on the imaging surfaces Fa and Fb becomes equal.

[0041] That is, with the scanning lines A1 to An of the sensor 8a in FIG. 1(c), which is the same as FIG. 1(a), as a reference, X-axis and Z-axis correction values are obtained that correct the solid-line, uncorrected scanning lines B1 to Bn of the sensor 8b in FIG. 1(d) to the broken-line scanning lines B1' to Bn' in the same figure. These correction values are then stored, for example, in a nonvolatile correction memory provided in the arithmetic circuit 17, after which the sensor coordinate adjustment processing ends and the device returns to the normal measurement processing.
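The patent states only that X-axis and Z-axis correction values aligning the coordinate axes of the sensor 8b with those of the sensor 8a are obtained from the two spot-light correspondences; it does not give the computation itself. One plausible realization, sketched below, models the shift as a two-dimensional similarity transform (scale, rotation, and translation) that is fully determined by the two marker points; all names are hypothetical and this is an assumption, not the patent's own procedure.

```python
def estimate_correction(p1a, p2a, p1b, p2b):
    """Estimate a 2-D similarity transform (scale, rotation, translation) mapping
    imaging-surface coordinates of sensor 8b onto the coordinate axes of sensor 8a,
    from the two marker-point correspondences P1b -> P1a and P2b -> P2a.
    Each point is an (x, z) pair measured on the imaging surface."""
    za1, za2 = complex(*p1a), complex(*p2a)   # reference spot positions on Fa
    zb1, zb2 = complex(*p1b), complex(*p2b)   # corresponding spot positions on Fb
    s = (za2 - za1) / (zb2 - zb1)             # combined scale and rotation
                                              # (the two marker points must be distinct)
    t = za1 - s * zb1                         # translation
    return s, t

def correct(point_b, s, t):
    """Apply the stored correction to a position measured by sensor 8b."""
    z = s * complex(*point_b) + t
    return z.real, z.imag
```

During normal measurement, the stored correction would then be applied to every slit-light position read from the imaging surface Fb before the computations of equations (3) to (5), consistent with the correction of the Fb position information described in paragraph [0043].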

[0042] Thereafter, when the filter 23 or 25 is moved out of the optical path of the slit light by a lever operation or the like and the device shifts to normal measurement, the solid 3 is irradiated with the same slit light as in the conventional device. This slit light is photographed from two directions by the sensors 8a and 8b, and the imaging output of each scanning line, containing the mounting position error of the two sensors 8a and 8b as in the conventional device, is supplied to the electronic computer 13.

[0043] Further, the electronic computer 13 obtains, by means of the signal processing circuits 15a and 15b, the counters 16a and 16b, and the clock circuit 14, the position information of the slit light on the imaging surfaces Fa and Fb, such as the X-direction distance data of every scanning line, in the same manner as in the conventional device, and supplies this information to the arithmetic circuit 17. At this time, the arithmetic circuit 17 corrects by computation the position information of the slit light on the imaging surface Fb using the stored two-dimensional correction values, thereby eliminating from the position information of the slit light on the imaging surfaces Fa and Fb the error caused by the mounting position error of the sensors 8a and 8b.

[0044] Then, based on the corrected position information of the slit light and the settings of the imaging conditions of the sensors 8a and 8b, the three-dimensional position of the slit-light-illuminated portion of the surface of the solid 3 is calculated and measured from the computations of equations (3) to (5), as in the conventional device. Further, based on the result of this measurement, the recognition circuit 20 identifies the surface state, shape, and so on of the solid 3, and the result of this identification is displayed on the screen of the display means 21.

[0045] In this case, since the measurement error caused by the mounting position error of the sensors 8a and 8b is corrected by computation, the conventional complicated and highly skilled adjustment of the mounting positions of the sensors 8a and 8b can be omitted, and high-accuracy measurement can be performed without the adjustment limits imposed by mechanical accuracy. In the above embodiment, the spot lights at the two marker points are formed from the slit light using the filter 23 or 25. Alternatively, the light source 9 of the light projecting means 7 may be a spot light source, and a movable convex cylindrical lens 27 for forming the slit light may be provided between the light source 9 and the reflecting mirror 10 as shown in FIG. 9; at the time of adjustment, the lens 27 is moved out of the optical path between the light source 9 and the reflecting mirror 10, and the reflecting mirror 10 is rotated to form the spot lights at the two marker points. In this case, at the time of normal measurement, the lens 27 is interposed in the optical path between the light source 9 and the reflecting mirror 10 so that the spot light is converted into slit light.

[0046] When measurement is to be performed over the entire surface of the solid 3, the device may be formed, for example, by erecting a plurality of columns corresponding to the column 4 around the base 1 and providing each column with a non-contact measuring unit similar to the measuring unit 5.

[0047]

[Effects of the Invention] Since the present invention is configured as described above, it provides the following effects. The shift of the two-dimensional coordinate axes of the imaging surfaces Fa and Fb, corresponding to the mounting position error of the two two-dimensional imaging sensors 8a and 8b, is detected from the difference, between the sensors, of the position information of the two spot light images P1a, P2a, P1b, P2b for the marker points on the imaging surfaces Fa and Fb; from the result of this detection, correction values for the coordinate axes of the other of the two sensors 8a and 8b are obtained with reference to the coordinate axes of one of them; with these correction values, the position information of the other of the two sensors 8a and 8b at the time of measurement is corrected; and the shift of the coordinate axes is corrected by computation so that the three-dimensional position of the solid surface is measured without contact. Therefore, the conventional complicated and highly skilled mechanical adjustment of the mounting positions of the two sensors 8a and 8b can be omitted, and high-accuracy measurement can be performed without the adjustment limits imposed by mechanical accuracy and the like.

[Brief Description of the Drawings]

FIGS. 1(a) to 1(d) are explanatory views of the correction in one embodiment of the sensor coordinate correction method for a three-dimensional measuring device according to the present invention.

FIG. 2 is a perspective view of the measuring mechanism of a three-dimensional measuring device according to one embodiment of the present invention.

FIG. 3 is a diagram illustrating the configuration of the non-contact measuring unit of FIG. 2.

FIGS. 4(a) and 4(b) are explanatory views of the imaging outputs of the two two-dimensional imaging sensors of FIG. 3 at the time of measurement.

FIG. 5 is a block diagram of the circuit section that processes the imaging output of FIG. 3.

FIG. 6 is a detailed block diagram of a part of FIG. 5.

FIG. 7 is an explanatory view of the measurement processing of the arithmetic circuit of FIG. 6.

FIGS. 8(a) and 8(b) are front views of one example and another example of the spot light forming filter provided in the light projecting means of the non-contact measuring unit of FIG. 2.

FIG. 9 is an explanatory view of the configuration of another example of the light projecting means of the non-contact measuring unit of FIG. 2.

[Explanation of Symbols]

3 solid; 8a, 8b two-dimensional imaging sensors; Fa, Fb imaging surfaces; P1a, P1b, P2a, P2b spot light images for marker points

Continuation of front page: (56) References Cited: JP-A-4-184203, JP-A-5-28246, JP-A-2-243236, JP-A-6-207810, JP-A-4-118504, JP-A-63-217214, JP-A-5-26639. (58) Fields searched (Int. Cl. 6, DB name): G01B 11/00 - 11/30

Claims (1)

(57) [Claims] [Claim 1] A sensor coordinate correction method for a three-dimensional measuring device in which light irradiated onto a solid surface is photographed from two directions by two two-dimensional imaging sensors and the three-dimensional position of the solid surface is measured without contact by computation based on position information of the irradiation light on the imaging surface of each of the two sensors and on settings of the imaging conditions of the two sensors, the method comprising: photographing, with the two sensors, spot lights for marker points irradiated onto two points of the solid surface; detecting a shift between the two-dimensional coordinate axes of the imaging surfaces of the two sensors from the difference, between the sensors, of the position information of the two spot lights on the imaging surfaces of the two sensors; obtaining, based on the result of the detection and with reference to the coordinate axes of one of the two sensors, correction values for the coordinate axes of the other of the two sensors that make the coordinate axes of the two sensors coincide; and correcting, with the correction values, the position information of the other of the two sensors at the time of measurement, thereby correcting the shift of the coordinate axes of the two sensors by computation.
JP14268393A 1993-05-21 1993-05-21 Sensor coordinate correction method for three-dimensional measuring device Expired - Lifetime JP2795790B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP14268393A JP2795790B2 (en) 1993-05-21 1993-05-21 Sensor coordinate correction method for three-dimensional measuring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP14268393A JP2795790B2 (en) 1993-05-21 1993-05-21 Sensor coordinate correction method for three-dimensional measuring device

Publications (2)

Publication Number Publication Date
JPH06331325A JPH06331325A (en) 1994-12-02
JP2795790B2 true JP2795790B2 (en) 1998-09-10

Family

ID=15321099

Family Applications (1)

Application Number Title Priority Date Filing Date
JP14268393A Expired - Lifetime JP2795790B2 (en) 1993-05-21 1993-05-21 Sensor coordinate correction method for three-dimensional measuring device

Country Status (1)

Country Link
JP (1) JP2795790B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4030833B2 (en) * 2002-01-04 2008-01-09 株式会社小松製作所 Long structural member of work equipment
JP6805073B2 (en) * 2017-05-10 2020-12-23 株式会社東芝 Coordinate system synchronization system and coordinate system synchronization method

Also Published As

Publication number Publication date
JPH06331325A (en) 1994-12-02

Similar Documents

Publication Publication Date Title
JP3511450B2 (en) Position calibration method for optical measuring device
US5008743A (en) Telecentric imaging system optical inspection machine using the same and method for correcting optical distortion produced thereby
EP1062478B8 (en) Apparatus and method for optically measuring an object surface contour
US5671056A (en) Three-dimensional form measuring apparatus and method
US6055054A (en) Three dimensional inspection system
US6678058B2 (en) Integrated alignment and calibration of optical system
JPH11166818A (en) Calibrating method and device for three-dimensional shape measuring device
US20230333492A1 (en) Exposure control in photolithographic direct exposure methods for manufacturing circuit boards or circuits
US6819789B1 (en) Scaling and registration calibration especially in printed circuit board fabrication
US20030053045A1 (en) System for inspecting a flat sheet workpiece
JPH09113223A (en) Non-contacting method and instrument for measuring distance and attitude
US5721611A (en) Photogrammetric camera, in particular for photogrammetric measurements of technical objects
JP2795790B2 (en) Sensor coordinate correction method for three-dimensional measuring device
Clark et al. Measuring range using a triangulation sensor with variable geometry
JP3048107B2 (en) Method and apparatus for measuring object dimensions
JPS61162706A (en) Method for measuring solid body
JP3369235B2 (en) Calibration method for measuring distortion in three-dimensional measurement
JPH11132735A (en) Ic lead floating inspection device and inspection method
JPH0942946A (en) Measuring device and measuring method for electronic part and calibration mask
US20110134253A1 (en) Camera calibration method and camera calibration apparatus
JPS60138921A (en) Inspecting device of pattern shape
JP3006566B2 (en) Lead bending inspection equipment
JPS61159102A (en) Two-dimensional measuring method
JPS603502A (en) Non-contacting type distance measuring method
JPS61162705A (en) Method for measuring solid body