JPH06147830A - Three-dimensional position measuring equipment and measurement correcting method - Google Patents

Three-dimensional position measuring equipment and measurement correcting method

Info

Publication number
JPH06147830A
Authority
JP
Japan
Prior art keywords
point
dimensional
dimensional position
led
learning
Prior art date
Legal status
Pending
Application number
JP30098092A
Other languages
Japanese (ja)
Inventor
Hiroyuki Makita (裕行 牧田)
Current Assignee
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Priority to JP30098092A priority Critical patent/JPH06147830A/en
Publication of JPH06147830A publication Critical patent/JPH06147830A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

PURPOSE: To obtain highly accurate measurements while improving operability and shortening measurement time, by detecting light emitted from an LED fixed to a coordinate teaching probe with an emission detector and delivering a detection signal to a measurement controller. CONSTITUTION: To measure the three-dimensional position of a measurement point P on a work 4, the measurement point P is touched with the pointing tip 6 of a coordinate teaching probe 5, and an LED switch 8 is then pressed to light the LEDs 7. An LED emission detector 9 comprising a photodiode detects that the LEDs 7 are lit and delivers an emission detection signal to a measurement controller 10, notifying it that the LEDs 7 are on. Immediately upon receiving the signal, the measurement controller 10 drives a three-dimensional position calculation unit 3 to image the probe stereoscopically with TV cameras 1 and 2. The three-dimensional position of each LED is then calculated from the parallax of the LED bright spots in the left and right stereo images, and the three-dimensional position of the pointing tip 6 is calculated from the three-dimensional positions of the LEDs 7.

Description

Detailed Description of the Invention

[0001]

Field of Industrial Application: The present invention relates to a three-dimensional position measuring device used when measuring the three-dimensional shape of a workpiece or tool in order to position it in a machine tool such as an electric discharge machine, or when entering three-dimensional shapes for CAD (Computer Aided Design) and similar applications. The invention also relates to a method of correcting the three-dimensional position measurement results obtained with such a device.

[0002]

Description of the Related Art: FIG. 5 is a block diagram of a conventional three-dimensional position measuring apparatus. In the figure, 1 and 2 are image pickup devices such as TV cameras; 3 is an image processing device that calculates three-dimensional positions from the disparity (parallax) between the left and right images of the stereo pair produced by TV cameras 1 and 2; and 4 is the object to be measured (hereinafter referred to as the work), which has a recess 4a that cannot be imaged by TV cameras 1 and 2. Reference numeral 5 denotes a coordinate teaching probe (measurement point teaching means), which consists of a pointing tip 6 that designates a measurement point P on the surface of the work 4 and a plurality of LEDs (point light sources) 7 in a known, fixed positional relationship to the pointing tip 6.

[0003] Next, the operation will be described. In FIG. 5, when a measurement point P on the work 4 is to be measured, the pointing tip 6 of the coordinate teaching probe 5 is placed in contact with the measurement point P, and the LEDs 7 are then lit. With each LED 7 lit, the TV cameras 1 and 2 capture a stereo image of the coordinate teaching probe 5, including the LEDs 7. The captured images are fed to the image processing device 3, which computes the three-dimensional position of each LED from the parallax of the LED bright spots between the left and right images of the stereo pair. Since the positional relationship between the LEDs 7 and the pointing tip 6 is known in advance, the three-dimensional position of the pointing tip 6 is then obtained by calculation from the three-dimensional positions of the LEDs.
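The patent leaves the triangulation itself to the image processing device. As a concrete illustration, here is a minimal sketch under an idealized parallel-axis stereo model; the baseline `b`, the focal length `f` (in pixels), and the pinhole camera model itself are assumptions, not taken from the patent.

```python
# Hedged sketch: recover a bright spot's 3-D position from its pixel
# coordinates in the left and right images of a parallel-axis stereo
# rig. b (baseline, metres) and f (focal length, pixels) are assumed
# example values; real rigs need calibration.

def triangulate(x1, y1, x2, b=0.1, f=800.0):
    """Return (X, Y, Z) for a spot at column x1 (left image),
    column x2 (right image), row y1."""
    d = x1 - x2                 # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * b / d               # depth from parallax
    X = x1 * Z / f              # back-project through the pinhole model
    Y = y1 * Z / f
    return X, Y, Z
```

With three or more LEDs triangulated this way, the pointing tip follows from the known probe geometry as described above.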

[0004] FIG. 6 illustrates a conventional method of correcting the geometric distortion of a digital image captured by an imaging device, as described, for example, in Makoto Nagao, "Image Recognition Theory" (Corona Publishing, pp. 23-28, 3rd printing of the 1st edition, 1984).

[0005] When images are captured with imaging devices such as TV cameras in the three-dimensional position measuring apparatus of FIG. 5, various geometric distortions arise, such as perspective distortion caused by imaging from an oblique direction and distortion of the lens system. For precise measurement, these distortions must be corrected geometrically. In the conventional correction method, the camera coordinates (x1, y1) and (x2, y2) of the measurement target point P(X, Y, Z) (the pointing tip or the position of each LED) on the imaging planes of cameras 1 and 2 are first obtained for each image, using pose parameters such as the position and orientation of the cameras within the measurement space. These camera coordinates are then corrected geometrically as described below, and the three-dimensional position is computed from the corrected camera coordinates.

[0006] The geometric correction method is as follows. In FIG. 6, suppose a grid point (x, y) of the corrected, distortion-free coordinate system falls on a general point (x', y') of the distorted coordinate system, with the relationship given by

x' = h1(x, y),  y' = h2(x, y)   (1)

where h1 and h2 are functions expressing the mapping from (x, y) to (x', y'). If h1 and h2 are taken to be linear, for example, the relationship is expressed as

x' = ax + by + c,  y' = dx + ey + f   (2)

while the distortion that often appears in television images is approximately

x' = k1 x {1 + k2 (x^2 + y^2)},  y' = k1 y {1 + k2 (x^2 + y^2)}   (3)

In general, an image contains various sources of distortion, and it is difficult to determine them theoretically. Distortion is therefore corrected with a general polynomial of the form

x' = Σ aij x^i y^j,  y' = Σ bij x^i y^j   (4)

How high the orders i and j are taken depends on the nature of the image, but in many cases the range i + j ≤ 5 is used.

[0007] The coefficients of the above coordinate transformation are determined using, as reference points, specific points in the image whose coordinates in an ideal, distortion-free image are known. A linear expression such as equation (2) has six coefficients, so coordinate correspondences for at least three reference points are needed to determine them. If equation (4) is taken up to third-order terms, there are 20 coefficients, so correspondences for 10 reference points are required. In practice, correspondences for more points are collected, and a transformation that is more stable on average is obtained by a method such as least squares.
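The least-squares determination of the coefficients can be sketched as follows for the linear model of equation (2). The function name and the use of NumPy's `lstsq` solver are illustrative choices, not taken from the patent.

```python
# Hedged sketch: fit the six coefficients of the linear correction
# x' = ax + by + c, y' = dx + ey + f by least squares from reference
# point correspondences. Needs at least three points; with more, the
# result is the least-squares solution described in the text.
import numpy as np

def fit_linear_correction(undistorted, distorted):
    """undistorted: list of (x, y); distorted: list of (x', y').
    Returns (a, b, c, d, e, f)."""
    und = np.asarray(undistorted, dtype=float)
    dis = np.asarray(distorted, dtype=float)
    # Design matrix: each row is [x, y, 1].
    A = np.column_stack([und[:, 0], und[:, 1], np.ones(len(und))])
    # Solve A @ [a, b, c] ~= x' and A @ [d, e, f] ~= y' separately.
    abc, *_ = np.linalg.lstsq(A, dis[:, 0], rcond=None)
    deff, *_ = np.linalg.lstsq(A, dis[:, 1], rcond=None)
    return (*abc, *deff)
```

The cubic case of equation (4) is fitted the same way, with rows of monomials x^i y^j (i + j ≤ 3) in place of [x, y, 1].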

[0008]

Problems to Be Solved by the Invention: Since the conventional three-dimensional position measuring apparatus is constructed as described above, whether or not the LEDs are lit must be judged by image processing. Because image processing takes time, the pointing tip of the teaching probe must be held in contact with the measurement point, with the LEDs lit, for a long time, so operability is poor.

[0009] Furthermore, when measurement is performed with an imaging device such as a camera, lens distortion, displacement of the image sensor, and the like prevent accurate measurement, so correction is necessary. With the conventional correction method described above, however, correction must be performed for each imaging device, so determining the coefficients of the correction formula requires considerable labor; when the correction formula is expressed as a polynomial, it is difficult to choose the optimum order; only errors that can be approximated by the correction formula can be corrected; and because the correction is applied to the coordinates of the measurement target point on the imaging plane, without taking the three-dimensional position into account, accurate correction is not possible.

[0010] The present invention has been made to solve the above problems. Its objects are to improve operability and shorten measurement time by reducing the time the teaching probe must be held during measurement; to provide a three-dimensional position measuring device that can measure easily and accurately even when the measurement results contain errors that cannot be approximated by a correction formula; and to provide a correction method that yields more accurate measurement results.

[0011]

Means for Solving the Problems: A three-dimensional position measuring apparatus according to the present invention detects the light emission of the point light source of the measurement point teaching means with point-light-source emission detection means, and drives the three-dimensional position calculation means on the basis of the emission detection signal sent from that means.

[0012] A three-dimensional position measuring apparatus according to the present invention further comprises: at least two imaging devices that obtain a stereo image of the measurement target point; two-dimensional coordinate calculation means that calculates the two-dimensional coordinates of the measurement target point in the two-dimensional imaging coordinate system of each image of the stereo pair; three-dimensional coordinate calculation means that calculates, from those two-dimensional coordinates, the three-dimensional coordinates of the measurement target point in a three-dimensional spatial coordinate system; and neural-network measurement result correction means that takes at least one of the two-dimensional coordinates and the three-dimensional coordinates as input and has been trained in advance on a plurality of reference points to output the measurement error between the three-dimensional coordinates produced by the three-dimensional coordinate calculation means and the true three-dimensional coordinates of the measurement target point.

[0013] When the neural-network measurement result correction means is trained in advance on a plurality of reference points, it is advantageous to create the training data set from the reference-point data in such a way that training samples with larger training errors are included more often, and to train the neural network on that set.

[0014]

Operation: In the three-dimensional position measuring apparatus of the present invention, the point-light-source emission detection means detects the light emitted by the point light source attached to the measurement point teaching means and sends a detection signal to the measurement control means, which allows the computation of the three-dimensional position to start quickly.

[0015] The three-dimensional position measuring apparatus of the present invention also corrects its measurement results with a neural network, so that non-linear errors such as lens distortion are corrected easily and with high accuracy.

[0016] Furthermore, because the neural network is trained in advance on a plurality of reference points with higher-error training samples included more often in the training data set, it can produce low-error outputs even for data it has not been trained on.

[0017]

Embodiments

Embodiment 1. An embodiment of the present invention will now be described with reference to the drawings. In FIG. 1, elements 1 to 7 and P are identical to those of the conventional apparatus described above; 8 is an LED lighting switch attached to the probe 5, 9 is an LED emission detection device, and 10 is a measurement control device.

[0018] Next, the operation will be described. In FIG. 1, when the three-dimensional position of a measurement point P on the work 4 is to be measured, the pointing tip 6 of the coordinate teaching probe 5 is placed in contact with the measurement point P, and the LED lighting switch 8 is then pressed to light the LEDs 7. Because the LEDs are lit by pressing a switch, the three-dimensional shapes of easily scratched metals and of soft objects such as cloth can also be measured. The LED emission detection device 9, built around a photodiode, detects that the LEDs are lit and sends an emission detection signal to the measurement control device 10, informing it that the LEDs 7 are on. Immediately upon receiving the emission detection signal, the measurement control device drives the three-dimensional position calculation device 3, which captures a stereo image of the coordinate teaching probe 5 with the TV cameras 1 and 2. The three-dimensional position of each LED 7 is then computed from the parallax of the LED bright spots between the left and right images of the stereo pair. Since the positional relationship between the LEDs 7 attached to the teaching probe 5 and the pointing tip 6 is known in advance, the three-dimensional position of the pointing tip 6 can be computed from the three-dimensional positions of the LEDs 7, which in turn gives the three-dimensional position of the measurement point P. If three or more LEDs 7 whose positional relationship to the pointing tip 6 is known are attached to the teaching probe 5, the LEDs 7 and the pointing tip 6 need not lie on a straight line.
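The patent states only that the tip position follows from the known, fixed LED-to-tip geometry; it does not give the computation. One standard way, sketched here as an assumption, is to estimate the probe's rigid pose from three or more measured LED positions with the Kabsch (SVD) algorithm and then map the known tip offset into world coordinates.

```python
# Hedged sketch: recover the probe tip from >= 3 non-collinear LED
# positions. leds_probe and tip_probe (probe-frame geometry) would be
# known from the probe's construction; leds_world comes from stereo
# triangulation. The Kabsch algorithm is a standard choice, not one
# named in the patent.
import numpy as np

def tip_position(leds_probe, leds_world, tip_probe):
    """leds_probe: (N, 3) LED coords in the probe frame (known).
    leds_world: (N, 3) measured LED coords.
    tip_probe: tip coords in the probe frame.
    Returns the tip position in world coordinates."""
    P = np.asarray(leds_probe, float)
    W = np.asarray(leds_world, float)
    cp, cw = P.mean(axis=0), W.mean(axis=0)
    # Kabsch: best-fit rotation from centred point sets via SVD.
    H = (P - cp).T @ (W - cw)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T              # rotation: probe frame -> world
    t = cw - R @ cp                 # translation
    return R @ np.asarray(tip_probe, float) + t
```

Because the pose is over-determined with more than three LEDs, this also averages out triangulation noise on the individual bright spots.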

[0019] In the above embodiment, fitting the TV cameras 1 and 2 and the LED emission detection device 9 with filters that pass only the wavelength of the light emitted by the LEDs 7 on the teaching probe 5 yields a three-dimensional position measuring device that is unaffected by ambient light. Using LEDs 7 that emit at an infrared wavelength, for example 950 nm, makes the device still less susceptible to ambient light.

[0020] Although the three-dimensional position calculation device 3 and the measurement control device 10 are separate units in the above embodiment, they may be combined into one device; the emission detection device 9 may also be included in that single device.

[0021] Embodiment 2. Next, a three-dimensional position measuring device equipped with correction means will be described. FIG. 2 shows a three-dimensional position measuring apparatus according to another embodiment of the present invention, in which 1 and 2 are imaging devices (TV cameras), 31 is camera coordinate calculation means (two-dimensional coordinate calculation means), 32 is three-dimensional coordinate calculation means, and 33 is three-dimensional coordinate correction means (measurement result correction means).

[0022] Next, the operation will be described. From the stereo images captured by TV cameras 1 and 2, the camera coordinate calculation means 31 obtains the camera coordinates (x1, y1) and (x2, y2) of the measurement target point in cameras 1 and 2. The three-dimensional coordinate calculation means 32 computes the three-dimensional coordinates (Xc, Yc, Zc) of the measurement target point from these camera coordinates and the pose parameters of the cameras. The three-dimensional coordinate correction means 33 is a neural network that takes as input the camera coordinates (x1, y1), (x2, y2) and the three-dimensional coordinates (Xc, Yc, Zc) computed from them, and outputs the error (measurement error) (ex, ey, ez) between the true three-dimensional coordinates of the measurement target point and the (Xc, Yc, Zc) obtained by the three-dimensional coordinate calculation means 32. A multilayer feedforward neural network, for example, may be used. The network acquires this input-output relationship by training in advance on a plurality of reference points whose three-dimensional coordinates within the measurement space are known. That is, if the three-dimensional coordinates of reference point i are (Xti, Yti, Zti), the camera coordinates obtained by the camera coordinate calculation means are (x1i, y1i) and (x2i, y2i), and the coordinates obtained by the three-dimensional coordinate calculation means 32 are (Xci, Yci, Zci), the network is trained in advance so that, given the inputs (x1i, y1i), (x2i, y2i), (Xci, Yci, Zci), it outputs (exi, eyi, ezi) = (Xti − Xci, Yti − Yci, Zti − Zci). The final three-dimensional position measurement result is then obtained from the (Xc, Yc, Zc) and (ex, ey, ez) computed as above as (Xc + ex, Yc + ey, Zc + ez).
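As a concrete illustration of this correction step, the sketch below runs a small feedforward network over the seven inputs (x1, y1, x2, y2, Xc, Yc, Zc) and adds the predicted error to the computed coordinates. The single hidden layer, tanh activation, and weight shapes are assumptions; the patent fixes neither the architecture nor the training rule, and the weights are assumed to come from prior training on reference points.

```python
# Hedged sketch of the correction step: a one-hidden-layer feedforward
# network predicts (ex, ey, ez) from the 7-vector
# (x1, y1, x2, y2, Xc, Yc, Zc); the corrected result is
# (Xc + ex, Yc + ey, Zc + ez). W1, b1, W2, b2 are pre-trained weights.
import numpy as np

def correct(measured, W1, b1, W2, b2):
    """measured: length-7 vector (x1, y1, x2, y2, Xc, Yc, Zc).
    Returns the corrected (X, Y, Z)."""
    v = np.asarray(measured, float)
    h = np.tanh(W1 @ v + b1)    # hidden layer
    e = W2 @ h + b2             # predicted error (ex, ey, ez)
    return v[4:] + e            # (Xc + ex, Yc + ey, Zc + ez)
```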

[0023] In Embodiment 2, the inputs to the three-dimensional coordinate correction means 33 are the camera coordinates (x1, y1), (x2, y2) and the three-dimensional coordinates (Xc, Yc, Zc) computed from them; however, the camera coordinates (x1, y1), (x2, y2) alone, or the three-dimensional coordinates (Xc, Yc, Zc) alone, may be used as input instead.

[0024] Also, although the three-dimensional coordinate correction means 33 of Embodiment 2 is built with a multilayer feedforward neural network, it may instead be built with any other method that can learn a non-linear input-output mapping, such as a neural network with feedback connections or radial basis functions.

[0025] Furthermore, in Embodiment 2 the camera coordinates obtained from the camera coordinate calculation means 31 are used as-is for the measurement target point and for the reference points used during training; it goes without saying that still more accurate results can be obtained by instead using coordinates to which a geometric correction like that of the conventional example has been applied.

[0026] Embodiment 3. When the neural network of the above three-dimensional position measuring device is trained on data derived from a plurality of reference points, training on all samples equally can leave some samples with much larger errors than others, and can also produce large errors on inputs that were not in the training data. Embodiment 3 presents a measurement result correction method that solves these problems and allows still more accurate measurement. FIG. 3 is a block diagram of the training scheme for the neural network used by this correction method: 11 is a training data storage unit that stores a plurality of training samples created from the reference-point data, 12 is a training data control unit that builds training data sets from those samples, 13 is an error evaluation unit that evaluates training errors, and 33 is the neural network. FIG. 4 is a flowchart of the training method.

[0027] First, in step ST1 of FIG. 4, a training data set for the neural network is built from the samples held in the training data storage unit 11 of FIG. 3. The i-th stored sample consists of a network input Ai = (ai1, ai2, ..., aim) and the values Bi = (bi1, bi2, ..., bin) that the network should output for input Ai, where m and n are the numbers of network inputs and outputs, respectively. For example, Ai may be the seven values x1, y1, x2, y2, Xc, Yc, Zc, the camera coordinates and the three-dimensional coordinates derived from them (m = 7), and Bi the three measurement errors ex, ey, ez between the true three-dimensional coordinates and the (Xc, Yc, Zc) obtained by the three-dimensional coordinate calculation means (n = 3). Immediately after training begins, the data set built in step ST1 contains exactly one copy of every sample in the storage unit. Once training has progressed to some extent, the data set is built so that it still contains at least one copy of every stored sample, but samples for which the error between the network output and Bi (the training error) is larger are included more often. Next, in step ST2, a counter variable I, which counts how many times the current data set has been used for training, is initialized to 0.

[0028] In step ST3, a sample is selected from the training data set for training, with every sample in the set chosen with equal probability. In step ST4, the network is trained on the selected sample. Because step ST1 places more copies of high-error samples in the data set, selecting samples this way means that samples with larger training errors are trained on more often.

[0029] In step ST5, the variable I is incremented by 1. In step ST6, it is checked whether I exceeds the upper limit N on the number of training iterations for one data set; if not, the procedure returns to step ST3 and training continues; if so, it proceeds to step ST7. In step ST7, the sum of the errors (training errors) between each Bi stored in the training data storage unit and the network's output for the paired Ai is computed. If this sum is within the allowable error Esum, training ends; if it exceeds Esum, the procedure returns to step ST1 and training continues.

[0030] In step ST7 of Embodiment 3, the sum of the training errors is used to decide whether to end training; instead, a condition such as whether the largest training error over all stored samples is within an allowable error Emax may be used.

[0031]

【発明の効果】[Effects of the invention] As described above, according to the present invention, light emission of the point light source of the measuring point teaching means is detected by point light source emission detecting means, and the three-dimensional position calculating means is driven on the basis of the emission sensing signal sent from that means. Light emission can therefore be detected quickly, so the apparatus offers excellent operability and can be built inexpensively.
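The three-dimensional position calculating means driven by this signal recovers each point light source's position from the parallax of its bright spot in the left and right stereo images. A minimal sketch, assuming a rectified, parallel-axis stereo model with focal length f and camera baseline b (the patent does not commit to a specific camera geometry):

```python
def triangulate(xl, yl, xr, f, baseline):
    """Recover a point's 3-D position from its image coordinates in a
    rectified, parallel-axis stereo pair (an assumed camera model).
    f is the focal length in pixels; baseline is the camera separation,
    in the same units as the returned coordinates."""
    disparity = xl - xr                 # the bright-point parallax
    z = f * baseline / disparity        # depth from disparity
    x = xl * z / f
    y = yl * z / f
    return (x, y, z)

# Example: f = 1000 px, baseline = 100 mm, disparity = 50 px -> depth 2000 mm.
p = triangulate(xl=100.0, yl=50.0, xr=50.0, f=1000.0, baseline=100.0)
```

The pointing point's position would then follow from the known fixed relationship between the point light sources and the pointing point.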

【0032】Further, the three-dimensional position measuring apparatus is configured to correct its measurement results with a neural network trained in advance using a plurality of reference points so that, given at least one of the two-dimensional coordinates of the measurement target point and the three-dimensional coordinates calculated from those two-dimensional coordinates, it outputs the measurement error between the calculated three-dimensional coordinates and the true three-dimensional coordinates of the measurement target point. This configuration makes highly accurate measurement possible.

【0033】Further, when the neural network measurement result correcting means is trained in advance using a plurality of reference points, the learning data set created from the reference point data is built so that learning data with a larger learning error is included in the set more times. Training the neural network in this way makes it possible to produce outputs with small error even for unlearned data.

【図面の簡単な説明】[Brief description of drawings]

【図1】FIG. 1 is a configuration diagram showing a three-dimensional position measuring apparatus according to a first embodiment of the present invention.

【図2】FIG. 2 is a block diagram showing a three-dimensional position measuring apparatus according to a second embodiment of the present invention.

【図3】FIG. 3 is a block diagram showing a learning data control system according to a third embodiment of the present invention.

【図4】FIG. 4 is a flowchart showing the learning data control method according to the third embodiment of the present invention.

【図5】FIG. 5 is a configuration diagram showing a conventional three-dimensional position measuring apparatus.

【図6】FIG. 6 is an explanatory diagram of a three-dimensional position correction method in a conventional three-dimensional position measuring apparatus.

【符号の説明】[Explanation of symbols]

1 Imaging device
2 Imaging device
3 Three-dimensional position calculating device
5 Measurement target point teaching means
6 Pointing point
7 Point light source
9 Point light source emission detecting device
10 Measurement control device
11 Learning data storage unit
12 Learning data control unit
13 Error evaluation unit
31 Two-dimensional coordinate calculating means
32 Three-dimensional coordinate calculating means
33 Neural network measurement result correcting means

Claims (3)

【特許請求の範囲】[Claims]
【請求項1】 A three-dimensional position measuring apparatus comprising: measuring point teaching means having a pointing point that indicates a measurement point and a plurality of point light sources in a known, fixed positional relationship to the pointing point; at least two imaging devices that obtain stereo images of the measuring point teaching means; three-dimensional position calculating means that calculates the three-dimensional position of the measurement point from the stereo images supplied by the imaging devices; point light source emission detecting means that senses light emission of the point light sources; and measurement control means that drives the three-dimensional position calculating means on the basis of an emission sensing signal sent from the point light source emission detecting means.
【請求項2】 A three-dimensional position measuring apparatus comprising: at least two imaging devices that obtain stereo images of a measurement target point; two-dimensional coordinate calculating means that calculates, in the two-dimensional imaging coordinate system of each stereo image, the two-dimensional coordinates of the measurement target point; three-dimensional coordinate calculating means that calculates, from the two-dimensional coordinates, the three-dimensional coordinates of the measurement target point in a three-dimensional spatial coordinate system; and neural network measurement result correcting means trained in advance using a plurality of reference points so as to receive at least one of the two-dimensional coordinates and the three-dimensional coordinates as input and to output the measurement error between the three-dimensional coordinates calculated by the three-dimensional coordinate calculating means and the true three-dimensional coordinates of the measurement target point.
【請求項3】 A three-dimensional position measurement result correcting method in which, when the neural network measurement result correcting means according to claim 2 is trained in advance using a plurality of reference points, a learning data set is created from the reference point data such that learning data with a larger learning error is included in the learning data set more times, and the neural network is trained on that set.
JP30098092A 1992-11-11 1992-11-11 Three-dimensional position measuring equipment and measurement correcting method Pending JPH06147830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP30098092A JPH06147830A (en) 1992-11-11 1992-11-11 Three-dimensional position measuring equipment and measurement correcting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP30098092A JPH06147830A (en) 1992-11-11 1992-11-11 Three-dimensional position measuring equipment and measurement correcting method

Publications (1)

Publication Number Publication Date
JPH06147830A true JPH06147830A (en) 1994-05-27

Family

ID=17891389

Family Applications (1)

Application Number Title Priority Date Filing Date
JP30098092A Pending JPH06147830A (en) 1992-11-11 1992-11-11 Three-dimensional position measuring equipment and measurement correcting method

Country Status (1)

Country Link
JP (1) JPH06147830A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1078304A (en) * 1995-10-13 1998-03-24 Nippon Telegr & Teleph Corp <Ntt> Method and device for image pickup
US7502504B2 (en) 2002-10-17 2009-03-10 Fanuc Ltd Three-dimensional visual sensor
US7260252B2 (en) 2002-10-22 2007-08-21 Kabushiki Kaisha Toshiba X-ray computed tomographic apparatus, image processing apparatus, and image processing method
KR100620896B1 (en) * 2004-08-05 2006-09-19 학교법인 울산공업학원 automatic hole-location measuring system with CCD camera through mechanical contact
JP2006322855A (en) * 2005-05-19 2006-11-30 Denso Corp Determining device for focus of expansion
JP2007315841A (en) * 2006-05-24 2007-12-06 Hitachi Kokusai Electric Inc Moving object detection system
JP4714078B2 (en) * 2006-05-24 2011-06-29 株式会社日立国際電気 Moving object detection system
JP2008256692A (en) * 2007-03-30 2008-10-23 Mitsutoyo Corp Method for global calibration of stereo vision probe system
JP2011227081A (en) * 2010-04-22 2011-11-10 Metronol As Optical measuring system
JP2013164413A (en) * 2012-01-10 2013-08-22 Anima Kk Virtual point determination device and method, and device used to determine virtual point
JP2016509220A (en) * 2013-02-04 2016-03-24 デー・エヌ・ファオ.ゲー・エル.エス・エーDnv Gl Se Inspection camera unit for inspecting the interior, method for inspecting the interior, and sensor unit
JP2014202521A (en) * 2013-04-02 2014-10-27 株式会社ミツトヨ Three dimensional measuring system
JP2014202522A (en) * 2013-04-02 2014-10-27 株式会社ミツトヨ Three-dimensional measuring system
WO2017115620A1 (en) * 2015-12-28 2017-07-06 川崎重工業株式会社 Deformation work assist system and deformation work assist method
KR20180090316A (en) * 2015-12-28 2018-08-10 카와사키 주코교 카부시키가이샤 Deformation processing support system and deformation processing support method
CN108472706A (en) * 2015-12-28 2018-08-31 川崎重工业株式会社 Deformation processing supports system and deformation processing to support method
US20190017815A1 (en) * 2015-12-28 2019-01-17 Kawasaki Jukogyo Kabushiki Kaisha Deformation processing support system and deformation processing support method
CN108472706B (en) * 2015-12-28 2020-06-30 川崎重工业株式会社 Deformation processing support system and deformation processing support method
US11105615B2 (en) 2015-12-28 2021-08-31 Kawasaki Jukogyo Kabushiki Kaisha Deformation processing support system and deformation processing support method
JP2019008460A (en) * 2017-06-22 2019-01-17 株式会社東芝 Object detection device and object detection method and program

Similar Documents

Publication Publication Date Title
US9734419B1 (en) System and method for validating camera calibration in a vision system
US10786904B2 (en) Method for industrial robot commissioning, industrial robot system and control system using the same
JP6180087B2 (en) Information processing apparatus and information processing method
JP2612097B2 (en) Method and system for automatically determining the position and orientation of an object in three-dimensional space
US9672630B2 (en) Contour line measurement apparatus and robot system
JPH06147830A (en) Three-dimensional position measuring equipment and measurement correcting method
JP4095491B2 (en) Distance measuring device, distance measuring method, and distance measuring program
US10478149B2 (en) Method of automatically positioning an X-ray source of an X-ray system and an X-ray system
WO2012066769A1 (en) Information processing apparatus and information processing method
KR20180120647A (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
JP2011027724A (en) Three-dimensional measurement apparatus, measurement method therefor, and program
WO2011105522A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
CN107077729B (en) Method and device for recognizing structural elements of a projected structural pattern in a camera image
JP2005515910A (en) Method and apparatus for single camera 3D vision guide robotics
JP6677522B2 (en) Information processing apparatus, control method for information processing apparatus, and program
JP2000293695A (en) Picture processor
JP4234059B2 (en) Camera calibration method and camera calibration apparatus
Le et al. Joint calibration of multiple sensors
JPH06137840A (en) Automatic calibration device for visual sensor
KR101535801B1 (en) Process inspection device, method and system for assembling process in product manufacturing using depth map sensors
JPWO2018168757A1 (en) Image processing apparatus, system, image processing method, article manufacturing method, program
JP6180158B2 (en) Position / orientation measuring apparatus, control method and program for position / orientation measuring apparatus
JPH06249615A (en) Position detecting method
JP4077755B2 (en) POSITION DETECTION METHOD, DEVICE THEREOF, PROGRAM THEREOF, AND CALIBRATION INFORMATION GENERATION METHOD
JP2020071034A (en) Three-dimensional measurement method, three-dimensional measurement device, and robot system