WO1998005922A1 - Calibration method - Google Patents

Calibration method Download PDF

Info

Publication number
WO1998005922A1
WO1998005922A1, PCT/JP1997/002752, JP9702752W, WO9805922A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
distance image
value
image
light
Prior art date
Application number
PCT/JP1997/002752
Other languages
French (fr)
Japanese (ja)
Inventor
Kosuke Sato
Takayuki Kataoka
Shozo Hirose
Motohide Yasukawa
Original Assignee
Komatsu Ltd.
Osaka Gas Information System Research Institute Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Komatsu Ltd. and Osaka Gas Information System Research Institute Co., Ltd.
Publication of WO1998005922A1 publication Critical patent/WO1998005922A1/en

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to a calibration method for a three-dimensional measuring device required for measuring a three-dimensional position of an object to be measured by an imaging means such as a camera.
  • Conventionally, the method shown in FIG. 19 has been known as a calibration method for this type of three-dimensional measuring apparatus (see, for example, Japanese Patent Application Laid-Open No. H5-248819).
  • This method uses, as the calibration object, a measuring plate 52 engraved with grid lines 51 (or many points) indicating coordinates; while the measuring plate 52 is moved back and forth by a control device 53, the three-dimensional visual sensor 54 recognizes the grid lines 51 and a computer 55 processes the result to obtain the calibration data.
  • When recognizing the grid lines 51, an image processing method called binarization is used.
  • However, with this conventional method, when the grid lines 51 are extracted by image processing they may become faint and impossible to extract cleanly because of changes in the ambient light level, or the center of a grid line 51 may not be found and an offset line may be extracted instead; the resulting coordinate values then deviate from the correct positions, so accurate calibration cannot be performed.
  • The present invention has been made to solve this problem, and its object is to provide a calibration method that eliminates the error caused by binarization and thereby achieves higher accuracy. Disclosure of the invention
  • To achieve this object, the calibration method according to the present invention is
  • a calibration method for a three-dimensional measuring device that irradiates the surface of an object to be measured with light from light projecting means, images the reflected light with imaging means, and measures the position of the object from the information in the imaged reflected light based on the principle of triangulation,
  • wherein distance images at each moving point are captured by the imaging means while the calibration object is yaw-rotated, pitch-rotated and moved back and forth relative to the imaging means, the captured distance images are processed to obtain a three-dimensional coordinate value for each distance value of each pixel, and a calibration value is obtained from the obtained three-dimensional coordinate values.
  • In the calibration method of the present invention, a calibration object such as a measuring plate is yaw-rotated, pitch-rotated or moved back and forth in fixed increments relative to imaging means such as a camera, and the distance image at each moving point is acquired by the imaging means. The acquired distance images are then processed to obtain a three-dimensional coordinate value of each pixel for each of its distance values, and a calibration value is obtained from these three-dimensional coordinate values.
  • In processing the distance images captured by the imaging means, it is preferable to set the median of the neighbourhood of a given pixel in one image as the distance value of that pixel, to obtain such a distance value several times at each position, and to set the median of those repeated values as the final distance value of the pixel.
  • Alternatively, as a noise-removal technique, the average of the neighbourhood of a given pixel in one image may be set as the distance value of that pixel, this distance value may be obtained several times at each position, and the average of those repeated values may be set as the final distance value of the pixel.
  • The imaging means may output the acquired distance image information after converting it into brightness (light/dark) information.
  • The imaging means may acquire a three-dimensional image of the object to be measured from the reflected light of coded pattern light projected onto the surface of the object by the light projecting means,
  • or it may acquire a three-dimensional image of the object by imaging, with two cameras, the reflected light of the light projected onto the surface of the object by the light projecting means.
  • FIG. 1 is a system configuration diagram of a three-dimensional measuring apparatus according to one embodiment of the present invention
  • FIG. 2 is a flowchart (part 1) for creating the X-direction lookup table.
  • FIG. 3 is a flowchart (part 2) for creating the X-direction lookup table.
  • FIG. 4 is a flowchart (part 3) for creating the X-direction lookup table.
  • FIG. 5 is a flowchart (part 4) for creating the X-direction lookup table.
  • FIG. 6 is a flowchart (part 5) for creating the X-direction lookup table.
  • FIG. 7 is a flowchart (part 1) for creating the Y-direction lookup table.
  • FIG. 8 is a flowchart (part 2) for creating the Y-direction lookup table.
  • FIG. 9 is a flowchart (part 3) for creating the Y-direction lookup table.
  • FIG. 10 is a flowchart (part 4) for creating the Y-direction lookup table.
  • FIG. 11 is a flowchart (part 5) for creating the Y-direction lookup table.
  • FIG. 12 is a flowchart (part 1) for creating the Z-direction lookup table.
  • FIG. 13 is a flowchart (part 2) for creating the Z-direction lookup table.
  • FIG. 14 is a flowchart (part 3) for creating the Z-direction lookup table.
  • Figure 15 is an illustration of the movement of the measuring plate for acquiring a distance image in the X direction.
  • FIG. 16 is an explanatory diagram of a calculation method of the distance image data in the X direction.
  • Figure 17 shows the movement of the measuring plate for acquiring the distance image in the Y direction.
  • Fig. 18 is an illustration of the movement of the measurement plate for acquiring a distance image in the Z direction.
  • FIG. 19 is an explanatory diagram of a conventional calibration method. BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 shows a system configuration diagram of a three-dimensional measuring apparatus according to one embodiment of the present invention.
  • The three-dimensional measuring apparatus of this embodiment comprises a measuring plate 1 (a plain, unmarked plate) serving as the calibration object, a measuring plate moving device 2 that moves the measuring plate 1 to a desired position, a measuring plate control device 3 that controls the measuring plate moving device 2, a three-dimensional visual sensor 4 placed facing the measuring plate 1, and a computer 5 that controls the measuring plate control device 3 and stores and processes the distance images acquired by the three-dimensional visual sensor 4 according to the moving position of the measuring plate 1.
  • The three-dimensional visual sensor 4 comprises a laser irradiation unit that projects laser light (coded pattern light) onto the surface of the measuring plate 1, and a CCD camera as imaging means that images the light reflected from the surface of the measuring plate 1; the images acquired by the CCD camera are sent to the computer 5. In this way, the distance image at each moving point is acquired by the three-dimensional visual sensor 4 while the measuring plate 1 is moved in fixed increments based on control signals from the computer 5.
  • The three-dimensional visual sensor 4 outputs the distance between itself and the measuring plate 1 converted into brightness information, such that near regions appear bright and far regions appear dark. Calibration then teaches which three-dimensional coordinates each brightness value actually corresponds to.
  • The calibration method of this embodiment is described with reference to FIGS. 15 to 18 and the flowcharts shown in FIGS. 2 to 14.
  • The distance image data of the measuring plate 1 is acquired by the CCD camera of the three-dimensional visual sensor 4, and the acquired image is corrected with a median filter (smoothing filter) to remove noise caused by reflections from the surface of the measuring plate 1. The median filter replaces the value of the pixel being measured with the median of a neighbouring region (for example, 3 × 3). This process is repeated until the image at each position has been acquired three times.
  • The front-rear position of the measuring plate 1 is then shifted backward by a fixed pitch (for example, 0.1 mm) and distance image data is acquired in the same way. This is repeated for all distance image files (1500 times), writing the results to the distance image files xp0.kmt to xp1499.kmt for positive X-axis rotation. In this way, all the distance image data for positive X-axis rotation over a measurement range of, for example, 75 mm to 225 mm is acquired.
  • S12 to S18: The same processing as S3 to S9 above is performed to acquire all the distance image data for negative X-axis rotation over the measurement range of, for example, 75 mm to 225 mm, and the acquired data is written to the distance image files xn0.kmt to xn1499.kmt for negative X-axis rotation.
  • Initial settings are first made to create the X-direction lookup table. In this initialization, an undefined value is set in lut_x (for positive X-axis rotation) and lut_w (for negative X-axis rotation), which hold the X coordinates of the lookup table.
  • S29: The processing of S23 to S28 is performed for all pixels (i: 256 pixels, j: 242 pixels).
  • (ip − in) represents the difference between the image position during positive X-axis rotation and the image position during negative X-axis rotation. Therefore the X coordinate of point A, which has the same pixel value during positive and negative rotation of the measuring plate 1, is given by X = (ip − in) / (2 × tan θ), where θ is the tilt angle of the measuring plate 1.
  • To save memory, the X coordinate of this point A is stored back into the memory lut_x[i][j][z] described above.
  • S50: The processing of S44 to S49 above is performed for all pixels (i: 256 pixels, j: 242 pixels), completing the X-direction lookup table and ending the flow.
  • Next, the Y-direction lookup table, which stores the Y-direction position data, is created according to steps T1 to T50 shown in FIGS. 7 to 11.
  • The flow for creating the Y-direction lookup table is the same as the processing of steps S1 to S50, except that the measuring plate 1 is set at pitch angles of ±20° (see steps T2 and T10), the distance image data in the Y direction is acquired, and the data is written to the distance image files yp0.kmt to yp1499.kmt and yn0.kmt to yn1499.kmt. A detailed description of this flow is therefore omitted.
  • U1: All the movement axes of the measuring plate 1 relative to the three-dimensional visual sensor 4 — in other words the vertical rotation axis (Y axis), the horizontal rotation axis (X axis) and the front-rear movement axis (Z axis) — are moved to their origins, which serve as the measurement reference.
  • U2 to U4: The distance image data of the measuring plate 1 is acquired by the CCD camera of the three-dimensional visual sensor 4 and the acquired image is corrected with a median filter (smoothing filter). This process is repeated until the image at each position has been acquired three times.
  • U7 to U8: The front-rear position of the measuring plate 1 is shifted backward by a fixed pitch (for example, 0.1 mm) and distance image data is acquired in the same way as above. This is repeated for all distance image files (1000 times), writing the results to the Z-axis distance image files z0.kmt to z999.kmt. In this way, all the Z-axis distance image data over a measurement range of, for example, 100 mm to 200 mm is acquired.
  • U9: Initial settings are made to create the Z-direction lookup table. In this initialization, an undefined value is set in lut_z, which holds the Z coordinates of the lookup table.
  • U10 to U11: The Z-axis distance image file z0.kmt, created as described above, is read in.
  • U12 to U15: Processing to eliminate noise in the acquired distance image data is performed in the same way as in steps S23 to S26 of FIGS. 2 to 6. That is, each pixel (i: 256 pixels, j: 242 pixels) is first compared with the corresponding pixel of the immediately preceding image data; if the pixel lies within 50 mm of the start and the difference between the two pixel values is 128 or more, the lut_z value of that pixel is set to the undefined value to invalidate the data.
  • U18: The processing of U11 to U17 is performed for all pixels (i: 256 pixels, j: 242 pixels).
  • U26: The processing of U22 to U25 is performed for all pixels (i: 256 pixels, j: 242 pixels), completing the Z-direction lookup table and ending the flow.
  • According to this embodiment, a correspondence table (lookup table) between the brightness information of each pixel and three-dimensional coordinate values is created using a plain, unmarked plate, so significantly higher accuracy can be achieved than with the conventional approach of extracting grid lines by image processing.
  • In this embodiment the distance images are corrected using the median value, but the average value may be used instead. It is also possible to combine the two, for example performing the first correction with the median value and the second correction with the average value.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

A calibration method which can improve accuracy by eliminating the error resulting from binarization processing. When a measurement plate (1) serving as a calibration object is subjected to yaw rotation, pitch rotation and movement in the longitudinal direction relative to a three-dimensional visual sensor (4), the three-dimensional visual sensor (4) acquires a depth map at each movement point; the depth maps so acquired are then processed to obtain a three-dimensional coordinate value of each pixel for each depth map, and a calibration value is obtained from these three-dimensional coordinate values.

Description

Specification
Calibration Method
Technical Field
The present invention relates to a calibration method for a three-dimensional measuring device required for measuring the three-dimensional position of an object to be measured with imaging means such as a camera.
Background Art
Conventionally, the method shown in FIG. 19 has been known as a calibration method for this type of three-dimensional measuring apparatus (see, for example, Japanese Patent Application Laid-Open No. H5-248819). This method uses, as the calibration object, a measuring plate 52 engraved with grid lines 51 (or many points) indicating coordinates; while the measuring plate 52 is moved back and forth by a control device 53, the three-dimensional visual sensor 54 recognizes the grid lines 51 and a computer 55 processes the result to obtain the calibration data. When recognizing the grid lines 51, an image processing method called binarization is used.
However, with this conventional calibration method, when the grid lines 51 are extracted by image processing they may become faint and impossible to extract cleanly because of changes in the ambient light level, or the center of a grid line 51 may not be found and an offset line may be extracted instead; the resulting coordinate values then deviate from the correct positions, so accurate calibration cannot be performed.
The present invention has been made to solve this problem, and its object is to provide a calibration method that eliminates the error caused by binarization and thereby achieves higher accuracy.
Disclosure of the invention
To achieve the above object, the calibration method according to the present invention is
a calibration method for a three-dimensional measuring device that irradiates the surface of an object to be measured with light from light projecting means, images the reflected light with imaging means, and measures the position of the object from the information in the imaged reflected light based on the principle of triangulation,
wherein distance images at each moving point are captured by the imaging means while the calibration object is yaw-rotated, pitch-rotated and moved back and forth relative to the imaging means, the captured distance images are processed to obtain a three-dimensional coordinate value for each distance value of each pixel, and a calibration value is obtained from the obtained three-dimensional coordinate values.
In the calibration method of the present invention, a calibration object such as a measuring plate is yaw-rotated, pitch-rotated or moved back and forth in fixed increments relative to imaging means such as a camera, and the distance image at each moving point is acquired by the imaging means. The acquired distance images are then processed to obtain a three-dimensional coordinate value of each pixel for each of its distance values, and a calibration value is obtained from these three-dimensional coordinate values.
According to the present invention, a correspondence table (lookup table) is created, through the rotation and back-and-forth movement of the calibration object, between the 256-level distance information that stands in for the brightness information of each pixel (that is, the distance value) and three-dimensional coordinate values. This eliminates the errors introduced by binarization in the conventional approach of extracting grid lines by image processing, and highly accurate calibration can be realized.
In the present invention, when processing the distance images captured by the imaging means, it is preferable to set the median of the neighbourhood of a given pixel in one image as the distance value of that pixel, to obtain such a distance value several times at each position, and to set the median of those repeated values as the final distance value of the pixel. In this way a smoothing process that removes noise from the acquired image can be applied even when pixel values change because of irregular reflection from the surface of the calibration object, and calibration can be performed with still higher accuracy.
As an alternative noise-removal technique, the average of the neighbourhood of a given pixel in one image may be set as the distance value of that pixel, this distance value may be obtained several times at each position, and the average of those repeated values may be set as the final distance value of the pixel. The imaging means may also output the acquired distance image information after converting it into brightness (light/dark) information.
The imaging means may acquire a three-dimensional image of the object from the reflected light of coded pattern light projected onto the surface of the object by the light projecting means, or it may acquire a three-dimensional image of the object by imaging, with two cameras, the reflected light of the light projected onto the surface of the object by the light projecting means.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a system configuration diagram of a three-dimensional measuring apparatus according to one embodiment of the present invention;
FIG. 2 is a flowchart (part 1) for creating the X-direction lookup table;
FIG. 3 is a flowchart (part 2) for creating the X-direction lookup table;
FIG. 4 is a flowchart (part 3) for creating the X-direction lookup table;
FIG. 5 is a flowchart (part 4) for creating the X-direction lookup table;
FIG. 6 is a flowchart (part 5) for creating the X-direction lookup table;
FIG. 7 is a flowchart (part 1) for creating the Y-direction lookup table;
FIG. 8 is a flowchart (part 2) for creating the Y-direction lookup table;
FIG. 9 is a flowchart (part 3) for creating the Y-direction lookup table;
FIG. 10 is a flowchart (part 4) for creating the Y-direction lookup table;
FIG. 11 is a flowchart (part 5) for creating the Y-direction lookup table;
FIG. 12 is a flowchart (part 1) for creating the Z-direction lookup table;
FIG. 13 is a flowchart (part 2) for creating the Z-direction lookup table;
FIG. 14 is a flowchart (part 3) for creating the Z-direction lookup table;
FIG. 15 is an explanatory view of the movement of the measuring plate for acquiring distance images in the X direction;
FIG. 16 is an explanatory diagram of the method of calculating the distance image data in the X direction;
FIG. 17 is an explanatory view of the movement of the measuring plate for acquiring distance images in the Y direction;
FIG. 18 is an explanatory view of the movement of the measuring plate for acquiring distance images in the Z direction;
FIG. 19 is an explanatory diagram of a conventional calibration method.
BEST MODE FOR CARRYING OUT THE INVENTION
A specific embodiment of the calibration method according to the present invention will now be described with reference to the drawings.
FIG. 1 shows the system configuration of a three-dimensional measuring apparatus according to one embodiment of the present invention. The three-dimensional measuring apparatus of this embodiment comprises a measuring plate 1 (a plain, unmarked plate) serving as the calibration object, a measuring plate moving device 2 that moves the measuring plate 1 to a desired position, a measuring plate control device 3 that controls the measuring plate moving device 2, a three-dimensional visual sensor 4 placed facing the measuring plate 1, and a computer 5 that controls the measuring plate control device 3 and stores and processes the distance images acquired by the three-dimensional visual sensor 4 according to the moving position of the measuring plate 1. The measuring plate moving device 2 comprises a linear slider 2a and a gonio rotation device 2c mounted on the linear slider 2a via a rotary table 2b, so that the measuring plate 1 can be yaw-rotated (rotated about a vertical axis), pitch-rotated (rotated about a horizontal axis) and moved back and forth relative to the three-dimensional visual sensor 4. The three-dimensional visual sensor 4 comprises a laser irradiation unit that projects laser light (coded pattern light) onto the surface of the measuring plate 1, and a CCD camera as imaging means that images the light reflected from the surface of the measuring plate 1; the images acquired by the CCD camera are sent to the computer 5. In this way, the distance image at each moving point is acquired by the three-dimensional visual sensor 4 while the measuring plate 1 is moved in fixed increments based on control signals from the computer 5.
The three-dimensional visual sensor 4 outputs the distance between itself and the measuring plate 1 converted into brightness information, such that near regions appear bright and far regions appear dark. Calibration then teaches which three-dimensional coordinates each brightness value actually corresponds to.
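For illustration only (this code is not part of the original disclosure), the calibration product just described can be pictured as three lookup tables indexed by pixel row, pixel column and 8-bit distance code. The image size (256 × 242 pixels) and the 256 code levels come from the embodiment; the NumPy layout, the NaN convention for untaught entries and the helper function below are assumptions of this sketch.

```python
import numpy as np

ROWS, COLS, CODES = 242, 256, 256     # image size and 8-bit distance codes, per the text

# One table per coordinate axis; NaN marks entries never taught by calibration.
lut_x = np.full((ROWS, COLS, CODES), np.nan, dtype=np.float32)
lut_y = np.full((ROWS, COLS, CODES), np.nan, dtype=np.float32)
lut_z = np.full((ROWS, COLS, CODES), np.nan, dtype=np.float32)

def image_to_points(distance_image: np.ndarray) -> np.ndarray:
    """Turn one 8-bit distance image into a (ROWS*COLS, 3) array of XYZ values
    by looking up each pixel's code in the calibrated tables."""
    jj, ii = np.indices(distance_image.shape)
    code = distance_image.astype(np.intp)
    return np.stack([lut_x[jj, ii, code],
                     lut_y[jj, ii, code],
                     lut_z[jj, ii, code]], axis=-1).reshape(-1, 3)
```

At measurement time a captured distance image would then be converted to three-dimensional points by a single table lookup per pixel, which is exactly the correspondence that the calibration procedure below teaches.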
The calibration method of this embodiment will now be described with reference to FIGS. 15 to 18, following the flowcharts shown in FIGS. 2 to 14.
First, the flow for creating the X-direction lookup table, which stores the X-direction position data, is described with reference to FIGS. 2 to 6.
S1: All the movement axes of the measuring plate 1 relative to the three-dimensional visual sensor 4 — the vertical rotation axis (Y axis), the horizontal rotation axis (X axis) and the front-rear movement axis (Z axis) — are moved to their origins, which serve as the measurement reference.
S2: To acquire distance image data in the X direction, the measuring plate 1 is set at +30° about the vertical rotation axis relative to the three-dimensional visual sensor 4, i.e. at a yaw angle of +30°, as shown in FIG. 15(a).
S3 to S5: The distance image data of the measuring plate 1 is acquired by the CCD camera of the three-dimensional visual sensor 4, and the acquired image is corrected with a median filter (smoothing filter) to remove noise caused by reflections from the surface of the measuring plate 1. The median filter replaces the value of the pixel being measured with the median of a neighbouring region (for example, 3 × 3). This process is repeated until the image at each position has been acquired three times.
S6 to S7: Once three distance images have been acquired at a position, the values of each pixel are compared across them, their median is taken, and this median is set as the final distance value. The resulting distance image is then written to a distance image file in the storage device of the computer 5.
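As a minimal sketch of the smoothing in S3 to S7 (not taken from the patent itself): a 3 × 3 median filter is applied to each capture and the three captures at one position are fused by a per-pixel median. The window size and capture count follow the text; the NumPy/SciPy usage and the dummy data are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_capture(raw: np.ndarray) -> np.ndarray:
    """Replace each pixel with the median of its 3 x 3 neighbourhood (S3 to S5)."""
    return median_filter(raw, size=3)

def fuse_captures(captures) -> np.ndarray:
    """Per-pixel median over the three smoothed captures at one position (S6 to S7)."""
    return np.median(np.stack(captures, axis=0), axis=0).astype(np.uint8)

# Example with dummy 242 x 256 distance images holding 8-bit distance codes.
captures = [smooth_capture(np.random.randint(0, 256, (242, 256), dtype=np.uint8))
            for _ in range(3)]
distance_image = fuse_captures(captures)
```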
S8 to S9: The front-rear position of the measuring plate 1 is shifted backward by a fixed pitch (for example, 0.1 mm) and distance image data is acquired in the same way. This is repeated for all distance image files (1500 times), writing the results to the distance image files xp0.kmt to xp1499.kmt for positive X-axis rotation. In this way, all the distance image data for positive X-axis rotation over a measurement range of, for example, 75 mm to 225 mm is acquired.
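A hedged sketch of the S8–S9 acquisition loop follows. `stage.move_z_mm` and `sensor.capture` are hypothetical stand-ins for the measuring-plate controller and the three-dimensional visual sensor, and the raw `tofile` dump is only a placeholder for the real (unspecified) .kmt format; the 0.1 mm pitch, the 1500 positions and the xp0.kmt to xp1499.kmt naming follow the text.

```python
import numpy as np
from scipy.ndimage import median_filter

PITCH_MM = 0.1      # step per position, from the text
POSITIONS = 1500    # xp0.kmt ... xp1499.kmt

def acquire_positive_yaw(stage, sensor, out_dir="."):
    """stage and sensor are hypothetical handles to the plate controller and sensor."""
    for n in range(POSITIONS):
        stage.move_z_mm(n * PITCH_MM)                          # shift plate backwards
        caps = [median_filter(sensor.capture(), size=3) for _ in range(3)]
        fused = np.median(np.stack(caps), axis=0).astype(np.uint8)
        fused.tofile(f"{out_dir}/xp{n}.kmt")                   # raw dump; real .kmt layout unknown
```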
S10 to S11: Next, as shown in FIG. 15(b), the measuring plate 1 is set at a yaw angle of −30° relative to the three-dimensional visual sensor 4 and is returned to its origin position in the front-rear direction.
S12 to S18: The same processing as S3 to S9 is performed to acquire all the distance image data for negative X-axis rotation over the measurement range of, for example, 75 mm to 225 mm, and the acquired data is written to the distance image files xn0.kmt to xn1499.kmt for negative X-axis rotation.
S19: Initial settings are made to create the X-direction lookup table. In this initialization, an undefined value is set in lut_x (for positive X-axis rotation) and lut_w (for negative X-axis rotation), which hold the X coordinates of the lookup table.
S20 to S22: The distance image files xp0.kmt to xp1499.kmt for positive X-axis rotation, created as described above, are read in.
S23 to S26: Depending on how the measuring plate 1 is set up, the acquired distance image data may contain discontinuities, and such data becomes noise; processing to eliminate this noise is therefore performed. First, each pixel of the distance image i × j (i: 256 pixels, j: 242 pixels) is compared with the corresponding pixel of the immediately preceding image data — for example, the value of the pixel at row n, column m of image data xp10.kmt is compared with the value of the pixel at row n, column m of the image data xp9.kmt at the previous position. If the pixel lies within 50 mm of the start and the difference between the two pixel values is 128 or more, the values lut_x[m][n][0] to lut_x[m][n][255] at that pixel position are set to the undefined value to invalidate the data.
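The discontinuity test in S23 to S26 might look like the following sketch; the 50 mm and 128 thresholds come from the text, while the NaN convention and array shapes are assumptions.

```python
import numpy as np

def reject_discontinuities(curr, prev, travel_mm, lut):
    """curr/prev: 8-bit distance images at consecutive positions;
    lut: float (rows, cols, 256) lookup array whose entries are invalidated with NaN."""
    if travel_mm > 50.0:                       # the rule only applies within the first 50 mm
        return
    jump = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) >= 128
    lut[jump, :] = np.nan                      # wipe all 256 code entries of the noisy pixels
```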
S27 to S28: When the lut_x value for a pixel value is still undefined, the travel distance at which the currently read file was captured is set in lut_x for that pixel. Here, the α in the file name xpα.kmt indicates the order of the files, and multiplying this file number α by 0.1 gives the travel distance (α × 0.1 mm). In this way the position at which each code value first appears for each pixel is stored in lut_x.
S29: The processing of S23 to S28 is performed for all pixels (i: 256 pixels, j: 242 pixels).
S30: It is checked whether all the distance image files xp0.kmt to xp1499.kmt have been processed, i.e. whether α has exceeded 1500; if α < 1500, the flow returns to step S22.
S31 to S41: For the distance image files xn0.kmt to xn1499.kmt for negative X-axis rotation, the same processing (S20 to S30) as for the positive-rotation files xp0.kmt to xp1499.kmt is performed, and the pixel positions read in are set in lut_w for each pixel.
S42 to S49: For each value that a pixel can take (z: 0 to 255), the relative travel in the Z-axis direction from the initial position during positive X-axis rotation, ip = lut_x[i][j][z], and the relative travel in the Z-axis direction from the initial position during negative X-axis rotation, in = lut_w[j][i][z], are set as ip and in. When neither ip nor in is undefined, the X coordinate corresponding to the pixel value at that position is calculated by the following equation:
X = (ip − in) / (2 × tan θ)   … (1)
θ: tilt angle of the measuring plate 1
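As a purely illustrative numerical check of equation (1) — the figures below are invented, not taken from the patent — a plate tilt of θ = 30° and first-appearance travels of ip = 12.3 mm and in = 7.3 mm would give:

```latex
X = \frac{i_p - i_n}{2\tan\theta}
  = \frac{12.3\,\mathrm{mm} - 7.3\,\mathrm{mm}}{2\tan 30^{\circ}}
  = \frac{5.0\,\mathrm{mm}}{1.155}
  \approx 4.33\,\mathrm{mm}.
```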
As shown in FIG. 16, in equation (1), (ip − in) represents the difference between the image position during positive X-axis rotation and the image position during negative X-axis rotation. Therefore the X coordinate of point A, which has the same pixel value during positive and negative rotation of the measuring plate 1, is given by equation (1). To save memory, the X coordinate of point A is stored back into the memory lut_x[i][j][z] described above.
On the other hand, when ip or in is undefined, the undefined value is set in the memory lut_x[j][i][z].
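Putting S42 to S49 together, a hedged sketch of the X-coordinate fill-in is shown below, assuming the tables hold the ip/in travels in millimetres and that NaN marks "undefined"; the overwrite of lut_x mirrors the memory-saving trick in the text.

```python
import numpy as np

THETA = np.deg2rad(30.0)   # yaw angle used for the X-direction captures

def fill_x_coordinates(lut_x: np.ndarray, lut_w: np.ndarray) -> np.ndarray:
    """lut_x / lut_w hold ip / in for every (pixel, pixel, code) entry."""
    x = (lut_x - lut_w) / (2.0 * np.tan(THETA))   # equation (1); NaN propagates automatically
    np.copyto(lut_x, x)                           # store the result back into lut_x
    return lut_x
```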
S50: The processing of S44 to S49 is performed for all pixels (i: 256 pixels, j: 242 pixels), completing the X-direction lookup table and ending the flow.
Next, the Y-direction lookup table, which stores the Y-direction position data, is created according to steps T1 to T50 shown in FIGS. 7 to 11.
The flow for creating the Y-direction lookup table is the same as the processing of steps S1 to S50 shown in FIGS. 2 to 6, except that, as shown in FIGS. 17(a) and (b), the measuring plate 1 is set at pitch angles of ±20° (see steps T2 and T10), the distance image data in the Y direction is acquired, and the data is written to the distance image files yp0.kmt to yp1499.kmt and yn0.kmt to yn1499.kmt. A detailed description of this flow is therefore omitted.
Next, the flow for creating the Z-direction lookup table, which stores the Z-direction position data, is described with reference to FIGS. 12 to 14.
U1: All the movement axes of the measuring plate 1 relative to the three-dimensional visual sensor 4 — the vertical rotation axis (Y axis), the horizontal rotation axis (X axis) and the front-rear movement axis (Z axis) — are moved to their origins, which serve as the measurement reference.
U2 to U4: The distance image data of the measuring plate 1 is acquired by the CCD camera of the three-dimensional visual sensor 4 and the acquired image is corrected with a median filter (smoothing filter). This process is repeated until the image at each position has been acquired three times.
U5 to U6: Once three distance images have been acquired at a position, the values of each pixel are compared across them, their median is taken, and this median is set as the final distance value. The resulting distance image is then written to a distance image file in the storage device of the computer 5.
U7 to U8: The front-rear position of the measuring plate 1 is shifted backward by a fixed pitch (for example, 0.1 mm) and distance image data is acquired in the same way. This is repeated for all distance image files (1000 times), writing the results to the Z-axis distance image files z0.kmt to z999.kmt. In this way, all the Z-axis distance image data over a measurement range of, for example, 100 mm to 200 mm is acquired.
U9: Initial settings are made to create the Z-direction lookup table. In this initialization, an undefined value is set in lut_z, which holds the Z coordinates of the lookup table.
U10 to U11: The Z-axis distance image file z0.kmt, created as described above, is read in.
U12 to U15: Processing to eliminate noise in the acquired distance image data is performed in the same way as in steps S23 to S26 of FIGS. 2 to 6. That is, each pixel (i: 256 pixels, j: 242 pixels) is first compared with the corresponding pixel of the immediately preceding image data; if the pixel lies within 50 mm of the start and the difference between the two pixel values is 128 or more, the lut_z value of that pixel is set to the undefined value to invalidate the data.
U16 to U17: When the lut_z value for a pixel value is still undefined, the travel distance at which the currently read file was captured is set in lut_z for that pixel. In this way the position at which each code value first appears for each pixel is stored in lut_z.
U18: The processing of U11 to U17 is performed for all pixels (i: 256 pixels, j: 242 pixels).
U19: It is checked whether all the distance image files z0.kmt to z999.kmt have been processed.
U20 to U25: For each value that a pixel can take (z: 0 to 255), the position at which the specified pixel value is first found is set as i = lut_z[j][i][z]. When this i is not undefined, the value of i is stored in lut_z[j][i][z], which gives the Z coordinate for the pixel value at that position.
On the other hand, when i is undefined, it is checked whether all pixel values have been processed and the flow returns to step U22.
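A hedged sketch of how U9 to U25 could be realised follows; the 0.1 mm pitch, the file order and the first-occurrence rule come from the text, while reading the .kmt files and the NaN convention for "undefined" are left as assumptions.

```python
import numpy as np

PITCH_MM = 0.1   # plate step between z0.kmt, z1.kmt, ...

def build_lut_z(images):
    """images: the z0.kmt ... z999.kmt distance images in acquisition order.
    Records, per pixel and per 8-bit code, the travel at which the code first
    appears; untouched entries stay NaN ('undefined')."""
    h, w = images[0].shape
    lut_z = np.full((h, w, 256), np.nan, dtype=np.float32)
    for idx, img in enumerate(images):
        jj, ii = np.indices(img.shape)
        code = img.astype(np.intp)
        first = np.isnan(lut_z[jj, ii, code])          # keep only the first occurrence
        lut_z[jj[first], ii[first], code[first]] = idx * PITCH_MM
    return lut_z
```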
U26: The processing of U22 to U25 is performed for all pixels (i: 256 pixels, j: 242 pixels), completing the Z-direction lookup table and ending the flow.
According to this embodiment, a correspondence table (lookup table) between the brightness information of each pixel and three-dimensional coordinate values is created using a plain, unmarked plate, so significantly higher accuracy can be achieved than with the conventional approach of extracting grid lines by image processing.
In this embodiment the distance images are corrected using the median value, but the average value may be used instead. It is also possible to combine the two corrections, for example performing the first correction with the median value and the second correction with the average value.
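A small sketch of the combined correction just mentioned (median first, average second); everything beyond that ordering is an implementation assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_median_then_mean(captures):
    """First correction with the median (3 x 3 filter per capture), second
    correction with the average (per-pixel mean across the captures)."""
    smoothed = [median_filter(c, size=3) for c in captures]
    return np.mean(np.stack(smoothed), axis=0).astype(np.uint8)
```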
In this embodiment, a three-dimensional image is acquired by projecting coded pattern light onto the measuring plate, but it goes without saying that the present invention can also be applied to systems that acquire a three-dimensional image with two cameras.

Claims

Claims
1. A calibration method for a three-dimensional measuring device that irradiates the surface of an object to be measured with light from light projecting means, images the reflected light with imaging means, and measures the position of the object from the information in the imaged reflected light based on the principle of triangulation, wherein distance images at each moving point are captured by the imaging means while a calibration object is yaw-rotated, pitch-rotated and moved back and forth relative to the imaging means, the captured distance images are processed to obtain a three-dimensional coordinate value of each pixel for each of said distance images, and a calibration value is obtained from the obtained three-dimensional coordinate values.
2. The calibration method according to claim 1, wherein, in processing the distance images captured by the imaging means, the median of the neighbourhood of a given pixel in one image is set as the distance image value of that pixel, such values are obtained a plurality of times at each position, and the median of those values is set as the corrected value of the distance image for that pixel.
3. The calibration method according to claim 1, wherein, in processing the distance images captured by the imaging means, the average of the neighbourhood of a given pixel in one image is set as the distance image value of that pixel, such values are obtained a plurality of times at each position, and the average of those values is set as the corrected value of the distance image for that pixel.
4. The calibration method according to any one of claims 1 to 3, wherein the imaging means outputs the acquired distance image information after converting it into brightness information.
5. The calibration method according to any one of claims 1 to 3, wherein the imaging means acquires a three-dimensional image of the object to be measured from the reflected light of coded pattern light projected onto the surface of the object by the light projecting means.
6. The calibration method according to any one of claims 1 to 3, wherein the imaging means acquires a three-dimensional image of the object to be measured by imaging, with two cameras, the reflected light of the light projected onto the surface of the object by the light projecting means.
PCT/JP1997/002752 1996-08-07 1997-08-06 Calibration method WO1998005922A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP20822596A JP3453734B2 (en) 1996-08-07 1996-08-07 Calibration method
JP8/208225 1996-08-07

Publications (1)

Publication Number Publication Date
WO1998005922A1 true WO1998005922A1 (en) 1998-02-12

Family

ID=16552743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1997/002752 WO1998005922A1 (en) 1996-08-07 1997-08-06 Calibration method

Country Status (2)

Country Link
JP (1) JP3453734B2 (en)
WO (1) WO1998005922A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2368740A (en) * 2000-04-11 2002-05-08 Roke Manor Research Self-calibration of sensors
EP2096460A3 (en) * 2008-02-28 2011-06-22 Aisin Seiki Kabushiki Kaisha Calibration device and calibration method for range image sensor
CN104154875A (en) * 2014-08-20 2014-11-19 深圳大学 Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
US9995820B2 (en) 2014-05-08 2018-06-12 Sick Ag Distance-measuring sensor and method for detecting and determining the distance of objects
WO2019216297A1 (en) * 2018-05-09 2019-11-14 日本電気株式会社 Calibration device and calibration method
CN111798522A (en) * 2020-05-20 2020-10-20 惠州市德赛西威汽车电子股份有限公司 Automatic plane position checking method, system and equipment for test prototype
CN117109647A (en) * 2023-08-25 2023-11-24 上海大学 Dynamic vision sensor performance testing device and testing method

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6407656B1 (en) 1999-08-18 2002-06-18 Autonetworks Technologies, Ltd. Breaker device
TW561241B (en) 2002-08-22 2003-11-11 Ind Tech Res Inst Method and apparatus for calibrating laser three-dimensional digitizing sensor
JP4885584B2 (en) * 2006-03-23 2012-02-29 株式会社スペースビジョン Rangefinder calibration method and apparatus
JP2010048553A (en) * 2006-12-19 2010-03-04 Panasonic Corp Inspecting method of compound-eye distance measuring device and chart used for same
JP4943270B2 (en) * 2007-08-09 2012-05-30 富士フイルム株式会社 Method and apparatus for setting a three-dimensional coordinate system
JP5228614B2 (en) * 2008-05-15 2013-07-03 株式会社豊田中央研究所 Parameter calculation apparatus, parameter calculation system and program
KR101626072B1 (en) 2009-11-13 2016-06-13 삼성전자주식회사 Method and Apparatus for Compensating Image
CN103604367B (en) * 2013-11-14 2016-10-12 上海交通大学 A kind of calibrating installation for Laser Triangulation Measurement System Based and method
CN111207685A (en) * 2020-01-14 2020-05-29 华中科技大学鄂州工业技术研究院 Full-automatic calibration system for structured light depth measurement

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01207393A (en) * 1987-10-02 1989-08-21 Exxon Chem Patents Inc Improved lubricant composition for internal combustion engine
JPH05248819A (en) * 1992-03-06 1993-09-28 Kobe Steel Ltd Calibrating method of data of calibration object for measuring three dimensional position of object by camera and measuring method of three dimensional position
JPH0835828A (en) * 1994-07-25 1996-02-06 Kobe Steel Ltd Calibration method of three-dimensional measuring apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01207393A (en) * 1987-10-02 1989-08-21 Exxon Chem Patents Inc Improved lubricant composition for internal combustion engine
JPH05248819A (en) * 1992-03-06 1993-09-28 Kobe Steel Ltd Calibrating method of data of calibration object for measuring three dimensional position of object by camera and measuring method of three dimensional position
JPH0835828A (en) * 1994-07-25 1996-02-06 Kobe Steel Ltd Calibration method of three-dimensional measuring apparatus

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2368740A (en) * 2000-04-11 2002-05-08 Roke Manor Research Self-calibration of sensors
GB2368740B (en) * 2000-04-11 2005-01-12 Roke Manor Research Method of self-calibration of sensors
EP2096460A3 (en) * 2008-02-28 2011-06-22 Aisin Seiki Kabushiki Kaisha Calibration device and calibration method for range image sensor
US9995820B2 (en) 2014-05-08 2018-06-12 Sick Ag Distance-measuring sensor and method for detecting and determining the distance of objects
CN104154875A (en) * 2014-08-20 2014-11-19 深圳大学 Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
WO2019216297A1 (en) * 2018-05-09 2019-11-14 日本電気株式会社 Calibration device and calibration method
JPWO2019216297A1 (en) * 2018-05-09 2021-04-22 日本電気株式会社 Calibration device and calibration method
CN111798522A (en) * 2020-05-20 2020-10-20 惠州市德赛西威汽车电子股份有限公司 Automatic plane position checking method, system and equipment for test prototype
CN117109647A (en) * 2023-08-25 2023-11-24 上海大学 Dynamic vision sensor performance testing device and testing method
CN117109647B (en) * 2023-08-25 2024-02-20 上海大学 Dynamic vision sensor performance testing device and testing method

Also Published As

Publication number Publication date
JP3453734B2 (en) 2003-10-06
JPH1047920A (en) 1998-02-20

Similar Documents

Publication Publication Date Title
WO1998005922A1 (en) Calibration method
US11105624B2 (en) Methods and apparatus to track a blade
CN110657785B (en) Efficient scene depth information acquisition method and system
JP3930482B2 (en) 3D visual sensor
JP6543705B2 (en) Method of calibrating a patient monitoring system for use with a radiation therapy apparatus
US20140192234A1 (en) Method for generating and evaluating an image
JPH08237407A (en) Method of positioning relative alignment of picture tile andcorrecting penetrative distortion
JP5303405B2 (en) Vehicle inspection device
CN109948470B (en) Hough transform-based parking line distance detection method and system
CN112161586A (en) Line structured light vision sensor calibration method based on coding checkerboard
CN116342718B (en) Calibration method, device, storage medium and equipment of line laser 3D camera
WO2019021876A1 (en) In-vehicle camera calibration device and method
TWI388797B (en) Three - dimensional model reconstruction method and its system
CN108288065A (en) A kind of four-wheel position finder detection method based on image analysis
JP3327068B2 (en) Road surface measurement device
JP3621215B2 (en) 3D measuring device
JP2000321039A (en) Apparatus and method for inspecting coating fault
WO2003042924A1 (en) Connection of point clouds measured by a computer vision system
JP4077755B2 (en) POSITION DETECTION METHOD, DEVICE THEREOF, PROGRAM THEREOF, AND CALIBRATION INFORMATION GENERATION METHOD
CN116158780A (en) Method for carrying out multi-mode ultrasonic imaging on large-size target
JP2006023133A (en) Instrument and method for measuring three-dimensional shape
JPH07260451A (en) Three dimensional shape measuring system
KR102222898B1 (en) Method and apparatus for inspecting workpiece using laser
JPH11241916A (en) Height measuring method, height data processing method, and height measuring device
JP2001099631A (en) Plane flatness measuring method and measuring device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): DE KR US

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642