JP4141627B2 - Information acquisition method, image capturing apparatus, and image processing apparatus - Google Patents

Info

Publication number
JP4141627B2
JP4141627B2 (application JP2000275176A)
Authority
JP
Japan
Prior art keywords
measured
information
measured part
light
vicinity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2000275176A
Other languages
Japanese (ja)
Other versions
JP2002081908A (en)
Inventor
Shuji Ono (小野 修司)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Priority to JP2000275176A priority Critical patent/JP4141627B2/en
Priority to EP01103436A priority patent/EP1126412B1/en
Priority to US09/784,028 priority patent/US6538751B2/en
Publication of JP2002081908A publication Critical patent/JP2002081908A/en
Application granted granted Critical
Publication of JP4141627B2 publication Critical patent/JP4141627B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Description

[0001]
[Technical Field]
The present invention relates to an information acquisition method, an image capturing apparatus, and an image processing apparatus that acquire information on the depth distance of an object, the tilt of the object's surface, and the reflectance of the object's surface. In particular, it relates to an information acquisition method, an image capturing apparatus, and an image processing apparatus that capture the reflected light obtained from an object irradiated with light and thereby acquire information about the object.
[0002]
[Prior Art]
To obtain the distance to an object or its position, three-dimensional image measurement techniques are known in which pattern light, such as a slit or stripe pattern, is projected onto the object and the projected pattern is photographed and analyzed. Representative measurement methods include the slit-light projection method (also called the light-sectioning method) and the coded-pattern-light projection method; they are described in detail in "Three-Dimensional Image Measurement" by Seiji Iguchi and Kosuke Sato (Shokodo).
[0003]
Japanese Patent Application Laid-Open No. 61-155909 (published July 15, 1986) and Japanese Patent Application Laid-Open No. 63-233312 (published September 29, 1988) disclose a distance measuring apparatus and a distance measuring method that irradiate an object with light from different light-source positions and measure the distance to the object based on the intensity ratio of the light reflected from it.
[0004]
Japanese Patent Application Laid-Open No. 62-46207 (published February 28, 1987) discloses a distance detection apparatus that irradiates an object with two lights of different phase and measures the distance to the subject based on the phase difference of the light reflected from the object.
[0005]
Kawakita et al., "Development of the Axi-Vision Camera three-dimensional imaging apparatus" (3D Image Conference '99, 1999), discloses a method in which ultra-high-speed intensity modulation is applied to the projection light, the object illuminated by the intensity-modulated light is photographed with a camera equipped with a high-speed shutter function, and the distance is measured from the degree of intensity modulation, which varies with the distance to the object.
[0006]
Japanese Patent Application Laid-Open No. 10-48336 and Japanese Patent Application Laid-Open No. 11-94520 disclose a real-time range finder that projects different light patterns having different wavelength characteristics onto an object and calculates the distance to the object by extracting the wavelength components of the light received from it.
[0007]
[Problems to Be Solved by the Invention]
Conventional distance measuring apparatuses and methods measure distance information by irradiating the object to be measured with light and measuring the reflected light, but they do not take the tilt of the surface of the measured object into account. Consequently, even for objects at the same distance, the measured intensity of the reflected light differs depending on the surface tilt, causing errors in the distance measurement.
[0008]
The distance detection apparatus disclosed in Japanese Patent Application Laid-Open No. 62-46207 requires a high-precision phase detector to detect the phase difference, which makes the apparatus expensive and inconvenient. Moreover, because it measures the phase of the light emitted from a single point on the subject, it cannot measure the depth distribution of the entire subject.
[0009]
The distance measurement method using intensity modulation disclosed in Kawakita et al., "Development of the Axi-Vision Camera three-dimensional imaging apparatus" (3D Image Conference '99, 1999), performs increasing and decreasing intensity modulation with a single light source, so the light must be modulated at very high speed and measurement cannot be performed easily.
[0010]
[Means for Solving the Problems]
To solve the above problems, a first aspect of the present invention provides an information acquisition method for acquiring distance information to an object, comprising: an irradiation step of irradiating the object with light from two optically different irradiation positions; an imaging step of capturing, for each of the two irradiation positions, the light reflected from the object; an angle calculation step of calculating one of the angles at which the object is irradiated from the two optically different irradiation positions; and a calculation step of calculating the distance information to a measured part of the object based on the reflected light from the measured part and the reflected light from the vicinity of the measured part, both captured at the imaging position.
[0011]
In the imaging step, the light reflected from the object for each of the two irradiation positions may be captured at an imaging position that is optically identical to one of the two irradiation positions, and the angle calculation step may calculate the irradiation angle based on the reflected light captured in the imaging step. The calculation step may calculate the distance information and the surface-tilt information of the measured part based on the intensities of the reflected light from the measured part and from its vicinity captured in the imaging step. The calculation step may further comprise: a first calculation step of calculating the surface-tilt information of the measured part when a provisional distance to the measured part is assumed, based on the intensity of the reflected light from the measured part and its angle of incidence; a second calculation step of calculating a provisional distance to the vicinity of the measured part based on the provisional distance to the measured part and its surface-tilt information; a third calculation step of calculating the surface-tilt information of the vicinity based on the provisional distance to the vicinity, the intensity of the reflected light from the vicinity, and its angle of incidence; and a fourth calculation step of calculating the error between the surface-tilt information of the measured part and that of its vicinity. When the error is within a predetermined range, the provisional distance of the measured part may be output as its true distance.
[0012]
The fourth calculation step may calculate the error for each of a plurality of provisional distances of the measured part, and the calculation step may take the provisional distance that minimizes the error as the true distance of the measured part. The second calculation step may calculate the provisional distance to the vicinity of the measured part by extending the surface of the measured part, based on its surface-tilt information. The imaging step may capture the reflected light from the measured part and from a plurality of locations in its vicinity; the fourth calculation step may calculate the error between the provisional surface tilt of the measured part and that of each of the plurality of nearby locations; and the calculation step may take the provisional distance that minimizes the sum of squared errors as the true distance of the measured part.
[0013]
The irradiation step may irradiate the measured part of the object and its vicinity with light of a different wavelength from each irradiation position, irradiating the object from the positions substantially simultaneously; the imaging step may use wavelength selection means that selectively transmits each of the different wavelengths, and may capture the reflected light corresponding to each irradiation position separately.
[0014]
A second aspect of the present invention provides an image capturing apparatus for acquiring distance information to an object, comprising: an irradiation unit that irradiates the object with light from two optically different irradiation positions; an imaging unit that captures, at a position optically identical to one of the two irradiation positions, the light reflected from the object for each irradiation position; and an image processing apparatus that calculates the distance information to the measured part of the object based on the reflected light from the measured part and from its vicinity captured by the imaging unit.
[0015]
The apparatus may further comprise a control unit that synchronizes, for each irradiation position, the emission timing at which the irradiation unit irradiates light and the imaging timing at which the imaging unit captures the reflected light. The irradiation unit may irradiate light of a different wavelength from each irradiation position, substantially simultaneously, and the apparatus may further comprise a spectroscopic unit that separates the light reflected from the object into components whose principal wavelengths correspond to the different irradiation wavelengths, the imaging unit capturing each separated component. The spectroscopic unit may comprise a plurality of optical filters, each mainly transmitting light of one of the different wavelengths.
[0016]
A third aspect of the present invention provides an image processing apparatus for processing a captured image of an object, comprising: an input unit that receives image data based on the light reflected from the object for each of the lights irradiated from two irradiation positions; a storage unit that stores the input image data; a calculation unit that calculates the distance information to the measured part and its surface-tilt information based on the image data of the measured part and of its vicinity; and an output unit that outputs the distance information and surface-tilt information calculated by the calculation unit.
[0017]
The image data may be captured by imaging means having a plurality of pixels that receive the reflected light; the calculation unit may extract, for each pixel, the reflected intensity of the light from each of the two irradiation positions, and may calculate the distributions of the object's distance information and surface-tilt information based on the per-pixel intensities of the light reflected from the measured part and from its vicinity. The calculation unit may calculate three-dimensional information of the object from the distributions of distance and surface-tilt information, and the output unit may output the calculated three-dimensional information.
[0018]
The calculation unit may comprise: first calculation means for calculating the surface-tilt information of the measured part when a provisional distance to the measured part is assumed, based on the intensity of the reflected light from the measured part and its angle of incidence; second calculation means for calculating a provisional distance to the vicinity of the measured part based on the provisional distance to the measured part and its surface-tilt information; third calculation means for calculating the surface-tilt information of the vicinity based on the provisional distance to the vicinity, the intensity of the reflected light from the vicinity, and its angle of incidence; and fourth calculation means for calculating the error between the surface-tilt information of the measured part and that of its vicinity. When the error is within a predetermined range, the provisional distance of the measured part may be output as its true distance.
[0019]
The fourth calculation means may calculate the error for each of a plurality of provisional distances of the measured part, and the calculation unit may take the provisional distance that minimizes the error as the true distance of the measured part. The second calculation means may calculate the provisional distance to the vicinity by extending the surface of the measured part, based on its surface-tilt information. The input unit may receive image data based on the reflected light from the measured part and from a plurality of locations in its vicinity; the fourth calculation means may calculate the error between the provisional surface tilt of the measured part and that of each nearby location; and the calculation unit may take the provisional distance that minimizes the sum of squared errors as the true distance of the measured part.
[0020]
The calculation unit may detect edges of the object from its image data, divide the image data into a plurality of regions bounded by the edges, and calculate the distance information of the measured part based on the provisional surface-tilt information of those nearby locations whose data lie in the same region as the data of the measured part. The calculation unit may also calculate the distance information of the measured part based on the provisional surface-tilt information of those nearby locations for which the difference between the reflected-light intensity of the measured part and that of the nearby location is within a predetermined range.
[0021]
The above summary of the invention does not enumerate all of its necessary features; subcombinations of these feature groups may also constitute inventions.
[0022]
[Embodiments of the Invention]
The present invention is described below through embodiments. The following embodiments do not limit the claimed invention, and not all combinations of the features described in the embodiments are necessarily essential to the solution of the invention.
[0023]
Fig. 1 illustrates the principle of the present invention. Light of intensity P1 and P2 is irradiated onto the object 20 from irradiation positions 12 and 14, respectively, and the light reflected by the object 20 for each irradiation is captured by the camera 10. The camera 10 is, for example, a charge-coupled device (CCD) having a plurality of pixels; on a per-pixel basis, it photographs the reflected light from the measured part 22 of the object 20 and from the vicinity 24 of the measured part, and detects the intensity of each. Based on the reflected light captured by the camera 10, the distance L to the measured part 22, the surface tilt θ of the measured part 22, and the surface reflectance Robj of the measured part 22 are calculated. The irradiation positions 12 and 14 may be placed at arbitrary positions. One of the angles at which the light from the irradiation positions 12 and 14 irradiates the object 20 is calculated; in this example, the camera 10 is placed at a position optically identical to one of the irradiation positions 12 and 14, and the irradiation angle is calculated from the angle of incidence of the reflected light captured by the camera 10. The case where the camera 10 is optically co-located with irradiation position 12 is described below.
[0024]
As shown in Fig. 1, irradiation position 14 is located at a distance L12 from irradiation position 12. The distance L12 is a known value specific to the measurement system. Let θ1 and θ2 be the angles at which light is irradiated onto the measured part 22 of the object 20 from irradiation positions 12 and 14, respectively, and θ1' and θ2' the corresponding angles for the vicinity 24 of the measured part. Because the camera 10 is optically co-located with irradiation position 12, the angle at which it receives the reflected light from the measured part 22 equals θ1, and the angle at which it receives the reflected light from the vicinity 24 equals θ1'. Let L1 and L1' be the distances from irradiation position 12 to the measured part 22 and to the vicinity 24, respectively, and L2 and L2' the distances from irradiation position 14 to the measured part and to the vicinity 24, respectively.
[0025]
Let D1 and D2 denote the intensities of the reflected light from the measured part 22 for the light irradiated from positions 12 and 14, respectively. For a diffusely reflecting surface, D1 and D2 are given by:

[Equation 1]
D1 = Robj · P1 · cos(θ1 + θ) / L1²
D2 = Robj · P2 · cos(θ2 + θ) / L2²

That is,

[Equation 2]
D1 / D2 = (P1 · L2² · cos(θ1 + θ)) / (P2 · L1² · cos(θ2 + θ))

[0026]
Furthermore, since

[Equation 3]
L1 · cos θ1 = L2 · cos θ2

it follows that

[Equation 4]
D1 / D2 = (P1 · cos²θ1 · cos(θ1 + θ)) / (P2 · cos²θ2 · cos(θ2 + θ))

Also,

[Equation 5]
tan θ2 = (L1 · sin θ1 − L12) / (L1 · cos θ1)

and since L12 and θ1 are known values, θ2 is given as a function of L1. That is,

[Equation 6]
θ2 = arctan((L1 · sin θ1 − L12) / (L1 · cos θ1)) = f(L1)

Therefore,

[Equation 7]
D1 / D2 = (P1 · cos²θ1 · cos(θ1 + θ)) / (P2 · cos²f(L1) · cos(f(L1) + θ))

Here, D1, D2, P1, P2, and θ1 are measured or known values, so the equation above has two unknowns, θ and L1, and the combinations (θ, L1) that satisfy it can be obtained. That is, if a distance L1 to the measured part 22 is assumed, the surface tilt θ of the measured part corresponding to the assumed distance can be calculated. Likewise, for the vicinity 24 of the measured part, the combinations of the distance L1' to the vicinity 24 and the surface tilt θ' of the vicinity 24 can be obtained.
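The relation above can be solved in closed form for θ once a distance L1 is assumed. The sketch below is a minimal numerical illustration, assuming a Lambertian reflection model and the planar geometry described above (camera and irradiation position 12 at the origin, position 14 offset by the baseline L12 perpendicular to the optical axis); all function names are illustrative, not taken from the patent:

```python
import math

def illumination_angle_2(L1, theta1, L12):
    """Angle theta2 of the ray from irradiation position 14 to the measured
    part, for an assumed distance L1 (Equation 6: position 12 at the origin,
    position 14 offset by L12 perpendicular to the optical axis)."""
    return math.atan2(L1 * math.sin(theta1) - L12, L1 * math.cos(theta1))

def reflected_intensity(P, L, incidence, R_obj):
    """Lambertian model of Equation 1: the intensity falls off as 1/L^2 and
    with the cosine of the angle between the incident ray and the normal."""
    return R_obj * P * math.cos(incidence) / L ** 2

def tilt_for_assumed_distance(D1, D2, P1, P2, theta1, L1, L12):
    """Surface tilt theta consistent with the measured ratio D1/D2 under the
    assumed distance L1: expanding cos(theta_i + theta) in Equation 4 gives
    tan(theta) = (P1 c1^3 - r P2 c2^3) / (P1 c1^2 s1 - r P2 c2^2 s2)."""
    r = D1 / D2
    theta2 = illumination_angle_2(L1, theta1, L12)
    c1, c2 = math.cos(theta1), math.cos(theta2)
    num = P1 * c1 ** 3 - r * P2 * c2 ** 3
    den = P1 * c1 ** 2 * math.sin(theta1) - r * P2 * c2 ** 2 * math.sin(theta2)
    return math.atan(num / den)
```

For the correct L1 the recovered θ equals the true surface tilt; for a wrong L1 it yields the companion tilt that the method of Fig. 2 compares against the neighbouring point.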
[0027]
The present invention calculates the distance L to the measured part 22 and its surface tilt θ from the combination of the distance L to the measured part 22 and the surface tilt θ of the measured part 22, together with the combination of the distance L' to the vicinity 24 and the surface tilt θ' of the vicinity 24. The calculation method is described in detail below.
[0028]
Fig. 2 illustrates an example of a method for calculating the distance L to the measured part 22 and the surface tilt θ of the measured part 22. First, the distance to the measured part 22 is assumed to be La. Based on the equations described with reference to Fig. 1, the surface tilt θ of the measured part 22 corresponding to the assumed distance La is calculated. Next, the surface of the measured part 22 determined by the calculated tilt θ is extended, and a provisional distance Lb to the vicinity 24 is calculated on the assumption that the vicinity 24 lies at the point where the extended surface intersects the optical path of the reflected light from the vicinity 24. The provisional distance Lb can be calculated geometrically from the assumed distance La from the camera 10 to the measured part 22, the incidence angle θ1 of the reflected light from the measured part 22, the incidence angle θ1' of the reflected light from the vicinity 24, and the surface tilt θ of the measured part 22.
[0029]
Next, the surface tilt θ' of the vicinity 24 corresponding to the calculated provisional distance Lb is calculated, using the equations described with reference to Fig. 1. Since the separation between the measured part 22 and the vicinity 24 is minute, their surface tilts are nearly identical. Therefore, by comparing the calculated tilts θ and θ', it can be determined whether the initially assumed distance La is correct: if the difference between θ and θ' is within a predetermined range, the assumed distance La can be taken as the distance L to the measured part 22.
[0030]
If the difference between the tilts θ and θ' is not within the predetermined range, the initial assumption La is regarded as erroneous, and the same calculation is repeated with the distance L to the measured part 22 set to another value. The distance L to the measured part 22 that satisfies the equations of Fig. 1 clearly has an upper bound, and its lower bound is zero, so the provisional distance La need only be examined over a finite range. For example, the true distance L may be extracted by binary search over a finite range of La; the difference between θ and θ' may be calculated at predetermined distance intervals over a finite range to extract the true distance L; or the difference may be calculated for a plurality of values of La and the La that minimizes it taken as the true distance L. The surface-tilt information of the measured part can then be calculated from the distance information using the equations described with reference to Fig. 1, and the surface-reflectance information of the measured part can also be calculated from the same equations.
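The scan over the assumed distance La described above can be sketched end to end. The toy simulation below is an assumption-laden illustration (Lambertian surface, the planar geometry of Fig. 1, equal source intensities, and illustrative function names): it synthesizes the four intensity measurements for a flat, untilted surface at 200 mm and then recovers the distance by scanning La and minimizing the tilt mismatch |θ − θ'|:

```python
import math

P1 = P2 = 1.0   # source intensities (assumed equal)
L12 = 10.0      # baseline between irradiation positions 12 and 14, in mm
R_OBJ = 0.5     # surface reflectance (arbitrary for the illustration)

def angle2(L1, th1):
    # Illumination angle from position 14 for a point at distance L1, angle th1.
    return math.atan2(L1 * math.sin(th1) - L12, L1 * math.cos(th1))

def dist2(L1, th1):
    # Distance from position 14 to the same point.
    return L1 * math.cos(th1) / math.cos(angle2(L1, th1))

def intensity(P, L, incidence):
    # Lambertian reflection model (Equation 1).
    return R_OBJ * P * math.cos(incidence) / L ** 2

def tilt(D1, D2, th1, L1):
    # Closed-form tilt for an assumed distance L1 (from the ratio equation).
    r, th2 = D1 / D2, angle2(L1, th1)
    c1, c2 = math.cos(th1), math.cos(th2)
    num = P1 * c1 ** 3 - r * P2 * c2 ** 3
    den = P1 * c1 ** 2 * math.sin(th1) - r * P2 * c2 ** 2 * math.sin(th2)
    return math.atan(num / den)

def extend(La, th1, th1n, t):
    # Extend the surface through the measured point (tilt t) until it meets
    # the neighbour's line of sight at angle th1n; return the distance Lb.
    nx, nz = math.sin(t), math.cos(t)
    px, pz = La * math.sin(th1), La * math.cos(th1)
    return (nx * px + nz * pz) / (nx * math.sin(th1n) + nz * math.cos(th1n))

def estimate_distance(D, Dn, th1, th1n, lo=150.0, hi=250.0, step=0.5):
    # Scan the assumed distance La; keep the La whose tilt at the measured
    # part best matches the tilt recovered for the neighbouring point.
    best_La, best_err = None, float("inf")
    La = lo
    while La <= hi:
        t = tilt(D[0], D[1], th1, La)
        Lb = extend(La, th1, th1n, t)
        tn = tilt(Dn[0], Dn[1], th1n, Lb)
        if abs(t - tn) < best_err:
            best_La, best_err = La, abs(t - tn)
        La += step
    return best_La
```

With the parameters of Fig. 3 (baseline 10 mm, θ1 = 20 degrees, true distance 200 mm, tilt 0), the mismatch vanishes only at La = 200 mm, reproducing the crossing of data 102 and 104.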
[0031]
Fig. 3 shows an example of the surface tilts θ and θ' calculated at predetermined distance intervals over a predetermined range of the provisional distance La. In the graph of Fig. 3, the horizontal axis is the provisional distance La to the measured part 22, and the vertical axis is the surface tilt of the measured part 22 and of the vicinity 24. In this example, the distance L to the measured part 22 of the object 20 was set to 200 mm and the surface tilt of the measured part 22 to 0 degrees; the calculation was performed with the distance between irradiation positions 12 and 14 set to 10 mm, θ1 to 20 degrees, and the interval of the provisional distance La to 10 mm.
[0032]
Data 102 shows the surface tilt of the measured part 22 corresponding to the provisional distance La on the horizontal axis, and data 104 shows the surface tilt of the vicinity 24. At the provisional distance of 200 mm, the surface tilts of the measured part 22 and of the vicinity 24 coincide at 0 degrees.
[0033]
Fig. 4 shows an example of the configuration of an image capturing apparatus 100 according to the present invention. The image capturing apparatus 100 comprises an irradiation unit 40, a spectroscopic unit 50, a control unit 60, an imaging unit 70, and an image processing apparatus 300. The irradiation unit 40 irradiates the object 20 with light from two optically different irradiation positions. The irradiation unit 40 may have a light source at each of the two irradiation positions and irradiate the object 20 from each source, or it may have a single light source that is moved to each of the irradiation positions 12 and 14 to irradiate the object 20. When the irradiation unit 40 has a light source at each of the two irradiation positions, it may irradiate the object 20 substantially simultaneously with light of a different wavelength from each position. The spectroscopic unit 50 separates the light reflected from the object 20 into components whose principal wavelengths correspond to the different irradiation wavelengths. The imaging unit 70 captures, at a position optically identical to one of the two irradiation positions, the light reflected from the object 20 for each irradiation position, and may capture each of the components separated by the spectroscopic unit 50. The control unit 60 synchronizes, for each irradiation position, the emission timing at which the irradiation unit 40 irradiates light and the imaging timing at which the imaging unit 70 captures the reflected light.
[0034]
The image processing apparatus 300 calculates the distance information to the measured part 22 based on the reflected light from the measured part 22 of the object 20 and from the vicinity 24 captured by the imaging unit 70. The image processing apparatus 300 comprises an input unit 80, a storage unit 90, a calculation unit 110, and an output unit 120. The input unit 80 receives image data based on the light reflected by the object 20 for each of the lights irradiated from the two irradiation positions. The storage unit 90 stores the image data input to the input unit 80, and may store the image data separately for each wavelength characteristic of the light irradiated by the irradiation unit 40.
[0035]
The calculation unit 110 calculates, from the image data stored in the storage unit 90, the distance information to the measured part 22 of the object 20 and the surface-tilt information of the measured part 22. The calculation unit 110 comprises: first calculation means for calculating the surface-tilt information of the measured part 22 when a provisional distance to the measured part is assumed, based on the intensity of the reflected light from the measured part 22 and its angle of incidence; second calculation means for calculating a provisional distance to the vicinity 24 based on the provisional distance to the measured part 22 and its surface-tilt information; third calculation means for calculating the surface-tilt information of the vicinity 24 based on the provisional distance to the vicinity 24, the intensity of the reflected light from the vicinity 24, and its angle of incidence; and fourth calculation means for calculating the error between the surface-tilt information of the measured part and that of the vicinity 24. When the error calculated by the fourth calculation means is within a predetermined range, the calculation unit 110 outputs the provisional distance of the measured part 22 to the output unit 120 and the storage unit 90 as the true distance of the measured part 22.
[0036]
The fourth calculation means may calculate the error for each of a plurality of provisional distances of the measured part 22, and the calculation unit 110 may take the provisional distance that minimizes the error as the true distance of the measured part 22. The second calculation means may calculate the provisional distance to the vicinity 24 by extending the surface of the measured part 22, based on the surface-tilt information of the measured part 22.
[0037]
The calculation unit 110 may detect the intensity of the reflected light per pixel or per pixel region of the imaging unit 70 from the image data, and may calculate the distribution of the distance information and the surface-tilt information of the object 20 from the detected intensities. The calculation unit 110 may also calculate three-dimensional information of the object 20 from the distributions of the distance information and the surface-tilt information. The calculation unit 110 may calculate the information of the measured part 22 and the distributions of the information of the object 20 by the same or a similar method to that described with reference to Figs. 1 to 3. When the irradiation unit 40 irradiates light of different wavelengths, the calculation unit 110 preferably calculates, by the method described with reference to Fig. 6, the reflected-light intensities that would be obtained if light of the same wavelength were irradiated from each irradiation position, and calculates the information of the measured part 22 and the distributions of the information of the object 20 from the calculated intensities. The storage unit 90 preferably stores the information of the measured part 22 and the distributions of the information of the object 20 calculated by the calculation unit 110.
[0038]
The output unit 120 outputs the distance information to the measured part 22 of the object 20, the surface-tilt information of the measured part 22, and the distributions of each kind of information over the object 20, as calculated by the calculation unit 110. The output unit 120 may display the information of the measured part 22 and the distributions for the object 20 numerically and graphically on a display device, or may output them by sound. The output unit 120 may output the information based on position information specified in advance, or based on items and criteria specified in advance; for example, it may output the information for regions where a pre-specified item exceeds a certain level. The output unit 120 may also output the information depending on whether the object 20 is stationary or moving.
[0039]
The input unit 80 may receive image data based on the reflected light from the measured part 22 and from a plurality of locations in the vicinity of the measured part; the fourth calculation means of the calculation unit 110 may calculate the error between the provisional surface-tilt information of the measured part 22 and that of each of the plurality of nearby locations; and the calculation unit 110 may take the provisional distance of the measured part 22 that minimizes the sum of squared errors as the true distance of the measured part 22.
[0040]
In this example, the image capturing apparatus 100 comprises the spectroscopic unit 50. However, when the irradiation unit 40 irradiates light at different timings at the respective irradiation positions, the image capturing apparatus 100 need not comprise the spectroscopic unit 50, and the imaging unit 70 may capture the reflected light for each irradiation position at timings based on the timing at which the irradiation unit 40 irradiated the light.
[0041]
Fig. 5 shows an example of image data 150 based on the reflected light from the object 20 captured by the imaging unit 70. The object 20 has an edge 156 at which the surface tilt changes abruptly over a minute distance. The calculation unit 110 detects the edges of the object from the image data 150 based on the reflected light from the object 20, divides the image data of the object 20 into a plurality of regions bounded by the edges, and calculates the distance information of the measured part 22 based on the provisional surface-tilt information of those nearby locations whose data lie in the same region as the data of the measured part 22. For example, as shown in Fig. 5, when the image data 150 of the object 20 has a region 152 and a region 154 bounded by the edge 156, the calculation unit 110 calculates the distance information of the measured part 22 based on the surface-tilt information of the points of the vicinity 24 that lie within the region 152 containing the data of the measured part 22.
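The region-based selection of neighbouring points can be sketched as follows. The label map standing in for the edge-detection result is an assumption for illustration (for instance, region ids produced by edge detection followed by connected-component labelling):

```python
def neighbours_in_same_region(labels, y, x):
    """Return the 4-neighbours of pixel (y, x) that lie in the same
    edge-bounded region, given a 2-D list `labels` of region ids.
    Only these neighbours feed the tilt-consistency check for pixel (y, x)."""
    h, w = len(labels), len(labels[0])
    region = labels[y][x]
    out = []
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == region:
            out.append((ny, nx))
    return out
```

Neighbours on the far side of an edge such as edge 156 are excluded, since their surface tilt cannot be assumed to match that of the measured pixel.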
[0042]
The calculation unit 110 may also calculate the distance information of the measured part 22 based on the provisional surface-tilt information of those nearby locations for which the difference between the intensity of the reflected light from the measured part 22 and the intensity of the reflected light from the nearby location is within a predetermined range.
[0043]
Fig. 6 illustrates an example of the image capturing apparatus 100. In Fig. 6, elements with the same reference numerals as in Fig. 4 may have the same or similar functions and configurations as those described with reference to Fig. 4. The irradiation unit 40 irradiates the object 20 with light of different wavelengths, substantially simultaneously, from the two optically different irradiation positions 12 and 14. For example, the irradiation unit 40 may have optical filters (not shown) that mainly transmit light of the respective wavelengths; the light irradiated from the positions 12 and 14 then passes through the optical filters and irradiates the object 20 substantially simultaneously as light of wavelengths λ1 and λ2.
[0044]
The light irradiated by the irradiation unit 40 is reflected by the object 20 and enters the optical lens 52; the reflected light that enters the optical lens 52 passes through it into the spectroscopic unit 50.
[0045]
As an example, the spectroscopic unit 50 comprises a polarizer that polarizes the reflected light imaged by the optical lens 52, a retarder that rotates the polarization direction of the polarized light according to wavelength, and a prism that splits light of different polarization directions. The light separated by the spectroscopic unit 50 is received by the imaging units 70a and 70b, which are, as an example, two solid-state image sensors. The light received by the imaging units 70a and 70b is read out as electric charge by the photoelectric effect, converted into digital electric signals by an A/D converter (not shown), and input to the image processing apparatus 300. The image processing apparatus 300 calculates the distance information to the object 20, the surface-tilt information of the object 20, and the surface-reflectance information of the object 20 from the input signals.
[0046]
The imaging unit 70 captures each of the light components separated by the spectroscopic unit 50. The imaging unit 70 is, as an example, a solid-state image sensor, and the image of the object 20 is formed on its light-receiving surface. Electric charge accumulates in each sensor element of the solid-state image sensor according to the light quantity of the formed image of the object 20, and the accumulated charge is scanned in a fixed order and read out as an electric signal. The solid-state image sensor is preferably a charge-coupled device (CCD) image sensor with a good signal-to-noise ratio and a large number of pixels, so that the intensity of the reflected light from the object 20 can be detected per pixel with high accuracy. Besides a CCD, a MOS image sensor, a CdSe contact image sensor, an a-Si contact image sensor, a bipolar contact image sensor, or the like may be used as the solid-state image sensor.
[0047]
The control unit 60 controls the emission timing at which the irradiation unit 40 irradiates light and the imaging timing at which the imaging unit 70 captures the reflected light. The control unit 60 receives a synchronization signal and controls the emission timing and the imaging timing based on it. The control unit 60 may also control the intensity of the light irradiated by the irradiation unit 40, the focus and aperture of the imaging unit 70, and the like, based on the distance information to the object 20, and may control the exposure time of the imaging unit 70.
[0048]
In this example, the irradiation unit 40 irradiates, at each of the two irradiation positions, light having a single principal wavelength component. In another example, the irradiation unit 40 may irradiate light having a plurality of principal wavelength components at each of the two irradiation positions. In that case, the spectroscopic unit 50 preferably splits the light reflected from the object 20 into each of the wavelength components of the light irradiated by the irradiation unit 40. However, splitting into all the wavelength components may make the spectroscopic unit 50 too large. In yet another example, therefore, the irradiation unit 40 may irradiate, at irradiation position 12, light whose principal wavelength component is λ1 and, at irradiation position 14, light having two principal wavelength components whose average is λ1. In that case, the spectroscopic unit 50 splits the light reflected by the object 20 into the wavelength components of the light irradiated from the respective positions, that is, into two kinds of light, and the image processing apparatus 300 calculates the information of the object 20 from the reflected intensity of the light irradiated from irradiation position 12 and half the reflected intensity of the light irradiated from irradiation position 14.
[0049]
In this example, the spectroscopic unit 50 has a prism. In another example, it may have a plurality of optical filters, each mainly transmitting light of one of the different wavelengths, arranged on the light-receiving surface of the imaging unit 70. When the irradiation unit 40 does not irradiate light from the respective irradiation positions simultaneously, the imaging unit 70 may capture the reflected light without passing it through the spectroscopic unit 50.
[0050]
Fig. 7 shows an example of the wavelengths of the light irradiated from the irradiation positions 12 and 14. In this example, light whose principal wavelength component is λa is irradiated from irradiation position 12, and light whose principal wavelength components are the two wavelengths λb and λc is irradiated from irradiation position 14. Let Wa denote the intensity of the light reflected from the object 20 for the light irradiated from irradiation position 12, and let Wb and Wc denote the intensities of the light reflected from the object 20 for the λb and λc components, respectively, of the light irradiated from irradiation position 14. In this case, the spectroscopic unit 50 splits the reflected light from the object 20 into light whose principal wavelength component is λa and light whose principal wavelength components are λb and λc, for example using optical filters.
[0051]
As shown in Fig. 7(a), when light is irradiated from irradiation position 14 such that the average of wavelengths λb and λc is λa, the intensity of the light that would be reflected from the object 20 if light of wavelength λa were irradiated from position 14 can be calculated by averaging the reflected intensities Wb and Wc. In other words, taking into account the attenuation of the optical filters and the like used to split the reflected light, that intensity can be calculated as half the intensity of the reflected light of the light irradiated from position 14 with principal wavelength components λb and λc.
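The recovery of the hypothetical λa reflection from the two measured components can be written down directly. The sketch below covers both the averaging of Fig. 7(a) and the linear approximation of Fig. 7(b); the function and variable names are illustrative:

```python
def estimate_wa_average(Wb, Wc):
    # Fig. 7(a): lambda_a is the average of lambda_b and lambda_c, so the
    # reflected intensity at lambda_a is estimated as the mean of the two
    # measured components (half the summed two-component intensity).
    return (Wb + Wc) / 2.0

def estimate_wa_linear(lam_a, lam_b, lam_c, Wb, Wc):
    # Fig. 7(b): linear approximation of reflected intensity versus
    # wavelength, interpolating between (lam_b, Wb) and (lam_c, Wc).
    return Wb + (Wc - Wb) * (lam_a - lam_b) / (lam_c - lam_b)
```

When λa is exactly the midpoint of λb and λc the two estimates coincide; the linear form also covers the case where λa is not the midpoint.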
[0052]
Alternatively, as shown in Fig. 7(b), the intensity of the light that would be reflected from the object 20 if light of wavelength λa were irradiated from irradiation position 14 may be calculated by linearly approximating the reflected intensities of the λb and λc components. As described in this example, by irradiating the object 20 with light of a different wavelength from each irradiation position, separating the reflected light by wavelength with the spectroscopic unit 50, and capturing it, the reflections of the lights irradiated from the two irradiation positions can be captured simultaneously, so the distance information and surface-tilt information of the object 20 can be calculated even when the object 20 is moving. Although in Fig. 7 the light irradiated from irradiation position 14 has two principal wavelength components, in other examples light having even more principal wavelength components may be irradiated onto the object 20.
[0053]
Fig. 8 shows an example of the image processing apparatus 300, which may have the same or similar functions and configuration as the image processing apparatus 300 described with reference to Fig. 4. In this example, the input unit 80, the storage unit 90, and the output unit 120 have the same or similar functions and configurations as those described with reference to Fig. 4.
[0054]
The calculation unit 110 has a CPU 82, an input device 88, a ROM 84, a hard disk 92, a RAM 86, and a CD-ROM drive 94. The CPU 82 operates based on programs stored in the ROM 84 and the RAM 86. A user inputs data via the input device 88, such as a keyboard or mouse. The hard disk 92 stores data such as images and the programs that operate the CPU 82. The CD-ROM drive 94 reads data or programs from a CD-ROM 96 and provides them to at least one of the RAM 86, the hard disk 92, and the CPU 82.
[0055]
The functional configuration of the program executed by the CPU 82 is the same as that of the calculation unit 110 of the image processing apparatus 300 described with reference to Fig. 4: it has an input module that inputs the intensities of the light reflected from the object 20 for each of the lights irradiated from the two irradiation positions, and a calculation module that calculates the distance information to the object 20 from the respective reflected intensities. Since the processing that the input module and the calculation module cause the CPU 82 to perform is the same as the functions and operation of the calculation unit 110 described with reference to Fig. 4, its description is omitted. These programs are provided to users stored on a recording medium such as the CD-ROM 96. The CD-ROM 96, as an example of a recording medium, can store some or all of the functions of the operation of the image processing apparatus 300 described in the present application.
[0056]
The above program may be read directly from the recording medium into the RAM 86 and executed by the CPU 82. As the recording medium, a magnetic recording medium such as a hard disk, a magneto-optical recording medium such as an MO, a tape-shaped recording medium, a nonvolatile semiconductor memory card, or the like may be used.
[0057]
The above program may be stored on a single recording medium or divided among a plurality of recording media. The program may also be stored on the recording medium in compressed form; the compressed program may be decompressed, read into another recording medium such as the RAM 86, and executed. Further, the compressed program may be decompressed by the CPU 82, installed on the hard disk 92 or the like, and then read into another recording medium such as the RAM 86 and executed.
[0058]
Further, the CD-ROM 96, as an example of a recording medium, may store the above program provided by a host computer via a communication network. The program stored on the recording medium may be stored on the hard disk of the host computer, transmitted from the host computer to the present computer via the communication network, read into another recording medium such as the RAM 86, and executed.
[0059]
A recording medium storing the above program is used only for manufacturing the image processing apparatus 300 of the present application, and it is clear that the manufacture and sale of such a recording medium as a business constitutes infringement of the patent right based on the present application.
[0060]
Fig. 9 shows an example of a flowchart of the information acquisition method according to the present invention. The information acquisition method according to the present invention comprises: an irradiation step of irradiating the object with light from two optically different irradiation positions; an imaging step of capturing, for each of the two irradiation positions, the light reflected from the object; an angle calculation step of calculating one of the angles at which the object is irradiated from the two optically different irradiation positions; and a calculation step of calculating the distance information to the measured part based on the reflected light from the measured part of the object and from its vicinity captured at the imaging position. In this example, the imaging step captures the reflected light for each of the two irradiation positions at an imaging position optically identical to one of the two irradiation positions, and the angle calculation step calculates the irradiation angle based on the reflected light captured in the imaging step.
[0061]
The calculation step may calculate the distance information and surface-tilt information of the measured part based on the intensities of the reflected light from the measured part and from its vicinity captured in the imaging step. A method of calculating the distance information to the measured part in the calculation step is described below using the flowchart.
[0062]
In the first calculation step, a provisional distance to the measured part is first assumed, based on the intensity of the reflected light from the measured part and its angle of incidence (S100). Next, the surface-tilt information of the measured part is calculated based on the assumed distance information (S102). The surface-tilt information may be calculated by the same or a similar method to that described with reference to Figs. 1 to 3.
[0063]
Next, in the second calculation step, a provisional distance to the vicinity of the measured part is calculated based on the provisional distance to the measured part and its surface-tilt information (S104). This provisional distance may be calculated by the same or a similar method to that described with reference to Figs. 1 to 3.
[0064]
Next, in the third calculation step, the surface-tilt information of the vicinity of the measured part is calculated based on the provisional distance to the vicinity, the intensity of the reflected light from the vicinity, and its angle of incidence (S106). This surface-tilt information may be calculated by the same or a similar method to that described with reference to Figs. 1 to 3.
[0065]
Next, in the fourth calculation step, the error between the surface-tilt information of the measured part and that of its vicinity is calculated (S108). Whether the calculated error is within a predetermined range is determined (S110); if it is, the provisional distance of the measured part is output as its true distance (S112). If the calculated error is outside the predetermined range, the provisional distance of the measured part is set to another value and the process returns to the first calculation step. That is, the calculations from S100 to S110 are performed for a plurality of provisional distances of the measured part; the fourth calculation step may calculate the error for each of them, and the calculation step may take the provisional distance that minimizes the error as the true distance of the measured part. The surface-tilt information of the measured part can then be calculated from the calculated distance information.
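Steps S100 to S112 can be transcribed as a driver loop. In the sketch below, `tilt_error` stands in for the composition of the first to fourth calculation steps (S102 to S108) and is injected as a callback; this framing is an assumption for illustration, not the patent's own notation:

```python
def find_distance(tilt_error, candidates, tolerance):
    """S100-S112: try provisional distances in turn; return the first one
    whose tilt mismatch (S108) falls within the tolerance (S110/S112).
    If none qualifies, fall back to the candidate with the smallest error,
    matching the minimum-error variant described in the text."""
    best, best_err = None, float("inf")
    for La in candidates:            # S100: assume a provisional distance
        err = tilt_error(La)         # S102-S108: tilts and their mismatch
        if err <= tolerance:         # S110: within the predetermined range?
            return La                # S112: output as the true distance
        if err < best_err:
            best, best_err = La, err
    return best
```

The candidate list can be a uniform grid over the finite search range, or the loop can be replaced by a binary search as suggested with reference to Fig. 2.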
[0066]
As in the method described with reference to Fig. 2, the second calculation step may calculate the provisional distance to the vicinity of the measured part by extending the surface of the measured part, based on its surface-tilt information. The imaging step may capture the reflected light from the measured part and from a plurality of locations in its vicinity; the fourth calculation step may calculate the error between the provisional surface tilt of the measured part and that of each of the plurality of nearby locations; and the calculation step may take the provisional distance that minimizes the sum of squared errors as the true distance of the measured part. By performing the calculation described in the flowchart on image data obtained from the reflected light from a plurality of nearby locations, distance information to the measured part with reduced noise and calculation error can be obtained.
[0067]
The irradiation step may irradiate the measured part of the object and its vicinity with light of a different wavelength from each irradiation position, irradiating the object substantially simultaneously from the respective positions; the imaging step may use wavelength selection means that selectively transmits each of the different wavelengths, and may capture the reflected light corresponding to each irradiation position separately.
[0068]
In the information acquisition method described with reference to Fig. 9, the distance information and the surface-tilt information to the measured part may be calculated by the same or a similar method to that described with reference to Figs. 1 to 3.
[0069]
Although the present invention has been described above through embodiments, the technical scope of the invention is not limited to the scope described in the above embodiments. It will be apparent to those skilled in the art that various modifications and improvements can be made to the above embodiments, and it is apparent from the claims that embodiments incorporating such modifications or improvements can also fall within the technical scope of the invention.
[0070]
[Effects of the Invention]
As is clear from the above description, the present invention makes it possible to acquire distance information to an object from which errors due to the tilt of the object's surface have been eliminated, by capturing the reflected light obtained from an object irradiated with light. It also makes it possible to acquire the surface-tilt information and the surface-reflectance information of the object 20.
[Brief Description of the Drawings]
Fig. 1 illustrates the principle of the present invention.
Fig. 2 illustrates an example of a method for calculating the distance L to the measured part 22 and the surface tilt θ of the measured part 22.
Fig. 3 shows an example of the surface tilts θ and θ' calculated at predetermined distance intervals over a predetermined range of the provisional distance La.
Fig. 4 shows an example of the configuration of the image capturing apparatus 100 according to the present invention.
Fig. 5 shows an example of image data 150 based on the reflected light from the object 20 captured by the imaging unit 70.
Fig. 6 illustrates an example of the image capturing apparatus 100.
Fig. 7 shows an example of the wavelengths of the light irradiated from the irradiation positions 12 and 14.
Fig. 8 shows an example of the image processing apparatus 300.
Fig. 9 shows an example of a flowchart of the information acquisition method according to the present invention.
[Explanation of Reference Numerals]
10: camera; 12, 14: irradiation positions; 20: object; 22: measured part; 24: vicinity of the measured part; 40: irradiation unit; 50: spectroscopic unit; 52: optical lens; 60: control unit; 70: imaging unit; 80: input unit; 82: CPU; 84: ROM; 86: RAM; 88: input device; 90: storage unit; 92: hard disk; 94: CD-ROM drive; 96: CD-ROM; 100: image capturing apparatus; 110: calculation unit; 120: output unit; 150: image data; 300: image processing apparatus
BACKGROUND OF THE INVENTION
The present invention relates to an information acquisition method, an image capturing apparatus, and an image processing apparatus that acquire information related to the depth distance of an object, the tilt of the object surface, and the reflectance of the object surface. In particular, the present invention relates to an information acquisition method, an image imaging apparatus, and an image processing apparatus that capture an image of reflected light obtained from an object irradiated with light and acquire information about the object.
[0002]
[Prior art]
In order to obtain distance information to an object and position information of an object, a method of 3D image measurement is known in which pattern light such as a slit or stripe pattern is projected onto the object, and the pattern projected onto the object is photographed and analyzed. It has been. Typical measurement methods include the slit light projection method (also known as the light shredding method), the coded pattern light projection method, etc., and is well-known for “Three-dimensional image measurement” by Seiji Iguchi and Kosuke Sato (Shokendo).
[0003]
Japanese Patent Application Laid-Open No. 61-155909 (publication date: July 15, 1986) and Japanese Patent Application Laid-Open No. 63-23312 (publication date: September 29, 1988) disclose light from different light source positions to an object. A distance measuring device and a distance measuring method for measuring the distance to an object based on the intensity ratio of the reflected and reflected light from the object are disclosed.
[0004]
Japanese Laid-Open Patent Publication No. 62-46207 (published on February 28, 1987) irradiates an object with two lights having different phases, and based on the phase difference of reflected light from the object, the distance to the object A distance detection device for measuring the above is disclosed.
[0005]
Hebei et al., “Development of AXi-Vision Camera 3D Imager” (3D Image Conference 99, 1999) adds ultra-high-speed intensity modulation to projected light, and accelerates objects illuminated with intensity-modulated light at high speed. A method is disclosed in which a distance is measured from a degree of intensity modulation that is taken by a camera having a shutter function and changes depending on the distance to an object.
[0006]
Japanese Patent Application Laid-Open No. 10-48336 and Japanese Patent Application Laid-Open No. 11-94520 disclose that an object is formed by projecting different light patterns having different wavelength characteristics onto an object and extracting a wavelength component of incident light from the object. A real-time range finder for calculating the distance to is disclosed.
[0007]
[Problems to be solved by the invention]
In the conventional distance measuring device and distance measuring method, the distance information is measured by irradiating the object to be measured and measuring the reflected light, but the inclination of the surface of the object to be measured is not considered. It was. For this reason, even in the case of an object at the same distance, the measurement intensity of the reflected light differs depending on the inclination of the surface, resulting in an error in distance measurement.
[0008]
In addition, the distance detection device disclosed in Japanese Patent Laid-Open No. 62-46207 requires a highly accurate phase detector for detecting a phase difference, which makes the device expensive and lacks simplicity. In addition, since the phase of the emitted light from the point of the subject is measured, the depth distribution of the entire subject cannot be measured.
[0009]
Moreover, the distance measurement method using intensity modulation disclosed in “Development of Axi-Vision Camera 3D Imaging Device” (3D Image Conference 99, 1999) performs intensity increase / decrease modulation with the same light source. Therefore, it is necessary to perform light modulation at a very high speed, and it has not been possible to measure easily.
[0010]
[Means for Solving the Problems]
In order to solve the above problems, in the first embodiment of the present invention, there is provided an information acquisition method for acquiring distance information to an object, wherein the object is irradiated with light from two optically different irradiation positions. One of an irradiation stage, an imaging stage for imaging reflected light from an object by light irradiated from two irradiation positions, and an angle at which light is irradiated to the object from two optically different irradiation positions An angle calculation step for calculating the distance, and a calculation step for calculating distance information to the measured portion based on the reflected light of the measured portion of the object and the reflected light from the vicinity of the measured portion captured at the imaging position An information acquisition method is provided.
[0011]
In the imaging stage, the reflected light from the object by the light emitted from the two irradiation positions is respectively imaged at the imaging position that is optically the same as one of the two irradiation positions. The angle at which the light is irradiated may be calculated based on the reflected light from the object to be imaged. In addition, the calculation stage is based on the distance information of the measured part and the surface of the measured part based on the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part. Tilt information may be calculated. In addition, the calculation step calculates the surface tilt information of the measured part when assuming temporary distance information to the measured part based on the intensity of the reflected light from the measured part and the incident angle of the reflected light. A first calculation stage, a second calculation stage for calculating temporary distance information in the vicinity of the part to be measured based on provisional distance information to the part to be measured and surface inclination information of the part to be measured, A third calculation step of calculating surface inclination information in the vicinity of the measured part based on the temporary distance information in the vicinity of the part, the intensity of the reflected light from the vicinity of the measured part, and the incident angle of the reflected light; A fourth calculation stage for calculating an error between the surface inclination information of the part to be measured and the surface inclination information in the vicinity of the part to be measured, and if the error is within a predetermined range, The distance information may be output as the true distance information of the part to be measured.
[0012]
In the fourth calculation stage, an error is calculated for each of a plurality of provisional distance information of the part to be measured, and in the calculation stage, the provisional distance information of the part to be measured when the error is minimized is measured as a true measurement. It is good also as the distance information of a part. In the second calculation step, temporary distance information in the vicinity of the measurement target may be calculated in the vicinity of the surface of the measurement target extended based on the surface inclination information of the measurement target. In the imaging stage, reflected light from the part to be measured and reflected light from a plurality of places in the vicinity of the part to be measured are imaged. In the fourth calculation stage, the part to be measured is taken in a plurality of places in the vicinity of the part to be measured. And calculating a temporary distance information of the measured part when the sum of squares of the error is minimized. The distance information of the true part to be measured may be used.
[0013]
The irradiation stage irradiates light of different wavelengths to the measured part of the object and the vicinity of the measured part at each irradiation position, and irradiates the object substantially simultaneously at each irradiation position. Further, wavelength selection means for selectively transmitting each of light having different wavelengths may be provided, and reflected light from an object by light at each irradiation position may be respectively imaged.
[0014]
In the second embodiment of the present invention, an image capturing apparatus that acquires distance information to an object, an irradiation unit that irradiates light to the object from two optically different irradiation positions, and two irradiations An imaging unit that images the reflected light of an object by light emitted from two irradiation positions at the same optical position as any one of the positions, and reflection of the object to be measured captured by the imaging unit An image processing apparatus that calculates distance information to the measurement unit based on light and reflected light from the vicinity of the measurement unit;
An image pickup apparatus is provided.
[0015]
You may further provide the control part which synchronizes the light emission timing in which an irradiation part irradiates light in each irradiation position, and the imaging timing in which an imaging part images reflected light. The irradiation unit irradiates light of different wavelengths at each irradiation position, irradiates light to the object substantially simultaneously at each irradiation position, and reflects light reflected by the object of light of different wavelengths to each of different wavelengths. A spectroscopic unit that separates light into main wavelength components may be further provided, and the imaging unit may capture each of the separated lights. The spectroscopic unit may include a plurality of optical filters that mainly transmit light of different wavelengths.
[0016]
According to a third aspect of the present invention, there is provided an image processing apparatus for processing an image obtained by imaging an object, and inputting image data based on reflected light from the object for each light emitted from two irradiation positions. Input unit, storage unit for storing the input image data, distance information to the unit to be measured and surface of the unit to be measured based on the object to be measured of the image data and the data in the vicinity of the unit to be measured Provided is an image processing apparatus comprising: a calculation unit that calculates inclination information; and an output unit that outputs distance information and surface inclination information calculated by the calculation unit.
[0017]
The image data is data captured by an imaging unit having a plurality of pixels that receive reflected light, and the calculation unit extracts the intensity of reflected light of each light emitted from two irradiation positions for each pixel. The distribution of the object distance information and the distribution of the surface inclination information may be calculated based on the intensity of each extracted light to be measured and the reflected light from the vicinity of the measurement target for each pixel. The calculation unit may calculate the three-dimensional information of the object based on the distribution of the distance information and the distribution of the surface inclination information, and the output unit may output the three-dimensional information of the object calculated by the calculation unit.
[0018]
The calculation unit calculates the surface inclination information of the measured part when assuming temporary distance information to the measured part based on the intensity of the reflected light from the measured part and the incident angle of the reflected light. A calculation means, a second calculation means for calculating temporary distance information in the vicinity of the measured part based on the temporary distance information to the measured part and the surface tilt information of the measured part, and the vicinity of the measured part A third calculating means for calculating surface tilt information in the vicinity of the measured part based on the provisional distance information, the intensity of the reflected light from the vicinity of the measured part, and the incident angle of the reflected light, and the measured part And a fourth calculation means for calculating an error between the surface tilt information of the measured portion and the surface tilt information in the vicinity of the measured portion, and if the error is within a predetermined range, the provisional distance information of the measured portion is obtained. Alternatively, it may be output as the true distance information of the part to be measured.
[0019]
The fourth calculating means may calculate the error for each of a plurality of pieces of provisional distance information of the measured part, and the calculation unit may take the provisional distance information of the measured part for which the error is minimized as the true distance information of the measured part. The second calculating means may calculate the provisional distance information of the vicinity of the measured part on the extension of the surface of the measured part, based on the surface inclination information of the measured part. The input unit may receive image data based on the reflected light from the measured part and the reflected light from a plurality of locations in the vicinity of the measured part; the fourth calculating means may then calculate, for each of the plurality of locations, the error between the provisional surface inclination information of the measured part and the provisional surface inclination information of the vicinity of the measured part, and the calculation unit may take the provisional distance information of the measured part for which the sum of squares of the errors is minimized as the true distance information of the measured part.
[0020]
The calculation unit may detect an edge of the object based on the image data of the object, divide the image data of the object into a plurality of regions with the edge as a boundary, and calculate the distance information of the measured part based on the provisional surface inclination information of those locations in the vicinity of the measured part whose data lie in the same region as the data of the measured part. Further, the calculation unit may calculate the distance information of the measured part based on the provisional surface inclination information of those locations, among the plurality of locations in the vicinity of the measured part, for which the difference between the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part is within a predetermined range.
[0021]
The above summary of the invention does not enumerate all the necessary features of the present invention; sub-combinations of these features can also constitute the invention.
[0022]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, the present invention will be described through embodiments. The following embodiments, however, do not limit the invention according to the claims, and not all combinations of the features described in the embodiments are necessarily essential to the solution of the invention.
[0023]
FIG. 1 is an explanatory diagram of the principle of the present invention. The object 20 is irradiated with light of intensities P1 and P2 from the irradiation position 12 and the irradiation position 14, respectively, and the reflected light of each light from the object 20 is imaged by the camera 10. The camera 10 is, for example, a charge-coupled device (CCD) having a plurality of pixels, and detects the intensity of the reflected light from the measured part 22 of the object 20 and from the vicinity 24 of the measured part in units of pixels. Based on the reflected light imaged by the camera 10, the distance L to the measured part 22 of the object 20, the surface inclination θ0 of the measured part 22, and the reflectance Robj of the surface of the measured part 22 are calculated. The irradiation positions (12, 14) may be arranged at arbitrary positions, and one of the angles at which light is irradiated onto the object 20 from the irradiation positions (12, 14) is calculated. In this example, the camera 10 is optically located at the same position as one of the irradiation positions (12, 14); the irradiation angle of the light applied to the object 20 is therefore calculated based on the incident angle of the reflected light captured by the camera 10. In the following, the case where the irradiation position 12 and the camera 10 are optically arranged at the same position will be described.
[0024]
As shown in FIG. 1, the irradiation position 14 is arranged at a distance L12 from the irradiation position 12. The distance L12 is a known value unique to the measurement system. The angles at which light is irradiated from the irradiation positions (12, 14) onto the measured part 22 of the object 20 are denoted θ1 and θ2, and the angles at which light is irradiated onto the vicinity 24 of the measured part are denoted θ1' and θ2'. Since the camera 10 is optically disposed at the same position as the irradiation position 12, the angle at which the reflected light from the measured part 22 is received is θC and the angle at which the reflected light from the vicinity 24 of the measured part is received is θC', where θC = θ1 and θC' = θ1'. Further, the distances from the irradiation position 12 to the measured part 22 and to the vicinity 24 of the measured part are L1 and L1', respectively, and the distances from the irradiation position 14 to the measured part 22 and to the vicinity 24 of the measured part are L2 and L2', respectively.
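The geometric relations above lend themselves to a short numerical sketch. The following is a hypothetical 2-D model (the coordinate frame and function name are illustrative, not from the patent): the camera and irradiation position 12 are at the origin, the optical axis is the y-axis, irradiation position 14 is offset by L12 along the baseline, and angles are measured from the optical axis. Given an assumed distance L along the camera ray at angle θ1, it returns the corresponding distance L2 and irradiation angle θ2 from position 14:

```python
import math

def second_path(L, theta1, L12):
    """For an assumed distance L along the camera ray at angle theta1
    (measured from the optical axis), return (L2, theta2): the distance
    and irradiation angle from irradiation position 14, which sits at a
    baseline offset L12 from position 12.  2-D sketch; the camera and
    irradiation position 12 are at the origin."""
    # Object point on the camera ray.
    x, y = L * math.sin(theta1), L * math.cos(theta1)
    # Vector from irradiation position 14 at (L12, 0) to the object point.
    dx, dy = x - L12, y
    L2 = math.hypot(dx, dy)
    theta2 = math.atan2(dx, dy)  # angle from the optical axis, signed
    return L2, theta2
```

With L = 200 mm, θ1 = 0, and L12 = 10 mm (the values used in the FIG. 3 example), L2 comes out slightly longer than L and θ2 is a small negative angle, as FIG. 1 suggests.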
[0025]
The intensities of the reflected light from the measured part 22 for the light irradiated from the irradiation positions (12, 14) are denoted D1 and D2, respectively. D1 and D2 are then given as:
[Expression 1]
(equation reproduced as an image in the original)
That is,
[Expression 2]
(equation reproduced as an image in the original)
[0026]
Also,
[Equation 3]
(equation reproduced as an image in the original)
so that
[Expression 4]
(equation reproduced as an image in the original)
holds. Also,
[Equation 5]
(equation reproduced as an image in the original)
and, since L12 and θ1 are known values, θ2 is given as a function of L. That is,
[Formula 6]
(equation reproduced as an image in the original)
Therefore,
[Expression 7]
(equation reproduced as an image in the original)
is obtained. Here, D1, D2, P1, P2, and θ1 are measured or known values, so the unknowns in the above equation are θ0 and L. Accordingly, the combinations (θ0, L) that satisfy the above condition can be determined. That is, when a distance L to the measured part 22 is assumed, the surface inclination θ0 of the measured part corresponding to the assumed distance L can be calculated. Similarly, for the vicinity 24 of the measured part, the combinations of the distance L' to the vicinity 24 of the measured part and its surface inclination θ0' can be obtained.
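The expressions above appear only as images in this text. To make the structure of the computation concrete, the following sketch assumes one common form of model consistent with the surrounding description: a Lambertian, inverse-square law Di = Pi·Robj·cos(θi + θ0)/Li² for i = 1, 2 (this exact form is an assumption, not taken from the patent). Under that assumption the ratio D1/D2 eliminates Robj, and for an assumed distance L the surface inclination θ0 follows in closed form:

```python
import math

def incline_for_assumed_distance(L, theta1, L12, D1, D2, P1, P2):
    """Solve for the surface inclination theta0 that satisfies the measured
    intensity ratio D1/D2 under an assumed distance L.  Hypothetical
    Lambertian inverse-square model: Di = Pi*R*cos(theta_i + theta0)/Li**2.
    Returns theta0 in radians."""
    # Geometry: irradiation position 14 at baseline offset L12 (2-D sketch).
    x, y = L * math.sin(theta1), L * math.cos(theta1)
    L2 = math.hypot(x - L12, y)
    theta2 = math.atan2(x - L12, y)
    # Ratio k eliminates the unknown reflectance R.
    k = (D1 * P2 * L**2) / (D2 * P1 * L2**2)
    # cos(theta1 + theta0) = k*cos(theta2 + theta0)
    #   =>  tan(theta0) = (cos(theta1) - k*cos(theta2)) /
    #                     (sin(theta1) - k*sin(theta2))
    return math.atan2(math.cos(theta1) - k * math.cos(theta2),
                      math.sin(theta1) - k * math.sin(theta2))
```

Because the ratio removes the unknown reflectance, only (θ0, L) remain as unknowns, matching the statement above that the combinations (θ0, L) satisfying the condition can be determined.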
[0027]
In the present invention, the distance L to the measured part 22 and the surface inclination θ0 of the measured part 22 are calculated based on the combination of the distance L to the measured part 22 of the object 20 and the surface inclination θ0 of the measured part 22, and the combination of the distance L' to the vicinity 24 of the measured part and the surface inclination θ0' of the vicinity 24 of the measured part. Hereinafter, the calculation method will be described in detail.
[0028]
FIG. 2 is an explanatory diagram of an example of a method of calculating the distance L to the measured part 22 and the surface inclination θ0 of the measured part 22. First, the distance to the measured part 22 is assumed to be La. Based on the formula described with reference to FIG. 1, the surface inclination θ0 of the measured part 22 corresponding to the assumed distance La is calculated. Next, the surface of the measured part 22 determined by the calculated surface inclination θ0 is extended, the vicinity 24 of the measured part is assumed to lie at the point where this extended surface intersects the optical path of the reflected light from the vicinity 24, and a provisional distance Lb to the vicinity 24 of the measured part is calculated. The provisional distance Lb can be calculated geometrically from the provisional distance La between the camera 10 and the measured part 22, the incident angle θ1 of the reflected light from the measured part 22, the incident angle θ1' of the reflected light from the vicinity 24 of the measured part, and the surface inclination θ0 of the measured part 22.
[0029]
Next, the surface inclination θ0' of the vicinity 24 of the measured part corresponding to the calculated provisional distance Lb is calculated. The surface inclination θ0' can be calculated by the formula described with reference to FIG. 1. Since the distance between the measured part 22 of the object 20 and the vicinity 24 of the measured part is very small, the surface inclinations of the measured part 22 and of the vicinity 24 are substantially the same. Therefore, by comparing the calculated surface inclinations θ0 and θ0', it can be determined whether the initially assumed distance La is a correct value. That is, if the difference between θ0 and θ0' is within a predetermined range, the assumed distance La can be taken as the distance L to the measured part 22.
[0030]
If the difference between the surface inclinations θ0 and θ0' is not within the predetermined range, the initially assumed distance La is judged to be in error; the distance to the measured part 22 is then set to another value and the same calculation is repeated. The distance L to the measured part 22 that satisfies the formula described with reference to FIG. 1 clearly has an upper limit, and its lower limit is zero, so the provisional distance La is searched within a finite range. For example, the true distance L to the measured part 22 may be extracted by a binary search over La within this finite range. Alternatively, the difference between θ0 and θ0' may be calculated at predetermined distance intervals within the finite range and the true distance L extracted, or the provisional distance La that minimizes the difference between θ0 and θ0' may be taken as the true distance L. Furthermore, based on the distance information to the measured part, the surface inclination information of the measured part can be calculated by the formula described with reference to FIG. 1, and the surface reflectance information of the measured part can likewise be calculated by the formula described with reference to FIG. 1.
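The search procedure above can be sketched end to end. Everything below is a hypothetical 2-D simulation under the same assumed Lambertian inverse-square model as before (camera and irradiation position 12 at the origin, position 14 offset by L12 = 10 mm along the baseline, true surface a plane facing the camera at 200 mm, as in the FIG. 3 example); it illustrates the procedure and is not the patent's own implementation:

```python
import math

L12, P1, P2 = 10.0, 1.0, 1.0  # baseline [mm] and irradiation intensities

def solve_incline(L, theta, D1, D2):
    """Surface inclination theta0 on the ray at angle `theta` for an
    assumed distance L (assumed model: Di = Pi*R*cos(theta_i + theta0)/Li**2;
    the ratio D1/D2 eliminates the unknown reflectance R)."""
    x, y = L * math.sin(theta), L * math.cos(theta)
    L2 = math.hypot(x - L12, y)
    theta2 = math.atan2(x - L12, y)
    k = (D1 * P2 * L * L) / (D2 * P1 * L2 * L2)
    return math.atan2(math.cos(theta) - k * math.cos(theta2),
                      math.sin(theta) - k * math.sin(theta2))

def neighbour_distance(La, theta0, theta1, theta1p):
    """Provisional Lb (FIG. 2): extend the surface through the assumed
    point and intersect it with the neighbouring pixel's ray."""
    n = (math.sin(theta0), -math.cos(theta0))   # surface normal
    X = (La * math.sin(theta1), La * math.cos(theta1))
    d = (math.sin(theta1p), math.cos(theta1p))  # neighbour ray direction
    return (n[0] * X[0] + n[1] * X[1]) / (n[0] * d[0] + n[1] * d[1])

def find_distance(theta1, theta1p, D1, D2, D1p, D2p,
                  lo=50.0, hi=400.0, step=1.0):
    """Scan provisional distances La over a finite range and keep the one
    for which the inclinations computed at the measured part and at its
    neighbour agree best (minimum error)."""
    best_err, best_La = None, None
    for i in range(int(round((hi - lo) / step)) + 1):
        La = lo + i * step
        th0 = solve_incline(La, theta1, D1, D2)
        Lb = neighbour_distance(La, th0, theta1, theta1p)
        th0p = solve_incline(Lb, theta1p, D1p, D2p)
        err = abs(th0 - th0p)
        if best_err is None or err < best_err:
            best_err, best_La = err, La
    return best_La

def measured(theta, plane_y=200.0, R=0.5):
    """Synthetic intensities for a plane facing the camera at 200 mm
    (surface inclination 0 degrees), used only to exercise the search."""
    t = plane_y / math.cos(theta)               # distance along the ray
    x, y = t * math.sin(theta), t * math.cos(theta)
    L2 = math.hypot(x - L12, y)
    theta2 = math.atan2(x - L12, y)
    return (P1 * R * math.cos(theta) / t**2,
            P2 * R * math.cos(theta2) / L2**2)
```

Running `find_distance` on the synthetic plane recovers 200 mm, the provisional distance at which the two inclination curves of FIG. 3 coincide.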
[0031]
FIG. 3 shows an example of the result of calculating the surface inclination θ0 of the measured part 22 and the surface inclination θ0' of the vicinity 24 at predetermined distance intervals within a predetermined range of the provisional distance La. In the graph of FIG. 3, the horizontal axis indicates the provisional distance La to the measured part 22, and the vertical axis indicates the surface inclinations of the measured part 22 and of the vicinity 24 of the measured part. In this example, the distance to the measured part 22 of the object 20 is 200 mm, and the surface inclination of the measured part 22 is 0 degrees. The distance between the irradiation position 12 and the irradiation position 14 is 10 mm, and the calculation was performed with the interval of the provisional distance La set to 10 mm.
[0032]
The data 102 indicates the surface inclination of the measured part 22 corresponding to the provisional distance La on the horizontal axis, and the data 104 indicates the surface inclination of the vicinity 24 of the measured part. At the provisional distance of 200 mm, the surface inclinations of the measured part 22 and of the vicinity 24 of the measured part coincide at 0 degrees.
[0033]
FIG. 4 shows an example of the configuration of the image capturing apparatus 100 according to the present invention. The image capturing apparatus 100 includes an irradiation unit 40, a spectroscopic unit 50, a control unit 60, an imaging unit 70, and an image processing apparatus 300. The irradiation unit 40 irradiates the object 20 with light from two optically different irradiation positions. The irradiation unit 40 may have a light source at each of the two irradiation positions and irradiate the object 20 with light from each of the light sources. Alternatively, the irradiation unit 40 may have a single light source, and the object 20 may be irradiated by moving this light source to each of the irradiation positions (12, 14). When the irradiation unit 40 has a light source at each of the two irradiation positions, the object 20 may be irradiated with light of different wavelengths substantially simultaneously from the respective irradiation positions. The spectroscopic unit 50 separates the reflected light from the object 20 into lights each having one of the different wavelengths as its main wavelength component. The imaging unit 70 captures the reflected light from the object 20 of the light irradiated from the two irradiation positions, at an optical position identical to one of the two irradiation positions, and may image each of the lights separated by the spectroscopic unit 50. The control unit 60 synchronizes the light emission timing at which the irradiation unit 40 emits light at each irradiation position with the imaging timing at which the imaging unit 70 images the reflected light.
[0034]
The image processing apparatus 300 calculates the distance information to the measured part 22 based on the reflected light from the measured part 22 of the object 20 captured by the imaging unit 70 and the reflected light from the vicinity 24 of the measured part. The image processing apparatus 300 includes an input unit 80, a storage unit 90, a calculation unit 110, and an output unit 120. The input unit 80 receives image data based on the reflected light from the object 20 for each of the lights irradiated from the two irradiation positions. The storage unit 90 stores the image data input to the input unit 80, and may store the image data for each wavelength characteristic of the light irradiated by the irradiation unit 40.
[0035]
Based on the image data stored in the storage unit 90, the calculation unit 110 calculates the distance information to the measured part 22 of the object 20 and the surface inclination information of the measured part 22. The calculation unit 110 includes: first calculating means for calculating the surface inclination information of the measured part 22 under an assumed provisional distance to the measured part, based on the intensity of the reflected light from the measured part 22 and the incident angle of that reflected light; second calculating means for calculating the provisional distance information of the vicinity 24 of the measured part based on the provisional distance information of the measured part 22 and the surface inclination information of the measured part 22; third calculating means for calculating the surface inclination information of the vicinity 24 of the measured part based on the provisional distance information of the vicinity 24 of the measured part, the intensity of the reflected light from the vicinity 24 of the measured part, and the incident angle of that reflected light; and fourth calculating means for calculating the error between the surface inclination information of the measured part 22 and the surface inclination information of the vicinity 24 of the measured part. When the error calculated by the fourth calculating means is within a predetermined range, the calculation unit 110 outputs the provisional distance information of the measured part 22 to the output unit 120 and the storage unit 90 as the true distance information of the measured part 22.
[0036]
The fourth calculating means may calculate the error for each of a plurality of pieces of provisional distance information of the measured part 22, and the calculation unit 110 may take the provisional distance information of the measured part 22 for which the error is minimized as the distance information of the measured part 22. Further, the second calculating means may calculate the provisional distance of the vicinity 24 of the measured part on the extension of the surface of the measured part 22, based on the surface inclination information of the measured part 22.
[0037]
Further, the calculation unit 110 may detect the intensity of the reflected light in units of pixels or pixel regions of the imaging unit 70 based on the image data, and may calculate the distribution of the distance information and the surface inclination information of the object 20 based on the detected intensities. The calculation unit 110 may calculate three-dimensional information of the object 20 based on the distribution of the distance information and the distribution of the surface inclination information. The calculation unit 110 may calculate the distribution of each piece of information of the measured part 22 and of the object 20 in the same manner as the methods described above with reference to the preceding figures. When the irradiation unit 40 irradiates light of different wavelengths, the calculation unit 110 preferably calculates, by the method described with reference to FIG. 6, the intensities of the reflected light that would be obtained if light of the same wavelength were irradiated from each irradiation position, and calculates the distribution of each piece of information of the measured part 22 and of the object 20 based on the calculated intensities. The storage unit 90 preferably stores the distribution of each piece of information of the measured part 22 and of the object 20 calculated by the calculation unit 110.
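Turning a per-pixel distance distribution into three-dimensional information amounts to pushing each distance out along that pixel's viewing ray. A minimal sketch, assuming a pinhole model in which each pixel's ray is given by two angles from the optical axis (the model and names are illustrative, not taken from the patent):

```python
import math

def distance_map_to_points(samples):
    """samples: iterable of (L, ax, ay), with L the distance along the
    pixel's ray and ax, ay the horizontal/vertical ray angles from the
    optical axis.  Returns one 3-D (x, y, z) point per pixel."""
    points = []
    for L, ax, ay in samples:
        # Unit ray direction for a pinhole camera looking along +z.
        dx, dy = math.tan(ax), math.tan(ay)
        norm = math.sqrt(dx * dx + dy * dy + 1.0)
        points.append((L * dx / norm, L * dy / norm, L / norm))
    return points
```

A pixel on the optical axis at distance 5 maps to (0, 0, 5); a pixel 45 degrees off-axis horizontally at distance √2 maps to (1, 0, 1).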
[0038]
The output unit 120 outputs the distance information to the measured part 22 of the object 20, the surface inclination information of the measured part 22, and the distribution of each piece of information of the object 20 calculated by the calculation unit 110. The output unit 120 may display each piece of information of the measured part 22 and each distribution of information of the object 20 on a display device using numerical values and figures, or may output them by sound. The output unit 120 may output each piece of information based on position information designated in advance, or based on items and criteria designated in advance; for example, each piece of information may be output for regions in which an item designated in advance reaches a certain level or more. The output unit 120 may also output each piece of information depending on whether the object 20 is stationary or moving.
[0039]
The input unit 80 may receive image data based on the reflected light from the measured part 22 and the reflected light from a plurality of locations in the vicinity of the measured part. In that case, the fourth calculating means of the calculation unit 110 calculates, for each of the plurality of locations in the vicinity of the measured part, the error between the provisional surface inclination information of the measured part 22 and the provisional surface inclination information of the vicinity of the measured part, and the calculation unit 110 may take the provisional distance information of the measured part 22 for which the sum of squares of the errors is minimized as the true distance information of the measured part 22.
[0040]
In this example, the image capturing apparatus 100 includes the spectroscopic unit 50. However, when the irradiation unit 40 irradiates light from the respective irradiation positions at different timings, the image capturing apparatus 100 need not include the spectroscopic unit 50; instead, the imaging unit 70 may capture the reflected light of the light irradiated from each irradiation position at timings based on the timing at which the irradiation unit 40 irradiates the light.
[0041]
FIG. 5 shows an example of image data 150 based on the reflected light from the object 20 imaged by the imaging unit 70. The object 20 has an edge 156 at which the surface inclination changes sharply over a minute distance. The calculation unit 110 detects the edge of the object based on the image data 150, divides the image data of the object 20 into a plurality of regions with the edge of the object 20 as a boundary, and calculates the distance information of the measured part 22 based on the provisional surface inclination information of those locations in the vicinity 24 of the measured part whose data lie in the same region as the data of the measured part 22. For example, as illustrated in FIG. 5, when the image data 150 of the object 20 is divided into a region 152 and a region 154 with the edge 156 as a boundary, the calculation unit 110 calculates the distance information of the measured part 22 based on the surface inclination information of the vicinity 24 of the measured part within the region 152 in which the data of the measured part 22 exists.
[0042]
In addition, the calculation unit 110 may calculate the distance information of the measured part 22 based on the provisional surface inclination information of those locations, among the plurality of locations in the vicinity of the measured part, for which the difference between the intensity of the reflected light from the measured part 22 and the intensity of the reflected light from the vicinity of the measured part is within a predetermined range.
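Both neighbour-selection criteria (same edge-delimited region, and reflected-light intensity within a predetermined range of the measured part's) can be combined in one small filter. A sketch over a one-dimensional row of pixels; the function name, label map, and threshold are illustrative assumptions:

```python
def select_neighbours(index, labels, intensity, radius=1, max_diff=0.1):
    """Return indices of neighbouring pixels usable for the inclination
    comparison: same edge-delimited region label as the measured pixel,
    and reflected-light intensity within max_diff of it."""
    picked = []
    for j in range(max(0, index - radius), min(len(labels), index + radius + 1)):
        if j == index:
            continue
        same_region = labels[j] == labels[index]
        similar = abs(intensity[j] - intensity[index]) <= max_diff
        if same_region and similar:
            picked.append(j)
    return picked
```

Pixels on the far side of an edge (different label) or with a large intensity jump are excluded, which is exactly the intent of the two criteria above.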
[0043]
FIG. 6 is a diagram illustrating an example of the image capturing apparatus 100. Components in FIG. 6 having the same reference numerals as those in FIG. 4 may have the same or similar functions and configurations as those described with reference to FIG. 4. The irradiation unit 40 irradiates the object 20 with light of different wavelengths substantially simultaneously from two optically different irradiation positions (12, 14). For example, the irradiation unit 40 may include optical filters (not shown) that mainly transmit light of the different wavelengths. In this case, the irradiation unit 40 passes the light irradiated from each irradiation position (12, 14) through the optical filters, so that light of wavelengths λa and λb is irradiated onto the object 20 substantially simultaneously.
[0044]
The light emitted from the irradiation unit 40 is reflected by the object 20 and enters the optical lens 52. The reflected light from the object 20 that has entered the optical lens 52 enters the spectroscopic unit 50 through the optical lens 52.
[0045]
The spectroscopic unit 50 includes, as an example, a polarizer that polarizes the reflected light focused by the optical lens 52, a retarder that rotates the polarization direction of the polarized light according to the wavelength, and a prism that splits light having different polarization directions. The lights split by the spectroscopic unit 50 are received by the imaging units (70a, 70b), respectively. The imaging units (70a, 70b) are, as an example, a two-plate solid-state image sensor arrangement. The light received by the imaging units (70a, 70b) is read out as electric charge generated by the photoelectric effect, converted into a digital electrical signal by an A/D converter (not shown), and input to the image processing apparatus 300. The image processing apparatus 300 calculates the distance information to the object 20, the inclination information of the surface of the object 20, and the reflectance information of the surface of the object 20 based on the input signals.
[0046]
The imaging unit 70 images each of the lights separated by the spectroscopic unit 50. The imaging unit 70 is, as an example, a solid-state image sensor, and the image of the object 20 is formed on the light-receiving surface of the solid-state image sensor. Charge is accumulated in each sensor element according to the amount of light in the formed image of the object 20, and the accumulated charge is scanned in a predetermined order and read out as an electrical signal. The solid-state image sensor is preferably a charge-coupled device (CCD) image sensor with a good signal-to-noise ratio and a large number of pixels, so that the intensity of the reflected light from the object 20 can be detected with high accuracy in units of pixels. Besides the CCD, a MOS image sensor, a Cd-Se contact image sensor, an a-Si contact image sensor, a bipolar contact image sensor, or the like may be used as the solid-state image sensor.
[0047]
The control unit 60 controls the light emission timing at which the irradiation unit 40 emits light and the imaging timing at which the imaging unit 70 images the reflected light. The control unit 60 receives a synchronization signal and controls the light emission timing and the imaging timing based on it. Further, the control unit 60 may control the intensity of the light emitted by the irradiation unit 40 and the focus, aperture, and the like of the imaging unit 70 based on the distance information to the object 20, and may also control the exposure time of the imaging unit 70.
[0048]
In this example, the irradiation unit 40 irradiates light having a single wavelength as its main wavelength component at each of the two irradiation positions; in another example, the irradiation unit 40 may irradiate light having a plurality of wavelengths as main wavelength components at the two irradiation positions. In that case, the spectroscopic unit 50 preferably splits the reflected light from the object 20 into the wavelength components of the plurality of wavelengths of the light irradiated by the irradiation unit 40. However, when the reflected light is split into all wavelength components, the spectroscopic unit 50 may become too large. Therefore, in yet another example, the irradiation unit 40 irradiates, at the irradiation position 12, light having the wavelength λa as its main wavelength component and, at the irradiation position 14, light having two wavelengths as main wavelength components whose average value is the wavelength λa. The spectroscopic unit 50 then splits the light reflected by the object 20 into the wavelength components of the light irradiated from the respective irradiation positions, that is, into two kinds of light. The image processing apparatus 300 calculates each piece of information on the object 20 based on the reflected light intensity of the light irradiated from the irradiation position 12 and 1/2 of the reflected light intensity of the light irradiated from the irradiation position 14.
[0049]
In this example, the spectroscopic unit 50 has a prism; in another example, the spectroscopic unit 50 may instead have a plurality of optical filters that mainly transmit light of different wavelengths, the optical filters being disposed on the light-receiving surface of the imaging unit 70. Moreover, when the irradiation unit 40 does not irradiate light simultaneously from the respective irradiation positions, the imaging unit 70 may image the reflected light without passing it through the spectroscopic unit 50.
[0050]
FIG. 7 shows an example of the wavelengths of the light irradiated from the irradiation positions (12, 14). In this example, light having λa as its main wavelength component is irradiated from the irradiation position 12, and light having the two wavelengths λb and λc as its main wavelength components is irradiated from the irradiation position 14. The intensity of the reflected light from the object 20 due to the light irradiated from the irradiation position 12 is W, the intensity of the reflected light due to the component of wavelength λb of the light irradiated from the irradiation position 14 is Wb, and the intensity of the reflected light due to the component of wavelength λc is Wc. In this case, the spectroscopic unit 50 splits the reflected light from the object 20 into light having the wavelength λa as its main wavelength component and light having the wavelengths λb and λc as its main wavelength components. The spectroscopic unit 50 may split the reflected light from the object 20 using, for example, an optical filter.
[0051]
As shown in FIG. 7A, when light whose main wavelength components average to the wavelength λa is irradiated from the irradiation position 14, the intensity of the reflected light from the object 20 that would be obtained if light of wavelength λa were irradiated from the irradiation position 14 can be calculated by taking the average value Wa of the intensity Wb of the reflected light due to the component of wavelength λb and the intensity Wc of the reflected light due to the component of wavelength λc. That is, taking into account the attenuation of the optical filter or the like used to split the reflected light, the intensity of the reflected light from the object 20 for light of wavelength λa irradiated from the irradiation position 14 can be calculated as 1/2 of the intensity of the reflected light due to the light having the wavelengths λb and λc as main wavelength components irradiated from the irradiation position 14.
[0052]
Further, as shown in FIG. 7B, the intensity of the reflected light from the object 20 for light of wavelength λa irradiated from the irradiation position 14 may be calculated by linearly approximating the intensities of the reflected light due to the components of wavelengths λb and λc. By irradiating the object 20 with light of different wavelengths at the respective irradiation positions and wavelength-separating and imaging the reflected light from the object 20 with the spectroscopic unit 50 as described in this example, the reflected light of the light irradiated from the two positions can be imaged simultaneously, so that the distance information to the object 20 and the surface inclination information can be calculated even when the object 20 is moving. In FIG. 7, light having two wavelength components as main wavelength components is irradiated from the irradiation position 14; in another example, light having more wavelengths as main wavelength components may be irradiated onto the object 20.
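The two estimates of the reflected-light intensity at wavelength λa described for FIGS. 7A and 7B are simple arithmetic; a sketch (function names are illustrative):

```python
def estimate_wa_average(Wb, Wc):
    """FIG. 7A: when lambda_a is the average of lambda_b and lambda_c, the
    intensity at lambda_a is approximated as the mean of the two measured
    reflected-light components."""
    return (Wb + Wc) / 2.0

def estimate_wa_linear(lam_a, lam_b, Wb, lam_c, Wc):
    """FIG. 7B: linear approximation of reflected-light intensity over
    wavelength, evaluated at lambda_a."""
    return Wb + (Wc - Wb) * (lam_a - lam_b) / (lam_c - lam_b)
```

The two estimates agree exactly when λa lies midway between λb and λc; the linear form also covers the case where it does not.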
[0053]
FIG. 8 shows an example of the image processing apparatus 300. The image processing apparatus 300 may have the same or similar functions and configurations as the image processing apparatus 300 described with reference to FIG. 4. In this example, the input unit 80, the storage unit 90, and the output unit 120 have the same or similar functions and configurations as those described with reference to FIG. 4.
[0054]
The calculation unit 110 includes a CPU 82, an input device 88, a ROM 84, a hard disk 92, a RAM 86, and a CD-ROM drive 94. The CPU 82 operates based on programs stored in the ROM 84 and the RAM 86. Data is input by the user via an input device 88 such as a keyboard or a mouse. The hard disk 92 stores data such as images and a program for operating the CPU 82. The CD-ROM drive 94 reads data or a program from the CD-ROM 96 and provides it to at least one of the RAM 86, the hard disk 92, and the CPU 82.
[0055]
The functional configuration of the program executed by the CPU 82 is the same as the functional configuration of the calculation unit 110 of the image processing apparatus 300 described with reference to FIG. 4, and includes an input module for inputting, when the object 20 is irradiated with light from two irradiation positions, the intensity of the reflected light from the object 20 for the light from each irradiation position, and a calculation module for calculating the distance information to the object 20 based on the intensity of each reflected light. The processing that the input module and the calculation module cause the CPU 82 to perform is the same as the functions and operation of the calculation unit 110 described with reference to FIG. 4. These programs are provided to the user stored in a recording medium such as the CD-ROM 96. The CD-ROM 96, as an example of a recording medium, can store a part or all of the functions of the image processing apparatus 300 described in the present application.
[0056]
The above program may be read directly from the recording medium into the RAM 86 and executed by the CPU 82. Alternatively, the program may be provided on a magnetic recording medium such as a hard disk, a magneto-optical recording medium such as an MO, a tape-shaped recording medium, a nonvolatile semiconductor memory card, or the like.
[0057]
The above program may be stored in a single recording medium, or may be divided and stored in a plurality of recording media. The program may be stored in a recording medium in compressed form. The compressed program may be decompressed, read out to another recording medium such as the RAM 86, and executed. Furthermore, the compressed program may be decompressed by the CPU 82, installed in the hard disk 92 or the like, and then read out to another recording medium such as the RAM 86 and executed.
[0058]
Further, the CD-ROM 96, as an example of the recording medium, may store the above-described program provided from a host computer via a communication network. The above program may be stored on the hard disk of the host computer, transmitted from the host computer to a computer via the communication network, read out to another recording medium such as the RAM 86, and executed.
[0059]
The recording medium storing the above-described program is used only for manufacturing the image processing apparatus 300 of the present application, and it is clear that the manufacture and sale of such a recording medium as a business constitute infringement of the patent right based on the present application.
[0060]
FIG. 9 shows an example of a flowchart of the information acquisition method according to the present invention. The information acquisition method according to the present invention includes an irradiation stage of irradiating an object with light from two optically different irradiation positions, an imaging stage of imaging the reflected light from the object produced by the light irradiated from the two irradiation positions, an angle calculation stage of calculating one of the angles at which the object is irradiated with light from the two optically different irradiation positions, and a calculation stage of calculating the distance information to the measured part based on the reflected light from the measured part of the object and the reflected light from the vicinity of the measured part imaged in the imaging stage. In this example, in the imaging stage, the reflected light from the object produced by the light irradiated from the two irradiation positions is imaged at an imaging position that is optically the same as one of the two irradiation positions, and in the angle calculation stage, the angle at which the light is irradiated is calculated based on the reflected light from the object imaged in the imaging stage.
[0061]
In the calculation stage, the distance information to the measured part and the surface tilt information of the measured part may be calculated based on the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part, both imaged in the imaging stage. Hereinafter, a method of calculating the distance information to the measured part in the calculation stage is described using the flowchart.
[0062]
In the calculation stage, first, in the first calculation stage, provisional distance information to the measured part is assumed based on the intensity of the reflected light from the measured part and the incident angle of the reflected light (S100). Next, the surface tilt information of the measured part is calculated based on the assumed distance information (S102). The surface tilt information of the measured part may be calculated from the assumed distance information by the same or a similar method as described in relation to FIGS. 1 to 3.
[0063]
Next, in the second calculation stage, provisional distance information of the vicinity of the measured part is calculated based on the provisional distance information to the measured part and the surface tilt information of the measured part (S104). The provisional distance information of the vicinity of the measured part may be calculated by the same or a similar method as described in relation to FIGS. 1 to 3.
[0064]
Next, in the third calculation stage, the surface tilt information of the vicinity of the measured part is calculated based on the provisional distance information of the vicinity of the measured part, the intensity of the reflected light from the vicinity of the measured part, and the incident angle of the reflected light (S106). The surface tilt information of the vicinity of the measured part may be calculated by the same or a similar method as described in relation to FIGS. 1 to 3.
[0065]
Next, in the fourth calculation stage, the error between the surface tilt information of the measured part and the surface tilt information of the vicinity of the measured part is calculated (S108). It is then determined whether the calculated error is within a predetermined range (S110); when it is, the provisional distance information of the measured part is output as the true distance information of the measured part (S112). When the calculated error is outside the predetermined range, the provisional distance information of the measured part is set to another value and the process returns to the first calculation stage. That is, the calculations from S100 to S110 are performed for a plurality of pieces of provisional distance information of the measured part, and the fourth calculation stage calculates an error for each of them. In the calculation stage, the provisional distance information of the measured part at which the error is minimized may be taken as the true distance information of the measured part. Based on the calculated distance information, the surface tilt information of the measured part can also be calculated.
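The S100 to S112 loop can be sketched as follows. This is a hedged sketch of the search structure only: the model functions `tilt_of`, `extrapolate`, and `neighbor_tilt_of` stand in for the intensity-based calculations of FIGS. 1 to 3, which are not reproduced here, and the toy lambdas below are illustrative assumptions rather than the patent's actual formulas.

```python
# Sketch of the iterative distance search (S100-S112). The three callables
# abstract the reflected-intensity model; only the control flow is from
# the flowchart, the toy model below is an assumption for demonstration.

def find_distance(candidates, tilt_of, extrapolate, neighbor_tilt_of, tol):
    best_err, best_d = None, None
    for d in candidates:                    # S100: assume a provisional distance
        theta = tilt_of(d)                  # S102: surface tilt of the measured part
        d_nb = extrapolate(d, theta)        # S104: provisional neighbor distance
        theta_nb = neighbor_tilt_of(d_nb)   # S106: surface tilt of the neighborhood
        err = abs(theta - theta_nb)         # S108: tilt error
        if err <= tol:                      # S110: within the predetermined range?
            return d                        # S112: output as the true distance
        if best_err is None or err < best_err:
            best_err, best_d = err, d
    return best_d                           # minimum-error candidate ([0065])

# Toy model in which the tilt error vanishes at d = 2.0 (illustrative only).
tilt_of = lambda d: 0.1 * d
extrapolate = lambda d, th: d + 0.05 * th
neighbor_tilt_of = lambda dn: 0.1 * (dn - 0.05 * 0.2)

print(find_distance([1.5, 2.0, 2.5], tilt_of, extrapolate, neighbor_tilt_of, 1e-6))  # -> 2.0
```

If no candidate falls within the tolerance, the function returns the candidate with the smallest error, matching the minimum-error variant described in the text.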
[0066]
Similarly to the method described with reference to FIG. 2, the second calculation stage may calculate the provisional distance information of the vicinity of the measured part at points in the vicinity obtained by extending the surface of the measured part, based on the surface tilt information of the measured part. In the imaging stage, the reflected light from the measured part and the reflected light from a plurality of places in the vicinity of the measured part are imaged. In the fourth calculation stage, the errors between the provisional surface tilt information of the measured part and the provisional surface tilt information of the vicinity of the measured part are calculated at the plurality of places in the vicinity, and in the calculation stage, the provisional distance information of the measured part at which the sum of squares of the errors is minimized may be taken as the true distance information of the measured part. By performing the calculation described in the flowchart based on image data obtained from the reflected light from a plurality of places in the vicinity of the measured part, distance information to the measured part with reduced noise and calculation error can be obtained.
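The multi-neighbor variant above amounts to a least-squares selection over candidate distances, which can be sketched as follows. Here `tilt_errors` is a hypothetical callable returning the per-neighbor tilt errors for a provisional distance, standing in for running S100 to S108 at several places near the measured part.

```python
# Hypothetical sketch: choose the provisional distance whose sum of squared
# per-neighbor tilt errors is smallest ([0066]). tilt_errors(d) abstracts
# the per-neighbor S100-S108 computation; the toy model is illustrative.

def best_distance(candidates, tilt_errors):
    return min(candidates, key=lambda d: sum(e * e for e in tilt_errors(d)))

# Toy error model whose per-neighbor errors all vanish at d = 2.0.
toy_errors = lambda d: [d - 2.0, 0.5 * (d - 2.0), 0.25 * (d - 2.0)]
print(best_distance([1.5, 2.0, 2.5], toy_errors))  # -> 2.0
```

Summing squared errors over several neighbors is what suppresses the noise and calculation error mentioned in the paragraph above: a single noisy neighbor no longer determines the chosen distance.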
[0067]
In the irradiation stage, light of a different wavelength may be irradiated from each irradiation position onto the measured part of the object and the vicinity of the measured part, and the object may be irradiated substantially simultaneously from the respective irradiation positions. Further, wavelength selection means for selectively transmitting each of the lights of different wavelengths may be provided, and the reflected light from the object produced by the light from each irradiation position may be imaged separately.
[0068]
In the information acquisition method described with reference to FIG. 9, the distance information and the surface tilt information of the measured part may be calculated in the same or a similar manner as the methods described above.
[0069]
Although the present invention has been described above using an embodiment, the technical scope of the present invention is not limited to the scope described in the above embodiment. It will be apparent to those skilled in the art that various modifications and improvements can be made to the above embodiment. It is apparent from the description of the claims that embodiments with such modifications or improvements can also be included in the technical scope of the present invention.
[0070]
【The invention's effect】
As is clear from the above description, according to the present invention, the distance information to an object can be obtained by imaging the reflected light from the object irradiated with light, while eliminating the error caused by the inclination of the surface of the object. Further, the tilt information of the surface of the object 20 and the reflectance information of the surface of the object 20 can be acquired.
[Brief description of the drawings]
FIG. 1 is a diagram illustrating the principle of the present invention.
FIG. 2 is an explanatory diagram of an example of a method of calculating the distance L to the measured part 22 and the surface inclination θ0 of the measured part 22.
FIG. 3 shows an example of the result of calculating the surface inclination θ0 and the surface inclination θ0′ at predetermined distance intervals within a predetermined range with respect to a provisional distance La.
FIG. 4 shows an example of the configuration of an image capturing apparatus 100 according to the present invention.
FIG. 5 shows an example of image data 150 based on the reflected light from the object 20 imaged by the imaging unit 70.
FIG. 6 is a diagram illustrating an example of an image capturing apparatus.
FIG. 7 shows an example of the wavelength of light emitted from the irradiation position (12, 14).
FIG. 8 shows an example of an image processing apparatus 300.
FIG. 9 shows an example of a flowchart of an information acquisition method according to the present invention.
[Explanation of symbols]
10 ... Camera, 12, 14 ... Irradiation position
20 ... Object, 22 ... Measured part
24 ... Vicinity of measured part, 40 ... Irradiation unit
50 ... Spectroscopic unit, 52 ... Optical lens
60 ... Control unit, 70 ... Imaging unit
80 ... Input unit, 82 ... CPU
84 ... ROM, 86 ... RAM
88 ... Input device, 92 ... Hard disk
94 ... CD-ROM drive, 96 ... CD-ROM
90 ... Recording unit, 100 ... Image capturing apparatus
110 ... Calculation unit, 120 ... Output unit
150 ... Image data, 300 ... Image processing apparatus

Claims (34)

1. An information acquisition method for acquiring distance information to an object and surface tilt information of the object, comprising:
an angle calculation stage of calculating one of the angles at which the object is irradiated with light from two optically different irradiation positions; and
a calculation stage of calculating the distance information to a measured part of the object and the surface tilt information of the measured part based on the reflected light from the measured part and the reflected light from the vicinity of the measured part produced by the light irradiated from the two irradiation positions,
wherein the calculation stage includes:
a first calculation stage of calculating the surface tilt information of the measured part, assuming provisional distance information to the measured part, based on the intensity of the reflected light from the measured part and the angle of irradiating the object with light calculated in the angle calculation stage;
a second calculation stage of calculating provisional distance information of the vicinity of the measured part based on the provisional distance information to the measured part and the surface tilt information of the measured part;
a third calculation stage of calculating surface tilt information of the vicinity of the measured part based on the provisional distance information of the vicinity of the measured part, the intensity of the reflected light from the vicinity of the measured part, and the angle of irradiating the object with light calculated in the angle calculation stage; and
a fourth calculation stage of calculating an error between the surface tilt information of the measured part and the surface tilt information of the vicinity of the measured part,
and wherein, when the error is within a predetermined range, the provisional distance information of the measured part is output as the true distance information of the measured part.
2. The information acquisition method according to claim 1, wherein the fourth calculation stage calculates the error for each of a plurality of pieces of provisional distance information of the measured part, and the calculation stage takes the provisional distance information of the measured part at which the error is minimized as the true distance information of the measured part.
3. The information acquisition method according to claim 1 or claim 2, wherein the second calculation stage calculates the provisional distance information of the vicinity of the measured part in a vicinity obtained by extending the surface of the measured part, based on the surface tilt information of the measured part.
4. The information acquisition method according to any one of claims 1 to 3, wherein the calculation stage extracts, for each pixel, the intensity of the reflected light of each light irradiated from the two irradiation positions, and calculates a distribution of the distance information and a distribution of the surface tilt information of the object based on the extracted per-pixel intensities of the reflected light of each light from the measured part and from the vicinity of the measured part.
5. The information acquisition method according to claim 4, wherein the calculation stage calculates three-dimensional information of the object based on the distribution of the distance information and the distribution of the surface tilt information.
6. The information acquisition method according to any one of claims 1 to 5, wherein the fourth calculation stage calculates an error between the provisional surface tilt information of the measured part and the provisional surface tilt information of the vicinity of the measured part at each of a plurality of places in the vicinity of the measured part, and the calculation stage takes the provisional distance information of the measured part at which the sum of squares of the errors is minimized as the true distance information of the measured part.
7. The information acquisition method according to claim 6, wherein the calculation stage detects an edge of the object based on image data of the object, and calculates the distance information of the measured part based on the provisional surface tilt information of those places in the vicinity of the measured part whose data lie in the same region as the data of the measured part, among regions into which the image data of the object is divided with the edge of the object as a boundary.
8. The information acquisition method according to claim 7, wherein the calculation stage calculates the distance information of the measured part based on the provisional surface tilt information of those places in the vicinity of the measured part at which the difference between the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part is within a predetermined range.
9. The information acquisition method according to any one of claims 1 to 8, further comprising:
an irradiation stage of irradiating the object with light from the two optically different irradiation positions; and
an imaging stage of imaging the reflected light from the object produced by the light irradiated from the two irradiation positions,
wherein the calculation stage calculates the distance information to the measured part and the surface tilt information of the measured part based on the reflected light from the measured part of the object and the reflected light from the vicinity of the measured part imaged in the imaging stage.
10. The information acquisition method according to claim 9, wherein the imaging stage images the reflected light from the object produced by the light irradiated from the two irradiation positions at an imaging position that is optically the same as one of the two irradiation positions, and the angle calculation stage calculates the angle of irradiating the light based on the reflected light from the object imaged in the imaging stage.
11. The information acquisition method according to claim 9 or claim 10, further comprising a control stage of synchronizing, at each of the irradiation positions, the light emission timing at which the irradiation stage irradiates the light and the imaging timing at which the imaging stage images the reflected light.
12. The information acquisition method according to claim 11, wherein the irradiation stage irradiates light of a different wavelength at each of the irradiation positions and irradiates the object substantially simultaneously at the respective irradiation positions, the method further comprising a spectroscopic stage of separating the reflected light from the object of the lights of different wavelengths into lights each having one of the different wavelengths as a main wavelength component, wherein the imaging stage images each of the separated lights.
13. The information acquisition method according to claim 12, wherein the spectroscopic stage uses a plurality of optical filters each mainly transmitting one of the lights of different wavelengths.
14. An image capturing apparatus for acquiring distance information to an object and surface tilt information of the object, comprising:
an angle calculation unit that calculates one of the angles at which the object is irradiated with light from two optically different irradiation positions; and
a calculation unit that calculates the distance information to a measured part of the object and the surface tilt information of the measured part based on the reflected light from the measured part and the reflected light from the vicinity of the measured part produced by the light irradiated from the two irradiation positions,
wherein the calculation unit includes:
first calculation means for calculating the surface tilt information of the measured part, assuming provisional distance information to the measured part, based on the intensity of the reflected light from the measured part and the angle of irradiating the object with light calculated by the angle calculation unit;
second calculation means for calculating provisional distance information of the vicinity of the measured part based on the provisional distance information to the measured part and the surface tilt information of the measured part;
third calculation means for calculating surface tilt information of the vicinity of the measured part based on the provisional distance information of the vicinity of the measured part, the intensity of the reflected light from the vicinity of the measured part, and the angle of irradiating the object with light calculated by the angle calculation unit; and
fourth calculation means for calculating an error between the surface tilt information of the measured part and the surface tilt information of the vicinity of the measured part,
and wherein, when the error is within a predetermined range, the apparatus outputs the provisional distance information of the measured part as the true distance information of the measured part.
15. The image capturing apparatus according to claim 14, wherein the fourth calculation means calculates the error for each of a plurality of pieces of provisional distance information of the measured part, and the calculation unit takes the provisional distance information of the measured part at which the error is minimized as the true distance information of the measured part.
16. The image capturing apparatus according to claim 14 or claim 15, wherein the second calculation means calculates the provisional distance information of the vicinity of the measured part in a vicinity obtained by extending the surface of the measured part, based on the surface tilt information of the measured part.
17. The image capturing apparatus according to any one of claims 14 to 16, wherein the calculation unit extracts, for each pixel, the intensity of the reflected light of each light irradiated from the two irradiation positions, and calculates a distribution of the distance information and a distribution of the surface tilt information of the object based on the extracted per-pixel intensities of the reflected light of each light from the measured part and from the vicinity of the measured part.
18. The image capturing apparatus according to claim 17, wherein the calculation unit calculates three-dimensional information of the object based on the distribution of the distance information and the distribution of the surface tilt information.
19. The image capturing apparatus according to any one of claims 14 to 18, wherein the fourth calculation means calculates an error between the provisional surface tilt information of the measured part and the provisional surface tilt information of the vicinity of the measured part at each of a plurality of places in the vicinity of the measured part, and the calculation unit takes the provisional distance information of the measured part at which the sum of squares of the errors is minimized as the true distance information of the measured part.
20. The image capturing apparatus according to claim 19, wherein the calculation unit detects an edge of the object based on image data of the object, and calculates the distance information of the measured part based on the provisional surface tilt information of those places in the vicinity of the measured part whose data lie in the same region as the data of the measured part, among regions into which the image data of the object is divided with the edge of the object as a boundary.
21. The image capturing apparatus according to claim 20, wherein the calculation unit calculates the distance information of the measured part based on the provisional surface tilt information of those places in the vicinity of the measured part at which the difference between the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part is within a predetermined range.
22. The image capturing apparatus according to any one of claims 14 to 21, further comprising:
an irradiation unit that irradiates the object with light from the two optically different irradiation positions; and
an imaging unit that images the reflected light from the object produced by the light irradiated from the two irradiation positions,
wherein the calculation unit calculates the distance information to the measured part and the surface tilt information of the measured part based on the reflected light from the measured part of the object and the reflected light from the vicinity of the measured part imaged by the imaging unit.
23. The image capturing apparatus according to claim 22, wherein the imaging unit images the reflected light from the object produced by the light irradiated from the two irradiation positions at an imaging position that is optically the same as one of the two irradiation positions, and the angle calculation unit calculates the angle of irradiating the light based on the reflected light from the object imaged by the imaging unit.
24. The image capturing apparatus according to claim 22 or claim 23, further comprising a control unit that synchronizes, at each of the irradiation positions, the light emission timing at which the irradiation unit irradiates the light and the imaging timing at which the imaging unit images the reflected light.
25. The image capturing apparatus according to claim 24, wherein the irradiation unit irradiates light of a different wavelength at each of the irradiation positions and irradiates the object substantially simultaneously at the respective irradiation positions, the apparatus further comprising a spectroscopic unit that separates the reflected light from the object of the lights of different wavelengths into lights each having one of the different wavelengths as a main wavelength component, wherein the imaging unit images each of the separated lights.
26. The image capturing apparatus according to claim 25, wherein the spectroscopic unit uses a plurality of optical filters each mainly transmitting one of the lights of different wavelengths.
27. An image processing apparatus for acquiring distance information to an object and surface tilt information of the object, comprising:
an angle calculation unit that calculates one of the angles at which the object is irradiated with light from two optically different irradiation positions; and
a calculation unit that calculates the distance information to a measured part of the object and the surface tilt information of the measured part based on the reflected light from the measured part and the reflected light from the vicinity of the measured part produced by the light irradiated from the two irradiation positions,
wherein the calculation unit includes:
first calculation means for calculating the surface tilt information of the measured part, assuming provisional distance information to the measured part, based on the intensity of the reflected light from the measured part and the angle of irradiating the object with light calculated by the angle calculation unit;
second calculation means for calculating provisional distance information of the vicinity of the measured part based on the provisional distance information to the measured part and the surface tilt information of the measured part;
third calculation means for calculating surface tilt information of the vicinity of the measured part based on the provisional distance information of the vicinity of the measured part, the intensity of the reflected light from the vicinity of the measured part, and the angle of irradiating the object with light calculated by the angle calculation unit; and
fourth calculation means for calculating an error between the surface tilt information of the measured part and the surface tilt information of the vicinity of the measured part,
and wherein, when the error is within a predetermined range, the apparatus outputs the provisional distance information of the measured part as the true distance information of the measured part.
28. The image processing apparatus according to claim 27, wherein the fourth calculation means calculates the error for each of a plurality of pieces of provisional distance information of the measured part, and the calculation unit takes the provisional distance information of the measured part at which the error is minimized as the true distance information of the measured part.
The image processing apparatus according to claim 27 or claim 28, wherein the second calculating means calculates the provisional distance information for the vicinity of the measured part on the extension of the surface of the measured part, based on the surface-tilt information of the measured part.
The image processing apparatus according to any one of claims 27 to 29, wherein the calculation unit extracts, for each pixel, the intensity of the reflected light of each of the lights emitted from the two irradiation positions, and calculates the distribution of the distance information and the distribution of the surface-tilt information of the object based on the per-pixel intensities of the reflected light from the measured part and from the vicinity of the measured part.
The image processing apparatus according to claim 30, wherein the calculation unit calculates three-dimensional information of the object based on the distribution of the distance information and the distribution of the surface-tilt information.
The image processing apparatus according to any one of claims 27 to 31, wherein the fourth calculating means calculates, at each of a plurality of locations in the vicinity of the measured part, the error between the provisional surface-tilt information of the measured part and the provisional surface-tilt information of that location, and the calculation unit takes, as the true distance information of the measured part, the provisional distance information for which the sum of the squares of the errors is minimized.
The image processing apparatus according to claim 32, wherein the calculation unit detects an edge of the object based on image data of the object, divides the image data of the object into a plurality of regions with the edge as the boundary, and calculates the distance information of the measured part based on the provisional surface-tilt information of those locations, among the plurality of locations in the vicinity of the measured part, whose data lie in the same region as the data of the measured part.
The image processing apparatus according to claim 33, wherein the calculation unit calculates the distance information of the measured part based on the provisional surface-tilt information of those locations, among the plurality of locations in the vicinity of the measured part, for which the difference between the intensity of the reflected light from the measured part and the intensity of the reflected light from the vicinity of the measured part is within a predetermined range.
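The claims above describe a hypothesis-test loop: assume a provisional distance to the measured part, derive its surface tilt from the reflected-light intensity, extend that surface to predict the neighbouring point's distance, recover the neighbour's tilt independently from its own intensity, and keep the distance hypothesis that minimizes the squared tilt disagreement. The Python sketch below illustrates that loop under simplified, assumed physics (a single Lambertian reflectance model `I = K·cos(tilt)/d²`, one neighbouring pixel at angular offset `PHI`, a known source constant `K`); none of these numeric model choices come from the patent itself.

```python
import math

# --- Hypothetical forward model (illustrative assumptions, not from the patent) ---
K = 1.0    # combined source power / albedo constant (assumed known)
PHI = 0.01 # angular offset (rad) of the neighbouring pixel's viewing ray (assumed)

def tilt_from_intensity(intensity, d):
    """First/third calculating means: invert the assumed reflectance model
    I = K*cos(tilt)/d**2 to get the surface tilt implied by a measured
    intensity and a provisional distance; None if the hypothesis is
    physically impossible (implied cosine > 1)."""
    c = intensity * d * d / K
    return math.acos(c) if c <= 1.0 else None

def neighbour_distance(d, tilt):
    """Second calculating means: extend the plane through the measured part
    (slope tan(tilt)) until it meets the neighbouring pixel's ray."""
    return d / (1.0 - math.tan(tilt) * PHI)

def estimate_distance(i_meas, i_nb, candidates):
    """Fourth calculating means plus the minimization over provisional
    distances: keep the candidate whose tilt at the measured part best
    agrees with the tilt recovered independently at the neighbour."""
    best = None
    for d in candidates:
        tilt = tilt_from_intensity(i_meas, d)
        if tilt is None:
            continue
        d_nb = neighbour_distance(d, tilt)
        tilt_nb = tilt_from_intensity(i_nb, d_nb)
        if tilt_nb is None:
            continue
        err = (tilt - tilt_nb) ** 2  # squared tilt disagreement
        if best is None or err < best[0]:
            best = (err, d)
    return best[1]

# Synthetic scene: a plane of slope 0.3 at true distance 2.0.
d_true, slope = 2.0, 0.3
theta = math.atan(slope)
i_meas = K * math.cos(theta) / d_true ** 2
d_nb_true = d_true / (1.0 - slope * PHI)
i_nb = K * math.cos(theta) / d_nb_true ** 2

candidates = [1.5 + i * 0.005 for i in range(120)]
print(estimate_distance(i_meas, i_nb, candidates))  # ≈ 2.0
```

In the real apparatus, summing this squared error over several neighbouring locations (claim 32) and gating the neighbours by edge regions and intensity similarity (claims 33 and 34) makes the minimization robust where the local-plane assumption breaks down.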
JP2000275176A 2000-02-16 2000-09-11 Information acquisition method, image capturing apparatus, and image processing apparatus Expired - Fee Related JP4141627B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2000275176A JP4141627B2 (en) 2000-09-11 2000-09-11 Information acquisition method, image capturing apparatus, and image processing apparatus
EP01103436A EP1126412B1 (en) 2000-02-16 2001-02-14 Image capturing apparatus and distance measuring method
US09/784,028 US6538751B2 (en) 2000-02-16 2001-02-16 Image capturing apparatus and distance measuring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2000275176A JP4141627B2 (en) 2000-09-11 2000-09-11 Information acquisition method, image capturing apparatus, and image processing apparatus

Publications (2)

Publication Number Publication Date
JP2002081908A JP2002081908A (en) 2002-03-22
JP4141627B2 true JP4141627B2 (en) 2008-08-27

Family

ID=18760836

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000275176A Expired - Fee Related JP4141627B2 (en) 2000-02-16 2000-09-11 Information acquisition method, image capturing apparatus, and image processing apparatus

Country Status (1)

Country Link
JP (1) JP4141627B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101591471B1 (en) * 2008-11-03 2016-02-04 삼성전자주식회사 apparatus and method for extracting feature information of object and apparatus and method for generating feature map
KR101446173B1 (en) * 2013-02-21 2014-10-01 주식회사 고영테크놀러지 Tracking system and method for tracking using the same
JP2023130226A (en) * 2022-03-07 2023-09-20 東レエンジニアリング株式会社 Fluorescence inspection device


Similar Documents

Publication Publication Date Title
EP1126412B1 (en) Image capturing apparatus and distance measuring method
JP4040825B2 (en) Image capturing apparatus and distance measuring method
US6549288B1 (en) Structured-light, triangulation-based three-dimensional digitizer
EP1183499B1 (en) Three dimensional optical scanning
WO2000070303A1 (en) Color structured light 3d-imaging system
US20140168424A1 (en) Imaging device for motion detection of objects in a scene, and method for motion detection of objects in a scene
JP3979670B2 (en) 3D color imaging
US6765606B1 (en) Three dimension imaging by dual wavelength triangulation
JP4090860B2 (en) 3D shape measuring device
JP2657505B2 (en) Mark position detecting device and mark arrangement method
JP4516590B2 (en) Image capturing apparatus and distance measuring method
JP3414624B2 (en) Real-time range finder
JP4150506B2 (en) Image capturing apparatus and distance measuring method
JP4141627B2 (en) Information acquisition method, image capturing apparatus, and image processing apparatus
US5701173A (en) Method and apparatus for reducing the unwanted effects of noise present in a three dimensional color imaging system
JP2003185412A (en) Apparatus and method for acquisition of image
JP3965894B2 (en) Image processing apparatus and image processing method
JP2004110804A (en) Three-dimensional image photographing equipment and method
JP4204746B2 (en) Information acquisition method, imaging apparatus, and image processing apparatus
JP3852285B2 (en) 3D shape measuring apparatus and 3D shape measuring method
JP4266286B2 (en) Distance information acquisition device and distance information acquisition method
JP2002015306A (en) Three-dimensional image generating device and three- dimensional image generating method
JPH11211470A (en) Distance measuring equipment
JPH0629702B2 (en) Spectral photo analyzer
JP2002286423A (en) Sample height measurement method, confocal microscope, and record medium with height measurement program of the confocal microscope recorded thereon, and the program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20050909

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20061208

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070704

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070918

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20071116

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080527

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080611

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110620

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120620

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130620

Year of fee payment: 5

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees