JP4406796B2 - Non-contact three-dimensional object shape measuring method and apparatus - Google Patents


Info

Publication number
JP4406796B2
Authority
JP
Japan
Prior art keywords
measurement
image
dimensional
section
imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP19228098A
Other languages
Japanese (ja)
Other versions
JPH11344321A (en)
Inventor
明 石井 (Akira Ishii)
Original Assignee
明 石井 (Akira Ishii)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 明 石井 (Akira Ishii)
Priority to JP19228098A
Publication of JPH11344321A
Application granted
Publication of JP4406796B2
Anticipated expiration
Legal status: Expired - Lifetime

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Description

[0001]
[Industrial Field of Application]
The present invention is used in fields such as monitoring systems, traffic measurement systems, autonomous vehicles, robots that perform operations such as welding and sealing, and apparatus that automatically inspect the external shape of products. It relates to a method and apparatus for measuring the position, distance, or shape of an object without contact, by measuring the three-dimensional position coordinates of each point on the object targeted by such a system or apparatus.
[0002]
[Prior Art]
Representative conventional non-contact methods for measuring three-dimensional position coordinates or distance are the triangulation method and the focusing method. In the former, a laser beam is projected onto the surface of the object to be measured, the light reflected from that surface is received by a television camera from a direction inclined at a fixed angle to the projection direction, and the three-dimensional position coordinates of the object in a reference coordinate system fixed to the camera, or its distance from a reference point, are determined from the change in position of the bright-spot image by the principle of triangulation. Because it measures the contour line formed as a bright line where a sheet of projected laser light intersects the object surface, the triangulation method is also called the light-sectioning method. In the latter, a focus adjustment mechanism moves the object or the imaging lens so that an image of the surface portion whose distance is to be measured is focused on the light-receiving surface (photoelectric conversion surface) of a television camera consisting of a two-dimensional array of photoelectric conversion elements such as a CCD image sensor, and the distance is determined from the displacement of the focus adjustment mechanism. To obtain distances over the entire object surface, the focus adjustment mechanism is moved continuously, the surface portions in focus at each position of the mechanism are detected by image processing, and the distance of each part of the surface is obtained. By setting up a reference coordinate system, the three-dimensional position coordinates of each point on the object surface can also be obtained. In the following description, obtaining the three-dimensional position coordinates of each point and obtaining the distance between different three-dimensional positions rely on the same measurement method and belong technically to the same category, so the two are used without particular distinction.
[0003]
[Problems to Be Solved by the Invention]
In the triangulation method, the light-receiving optical system must, in principle, be inclined with respect to the measurement-light projection optical system in order to receive the light reflected and scattered from the object. This creates blind spots in the measured part of the object; in particular, when the object is a dense cluster of many components, triangulation is practically inapplicable. Moreover, because the laser beam is projected from the side, the projection optical system must be placed in the measurement space, which it partly obstructs. Furthermore, since the projection optical system protrudes sideways, the measuring apparatus as a whole tends to be large, which is a disadvantage when mounting it on a robot as a visual sensor.
[0004]
The conventional focusing method, on the other hand, has few blind spots, but because it uses a focus adjustment mechanism that moves along the optical axis of the imaging optical system, that is, toward the object, the mechanism takes time to operate and the distance to the object cannot be measured instantaneously; in addition, the object cannot move perpendicular to the optical axis, that is, laterally, during the measurement period. The conventional focusing method is therefore unsuitable when the object moves during measurement, as in shape measurement of long objects, continuous measurement of products carried on a factory conveyor belt, distance sensors for robot profiling control, or measurement of moving objects outdoors.
[0005]
The object of the present invention is to enable, within the focusing method, which suffers little from blind spots in distance and three-dimensional position measurement, both high-speed measurement and measurement of the distance or three-dimensional position coordinates of an object moving laterally with respect to the optical axis, by using an imaging apparatus that requires no focus adjustment mechanism along the optical axis.
[0006]
[Means for Solving the Problems]
To achieve this object, the non-contact three-dimensional object shape measurement method of the present invention, based on the focusing method, assumes in the imaging apparatus a measurement section that obliquely intersects the optical axis of the imaging lens, and places the light-receiving surface of a two-dimensional image sensor, a two-dimensional array of photoelectric conversion elements such as CCD elements, on the surface where the imaging lens forms the image of the measurement section. The contour line of the object, formed where the object surface intersects the measurement section extending obliquely in the depth direction, is detected from the object image as the in-focus image region, and the object surface is scanned laterally by the measurement section so that contour lines at successive positions on the object surface are detected continuously, yielding the position and shape of the object surface.
[0007]
In the imaging apparatus of the present invention, therefore, unlike an ordinary television camera, the light-receiving surface of the image sensor that receives and photoelectrically converts the optical image is not placed perpendicular to the optical axis of the imaging lens; instead, depending on the inclination of the measurement section with respect to the optical axis, the angle of the light-receiving surface to the optical axis of the lens is shifted away from 90°. As a result, object surfaces near the measurement section, which extends obliquely in the depth direction, are in focus without any focus adjustment of the lens, and by extracting the in-focus region from the image signal obtained at the sensor output by image processing, the contour line of the oblique section through the object can be obtained from the in-focus region.
[0008]
[Operation]
With the above arrangement of the imaging optical system, the contour shape of a section through the object is obtained at the same time as the image is acquired, without any focus adjustment of the optical system, enabling high-speed measurement. Since the object surface is scanned laterally by the measurement section, measurement is also possible when the object moves laterally. By actively using the lateral motion of the object to scan its surface, the method can easily be applied to shape measurement of products on a conveyor belt; measurement of long objects therefore also becomes possible.
[0009]
[Embodiments]
Embodiments will now be described with reference to the drawings. FIG. 1 shows the basic configuration of a measuring apparatus implementing the three-dimensional object shape measurement method of the present invention, in a sectional view of the imaging optical system. The optical axis 4 of the imaging lens 2 lies in the plane of the drawing; the two-dimensional image sensor 3, a two-dimensional array of photoelectric conversion elements such as CCD elements, is placed perpendicular to the plane of the drawing, and its light-receiving surface (ab) 7 is inclined to the optical axis by an angle φ as shown, forming the image plane, with respect to the imaging lens 2, of the measurement section (AB) 8, which is likewise perpendicular to the plane of the drawing. That is, the measurement section (AB) 8 is the in-focus plane of the imaging optical system whose image plane is the light-receiving surface (ab) 7, and the ends a and b of the light-receiving surface (ab) 7 are the images of the ends A and B of the measurement section, respectively. Let the inclination angle of the measurement section (AB) 8 to the optical axis 4 be θ as shown, let H be the foot of the perpendicular dropped from the lens center (in general, the object-side principal point) O of the imaging lens 2 onto the plane containing the measurement section (AB) 8, let the distance OH be the distance d of the measurement section (AB) 8 from the imaging lens 2, and let f be the focal length of the imaging lens 2. These quantities must then satisfy the relation d = f × cosθ × (tanθ + tanφ).
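As an illustrative check (not part of the patent text), the relation d = f × cosθ × (tanθ + tanφ) can be evaluated numerically; the function name and the sample focal length are assumptions made for this sketch:

```python
import math

def required_distance(f, theta, phi):
    """Perpendicular lens-to-section distance d implied by the relation
    d = f * cos(theta) * (tan(theta) + tan(phi)). Angles in radians."""
    return f * math.cos(theta) * (math.tan(theta) + math.tan(phi))

f = 50.0  # focal length in mm (illustrative value)
theta = phi = math.radians(45.0)
d = required_distance(f, theta, phi)

# For theta = phi = 45 deg the relation reduces to d = f*sqrt(2),
# which is the same as f = d*sin(theta) treated in the 45-deg case below.
assert abs(d - f * math.sqrt(2)) < 1e-9
assert abs(f - d * math.sin(theta)) < 1e-9
print(round(d, 4))  # 70.7107
```

Note that cosθ(tanθ + tanφ) = sinθ + cosθ·tanφ, so for any pair (θ, φ) the required perpendicular distance follows directly from f.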
[0010]
In particular, if θ is set to 45° and the above quantities are chosen so that f = d × sinθ holds, the angle φ also becomes 45°; steep inclination is avoided, the angle of incidence of the image on the light-receiving surface (ab) 7 does not become extreme (the incident light does not arrive nearly parallel to the surface), and the invention can be implemented without degrading the light-receiving conditions. In this case the distance from the center O of the imaging lens 2 to the center of the measurement section (AB) 8 (the point on the optical axis) is 2f, the imaging magnification near the optical axis 4 is 1, and the light-receiving surface (ab) 7 and the measurement section (AB) 8 correspond one-to-one with nearly the same dimensions throughout. Strictly, the magnification is 1 on the optical axis 4, increases gradually above 1 toward end A of the measurement section (AB) 8, and falls below 1 toward end B. Under implementation conditions where this variation is negligible, the magnification may be taken as 1 over the whole measurement section (AB) 8; as described later, this makes it easy to assign three-dimensional space coordinates on the measurement section (AB) 8 to each point of the image of the measurement object 1 obtained on the light-receiving surface (ab) 7, simplifying the computation that yields the three-dimensional shape of the object. FIG. 1 shows an embodiment in which the angles θ and φ are both 45°. In the conventional triangulation (light-sectioning) method, an optical system for projecting the laser beam is placed to the side of the imaging optical system, on the extension of the measurement section; the present invention needs no such lateral optical system and is therefore compact and simple.
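A short numeric sketch (an illustration, not from the patent) of why the 45° configuration gives unit magnification on the axis: with f = d sinθ, the axial distance from O to the section center is d/sinθ = 2f, and the thin-lens imaging equation then gives an image distance of 2f as well:

```python
import math

f = 50.0                     # focal length (illustrative value)
theta = math.radians(45.0)
d = f / math.sin(theta)      # f = d*sin(theta)  ->  d = f*sqrt(2)

# Perpendicular distance d corresponds to an axial distance d/sin(theta)
so = d / math.sin(theta)     # object distance along the optical axis (= 2f)

# Thin-lens imaging equation: 1/so + 1/si = 1/f
si = 1.0 / (1.0 / f - 1.0 / so)
magnification = si / so

assert abs(so - 2.0 * f) < 1e-9
assert abs(magnification - 1.0) < 1e-9
```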
[0011]
The contour line 9 of the measurement object 1 (shown hatched) is determined as the three-dimensional curve or straight line formed where the surface of the measurement object 1 intersects the measurement section (AB) 8. In FIG. 1, when the measurement object 1 is placed on a carrier 6 such as a conveyor belt and transported horizontally at constant speed in the direction of the arrow (the opposite direction is also possible), generally perpendicular to or crossing the optical axis 4, the surface of the measurement object 1 is scanned at constant speed by the measurement section (AB) 8. While the measurement object 1 is illuminated from all around by an annular light source 5 (when this is built from a bundle of optical fibers, a lamp house and a fiber guide connecting it to the annular source are needed, but these are not shown), the two-dimensional image sensor 3 images the measurement object 1 at fixed time intervals, capturing images of the sectional contour line 9 at regularly spaced positions on the measurement object 1. If the transport speed is not constant, the contour lines 9 are of course unequally spaced. Since the contour line 9 lies on the measurement section (AB) 8, its image is always in focus and can easily be extracted as the in-focus image region from the image of the measurement object 1 delivered by the two-dimensional image sensor 3, using conventional image processing techniques.
[0012]
FIG. 2 shows, in a side view (a) and a plan view (b) of the measurement object 1, the N measurement sections 8 at different positions on the measurement object 1, labeled (1), (2), (3), ..., (N−2), (N−1), (N), obtained when the measurement object 1 placed on a plane on the carrier 6 is scanned by the measurement section 8, and the N contour lines 9, labeled [1], [2], [3], ..., [N−2], [N−1], [N], detected where each measurement section 8 intersects the surface of the measurement object 1. The plan view (b) represents the three-dimensional shape of the measurement object 1 by the series of contour lines 9; the contour lines 9 formed on the plane of the carrier 6 are also shown as continuations of the contour lines of the measurement object 1. In the side view (a), the measurement sites D1, D2, and D3 on the (N−2)-th, (N−1)-th, and (N)-th measurement sections 8 are the parts that would be measurement blind spots in the conventional light-sectioning method, beyond the reach of the sheet of measurement light projected from the direction corresponding to the measurement section 8. In the present invention no such blind spots arise, and these parts are measured without omission, as seen in the corresponding contour lines [N−2], [N−1], and [N] in the plan view (b).
[0013]
Various methods of extracting the in-focus region are known. For example, several methods are compared in Eric Krotkov, "Focusing," International Journal of Computer Vision, vol. 1, pp. 223–237, 1987, Kluwer Academic Publishers, Boston. In general, the in-focus region is an image region characterized by containing high spatial-frequency components with large light–dark variation compared with out-of-focus regions. Most extraction methods define a focus measure expressing the degree of focus from image features reflecting this characteristic, and discriminate the in-focus region by the magnitude of the focus measure. Broadly, there are methods that evaluate the focus measure by obtaining the high-frequency components of the spatial-frequency distribution of an image region directly by Fourier transform, and methods that, to speed up computation, define the focus measure from image features that reflect the high-frequency components indirectly. Examples of the latter include: summing, over all pixels in a neighborhood of the pixel of interest, the magnitude of the first difference of the intensity distribution or the edge strength of the intensity distribution, and taking the sum as the focus measure of that pixel; taking as the focus measure the sum over all neighboring pixels of the magnitude of the second difference of the intensity distribution; and taking as the focus measure the variance of brightness (gray level), a statistic expressing the size of the brightness variation of the pixels in the neighborhood.
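Two of the neighborhood focus measures mentioned above can be sketched as follows (an illustrative implementation assuming NumPy arrays as images; the function names are not from the patent):

```python
import numpy as np

def focus_measure_variance(img, radius=2):
    """Local gray-level variance around each pixel -- the
    variance-based neighborhood focus measure described in the text."""
    img = img.astype(float)
    h, w = img.shape
    fm = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            fm[y, x] = patch.var()
    return fm

def focus_measure_laplacian(img):
    """Summed magnitude of horizontal and vertical second differences
    (a discrete Laplacian magnitude), another measure from the text."""
    img = img.astype(float)
    return (np.abs(np.diff(img, n=2, axis=0))[:, 1:-1] +
            np.abs(np.diff(img, n=2, axis=1))[1:-1, :])

# Example: textured (in-focus) left half vs. flat (defocused-like) right half
rng = np.random.default_rng(0)
img = np.zeros((20, 40))
img[:, :20] = rng.integers(0, 255, size=(20, 20))
fm = focus_measure_variance(img)
assert fm[10, 10] > fm[10, 35]   # focus measure is larger in the textured half
```

Both measures are large where the image contains strong local light–dark variation, which is exactly the property the in-focus contour-line region exhibits.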
[0014]
There are various other ways of evaluating the focus measure, but all of them evaluate the size of the brightness (gray-level) variation in a neighborhood of the pixel of interest, exploiting the fact that the variation is large in focused regions and small in defocused regions. In practice a focus measure is used that suits the surface characteristics of the measurement object and the purpose of the measurement, discriminates well, and is easy to compute, defined from image features obtained by combining widely used image processing techniques. In implementing the present invention, it suffices to define and use an appropriate focus measure according to the measurement object and the purpose of the measurement.
[0015]
Since the extracted in-focus region includes the contour line and its neighboring pixels, it is generally band-shaped and has width; to obtain the position coordinates of the contour line with higher accuracy, the center of the in-focus region must be found. Methods for finding this center include what is generally known in image processing as thinning, simply binarizing the in-focus region and taking the center of its width, and computing the centroid across the width of the region with the focus-measure value of each pixel as a weight; an appropriate method can be chosen from these existing image processing techniques, their variants, and combinations, trading off processing cost against the accuracy of the result. Before the contour-detection processing, smoothing of the extracted in-focus region or noise-suppression processing such as removal of isolated noise may also be applied, depending on the quality of the images obtained and the target measurement accuracy.
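The weighted-centroid variant of the centering step can be sketched as follows (an illustrative helper, not from the patent, operating on one image column of focus-measure values):

```python
import numpy as np

def band_center_by_centroid(fm_column, threshold):
    """Sub-pixel center of the in-focus band along one image column,
    using the focus-measure values as weights (one of the centering
    methods described above). Returns None if nothing exceeds threshold."""
    idx = np.nonzero(fm_column > threshold)[0]
    if idx.size == 0:
        return None
    w = fm_column[idx]
    return float(np.sum(idx * w) / np.sum(w))

# A band of focus-measure values peaking at row 3:
col = np.array([0.0, 0.0, 2.0, 4.0, 2.0, 0.0])
print(band_center_by_centroid(col, 1.0))  # 3.0
```

Applying this to every column of the focus-measure image yields a sub-pixel contour-line position per column, in place of the band of pixels the raw extraction produces.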
[0016]
Conventional technology can be applied to the image processing techniques described above and to the configuration of the image processing hardware that accelerates them, so they are not specifically described for the embodiment of FIG. 1. The overall configuration of the imaging camera proper, which houses and drives the two-dimensional image sensor and outputs the resulting image signal as a proper electrical signal, is likewise not shown, as it is not directly related to the substance of the invention. The signal conversion device that receives the image signal from the camera and converts it into suitable digital form for input to a data processing device (not shown), such as a computer performing the image processing, can easily be built with conventional techniques and knowledge and is not shown. The device that drives and controls the carrier 6, and the control system that coordinates the operation of all these devices, are also not shown, being not directly relevant to the description of the embodiments, and can be implemented with conventional technology.
[0017]
Since the light-receiving surface (ab) 7 is the image plane of the measurement section (AB) 8, each pixel in the in-focus region of the image received on the light-receiving surface 7 corresponds one-to-one to a point on the measurement section 8. Each pixel of the image of the contour line 9 obtained as the in-focus region therefore corresponds one-to-one to a point of the contour line 9 on the measurement section 8. Accordingly, reference points with known three-dimensional coordinates can be set out in a grid on the measurement section 8 and imaged, the image coordinates of the corresponding pixels determined, and a conversion table from pixel coordinates to the corresponding three-dimensional coordinates prepared in advance. The optical aberrations of the imaging lens 2 are corrected at the same time.
[0018]
Concretely, for example, a number of white disks on a black background are placed as reference points at grid points on the measurement section (AB) 8 and imaged; binarized images of the disks are obtained by image processing and the center position of each disk is computed. A correspondence table is first made between the image coordinates of these center positions and the three-dimensional coordinates of the centers of the reference disks on the measurement section (AB) 8. Since the center positions of the disks in the image do not in general coincide with integer pixel coordinates, i.e. coordinates expressed as finite-length binary numbers (the grid points of the pixel lattice), a second table is then made from the first by interpolation, giving the three-dimensional coordinates for the integer (finite-length binary) pixel coordinates. This serves as a coordinate conversion table from which the three-dimensional coordinates (x, y, z) of the contour line can be looked up with the pixel coordinates (m, n) of the contour-line image as the argument. By preparing the coordinate conversion table in advance, the three-dimensional coordinates of the contour line 9 on the measurement object 1 can be obtained from the contour-line image easily and fast, and by scanning the measurement object 1 with the measurement section 8, the series of three-dimensional contour-line coordinates representing the surface shape of the measurement object 1 can be obtained efficiently. If, in addition, the brightness (gray-level) value of the pixel corresponding to each point on the contour line 9 is stored together with its three-dimensional coordinates, then when the three-dimensional shape of the object is shown on display devices or in print, the brightness information of the object surface can be shown along with the geometric shape, giving a vivid, realistic rendering.
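The interpolation step of the calibration can be sketched as follows. This is a simplified illustration, not the patent's procedure verbatim: it assumes the reference-disk centers happen to lie on a regular coarse grid in the image, whereas the real procedure interpolates from measured, generally irregular sub-pixel disk centers.

```python
import numpy as np

def build_conversion_table(ref_m, ref_n, ref_xyz):
    """Fill a per-pixel (x, y, z) table by bilinear interpolation between
    reference points. ref_m / ref_n are the sorted pixel coordinates of the
    reference-grid columns / rows; ref_xyz[j, i] is the known 3-D coordinate
    of the reference point at image position (ref_m[i], ref_n[j])."""
    H, W = int(ref_n[-1]) + 1, int(ref_m[-1]) + 1
    table = np.empty((H, W, 3))
    for n in range(H):
        j = min(np.searchsorted(ref_n, n, side='right') - 1, len(ref_n) - 2)
        tn = (n - ref_n[j]) / (ref_n[j + 1] - ref_n[j])
        for m in range(W):
            i = min(np.searchsorted(ref_m, m, side='right') - 1, len(ref_m) - 2)
            tm = (m - ref_m[i]) / (ref_m[i + 1] - ref_m[i])
            c00, c01 = ref_xyz[j, i], ref_xyz[j, i + 1]
            c10, c11 = ref_xyz[j + 1, i], ref_xyz[j + 1, i + 1]
            table[n, m] = ((1 - tn) * (1 - tm) * c00 + (1 - tn) * tm * c01 +
                           tn * (1 - tm) * c10 + tn * tm * c11)
    return table  # table[n, m] -> (x, y, z)

# Example: a 2x2 grid of reference disks spanning a 5x5-pixel image,
# with x = m, y = n, z = m + n on the measurement section
ref_m = np.array([0.0, 4.0])
ref_n = np.array([0.0, 4.0])
ref_xyz = np.array([[[0, 0, 0], [4, 0, 4]],
                    [[0, 4, 4], [4, 4, 8]]], dtype=float)
table = build_conversion_table(ref_m, ref_n, ref_xyz)
print(table[2, 2])  # [2. 2. 4.]
```

Once built, the table turns contour extraction into a pure lookup: each in-focus pixel (m, n) maps to its (x, y, z) in constant time, which is what makes the per-frame processing fast.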
[0019]
As described above, when the light-receiving surface 7 and the measurement section 8 are both inclined at 45° to the optical axis 4 of the imaging lens, the imaging magnification in the measurement section is exactly unity on the optical axis and varies away from it. When this variation can be neglected, or when sufficient accuracy is achieved by correcting it with the theoretical magnification formula derived from the lens imaging equation, the three-dimensional coordinates can be computed from the pixel coordinates of the contour line 9 using the magnification, without imaging reference grid points, since the element pitch p of the photoelectric conversion elements of the two-dimensional image sensor 3 is known.
[0020]
For example, to simplify the explanation, consider the case where the photoelectric conversion elements of the two-dimensional image sensor 3 correspond one-to-one to pixels. Let the pixel coordinates of the image of the contour line 9 be (m, n), and take the intersection of the measurement section 8 with the optical axis 4 of the imaging lens as the origin of the three-dimensional coordinate axes. The x-axis is taken perpendicular to the direction in which the measurement section 8 scans the measurement object 1, the y-axis along the scanning direction, and the z-axis along the optical axis 4. If the m- and n-axes are taken parallel to the x- and y-axes respectively and the directions of the axes are set appropriately, the three-dimensional coordinates (x, y, z) of each point on the contour line 9 are obtained from the pixel coordinates (m, n) by x = p × (m − mo){1 + δ(n − no)}, y = p × (n − no){1 + δ(n − no)} × sin 45°, z = p × (n − no){1 + δ(n − no)} × sin 45°, where (mo, no) are the pixel coordinates of the optical axis 4 and δ(n − no) is a correction expressing how far the imaging magnification departs from 1 as the pixel (m, n) moves away from the optical axis 4; apart from the optical aberrations of the lens, it is easily obtained from the lens imaging equation as a function of the pixel coordinate n − no measured from the optical axis. Likewise, even if the magnification is not unity, the corresponding three-dimensional coordinates can of course be computed from the pixel coordinates provided the magnification at the optical-axis point is known in advance. When the measurement object 1 is scanned continuously by the measurement section 8 and N contour lines 9 are obtained, the N contour lines 9 can be made to represent the three-dimensional coordinate data of the whole measurement object 1 by, for example, adding to the above y coordinate of each contour line 9 an offset n × Δy in the order of detection, where Δy is a constant giving the measurement interval of the contour lines 9 and n is the detection sequence number 1, 2, ..., N−1, N.
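The pixel-to-coordinate formulas for the 45° configuration can be written out directly (an illustrative sketch; the function name is an assumption, and δ is passed in as a function so that it can be set to zero under the unit-magnification assumption):

```python
import math

def pixel_to_xyz(m, n, p, mo, no, delta=lambda dn: 0.0):
    """Map pixel (m, n) of the contour-line image to 3-D coordinates
    (x, y, z) for the 45-deg configuration, following the formulas above.
    p is the element pitch, (mo, no) the pixel of the optical axis, and
    delta(dn) the magnification correction (0 for unit magnification)."""
    s = math.sin(math.radians(45.0))
    dm, dn = m - mo, n - no
    scale = 1.0 + delta(dn)
    x = p * dm * scale
    y = p * dn * scale * s
    z = p * dn * scale * s   # y and z share a factor: the section tilts at 45 deg
    return x, y, z

# Example with p = 0.01 mm pitch and the optical axis at pixel (256, 256):
x, y, z = pixel_to_xyz(266, 276, 0.01, 256, 256)
print(round(x, 6), round(y, 6), round(z, 6))  # 0.1 0.141421 0.141421
```

Because the section is inclined at 45°, a displacement along n splits equally between the scanning direction y and the optical-axis direction z, which is why the y and z expressions are identical.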
[0021]
FIG. 3 shows the flow of data processing in the embodiment of the present invention described above, from imaging of the measurement section to obtaining the shape measurement of the measurement object.
[Effects of the Invention]
[0022]
Being configured as described above, the present invention provides the following effects.
[0023]
Unlike the conventional focusing configuration, the in-focus plane is set obliquely to the optical axis of the imaging optical system and used as the measurement section, and the three-dimensional shape of the measurement object is obtained by scanning it laterally with the measurement section, perpendicular to or crossing the optical axis. The scanning distance is therefore not limited by the size of the measurement space along the optical axis, so even long objects can be measured; and since the motion of the measurement object need not be restricted to the direction of the optical axis, the measurement system can be configured with great freedom and applied widely to production lines in manufacturing, monitoring systems, traffic measurement systems, autonomous vehicles, and robots performing profiling operations such as welding and sealing. Since no focus adjustment mechanism is needed, measurement can be performed at high speed while the object moves.
[0024]
Since there is no need, as in the light-sectioning method, to irradiate the measured part with laser light from obliquely beside the imaging optical system, no unmeasurable blind spots arise in parts shadowed from the illumination. The method therefore excels in shape measurement of complex, uneven objects such as densely packed electronic components.
[0025]
Since no light-projection optical system is needed to irradiate the measured part with laser light from obliquely beside the imaging optical system, as in the light-sectioning method, the measuring apparatus can be made compact, and it is well suited to mounting on a robot as a visual sensor for three-dimensional shape measurement used to follow a working path.
[Brief Description of the Drawings]
[FIG. 1] A vertical sectional view showing a measurement object and an embodiment of the imaging optical system of the present invention.
[FIG. 2] A side view (a) showing the relation between the measurement object and the series of measurement sections, and a plan view (b) showing the relation between the measurement object and the series of detected contour lines.
[FIG. 3] A flowchart of the data processing from imaging of the measurement section to obtaining the shape measurement of the measurement object.
[Reference Numerals]
1 measurement object
2 imaging lens
3 two-dimensional image sensor
4 optical axis of the imaging lens
5 annular light source
6 carrier
7 light-receiving surface
8 measurement section
9 contour line
[0001]
[Industrial application fields]
The present invention is used in the fields of various monitoring systems, traffic measurement systems, automatic traveling vehicles, robots that perform operations such as welding and sealing, devices for automatically inspecting the appearance of products, etc. The present invention relates to a method and apparatus for measuring the position, distance, or shape of an object in a non-contact manner by measuring the three-dimensional position coordinates of each point on the target object of the apparatus.
[0002]
[Prior art]
Conventional methods for measuring three-dimensional position coordinates or distance by non-contact include triangulation and focusing methods. The former projects laser light on the surface of the object to be measured, and the reflected light generated from the surface of the object is received by the TV camera from a direction inclined at a certain angle from the projection direction, and the principle of triangulation from the change in the position of the bright spot image, It measures the three-dimensional position coordinates of a target or the distance from a reference point in a reference coordinate system installed in association with the camera. The triangulation method is also called a light cutting method because it measures a contour line represented by a bright line formed by crossing a plate-like projection laser beam and the surface of a measurement object. In the latter, in order to form an image of a surface portion of an object whose distance is to be measured on a light receiving surface (photoelectric conversion surface) of a television camera composed of a two-dimensional array of photoelectric conversion elements such as a CCD image pickup element, The distance is measured from the amount of movement of the focus adjustment mechanism using a focus adjustment mechanism that adjusts the position of the imaging lens to focus the image. In order to obtain the distance of the entire target surface, the focus adjustment mechanism is continuously moved, the surface portion in focus at each moving position of the mechanism is detected by image processing, and the distance of each portion of the surface is obtained. Also, the three-dimensional position coordinates of each point on the object surface can be obtained by setting a reference coordinate system. 
In the following description, obtaining the 3D position coordinates of each point and obtaining the distance between different 3D position coordinates are based on the same measurement method, and technically distinguish between the two because they are in the same category. We will use without.
[0003]
[Problems to be solved by the invention]
In the triangulation method, in principle, in order to receive the reflected scattered light from the object, it is necessary to place the light receiving optical system tilted with respect to the measuring light projection optical system. For this reason, there is a problem that a blind spot is generated in the measurement part of the object, and in particular, when the object is a cluster of many components, the triangulation method is not practically applicable. Moreover, in order to project a laser beam from the side, it is necessary to arrange a projection optical system in the measurement space, and there is a problem that the measurement space is blocked. Furthermore, since the projection optical system projects sideways, the measuring apparatus tends to be large as a whole, which is disadvantageous for the purpose of mounting it on a robot as a visual sensor.
[0004]
On the other hand, in the conventional focusing method, the blind spot is small, but since the focus adjustment mechanism that moves along the optical axis of the imaging optical system, that is, toward the target, is used, it takes time to operate the focus adjustment mechanism. In this case, the object cannot be measured instantaneously, and the object cannot be moved in the direction perpendicular to the optical axis, that is, in the lateral direction during the measurement period. Therefore, during the measurement period, the object is measured during the measurement period, such as measuring the shape of a long object, continuously measuring products carried by a conveyor belt in a factory, distance sensors used for robot scanning control, or measuring outdoor moving objects. The conventional focusing method is not suitable for measurement when moving. _
[0005]
The present invention moves in a horizontal direction with respect to an optical axis at high speed by using an imaging apparatus that does not require a focus adjustment mechanism in the optical axis direction in a focusing method with few problems of blind spots in distance and three-dimensional position measurement. The object is to enable measurement of the distance of an object or measurement of three-dimensional position coordinates.
[0006]
[Means for Solving the Problems]
In order to achieve the above object, in the non-contact three-dimensional object shape measurement method based on the focusing method of the present invention, the imaging device assumes a measurement cross section that obliquely intersects with the optical axis of the imaging lens, and performs imaging. A light-receiving surface of a two-dimensional image sensor in which photoelectric conversion elements such as CCD image sensors are two-dimensionally arranged is placed on the surface that connects the images of the measurement cross-section with the lens, and the measurement cross-section that extends obliquely in the depth direction intersects the object surface. The object contour is detected as a focused image area from the object image, and the object surface is scanned laterally by the measurement cross section to detect the contour lines at different positions on the object surface continuously. Try to get the position and shape of the surface.
[0007]
Therefore, in the image pickup apparatus of the present invention, unlike a normal television camera, the light receiving surface of the image pickup element that receives and photoelectrically converts an optical image is not arranged perpendicular to the optical axis of the image pickup lens. Depending on the inclination with respect to the axis, the angle of the light receiving surface with respect to the optical axis of the lens is shifted from 90 °. As a result, the target surface in the vicinity of the measurement cross section extending in the oblique depth direction will be focused without adjusting the focus of the lens, and image information from the image signal obtained from the output of the image sensor By extracting the in-focus area by the processing, it is possible to obtain the outline of the cross section that cuts the target obliquely from the in-focus area.
[0008]
[Action]
By adopting the arrangement of the imaging optical system as described above, the contour shape of the cross section of the target can be obtained simultaneously with the image acquisition without adjusting the focus of the optical system, and high-speed measurement is possible. In addition, since the surface of the object is scanned by scanning in the horizontal direction using the measurement cross section, the measurement can be performed when the object moves laterally. Further, by actively utilizing the lateral movement of the object for scanning the surface of the object, it can be easily applied to the shape measurement of the product on the conveyor belt. Therefore, it is possible to measure a long object.
[0009]
【Example】
An embodiment will be described with reference to the drawings. FIG. 1 is a cross-sectional view of an imaging optical system showing a basic configuration of a measuring apparatus that implements a three-dimensional object shape measuring method according to the present invention. The optical axis 4 is on the paper surface of the cross-sectional view, the two-dimensional image sensor 3 in which photoelectric conversion elements such as a CCD image sensor are two-dimensionally arranged is arranged perpendicular to the paper surface, and its light receiving surface (ab) 7 is on the optical axis. On the other hand, it is inclined by an angle φ as shown in the figure, and is also an imaging plane for the imaging lens 2 having a measurement section (AB) 8 perpendicular to the paper surface. That is, the measurement cross section (AB) 8 is a focusing surface of the imaging optical system having the light receiving surface (ab) 7 as an imaging surface, and the ends a and b of the light receiving surface (ab) 7 are respectively in the measurement cross section AB. It is an image of ends A and B. At this time, the inclination angle of the measurement cross section (AB) 8 with respect to the optical axis 4 of the imaging lens is defined as an angle θ as shown in the figure, and the measurement cross section (AB) from the lens center (generally the object side principal point of the lens) O of the imaging lens 2. ) Where H is the vertical leg standing on the plane including 8 and OH is the distance d from the imaging lens 2 of the measurement cross section (AB) 8 and f is the focal length of the imaging lens 2. In the meantime, the relational expression d = f × cos θ × (tan θ + tan φ) needs to be established.
[0010]
In particular, when the above-mentioned various amounts are set so that the relationship of f = d × sin θ holds when the angle θ is 45 °, the angle φ is also 45 °, so that it is possible to avoid a steep inclination and to the light receiving surface (ab) 7. The incident angle of the image is not extremely large (incident light is not incident so as to be parallel to the light receiving surface), and the present invention can be implemented without deteriorating the light receiving conditions. At this time, the distance from the center O of the imaging lens 2 to the center of the measurement cross section (AB) 8 (a point on the optical axis) is 2f, the imaging magnification near the optical axis 4 of the imaging lens is 1, and the light receiving surface. (Ab) 7 and measurement cross section (AB) 8 have a substantially identical dimension and correspond one-to-one. Strictly speaking, the magnification is 1 on the optical axis 4 of the imaging lens, the magnification gradually increases from 1 as the end A of the measurement cross section (AB) 8 is approached, and the magnification is from 1 as the end B is approached. It gets smaller. Under the implementation conditions in which this change can be ignored, the magnification may be 1 over the entire measurement cross section (AB) 8, and as described later, at each point of the image of the measurement object 1 obtained on the light receiving surface (ab) 7. It becomes easy to associate the three-dimensional space coordinates on the measurement cross section (AB) 8, and the calculation process for obtaining the target three-dimensional shape is simplified. FIG. 1 shows an embodiment in which the angles θ and φ are both 45 °. In the conventional triangulation method (light cutting method), an optical system for projecting laser light is arranged on the side of the imaging optical system, on the extension of the measurement cross section. 
Since there is no need to arrange the optical system to the side, the structure is small and simple.
[0011]
An outline 9 (indicated by parallel oblique lines) of the measurement object 1 is determined as a three-dimensional curve or straight line formed by the surface of the measurement object 1 and the measurement cross section (AB) 8 intersecting. In FIG. 1, a measurement object 1 is placed on a carrier 6 such as a conveyor belt, and as indicated by an arrow (which can also be implemented in the opposite direction to FIG. 1) at a constant speed in the horizontal direction, generally light. When transported in a direction perpendicular to or intersecting the axis 4, the surface of the measurement object 1 is scanned at a constant speed by the measurement cross section (AB) 8. The object 1 is illuminated from the surroundings by an annular light source 5 (in the case of a bundle of optical fibers, a lamp house and an optical fiber guide connecting the annular light source and the lamp house are required but not shown in the figure). While giving the image of the measuring object 1 at a constant time interval by the two-dimensional imaging device 3, it is possible to capture an image of the cross-sectional outline 9 at different positions of the measuring object 1 at a constant interval. If the conveying speed is not constant, the intervals between the contour lines 9 of the cross section are naturally unequal intervals. Since the image of the contour line 9 is on the measurement cross section (AB) 8, it is always in focus, and the conventional image processing technique is applied from the image of the measurement object 1 obtained as the output of the two-dimensional image sensor 3. It can be easily extracted as an in-focus image area.
[0012]
FIG. 2 shows N measurement cross-sections 8 at different positions on the measurement target 1 obtained when the measurement target 1 placed on a plane on the transport body 6 is scanned by the measurement cross-section 8, respectively (1), (2 ), (3),..., (N-2), (N-1), (N), and N pieces detected as each of the measurement cross sections 8 and the surface of the measurement object 1 intersect. The contour line 9 is [1], [2], [3],..., [N-2], [N-1], and [N], respectively. These are respectively displayed in (b), and the plan view (b) represents the three-dimensional shape of the measuring object 1 by a series of contour lines 9. At this time, a contour line 9 formed on a plane on the carrier 6 is also displayed as a continuation of the contour line of the measuring object 1. In the side view (a), the measurement sites indicated by D1, D2, and D3 of the (N-2) -th, (N-1) -th, and (N) -th measurement sections 8, respectively, are obtained by the conventional light cutting method. Is a part that becomes a blind spot for measurement, and is a part where plate-shaped measurement light projected from the direction corresponding to the measurement cross section 8 does not reach. In the present invention, such a blind spot in measurement does not occur, and can be measured without being missing as seen in the corresponding contour lines [N-2], [N-1], and [N] in the plan view (b). .
[0013]
Various methods are known for extracting a focused area. For example, Eric Krockov's research paper “Focusing” (International Journal of Computer Vision, Volume 1, pages 223-237, published in 1987, original title: ERIC KROTKOF, “Focusing,” International Journal of Computer. Various methods are shown in comparison in Vision, 1, pp. 223-237 (1987), Kluwer Academic Publishers, Boston. In general, the in-focus area is an image area characterized by including a high spatial frequency component having a large change in brightness compared to the out-of-focus area. In many in-focus area extraction methods, an in-focus level indicating the degree of in-focus is defined using an image feature amount that reflects such characteristics, and the in-focus area is determined based on the in-focus level. Broadly speaking, a method for evaluating the in-focus degree by directly obtaining a high-frequency component of the spatial frequency distribution of the image region by Fourier transform, and an image feature amount that indirectly reflects the high-frequency component in order to speed up the calculation process. There is a method of defining the degree of focus using. As the latter method, the magnitude of the primary difference of the light / dark distribution or the edge intensity of the light / dark distribution is summed up for all the pixels in the neighboring area in the vicinity area of the target pixel, and the total amount is used as the focus degree of the target pixel. Represents the degree of variation in brightness (density) of pixels in the neighboring region, the sum of the magnitude of the secondary difference amount of the light / dark distribution over all neighboring pixels as the focus degree of the pixel of interest There is a method in which a variance relating to brightness (density), which is a statistic, is used as a focus degree of a pixel of interest.
[0014]
There are various other methods for evaluating the degree of focus. All of them evaluate the magnitude of brightness (density) fluctuation in the area near the pixel of interest. The out-of-range area uses small fluctuations, is suitable for the characteristics of the surface of the measurement target and the purpose of the measurement, has a high discrimination capability, is easy to find, and has a wide range of image processing. It is defined using image feature values obtained by a combination of methods. Even when the present invention is implemented, an appropriate degree of focus may be defined and used according to the measurement object and the purpose of measurement.
[0015]
Since the extracted in-focus area includes a contour line and pixels in the vicinity of the contour line, it generally has a band shape and a width. In order to obtain the position coordinates of the contour line with higher accuracy, the center of the in-focus area is obtained. There is a need. Focusing is a method of determining the center of the area, in the image processing technique generally methods and each pixel by binarizing method and simply focus area known as thinning process takes the center of the width of the area if Use an appropriate method in terms of the amount of image processing and the accuracy of the results obtained by using existing image processing methods such as the method of obtaining the center of gravity in the width direction of the in-focus area using the value of the degree of focus as a weight, and their modifications. be able to. In addition, before the contour detection process, it is possible to smooth the extracted focus area or perform a process that eliminates the influence of noise such as removing isolated noise. Depending on the measurement accuracy to be performed.
[0016]
The conventional technology can be applied to the configuration of the image processing apparatus as hardware technology for speeding up the image processing technique and processing described above, and is not particularly described in the embodiment shown in FIG. Also, the overall configuration of a so-called imaging camera that accommodates and controls the drive of a two-dimensional imaging device and outputs an image signal generated from the device as an appropriate electrical signal is not shown because it is not directly related to the content of the present invention. Furthermore, in order to input image data to a data processing device (not shown) such as a computer that performs image processing, a conventional signal conversion device that receives an image signal from an imaging camera and converts it into an appropriate digital signal is also used. It can be easily configured by technology and knowledge and is not shown. Further, a device for driving and controlling the transport body 6 and a control system for operating the above devices as a whole are not shown because they are not directly related to the description of the embodiments of the present invention. It can be implemented by technology.
[0017]
Since the light receiving surface (ab) 3 is an imaging surface of the measurement cross section (AB) 8, each pixel in the focus area of the image received by the light receiving surface 3 is in one-to-one correspondence with each point on the measurement cross section 8. Correspond. Therefore, each pixel of the image of the contour line 9 obtained as the in-focus region has a one-to-one correspondence with each point of the contour line 9 on the measurement section 8. Accordingly, reference points whose three-dimensional coordinates are known are set in a lattice shape on the measurement cross section 8, and the coordinates of the corresponding pixels are obtained by imaging these reference points, and the tertiary corresponding to the pixel coordinates in advance. An original coordinate conversion table can be created. At this time, the optical aberration of the imaging lens 2 is also corrected.
[0018]
Specifically, for example, a plurality of white disks having a black background as reference points are arranged at lattice points on the measurement cross section (AB) 8, and images of these disks are picked up. Obtain a digitized image and calculate the center position of each disk. Then, a correspondence table between the coordinates of these center positions on the image and the three-dimensional coordinates of the center of the reference disk on the measurement section (AB) 8 is first created. Since the center position of each disk on the obtained image does not always match the integer coordinate value of the pixel or the coordinate value represented by the binary number of finite digits (the grid point of the pixel coordinate), the correspondence table obtained earlier Then, a correspondence table of three-dimensional coordinate values with respect to integer coordinate values of pixels or binary coordinate values of finite digits (lattice points of pixel coordinates) is created again by interpolation. This functions as a coordinate conversion table that can know the three-dimensional coordinate values (x, y, z) of the corresponding contour line using the coordinate values (m, n) of the pixels of the contour image as arguments. By creating a coordinate conversion table in advance, the three-dimensional coordinates of the contour line 9 on the measurement object 1 can be obtained easily and at high speed from the contour image, and the measurement object 1 is scanned by the measurement section 8. Thus, a series of three-dimensional coordinates of the contour line representing the surface shape of the measuring object 1 can be efficiently obtained. 
If the brightness (density) value of the pixel corresponding to each point on the contour line 9 is stored together with its three-dimensional coordinates, then when the three-dimensional shape of the object is rendered on a display device or in print, the surface brightness (density) of the object can be shown along with its geometric shape, producing a more realistic display.
[0019]
As described above, when the light receiving surface 7 and the measurement cross section 8 are both inclined at 45° with respect to the optical axis 4 of the imaging lens, the imaging magnification within the measurement cross section is, strictly speaking, uniform only on the optical axis. If this deviation of the magnification can be ignored, or if sufficient accuracy can be achieved by correcting it with the theoretical magnification formula derived from the lens imaging equation, then, since the arrangement pitch p of the photoelectric conversion elements of the two-dimensional image sensor 3 is known, the three-dimensional coordinate values can be obtained by calculation from the pixel coordinates of the contour line 9 using the magnification, without imaging reference grid points.
[0020]
For example, consider for simplicity the case where the photoelectric conversion elements and the pixels of the two-dimensional image sensor 3 correspond one-to-one. Let the pixel coordinates of the image of the contour line 9 be (m, n), and let the intersection of the measurement cross section 8 and the optical axis 4 of the imaging lens be the origin of the three-dimensional coordinate axes. Take the x axis perpendicular to the direction in which the measurement cross section 8 scans the measurement object 1, the y axis along the scanning direction, and the z axis along the optical axis 4. If, furthermore, the m axis and the n axis are parallel to the x axis and the y axis, respectively, and the orientations of the coordinate axes are set appropriately, the three-dimensional coordinates (x, y, z) of each point on the contour line 9 are obtained from the pixel coordinates (m, n) by the calculation formulas x = p × (m − mo) × {1 + δ(n − no)}, y = p × (n − no) × {1 + δ(n − no)} × sin 45°, z = p × (n − no) × {1 + δ(n − no)} × sin 45°. Here (mo, no) are the pixel coordinates of the optical axis 4, and δ(n − no) is a correction term expressing how far the imaging magnification deviates from 1 as the pixel (m, n) moves away from the optical axis 4; apart from the optical aberration of the lens, it is easily obtained from the lens imaging equation as a function of the pixel coordinate n − no measured from the optical axis. Similarly, even when the magnification at the optical axis point is not equal to 1, the corresponding three-dimensional coordinates can of course be obtained by calculation from the pixel coordinates, provided that this magnification is known in advance.
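The calculation formulas above can be written out directly. The sketch below uses illustrative values for the element pitch p and the optical-axis pixel (mo, no), and takes the correction δ as a caller-supplied function (zero by default); none of these numerical values come from the patent itself.

```python
import math

def pixel_to_xyz(m, n, p=0.01, mo=256, no=256, delta=lambda k: 0.0):
    """Convert a contour-image pixel (m, n) to 3D coordinates (x, y, z)
    using the 45-degree geometry of paragraph [0020].

    p      : arrangement pitch of the sensor elements (assumed value)
    mo, no : pixel coordinates of the optical axis (assumed values)
    delta  : magnification correction as a function of (n - no)
    """
    d = delta(n - no)
    x = p * (m - mo) * (1 + d)
    # At 45 degrees the y and z projections share the same sin 45° factor.
    y = p * (n - no) * (1 + d) * math.sin(math.radians(45))
    z = p * (n - no) * (1 + d) * math.sin(math.radians(45))
    return x, y, z
```

A pixel on the optical axis maps to the origin, as the formulas require, and δ ≡ 0 corresponds to ignoring the magnification deviation entirely.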
In addition, when the measurement object 1 is scanned continuously by the measurement cross section 8 and N contour lines 9 are obtained, the N contour lines express the three-dimensional coordinate data of the entire measurement object 1 if, for example, a coordinate offset n × Δy is added to the y coordinates of each contour line in the order in which the contour lines were detected. Here Δy is a constant giving the measurement interval of the contour lines 9, and n is the detection order number 1, 2, ..., N−1, N.
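Merging the N contour lines by adding the offset n × Δy to each y coordinate, as described above, might look like the following sketch; representing each contour as a list of (x, y, z) tuples is an assumption made here for illustration.

```python
def assemble_surface(contours, dy):
    """Merge N contour point lists into one 3D point set by adding the
    scan offset n * dy to the y coordinate of the n-th contour.

    contours : list of contours, each a list of (x, y, z) tuples,
               in detection order
    dy       : measurement interval of the contour lines (the constant
               written as delta-y in the text)
    """
    points = []
    # Detection order numbers run 1, 2, ..., N, following the text.
    for n, contour in enumerate(contours, start=1):
        points.extend((x, y + n * dy, z) for (x, y, z) in contour)
    return points
```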
[0021]
FIG. 3 shows the flow of data processing, in the embodiment of the present invention described above, from imaging the measurement cross section to obtaining the shape measurement values of the measurement object.
【Effects of the Invention】
[0022]
Since the present invention is configured as described above, the following effects can be obtained.
[0023]
Unlike the conventional focusing method, the focal plane is set obliquely with respect to the optical axis of the imaging optical system and serves as the measurement cross section, and the three-dimensional shape of the measurement object is obtained by scanning the object with the measurement cross section in a direction perpendicular to, or intersecting, the optical axis. The scanning distance is therefore not limited by the size of the measurement space in the optical axis direction, so even long objects can be measured. In addition, since there is no need to restrict the movement of the measurement object in the optical axis direction of the imaging optical system, there is great freedom in configuring the measurement system, and the invention can be widely applied to production lines in the manufacturing industry, various monitoring systems, traffic measurement systems, automated vehicles, and robots that perform tracing operations such as welding and sealing. Furthermore, since no focus adjustment mechanism is required, measurement can be performed at high speed while the measurement object is moving.
[0024]
Unlike the light-section (light cutting) method, it is not necessary to irradiate the measured portion with laser light from a direction oblique to the imaging optical system, so no blind spots arise in the shadow of the irradiating light. The method is therefore well suited to measuring the shape of complex, uneven objects such as densely mounted electronic components.
[0025]
In addition, since no light-projection optical system for irradiating laser light from a direction oblique to the imaging optical system is required, unlike the light-section method, the measuring device can be made compact, which makes it suitable for mounting on a robot as a visual sensor for three-dimensional shape measurement used to follow a work path.
[Brief description of the drawings]
FIG. 1 is a vertical sectional view showing an example of a measurement object and an imaging optical system according to the present invention.
FIG. 2 is a side view (a) showing a relationship between a measurement target and a series of measurement cross sections, and a plan view (b) showing a relationship between the measurement target and a series of detected contour lines.
FIG. 3 is a flowchart showing a flow of data processing from imaging of a measurement cross section to obtaining a shape measurement value of a measurement object.
[Explanation of symbols]
DESCRIPTION OF SYMBOLS
1 Measurement object
2 Imaging lens
3 Two-dimensional image sensor
4 Optical axis of imaging lens
5 Circular light source
6 Carrier
7 Light receiving surface
8 Measurement cross section
9 Contour line

Claims (2)

1. In a three-dimensional object shape measuring method that measures the surface shape of an object existing in three-dimensional space in a non-contact manner using an image of the object, the method characterized in that: a light receiving surface of a two-dimensional image sensor, in which photoelectric conversion elements are arranged close to one another in a two-dimensional array at a substantially equal pitch in each dimension, is placed in the plane on which the image of a measurement cross section obliquely intersecting the optical axis of the imaging lens is formed, and an image of the object is acquired; an in-focus image region is extracted from the image of the object; and the three-dimensional shape of the object is measured by detecting, from the in-focus image region, the contour line of the object formed where the measurement cross section intersects the object surface.

2. A three-dimensional object shape measuring apparatus that measures the surface shape of an object existing in three-dimensional space in a non-contact manner using an image of the object, the apparatus characterized by comprising: an imaging device having a light receiving surface of a two-dimensional image sensor in which photoelectric conversion elements are arranged close to one another in a two-dimensional array at a substantially equal pitch in each dimension, the light receiving surface being placed in the plane on which the image of a measurement cross section obliquely intersecting the optical axis of the imaging lens is formed; a scanning mechanism for scanning the object surface with the measurement cross section in a direction intersecting the optical axis of the imaging lens; and an image processing device that detects, from the image of the object, the contour line of the object formed where the measurement cross section intersects the object surface.
JP19228098A 1998-06-01 1998-06-01 Non-contact three-dimensional object shape measuring method and apparatus Expired - Lifetime JP4406796B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP19228098A JP4406796B2 (en) 1998-06-01 1998-06-01 Non-contact three-dimensional object shape measuring method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP19228098A JP4406796B2 (en) 1998-06-01 1998-06-01 Non-contact three-dimensional object shape measuring method and apparatus

Publications (2)

Publication Number Publication Date
JPH11344321A JPH11344321A (en) 1999-12-14
JP4406796B2 true JP4406796B2 (en) 2010-02-03

Family

ID=16288660

Family Applications (1)

Application Number Title Priority Date Filing Date
JP19228098A Expired - Lifetime JP4406796B2 (en) 1998-06-01 1998-06-01 Non-contact three-dimensional object shape measuring method and apparatus

Country Status (1)

Country Link
JP (1) JP4406796B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6724489B2 (en) * 2000-09-22 2004-04-20 Daniel Freifeld Three dimensional scanning camera
JP2004085467A (en) * 2002-08-28 2004-03-18 Aisin Seiki Co Ltd Device for three dimensional measurement
JP5298664B2 (en) * 2008-06-27 2013-09-25 パナソニック株式会社 Shape measuring device
JP6372300B2 (en) * 2014-10-17 2018-08-15 新日鐵住金株式会社 Shape measuring apparatus and shape measuring method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61193013A (en) * 1985-02-22 1986-08-27 Hitachi Ltd Detecting method of fine displacement of surface
JPH0735545A (en) * 1993-07-22 1995-02-07 Nissan Motor Co Ltd Optical range finder
JP3387168B2 (en) * 1993-09-10 2003-03-17 株式会社デンソー Optical position detector
JPH11132748A (en) * 1997-10-24 1999-05-21 Hitachi Ltd Multi-focal point concurrent detecting device, stereoscopic shape detecting device, external appearance inspecting device, and its method

Also Published As

Publication number Publication date
JPH11344321A (en) 1999-12-14

Similar Documents

Publication Publication Date Title
KR100702071B1 (en) Method/system measuring object features with 2d and 3d imaging coordinated
IL138414A (en) Apparatus and method for optically measuring an object surface contour
JPH0674907A (en) Detection method for defect of tranparent plate-like body
JPH11132748A (en) Multi-focal point concurrent detecting device, stereoscopic shape detecting device, external appearance inspecting device, and its method
JP4406796B2 (en) Non-contact three-dimensional object shape measuring method and apparatus
US5568258A (en) Method and device for measuring distortion of a transmitting beam or a surface shape of a three-dimensional object
JP5255564B2 (en) Scanner system for charger
JP4275661B2 (en) Displacement measuring device
US5104216A (en) Process for determining the position and the geometry of workpiece surfaces
JP6720358B2 (en) 3D scanning system
KR100415796B1 (en) Method of determining a scanning interval in surface inspection
JPH0771931A (en) Method for locating object
JP3758763B2 (en) Method for optical measurement of hole position
JP2954381B2 (en) Pattern inspection method and apparatus
JP3180091B2 (en) Non-contact dimension measurement method by laser autofocus
JPH10185514A (en) Coil position detector
JP2006003335A (en) Noncontact measuring method of shape of three dimensional object
JPH10185515A (en) Coil position detector
KR100240259B1 (en) Apparatus for measuring three dimension using spherical lens and laser scanner
JP3939866B2 (en) Headlight optical axis adjustment method
JPH10185519A (en) Coil locator
JP3363576B2 (en) Article presence / absence determination device
JPH0961117A (en) Three-dimensional position detector
JPH0534602B2 (en)
JP3201297B2 (en) Coil position detection device

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20050530

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20061114

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20061212

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20070417

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070616

A911 Transfer to examiner for re-examination before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A911

Effective date: 20070726

A912 Re-examination (zenchi) completed and case transferred to appeal board

Free format text: JAPANESE INTERMEDIATE CODE: A912

Effective date: 20070824

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20091028

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121120

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131120

Year of fee payment: 4

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250


EXPY Cancellation because of completion of term