JP3835505B2 - Non-contact surface shape measuring device - Google Patents


Info

Publication number
JP3835505B2
JP3835505B2 (application JP32355998A)
Authority
JP
Japan
Prior art keywords
measurement
shape data
shape
area
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP32355998A
Other languages
Japanese (ja)
Other versions
JP2000146542A (en)
Inventor
章人 渡辺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tokyo Seimitsu Co Ltd
Original Assignee
Tokyo Seimitsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tokyo Seimitsu Co Ltd filed Critical Tokyo Seimitsu Co Ltd
Priority to JP32355998A priority Critical patent/JP3835505B2/en
Publication of JP2000146542A publication Critical patent/JP2000146542A/en
Application granted granted Critical
Publication of JP3835505B2 publication Critical patent/JP3835505B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Description

[0001]
[Technical Field of the Invention]
The present invention relates to a non-contact surface shape measuring apparatus, and more particularly to a non-contact surface shape measuring apparatus that measures the shape of a surface under test using optical interference.
[0002]
[Prior Art]
Non-contact surface shape measuring apparatuses that use a laser interferometer to measure the fine shape of a surface under test without contact are widely known. Such an apparatus directs laser light from a laser source onto the surface under test and onto a reference reflecting surface, causes the light reflected from the two surfaces to interfere, and captures the resulting interference fringes with imaging means such as a CCD camera. The shape of the surface under test is then calculated from the shape of the fringes by image analysis.
[0003]
Because an interference-based apparatus of this kind measures without touching the workpiece, it does not damage the workpiece surface; and because the imaging means captures an image of the workpiece surface as measurement data instantaneously, the surface shape can be measured at high speed and with high accuracy.
[0004]
[Problems to Be Solved by the Invention]
Conventional non-contact surface shape measuring apparatuses, however, have the following problems. First, the measurement range (the area of one field of view of the CCD camera) is determined by the CCD imaging area divided by the objective lens magnification, so the area that can be measured in a single shot is limited. Second, the lateral resolution of the measurement data is determined by the measurement range divided by the number of CCD elements, so widening the field of view lowers the lateral resolution accordingly.
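Both constraints are simple ratios. As a rough numerical sketch (the 512-pixel sensor axis and the 1 mm and 10 mm field widths below are hypothetical illustration values, not figures from the patent), the trade-off looks like this:

```python
# Lateral resolution along one axis = field-of-view size / pixel count.
# The sensor size and field widths are assumed values, not patent data.
def lateral_resolution_um(field_mm: float, pixels: int) -> float:
    """Lateral resolution in micrometers along one axis."""
    return field_mm * 1000.0 / pixels

narrow = lateral_resolution_um(1.0, 512)   # small area, fine detail
wide = lateral_resolution_um(10.0, 512)    # 10x wider field, 10x coarser
```

Widening the field tenfold coarsens the lateral resolution tenfold, which is precisely the trade-off the invention sidesteps by stitching many narrow fields together.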
[0005]
Consequently, with conventional apparatuses, surface shape measurement at high lateral resolution was limited to narrow measurement ranges on the order of a fraction of a millimeter square to a few millimeters square.
The present invention has been made in view of these circumstances, and its object is to provide a non-contact surface shape measuring apparatus using optical interference that can measure an area larger than one field of view while maintaining the high lateral resolution of the measurement data.
[0006]
[Means for Solving the Problems]
To achieve the above object, the invention of claim 1 is a non-contact surface shape measuring apparatus in which interference fringes produced by interference between light reflected from the surface under test and reference light are captured by imaging means and the shape of the surface under test is measured on the basis of the captured fringes, the apparatus comprising: measurement range dividing means for dividing the measurement range of the surface under test into a plurality of regions such that part of each region overlaps another region; image acquisition means for acquiring an interference fringe image with the imaging means for each region produced by the measurement range dividing means; shape data calculation means for calculating shape data representing the shape of each region on the basis of the images acquired by the image acquisition means; shape data correction means for correcting the shape data of the regions calculated by the shape data calculation means so that the data agree most closely in the overlapping portions, wherein the region at the center of the measurement range of the surface under test is taken as the initial reference measurement plane and the shape data of surrounding regions that overlap it are corrected against it, each corrected region in turn becoming a new reference measurement plane against which overlapping uncorrected regions are corrected, so that the shape data of all regions are corrected in sequence; and shape data generation means for joining the shape data of the regions corrected by the shape data correction means into shape data representing the shape of the entire measurement range of the surface under test.
[0007]
According to this invention, when the area of the measurement range is large relative to one field of view of the imaging means, the range is divided into a plurality of regions, each partly overlapping its neighbors, and interference fringes are imaged and shape data calculated for each region. The shape data of the regions are then corrected so that they agree most closely in the overlapping portions, and the corrected data are joined into shape data representing the shape of the entire measurement range of the surface under test. A surface larger than one field of view can thus be measured while the lateral resolution of the measurement data (shape data) is kept high.
[0008]
The invention of claim 2 is a non-contact surface shape measuring apparatus in which interference fringes produced by interference between light reflected from the surface under test and reference light are captured by imaging means and the shape of the surface under test is measured on the basis of the captured fringes, the apparatus comprising: region setting means for setting discrete regions within the measurement range of the surface under test; image acquisition means for acquiring an interference fringe image with the imaging means for each region set by the region setting means; shape data calculation means for calculating shape data representing the shape of each region on the basis of the acquired images; and shape data correction means for correcting the shape data of each region on the basis of error data representing the errors caused by the moving mechanism that moves the field of view of the imaging means, and combining the corrected shape data of the regions into shape data representing the shape of the entire measurement range of the surface under test.
[0009]
According to this invention, when the area of the measurement range is large relative to one field of view of the imaging means, discrete regions are set within the measurement range, and interference fringes are imaged and shape data calculated for each region. The errors caused by the moving mechanism that moves the field of view of the imaging means are then removed from the shape data of each region, and the data are combined into shape data representing the shape of the entire measurement range of the surface under test. An area larger than one field of view can thus be measured while the lateral resolution of the measurement data (shape data) is kept high.
[0010]
[Embodiments of the Invention]
Preferred embodiments of the non-contact surface shape measuring apparatus according to the present invention are described in detail below with reference to the accompanying drawings.
FIG. 1 is a perspective view showing the configuration of the measuring unit of a non-contact surface shape measuring apparatus according to the present invention. As shown in the figure, the measuring unit comprises an XY stage 12 that moves in the X and Y directions on a base 10, and a measuring unit main body 16 that is supported by a column 14 standing on the base 10 and, guided by the column, moves vertically (in the Z direction). The XY stage 12 and the main body 16 are each motor-driven under the control of a control unit (not shown), or are moved in the XY and Z directions by manual operation of knobs (not shown).
[0011]
The workpiece W to be measured is placed on the XY stage 12 as shown in the figure, and the stage movement described above moves the workpiece in the X and Y directions to adjust the measurement position. As described later, when the measurement area of the workpiece W is large and the measurement range exceeds one field of view of the CCD camera mounted in the main body 16, the measurement position is adjusted automatically by a preset sequence of movements.
[0012]
A laser interferometer and an imaging section are housed in the main body 16, and an objective lens 16A, which is part of the interferometer, is attached to the underside of the main body. In this apparatus a Twyman-Green interferometer, for example, is used as the laser interferometer. The interferometer emits a laser beam from a laser source in the main body 16, splits the beam into two with a half-mirror, and directs the two beams through objective lenses onto the surface of the workpiece W on the XY stage 12 and onto a predetermined reference reflecting surface, respectively. The beams reflected by the workpiece surface and the reference surface return through the objective lenses, are recombined at the half-mirror, and interfere; the interfering light is guided to the imaging section. Many kinds of interferometer are known; the present embodiment uses a Twyman-Green interferometer, but the invention is not limited to this, and any type of interferometer may be used.
[0013]
The imaging section consists of an imaging lens and a CCD camera. The interfering light guided from the half-mirror as described above is focused by the imaging lens onto the imaging surface of the CCD camera. An image of the interference fringes produced by interference between the light reflected from the workpiece surface and the light reflected from the reference surface (the reference light) is thereby obtained.
[0014]
The fringe image of the workpiece surface acquired by the main body 16 in this way is sent to a computation unit (not shown; the unit consists mainly of a computer), which performs predetermined image analysis to determine the shape of the workpiece surface.
The configuration of the computation unit that calculates the workpiece surface shape from the images acquired by the main body 16 is described next. FIG. 2 is a block diagram of the unit. As shown in the figure, the computation unit comprises a CPU 30, a memory (RAM) 32, an external storage device 34, a display unit 36, an input unit 38, a stage control unit 40, an image processing unit 42, and so on.
[0015]
The CPU 30 controls the XY stage 12 of the measuring unit and performs data processing and other computations according to a predetermined program. The memory 32 and the external storage device 34 record data sent from the CPU 30 and return recorded data to it as required; the external storage device 34 is used for saving data. The display unit 36 presents various information screens in text, numbers, and graphics, such as a measurement setup screen before measurement and a measurement results screen afterwards. The input unit 38 passes information entered by the user, for example with a keyboard or mouse, to the CPU 30.
[0016]
The stage control unit 40 drives the motors of the XY stage 12 on the basis of control signals from the CPU 30 to move the stage in the X and Y directions. The CPU 30 issues these control signals, for example, according to a stage trajectory set in advance by the user.
The image processing unit 42 analyzes the fringe images supplied by the imaging section 44 of the main body 16 and detects the phase at each point of the workpiece surface. The phase data are passed to the CPU 30, which calculates from them shape data (three-dimensional coordinate data) representing the shape of the workpiece surface.
[0017]
The processing performed when the apparatus configured as above measures the surface shape of a region larger than one field of view of the CCD camera is described next.
The apparatus can display the surface shape of a measurement region larger than one field of view of the CCD camera as a single result, and can measure that large region without the loss of lateral resolution that would accompany simply widening the field. Two measurement methods are provided: a first method that measures the surface shape of the entire measurement region without gaps, and a second method that measures the surface shape discretely at required positions within the region; they are used according to the purpose of the measurement. The first method captures the continuous shape of the whole region and is useful when that is what must be known; the second shortens the measurement time when a partial picture of the region is sufficient. The two methods are described in turn below.
[0018]
The processing of the first method is described first. To execute it, the user enters from the input unit 38, before measurement starts, the measurement region of the workpiece W placed on the XY stage 12 of the measuring unit. FIG. 3(A) shows an example of a region so set. If the area of this measurement region S exceeds the area of one field of view P of the CCD camera, shown in FIG. 3(B), the CPU 30 divides the region S into a plurality of measurement planes P1 to P9, each equal in area to one field of view P, as shown in FIG. 3(C), setting each plane so that it partly overlaps the adjacent planes. The figure shows the case where the area of S is less than nine times the area of P and S is divided into the nine planes P1 to P9; the number of divisions is not limited to nine and can be set arbitrarily. The CPU 30 registers the X and Y positions of the divided planes P1 to P9 in the memory 32 as measurement plane positions.
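A minimal sketch of this division step (the tile size, overlap fraction, and function names are assumptions for illustration; the patent does not prescribe an algorithm):

```python
import math

def tile_origins(region_w, region_h, fov_w, fov_h, overlap=0.1):
    """Origins (x, y) of field-of-view-sized tiles covering a region,
    with adjacent tiles overlapping by `overlap` of one field."""
    step_x = fov_w * (1.0 - overlap)   # advance less than one field
    step_y = fov_h * (1.0 - overlap)   # so neighboring tiles overlap
    nx = max(1, math.ceil((region_w - fov_w) / step_x) + 1)
    ny = max(1, math.ceil((region_h - fov_h) / step_y) + 1)
    # Clamp the last row/column so no tile extends past the region.
    return [(min(i * step_x, region_w - fov_w),
             min(j * step_y, region_h - fov_h))
            for j in range(ny) for i in range(nx)]
```

For a 3 × 3 region and a 1 × 1 field with 10% overlap this yields a 4 × 4 grid of tile origins, the analogue of planes P1 to P9 for a larger division count.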
[0019]
Next, the user enters from the input unit 38 the detour points at which the direction of travel of the XY stage 12 reverses. When the area of the measurement region S exceeds the area of one field of view P of the CCD camera, the region is divided into plural measurement planes, each imaged in sequence to acquire its fringe image. The CPU 30 outputs control signals to the stage control unit 40 (see FIG. 2) and moves the XY stage 12 automatically, so that the field of view of the CCD camera traverses the measurement planes along a trajectory such as that shown in FIG. 4. As the stage moves, the field of view travels in the X direction from plane P1 to plane P3, acquiring the images of P1 to P3, and then moves from P3 to the detour point T1 shown in the figure. From T1 it again travels in the X direction, from plane P4 to plane P6, acquiring the images of P4 to P6; it then moves from P6 to the detour point T2, and from T2 travels in the X direction from plane P7 to plane P9, acquiring the images of P7 to P9.
[0020]
The detour points T1 and T2 lie to the left of planes P4 and P7 in FIG. 4, so that when planes P4 to P6 or P7 to P9 are imaged, the field of view of the CCD camera approaches from the left of P4 or P7. Moving the XY table 12 in this way means that every row of measurement planes is imaged while the stage travels in one and the same direction, which effectively prevents imaging position errors due to backlash in the XY table 12. The user enters the X and Y positions of T1 and T2 in advance from the input unit 38 and registers them in the memory 32; during measurement the CPU 30 moves the XY stage 12 on the basis of these detour points and drives the field of view of the CCD camera along the trajectory shown in FIG. 4.
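The detour logic can be sketched as follows (a hypothetical illustration: grid indices stand in for the stage positions of FIG. 4, and the detour offset is an assumed value):

```python
# One-directional raster scan: before each new row the stage first visits
# a detour point to the left of the row's first tile, so every tile is
# imaged while the stage travels in the same direction and gear backlash
# cannot shift the imaging positions.
def scan_path(rows, cols, detour_offset=1.0):
    """Return (x, y) stage targets in visiting order; detour points
    are the targets with negative x."""
    path = []
    for r in range(rows):
        if r > 0:
            path.append((-detour_offset, float(r)))  # detour point T
        path.extend((float(c), float(r)) for c in range(cols))
    return path
```

For a 3 × 3 grid this produces the nine tile positions of FIG. 4 plus two detour targets, corresponding to T1 and T2.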
[0021]
When these settings are complete and the user instructs measurement to start from the input unit 38, the CPU 30 executes the following processing. First, it outputs a control signal to the stage control unit 40 to move the XY stage 12 and bring the field of view of the CCD camera to the position of measurement plane P1. It then captures an image of the interference fringes on P1 from the CCD camera and determines the shape of P1 from that image. (The method of determining the shape of a measurement plane from a fringe image is well known and is not described here.) The shape of P1 so obtained is saved in the memory 32 as shape data, together with the measurement height and the measurement position. The measurement height is the distance from the object (the surface of the workpiece W) to the objective lens 16A (see FIG. 1), and can be read from a scale installed in the measuring unit; the measurement position is the X, Y position of the XY stage 12, likewise read from a scale installed in the measuring unit.
[0022]
The CPU 30 then moves the XY table 12 so that the field of view of the CCD camera follows the trajectory shown in FIG. 4, determines the shape data of planes P2 to P9 in the same way, and saves them in the memory 32 together with their measurement heights and positions.
Once shape data have been obtained for all planes P1 to P9, the CPU 30 performs the following correction computations in order to join the shape data of the individual planes properly into one set of shape data representing the shape of the whole measurement region S. Two corrections, a first and a second, are applied to generate the shape data of the whole region S; they are described in turn.
[0023]
The first correction is applied, over all planes P1 to P9, to pairs of adjacent measurement planes. The order in which the planes are corrected is described later; of the two planes in a pair, one is the reference measurement plane and the other the plane to be corrected. If the measurement height of the reference plane is Z and that of the plane to be corrected is z, the correction subtracts (z - Z) from all the shape data of the plane to be corrected. Its purpose is to remove the Z-direction error that varies as the XY table 12 moves.
[0024]
The second correction is applied after the first, again to a pair of measurement planes. As shown in FIG. 6(A), one of the pair is the reference measurement plane and the other the plane to be corrected. Writing the reference plane as Φ(X,Y), the plane to be corrected as Ψ(X,Y), and the plane obtained by the present correction as Ψ′(X,Y), the corrected plane is given by equation (1):
[0025]
Ψ′(X,Y) = Ψ(X,Y) + αX + βY + γ …(1)
where α, β, and γ are tilt correction coefficients.
From the shape data of the region where the reference plane and the plane to be corrected overlap, the CPU 30 computes by the method of least squares the coefficients α, β, γ that minimize the difference Φ(X,Y) - Ψ′(X,Y) over the overlap, and substitutes them into equation (1) to correct the shape data of the plane to be corrected. As shown in FIG. 6(B), the shape data of the reference plane and the corrected plane then take continuous values. This correction removes errors caused by, among other things, the movement of the XY table 12, and prevents the discontinuities that would otherwise arise from imaging one measurement region S as several separate planes.
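A minimal sketch of this least-squares step in pure Python (the data layout and function name are assumptions; the patent specifies only the model of equation (1), not an implementation):

```python
def fit_tilt(pts):
    """pts: iterable of (x, y, phi, psi) samples from the overlap region.
    Returns (alpha, beta, gamma) minimizing
    sum((phi - psi - alpha*x - beta*y - gamma)**2)."""
    # Accumulate the 3x3 normal equations A^T A c = A^T r, rows (x, y, 1).
    S = [[0.0] * 4 for _ in range(3)]       # augmented [A^T A | A^T r]
    for x, y, phi, psi in pts:
        r = phi - psi
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            S[i][3] += row[i] * r
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(S[k][col]))
        S[col], S[piv] = S[piv], S[col]
        for k in range(col + 1, 3):
            f = S[k][col] / S[col][col]
            for j in range(col, 4):
                S[k][j] -= f * S[col][j]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coeffs[i] = (S[i][3] - sum(S[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / S[i][i]
    return tuple(coeffs)
```

Applying the fitted (alpha, beta, gamma) through equation (1) removes the relative tilt and offset between the two planes over their overlap.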
[0026]
FIG. 5 shows one example of the order in which the corrections are applied. The measurement region shown is divided into 5 × 5 measurement planes, and the number in each plane indicates the order in which that plane is corrected. First, the central plane (No. 1) is taken as the reference measurement plane and the surrounding planes (No. 2) as the planes to be corrected, and the first and second corrections are applied. When all four No. 2 planes have been corrected, the corrected No. 2 planes become the reference planes and the surrounding No. 3 planes the planes to be corrected, and the first and second corrections are applied again. When all eight No. 3 planes have been corrected, the No. 4 and No. 5 planes are corrected by the same procedure. By correcting outward from the central plane in this way, every plane can be corrected. The order of correction is, however, not limited to this.
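The center-outward ordering of FIG. 5 can be generated by ranking planes in rings around the central plane. In the sketch below, the choice of ring metric (Manhattan distance, which gives 4 planes at rank 2 and 8 at rank 3 on a 5 × 5 grid, matching the counts in the text) is an inference, not stated verbatim in the patent:

```python
# Rank every plane of an n x n grid by its ring around the central
# plane: rank 1 at the center, then outward ring by ring, so each
# plane is corrected against an already-corrected inner neighbor.
def correction_order(n):
    """Map (row, col) -> correction rank for an n x n grid (n odd)."""
    c = n // 2
    return {(r, q): abs(r - c) + abs(q - c) + 1
            for r in range(n) for q in range(n)}
```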
[0027]
With the corrections described above, even when a measurement region exceeding one field of view of the CCD camera is divided into plural measurement planes and the surface shape of each plane is computed independently, the shape data of the planes can be joined properly, yielding shape data as if the surface shape of the entire region had been measured from an image acquired within a single field of view.
[0028]
On the basis of the shape data for the whole region S computed in this way, the CPU 30 displays the surface shape of S graphically on the display unit 36, or calculates parameters such as the flatness and roughness of S and displays the results on the display unit 36.
The first method described above is used to measure the surface shape of a measurement region without gaps; the second method, described next, is used to measure the surface shape discretely at required positions within the region.
[0029]
The second method applies when the measurement region S of the workpiece W placed on the XY stage 12 (FIG. 7(A)) exceeds one field of view P of the CCD camera (FIG. 7(B)) and the surface shape is to be measured discretely at required positions within S. Before measurement the user enters the measurement positions (X, Y positions) from the input unit 38. For example, to set nine measurement positions (measurement planes P1 to P9) within S as shown in FIG. 7(C), the user enters the X, Y positions of those nine planes from the input unit 38. As in the first method, the X, Y positions of the detour points T1 and T2 for the stage movement during measurement are also entered from the input unit 38. The trajectory of the camera's field of view as the XY stage 12 moves is the same as in the first method, as shown in FIG. 8 (compare FIG. 4).
[0030]
Also before measurement, the user acquires readings of a flatness master over the entire travel range of the XY stage 12 and saves them in the memory 32 as stage correction data. This is done as follows. A flatness master having a nearly ideal plane surface is placed on the XY stage 12 as the object to be measured. Its surface shape is measured over one field of view to obtain shape data, and the mean height of those shape data is saved in the memory 32 together with the measurement position (the X, Y position of the XY stage 12). This measurement is repeated, moving the XY stage 12 one field of view at a time, over the stage's entire travel range.
[0031]
When these settings are complete and the user instructs measurement to start from the input unit 38, the CPU 30 moves the XY stage 12 as in the first method and acquires the shape data of the discretely set measurement planes P1 to P9. The acquisition procedure is the same as in the first method and is not repeated here.
Once the shape data of planes P1 to P9 have been acquired, the CPU 30 corrects them as follows. Two corrections are again applied; the first is performed in the same way as the first correction of the first method and is not described again, so only the second correction is described below.
[0032]
The second correction uses the stage correction data created with the flatness master as described above. Because the stage correction data exist only at discrete positions, the following processing is performed for each measurement plane.
FIG. 9 shows the positional relationship between a plane R to be corrected and the positions q, q, … at which stage correction data are registered; FIG. 10 is an enlargement of the dotted frame r in FIG. 9. The CPU 30 computes the correction value at the center position A of the plane R by proportional distribution: from the stage correction data Z1(d), Z2(d), Z3(d), Z4(d) at the registration positions Z1, Z2, Z3, Z4 surrounding the plane, it computes the correction value A(d).
[0033]
Here the stage correction datum Z1(d) consists of the parameters (L, M, N, P) of the least-squares regression plane computed from the measured shape data of the one-field measurement plane centered on the XY plane coordinates Z1(X,Y), and the height (d) of that regression plane at the coordinates Z1(X,Y); the same holds for Z2(d), Z3(d), and Z4(d). The correction value A(d) consists of the parameters (L, M, N, P) of the least-squares regression plane computed from the four stage correction registration positions Z1, Z2, Z3, Z4 surrounding the XY plane coordinates A(X,Y), and the height (d) of that regression plane at the coordinates A(X,Y).
[0034]
To compute the correction value A(d), write it as the plane λX + μY + νZ = d and compute the angle vector (λ, μ, ν) as follows.
(1) Compute A(λ), the X-direction component of A(d).
First, the X-direction (L) components Z1(L) of Z1(d) and Z2(L) of Z2(d) are proportionally distributed according to the distance Z1 to X1 and the distance X1 to Z2 shown in FIG. 10, giving X1(λ). That is,

X1(λ) = Z1(L) + {Z2(L) - Z1(L)} × {A(X) - Z1(X)} / {Z2(X) - Z1(X)}

where A(X), Z1(X), and Z2(X) are the X coordinate values of the respective points.
[0035]
Similarly, the X-direction (L) components Z3(L) of Z3(d) and Z4(L) of Z4(d) are proportionally distributed according to the distance Z3 to X2 and the distance X2 to Z4 shown in FIG. 10, giving X2(λ). That is,

X2(λ) = Z3(L) + {Z4(L) - Z3(L)} × {A(X) - Z3(X)} / {Z4(X) - Z3(X)}

where Z3(X) and Z4(X) are the X coordinate values of the respective points.
[0036]
Then X1(λ) and X2(λ) are proportionally distributed according to the distance X1 to A and the distance A to X2 shown in FIG. 10, giving A(λ). That is,

A(λ) = X1(λ) + {X2(λ) - X1(λ)} × {A(Y) - Z1(Y)} / {Z3(Y) - Z1(Y)}

where A(Y), Z1(Y), and Z3(Y) are the Y coordinate values of the respective points.
(2) Compute A(μ), the Y-direction component of A(d).
[0037]
As in (1), Y1(μ) is obtained from the Y-direction (M) components Z1(M) of Z1(d) and Z3(M) of Z3(d), and Y2(μ) from the Y-direction (M) components Z2(M) of Z2(d) and Z4(M) of Z4(d). That is,

Y1(μ) = Z1(M) + {Z3(M) - Z1(M)} × {A(Y) - Z1(Y)} / {Z3(Y) - Z1(Y)}
Y2(μ) = Z2(M) + {Z4(M) - Z2(M)} × {A(Y) - Z2(Y)} / {Z4(Y) - Z2(Y)}

where A(Y), Z1(Y), Z2(Y), Z3(Y), and Z4(Y) are the Y coordinate values of the respective points.
[0038]
A(μ) is then obtained from Y1(μ) and Y2(μ). That is,

A(μ) = Y1(μ) + {Y2(μ) - Y1(μ)} × {A(X) - Z1(X)} / {Z2(X) - Z1(X)}

(3) Compute A(ν), the Z-direction component of A(d), from A(λ) and A(μ). With
α = (X range of one field of view) / 2
β = (Y range of one field of view) / 2
we have
Figure 0003835505
After computing the correction value A(d) by steps (1) to (3) above, the CPU 30 corrects the shape data by subtracting the height datum (d) at the plane center position A and the angle vector (λ, μ, ν) from all the shape data of the plane R to be corrected.
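The proportional distribution of steps (1) and (2) is a bilinear interpolation between the four registered positions; a compact sketch (hypothetical function name and a corner layout assumed for illustration):

```python
# Bilinearly interpolate one correction component (e.g. the tilt
# component L) at point A inside the rectangle of the four registered
# stage-correction positions: Z1 top-left, Z2 top-right, Z3 bottom-left,
# Z4 bottom-right, each given as (x, y, value).
def interp_component(ax, ay, z1, z2, z3, z4):
    """Value of the component at (ax, ay)."""
    tx = (ax - z1[0]) / (z2[0] - z1[0])    # fraction along Z1 -> Z2
    ty = (ay - z1[1]) / (z3[1] - z1[1])    # fraction along Z1 -> Z3
    top = z1[2] + (z2[2] - z1[2]) * tx     # X1, on the Z1-Z2 edge
    bottom = z3[2] + (z4[2] - z3[2]) * tx  # X2, on the Z3-Z4 edge
    return top + (bottom - top) * ty       # A, between X1 and X2
```

Running this once per plane-parameter component corresponds to the X1(λ), X2(λ), A(λ) chain for λ and the Y1(μ), Y2(μ), A(μ) chain for μ.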
[0039]
By the above corrections, the errors caused by the movement of the XY table 12 can be removed from the shape data of each measurement plane.
The CPU 30 combines the corrected shape data of the measurement planes into one set of shape data for the entire measurement region, and on the basis of these data displays the surface shape of the region S graphically on the display unit 36, or calculates parameters such as the flatness and roughness of S and displays the results on the display unit 36.
[0040]
[Effects of the Invention]
As described above, with the non-contact surface shape measuring apparatus according to the present invention, when the area of the measurement range is large relative to one field of view of the imaging means, the range is divided into a plurality of regions, each partly overlapping its neighbors, and interference fringes are imaged and shape data calculated for each region. The shape data are then corrected so that they agree most closely in the overlapping portions and joined into shape data representing the shape of the entire measurement range of the surface under test. A surface larger than one field of view can thus be measured while the lateral resolution is kept high.
[0041]
Likewise, when the area of the measurement range is large relative to one field of view of the imaging means, discrete regions can be set within the range, and interference fringes imaged and shape data calculated for each region. The errors caused by the moving mechanism that moves the field of view of the imaging means are removed from the shape data of the regions, which are then combined into shape data representing the shape of the entire measurement range of the surface under test. Again, a surface larger than one field of view can be measured while the lateral resolution is kept high.
[Brief Description of the Drawings]
[FIG. 1] FIG. 1 is a perspective view showing the configuration of the measuring unit of a non-contact surface shape measuring apparatus according to the present invention.
[FIG. 2] FIG. 2 is a block diagram showing the configuration of the computation unit of the apparatus.
[FIG. 3] FIGS. 3(A), (B), and (C) show, respectively, a measurement region whose shape data are to be obtained, one field of view of the CCD camera, and the measurement planes into which the region is divided.
[FIG. 4] FIG. 4 shows the trajectory of the field of view of the CCD camera and the measurement order of the planes.
[FIG. 5] FIG. 5 shows the correction order of the measurement planes.
[FIG. 6] FIGS. 6(A) and (B) are explanatory diagrams illustrating the correction computation.
[FIG. 7] FIGS. 7(A), (B), and (C) show, respectively, a measurement region whose shape data are to be obtained, one field of view of the CCD camera, and the measurement planes into which the region is divided.
[FIG. 8] FIG. 8 shows the trajectory of the field of view of the CCD camera and the measurement order of the planes.
[FIG. 9] FIG. 9 shows the registration positions of the correction data and the position of a plane to be corrected.
[FIG. 10] FIG. 10 is an enlargement of the dotted region in FIG. 9.
【符号の説明】
12…XYステージ
16…測定部本体
30…CPU
32…メモリ
36…表示部
38…入力部
40…ステージ制御部
42…画像処理部
44…撮像部[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a non-contact surface shape measuring apparatus, and more particularly to a non-contact surface shape measuring apparatus that measures the shape of a surface to be measured using light interference.
[0002]
[Prior art]
2. Description of the Related Art Conventionally, a non-contact surface shape measuring apparatus that measures a fine shape of a measurement surface in a non-contact manner using a laser interferometer is widely known. This non-contact surface shape measuring apparatus emits laser light from a laser light source to a measurement surface and a reference reflection surface, and interferes light reflected by the measurement surface and the reference reflection surface, resulting in interference fringes generated by the interference. Is imaged by an imaging means such as a CCD camera. Then, the shape of the surface to be measured is calculated from the shape of the interference fringes by image analysis.
[0003]
Such a non-contact surface shape measuring apparatus using light interference can measure without contacting the workpiece surface, so that the workpiece surface is not damaged, and an image of the workpiece surface is used as measurement data. Since it is captured instantaneously by the imaging means, the surface shape can be measured with high speed and high accuracy.
[0004]
[Problems to be solved by the invention]
However, the conventional non-contact surface shape measuring apparatus has the following problems. One problem is that the area that can be measured once is limited because the measurement range (the area of one field of view of the CCD camera) is determined by the CCD imaging area / objective lens magnification. Another problem is that the lateral resolution of the measurement data is determined by the measurement range / the number of CCD elements, so that if the area of one field of view is widened, the lateral resolution is reduced accordingly.
[0005]
Therefore, the measurement of the surface shape with high lateral resolution is 0. It was limited to a narrow measurement range of several mm square to several mm square.
The present invention has been made in view of such circumstances. In a non-contact surface shape measuring apparatus using light interference, the present invention has a large area exceeding the area of one field of view while maintaining the lateral resolution of measurement data at a high resolution. An object of the present invention is to provide a non-contact surface shape measuring apparatus capable of performing measurement.
[0006]
[Means for Solving the Problems]
In order to achieve the above object, according to the first aspect of the present invention, an interference fringe generated by the interference between the light reflected by the surface to be measured and the reference light is picked up by the image pickup means, and based on the picked up interference fringes. In the non-contact surface shape measuring apparatus for measuring the shape of the surface to be measured, it is means for dividing the measurement range of the surface to be measured into a plurality of regions so that a part of each region overlaps with another region. A measurement range dividing unit that divides the image into regions, an image acquisition unit that acquires an image of interference fringes by the imaging unit for each region divided by the measurement range dividing unit, and an image acquired by the image acquisition unit. The shape data calculation means for calculating shape data indicating the shape and the shape data of each area calculated by the shape data calculation means match the shape data most closely in the overlapping part of each area. Corrected so as to A shape data correcting unit that performs an area that is located in the center of the measurement range of the measurement target surface of each area as an initial reference measurement surface, and has a portion that overlaps the reference measurement surface area The shape data of the area of the corrected measurement surface is corrected with the corrected area as the corrected measurement surface, the corrected area is set as a new reference measurement surface, and there is a portion overlapping with the area of the reference measurement surface. Using the uncorrected area as the corrected measurement surface, the shape data of all the areas are sequentially corrected by correcting the shape data of the area of the corrected measurement surface. 
Shape data correction means; Each corrected by the shape data correcting means Shape data indicating the shape of the entire measurement range of the surface to be measured is generated by connecting the shape data of the regions. Shape data generation means; It is characterized by having.
[0007]
According to the present invention, when the area of the measurement range is large with respect to one visual field range of the imaging means, the measurement range is divided into a plurality of regions and overlaps with other adjacent regions in a part of the region. The shape data is calculated by imaging the interference fringes for each region. Then, the shape data of each region is corrected so that the shape data is best matched in the overlapping portion of each region, and the shape data indicating the shape of the entire measurement range of the measured surface is obtained by connecting the shape data of each region. Generate. Thereby, it is possible to measure the surface shape of a large area exceeding the area of one field of view while maintaining the lateral resolution of the measurement data (shape data) at a high resolution.
[0008]
According to the second aspect of the present invention, the interference fringes generated by the interference between the light reflected by the surface to be measured and the reference light are imaged by the imaging means, and the surface of the surface to be measured is based on the captured interference fringes. In a non-contact surface shape measuring apparatus that measures a shape, an area setting unit that sets a discrete area in the measurement range of the measurement target surface, and an image of interference fringes by the imaging unit for each area set by the area setting unit Due to the image acquisition means to be acquired, the shape data calculation means for calculating the shape data indicating the shape of each area based on the image acquired by the image acquisition means, and the moving mechanism for moving the visual field range of the imaging means Based on the error data indicating the error that occurs, the shape data of each area calculated by the shape data calculating means is corrected, and the corrected shape data of each area is combined to form the subject data. It is characterized with the shape data correction means for generating shape data indicating the measurement range overall shape of Teimen, further comprising a.
[0009]
According to the present invention, when the area of the measurement range is large with respect to one visual field range of the imaging means, a discrete region is set in the measurement range of the surface to be measured, and interference fringes are captured for each region to obtain shape data. Is calculated. Then, an error caused by the moving mechanism that moves the visual field range of the imaging means is removed from the shape data of each region, and the shape data indicating the shape of the entire measurement range of the surface to be measured is combined with the shape data Is generated. As a result, it is possible to measure a large area exceeding the area of one field of view while maintaining the lateral resolution of the measurement data (shape data) at a high resolution.
[0010]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, preferred embodiments of a non-contact surface shape measuring apparatus according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a perspective view showing a configuration of a measuring unit of a non-contact surface shape measuring apparatus according to the present invention. As shown in the figure, the measuring unit of this measuring apparatus is supported by an XY stage 12 that moves in the X and Y directions on the base 10 and a support 14 installed on the support 10, and is guided by the support 14. And a measuring unit main body 16 that moves in the vertical direction (Z direction). The XY stage 12 and the measurement unit main body 16 are driven by a motor under the control of a control unit (not shown), or moved in the XY direction and the Z direction by manual operation of a knob (not shown).
[0011]
The XY stage 12 mounts the workpiece W to be measured as shown in the figure, and moves the workpiece W in the XY direction by the above movement to adjust the measurement position. As will be described later, the measurement position is automatically adjusted by a preset operation when the measurement area of the workpiece W is wide and the measurement range exceeds one field of view of the CCD camera mounted on the measurement unit main body 16. To be done.
[0012]
A laser interferometer and an imaging unit are mounted inside the measurement unit main body 16, and an objective lens 16 </ b> A that is a part of the laser interferometer is attached to the lower part of the measurement unit main body 16. In this apparatus, for example, a Twyman-Green interferometer is employed as the laser interferometer. This laser interferometer emits a laser beam from a laser light source mounted in the measurement unit main body 16, and the light beam is divided into two beams by a half mirror and placed on the XY stage 12 via an objective lens, respectively. Irradiate the surface of the workpiece W and a predetermined reference reflecting surface. Then, the light beam (reflected light) reflected by the surface of the workpiece W and the reference reflecting surface is again overlapped by the half mirror through the objective lens, and the reflected light is interfered and guided to the imaging unit. It should be noted that many types of interferometers are known, and in the present embodiment, the Twiman-Green interferometer is adopted. However, the present invention is not limited to this, and an arbitrary interferometer is used. Various types of interferometers can be employed.
[0013]
The imaging unit is composed of an imaging lens and a CCD camera. As described above, the interfered light guided from the half mirror is collected by the imaging lens and imaged on the imaging surface of the CCD camera. Thereby, the image of the interference fringes generated by the interference between the reflected light reflected on the surface of the workpiece W and the reflected light (reference light) reflected on the reference reflecting surface can be acquired.
[0014]
The image of the interference fringes on the workpiece surface acquired by the measurement unit main body 16 in this way is transmitted to a calculation unit (not shown) (the calculation unit is mainly configured by a computer), and then a predetermined image analysis is performed by the calculation unit. As a result, the shape of the workpiece surface is required.
Next, the configuration of the calculation unit that calculates the shape of the workpiece surface based on the image acquired by the measurement unit main body 16 as described above will be described. FIG. 2 is a configuration diagram showing the configuration of the calculation unit. As shown in the figure, the calculation unit includes a CPU 30, a memory (RAM) 32, an external storage device 34, a display unit 36, an input unit 38, a stage control unit 40, an image processing unit 42, and the like.
[0015]
The CPU 30 performs various arithmetic processes such as control of the XY table 12 of the measuring unit and data processing according to a predetermined program. The memory 32 and the external storage device 34 record data sent from the CPU 30 as necessary, and output the recorded data to the CPU 30 as necessary. The external storage device 34 is a memory used for data storage. Various information screens such as a measurement content setting screen before the start of measurement and a measurement result display screen after the measurement are displayed on the display unit 36 as characters, numerical values, graphics, and the like. The input unit 38 inputs information input by the user using, for example, a keyboard or a mouse to the CPU 30.
[0016]
The stage controller 40 drives the drive motor of the XY stage 12 based on a control signal input from the CPU 30 to move the XY stage 12 in the X and Y directions. For example, the CPU 30 inputs the control signal to the stage control unit 40 according to the movement trajectory of the XY stage 12 set in advance by the user.
The image processing unit 42 analyzes the interference fringe image input from the imaging unit 44 of the measurement unit main body 16 and detects the phase of each point on the workpiece surface. The phase data indicating the phase is input to the CPU 30, and shape data (three-dimensional coordinate data) indicating the shape of the workpiece surface is calculated from the phase data.
[0017]
Next, processing contents when measuring the surface shape of the measurement region exceeding the area of one field of view of the CCD camera in the non-contact surface shape measuring apparatus configured as described above will be described.
This device can display the surface shape of a wide measurement area exceeding the area of one field of view of the CCD camera as one measurement result, and without reducing the lateral resolution compared to the case of measuring a narrow area. It is characterized in that the surface shape of the wide measurement area can be measured. As the measurement method, there are a first method for measuring the surface shape of the front surface of the measurement region without a gap and a second method for discretely measuring the surface shape at a required position in the measurement region, and these are determined according to the measurement purpose. The direction of the can be used properly. In the first method, the shape of the entire measurement region can be measured without any gap, and therefore, it is effective when it is desired to grasp the continuous shape of the entire measurement region. In the second method, the shape of the entire measurement region is known. It is effective in that the measurement time can be shortened when it is sufficient if it is not necessary and can be partially grasped. Hereinafter, the first method and the second method will be described in order.
[0018]
First, processing contents of the first method will be described. When executing the first method, the user first inputs the measurement area of the workpiece W placed on the XY stage 12 of the measurement unit from the input unit 38 before the measurement is started. An example of the measurement region thus set is shown in FIG. At this time, if the area of the measurement region S exceeds the area of one visual field range P of the CCD camera shown in FIG. 3B, the CPU 30 changes the measurement region S shown in FIG. As shown in FIG. 4B, the measurement area P1 is divided by a plurality of measurement planes P1 to P9 equivalent to the area of one visual field range P. At this time, the measurement surfaces P1 to P9 are set so as to partially overlap other adjacent measurement surfaces. Here, a state in which the measurement region S is divided by nine measurement surfaces P1 to P9 when the area of the measurement region S is less than nine times the area of one visual field range P is shown. Yes. However, the number of divisions is not limited to nine and can be set arbitrarily. The CPU 30 registers the X and Y positions of the measurement surfaces P1 to P9 thus divided in the memory 32 as measurement surface positions.
[0019]
Next, the user inputs a detour point that reverses the moving direction of the XY stage 12 from the input unit 38. The detour point will be described. When the area of the measurement region S exceeds the area of one visual field range P of the CCD camera as described above, the measurement region S is divided into a plurality of measurement surfaces and each measurement surface is sequentially imaged. Then, an image of the interference fringes is acquired. At this time, the CPU 30 outputs a control signal to the stage control unit 40 (see FIG. 2), and automatically moves the XY stage 12 so that the field of view of the CCD camera is relative to each measurement surface in a locus as shown in FIG. Move. According to this figure, the field of view of the CCD camera is moved in the X direction from the measurement surface P1 to the measurement surface P3 by moving the XY stage 12, and images from the measurement surfaces P1 to P3 are acquired, and then from the measurement surface P3. Detour point T shown in the figure 1 Move up. And detour point T 1 From the measurement surface P4 to the measurement surface P6, and images from the measurement surfaces P4 to P6 are acquired. Next, the detour point T shown in FIG. 2 After moving to detour point T 2 From the measurement surface P7 to the measurement surface P9, and images from the measurement surfaces P7 to P9 are acquired.
[0020]
Detour point T mentioned above 1 , T 2 4 is positioned on the left side of the measurement planes P4 and P7 as shown in FIG. 4, and when the measurement planes P4 to P6 or the measurement planes P7 to P9 are imaged, the field of view of the CCD camera is the measurement plane P4 or measurement. It moves from the left side of the position of the surface P7. By moving the XY table 12 in this way, the measurement surface of each column is imaged in the process of moving in one direction (left direction) of the XY stage 12, and the imaging position of the XY table 12 due to the back crash is changed. An error can be appropriately prevented. These detour points T 1 , T 2 The user inputs the X and Y positions in advance from the input unit 38 and registers them in the memory 32, so that the CPU 30 can make these detour points T during measurement. 1 , T 2 Based on this position, the XY stage 12 is moved, and the field of view of the CCD camera can be moved along a locus as shown in FIG.
[0021]
When the above setting is completed and the user instructs the start of measurement from the input unit 38, the CPU 30 executes the following processing. First, the CPU 30 outputs a control signal to the stage control unit 40, moves the XY stage 12, and brings the field of view of the CCD camera to the position of measurement surface P1. Next, an image of the interference fringes on measurement surface P1 is captured by the CCD camera, and the shape of measurement surface P1 is obtained from the interference-fringe image. Since the method of obtaining the shape of a measurement surface from an interference-fringe image is known, its description is omitted here. The shape of measurement surface P1 obtained in this way is stored in the memory 32 as shape data together with the measurement height and the measurement position. The measurement height is a value indicating the distance from the measurement object (the surface of the workpiece W) to the objective lens 16A (see FIG. 1), and can be obtained from a scale installed in the measurement unit. The measurement position indicates the X and Y positions of the XY stage 12, and can likewise be acquired from a scale installed in the measurement unit.
[0022]
Subsequently, the CPU 30 moves the XY stage 12 so that the field of view of the CCD camera follows the predetermined locus shown in FIG. 4, and the shape data of each of the measurement surfaces P2 to P9 is obtained in the same manner as described above and stored in the memory 32 together with the measurement height and the measurement position.
When shape data has thus been obtained for each of the measurement surfaces P1 to P9, the CPU 30 next performs the following correction calculations so that the shape data of these measurement surfaces can be properly joined into one set of shape data representing the entire shape of the measurement region S. The CPU 30 performs a first and a second correction calculation to generate the shape data of the entire measurement region S; these are described in order below.
[0023]
The first correction calculation is performed on pairs of mutually adjacent measurement surfaces, for all of the measurement surfaces P1 to P9. Although the order in which the measurement surfaces are corrected is described later, one of the two target measurement surfaces serves as the reference measurement surface and the other as the measurement surface to be corrected. With Z denoting the measurement height of the reference measurement surface and z that of the measurement surface to be corrected, the correction subtracts (z − Z) from the entire shape data of the measurement surface to be corrected. The purpose of this correction is to remove the Z-direction error that varies with the movement of the XY stage 12.
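As a minimal sketch of this first correction (an illustration only, not the patent's actual implementation; the function and parameter names are hypothetical, and NumPy is assumed for the array arithmetic):

```python
import numpy as np

def first_correction(shape_data, Z_ref, z_corr):
    # Subtract the measurement-height difference (z - Z) from every
    # point of the tile being corrected, removing the Z-direction
    # offset introduced by moving the XY stage.
    return shape_data - (z_corr - Z_ref)
```

For example, a tile measured at height z = 12.5 against a reference at Z = 10.0 is shifted down by 2.5 at every point.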
[0024]
The second correction calculation, performed after the first, likewise operates on a pair of measurement surfaces. As shown in FIG. 6A, one of the two target measurement surfaces is the reference measurement surface and the other is the measurement surface to be corrected. Denoting the reference measurement surface by Φ(X, Y), the measurement surface to be corrected by Ψ(X, Y), and the corrected surface obtained by applying the present correction to Ψ(X, Y) by Ψ′(X, Y), the corrected surface Ψ′(X, Y) is given by the following equation (1).
[0025]
Ψ′(X, Y) = Ψ(X, Y) + αX + βY + γ   (1)
Here, α, β, and γ are inclination correction coefficients.
Based on the shape data of the overlapping region of the reference measurement surface and the measurement surface to be corrected, the CPU 30 calculates the correction coefficients α, β, and γ that minimize Φ(X, Y) − Ψ′(X, Y) by the method of least squares. The correction coefficients α, β, and γ are then substituted into equation (1) above to correct the shape data of the measurement surface to be corrected. As a result, as shown in FIG. 6B, the shape data of the reference measurement surface and that of the corrected measurement surface take continuous values. That is, this correction calculation removes errors caused by the movement of the XY stage 12 and the like, and prevents discontinuities that would otherwise arise from imaging a single measurement region S divided into a plurality of measurement surfaces.
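The least-squares step can be sketched as follows (an illustration under the assumption that the overlap data are available as NumPy arrays; all names are hypothetical, not from the patent):

```python
import numpy as np

def fit_tilt_correction(phi_overlap, psi_overlap, x, y):
    # Fit (alpha, beta, gamma) minimizing the squared residual of
    # Phi - (Psi + alpha*X + beta*Y + gamma) over the overlap region.
    d = (phi_overlap - psi_overlap).ravel()
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(d.size)])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(A, d, rcond=None)
    return alpha, beta, gamma

def apply_tilt_correction(psi, x, y, alpha, beta, gamma):
    # Equation (1): Psi'(X, Y) = Psi(X, Y) + alpha*X + beta*Y + gamma
    return psi + alpha * x + beta * y + gamma
```

If the reference and corrected tiles differ by an exact plane in the overlap, the fit recovers that plane and the corrected tile joins the reference continuously.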
[0026]
FIG. 5 is a diagram showing an example of the order of the correction calculations. The measurement region shown here is assumed to be divided into 5 × 5 measurement surfaces, and the numerals on the measurement surfaces in the figure indicate the order in which they are corrected. First, the first and second correction calculations are performed with the central measurement surface (No. 1) as the reference measurement surface and its surrounding measurement surfaces (No. 2) as the measurement surfaces to be corrected. When the correction has been completed for all (four) No. 2 measurement surfaces, the corrected No. 2 measurement surfaces are in turn used as reference measurement surfaces, and the first and second correction calculations are performed with their surrounding measurement surfaces (No. 3) as the measurement surfaces to be corrected. When the correction has been completed for all (eight) No. 3 measurement surfaces, the same procedure is applied to the No. 4 and No. 5 measurement surfaces. In this way, by performing the correction outward from the central measurement surface toward the peripheral measurement surfaces, all measurement surfaces can be corrected. The order of the correction calculations is, however, not limited to this.
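One way to generate such a center-outward ordering is to rank the tiles by ring distance from the central tile. The sketch below (a hypothetical helper, not from the patent) uses Manhattan-distance rings, which is consistent with the four No. 2 and eight No. 3 surfaces described for the 5 × 5 case:

```python
def correction_order(rows, cols):
    # Group number = Manhattan distance from the central tile; tiles in
    # group k are corrected against already-corrected tiles in group k-1.
    # For a 5x5 grid the group sizes are 1, 4, 8, 8, 4 (No. 1 .. No. 5).
    cr, cc = rows // 2, cols // 2
    tiles = [(r, c) for r in range(rows) for c in range(cols)]
    return sorted(tiles, key=lambda t: abs(t[0] - cr) + abs(t[1] - cc))
```

Each tile then has at least one already-corrected neighbour sharing an overlap region, so the pairwise correction is always anchored to corrected data.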
[0027]
By the correction calculations described above, even when a measurement region exceeding one field of view of the CCD camera is divided into a plurality of measurement surfaces and the shape of each measurement surface is calculated independently, the shape data of the individual measurement surfaces are properly joined, and shape data can be obtained as if the surface shape of the entire measurement region had been measured from an image acquired in a single field of view.
[0028]
Based on the shape data of the entire measurement region S calculated in this way, the CPU 30 graphically displays the surface shape of the measurement region S on the display unit 36, or calculates parameters such as the flatness and roughness of the measurement region S and displays the results on the display unit 36.
The first method described above is used when measuring the surface shape of the measurement region without gaps. The second method described below is used when measuring the surface shape at required positions of the measurement region discretely.
[0029]
The second method is executed when the measurement region S (FIG. 7A) of the workpiece W placed on the XY stage 12 exceeds one visual-field range P (FIG. 7B) of the CCD camera and the surface shape at required positions of the measurement region S is to be measured discretely; in this case the user inputs the measurement positions (X, Y positions) from the input unit 38 before starting the measurement. For example, when nine measurement positions (measurement surfaces P1 to P9) are set in the measurement region S as shown in FIG. 7C, the X and Y positions of the nine measurement surfaces P1 to P9 are input from the input unit 38. Further, as in the first method, the X and Y positions of the detour points T1 and T2 for the movement of the XY stage 12 during measurement are input from the input unit 38. Incidentally, the movement locus of the field of view of the CCD camera accompanying the movement of the XY stage 12, shown in FIG. 8, is the same as in the first method (see FIG. 4).
[0030]
Further, before starting the measurement, the user acquires measured values of a flatness master over the entire movement range of the XY stage 12 as stage correction data and stores them in the memory 32. The stage correction data are acquired and stored as follows. First, a flatness master whose surface is a substantially ideal plane is placed on the XY stage 12 as the measurement object. The surface shape of one visual-field range of the flatness master is then measured to obtain shape data, and the average height of the shape data is stored in the memory 32 together with the measurement position (the X and Y positions of the XY stage 12). This measurement is repeated over the entire movement range of the XY stage 12, moving the XY stage 12 one visual-field range at a time.
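The acquisition loop can be sketched as follows (a simplified illustration; `measure_field` is a hypothetical callback returning the shape data of one field at a given stage position, and all names are assumptions, not from the patent):

```python
import numpy as np

def acquire_stage_correction(measure_field, x_travel, y_travel, fov):
    # Step the stage over its whole travel in one-field increments,
    # storing the mean height of each field keyed by stage position.
    table = {}
    for x in range(0, x_travel, fov):
        for y in range(0, y_travel, fov):
            data = measure_field(x, y)          # shape data of one field
            table[(x, y)] = float(np.mean(data))
    return table
```

The resulting table of discrete (position, average height) entries is what the second correction calculation later interpolates per measurement surface.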
[0031]
When the above setting is completed and the user instructs the start of measurement from the input unit 38, the CPU 30 moves the XY stage 12 in the same manner as in the first method and acquires the shape data of the discretely set measurement surfaces P1 to P9. Since the shape-data acquisition procedure is the same as in the first method, its description is omitted.
When shape data have been acquired for each of the measurement surfaces P1 to P9, the CPU 30 next corrects the shape data of these measurement surfaces as follows. The CPU 30 performs a first and a second correction calculation; the first is performed in the same manner as the first correction calculation described for the first method, so its description is omitted and only the second correction calculation is described below.
[0032]
The second correction calculation uses the stage correction data created with the flatness master as described above. Since the stage correction data exist only at discrete positions, the following processing is performed for each measurement surface.
FIG. 9 shows the positional relationship between the measurement surface R to be corrected and the positions q, q, ... at which stage correction data are registered, and FIG. 10 is an enlarged view of the dotted frame r in FIG. 9. The CPU 30 calculates a correction value at the center position A of the measurement surface R to be corrected by proportional distribution (linear interpolation). That is, the correction value A(d) is calculated from the stage correction data Z1(d), Z2(d), Z3(d), and Z4(d) at the registration positions Z1, Z2, Z3, and Z4 surrounding the surface to be corrected.
[0033]
Here, the stage correction data Z1(d) consist of the parameters (L, M, N, P) representing the least-squares regression plane calculated from the measured shape data of the one-field measurement surface centered on the XY-plane coordinate Z1(X, Y), and the height (d) of that least-squares regression plane at the coordinate Z1(X, Y). The same applies to the stage correction data Z2(d), Z3(d), and Z4(d). The correction value A(d) consists of the parameters (L, M, N, P) representing the least-squares regression plane calculated from the four stage-correction-data registration positions Z1, Z2, Z3, and Z4 surrounding the XY-plane coordinate A(X, Y), and the height (d) of that least-squares regression plane at the coordinate A(X, Y).
[0034]
In order to calculate the correction value A(d), the correction plane is expressed as λX + μY + νZ = d, and the angle vector (λ, μ, ν) is calculated as follows.
(1) The X-direction component A(λ) of the correction value A(d) is calculated.
First, X1(λ) is obtained by proportionally distributing the difference between the X-direction component (L component) Z1(L) of Z1(d) and the X-direction component (L component) Z2(L) of Z2(d) according to the distance between Z1 and X1 and the distance between X1 and Z2 shown in FIG. 10. That is,

X1(λ) = Z1(L) + {Z2(L) − Z1(L)} × {A(X) − Z1(X)} / {Z2(X) − Z1(X)}

where A(X), Z1(X), and Z2(X) denote the X-coordinate values of the respective points.
[0035]
Similarly, X2(λ) is obtained by proportionally distributing the difference between the X-direction component (L component) Z3(L) of Z3(d) and the X-direction component (L component) Z4(L) of Z4(d) according to the distance between Z3 and X2 and the distance between X2 and Z4 shown in FIG. 10. That is,

X2(λ) = Z3(L) + {Z4(L) − Z3(L)} × {A(X) − Z3(X)} / {Z4(X) − Z3(X)}

where Z3(X) and Z4(X) denote the X-coordinate values of the respective points.
[0036]
Then, A(λ) is obtained by proportionally distributing the difference between X1(λ) and X2(λ) according to the distance between X1 and A and the distance between A and X2 shown in FIG. 10. That is,

A(λ) = X1(λ) + {X2(λ) − X1(λ)} × {A(Y) − Z1(Y)} / {Z3(Y) − Z1(Y)}

where A(Y), Z1(Y), and Z3(Y) denote the Y-coordinate values of the respective points.
(2) The Y-direction component A(μ) of the correction value A(d) is calculated.
[0037]
In the same manner as in (1), Y1(μ) is obtained from the Y-direction component (M component) Z1(M) of Z1(d) and the Y-direction component (M component) Z3(M) of Z3(d), and Y2(μ) is obtained from the Y-direction component (M component) Z2(M) of Z2(d) and the Y-direction component (M component) Z4(M) of Z4(d). That is,

Y1(μ) = Z1(M) + {Z3(M) − Z1(M)} × {A(Y) − Z1(Y)} / {Z3(Y) − Z1(Y)}
Y2(μ) = Z2(M) + {Z4(M) − Z2(M)} × {A(Y) − Z2(Y)} / {Z4(Y) − Z2(Y)}

where A(Y), Z1(Y), Z2(Y), Z3(Y), and Z4(Y) denote the Y-coordinate values of the respective points.
[0038]
Then, A(μ) is obtained from Y1(μ) and Y2(μ). That is,

A(μ) = Y1(μ) + {Y2(μ) − Y1(μ)} × {A(X) − Z1(X)} / {Z2(X) − Z1(X)}
(3) The Z-direction component A(ν) of the correction value A(d) is calculated.
A(ν) is obtained from A(λ) and A(μ). That is, with

α = (X range of one field of view) / 2
β = (Y range of one field of view) / 2,

A(ν) is obtained from A(λ), A(μ), α, and β by the equation given in the original drawing.
After calculating the correction value A(d) by performing the above calculations (1) to (3), the CPU 30 corrects the shape data by subtracting the height data (d) at the measurement-surface center position A and the angle vector (λ, μ, ν) from the entire shape data of the measurement surface R to be corrected.
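The proportional distribution of steps (1) and (2) amounts to bilinear interpolation of a component between the four registration points. A sketch under the assumption of an axis-aligned grid (Z1 and Z2 sharing one Y value, Z1 and Z3 sharing one X value; all names are hypothetical, not from the patent):

```python
def lerp(v0, v1, t0, t1, t):
    # Proportional distribution of the difference (v1 - v0) by distance.
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def interpolate_component(z1, z2, z3, z4, x0, x1, y0, y1, ax, ay):
    # Bilinear interpolation of one correction component at the tile
    # centre A(ax, ay) from the four surrounding registration points
    # Z1(x0, y0), Z2(x1, y0), Z3(x0, y1), Z4(x1, y1).
    x1_val = lerp(z1, z2, x0, x1, ax)   # X1 in the text
    x2_val = lerp(z3, z4, x0, x1, ax)   # X2 in the text
    return lerp(x1_val, x2_val, y0, y1, ay)
```

At the midpoint of a unit cell this reduces to the average of the four corner values, as expected of bilinear interpolation.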
[0039]
The above correction calculation removes errors caused by the movement of the XY stage 12 from the shape data of each measurement surface.
The CPU 30 combines the shape data of the measurement surfaces corrected by the above calculation into one set of shape data for the entire measurement region and, based on this shape data, displays the surface shape of the measurement region S on the display unit 36, or calculates parameters such as the flatness and roughness of the measurement region S and displays the results on the display unit 36.
[0040]
【Effects of the Invention】
As described above, according to the non-contact surface shape measuring apparatus of the present invention, when the area of the measurement range is large relative to one visual-field range of the imaging means, the measurement range is divided into a plurality of regions such that part of each region overlaps an adjacent region, interference fringes are imaged for each region, and shape data are calculated. The shape data of each region are then corrected so that the shape data best match in the overlapping portions of the regions, and the shape data of the regions are joined to generate shape data representing the shape of the entire measurement range of the measured surface. This makes it possible to measure a surface shape whose area exceeds one field of view while maintaining a high horizontal resolution.
[0041]
Further, when the area of the measurement range is large relative to one visual-field range of the imaging means, discrete regions are set in the measurement range, interference fringes are imaged for each region, and shape data are calculated. Errors caused by the moving mechanism that moves the visual-field range of the imaging means are then removed from the shape data of each region, and the shape data of the regions are combined to generate shape data representing the shape of the entire measurement range of the measured surface. This likewise makes it possible to measure a surface shape whose area exceeds one field of view while maintaining a high horizontal resolution.
[Brief description of the drawings]
FIG. 1 is a perspective view showing a configuration of a measuring unit of a non-contact surface shape measuring apparatus according to the present invention.
FIG. 2 is a block diagram showing a configuration of a calculation unit of the non-contact surface shape measuring apparatus according to the present invention.
FIGS. 3A, 3B, and 3C are diagrams respectively showing a measurement region for which shape data is obtained, the visual-field range of the CCD camera, and the measurement surfaces into which the measurement region is divided.
FIG. 4 is a diagram showing a moving trajectory of the visual field of the CCD camera and a measurement order of each measurement surface.
FIG. 5 is a diagram illustrating a correction order of each measurement surface.
FIGS. 6A and 6B are explanatory diagrams illustrating the processing content of the correction calculation.
FIGS. 7A, 7B, and 7C are diagrams respectively showing a measurement region for which shape data is obtained, the visual-field range of the CCD camera, and the measurement surfaces set in the measurement region.
FIG. 8 is a diagram showing the movement trajectory of the field of view of the CCD camera and the measurement order of each measurement surface.
FIG. 9 is a diagram illustrating a registration position of correction data and a position of a corrected measurement surface.
FIG. 10 is an enlarged view of the dotted-line region in FIG. 9.
[Explanation of symbols]
12 ... XY stage
16 ... Main body of measuring unit
30 ... CPU
32 ... Memory
36 ... Display section
38 ... Input section
40 ... Stage control unit
42 ... Image processing unit
44 ... Imaging unit

Claims (2)

In a non-contact surface shape measuring apparatus that images, by an imaging means, interference fringes generated by interference between light reflected by a surface to be measured and reference light, and measures the shape of the surface to be measured based on the imaged interference fringes,
Measurement range dividing means for dividing the measurement range of the surface to be measured into a plurality of regions such that a part of each region overlaps another region;
Image acquisition means for acquiring an image of interference fringes by the imaging means for each region divided by the measurement range dividing means;
Shape data calculation means for calculating shape data indicating the shape of each region based on the image acquired by the image acquisition means;
Shape data correction means for correcting the shape data of each region calculated by the shape data calculation means so that the shape data best match in the overlapping portion of the regions, wherein the region located at the center of the measurement range of the surface to be measured is taken as the initial reference measurement surface, a peripheral region having a portion overlapping the region of the reference measurement surface is taken as the measurement surface to be corrected, and the shape data of the region of the measurement surface to be corrected are corrected; a corrected region is then taken as a new reference measurement surface, an uncorrected region having a portion overlapping the region of that reference measurement surface is taken as the measurement surface to be corrected, and its shape data are corrected, whereby the shape data of all regions are corrected in sequence;
Shape data generating means for connecting the shape data of each region corrected by the shape data correction means to generate shape data indicating the shape of the entire measurement range of the surface to be measured;
A non-contact surface shape measuring apparatus comprising:
In a non-contact surface shape measuring apparatus that images, by an imaging means, interference fringes generated by interference between light reflected by a surface to be measured and reference light, and measures the shape of the surface to be measured based on the imaged interference fringes,
Area setting means for setting a discrete area in the measurement range of the surface to be measured;
Image acquisition means for acquiring an image of interference fringes by the imaging means for each area set by the area setting means;
Shape data calculation means for calculating shape data indicating the shape of each region based on the image acquired by the image acquisition means;
Shape data correction means for correcting the shape data of each region calculated by the shape data calculation means based on error data indicating errors caused by a moving mechanism that moves the visual-field range of the imaging means, and for combining the corrected shape data of the regions to generate shape data indicating the shape of the entire measurement range of the surface to be measured;
A non-contact surface shape measuring apparatus comprising:
JP32355998A 1998-11-13 1998-11-13 Non-contact surface shape measuring device Expired - Fee Related JP3835505B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP32355998A JP3835505B2 (en) 1998-11-13 1998-11-13 Non-contact surface shape measuring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP32355998A JP3835505B2 (en) 1998-11-13 1998-11-13 Non-contact surface shape measuring device

Publications (2)

Publication Number Publication Date
JP2000146542A JP2000146542A (en) 2000-05-26
JP3835505B2 true JP3835505B2 (en) 2006-10-18

Family

ID=18156056

Family Applications (1)

Application Number Title Priority Date Filing Date
JP32355998A Expired - Fee Related JP3835505B2 (en) 1998-11-13 1998-11-13 Non-contact surface shape measuring device

Country Status (1)

Country Link
JP (1) JP3835505B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7969582B2 (en) 2007-12-17 2011-06-28 Olympus Corporation Laser scanning microscope apparatus, and surface shape measuring method thereof

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4607311B2 (en) * 2000-11-22 2011-01-05 富士フイルム株式会社 Wavefront shape measurement method and measurement wavefront shape correction method for large observation object by aperture synthesis
JP2007024764A (en) * 2005-07-20 2007-02-01 Daihatsu Motor Co Ltd Noncontact three-dimensional measuring method and system
JP2009122066A (en) * 2007-11-19 2009-06-04 Mitsutoyo Corp Non-contact three-dimensional measurement method and device thereof
KR101702752B1 (en) * 2015-05-29 2017-02-03 세메스 주식회사 Method of inspecting electronic components
JP6880396B2 (en) * 2017-01-23 2021-06-02 株式会社東京精密 Shape measuring device and shape measuring method

Also Published As

Publication number Publication date
JP2000146542A (en) 2000-05-26

Similar Documents

Publication Publication Date Title
JP3946499B2 (en) Method for detecting posture of object to be observed and apparatus using the same
JP4710078B2 (en) Surface shape measuring method and apparatus using the same
CN107860331B (en) Shape measuring device, shape measuring method, and structure manufacturing method
JP5346033B2 (en) Method for optically measuring the three-dimensional shape of an object
KR940003917B1 (en) Apparatus for measuring three-dimensional curved surface shapes
JP4583619B2 (en) Method for detecting fringe image analysis error and method for correcting fringe image analysis error
US6674531B2 (en) Method and apparatus for testing objects
Osten Application of optical shape measurement for the nondestructive evaluation of complex objects
JP3835505B2 (en) Non-contact surface shape measuring device
US20100157312A1 (en) Method of reconstructing a surface topology of an object
JP2020008360A (en) Dimension measurement method using projection image obtained by X-ray CT apparatus
JP2004340680A (en) Method for measuring surface profile and/or film thickness, and its apparatus
JP2000205821A (en) Instrument and method for three-dimensional shape measurement
JP2001343208A (en) Method and apparatus for fringe analysis using fourier transform
JPH11351840A (en) Noncontact type three-dimensional measuring method
JP3370418B2 (en) 3D shape measurement system
JP3509005B2 (en) Shape measurement method
JP2010060420A (en) Surface shape and/or film thickness measuring method and its system
JP4607311B2 (en) Wavefront shape measurement method and measurement wavefront shape correction method for large observation object by aperture synthesis
JP4183576B2 (en) Scanning surface shape measuring apparatus and surface shape measuring method
JP4956960B2 (en) 3D shape measuring apparatus and 3D shape measuring method
JPH05318280A (en) Grinding attitude producing device of grinding robot
JP2005172610A (en) Three-dimensional measurement apparatus
JP3098213B2 (en) Circular contour measuring method and system
JP2001041711A (en) Method and apparatus for correcting table deflection of image measuring machine

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20040713

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20060119

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060203

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060404

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20060705

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20060718

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090804

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100804

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110804

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120804

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130804

Year of fee payment: 7

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees