JP2007205807A - Detection sensor and shape measuring method - Google Patents


Info

Publication number
JP2007205807A
JP2007205807A (application number JP2006023696A)
Authority
JP
Japan
Prior art keywords
measurement
light
pixel
imaging surface
measurement object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2006023696A
Other languages
Japanese (ja)
Inventor
Akira Takada
Current Assignee
Panasonic Industrial Devices SUNX Co Ltd
Original Assignee
Sunx Ltd
Priority date
Filing date
Publication date
Application filed by Sunx Ltd
Priority to JP2006023696A
Publication of JP2007205807A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a detection sensor and a shape measuring method that eliminate the need to align a two-dimensional imaging device or the like for shape measurement.

SOLUTION: The area of the imaging surface 13A of a CMOS image sensor 13 is set larger than the irradiable area within which reflected light from a measurement object W can strike the imaging surface 13A during measurement of the measurement object W. Prior to measurement, a CPU 14 selects the measurement-use pixels to be used in the measurement, on the basis of the position at which light projected from a light projecting section 11 and reflected by the measurement object W is received on the imaging surface 13A. The measurement object W is then measured on the basis of the level of the light reception signal Sb of the measurement-use pixels.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to a detection sensor and a shape measuring method.

Some detection sensors, as shown for example in FIG. 2, irradiate the surface of a measurement object with light emitted through a slit plate and receive the reflected light with a two-dimensional image sensor. If the surface of the measurement object (the irradiated surface) is flat, the irradiation image that this reflected light forms on the two-dimensional image sensor has the same shape as the slit opening (for example, a straight line). If the surface of the measurement object has irregularities, the image takes a shape corresponding to those irregularities. The surface shape of the measurement object can therefore be detected (measured) from the irradiation image on the two-dimensional image sensor (see Patent Document 1).
JP 2002-286425 A

However, as described above, when the surface shape of a measurement object is detected (measured) from the irradiation image on a two-dimensional image sensor, complicated adjustment work, such as aligning the two-dimensional image sensor so that the reflected light from the measurement object falls on it, has generally been required before the detection (measurement) of the surface shape.

The present invention was completed against this background, and its object is to provide a detection sensor that makes it possible to omit the alignment work for the two-dimensional image sensor and the like when measuring a shape.

As a means for achieving the above object, the detection sensor of the invention of claim 1 comprises:
light projecting means for projecting light;
two-dimensional imaging means having an imaging surface capable of receiving, of the light projected from the light projecting means, the light reflected from a measurement object; and
measuring means for measuring the measurement object on the basis of the level of the light reception signal of the light received by each pixel of the imaging surface of the two-dimensional imaging means,
wherein the area of the imaging surface of the two-dimensional imaging means is made larger than the irradiable area within which the reflected light from the measurement object, anywhere in the measurement range, can strike the imaging surface during measurement of the measurement object,
and the sensor further comprises:
pixel selection means for selecting, before measurement by the measuring means, the measurement-use pixels to be used in that measurement, on the basis of the position at which the reference light projected from the light projecting means and reflected from the measurement object is received on the imaging surface; and
storage means for storing information on the measurement-use pixels selected by the pixel selection means,
the sensor being characterized in that the measuring means measures the measurement object on the basis of the level of the light reception signal of the measurement-use pixels stored in the storage means.

The invention of claim 2 is characterized in that, in the sensor of claim 1, the pixel selection means selects the measurement-use pixels on the basis of the positions at which the reflected light is received on the imaging surface when, before measurement by the measuring means, the measurement object is moved to the upper limit position and to the lower limit position of the measurement range.

The invention of claim 3 is characterized in that, in the sensor of claim 2, the light projecting means projects linear light onto the measurement object, linear reflected light from the measurement object is received on the imaging surface of the two-dimensional imaging means, and the pixels in the region between the light receiving position on the imaging surface when the measurement object is at the upper limit position and the light receiving position on the imaging surface when the measurement object is at the lower limit position are selected as the measurement-use pixels.
Note that the "region between the light receiving position on the imaging surface when the measurement object is at the upper limit position and the light receiving position on the imaging surface when the measurement object is at the lower limit position" includes the two light receiving positions themselves, that is, the linear portions.

The shape measuring method of the invention of claim 4 is characterized in that, prior to measurement of a measurement object, of the light projected from light projecting means, the light reflected from the measurement object anywhere in the measurement range is received on the imaging surface of two-dimensional imaging means whose imaging surface has an area larger than the range over which that reflected light can be received; measurement-use pixels to be used in the measurement are selected on the basis of the light receiving positions on the imaging surface; information on the measurement-use pixels is stored in storage means; and the measurement object is measured on the basis of the level of the light reception signal of the measurement-use pixels stored in the storage means.

<Inventions of claims 1 and 4>
If the reflected light from the measurement object had to be received by pixels in a predetermined range, complicated work such as aligning the two-dimensional imaging means would be required.
With this configuration, by contrast, the measurement-use pixels used in the measurement by the measuring means are selected on the basis of the actual light receiving positions. The alignment work for the two-dimensional imaging means and the like can therefore be omitted when measuring a shape.

<Invention of claim 2>
With this configuration, the measurement-use pixels are selected by actually irradiating the imaging surface with the reflected light from the measurement object before measurement by the measuring means, so the pixels used in the measurement can be selected reliably.

<Invention of claim 3>
With this configuration, the pixels in the region between the two light receiving positions of the linear reflected light received on the imaging surface are selected as the measurement-use pixels, so the measurement-use pixels can be selected reliably with a simple configuration.

An embodiment of the present invention will be described with reference to FIGS. 1 to 7.
1. Overall configuration of the shape measuring apparatus
FIG. 1 shows the schematic configuration of a shape measuring apparatus 1 that can use the shape measuring method according to the present invention.
The shape measuring apparatus 1 comprises a detection sensor 10 that detects the surface shape of a measurement object W, and a stage drive unit 30 that can move (raise and lower) a stage 31 on which the measurement object W is placed.

The detection sensor 10 comprises a light projecting section 11 (light projecting means) that projects laser light; a slit plate 12 that shapes the laser light from the light projecting section 11 into slit-shaped laser light L1; a CMOS image sensor 13 (corresponding to the "two-dimensional imaging means" of the present invention) that receives, of the laser light L1 that has passed through the slit plate 12, the reflected light L2 reflected by the surface of the measurement object W placed on the stage 31; a control circuit 15 built around a CPU 14 (Central Processing Unit); and a storage unit 17 consisting of RAM (Random Access Memory) or the like in which information from the control circuit 15 is stored.

The light projecting section 11 comprises a laser light source 11A that emits laser light, and a light source drive circuit 11B that causes the laser light source 11A to project light in response to a light projection timing signal from the CPU 14 of the control circuit 15.

The slit plate 12 has a slit (cut) extending in the depth direction of FIG. 1 (the Y-axis direction in FIG. 2), and the thin sheet of laser light L1 that has passed through this slit is directed at the surface of the measurement object W. Of the light irradiated onto the measurement object W from the light projecting section 11 through the slit plate 12, the reflected light L2 reflected by the surface of the measurement object W is received on the imaging surface 13A of the CMOS image sensor 13.

That is, if the surface of the measurement object W is flat, the irradiation image 16 that the reflected light L2 forms on the imaging surface 13A of the two-dimensional CMOS image sensor 13 is a straight line corresponding to the slit shape of the slit plate 12. If, on the other hand, a projection is formed on the surface of the measurement object W as shown in FIG. 2, the irradiation image 16 takes a convex shape corresponding to that projection. The surface shape of the measurement object W can therefore be detected by capturing the irradiation image 16 on the imaging surface 13A of the two-dimensional CMOS image sensor 13.
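By way of illustration, the light-section principle described above can be sketched in code: the row at which the slit image falls in each column of the imaging surface encodes the surface height at that column. This sketch is not part of the patent, and the geometry constants (pixel pitch, triangulation factor, reference row) are hypothetical examples:

```python
# Illustrative light-section (triangulation) sketch: the row at which the
# reflected slit line lands on the sensor encodes the surface height at
# each column. All geometry constants are hypothetical examples.

def height_profile(peak_rows, row_pitch_mm=0.01, scale=2.5, ref_row=100):
    """Convert the image row of the slit line detected in each column
    into a relative surface height in millimeters.

    peak_rows: one detected row index per sensor column.
    scale: triangulation factor fixed by the projection and viewing angles.
    """
    return [(ref_row - r) * row_pitch_mm * scale for r in peak_rows]

# A flat surface gives a straight line (constant row); a projection on the
# surface shifts the line, producing a convex irradiation image:
print(height_profile([100, 100, 100]))  # flat: [0.0, 0.0, 0.0]
print(height_profile([100, 96, 100]))   # middle column raised by 0.1 mm
```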

The CMOS image sensor 13 comprises a large number of pixels 20 arranged in a matrix to form the imaging surface 13A, a vertical scanning shift register 25, and a horizontal scanning shift register 26. The area of the imaging surface 13A of the CMOS image sensor 13 is such that the reflected light from the measurement object W stays within the imaging surface 13A even when the stage drive unit 30 moves the measurement object W to the upper limit position or the lower limit position. In the following description, as shown in simplified form in FIG. 4, a sensor with a total of nine pixels 20, three in the row direction and three in the column direction, is taken as an example.

As shown in FIG. 4, each pixel 20 is provided with a photodiode 21 and an amplifier circuit 22 that amplifies the output from the photodiode 21. MOS switches 23 (23A to 23I) and 24 (24A to 24C), which switch the output from the amplifier circuit 22 on and off, are connected to the amplifier circuit 22.

For example, when a pulse signal is supplied at a predetermined period from the CPU 14 of the control circuit 15 to the vertical scanning shift register 25 while the MOS switch 24A is on, the vertical MOS switches are turned on (switched) in order, from the vertical MOS switch 23A of the pixel 20 at one end to the vertical MOS switch 23C of the pixel 20 at the other end. When the vertical MOS switches 23 for one column of pixels 20 have all been turned on, ending with the vertical MOS switch 23C (that is, when pulse signals equal in number to the pixels in one column have been supplied to the vertical scanning shift register 25), a pulse signal from the control circuit 15 is supplied to the horizontal scanning shift register 26, and this time the horizontal MOS switch 24B for the pixels 20 of the next column is turned on (switched). This operation is repeated until all the pixels 20 have output their signals. In this way, the light reception signals Sa from all the pixels of the CMOS image sensor 13 are output in order to the control circuit 15 at predetermined timings.

Since the light reception signal Sa of each pixel 20, at a level corresponding to its received light amount, is output in order each time the CPU 14 outputs a pulse signal to the vertical scanning shift register 25, the CPU 14 can determine which pixel 20 each input light reception signal Sa came from, based on the timing of the pulse signals output to the vertical scanning shift register 25 (and the horizontal scanning shift register 26) and the order of the per-pixel light reception signals Sa input to the control circuit 15.
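The correspondence between readout order and pixel position that the CPU 14 establishes can be sketched as follows, using the simplified 3x3 example of FIG. 4. The helper function is hypothetical; the patent describes the scan only in prose:

```python
# Sketch of attributing a sequentially read signal to a pixel. Per the
# scan described above, the vertical shift register steps through all
# rows of one column before the horizontal register advances to the
# next column. Dimensions match the simplified 3x3 example of FIG. 4.

ROWS, COLS = 3, 3

def pixel_from_readout_index(i):
    """Return (row, column) of the i-th light reception signal Sa
    in the readout sequence."""
    col, row = divmod(i, ROWS)  # one full column is read out first
    return row, col

# The first three signals come from column 0, rows 0..2, then column 1:
order = [pixel_from_readout_index(i) for i in range(ROWS * COLS)]
print(order[:4])  # [(0, 0), (1, 0), (2, 0), (0, 1)]
```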

When the light projecting and receiving operation described above is performed before the shape measuring operation, the CPU 14 selects the pixels from which a light reception signal Sa (or a light reception signal whose received light level is at or above a predetermined threshold) was output as the measurement-use pixels to be used in the shape measurement, and stores information (position information) identifying these pixels (measurement-use pixels) in the storage unit 17. The CPU 14 therefore corresponds to the "pixel selection means" of the present invention.
When the light projecting and receiving operation is performed during the shape measuring operation, and the obtained light reception signal Sb comes from a pixel (measurement-use pixel) selected before the shape measuring operation, the CPU 14 stores the received light level (or whether a received light amount at or above the predetermined threshold was detected) in the storage unit 17, and measures the surface shape of the measurement object W on the basis of the received light levels stored in the storage unit 17. The CPU 14 therefore corresponds to the "measuring means" of the present invention.

As shown in FIG. 1, the stage drive unit 30 is connected, outside the detection sensor 10, to the control circuit 15 of the detection sensor 10 via communication means 35 such as a cable, and is configured to move the measurement object W to a desired height in response to the output from the control circuit 15.

Here, the stage drive unit 30 can move the measurement object W placed on the stage 31 between a predetermined upper limit position and a predetermined lower limit position. When the measurement-use pixels are specified (selected) before the shape measurement, the region enclosed by the upper edge (dotted line A in FIG. 3) of the position at which light is received on the imaging surface 13A when the measurement object W is at the upper limit position (the position of the irradiation image 16A in FIG. 3), the lower edge (dotted line B) of the position at which light is received on the imaging surface 13A when the measurement object W is at the lower limit position (the position of the irradiation image 16B), and the widthwise edges of the irradiation image 16 (dotted lines C and D), that is, the pixels within the range bounded by these edges, becomes the portion of the imaging surface 13A used in the shape measurement (the measurement-use pixels, or measurement-use range).

The "measurement range" of the measurement object W in the present invention is determined by the range over which the measurement object W can move between the upper limit position and the lower limit position (the movable range of the stage drive unit 30), and this "measurement range" is predetermined by the specifications of the detection sensor. The position of the measurement object W between the upper limit position and the lower limit position can be set by the user via an operation unit (not shown).

2. Processing performed by the CPU of the control circuit
(1) Processing performed prior to shape measurement
Prior to the shape measurement of the measurement object W, the following processing is performed.
<Measurement-use range specification processing>
As shown in FIG. 5, the CPU 14 supplies a drive signal to the stage drive unit 30 to drive it, moving the stage 31 to the upper limit position (S11).
It then executes the pixel selection processing for selecting the pixels to be used in the measurement (S12).

(Pixel selection processing)
In the pixel selection processing, as shown in FIG. 6, the CPU 14 outputs a light projection timing signal that causes the light source drive circuit 11B to make the light projecting section 11 project light (S21). Laser light (the reference light) is thereby emitted from the light projecting section 11. When the laser light L2 reflected by the measurement object W is received by the CMOS image sensor 13, a light reception signal Sa corresponding to the received light amount is output to the CPU 14.

When the CPU 14 detects that it has received a light reception signal Sa ("Y" in S22), it determines which pixel output that light reception signal Sa, from the timing of the light projection timing signal (or the timing of the pulse signals output to the shift registers) and the timing (or order) of the light reception signal Sa (S23).
Information identifying the determined pixel (position information, such as coordinate data or light reception timing) is stored in the storage unit 17 (S24).

Next, in S13 of FIG. 5, the CPU 14 supplies a drive signal to the stage drive unit 30 to drive it, moving the stage 31 to the lower limit position.
It then executes the pixel selection processing for selecting the pixels to be used in the measurement (S14).

(Pixel selection processing)
In the pixel selection processing, as shown in FIG. 6, the CPU 14 outputs a light projection timing signal that causes the light source drive circuit 11B to make the light projecting section 11 project light (S21). Laser light (the reference light) is thereby emitted from the light projecting section 11. When the laser light L2 reflected by the measurement object W is received by the CMOS image sensor 13, a light reception signal Sa corresponding to the received light amount is output to the CPU 14.

When the CPU 14 detects that it has received a light reception signal Sa ("Y" in S22), it determines which pixel output that light reception signal Sa, from the timing of the light projection timing signal (or the timing of the pulse signals output to the shift registers) and the timing (or order) of the light reception signal Sa (S23).
Information identifying the determined pixel (position information, such as coordinate data or light reception timing) is stored in the storage unit 17 (S24).

Next, in S15 of FIG. 5, the CPU 14 sets the region (including its boundary) enclosed by the upper edge of the light-receiving portion of the imaging surface 13A when the measurement object W is at the upper limit position (dotted line A in FIG. 3), the lower edge of the light-receiving portion of the imaging surface 13A when the measurement object W is at the lower limit position (dotted line B), and the side edges of the light-receiving portion of the imaging surface 13A (dotted lines C and D, corresponding to the width of the measurement object W) as the measurement-use range (measurement-use pixels) used in the surface shape measurement (S15), and stores the information on this measurement-use range (measurement-use pixels) in the storage unit 17 (S16).
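The measurement-use range set in S15 is in effect the bounding rectangle of the two irradiation images recorded at the upper and lower limit positions. A minimal sketch under that reading, with hypothetical coordinates:

```python
# Sketch of S15: the measurement-use range is the rectangle bounded by
# the upper edge of the irradiation image at the upper limit position
# (dotted line A), the lower edge at the lower limit position (B), and
# the side edges (C, D). Pixel positions are (row, col); all example
# values are hypothetical.

def measurement_use_range(upper_image, lower_image):
    """Given the sets of lit pixels for the two calibration images,
    return ((row_min, row_max), (col_min, col_max)) of the use range."""
    pixels = upper_image | lower_image
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), max(rows)), (min(cols), max(cols))

upper = {(2, 1), (2, 2), (2, 3)}  # line seen at the upper limit position
lower = {(7, 1), (7, 2), (7, 3)}  # line seen at the lower limit position
print(measurement_use_range(upper, lower))  # ((2, 7), (1, 3))
```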

(2) Processing performed during shape measurement
During the measurement of the surface shape of the measurement object W, the following processing is performed.
As shown in FIG. 7, the CPU 14 outputs a light projection timing signal that causes the light source drive circuit 11B to make the light projecting section 11 project light (S31). Laser light is thereby emitted from the light projecting section 11, and when the laser light reflected by the measurement object W is received by the CMOS image sensor 13, a light reception signal Sb corresponding to the received light amount is output to the CPU 14.

Next, when the CPU 14 detects that it has received a light reception signal Sb from the CMOS image sensor 13 ("Y" in S32), it reads the position information of the measurement-use pixels (the measurement-use range information) stored in the storage unit 17 (S33), and judges whether the obtained light reception signal Sb comes from a measurement-use pixel (the measurement-use range) (S34).

If the light reception signal Sb comes from a measurement-use pixel (the measurement-use range) ("Y" in S34), the CPU 14 stores that light reception signal Sb in the storage unit 17.
If, on the other hand, the light reception signal Sb does not come from a measurement-use pixel (the measurement-use range) ("N" in S34), the obtained light reception signal Sb is discarded (deleted).
The CPU 14 then measures the surface shape of the measurement object W from the light reception signals Sb stored in the storage unit 17.

3. Effects of this embodiment
(1) If the reflected light from the measurement object W had to be received by pixels 20 in a predetermined range, complicated work such as aligning the CMOS image sensor (two-dimensional imaging means) would be required. If, instead, alignment were made unnecessary simply by enlarging the CMOS image sensor, the light reception signals from all the pixels of the CMOS image sensor would be stored in the storage unit 17 by the control circuit 15 during the shape measurement. Storing that large amount of information would require a larger storage unit 17, which is undesirable from the standpoint of manufacturing cost and the like. Furthermore, if light reception signals were accepted from all the pixels (the entire imaging surface 13A), including pixels not needed for the measurement, noise such as ambient light could prevent an accurate shape measurement.

With this embodiment, by contrast, the area of the imaging surface 13A of the CMOS image sensor 13 (two-dimensional imaging means) is made larger than the irradiable area (the region enclosed by dotted lines A, B, C, and D in FIG. 3) within which the reflected light from the measurement object W, anywhere in the measurement range, can strike the imaging surface 13A during measurement; and before measurement by the CPU 14 (measuring means), the measurement-use pixels 20 to be used in that measurement are selected, on the basis of the position at which the reference light projected from the light projecting means (light projecting section 11) and reflected from the measurement object W is received on the imaging surface 13A, and stored in the storage unit 17 (storage means). During measurement by the CPU 14 (measuring means), the measurement object W is then measured on the basis of the level of the light reception signals of the measurement-use pixels 20 selected from the light receiving positions confirmed by actually performing the light projecting and receiving operation as described above. Work such as aligning the CMOS image sensor 13 (two-dimensional imaging means) can thereby be omitted, improving workability.

(2) Based on the light receiving position on the imaging surface 13A when the measurement object W is at the upper limit position, the light receiving position on the imaging surface 13A when the measurement object W is at the lower limit position, and the range of the irradiation image 16 in the width direction, the region enclosed by these (the pixels lying within the range between these upper, lower, left, and right positions) is taken as the measurement use pixels (measurement use range) of the imaging surface 13A used in shape measurement. The measurement use pixels can therefore be selected reliably by the simple procedure of moving the measurement object W to its movable upper limit and lower limit positions.
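The selection in (2) amounts to computing a bounding region on the imaging surface from the two calibration passes. The following is a minimal sketch with hypothetical names (`select_measurement_pixels`, the per-column row lists); the patent specifies the principle, not code:

```python
def select_measurement_pixels(upper_rows, lower_rows, col_range):
    """Given the rows illuminated when the object W is at its upper limit
    and at its lower limit, and the column span of the line image, return
    the rectangular set of measurement-use pixels.

    upper_rows / lower_rows: per-column row index of the received line.
    col_range: (first_col, last_col) of the irradiation image width.
    """
    top = min(min(upper_rows), min(lower_rows))
    bottom = max(max(upper_rows), max(lower_rows))
    left, right = col_range
    # Every pixel between the two calibration traces (inclusive) is kept.
    return {(r, c)
            for r in range(top, bottom + 1)
            for c in range(left, right + 1)}

# Example: the line sits on row 10 at the upper limit, row 14 at the lower
# limit, spanning columns 3..5, so a 5x3 block of pixels is stored.
pixels = select_measurement_pixels([10, 10, 10], [14, 14, 14], (3, 5))
```

Only this stored set would then be read out during shape measurement, which is what keeps storage demands and ambient-light noise down.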

<Other embodiments>
The present invention is not limited to the embodiments described above with reference to the description and drawings. For example, the following embodiments are also included in the technical scope of the present invention, and the invention may furthermore be carried out with various modifications without departing from its gist.

(1) In the above embodiment, the two-dimensional imaging means of the present invention is constituted by the CMOS image sensor 13, but the two-dimensional imaging means may instead be constituted by a CCD (Charge Coupled Device). In a CCD, the signal of each pixel is transferred sequentially to the neighboring pixel and is amplified only at the end of each column (bucket-brigade transfer), so noise is unlikely to arise during signal transmission, and because amplification occurs once per column there is little variation attributable to the amplifier circuit.
On the other hand, using a CMOS image sensor as in the present embodiment reduces power consumption compared with a CCD. Furthermore, since the CMOS process is used, other circuits (such as an A-D converter) can easily be integrated on the same chip.

(2) In the above embodiment, the pixels in the region of the imaging surface 13A between the light receiving position when the measurement object W is at the upper limit position and the light receiving position when the measurement object W is at the lower limit position are taken as the range of measurement use pixels (measurement use range). However, the measurement use pixels (measurement use range) need not be limited to exactly this range; for example, pixels lying outside the boundary of the light receiving positions may also be used as measurement use pixels, provided they are in the vicinity of those positions.
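Admitting pixels slightly outside the calibrated boundary, as this variant describes, is a simple margin expansion of the stored range. A sketch under the assumption of a rectangular range and a fixed pixel margin (names are illustrative, not from the patent):

```python
def expand_range(top, bottom, left, right, margin, n_rows, n_cols):
    """Grow the measurement-use rectangle by `margin` pixels on every
    side, clamped to the imaging surface (n_rows x n_cols)."""
    return (max(0, top - margin),
            min(n_rows - 1, bottom + margin),
            max(0, left - margin),
            min(n_cols - 1, right + margin))

# A 2-pixel margin around rows 10..14, cols 3..5 on a 480x640 surface:
assert expand_range(10, 14, 3, 5, 2, 480, 640) == (8, 16, 1, 7)
```

The clamp keeps the expanded range on the sensor, so a margin near the edge of the imaging surface simply stops at the edge.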

A diagram schematically showing the configuration of the shape measuring apparatus 1
A diagram showing how the laser light reflected by the measurement object W is irradiated onto the CMOS image sensor
A diagram showing the irradiation image on the imaging surface of the CMOS image sensor
A diagram explaining the configuration of the CMOS image sensor
A flowchart of the CPU processing performed before shape measurement
A flowchart of the pixel selection processing
A flowchart of the CPU processing during shape measurement

Explanation of symbols

1 ... Shape measuring apparatus
10 ... Detection sensor
11 ... Light projecting unit (light projecting means)
13 ... CMOS image sensor (two-dimensional imaging means)
13A ... Imaging surface
14 ... CPU (measuring means, pixel selecting means)
15 ... Control circuit
17 ... Storage unit (storage means)
20 ... Pixel
30 ... Stage drive unit
L1 ... Laser light
L2 ... Reflected light
W ... Measurement object

Claims (4)

A detection sensor comprising:
a light projecting means for projecting light;
a two-dimensional imaging means having an imaging surface capable of receiving, of the light projected from the light projecting means, reflected light from a measurement object; and
a measuring means for measuring the measurement object based on the level of the light reception signal of the light received by each pixel of the imaging surface of the two-dimensional imaging means,
wherein the area of the imaging surface of the two-dimensional imaging means is made larger than the irradiable area onto which reflected light from the measurement object within the measurement range can fall on the imaging surface when the measurement object is measured,
the detection sensor further comprising:
a pixel selection means for selecting the measurement use pixels to be used in measurement by the measuring means, based on the light receiving position at which, of the reference light projected from the light projecting means prior to measurement by the measuring means, the reflected light from the measurement object is received on the imaging surface; and
a storage means for storing information about the measurement use pixels selected by the pixel selection means,
wherein the measuring means measures the measurement object based on the level of the light reception signal of the measurement use pixels stored in the storage means.
The detection sensor according to claim 1, wherein, prior to measurement by the measuring means, the pixel selection means selects the measurement use pixels based on the light receiving positions of the reflected light received on the imaging surface when the measurement object is moved to the upper limit position and to the lower limit position of the measurement range.
The detection sensor according to claim 2, wherein the light projecting means projects linear light onto the measurement object, linear reflected light from the measurement object is received on the imaging surface of the two-dimensional imaging means, and the pixels in the region between the light receiving position on the imaging surface when the measurement object is at the upper limit position and the light receiving position on the imaging surface when the measurement object is at the lower limit position are selected as the measurement use pixels.
A shape measuring method, wherein, prior to measurement of a measurement object: of the light projected from a light projecting means, reflected light from the measurement object within the measurement range is received on the imaging surface of a two-dimensional imaging means whose imaging surface has an area larger than the range in which the reflected light is received; measurement use pixels to be used in measurement are selected based on the light receiving position on the imaging surface; information on the measurement use pixels is stored in a storage means; and the measurement object is measured based on the level of the light reception signal of the measurement use pixels stored in the storage means.
JP2006023696A 2006-01-31 2006-01-31 Detection sensor and shape measuring method Pending JP2007205807A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006023696A JP2007205807A (en) 2006-01-31 2006-01-31 Detection sensor and shape measuring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006023696A JP2007205807A (en) 2006-01-31 2006-01-31 Detection sensor and shape measuring method

Publications (1)

Publication Number Publication Date
JP2007205807A true JP2007205807A (en) 2007-08-16

Family

ID=38485410

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006023696A Pending JP2007205807A (en) 2006-01-31 2006-01-31 Detection sensor and shape measuring method

Country Status (1)

Country Link
JP (1) JP2007205807A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4821934B1 (en) * 2011-04-14 2011-11-24 株式会社安川電機 Three-dimensional shape measuring apparatus and robot system
JP2015059825A (en) * 2013-09-18 2015-03-30 株式会社ミツトヨ Three-dimensional measuring apparatus



Legal Events

Date Code Title Description
RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20070709

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20070710