WO2013175816A1 - Distance measurement apparatus - Google Patents

Distance measurement apparatus

Info

Publication number
WO2013175816A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
color
optical system
interpolation
Application number
PCT/JP2013/054021
Other languages
French (fr)
Japanese (ja)
Inventor
Hideaki Takahashi (高橋 秀彰)
Original Assignee
Olympus Corporation (オリンパス株式会社)
Application filed by Olympus Corporation
Publication of WO2013175816A1

Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00: Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28: Systems for automatic generation of focusing signals
    • G02B7/34: Systems for automatic generation of focusing signals using different areas in a pupil plane
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843: Demosaicing, e.g. interpolating colour pixel values
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00: Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32: Means for focusing
    • G03B13/34: Power focusing
    • G03B13/36: Autofocus systems

Definitions

  • the present invention relates to a distance measuring device for acquiring distance information of an object.
  • Japanese Patent Application Laid-Open No. 2001-174696 describes a color imaging apparatus that performs pupil division by color by interposing, in the photographing optical system, a pupil color division filter having different spectral characteristics for each partial pupil.
  • a subject image formed on a color imaging device through the pupil color division filter is captured by the color imaging device and output as an image signal.
  • This image signal is color separated, and distance information is generated by detecting a relative shift amount between pupil color divided color signals.
  • This distance information may be the distance to the subject itself, or, when the aim is autofocus (AF), it may be focusing information consisting of the defocus direction and the defocus amount.
  • a distance measuring device capable of acquiring such distance information can be used in various apparatuses such as microscopes and digital cameras.
  • when used with a microscope, it enables height measurement of a subject (also referred to as a sample).
  • the colors to be pupil-color-divided are red (R) and blue (B) (see FIGS. 3 to 7 according to the present invention, etc.)
  • the height of the subject 6 as shown in FIG. 8 according to the present invention is measured by a microscope using a pupil color division filter as shown in FIG. 3 according to the present invention.
  • the upper surface of the subject 6 is as shown in FIG. 9 according to the present invention.
  • FIG. 21 is a view showing an example of the RB double image formed on the Bayer array sensor.
  • FIG. 22 is a diagram showing the pixel value of the R pixel and the pixel value of the B pixel acquired from the RB double image of FIG. 21.
  • a black circle connected by a solid line indicates a pixel value of the R pixel
  • a black square connected by a dotted line indicates a pixel value of the B pixel.
  • the data shown in FIG. 22 is subjected to, for example, ZNCC (Zero-mean Normalized Cross-Correlation) (see Equation 1 according to the embodiment of the present invention) to obtain correlation values, and the pixel shift amount is calculated from these correlation values.
  • the amount of displacement calculated from this ZNCC is in units of pixel coordinates
  • to obtain sub-pixel precision, sub-pixel estimation methods such as equiangular straight line fitting and parabola fitting are used for interpolation
  • the shift amount calculated in this way can be proportionally converted based on the configuration of the imaging optical system and the pupil color division filter, so the distance in the height direction from the in-focus position can be obtained.
  • a known document describes a correlation calculation method, a correlation calculation device, a focus detection device, and an imaging device in which the DC component or AC component is extracted and the correlation is accurately detected by aligning the signal levels of the extracted components.
  • with a proportional conversion equation obtained from the configuration of the imaging optical system and the pupil color division filter, for example, the pixel shift amount needs to be detected with an accuracy of 0.33 pixels or less.
  • the waveform shape of each color image may differ, and in that case the detection accuracy of the shift amount decreases; therefore, there is a need for a technique that further improves the detection accuracy of the shift amount, and hence the distance measurement accuracy.
  • the present invention has been made in view of the above circumstances, and aims to provide a distance measuring device capable of higher distance measurement accuracy when acquiring information on a subject distance based on a plurality of color images obtained by imaging pupil color-divided light.
  • a distance measurement device includes: an imaging optical system for forming an image of a subject; a pupil color division optical system, disposed on the optical path of the imaging optical system, that divides the pupil of the imaging optical system by color by giving plural partial pupils of the imaging optical system different spectral characteristics; a color imaging element, in which a plurality of pixels are arrayed, for photoelectrically converting the subject image formed by the imaging optical system via the pupil color division optical system and outputting an image; a pixel interpolation unit that generates interpolation pixels for the pixels arranged in the plurality of color images obtained by color separation of the image output from the imaging element, those color images relating to different partial pupils; and a distance information generation unit that generates a plurality of interpolated color images by combining the color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation unit, detects the relative displacement of the subject image in the plurality of interpolated color images, and generates information related to the subject distance based on the detected displacement.
  • FIG. 2 is a diagram for explaining the pixel array of the imaging device in the first embodiment.
  • FIG. 3 is a view for explaining an example of the arrangement of the pupil color division filter according to the first embodiment.
  • FIG. 4 is a plan view showing the state of subject light flux focusing when an object farther than the in-focus position is imaged in the first embodiment.
  • FIG. 5 is a view showing, for each color component, the shape of the blur formed by light from one point on a subject farther than the in-focus position in the first embodiment.
  • FIG. 6 is a plan view showing the state of subject light flux focusing when a subject nearer than the in-focus position is imaged in the first embodiment.
  • FIG. 7 is a view showing, for each color component, the shape of the blur formed by light from one point on an object nearer than the in-focus position in the first embodiment.
  • FIG. 8 is a perspective view showing a subject in the first embodiment.
  • FIG. 9 is a view showing the upper surface of a black box-like object in the subject of the first embodiment.
  • FIG. 10 is a view showing the state in which the R component optical subject image and the B component optical subject image of the white printed matter formed on the image pickup element are shifted in the first embodiment.
  • FIG. 11 is a diagram showing the pixel values of the R and B images of the white printed matter “1” for the pixel arrangement on line A, together with the interpolated pixel values and composite values, in the first embodiment.
  • FIG. 13 is a diagram showing the pixel values of the R and B images of the white printed matter “1” for the pixel arrangement on line A, together with the shifted pixel values and combined values, in the second embodiment.
  • FIG. 15 is a diagram for explaining the optical conditions of the imaging optical system in the third embodiment.
  • FIG. 17 is a view showing the user designating the measurement position of the subject on the screen of the monitor in the fourth embodiment.
  • FIG. 18 is a block diagram showing the structure of the distance measuring device in Embodiment 5 of the present invention.
  • FIG. 19 shows the state of the subject images of the R component and the B component in the original image formed on the imaging surface of the imaging element, together with a partially enlarged view of line A, in the fifth embodiment.
  • FIG. 23 is a diagram showing the pixel values of the R pixels and the pixel values of the B pixels conventionally acquired from the RB double image in FIG. 21.
  • FIGS. 1 to 11 show Embodiment 1 of the present invention
  • FIG. 1 is a block diagram showing the configuration of the distance measuring device 1.
  • the distance measurement device 1 generates information on the distance to the subject based on an image obtained by capturing an image of the subject.
  • examples of applications of the distance measuring device 1 include industrial microscopes, digital cameras, digital video cameras, camera-equipped mobile phones, camera-equipped PDAs, camera-equipped personal computers, surveillance cameras, endoscopes, and the like, but applications are of course not limited to these.
  • the subject 6 is an object whose distance is to be measured, and is also referred to as a sample, for example, in the field of a microscope.
  • the distance measuring device 1 measures the distance to the subject 6 and includes, for example, a lens barrel 2, a controller 3, a personal computer (PC) 4, and a monitor 5.
  • the lens barrel 2 includes an imaging optical system 10, an imaging device 11, a pupil color division filter 14, a ring illumination 16, an imaging device control unit 21, a zoom control unit 22, a focus control unit 23, and a focus driving mechanism 24.
  • the imaging optical system 10 is for forming an optical subject image on the imaging element 11 and includes, for example, a zoom lens 12, an aperture 13, and an objective lens 15.
  • the zoom lens 12 is for changing the focal length of the imaging optical system 10 to perform zooming (change of imaging magnification).
  • the diaphragm 13 changes the passing range of the light beam passing through the imaging optical system 10 to adjust the brightness of the subject image formed on the imaging element 11.
  • the pupil diameter of the imaging optical system 10 is also changed by changing the aperture diameter of the diaphragm 13.
  • the objective lens 15 is an optical element mainly responsible for the power in the imaging optical system 10.
  • the imaging device 11 is configured by arraying a plurality of pixels (imaging pixels), photoelectrically converts the subject image formed by the imaging optical system 10 through the pupil color division filter 14, and outputs the result as an image.
  • the color image pickup device 11 is an element that receives the subject image as light in each of a plurality of wavelength bands (for example, but not limited to, RGB), photoelectrically converts it, and outputs an electric signal.
  • the color imaging device may be a single-plate imaging device provided with an on-chip color filter, a three-plate system using dichroic prisms to separate the light into RGB color components, a device that acquires RGB imaging information according to the position in the depth direction of the semiconductor at each pixel position, or any other device capable of acquiring imaging information of a plurality of wavelength bands in pixel units.
  • the imaging device 11 is a single-plate imaging device provided with an element color filter of primary color Bayer arrangement as shown in FIG. 2 on a chip, for example.
  • FIG. 2 is a diagram for explaining a pixel array of the image pickup device 11.
  • the filter configuration has a 2 × 2 pixel basic array in which a B filter (forming a B pixel) and an R filter (forming an R pixel) are arranged along one diagonal, and G filters (forming G pixels) are arranged along the other diagonal (a short sketch of this sampling pattern follows below).
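  • purely as an illustration (not from the patent), the Bayer sampling just described can be modeled as a label mask; the phase of the array (which corner holds R) is an assumption made here for concreteness:

```python
import numpy as np

def bayer_mask(h, w):
    """Return an h x w array of color labels for a primary-color Bayer
    array with R and B on one diagonal of each 2x2 cell and G on the
    other.  Which corner holds R is an assumption for illustration."""
    mask = np.empty((h, w), dtype="<U2")
    mask[0::2, 0::2] = "R"   # even row, even column
    mask[0::2, 1::2] = "Gr"  # G pixels sharing a row with R
    mask[1::2, 0::2] = "Gb"  # G pixels sharing a row with B
    mask[1::2, 1::2] = "B"
    return mask

print(bayer_mask(4, 8))
```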
  • while the imaging device 11 can be widely realized as an imaging device such as a CMOS sensor or a CCD sensor, it is not necessary to read out all pixels in order to measure the distance to the subject, so it is preferable to employ a CMOS sensor that can read out desired pixels.
  • the pupil color division filter 14 is disposed on the light path of the imaging optical system 10, and is a pupil color division optical system that color-divides the pupil of the imaging optical system 10 by giving plural partial pupils in the pupil different spectral characteristics; accordingly, the pupil color division filter 14 can also be called a band-limiting filter, because it band-limits the transmitted light for each partial pupil.
  • the pupil color division filter 14 is configured as shown in FIG.
  • FIG. 3 is a diagram for explaining an example of the configuration of the pupil color division filter 14.
  • the pupil of the imaging optical system 10 is divided into a first partial pupil and a second partial pupil.
  • the left half is an RG filter 14r that passes the G (green) and R (red) components and blocks the B (blue) component, and the right half is a GB filter 14b that passes the G and B components and blocks the R component; therefore, the pupil color division filter 14 passes all of the G component contained in the light passing through the aperture (and hence the pupil) of the diaphragm 13 of the imaging optical system 10, passes the R component only through the partial pupil in the left half of the aperture, and passes the B component only through the partial pupil in the other, right half of the aperture.
  • the RGB spectral transmission characteristics of the pupil color dividing filter 14 and the RGB spectral characteristics of the element color filter of the image sensor 11 are preferably identical or as close as possible.
  • the ring illumination 16 is an illumination device that illuminates the subject 6 with illumination light.
  • the ring illumination 16 has a plurality of light sources such as LEDs arranged in a ring around the optical path so as not to shield the optical path of the imaging optical system 10, so that illumination shadows hardly occur on the subject.
  • the ring illumination 16 is provided because an industrial microscope is assumed as described above, but another illumination device may be used, or natural light may be used without providing any illumination device.
  • the imaging element control unit 21 controls the imaging element 11 and performs drive control of the imaging element 11 and readout control from the imaging element 11. Further, when the imaging device 11 is an analog imaging device, the imaging device control unit 21 also performs A / D conversion of a signal read from the imaging device 11.
  • the zoom control unit 22 performs control to move the zoom lens 12 in the optical axis direction and to change the focal length of the imaging optical system 10.
  • the focus control unit 23 controls the focus drive mechanism 24 so that the optical image of the subject 6 formed by the imaging optical system 10 is positioned on the imaging surface of the imaging device 11 (that is, focused).
  • the focus drive mechanism 24 performs drive to adjust the focus position of the imaging optical system 10 based on the control signal from the focus control unit 23.
  • the focus drive mechanism 24 moves the lens barrel 2 itself in the direction of the optical axis, that is, moves the lens barrel 2 itself in a direction away from the subject 6 or in a direction approaching the subject 6.
  • alternatively, the focus drive mechanism 24 may be configured as a mechanism that moves a focus lens included in the objective lens 15 described above in the optical axis direction.
  • the controller 3 is connected to the lens barrel 2 described above, and controls the entire system of the distance measuring device 1.
  • the system controller 31 controls the whole of the distance measuring device 1 in an integrated manner, including each operation unit and the like in the controller 3 and each control unit and the like in the lens barrel 2.
  • the system controller 31 also performs processing such as color separation of an image output from the imaging device 11 into a plurality of color images.
  • the memory 32 is for temporarily buffering the image signal received from the lens barrel 2, and includes, for example, an SDRAM or the like.
  • the pixel interpolation operation unit 35 is a pixel interpolation unit that, for the color images relating to different partial pupils among the plurality of color images obtained by color separation of the image output from the imaging device 11, generates interpolation pixels at least along the shift detection direction, that is, the direction in which the relative shift amount is detected.
  • the pixel interpolation operation unit 35 of the present embodiment receives the R, G, and B color images obtained when the system controller 31 color-separates one image that has been read out from the image sensor 11 and A/D converted, and performs the interpolation calculation on the R color image and B color image subjected to shift detection, generating R interpolation pixels and B interpolation pixels.
  • the pixel shift detection unit 33 generates a plurality of interpolated color images by combining the color images relating to different partial pupils with the interpolation pixels generated by the pixel interpolation operation unit 35, detects the relative displacement amount of the subject image in the interpolated color images, and forms part of the distance information generation unit.
  • the pixel shift detection unit 33 of the present embodiment generates an R interpolated color image by combining the R color image color-separated by the system controller 31 with the R interpolation pixels generated by the pixel interpolation operation unit 35, and similarly generates a B interpolated color image by combining the B color image with the B interpolation pixels; the pixel shift detection unit 33 then detects the shift amount of the subject image between the generated R interpolated color image and B interpolated color image.
  • the distance calculation unit 34 generates information related to the subject distance (this may be the distance to the subject itself, or the direction and amount of focusing deviation, etc.).
  • the former is suitable, for example, for measuring the shape (height) of the subject, and the latter is suitable, for example, for achieving focus on the subject.
  • the PC 4 is connected to the controller 3 described above, and has a control application 41, which is software that serves as the user interface of the distance measuring device 1.
  • the monitor 5 displays an image signal of the subject 6 transmitted from the controller 3 via the PC 4 and application information on the control application 41.
  • the user operates the distance measuring device 1 by using an input device (a keyboard, a mouse or the like) provided in the PC 4 while observing the display on the monitor 5.
  • FIG. 4 is a plan view showing how the subject light flux is focused when an object farther than the in-focus position is imaged, FIG. 5 is a view showing, for each color component, the shape of the blur formed by light from one point on an object farther than the in-focus position, FIG. 6 is a plan view showing how the subject light flux is focused when an object nearer than the in-focus position is imaged, and FIG. 7 is a view showing, for each color component, the shape of the blur formed by light from one point on an object nearer than the in-focus position.
  • when the subject is at the in-focus position, the light emitted from one point on the subject is collected at one point on the imaging device 11 regardless of color component, and the subject is imaged as a point image; therefore, no positional displacement occurs between colors, and a subject image without color blur is formed.
  • when the object OBJf farther than the in-focus position is imaged, a circular blurred subject image IMGg is formed for the G component, a subject image IMGr with a right-half semicircular blur is formed for the R component, and a subject image IMGb with a left-half semicircular blur is formed for the B component; the resulting blurred image therefore has the R component subject image IMGr shifted to the right and the B component subject image IMGb shifted to the left.
  • these left and right positions of the R and B components are opposite, as viewed from the imaging device 11, to the left and right positions of the R component transmission region (RG filter 14r) and the B component transmission region (GB filter 14b) in the pupil color division filter 14; further, as shown in FIG. 5, the G component subject image IMGg is a blurred image spanning the R component subject image IMGr and the B component subject image IMGb.
  • the farther the subject OBJf moves from the in-focus position toward the far side, the larger the blur becomes, and the greater the distances become between the center of gravity Cr of the R component subject image IMGr and the center of gravity Cb of the B component subject image IMGb, between Cr and the center of gravity Cg of the G component subject image IMGg, and between Cg and Cb.
  • when the object OBJn is nearer than the in-focus position, the light emitted from one point on the object OBJn forms, as shown in FIGS. 6 and 7, a circularly blurred subject image IMGg for the G component, a subject image IMGr with a left-half semicircular blur for the R component, and a subject image IMGb with a right-half semicircular blur for the B component; the resulting blurred image therefore has the R component subject image IMGr shifted to the left and the B component subject image IMGb shifted to the right.
  • in this case, the left and right positions of the R and B components are the same, as viewed from the imaging device 11, as the left and right positions of the R component transmission region (RG filter 14r) and the B component transmission region (GB filter 14b) in the pupil color division filter 14.
  • on this near side as well, the G component subject image IMGg is a blurred image straddling the R component subject image IMGr and the B component subject image IMGb (see FIG. 7).
  • likewise, the nearer the object OBJn moves from the in-focus position, the larger the blur becomes and the greater the distances become between the centers of gravity Cr and Cg, and between Cg and Cb.
  • the object distance can therefore be calculated by computing the amount of deviation (the amount of phase difference) based on the correlation between the color images.
  • by detecting the shift amount between the R component subject image IMGr and the B component subject image IMGb, whose centers of gravity have the largest separation, it is thought that the shift can be detected with higher accuracy than when detecting the displacement between the R component subject image IMGr and the G component subject image IMGg, or between the G component subject image IMGg and the B component subject image IMGb.
  • FIG. 8 is a perspective view showing the subject 6.
  • FIG. 9 is a view showing the upper surface of the black box-like object 6 b in the subject 6.
  • FIG. 10 is a view showing a state in which the optical subject image IMGr of the R component and the optical subject image IMGb of the B component of the white printed matter formed on the imaging device 11 are deviated.
  • the R signal image (R image) and the B signal image (B image) also become a double image similar to FIG. 10.
  • the R image and the B image are transmitted to the PC 4 through the controller 3 and displayed on the monitor 5 by the control application 41 (the image displayed on the monitor 5 may be an image in which the deviation between the R image and the B image has been corrected, or an image picked up without passing through the pupil color division filter 14; when simply observing the subject, an image without such deviation is preferable); furthermore, an operation screen for the control application 41 is also displayed on the monitor 5.
  • Gr is a G pixel disposed between R pixels in the horizontal line
  • Gb is a G pixel disposed between B pixels in the horizontal line.
  • FIG. 11 is a diagram showing the pixel values of the R image and the B image of the white printed matter “1” for the pixel array on the line A, and the interpolation pixel value and the composite value.
  • the aperture ratio of each pixel is assumed to be 100%.
  • for example, the pixel value when the R image covers the entire aperture (100%) of an R pixel is 100, the pixel value when the R image does not cover the R pixel at all is 0, and intermediate values are assumed to be proportional to the percentage of the R pixel aperture covered by the R image.
  • the shift amount in the X direction of the R image and the B image of the white printed matter “1” is 3.4 pixels with the pixel pitch in the X direction as a unit. Therefore, it is most desirable that the amount of displacement obtained as a detection result be equal to the actual displacement amount of 3.4 pixels.
  • since the imaging element 11 has a Bayer array, G pixels (Gr or Gb pixels) are disposed between the R pixels and B pixels; therefore, the acquired R pixel values and B pixel values are not only sampled every other pixel in the horizontal direction, but the R and B samples are also offset from each other by one pixel in both the horizontal and vertical directions.
  • Equation 1 (ZNCC, Zero-mean Normalized Cross-Correlation):

    $$\mathrm{ZNCC}(d) = \frac{\sum_{i}\left(R_i-\bar{R}\right)\left(B_{i+d}-\bar{B}\right)}{\sqrt{\sum_{i}\left(R_i-\bar{R}\right)^{2}}\,\sqrt{\sum_{i}\left(B_{i+d}-\bar{B}\right)^{2}}}$$

    where $R_i$ and $B_{i+d}$ are the R and B pixel values compared at relative shift $d$, and $\bar{R}$, $\bar{B}$ are their means over the correlation window.
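  • as an illustrative sketch (not from the patent text), Equation 1 evaluated over candidate integer shifts might look as follows; the function names and windowing convention are assumptions:

```python
import numpy as np

def zncc(r, b):
    """Zero-mean normalized cross-correlation of two equal-length 1-D signals."""
    r = np.asarray(r, float) - np.mean(r)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.sqrt((r * r).sum() * (b * b).sum())
    return (r * b).sum() / denom if denom > 0 else 0.0

def correlate_over_shifts(r_line, b_line, max_shift):
    """ZNCC between the R line and the B line displaced by d samples,
    for each candidate shift d = 0 .. max_shift (overlap region only)."""
    n = len(r_line)
    return {d: zncc(r_line[: n - d], b_line[d:]) for d in range(max_shift + 1)}
```

The candidate shift with the largest correlation value would then serve as the integer estimate before sub-pixel refinement.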
  • the image shift amount in the left column of Table 1 is in units of 2 pixels (more exactly, 2 pixel pitches), corresponding to the R pixels and B pixels being arranged every other pixel (every 2 pixels) in the horizontal direction.
  • correlation measures such as ZNCC (normalized cross-correlation), SSD (sum of squared differences), and SAD (sum of absolute differences) may be used for the correlation calculation.
  • the correlation value 0.92 at which the image shift amount is 1 is the highest correlation value
  • the second highest correlation value is the correlation value 0.80 at which the image shift amount is 2. Therefore, the true shift amount is estimated to be intermediate between the image shift amounts 1 and 2.
  • sub-pixel interpolation is performed using the obtained correlation value.
  • equiangular straight line fitting, parabola fitting, and the like are known; here, for example, it is assumed that equiangular straight line fitting is performed using the ratio of the obtained correlation values.
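  • a minimal sketch of equiangular straight line fitting around the discrete correlation maximum; the closed-form expression used here is the standard one for a correlation peak, since the patent names the method without spelling out the formula:

```python
def equiangular_subpixel(scores):
    """Sub-pixel peak location by equiangular (isosceles) straight-line
    fitting around the discrete maximum.  `scores` maps an integer shift
    to its correlation value (e.g. the output of correlate_over_shifts)."""
    shifts = sorted(scores)
    peak = max(shifts, key=lambda d: scores[d])
    if peak in (shifts[0], shifts[-1]):
        return float(peak)  # no neighbour on one side: no refinement
    c_m, c_0, c_p = scores[peak - 1], scores[peak], scores[peak + 1]
    denom = 2.0 * (c_0 - min(c_m, c_p))  # slope of the steeper flank
    return peak + ((c_p - c_m) / denom if denom > 0 else 0.0)
```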
  • the pixel interpolation operation unit 35 interpolates and estimates an R pixel value corresponding to the Gr pixel position in the R-Gr row and a B pixel value corresponding to the Gb pixel position in the Gb-B row.
  • an average value of R pixel values on both sides is calculated as an R interpolation pixel value
  • an average value of B pixel values on both sides is calculated as a B interpolation pixel value.
  • the pixel shift detection unit 33 combines the original R pixel value and the R interpolation pixel value calculated by the pixel interpolation operation unit 35 to generate an R interpolation color image (see R pixel composite value in FIG. 11). Similarly, the original B pixel value and the calculated B interpolation pixel value are combined to generate a B interpolation color image (see the B pixel composite value in FIG. 11).
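  • as an illustrative sketch (not from the patent), the averaging interpolation and composition just described might be implemented as follows; the helper name and toy sample values are assumptions:

```python
import numpy as np

def interpolate_line(samples):
    """Given R (or B) samples taken at every other pixel along a row,
    fill each skipped position with the average of its two neighbours
    and return the combined full-resolution line."""
    samples = np.asarray(samples, dtype=float)
    line = np.empty(2 * len(samples) - 1)
    line[0::2] = samples                             # original pixel values
    line[1::2] = (samples[:-1] + samples[1:]) / 2.0  # interpolated values
    return line

r_full = interpolate_line([0, 0, 100, 100, 0])  # toy R samples on line A
b_full = interpolate_line([0, 0, 0, 100, 100])  # toy B samples on line A
```

Correlating such full-resolution lines at one-pixel steps corresponds to the Table 2 evaluation described below.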
  • the image shift amount in the left column of Table 2 is in units of one pixel (more precisely, one pixel pitch), corresponding to the interpolated color image having a value at every pixel in the horizontal direction.
  • the correlation value 0.97 at which the image shift amount is 4 pixels is the highest correlation value
  • the second highest correlation value is the correlation value 0.94 at which the image shift amount is 3 pixels. Therefore, the true shift amount is estimated to be intermediate between 3 pixels and 4 pixels.
  • the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation value in order to calculate the shift amount with higher accuracy.
  • here, equiangular straight line fitting is performed using the ratio of the obtained correlation values.
  • the image shift amount of 3.64 pixels calculated using the interpolated color image has a detection error of 0.24 pixels with respect to the actual shift amount of 3.4 pixels; that is, the amount of deviation can be calculated with higher accuracy than the detection error of 0.4 obtained without the above-described interpolation (when the acquired data is used as-is).
  • the detection error of 0.24 pixels also satisfies the accuracy of the shift amount of 0.33 pixels or less as described in the above-mentioned background art.
  • although the interpolation in the pixel interpolation operation unit 35 uses the average of the two pixels on either side in the X direction, a method that estimates pixel values from more adjacent pixels in the X direction may be used, or estimation using pixel values in the vertical (Y) direction or an oblique direction may be used instead; furthermore, an estimation method based on a plurality of images, such as a super-resolution technique, may be used.
  • the distance calculation unit 34 uses the proportional conversion formula obtained based on the configuration of the imaging optical system 10 and the pupil color division filter 14 to calculate the relative shift amount calculated by the pixel shift detection unit 33, and the height of the subject. Convert to directional distance information.
  • [Equation 2]
  • the measurement height error of the distance measuring device 1 of the present embodiment is 0.036 mm.
  • This numerical value indicates the amount of deviation from the in-focus position, and when the lower surface of the black box-like object 6b (that is, the surface of the white plate 6a shown in FIG. 8) is at the current in-focus position, It indicates the height of the black box-like object 6b.
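  • since Equation 2 itself is not reproduced in this excerpt, the following is only a placeholder sketch of the proportional conversion: a linear model whose constant K_MM_PER_PIXEL is hypothetical and would in practice follow from the configuration of the imaging optical system and the pupil color division filter:

```python
# Hypothetical proportionality constant (mm of height per pixel of R-B
# shift); in the patent this would follow from Equation 2 and the optics.
K_MM_PER_PIXEL = 0.15

def shift_to_height_mm(shift_pixels):
    """Convert a detected R-B image shift into a height offset from the
    in-focus position, assuming the linear model described in the text."""
    return K_MM_PER_PIXEL * shift_pixels
```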
  • the control application 41 can set a plurality of measurement points, and by calculating and displaying the differences between the set measurement points, the user can obtain height information for desired points on the observed subject.
  • the various arithmetic units and the like are described here as the hardware configuration of the controller 3, but the present invention is not limited to this; they may, for example, be realized as software on the PC 4 (in this case, the controller 3 may be omitted because the PC 4 can also function as the controller 3).
  • the example in which the distance measuring device 1 includes the lens barrel 2, the controller 3, the PC 4, and the monitor 5 has been described, but the distance measuring device 1 may also be realized as, for example, a digital camera, in which case the lens barrel 2 corresponds to the camera's lens barrel and lens driving mechanism, the controller 3 and the PC 4 correspond to the camera's CPU, and the monitor 5 corresponds to the camera's liquid crystal monitor or the like.
  • the number of samplings can be increased in the direction of calculating the relative shift amount. Specifically, it is possible to increase R pixels and B pixels to be subjected to shift amount detection in the horizontal direction which is the shift amount calculation direction. This makes it possible to improve the detection accuracy of the amount of deviation. As a result, the distance measurement accuracy with respect to the subject can be improved.
  • according to the distance measuring device of the present embodiment, when acquiring information on the subject distance based on color images of multiple colors obtained by imaging the pupil color-divided light with a color imaging device, the distance measurement accuracy can be made higher.
  • FIGS. 12 and 13 show Embodiment 2 of the present invention: FIG. 12 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 13 is a diagram showing the pixel values of the R image and B image of the white printed matter “1” for the pixel arrangement on line A, together with the shifted pixel values and combined values.
  • in Embodiment 1 the interpolation pixels were obtained by calculation, but in the present embodiment the interpolation pixels are obtained by shifting the image sensor 11 and performing imaging a plurality of times.
  • relative to the configuration shown in FIG. 1, the element shift unit 17 and the element shift control unit 25 are added to the lens barrel 2, and the pixel interpolation operation unit 35 is removed.
  • the element shift unit 17 and the element shift control unit 25 serve as the pixel interpolation unit, and translate the image pickup device 11 along the shift detection direction within a plane perpendicular to the optical axis of the imaging optical system 10.
  • the element shift unit 17 is for minutely moving the imaging element 11 in a direction (for example, a horizontal pixel array direction) in which at least the amount of deviation is detected.
  • if a camera shake correction mechanism is available, it may be used as the element shift unit 17.
  • the element shift control unit 25 controls driving of the element shift unit 17.
  • it is assumed that the subject conditions are the same as in Embodiment 1 described above: the subject is shaped as shown in FIGS. 8 and 9, the obtained RB double image is as shown in FIG. 10, and the actual displacement amount is 3.4 pixels.
  • when the user inputs an instruction to start measurement via the control application 41, the system controller 31 generally controls each control system related to image capture of the distance measuring device 1 and first captures an original image with the imaging element 11.
  • the original image thus taken is temporarily stored in the memory 32.
  • the system controller 31 transmits, to the element shift control unit 25, an instruction to shift the imaging element 11.
  • when it receives this command, the element shift control unit 25 generates a drive signal and transmits it to the element shift unit 17.
  • when the element shift unit 17 receives this drive signal, it shifts the image sensor 11 in the shift detection direction, here by one pixel to the right (more exactly, by one horizontal pixel pitch) in the X direction of the pixel array.
  • the system controller 31 generally controls each control system related to image capturing of the distance measuring device 1 to cause the imaging element 11 to capture a shift image.
  • the shift image thus captured is also temporarily stored in the memory 32.
  • the pixel displacement detection unit 33 combines the R pixel value of the original image stored in the memory 32 and the R pixel value of the shift image to generate an R interpolation color image (see the R pixel composite value in FIG. 13). Similarly, the B pixel value of the original image and the B pixel value of the shifted image are combined to generate a B-interpolated color image (see the B pixel combined value in FIG. 13).
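  • a sketch (with assumed helper names) of how the Embodiment 2 composition differs from Embodiment 1: the in-between values are measured in a second exposure rather than computed:

```python
import numpy as np

def combine_original_and_shifted(orig_samples, shifted_samples):
    """Interleave R (or B) samples from the original exposure with those
    from the exposure taken after a one-pixel sensor shift, yielding a
    full-resolution line without arithmetic interpolation.  Assumes the
    shift places the new samples exactly at the former G positions."""
    orig = np.asarray(orig_samples, dtype=float)
    shif = np.asarray(shifted_samples, dtype=float)
    line = np.empty(len(orig) + len(shif))
    line[0::2] = orig   # samples at the original R (or B) positions
    line[1::2] = shif   # samples measured at the former G positions
    return line
```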
  • the image shift amount in the left column of Table 3 is in units of one pixel (more precisely, one pixel pitch), corresponding to the interpolated color image having a value at every pixel in the horizontal direction.
  • the correlation value 0.97 in which the image shift amount is 3 pixels is the highest correlation value
  • the second highest correlation value is the correlation value 0.92 in which the image shift amount is 4 pixels. Therefore, the true shift amount is estimated to be intermediate between 3 pixels and 4 pixels.
  • the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation value in order to calculate the shift amount with higher accuracy.
  • here, equiangular straight line fitting is performed using the ratio of the obtained correlation values.
  • the image shift amount of 3.39 pixels calculated using the interpolated color image obtained by shifting the imaging device 11 has a detection error of only 0.01 pixels with respect to the actual shift amount of 3.4 pixels; that is, the amount of deviation can be calculated with far higher accuracy than the detection error of 0.4 when the acquired data is used as-is (the case shown in Table 1), and also more accurately than the detection error of 0.24 pixels when the interpolated color image obtained by the interpolation of Embodiment 1 is used (the case shown in Table 2); the detection error of 0.01 pixels satisfies the accuracy requirement of 0.33 pixels or less described in the background art above.
  • the distance calculation unit 34 uses the proportional conversion formula obtained based on the configurations of the imaging optical system 10 and the pupil color division filter 14 to calculate the amount of deviation calculated by the pixel deviation detection unit 33, and calculates the distance in the height direction of the subject. Convert to information.
  • as long as the condition that the shifted image does not completely match the original image is satisfied, the shift amount may be selected appropriately in view of, for example, the pixel pitch and multiples thereof.
  • the imaging optical system 10 may be shifted to perform imaging a plurality of times.
  • the element shift unit 17 and the element shift control unit 25 shift the imaging optical system 10.
  • both of the imaging optical system 10 and the imaging element 11 may be shifted to perform imaging a plurality of times.
  • the element shift unit 17 and the element shift control unit 25 shift the imaging optical system 10 and the imaging element 11.
  • in this way, the image sensor 11 is translated by the element shift unit 17 and the element shift control unit 25, which serve as the element moving unit; a plurality of images are acquired from the imaging device 11 at different positions along the detection direction, the acquired images are color-separated, and the pixel shift detection unit 33 generates interpolated color images by combining the color images of the same color.
  • FIGS. 14 and 15 show Embodiment 3 of the present invention: FIG. 14 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 15 is a diagram for explaining the optical conditions of the imaging optical system 10.
  • as in Embodiment 2, the interpolation pixels are obtained by shifting the image sensor 11 and performing imaging a plurality of times, but here the shift amount of the image sensor 11 is set based on the optical conditions.
  • the distance measuring device 1 of this embodiment has a configuration in which an element shift amount calculation unit 36 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • the element shift amount calculation unit 36 is a movement amount calculation unit that calculates the parallel movement amount (shift amount) of the imaging element 11 between the acquisition of two images, based on the optical conditions of the imaging optical system 10.
  • the element shift unit 17 and the element shift control unit 25, which serve as the element moving unit, translate the imaging element 11 by the parallel movement amount calculated by the element shift amount calculation unit 36.
  • the imaging optical system 10 has optical conditions as schematically shown in FIG.
  • the optical axis of the imaging optical system 10 is shown as O.
  • D indicates the aperture diameter of the diaphragm 13 in the imaging optical system 10 (therefore, D/2 is the radius from the optical axis O to the edge of the aperture of the diaphragm 13).
  • LG indicates the distance (gravity center distance) from the optical axis O to the gravity center position of the RG filter 14r in the pupil color division filter 14, for example. However, the distance from the optical axis O to the gravity center position of the GB filter 14 b (gravity center distance) is also LG.
  • the focal length of the imaging optical system 10 is f.
  • the focal length f is an amount that changes as the zoom lens 12 moves.
  • NA numerical aperture
  • the numerical aperture NA is, more precisely, the object-side NA, and depends on the aperture diameter D of the diaphragm 13 (the smaller the aperture diameter D, the smaller the NA; the larger the aperture diameter D, the larger the NA).
  • the optical distance between the focal position and the imaging surface 11a of the imaging device 11, scaled by the optical magnification, is assumed to be Z.
  • the shift amount between the R and B subject images on the imaging surface 11a (located at optical distance Z from the focal position) is the distance on the imaging surface 11a between the image of the barycentric position of the RG filter 14r and the image of the barycentric position of the GB filter 14b, denoted X in the figure (therefore, X/2 is the distance from the optical axis O to the image of the center of gravity of the RG filter 14r or of the GB filter 14b).
  • since the imaging optical system 10 is generally configured by combining a plurality of lenses to obtain the desired characteristics, it is difficult to express the displacement amount X between the R and B subject images at an arbitrary zoom magnification by a simple mathematical expression.
  • this resolution is the resolution in the optical axis direction, and a certain resolution (for example, a resolution of 0.05 mm) is required.
  • the depth of focus (DOF: Depth of Focus) is determined based on the optical conditions of the set observation magnification, and the element shift amount calculation unit 36 obtains the required resolution based on this depth of focus (for example, DOF/2, or more generally k × DOF with k a predetermined coefficient).
  • the shift amount of the image sensor 11 needed to obtain the required resolution in the height direction of the subject 6, as in (1) and (2), is thereby acquired.
  • the above parameters may be computed arithmetically, or look-up tables (LUTs) for the respective parameters may be prepared in advance and stored in the element shift amount calculation unit 36, with the LUT referred to at the time of distance measurement.
  • one of various optical conditions such as the zoom magnification, the numerical aperture NA, and the aperture diameter D is determined.
  • the depth of focus (DOF) is determined based on this optical condition.
  • the required resolution (k ⁇ DOF) in (2) is determined based on the depth of focus (DOF).
  • the detection accuracy of the shift required to achieve that resolution is determined, and the shift amount is set to a value equal to or less than the determined value (for example, the determined value multiplied by a predetermined coefficient of 1 or less; if the coefficient is too small, the accuracy becomes excessively higher than required, so a coefficient close to 1 is preferable); a sketch of these steps appears after this list.
  • Such a procedure is repeated while changing the optical conditions of the imaging optical system 10, for example, the zoom magnification.
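  • the steps above might be sketched as follows; note that the depth-of-focus approximation (λ/NA²) and the conversion factor from height resolution to pixel accuracy are assumptions standing in for the patent's unstated formulas:

```python
def required_sensor_shift(na, wavelength_mm, k, pixels_per_mm_of_height,
                          safety=0.9):
    """Sketch of steps (1)-(2): derive the target height resolution from
    the depth of focus, convert it into a required shift-detection
    accuracy in pixels, and scale by a coefficient <= 1 (close to 1),
    as the text suggests.  The DOF estimate and the height-to-pixel
    conversion factor are assumptions, not taken from the patent."""
    dof = wavelength_mm / (na * na)      # common depth-of-focus estimate
    height_resolution = k * dof          # e.g. k = 0.5 for DOF/2
    accuracy_pixels = height_resolution * pixels_per_mm_of_height
    return safety * accuracy_pixels      # sensor shift granularity to use
```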
  • when the actual shift amount of the imaging device 11 is determined, limitations are imposed not only by the optical conditions of the imaging optical system 10 but also by the pixel pitch of the imaging device 11 (the imaging device 11 is shifted with an accuracy finer than the pixel pitch) and by various conditions of the entire imaging system (the imaging optical system 10, the pupil color division filter 14, the imaging element 11, and so on).
  • when the optical magnification is 0.7 times, for example, after the original image is captured the sensor is shifted by one pixel (for example, to the right), so that the R pixels and B pixels come to the positions occupied by the G pixels during the original capture; the shift image is photographed in this state, the original image and the shift image are combined to generate the interpolated color image, and the relative shift amount is calculated based on the generated interpolated color image.
  • when the optical magnification is 1.0 times, after the original image is captured the sensor is shifted by 2/3 pixel (for example, to the right) and a shift image is captured in this state; it is then shifted by a further 2/3 pixel (for example, to the right; in this case it is 4/3 pixels from the original capture position) and another shift image is captured; the original image and the two shift images are combined to generate the interpolated color image, and the amount of deviation is calculated based on the generated interpolated color image.
  • when the optical magnification is 2.0 times, after the original image is captured the sensor is shifted by 1/2 pixel (for example, to the right) and the first shift image is captured in this state; it is shifted by a further 1/2 pixel (a shift of one pixel from the original capture position) and the second shift image is captured; it is then shifted by a further 1/2 pixel (3/2 pixels from the original capture position) and the third shift image is captured in this state; the four images, that is, the original image and the first to third shift images, are combined to generate the interpolated color image, and the relative displacement amount is calculated based on the generated interpolated color image.
  • the element shift amount calculation unit 36 refers to, for example, the LUT shown in Table 4: when the current optical magnification of the imaging optical system 10 is less than 0.85 times, the shift amount is determined with reference to the 0.7-times column; when the optical magnification is 0.85 times or more and less than 1.5 times, with reference to the 1.0-times column; and when it is 1.5 times or more, with reference to the 2.0-times column, and so on (a sketch of this lookup follows below).
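  • since Table 4 itself is not reproduced in this excerpt, the LUT contents in this sketch merely restate the worked examples above; the structure and names are assumptions:

```python
# Cumulative shift positions (in pixels, from the original position) per
# optical-magnification column, restating the worked examples in the text.
SHIFT_LUT = {
    0.7: [1.0],              # one shift image
    1.0: [2/3, 4/3],         # two shift images
    2.0: [0.5, 1.0, 1.5],    # three shift images
}

def shifts_for_magnification(m):
    """Pick the LUT column using the thresholds given in the text."""
    if m < 0.85:
        return SHIFT_LUT[0.7]
    if m < 1.5:
        return SHIFT_LUT[1.0]
    return SHIFT_LUT[2.0]
```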
  • since the amount by which the imaging element 11 is shifted is determined based on the optical conditions of the imaging optical system 10, a shift image for generating an interpolated color image can be obtained efficiently and with appropriate accuracy, neither insufficient nor excessive.
  • because the shift amount of the imaging device 11 within the plane perpendicular to the optical axis is calculated from a value obtained by multiplying the depth of focus, a quantity along the optical axis, by a predetermined coefficient, the configuration is suitable, for example, for industrial microscopes.
  • FIGS. 16 and 17 show Embodiment 4 of the present invention: FIG. 16 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 17 is a view showing the user designating the measurement position of the subject on the screen of the monitor 5.
  • the present embodiment has basically the same configuration as Embodiment 2 described above, but the user can designate which part of the subject is subjected to distance measurement, and only the part of the imaging device 11 needed for that measurement is read out.
  • the imaging device 11 in this embodiment adopts a configuration that can read out arbitrary pixels, a specific example being a CMOS sensor (a CCD sensor cannot do this, and is therefore not adopted in this embodiment).
  • the distance measuring device 1 of this embodiment has a configuration in which a reading area calculation unit 37 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • the read area calculation unit 37 is a pixel area setting unit that calculates and sets a pixel area for detecting a shift amount by the pixel shift detection unit 33 when capturing a shift image and reading out pixel information from the imaging device 11 .
  • the imaging element control unit 21 drives the imaging element 11 so as to read out only the pixel values of the pixel area calculated by the reading area calculation unit 37 for the shift image.
  • when the user inputs an instruction to start measurement via the control application 41, the system controller 31 generally controls each control system related to image capture of the distance measuring device 1 and first captures an original image with the imaging element 11.
  • the original image thus taken is temporarily stored in the memory 32.
  • the photographed original image is transmitted to the PC 4 via the controller 3 and displayed on the screen 5 a of the monitor 5 by the control application 41.
  • at this time, a message such as "Please specify a measurement part" may be displayed on the screen 5a of the monitor 5.
  • the user operates an input device such as a mouse to move the pointer 5p on the screen 5a and designates the portion to be measured (in the example shown, the white printed matter “1” on the black box 6b).
  • the designated measurement point is transmitted from the control application 41 to the read area calculation unit 37.
  • the reading area calculation unit 37 calculates a pixel area to be read from the image pickup device 11 at the time of shift imaging in the subsequent stage.
  • the pixel area can be calculated, for example, as the R pixels and B pixels included in a fixed range to the left and right of the measurement point, taken from the line containing the measurement point and an adjacent line (chosen so that one line contains R pixels and the other contains B pixels).
  • suppose the measurement point designated by the user is (2000, 1500), and this measurement point is a B pixel in a Gb-B row.
  • the read area operation unit 37 selects one of the 1499th line and the 1501th line adjacent to the line including the measurement point as a line including the R pixel.
  • line 1499 is selected.
  • the reading area calculation unit 37 selects, for example, a range of 32 pixels centered on the measurement point in line 1500 (specifically, X coordinates 1985 to 2016), and further selects the pixel range with the same X coordinates (also 32 pixels in this example) in line 1499.
  • the pixel range selected here corresponds to the line A described in the second embodiment, that is, the line A is set so as to include the measurement points designated by the user.
  • the R pixel and the B pixel included in the pixel range selected by the read area operation unit 37 are as follows.
  • the reading area calculation unit 37 sets the R pixel and the B pixel included in the selected pixel range as the pixel area, and transmits the information of the set pixel area to the imaging element control unit 21.
  • the imaging element control unit 21 generates read addresses so that only the received pixel area, that is, a total of 32 pixels (16 R pixels and 16 B pixels), is read out, and performs readout control of the imaging element 11.
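  • the region selection of this worked example might be sketched as follows; the coordinate conventions and the parity that maps X positions to R or B pixels are assumptions:

```python
def reading_area(point_x, point_y, half_width=16):
    """Return the (y, x) coordinates of the B pixels on the measurement
    line and the R pixels on the adjacent line, following the worked
    example: measurement point (2000, 1500) on a Gb-B row, companion R
    row 1499, and a 32-pixel window with X in 1985..2016."""
    x0 = point_x - half_width + 1                 # 1985 for point_x = 2000
    xs = range(x0, x0 + 2 * half_width)           # 1985 .. 2016
    b_line, r_line = point_y, point_y - 1         # 1500 and 1499
    # Keep only the pixels of the wanted color: B on even X, R on odd X
    # (which parity holds which color is an assumption about array phase).
    b_pixels = [(b_line, x) for x in xs if x % 2 == 0]   # 16 B pixels
    r_pixels = [(r_line, x) for x in xs if x % 2 == 1]   # 16 R pixels
    return r_pixels, b_pixels
```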
  • here, the R pixels and B pixels in the 32-pixel range in the X direction centered on the designated measurement point are read out unconditionally, but after the original image has been captured, only an area more suitable for the correlation calculation may instead be set as the pixel area to be read out, based on the state of the subject in the original image.
  • for example, edge detection may be performed on the R pixels and B pixels in the vicinity of the measurement point, and the R and B pixels in a region including the detected edge may be set as the pixel area.
  • alternatively, when two points are designated, the R and B pixels on the line segment connecting the two designated points may be set as the pixel region.
  • if the entire selected pixel range is read out, that is, a pixel range that also includes the Gr and Gb pixels not used for the shift amount calculation, the readout control of the imaging device 11 becomes somewhat easier.
  • FIGS. 18 to 20 show Embodiment 5 of the present invention: FIG. 18 is a block diagram showing the configuration of the distance measuring device 1; FIG. 19 shows the state of the R component subject image IMGr and the B component subject image IMGb in the subject image 6i of the original image formed on the imaging surface 11a of the imaging device 11, with a partially enlarged view of line A; and FIG. 20 shows the state of the R component subject image IMGr and the B component subject image IMGb when the subject image 6i of the shift image is inclined and slightly larger than in the original image, with a partially enlarged view of line A.
  • this embodiment has basically the same configuration as Embodiment 2 described above, but it determines whether relative movement of the subject 6 with respect to the imaging device 11 (other than the shift itself) occurred before and after shifting the imaging device 11; if such movement occurred, accurate distance measurement is impossible, so unnecessary arithmetic processing is not performed.
  • the distance measurement device 1 of this embodiment has a configuration in which a shift image comparison operation unit 38 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • the shift image comparison operation unit 38 acquires an original image acquired before shifting the imaging element 11 and a shift amount after moving the imaging element 11 at least in a pixel area where the shift amount is detected by the pixel shift detection unit 33. It is an image comparison unit that compares the match between a pixel shift image that is an image.
  • FIG. 19 shows a subject image 6i related to the original image formed on the imaging surface 11a of the imaging element 11, and on the line A, an image portion AR of the subject image IMGr of R component; In the image portion AB of the B component subject image IMGb, an amount of deviation as shown in the drawing occurs.
  • FIG. 20 shows the subject image 6i of the shift image (the image obtained by shifting the imaging element 11 by, for example, one pixel in the X direction after capturing the original image) formed on the imaging surface 11a of the imaging element 11.
  • Here, the subject image 6i is rotated somewhat clockwise in the XY plane and is slightly larger than in the original image of FIG. 19.
  • The slight increase in the size of the subject image 6i is considered to be because the subject 6 moved somewhat in the direction approaching the lens barrel 2 (upward in the height direction).
  • If an interpolated color image is generated by combining an original image and a shift image captured before and after such a relative movement of the subject 6 with respect to the imaging element 11 (movement other than that due to the intended shift), an accurate shift amount cannot be obtained even if the shift amount is calculated from that interpolated color image; that is, accurate distance measurement cannot be performed.
  • Therefore, when the shift image comparison operation unit 38 detects, by comparing the original image with the shift image, an image deviation that makes generation of the interpolated color image inappropriate, the generation of the interpolated color image, and the generation of information on the subject distance by the pixel shift detection unit 33 and the distance calculation unit 34 based on the interpolated color image, are cancelled.
  • Specifically, the pixel shift detection unit 33 calculates the shift amount on line A based on the R and B color images of the original image stored in the memory 32, and likewise calculates the shift amount on line A based on the R and B color images of the shift image.
  • The shift image comparison operation unit 38 compares the shift amount obtained from the original image with the shift amount obtained from the shift image, and determines whether the difference between the two lies within the range of -1 to +1 (a sketch of this check is given after this list).
  • If the difference lies within this range, the system controller 31 determines that no inadvertent movement has occurred, causes the pixel shift detection unit 33 to generate an interpolated color image by combining the original image and the shift image and to perform the highly accurate shift amount detection based on that interpolated color image, and further causes the distance calculation unit 34 to generate information on the subject distance based on the detected shift amount.
  • If the difference lies outside this range, the system controller 31 determines that an inadvertent movement has occurred, and notifies the user of that fact via the control application 41.
  • In this case, the system controller 31 cancels the generation of the interpolated color image by the pixel shift detection unit 33 and the detection of the shift amount based on the interpolated color image; consequently, the generation of information on the subject distance by the distance calculation unit 34 based on a shift amount obtained from the interpolated color image is also automatically cancelled.
  • Instead of the highly accurate processing based on the interpolated color image, the system controller 31 then causes the pixel shift detection unit 33 and the distance calculation unit 34 to perform their processing based on the original image alone and to generate information on the subject distance.
  • When notifying the user that an inadvertent movement has occurred, the system controller 31 may also present, as reference values, the information on the subject distance obtained from the original image alone together with the information on the subject distance obtained from the shift image alone.
  • In that case, the system controller 31 further causes the pixel shift detection unit 33 and the distance calculation unit 34 to perform their processing based on the shift image alone to generate that information on the subject distance.
  • The determination condition is not limited to the above; other determination conditions that take the subject conditions and the like into account may be adopted, and the determination condition may be variably controlled according to the subject conditions.
  • According to this embodiment, substantially the same effects as those of Embodiment 2 described above are obtained, and in addition it is determined whether an inadvertent relative movement of the subject 6 with respect to the imaging element 11 has occurred.
  • When such movement has occurred, the synthesis of the corresponding images is cancelled and the subsequent processing based on the interpolated color image is also cancelled, so that an unnecessary increase in processing load due to needless arithmetic processing is suppressed and the user is prevented from being given erroneous information on the subject distance.
  • Furthermore, since the user is notified of the movement, the user is given an opportunity to perform the precise measurement again.
  • The present invention may also take the form of a control method for controlling the distance measuring device as described above, a control program for causing a computer to control the distance measuring device as described above, or a non-transitory computer-readable recording medium storing that control program.
  • The present invention is not limited to the above-described embodiments as they are; in the implementation stage, the constituent elements can be modified and embodied without departing from the scope of the invention.
  • Various aspects of the invention can be formed by appropriate combinations of the plurality of constituent elements disclosed in the above-described embodiments; for example, some constituent elements may be deleted from all the constituent elements shown in an embodiment.
  • Constituent elements from different embodiments may also be combined as appropriate. As a matter of course, various modifications and applications are possible without departing from the scope of the invention.
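
To make the decision flow above concrete, here is a minimal Python sketch of the consistency check and the two processing paths. It assumes only the +/-1-pixel tolerance stated above; all names are illustrative rather than taken from the patent.

```python
def movement_check(shift_original, shift_shifted, tolerance=1.0):
    """Return True when no inadvertent subject movement is suspected,
    i.e. the two line-A shift amounts agree to within the tolerance."""
    return abs(shift_original - shift_shifted) <= tolerance

def measure(shift_original, shift_shifted):
    if movement_check(shift_original, shift_shifted):
        # Normal path: synthesize the interpolated color image and run the
        # high-accuracy shift detection and distance calculation.
        return "interpolated-image measurement"
    # Inadvertent movement: cancel the synthesis, notify the user, and fall
    # back to a measurement from the original image alone (reference value).
    return "original-image-only measurement (user notified)"

print(measure(3.6, 3.8))  # within tolerance -> high-accuracy path
print(measure(3.6, 5.2))  # outside tolerance -> fallback path
```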

Abstract

A distance measurement apparatus is provided with: an imaging optical system (10) for forming an object image; a pupil colour separation filter (14) for subjecting a pupil of the imaging optical system (10) to colour separation; a colour imaging element (11) which subjects the formed object image to photoelectric conversion and outputs an image; a pixel-interpolation calculation unit (35) for generating interpolation pixels of pixels arranged in a plurality of colour images obtained by separating the colours of the image outputted from the imaging element (11); a pixel deviation detector (33) which synthesizes the colour images obtained by colour separation and the interpolation pixels generated by the pixel-interpolation calculation unit (35) to generate a plurality of interpolated colour images, and detects the relative deviation amount of the object image in the plurality of interpolated colour images; and a distance calculation unit (34) for generating information related to object distance on the basis of the detected deviation amount.

Description

Distance measuring device

 The present invention relates to a distance measuring device for acquiring distance information on a subject.

 For example, Japanese Patent Application Laid-Open No. 2001-174696 describes a color imaging apparatus that performs pupil division by color by interposing, in the photographing optical system, a pupil color division filter having different spectral characteristics for each partial pupil. A subject image formed on a color imaging element through this pupil color division filter is captured by the color imaging element and output as an image signal. This image signal is color-separated, and distance information is generated by detecting the relative shift amount between the pupil-color-divided color signals. This distance information may be the distance to the subject itself, or, for example when autofocus (AF) is the aim, it may be focusing information consisting of the focusing deviation direction and the focusing deviation amount.

 A distance measuring device capable of acquiring such distance information can be used in various apparatuses and instruments such as microscopes and digital cameras; when used in a microscope, for example, it enables height measurement of the subject (also called a specimen).
 Specifically, assuming that the colors into which the pupil is color-divided are red (R) and blue (B) (see FIGS. 3 to 7 according to the present invention, etc.), measuring the R-B shift amount at a plurality of points within the imaging area yields the relative height distances between the measured points. Furthermore, an absolute distance referenced to the in-focus position can also be obtained.

 For example, consider the case where the height of the subject 6 shown in FIG. 8 according to the present invention is measured with a microscope using a pupil color division filter as shown in FIG. 3 according to the present invention. Here, the upper surface of the subject 6 is as shown in FIG. 9 according to the present invention.

 When the imaging optical system of the microscope is focused on the surface of the white plate 6a shown in FIG. 8, the R-component subject image IMGr and the B-component subject image IMGb in the optical subject image 6i of the white print “1” on the black box-like object 6b, formed on the imaging element, become a double image as shown in FIG. 10 according to the present invention. Accordingly, the R-signal image and the B-signal image in the color image obtained by the imaging element are likewise as in FIG. 10.

 Consider measuring the shift amount of this RB double image at the line A portion of FIG. 10. When a Bayer array sensor as shown in FIG. 2 according to the present invention is used as the imaging element of the microscope, the RB double image at the line A portion is, for example, as shown in FIG. 21. FIG. 21 is a view showing an example of an RB double image formed on a Bayer array sensor.
 When the R pixel values and the B pixel values are acquired from the basic 2 × 2 pixel arrays numbered 1 to 6 in the horizontal direction in FIG. 21, the result is, for example, as shown in FIG. 22. FIG. 22 is a diagram showing the R pixel values and B pixel values acquired from the RB double image of FIG. 21; the black circles connected by a solid line indicate the R pixel values, and the black squares connected by a dotted line indicate the B pixel values.

 For data such as that shown in FIG. 22, a zero-mean normalized cross-correlation (ZNCC) calculation, for example, is performed (see Equation 1 below according to the embodiment of the present invention), and the pixel shift amount is calculated from the correlation values.

 Since the shift amount obtained from this ZNCC is in units of pixel coordinates, sub-pixel interpolation (for example, a sub-pixel estimation method such as equiangular line fitting or parabola fitting) is performed to calculate the shift amount with higher accuracy. Because the shift amount calculated in this way can be converted proportionally based on the configurations of the imaging optical system and the pupil color division filter, the distance in the height direction from the in-focus position is obtained.

 However, because the horizontal positions (and vertical positions) of the R pixels and B pixels are discrete on the Bayer array, in a case such as that shown in FIG. 22 the R waveform shape and the B waveform shape differ, and an accurate shift amount can no longer be detected.
 Further, for example, Japanese Patent Application Laid-Open No. 2008-122835 describes a correlation calculation method, a correlation calculation device, a focus detection device, and an imaging device that, when calculating the degree of correlation between two data strings, extract the DC component or the AC component and align the signal levels of the extracted components so as to detect the correlation accurately.

 In a case such as that shown in FIG. 22, however, the R waveform shape and the B waveform shape differ in the first place, so even the technique described in Japanese Patent Application Laid-Open No. 2008-122835 cannot be expected to provide a sufficient effect.

 Then, where z is the subject height (mm) and x is the shift amount (in pixels) of the RB double image, if the proportional conversion equation obtained from the configurations of the imaging optical system and the pupil color division filter is, for example, “z = 0.15x” as shown in Equation 2 according to the embodiment of the present invention, and the required accuracy in the z direction (distance measurement resolution) is 0.05 mm, the pixel shift amount must be detected with an accuracy of 0.33 pixels or better.
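 For reference, the 0.33-pixel requirement follows directly from inverting the conversion equation: with z = 0.15x and a required resolution of 0.05 mm in z, the allowable error in x is 0.05 / 0.15 ≈ 0.33 pixels.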
 As described above, with an imaging element in which the pixels of each color are arranged discretely, such as a Bayer array, the waveform shapes of the individual color images may differ, and in that case the detection accuracy of the shift amount decreases. A technique that further raises the detection accuracy of the shift amount, and hence the distance measurement accuracy, is therefore desired.

 The present invention has been made in view of the above circumstances, and its object is to provide a distance measuring device that can achieve higher distance measurement accuracy when acquiring information on the subject distance based on color images of a plurality of colors obtained by imaging pupil-color-divided light.

 A distance measuring device according to one aspect of the present invention comprises: an imaging optical system for forming a subject image; a pupil color division optical system that is disposed on the optical path of the imaging optical system and color-divides the pupil of the imaging optical system by giving mutually different spectral characteristics to a plurality of partial pupils of the imaging optical system; a color imaging element that photoelectrically converts the subject image formed by the imaging optical system via the pupil color division optical system and outputs an image in which a plurality of pixels are arrayed; a pixel interpolation unit that generates interpolation pixels for the pixels arrayed in a plurality of color images relating to different partial pupils, among the plurality of color images obtained by color-separating the image output from the imaging element; and a distance information generation unit that generates a plurality of interpolated color images by combining the plurality of color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation unit, detects the relative shift amount of the subject image in the plurality of interpolated color images, and generates information on the subject distance based on the shift amount; wherein the pixel interpolation unit generates the interpolation pixels at least in the shift detection direction, which is the direction in which the shift amount is detected.
FIG. 1 is a block diagram showing the configuration of the distance measuring device according to Embodiment 1 of the present invention.
FIG. 2 is a diagram for explaining the pixel array of the imaging element in Embodiment 1.
FIG. 3 is a diagram for explaining a configuration example of the pupil color division filter in Embodiment 1.
FIG. 4 is a plan view showing, in Embodiment 1, how the subject light flux converges when imaging a subject farther than the in-focus position.
FIG. 5 is a diagram showing, for each color component in Embodiment 1, the shape of the blur formed by light from one point on a subject farther than the in-focus position.
FIG. 6 is a plan view showing, in Embodiment 1, how the subject light flux converges when imaging a subject nearer than the in-focus position.
FIG. 7 is a diagram showing, for each color component in Embodiment 1, the shape of the blur formed by light from one point on a subject nearer than the in-focus position.
FIG. 8 is a perspective view showing the subject in Embodiment 1.
FIG. 9 is a view showing the upper surface of the black box-like object in the subject of Embodiment 1.
FIG. 10 is a view showing, in Embodiment 1, how the R-component optical subject image and the B-component optical subject image of the white print formed on the imaging element are shifted.
FIG. 11 is a diagram showing, in Embodiment 1, the pixel values of the R image and B image of the white print “1” for the pixel array on line A, together with the interpolated pixel values and composite values.
FIG. 12 is a block diagram showing the configuration of the distance measuring device in Embodiment 2 of the present invention.
FIG. 13 is a diagram showing, in Embodiment 2, the pixel values of the R image and B image of the white print “1” for the pixel array on line A, together with the shifted pixel values and composite values.
FIG. 14 is a block diagram showing the configuration of the distance measuring device in Embodiment 3 of the present invention.
FIG. 15 is a diagram for explaining the optical conditions of the imaging optical system in Embodiment 3.
FIG. 16 is a block diagram showing the configuration of the distance measuring device in Embodiment 4 of the present invention.
FIG. 17 is a view showing, in Embodiment 4, how the user designates the measurement position of the subject on the screen of the monitor.
FIG. 18 is a block diagram showing the configuration of the distance measuring device in Embodiment 5 of the present invention.
FIG. 19 is a diagram showing, in Embodiment 5, the R-component and B-component subject images in the original image formed on the imaging surface of the imaging element, together with a partially enlarged view of line A.
FIG. 20 is a diagram showing, in Embodiment 5, the R-component and B-component subject images when the shift image formed on the imaging surface of the imaging element is inclined and slightly larger than the original image, together with a partially enlarged view of line A.
FIG. 21 is a view showing an example of a conventional RB double image formed on a Bayer array sensor.
FIG. 22 is a diagram showing the R pixel values and B pixel values conventionally acquired from the RB double image of FIG. 21.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.

Embodiment 1

 FIGS. 1 to 11 show Embodiment 1 of the present invention, and FIG. 1 is a block diagram showing the configuration of the distance measuring device 1.
 The distance measuring device 1 generates information on the distance to a subject based on an image obtained by capturing the subject image. Examples of applications of the distance measuring device 1 include industrial microscopes, digital cameras, digital video cameras, camera-equipped mobile phones, camera-equipped personal digital assistants (camera-equipped PDAs), camera-equipped personal computers, surveillance cameras, and endoscopes, though the device is of course not limited to these.

 In the following, the distance measuring device 1 is described as applied to an imaging apparatus assumed to be an industrial microscope.

 First, the subject 6 is the object whose distance is to be measured; in the field of microscopes, for example, it is also called a specimen.

 The distance measuring device 1 measures the distance to the subject 6 and includes, for example, a lens barrel 2, a controller 3, a personal computer (PC) 4, and a monitor 5.
 The lens barrel 2 includes an imaging optical system 10, an imaging element 11, a pupil color division filter 14, a ring illumination 16, an imaging element control unit 21, a zoom control unit 22, a focus control unit 23, and a focus drive mechanism 24.

 The imaging optical system 10 forms an optical subject image on the imaging element 11 and includes, for example, a zoom lens 12, a diaphragm 13, and an objective lens 15. The zoom lens 12 changes the focal length of the imaging optical system 10 to perform zooming (change of the imaging magnification). The diaphragm 13 changes the passage range of the light flux passing through the imaging optical system 10 to adjust the brightness of the subject image formed on the imaging element 11; changing the aperture diameter of the diaphragm 13 also changes the pupil diameter of the imaging optical system 10. The objective lens 15 is the optical element mainly responsible for the optical power of the imaging optical system 10.
 The imaging element 11 is configured by arraying a plurality of pixels (imaging pixels); it photoelectrically converts the subject image formed by the imaging optical system 10 via the pupil color division filter 14 and outputs an image in which a plurality of pixels (image pixels) are arrayed. The imaging element 11 is a color imaging element that receives and photoelectrically converts the subject image separately for each of a plurality of wavelength bands of light (for example RGB, though not limited to these) and outputs the result as an electric signal.

 The color imaging element may be a single-plate imaging element provided with on-chip element color filters, a three-plate system using dichroic prisms that separate the light into the RGB color components, an element capable of acquiring RGB imaging information at the same pixel position according to the position in the depth direction of the semiconductor, or any other configuration capable of acquiring imaging information for a plurality of wavelength bands on a per-pixel basis.

 In the present embodiment, however, the imaging element 11 is assumed to be a single-plate imaging element provided with on-chip element color filters in the primary-color Bayer arrangement shown, for example, in FIG. 2. FIG. 2 is a diagram for explaining the pixel array of the imaging element 11.
 In the color filter of the primary-color Bayer arrangement, as shown in FIG. 2, a B filter (forming the B pixels) and an R filter (forming the R pixels) are arranged along one diagonal of a 2 × 2 pixel block, and G filters (forming the G pixels) are arranged along the other diagonal; this 2 × 2 block is the basic array, and the filter is constructed by tiling this basic array.
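
 As a concrete illustration (an assumption consistent with the R-Gr and Gb-B row naming used later in this description, not a figure from the publication), a 4 × 4 corner of this arrangement can be written out as follows.

```python
# A 4x4 corner of the primary-color Bayer arrangement described above:
# R-Gr rows alternate with Gb-B rows, tiling the 2x2 basic array.
bayer = [
    ["R",  "Gr", "R",  "Gr"],
    ["Gb", "B",  "Gb", "B"],
    ["R",  "Gr", "R",  "Gr"],
    ["Gb", "B",  "Gb", "B"],
]
for row in bayer:
    print(" ".join(f"{c:>2}" for c in row))
```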
 As the imaging element 11, imaging elements such as CMOS sensors and CCD sensors can be applied widely; however, since it is not necessary to read out all pixels when the aim is to measure the distance to the subject, a CMOS sensor from which desired pixels can be read out selectively is preferable.

 The pupil color division filter 14 is a pupil color division optical system that is disposed on the optical path of the imaging optical system 10 and color-divides the pupil of the imaging optical system 10 by giving mutually different spectral characteristics to a plurality of partial pupils in the pupil of the imaging optical system 10. Since the pupil color division filter 14 band-limits the light passed by each partial pupil, it can also be called a band-limiting filter.

 Specifically, the pupil color division filter 14 is configured, for example, as shown in FIG. 3. FIG. 3 is a diagram for explaining a configuration example of the pupil color division filter 14.

 In the pupil color division filter 14 shown in FIG. 3, the pupil of the imaging optical system 10 is divided into a first partial pupil and a second partial pupil: for example, the left half is an RG filter 14r that passes the G (green) component and the R (red) component and blocks the B (blue) component, and the right half is a GB filter 14b that passes the G component and the B component and blocks the R component. Accordingly, the pupil color division filter 14 passes all of the G component contained in the light passing through the aperture (and hence the pupil) of the diaphragm 13 of the imaging optical system 10, passes the R component only through the left-half partial pupil of the aperture, and passes the B component only through the remaining right-half partial pupil.

 To prevent light loss caused by mismatched spectral characteristics and degradation of the distance measurement accuracy, the RGB spectral transmission characteristics of the pupil color division filter 14 and the RGB spectral transmission characteristics of the element filter of the imaging element 11 (see FIG. 2) are desirably identical or as close as possible.
 The ring illumination 16 is an illumination device that irradiates the subject 6 with illumination light. The ring illumination 16 has a plurality of light sources such as LEDs arranged in a ring around the optical path, in a peripheral region that does not block the optical path of the imaging optical system 10, and has the advantage that illumination shadows hardly occur on the subject. The ring illumination 16 is provided here because an industrial microscope is assumed, as described above; another illumination device may be used, or natural light may be used without providing any illumination device.

 The imaging element control unit 21 controls the imaging element 11, performing drive control of the imaging element 11 and readout control from the imaging element 11. When the imaging element 11 is an analog imaging element, the imaging element control unit 21 also performs A/D conversion of the signals read from the imaging element 11.

 The zoom control unit 22 performs control to move the zoom lens 12 in the optical axis direction and change the focal length of the imaging optical system 10.

 The focus control unit 23 controls the focus drive mechanism 24 so that the optical image of the subject 6 formed by the imaging optical system 10 is positioned on (that is, focused on) the imaging surface of the imaging element 11.

 The focus drive mechanism 24 performs driving to adjust the focus position of the imaging optical system 10 based on a control signal from the focus control unit 23. In an industrial microscope, for example, the focus drive mechanism 24 is configured as a lens barrel drive mechanism that moves the lens barrel 2 itself in the optical axis direction, that is, in the direction away from or toward the subject 6. In a digital camera, for example, the focus drive mechanism 24 is configured as a mechanism that moves a focus lens included in the objective lens 15 in the optical axis direction.
 The controller 3 is connected to the lens barrel 2 described above and controls the entire system of the distance measuring device 1; it includes a system controller 31, a memory 32, a pixel shift detection unit 33, a distance calculation unit 34, and a pixel interpolation operation unit 35.

 The system controller 31 comprehensively controls the entire distance measuring device 1, including the operation units in the controller 3 and the control units in the lens barrel 2. The system controller 31 also performs processing such as color-separating the image output from the imaging element 11 into a plurality of color images.

 The memory 32 temporarily buffers the image signals received from the lens barrel 2 and is configured with, for example, an SDRAM.
 The pixel interpolation operation unit 35 is a pixel interpolation unit that generates interpolation pixels for the pixels arrayed in a plurality of color images relating to different partial pupils, among the plurality of color images obtained by color-separating the image output from the imaging element 11, at least in the shift detection direction, which is the direction in which the relative shift amount is detected. Specifically, the pixel interpolation operation unit 35 of this embodiment performs interpolation operations on the R color image and the B color image, which are the targets of shift detection among the R, G, and B color images obtained when the system controller 31 color-separates a single image read from the imaging element 11 and A/D converted, and thereby generates R interpolation pixels and B interpolation pixels.

 The pixel shift detection unit 33 generates a plurality of interpolated color images by combining the plurality of color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation operation unit 35, and detects the relative shift amount of the subject image in the plurality of interpolated color images; it is one part of the distance information generation unit. Specifically, the pixel shift detection unit 33 of this embodiment generates an R interpolated color image by combining the R color image color-separated by the system controller 31 with the R interpolation pixels generated by the pixel interpolation operation unit 35, and generates a B interpolated color image by combining the B color image color-separated by the system controller 31 with the B interpolation pixels generated by the pixel interpolation operation unit 35. The pixel shift detection unit 33 then detects the shift amount of the subject image between the generated R interpolated color image and B interpolated color image.

 The distance calculation unit 34 generates information on the subject distance based on the shift amount detected by the pixel shift detection unit 33 (this information may be the distance to the subject itself, or the focusing deviation direction and deviation amount, among others; the former is suitable, for example, for measuring the shape (height) of the subject, and the latter is suitable, for example, for focusing on the subject); it is another part of the distance information generation unit.
 The PC 4 is connected to the controller 3 described above and has a control application 41, software that also serves as the user interface of the distance measuring device 1.

 The monitor 5 displays the image signal of the subject 6 transmitted from the controller 3 via the PC 4, as well as application information relating to the control application 41.

 Accordingly, the user operates the distance measuring device 1 by using an input device (keyboard, mouse, etc.) provided on the PC 4 while observing the display on the monitor 5.
 Next, the spatial positional deviations of the individual color images acquired in the distance measuring device 1 described above will be explained with reference to FIGS. 4 to 7. FIG. 4 is a plan view showing how the subject light flux converges when imaging a subject farther than the in-focus position; FIG. 5 shows, for each color component, the shape of the blur formed by light from one point on a subject farther than the in-focus position; FIG. 6 is a plan view showing how the subject light flux converges when imaging a subject nearer than the in-focus position; and FIG. 7 shows, for each color component, the shape of the blur formed by light from one point on a subject nearer than the in-focus position.

 First, when the subject is at the in-focus position, the light emitted from one point on the subject is condensed to one point on the imaging element 11 regardless of color component, forming a subject image that is a point image. Accordingly, no positional deviation occurs between colors, and a subject image free of color fringing is formed.
 In contrast, when the subject OBJf is, for example, farther than the in-focus position, the light emitted from one point on the subject OBJf forms, as shown in FIGS. 4 and 5, a circular blurred subject image IMGg for the G component, a right-half semicircular blurred subject image IMGr for the R component, and a left-half semicircular blurred subject image IMGb for the B component. Accordingly, when the subject OBJf farther than the in-focus position is imaged, a blurred image is formed in which the R-component subject image IMGr is shifted to the right and the B-component subject image IMGb is shifted to the left; the left-right positions of the R and B components in this blurred image are the reverse of the left-right positions of the R-component transmission region (RG filter 14r) and B-component transmission region (GB filter 14b) of the pupil color division filter 14 as seen from the imaging element 11. As shown in FIG. 5, the G-component subject image IMGg is a blurred image spanning the R-component subject image IMGr and the B-component subject image IMGb. The farther the subject OBJf moves from the in-focus position toward the far side, the larger the blur becomes, and the larger the separations between the centroid position Cr of the R-component subject image IMGr and the centroid position Cb of the B-component subject image IMGb, between Cr and the centroid position Cg of the G-component subject image IMGg, and between Cg and Cb.

 On the other hand, when the subject OBJn is, for example, nearer than the in-focus position, the light emitted from one point on the subject OBJn forms, as shown in FIGS. 6 and 7, a circular blurred subject image IMGg for the G component, a left-half semicircular blurred subject image IMGr for the R component, and a right-half semicircular blurred subject image IMGb for the B component. Accordingly, when the subject OBJn nearer than the in-focus position is imaged, a blurred image is formed in which the R-component subject image IMGr is shifted to the left and the B-component subject image IMGb is shifted to the right; the left-right positions of the R and B components in this blurred image are the same as the left-right positions of the R-component transmission region (RG filter 14r) and B-component transmission region (GB filter 14b) of the pupil color division filter 14 as seen from the imaging element 11. Also on this near side, the G-component subject image IMGg is a blurred image spanning the R-component subject image IMGr and the B-component subject image IMGb (see FIG. 7). The farther the subject OBJn moves from the in-focus position toward the near side, the larger the blur becomes, and the larger the separations between the centroid positions Cr and Cb, between Cr and Cg, and between Cg and Cb.

 Accordingly, if the shift amount (phase difference amount) is calculated, based on correlation, for a combination of two color images whose centroid positions when passing through the pupil region of the imaging optical system 10 differ, the subject distance can be calculated. In this embodiment, the shift amount between the R-component subject image IMGr and the B-component subject image IMGb, whose centroid positions are separated the most, is detected; this is expected to allow higher-accuracy detection than detecting the shift between the R-component image IMGr and the G-component image IMGg, or between the G-component image IMGg and the B-component image IMGb.
 Next, in this embodiment, the subject 6 is assumed to have the shape shown in FIG. 8. FIG. 8 is a perspective view showing the subject 6.

 That is, in the subject 6, a black box-like object 6b having height is placed on a white plate 6a, and a white print “1”, as an example, is formed on the upper surface of the black box-like object 6b. FIG. 9 is a view showing the upper surface of the black box-like object 6b in the subject 6.

 When the imaging optical system 10 is focused on the surface of the white plate 6a, the R-component subject image IMGr and the B-component subject image IMGb in the optical subject image 6i of the white print “1” on the black box-like object 6b, formed on the imaging element 11, become a double image as shown in FIG. 10. FIG. 10 is a view showing how the R-component optical subject image IMGr and the B-component optical subject image IMGb of the white print formed on the imaging element 11 are shifted.

 Accordingly, among the color images obtained by imaging the white print “1” on the black box-like object 6b with the imaging element 11, the R-signal image (R image) and the B-signal image (B image) also become a double image similar to FIG. 10.
 The R image and B image are transmitted to the PC 4 via the controller 3 and displayed on the monitor 5 by the control application 41. (The image displayed on the monitor 5 may instead be an image in which the shift between the R image and the B image has been corrected, or an image captured without passing through the pupil color division filter 14; for simply observing the subject, such a shift-free image is preferable.) Furthermore, the operation screen of the control application 41 is also displayed on the monitor 5.

 Then, suppose that, by operating the mouse or the like, the user gives the control application 41 an instruction to measure the height of the upper surface of the black box-like object 6b on which the white print “1” is formed.

 Consider measuring the shift amount between the R image and the B image at the horizontal line A shown in FIG. 10. In the following, the horizontal direction is referred to as the X direction and the vertical direction as the Y direction, as appropriate.
 To simplify the explanation, only [(32 pixels in the X direction) × (2 pixels in the Y direction, consisting of an R-Gr row and the next Gb-B row of the Bayer arrangement)] on line A are considered. Here, Gr denotes a G pixel located between R pixels in a horizontal line, and Gb denotes a G pixel located between B pixels in a horizontal line.

 FIG. 11 is a diagram showing the pixel values of the R image and B image of the white print “1” for the pixel array on line A, together with the interpolated pixel values and composite values.

 Although an actual imaging element 11 has physically insensitive regions between pixels (for example, regions covered with a light shielding film to suppress charge generation in the electric circuits), these are ignored here for simplicity, and the aperture ratio of each pixel is assumed to be 100%.

 As for pixel values, the pixel value when the R image covers the entire aperture (100%) of an R pixel is taken to be 100, the pixel value when no R image at all falls on the R pixel is taken to be 0, and intermediate values are proportional to the percentage of the R pixel's aperture on which the R image is formed.
 First, in the example shown in FIG. 11, the shift amount in the X direction between the R image and the B image of the white print “1” is 3.4 pixels, in units of the pixel pitch in the X direction. Accordingly, it is most desirable that the shift amount obtained as the detection result equal this actual shift amount of 3.4 pixels.

 Since the imaging element 11 has the Bayer arrangement described above, a G pixel (Gr or Gb pixel) is located between adjacent R pixels and between adjacent B pixels. The acquired R pixel values and B pixel values are therefore not only discrete, at every other pixel in the horizontal direction, but the R and B pixels are also offset from each other by one pixel in both the horizontal and vertical directions.

 Let the R pixel value at coordinate X be R(X) and the B pixel value be B(X); assume that the acquired pixel values are R(9) = R(11) = R(13) = R(15) = 100 with all other R(X) = 0, and B(12) = 60, B(14) = B(16) = B(18) = 100, B(20) = 40 with all other B(X) = 0.
 Here, if the obtained data are used as they are and the correlation value for each shift is computed by, for example, the zero-mean normalized cross-correlation (ZNCC) shown in Equation 1 below, the results shown in Table 1 below are obtained.

[Equation 1]

 ZNCC(d) = Σ_i (R(i) − R̄)(B(i + d) − B̄) / √( Σ_i (R(i) − R̄)² · Σ_i (B(i + d) − B̄)² )

 (The equation is supplied only as an image in the publication; the standard ZNCC definition, with R̄ and B̄ denoting the mean values of the R and B data within the correlation window and d the trial shift, is reproduced here.)
[Table 1]

  Image shift amount (units of 2 pixel pitches) | Correlation value
  0                                             | 0.34
  1                                             | 0.92
  2                                             | 0.80

 (The original table is supplied only as an image in the publication; the rows above list the correlation values that are quoted in the surrounding text.) Note that the image shift amount in the left column of Table 1 is in units of 2 pixels (more precisely, 2 pixel pitches), corresponding to the R pixels and B pixels each being arranged at every other pixel (every 2 pixels) in the horizontal direction.
 Although the normalized cross-correlation (ZNCC) is used here to obtain the correlation values for the shift amounts, the method is of course not limited to this; other operations such as SSD (Sum of Squared Differences) or SAD (Sum of Absolute Differences) may be used instead.
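
 As an illustration of this correlation search, here is a minimal Python sketch of ZNCC applied to the R and B samples given above. The exact correlation window and sample alignment used to produce Table 1 are not recoverable from the text, so the sketch illustrates the form of the computation rather than reproducing the table's values exactly; all names are illustrative.

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# R samples at odd X (1, 3, ..., 31) and B samples at even X (2, 4, ..., 32),
# using the pixel values given in the text.
r = [100 if x in (9, 11, 13, 15) else 0 for x in range(1, 32, 2)]
b = [{12: 60, 14: 100, 16: 100, 18: 100, 20: 40}.get(x, 0) for x in range(2, 33, 2)]

# Correlate R against B shifted by d sample positions (2-pixel-pitch units);
# the peak is expected near d = 1, as in Table 1.
for d in range(3):
    print(d, round(zncc(r[:len(r) - d], b[d:]), 2))
```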
 From Table 1, the correlation value 0.92 at an image shift of 1 is the highest, and the second highest is the correlation value 0.80 at an image shift of 2. The true shift amount is therefore estimated to lie between the image shifts 1 and 2.

 Therefore, to calculate the shift amount with higher accuracy, sub-pixel interpolation is performed using the obtained correlation values. As sub-pixel interpolation operations, equiangular line fitting, parabola fitting, and the like are known, as mentioned above; here, for example, equiangular line fitting is performed using the ratio of the obtained correlation values.
 In this case, from the ratio of the correlation values at image shifts of 0, 1, and 2,

 Interpolation value = (0.34 - 0.80) / (2 × (0.34 - 0.92)) = 0.4

and the shift amount is calculated to be 1.4 pixels, in units of the pixel pitch in the phase difference detection direction (hereinafter, even where simply "pixels" is written, the unit of shift amounts, pixel spacings, and the like is, more precisely, the pixel pitch).
 As described above, the R pixel spacing and the B pixel spacing used in the calculation are twice the actual pixel spacing on the imaging element 11 (counting each pixel regardless of color component), and the R pixels and B pixels are additionally offset from each other by one pixel in the horizontal direction. Correcting for these points gives

 Shift amount = 1.4 × 2 + 1 = 3.8 pixels.
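
 This interpolation and pitch correction can be written out in a few lines. The following sketch uses the standard three-point equiangular line fitting formula, which reproduces the 0.4 interpolation value computed above from the Table 1 correlations; the function name is illustrative.

```python
def equiangular_subpixel(c_prev, c_peak, c_next):
    """Sub-pixel offset of the correlation peak by equiangular line fitting;
    positive values mean the true peak lies toward c_next."""
    return (c_next - c_prev) / (2.0 * (c_peak - min(c_prev, c_next)))

# Correlation values at image shifts 0, 1, 2 (in 2-pixel-pitch units).
delta = equiangular_subpixel(0.34, 0.92, 0.80)  # ~0.4
shift_rb = 1 + delta                            # 1.4 in R/B sample units
shift_px = shift_rb * 2 + 1                     # x2 pixel pitch, +1 Bayer offset
print(round(delta, 2), round(shift_px, 1))      # 0.4 and 3.8 pixels
```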
 Since the actual shift amount is 3.4 pixels, as described above, the image shift amount of 3.8 pixels calculated when the obtained data are used as they are contains a detection error of 0.4 pixels.
Next, the case where the pixel interpolation calculation unit 35 performs an interpolation calculation to generate interpolated pixels will be described.
The pixel interpolation calculation unit 35 estimates, by interpolation, the R pixel value corresponding to each Gr pixel position in the R-Gr rows and the B pixel value corresponding to each Gb pixel position in the Gb-B rows. In this embodiment, to simplify the description, each R interpolated pixel value is calculated as the average of the two neighboring R pixel values, and each B interpolated pixel value as the average of the two neighboring B pixel values (see the corresponding part of FIG. 11).
The pixel shift detection unit 33 combines the original R pixel values with the R interpolated pixel values calculated by the pixel interpolation calculation unit 35 to generate an R interpolated color image (see the R pixel composite values in FIG. 11), and similarly combines the original B pixel values with the calculated B interpolated pixel values to generate a B interpolated color image (see the B pixel composite values in FIG. 11).
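A minimal sketch of this neighbor-averaging interpolation and compositing for a single row is shown below (illustrative names; the actual Bayer-array bookkeeping of the device is more involved).

    import numpy as np

    def interpolate_and_composite(samples):
        # 'samples' holds the R (or B) pixel values of one row, taken at every
        # other pixel position.  Each missing position is filled with the
        # average of its two neighbors, doubling the sampling density in the
        # shift detection direction.
        samples = np.asarray(samples, dtype=float)
        out = np.empty(2 * len(samples) - 1)
        out[0::2] = samples                              # original pixel values
        out[1::2] = 0.5 * (samples[:-1] + samples[1:])   # interpolated values
        return out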
Then, when the pixel shift detection unit 33 obtains the correlation value for each shift amount by the ZNCC of Equation 1 above, using the generated interpolated color images (the R interpolated color image and the B interpolated color image), the results shown in Table 2 below are obtained.
[Table 2]
    Image shift amount    ZNCC correlation value
            3                     0.94
            4                     0.97
            5                     0.86
The image shift amount in the left column of Table 2 is expressed in units of 1 pixel (more precisely, 1 pixel pitch), corresponding to the interpolated color images having a value at every pixel (every 1 pixel) in the horizontal direction.
From Table 2, the highest correlation value is 0.97, at an image shift amount of 4 pixels, and the second highest is 0.94, at an image shift amount of 3 pixels. The true shift amount is therefore estimated to lie between 3 and 4 pixels.
Therefore, the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation values in order to calculate the shift amount with higher accuracy. Here, as above, equiangular straight line fitting is performed using the ratios of the obtained correlation values.
Then, from the correlation values at image shift amounts of 3, 4, and 5 pixels:
    Interpolation value = (0.94 − 0.86) / (2 × (0.86 − 0.97)) = −0.36
so the shift amount is calculated to be (4 − 0.36) = 3.64 pixels.
The image shift amount of 3.64 pixels calculated using the interpolated color images contains a detection error of 0.24 pixels with respect to the actual shift amount of 3.4 pixels. That is, the shift amount could be calculated with higher accuracy than the detection error of 0.4 pixels obtained without interpolation (when the obtained data is used as it is). Moreover, this detection error of 0.24 pixels also satisfies the shift amount accuracy of 0.33 pixels or less described in the background art above.
In the above description, the interpolation in the pixel interpolation calculation unit 35 is performed using the average of the two pixels adjacent in the X direction, but a method of estimating pixel values from a larger number of nearby pixels in the X direction may be used, and the estimation is not limited to the X direction: pixel values in the vertical direction (Y direction) or in oblique directions may also be used. Furthermore, an estimation method based on a plurality of images, such as a super-resolution technique, may be used.
The distance calculation unit 34 converts the relative shift amount calculated by the pixel shift detection unit 33 into distance information in the height direction of the subject, using a proportional conversion equation obtained from the configuration of the imaging optical system 10 and the pupil color division filter 14.
Here, letting the subject height be z (mm) and the shift amount be x (pixels), the proportional conversion equation obtained from the configuration of the imaging optical system 10 and the pupil color division filter 14 is assumed in this embodiment to be, for example, the following Equation 2.
[Equation 2]
                z = 0.15x
Note that the subject height z is an example of the "information regarding subject distance".
In this case, substituting the shift amount x = 3.64 calculated by the pixel shift detection unit 33 gives a subject height z of 0.546 mm. Since the subject height z obtained from the actual shift amount x = 3.4 is 0.51 mm, the measurement height error of the distance measuring device 1 of this embodiment is 0.036 mm.
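As a worked check of Equation 2 (with the constant 0.15 as given above, and illustrative names):

    def subject_height_mm(shift_px, k=0.15):
        # Proportional conversion of Equation 2: z = 0.15 x.
        return k * shift_px

    z_measured = subject_height_mm(3.64)   # 0.546 mm
    z_true = subject_height_mm(3.4)        # 0.510 mm
    error = z_measured - z_true            # 0.036 mm measurement height error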
The subject height obtained in this way indicates the amount of deviation from the in-focus position; when the lower surface of the black box-shaped object 6b (that is, the surface of the white plate 6a shown in FIG. 8) is at the current in-focus position, it indicates the height of the black box-shaped object 6b.
If the control application 41 allows a plurality of measurement points to be set, then by calculating and displaying the differences between the set measurement points, the user can obtain height information for any desired points on the observed subject.
In this embodiment (and the following embodiments), the various calculation units and the like are described as hardware components within the controller 3, but the configuration is not limited to this; they may instead be implemented as software within the PC 4 or the like (in which case the controller 3 may be omitted, since the PC 4 can also serve the functions of the controller 3).
Furthermore, since an industrial microscope is assumed in the above description, an example was described in which the distance measuring device 1 comprises the lens barrel 2, the controller 3, the PC 4, and the monitor 5; however, when the distance measuring device 1 is, for example, a digital camera or the like, these may be integrated into one body. That is, when the distance measuring device 1 is a digital camera or the like, the lens barrel 2 corresponds to the camera's lens barrel and lens drive mechanism, the controller 3 and the PC 4 to the camera's CPU and image processing unit, and the monitor 5 to the liquid crystal monitor provided on the camera.
According to Embodiment 1 as described, the number of samples can be increased in the direction in which the relative shift amount is calculated. Specifically, the R pixels and B pixels used for shift amount detection can be increased in the horizontal direction, which is the shift amount calculation direction. This makes it possible to improve the detection accuracy of the shift amount and, as a result, the distance measurement accuracy with respect to the subject.
In addition, since the increase in the number of samples is achieved by pixel interpolation calculation, there is the advantage that no mechanism for moving the image sensor 11 (see Embodiment 2 below and the like) is required. Here, if the interpolation calculation is applied only to the necessary portions of the image data obtained from the image sensor 11, the calculation load can also be reduced. Moreover, when the interpolated pixel values are calculated as averages of adjacent pixel values, the calculation is simple and high-speed processing is possible.
Thus, according to the distance measuring device of this embodiment, when acquiring information regarding the subject distance from color images of a plurality of colors obtained by capturing pupil-color-divided light with a color image sensor, the distance measurement accuracy can be made higher.
[Embodiment 2]
FIGS. 12 and 13 show Embodiment 2 of the present invention: FIG. 12 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 13 is a diagram showing, for the pixel array on line A, the R image and B image pixel values of the white printed character "1", together with the shifted pixel values and the composite values.
In Embodiment 2, parts that are the same as in Embodiment 1 described above are given the same reference numerals and their description is omitted; mainly the differences will be described.
While Embodiment 1 obtained the interpolated pixels by calculation, this embodiment obtains the interpolated pixels by shifting the image sensor 11 and performing image capture a plurality of times.
That is, the distance measuring device 1 of this embodiment has a configuration in which, relative to the configuration of Embodiment 1 shown in FIG. 1, an element shift unit 17 and an element shift control unit 25 are added to the lens barrel 2, and the pixel interpolation calculation unit 35 is removed from the controller 3.
Here, the element shift unit 17 and the element shift control unit 25 constitute a pixel interpolation unit, namely an element moving unit that translates the image sensor 11 in the shift detection direction within a plane perpendicular to the optical axis of the imaging optical system 10.
The element shift unit 17 is for minutely moving the image sensor 11 at least in the direction in which the shift amount is detected (for example, the horizontal pixel array direction). Specifically, a mechanism capable of moving the image sensor 11 using a piezoelectric element or the like may be adopted as the element shift unit 17. When the distance measuring device 1 is a digital camera or the like equipped with a sensor-shift type image stabilization mechanism, that image stabilization mechanism may be used as the element shift unit 17.
The element shift control unit 25 controls the driving of the element shift unit 17.
In this embodiment as well, the subject conditions are the same as in Embodiment 1 described above: the subject has the shape shown in FIGS. 8, 9, and so on, the resulting RB double image is as shown in FIG. 10, and the actual shift amount between the R image and the B image on line A is 3.4 pixels.
Next, the operation when measuring information regarding the subject distance in this embodiment will be described.
When the user inputs a measurement start instruction via the control application 41, the system controller 31 performs overall control of each control system involved in image capture by the distance measuring device 1, first causing the image sensor 11 to capture an original image.
The original image thus captured is temporarily stored in the memory 32.
When the capture of the original image is completed, the system controller 31 next transmits to the element shift control unit 25 a command to shift the image sensor 11.
On receiving this command, the element shift control unit 25 generates a drive signal and transmits it to the element shift unit 17.
On receiving this drive signal, the element shift unit 17 shifts the image sensor 11 to the right by one pixel (more precisely, by one horizontal pixel pitch) in the direction in which the shift amount is detected, here the X direction of the pixel array. As a result of this shift, for example, the R pixel that was at X coordinate 1 moves to X coordinate 2, and the other pixels likewise move to positions whose X coordinates are increased by 1.
When the one-pixel right shift is completed in this way, the system controller 31 performs overall control of each control system involved in image capture by the distance measuring device 1, causing the image sensor 11 to capture a shifted image.
The shifted image thus captured is also temporarily stored in the memory 32.
The pixel shift detection unit 33 combines the R pixel values of the original image stored in the memory 32 with the R pixel values of the shifted image to generate an R interpolated color image (see the R pixel composite values in FIG. 13), and similarly combines the B pixel values of the original image with the B pixel values of the shifted image to generate a B interpolated color image (see the B pixel composite values in FIG. 13).
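A minimal sketch of this compositing for one row, assuming the one-pixel-pitch right shift described above so that the shifted exposure samples the positions between the original samples (names are illustrative, and which sequence leads depends on the actual shift direction):

    import numpy as np

    def composite_shifted(original, shifted):
        # 'original' and 'shifted' hold the R (or B) samples of the same row,
        # each taken at every other pixel; interleaving the two doubles the
        # sampling density with measured (rather than interpolated) values.
        out = np.empty(len(original) + len(shifted))
        out[0::2] = original
        out[1::2] = shifted
        return out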
Then, when the pixel shift detection unit 33 obtains the correlation value for each shift amount by the ZNCC of Equation 1 above, using the generated interpolated color images (the R interpolated color image and the B interpolated color image), the results shown in Table 3 below are obtained.
[Table 3]
    Image shift amount    ZNCC correlation value
            2                     0.74
            3                     0.97
            4                     0.92
The image shift amount in the left column of Table 3 is expressed in units of 1 pixel (more precisely, 1 pixel pitch), corresponding to the interpolated color images having a value at every pixel (every 1 pixel) in the horizontal direction.
From Table 3, the highest correlation value is 0.97, at an image shift amount of 3 pixels, and the second highest is 0.92, at an image shift amount of 4 pixels. The true shift amount is therefore estimated to lie between 3 and 4 pixels.
Therefore, the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation values in order to calculate the shift amount with higher accuracy. Here, as above, equiangular straight line fitting is performed using the ratios of the obtained correlation values.
Then, from the correlation values at image shift amounts of 2, 3, and 4 pixels:
    Interpolation value = (0.74 − 0.92) / (2 × (0.74 − 0.97)) = 0.39
so the shift amount is calculated to be (3 + 0.39) = 3.39 pixels.
The image shift amount of 3.39 pixels calculated using the interpolated color images obtained by shifting the image sensor 11 contains a detection error of only 0.01 pixels with respect to the actual shift amount of 3.4 pixels. That is, the shift amount could be calculated far more accurately than with the detection error of 0.4 pixels obtained when the acquired data is used as it is (the case shown in Table 1), and also more accurately than with the detection error of 0.24 pixels obtained using the interpolated color images of Embodiment 1 (the case shown in Table 2). This detection error of 0.01 pixels also satisfies the shift amount accuracy of 0.33 pixels or less described in the background art above.
The distance calculation unit 34 converts the shift amount calculated by the pixel shift detection unit 33 into distance information in the height direction of the subject, using the proportional conversion equation obtained from the configuration of the imaging optical system 10 and the pupil color division filter 14.
In this embodiment as well, assuming the proportional conversion equation of Equation 2 above, substituting the shift amount x = 3.39 calculated by the pixel shift detection unit 33 gives a subject height z of 0.509 mm. Compared with the subject height z = 0.51 mm obtained from the actual shift amount x = 3.4, the measurement height error of the distance measuring device 1 of this embodiment is 0.001 mm.
In the above description, two still images are acquired with a shift of one pixel, but, for example, three still images may be acquired with shifts of 2/3 pixel, and still more still images may be acquired.
Also, since the pixels of the original image and the pixels of the interpolation image need not be arranged at equal intervals, the shift amount may be chosen as appropriate, provided that the shifted image does not coincide completely with the original image (for example, by not making the shift amount a multiple of 2 pixel pitches).
Further, although image acquisition is described above in the manner of still image capture, one or more shifted images may instead be acquired sequentially after the original image, in the manner of moving image capture, while the image sensor 11 is moved (shifted) continuously.
In addition, although this embodiment has been described as obtaining the interpolated pixels by shifting the image sensor 11 and performing image capture a plurality of times, the imaging optical system 10 may instead be shifted to perform image capture a plurality of times. In that case, the element shift unit 17 and the element shift control unit 25 shift the imaging optical system 10.
Alternatively, both the imaging optical system 10 and the image sensor 11 may be shifted to perform image capture a plurality of times. In that case, the element shift unit 17 and the element shift control unit 25 shift the imaging optical system 10 and the image sensor 11.
According to Embodiment 2 as described, substantially the same effects as in Embodiment 1 above are obtained; in addition, the image sensor 11 is translated by the element shift unit 17 and the element shift control unit 25 serving as the element moving unit, a plurality of images differing in position in the shift detection direction are acquired from the image sensor 11, and the pixel shift detection unit 33 generates the interpolated color images by combining, for each color, the color images of the same color obtained by color-separating the acquired images. As a result, actually measured values more accurate than pixel values obtained by interpolation calculation can be obtained, so information regarding the subject distance can be acquired with higher accuracy.
[Embodiment 3]
FIGS. 14 and 15 show Embodiment 3 of the present invention: FIG. 14 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 15 is a diagram for explaining the optical conditions of the imaging optical system 10.
In Embodiment 3, parts that are the same as in Embodiments 1 and 2 described above are given the same reference numerals and their description is omitted; mainly the differences will be described.
In this embodiment, as in Embodiment 2 described above, the interpolated pixels are obtained by shifting the image sensor 11 and performing image capture a plurality of times, but the shift amount of the image sensor 11 is set based on the optical conditions of the imaging optical system 10.
That is, the distance measuring device 1 of this embodiment has a configuration in which an element shift amount calculation unit 36 is added to the controller 3, relative to the configuration of Embodiment 2 shown in FIG. 12.
The element shift amount calculation unit 36 is a movement amount calculation unit that calculates, based on the optical conditions of the imaging optical system 10, the translation amount (shift amount) of the image sensor 11 between the acquisition of two images.
Accordingly, the element shift unit 17 and the element shift control unit 25 serving as the element moving unit translate the image sensor 11 by the translation amount calculated by the element shift amount calculation unit 36.
The imaging optical system 10 has the optical conditions shown schematically in FIG. 15. In FIG. 15, the optical axis of the imaging optical system 10 is denoted by O.
D denotes the aperture diameter of the diaphragm 13 in the imaging optical system 10 (so that D/2 is the radius from the optical axis O to the edge of the diaphragm 13). LG denotes the distance (centroid distance) from the optical axis O to the centroid position of, for example, the RG filter 14r in the pupil color division filter 14; the distance (centroid distance) from the optical axis O to the centroid position of the GB filter 14b is likewise LG.
In this embodiment as well, the RG filter 14r and the GB filter 14b are each semicircular filters as shown in FIG. 3 (in FIG. 15, the upper side transmits R and the lower side transmits B). Since the centroid of a semicircular region of radius D/2 lies at a distance of 4 × (D/2)/(3π) from its center,
            LG = 4 × (D/2) / (3π) = (2D) / (3π).
The centroid distance LG therefore depends on the size of the aperture diameter D.
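For reference, the value 4 × (D/2)/(3π) is the standard centroid of a half-disc, obtained from the first-moment integral (a textbook result, sketched here; not stated explicitly in the original):

    \bar{y} = \frac{\int_0^{R} y \cdot 2\sqrt{R^2 - y^2}\,\mathrm{d}y}{\pi R^2 / 2}
            = \frac{2R^3/3}{\pi R^2/2}
            = \frac{4R}{3\pi},
    \qquad R = \frac{D}{2} \;\Rightarrow\; LG = \frac{2D}{3\pi}.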
Further, let f be the focal length of the imaging optical system 10. The focal length f is a quantity that changes as the zoom lens 12 moves.
Letting θ be the angle subtended by the aperture radius D/2 of the diaphragm 13 as seen from the focal position, the numerical aperture NA of the imaging optical system 10 is expressed as
                NA = sin θ.
As can be seen from FIG. 15, this numerical aperture NA is, more precisely, the object-side NA, and depends on the size of the aperture diameter D of the diaphragm 13 (as the aperture diameter D decreases the NA decreases, and as the aperture diameter D increases the NA increases).
Further, let Z be the product of the distance from the focal position to the imaging surface 11a of the image sensor 11 and the optical magnification.
The shift amount between the R subject image and the B subject image on the imaging surface 11a of the image sensor 11 (located at an optical distance Z from the focal position) is the distance between the image of the centroid position of the RG filter 14r and the image of the centroid position of the GB filter 14b on the imaging surface 11a, denoted X in the figure (so that X/2 is the distance from the optical axis O to the image of the centroid position of the RG filter 14r, or of the GB filter 14b).
From the geometrical relationship shown in FIG. 15, the proportional relationship of the following Equation 3 is obtained.
[Equation 3]
                (X/2) : Z = LG : f
When the distance measuring device 1 is used and, for example, the observation magnification is changed, the zoom lens 12 is controlled and the diaphragm 13 is adjusted at the same time in an actual imaging optical system 10. Changing the observation magnification therefore changes the optical conditions of the imaging optical system 10, such as the focal length f, the centroid distance LG, and the NA value.
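A minimal sketch of Equation 3 solved for the image shift X, under the idealized geometry of FIG. 15, is shown below; the numerical values are illustrative assumptions, not taken from the embodiment.

    import math

    def image_shift_x(Z, LG, f):
        # Equation 3: (X/2) : Z = LG : f, hence X = 2 * Z * LG / f.
        return 2.0 * Z * LG / f

    D = 8.0                          # hypothetical aperture diameter (mm)
    LG = 2.0 * D / (3.0 * math.pi)   # centroid distance for semicircular filters
    X = image_shift_x(Z=0.5, LG=LG, f=50.0)   # R-B image shift on the sensor (mm)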
In practice, however, since the optical lens system constituting the imaging optical system 10 is generally configured by combining a plurality of lenses to obtain desired characteristics, it is difficult to express the shift amount X between the R subject image and the B subject image at an arbitrary zoom magnification with a simple formula.
On the other hand, the resolution required when measuring the height of a subject (specimen) using, for example, an industrial microscope (this resolution being the resolution in the optical axis direction) may be determined, for example, as in (1) or (2) below:
(1) a fixed resolution (for example, a resolution of 0.05 mm or better) is obtained at every observation magnification; or
(2) a resolution referenced to the depth of focus (DOF) determined from the optical conditions of the set observation magnification (for example, 1/2 DOF, or more generally (k × DOF) with k a predetermined coefficient) is obtained.
The element shift amount calculation unit 36 takes the optical conditions described above (condition parameters and optical parameters) as input values and obtains the shift amount of the image sensor 11 required to achieve a resolution of the necessary accuracy, as in (1), (2), and so on, in the height direction of the subject 6.
As the method by which the element shift amount calculation unit 36 obtains the shift amount, an arithmetic calculation taking the above parameters as inputs may be used, or a look-up table (LUT) for those parameters may be created in advance and stored in the element shift amount calculation unit 36, with the LUT being referenced at the time of distance measurement.
For example, when the LUT is created by adopting the resolution criterion of (2), the procedure is as follows.
First, based on the configuration of the imaging optical system 10, one set of optical conditions, such as the zoom magnification, numerical aperture NA, and aperture diameter D, is determined.
The depth of focus (DOF) is then determined based on these optical conditions.
Next, the resolution (k × DOF) required in (2) is determined based on this depth of focus (DOF).
Further, the shift amount detection accuracy necessary to achieve the resolution (k × DOF) is obtained, and a value no greater than the obtained shift amount is set as the shift amount of the image sensor 11 (for example, the obtained shift amount multiplied by a predetermined value of 1 or less; if this predetermined value is made too small, accuracy far beyond what is required would be demanded, so a value of 1 or close to 1 is preferable).
This procedure is repeated while changing the optical conditions of the imaging optical system 10, for example the zoom magnification.
In this way, an LUT of shift amounts of the image sensor 11 corresponding to the various optical conditions that the imaging optical system 10 can take is created.
In determining the actual shift amount of the image sensor 11, limits are imposed in accordance not only with the optical conditions of the imaging optical system 10 but also with the pixel pitch of the image sensor 11 (since it is not practical to shift the image sensor 11 with an accuracy finer than the pixel pitch) and with various other conditions of the imaging system as a whole (the entire imaging system including the imaging optical system 10, the pupil color division filter 14, the image sensor 11, and so on).
An example of the shift amount of the image sensor 11 determined in this way is shown in Table 4 below.
[Table 4]
    Optical magnification    Shift amount
          0.7×                 1 pixel
          1.0×                2/3 pixel
          2.0×                1/2 pixel
Here, the "pixel" used as the unit of the shift amount is, more precisely, the pixel pitch in the shift direction, as described above.
Accordingly, for example, when the optical magnification is 0.7×, after the original image is captured, the R pixels and B pixels are shifted by one pixel (for example, to the right; the R pixels and B pixels then occupy the positions the G pixels occupied when the original image was captured), the shifted image is captured in this state, the original image and the shifted image are combined to generate the interpolated color images, and the relative shift amount is calculated based on the generated interpolated color images.
When the optical magnification is 1.0×, after the original image is captured, the sensor is shifted by 2/3 pixel (for example, to the right) and a shifted image is captured in this state; it is then shifted by a further 2/3 pixel (for example, to the right; at this point it has been shifted by 4/3 pixels from the original image capture position) and another shifted image is captured in this state. The original image and the two shifted images are combined to generate the interpolated color images, and the shift amount is calculated based on the generated interpolated color images.
Furthermore, when the optical magnification is 2.0×, after the original image is captured, the sensor is shifted by 1/2 pixel (for example, to the right) and a first shifted image is captured in this state; it is shifted by a further 1/2 pixel (a shift of one pixel from the original image capture position) and a second shifted image is captured in this state; and it is then shifted by another 1/2 pixel (a shift of 3/2 pixels from the original image capture position) and a third shifted image is captured in this state. The four images, that is, the original image and the first to third shifted images, are combined to generate the interpolated color images, and the relative shift amount is calculated based on the generated interpolated color images.
The element shift amount calculation unit 36 then refers to an LUT such as that shown in Table 4 and determines the shift amount, for example, from the 0.7× optical magnification entry when the current optical magnification of the imaging optical system 10 is less than 0.85×, from the 1.0× entry when the optical magnification is 0.85× or more and less than 1.5×, and from the 2.0× entry when the optical magnification is 1.5× or more.
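A minimal sketch of this LUT lookup, with the thresholds and entries taken from Table 4 and the paragraph above (names are illustrative):

    def shift_amount_for_magnification(optical_magnification):
        # Returns the sensor shift amount, in units of the pixel pitch in the
        # shift direction, according to the Table 4 LUT of this embodiment.
        if optical_magnification < 0.85:
            return 1.0        # 0.7x entry: 1 pixel
        if optical_magnification < 1.5:
            return 2.0 / 3.0  # 1.0x entry: 2/3 pixel
        return 0.5            # 2.0x entry: 1/2 pixel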
According to Embodiment 3 as described, substantially the same effects as in Embodiment 2 above are obtained; in addition, since the amount by which the image sensor 11 is shifted is determined based on the optical conditions of the imaging optical system 10, the shifted images for generating the interpolated color images can be acquired efficiently and with appropriate accuracy, neither falling short of nor exceeding the required accuracy.
When the shift amount of the image sensor 11 in the plane perpendicular to the optical axis is calculated based on the depth of focus, a quantity in the optical axis direction, multiplied by a predetermined coefficient, the configuration is well suited to, for example, an industrial microscope.
Moreover, if the LUT-reference method is adopted, there is no need to perform the calculation each time the optical conditions of the imaging optical system 10 are changed, which reduces the processing load and improves responsiveness.
[Embodiment 4]
FIGS. 16 and 17 show Embodiment 4 of the present invention: FIG. 16 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 17 is a diagram showing the user designating the measurement position on the subject on the screen of the monitor 5.
In Embodiment 4, parts that are the same as in Embodiments 1 to 3 described above are given the same reference numerals and their description is omitted; mainly the differences will be described.
This embodiment has basically the same configuration as Embodiment 2 described above, but it allows the user to designate which part of the subject is to be the target of distance measurement, and allows only the part of the image sensor 11 corresponding to the distance measurement target to be read out.
Accordingly, the image sensor 11 in this embodiment is of a type from which arbitrary pixels can be read out, a specific example being a CMOS sensor (a CCD sensor, which in principle cannot perform partial readout, is not adopted in this embodiment).
The distance measuring device 1 of this embodiment has a configuration in which a readout area calculation unit 37 is added to the controller 3, relative to the configuration of Embodiment 2 shown in FIG. 12.
The readout area calculation unit 37 is a pixel area setting unit that, when a shifted image is captured and pixel information is read out from the image sensor 11, calculates and sets the pixel area in which the pixel shift detection unit 33 detects the shift amount.
The image sensor control unit 21 then drives the image sensor 11 so that, for the shifted image, only the pixel values in the pixel area calculated by the readout area calculation unit 37 are read out.
Next, the operation when measuring information regarding the subject distance in this embodiment will be described.
When the user inputs a measurement start instruction via the control application 41, the system controller 31 performs overall control of each control system involved in image capture by the distance measuring device 1, first causing the image sensor 11 to capture an original image.
The original image thus captured is temporarily stored in the memory 32.
When the capture of the original image is completed, the captured original image is transmitted to the PC 4 via the controller 3 and displayed on the screen 5a of the monitor 5 by the control application 41. At this time, a message such as "Please specify the part to be measured" may also be displayed on the screen 5a of the monitor 5.
Then, as shown in FIG. 17, the user operates an input device such as a mouse to move the pointer 5p on the screen 5a and performs a confirmation operation on the part to be measured (in the illustrated example, approximately the central part of the white printed character "1" on the black box-shaped object 6b), whereby one point (measurement point) on the measurement target part is designated. The designated measurement point is transmitted from the control application 41 to the readout area calculation unit 37.
On receiving the measurement point information, the readout area calculation unit 37 calculates the pixel area to be read out from the image sensor 11 during the subsequent shift capture.
As a concrete example of this pixel area calculation, the R pixels and B pixels contained within a fixed range to the left and right of the measurement point may be taken as the area, on the line containing the measurement point and on the line adjacent to it (with this way of selecting, one of the lines contains the R pixels and the other contains the B pixels).
A more specific example will be described.
First, assume that the image sensor 11 is a sensor with a pixel array of 4000 pixels in the X direction and 3000 pixels in the Y direction, and that the coordinates of the pixels on the image sensor 11 are set with the upper left corner at (X:Y) = (1:1) and the lower right corner at (X:Y) = (4000:3000).
In this coordinate system, suppose the measurement point designated by the user is (2000:1500), and that this measurement point is a B pixel in a Gb-B row.
The readout area calculation unit 37 then selects, as the line containing R pixels, either line 1499 or line 1501, adjacent to the line containing the measurement point. Here, for example, line 1499 is selected.
Furthermore, the readout area calculation unit 37 selects, in line 1500, a range of, for example, 32 pixels centered on the measurement point (specifically, the range of X coordinates 1985 to 2016), and further selects the pixel range in line 1499 having the same X coordinates as the selected range (a range of 32 pixels in this example). The pixel range selected here corresponds to line A described in Embodiment 2; that is, line A is set so as to include the measurement point designated by the user.
As a result of this processing, the R pixels and B pixels contained in the pixel range selected by the readout area calculation unit 37 are as follows.
      R pixel          B pixel
    (1985:1499)      (1986:1500)
    (1987:1499)      (1988:1500)
         ...              ...
    (2015:1499)      (2016:1500)
The readout area calculation unit 37 thus sets the R pixels and B pixels contained in the selected pixel range as the pixel area, and transmits the information on the set pixel area to the image sensor control unit 21.
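A minimal sketch of this pixel area calculation for the example above (a hypothetical helper; it assumes, as in the example, a B measurement point on a Gb-B row with the R pixels on the adjacent row above, R pixels at odd X coordinates and B pixels at even X coordinates):

    def readout_area(measure_x, measure_y, half_range=16):
        # Returns the (X:Y) coordinates of the R and B pixels in a 32-pixel
        # range centered on the measurement point (1-indexed coordinates).
        x0 = measure_x - half_range + 1              # 1985 when measure_x = 2000
        xs = range(x0, x0 + 2 * half_range)          # X coordinates 1985..2016
        r_pixels = [(x, measure_y - 1) for x in xs if x % 2 == 1]  # odd X -> R
        b_pixels = [(x, measure_y) for x in xs if x % 2 == 0]      # even X -> B
        return r_pixels, b_pixels

    r_px, b_px = readout_area(2000, 1500)
    # r_px: (1985:1499), (1987:1499), ..., (2015:1499)  -- 16 R pixels
    # b_px: (1986:1500), (1988:1500), ..., (2016:1500)  -- 16 B pixels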
The image sensor control unit 21 then generates readout addresses for the image sensor 11 so that only the received pixel area, that is, a total of 32 pixels consisting of 16 R pixels and 16 B pixels, is read out, and controls the readout of the image sensor 11 accordingly.
For the shifted images, therefore, only the pixels necessary for detecting the shift amount are read out from the image sensor 11 while the image sensor 11 is shifted as described above. The pixel values read out in this way are stored in the memory 32, as described above.
The subsequent calculation of the shift amount and so on is the same as in Embodiment 2 described above.
In the above description, to simplify the explanation, the R pixels and B pixels in a 32-pixel range in the X direction centered on the designated measurement point are read out unconditionally; however, after the original image is captured, only an area better suited to the correlation calculation may be set as the pixel area to be read out, based on the state of the subject in the original image.
For example, edge detection of the R pixels and B pixels in the vicinity of the measurement point may be performed on the line containing the designated measurement point in the original image and on the line adjacent to it, and the R pixels and B pixels within an area containing the detected edges may be set as the pixel area.
Alternatively, the user may be asked to designate two measurement points in the horizontal direction (more generally, in the direction in which the pupil is divided by the pupil color division filter 14), and the R pixels and B pixels on the line segment connecting the two designated points may be set as the pixel area.
While in the above description only the R pixels and B pixels in the selected pixel range are read out, so as to make the readout time as short as possible, if a somewhat longer readout time (for example, about twice as long) is acceptable, the entire selected pixel range (that is, a pixel range that also includes the Gr pixels and Gb pixels not used in the shift amount calculation) may be set as the pixel area to be read out. In this case, there is the advantage that readout control of the image sensor 11 becomes somewhat easier.
According to Embodiment 4 as described, even faster calculation processing becomes possible while maintaining the same high-accuracy subject height measurement function as in Embodiment 2 above.
[Embodiment 5]
FIGS. 18 to 20 show Embodiment 5 of the present invention: FIG. 18 is a block diagram showing the configuration of the distance measuring device 1; FIG. 19 shows the R-component subject image IMGr and the B-component subject image IMGb within the subject image 6i of the original image formed on the imaging surface 11a of the image sensor 11, together with a partial enlargement of line A; and FIG. 20 shows the R-component subject image IMGr and the B-component subject image IMGb when the subject image 6i of the shifted image has become tilted and slightly larger relative to the original image, together with a partial enlargement of line A.
In Embodiment 5, parts that are the same as in Embodiments 1 to 4 described above are given the same reference numerals and their description is omitted; mainly the differences will be described.
This embodiment has basically the same configuration as Embodiment 2 described above, but it determines whether relative movement of the subject 6 with respect to the image sensor 11 (excluding the movement due to the shift itself) has occurred between before and after shifting the image sensor 11; when such movement has occurred, accurate distance measurement is not possible, so unnecessary calculation processing is not performed.
That is, the distance measuring device 1 of this embodiment has a configuration in which a shift image comparison operation unit 38 is added to the controller 3, relative to the configuration of Embodiment 2 shown in FIG. 12.
The shift image comparison operation unit 38 is an image comparison unit that compares, at least in the pixel area in which the pixel shift detection unit 33 detects the shift amount, the consistency between the original image acquired before the image sensor 11 is shifted and the pixel-shifted image acquired after the image sensor 11 is shifted.
Specifically, FIG. 19 shows the subject image 6i of the original image formed on the imaging surface 11a of the image sensor 11; on line A, a shift amount as illustrated arises between the image portion AR of the R-component subject image IMGr and the image portion AB of the B-component subject image IMGb.
On the other hand, FIG. 20 shows the subject image 6i of the shifted image (an image obtained by shifting the image sensor 11 in the X direction by, for example, one pixel after capturing the original image) formed on the imaging surface 11a of the image sensor 11. As illustrated, the subject image 6i has rotated somewhat clockwise in the XY plane relative to the original image of FIG. 19, and has become slightly larger. The slight increase in the size of the subject image 6i is considered to be because the subject 6 has moved somewhat in the direction approaching the lens barrel 2 (upward in the height direction).
Because the subject image 6i has thus rotated and grown larger, the shift amount arising on line A between the image portion AR of the R-component subject image IMGr and the image portion AB of the B-component subject image IMGb is larger than in the original image of FIG. 19.
If the original image and the shifted image captured before and after such relative movement of the subject 6 with respect to the image sensor 11 (excluding the movement due to the shift) are combined to generate interpolated color images, and the shift amount is calculated based on those interpolated color images, an accurate shift amount cannot be obtained; that is, accurate distance measurement cannot be performed.
Therefore, when the shift image comparison operation unit 38 compares the original image with the shifted image and detects an image discrepancy that makes them unsuitable for generating the interpolated color images, the generation of the interpolated color images, and the generation of the information regarding the subject distance by the pixel shift detection unit 33 and the distance calculation unit 34 based on the interpolated color images, are aborted.
Next, the operation when measuring information regarding the subject distance in this embodiment will be described.
After the original image is captured, the pixel shift detection unit 33 calculates the shift amount on line A based on the R color image and B color image of the original image stored in the memory 32.
 次に、上述した実施形態2において説明したように撮像素子11をシフトさせてから、シフト画像を撮影する。 Next, after shifting the imaging device 11 as described in the second embodiment described above, a shift image is taken.
 そして、メモリ32に格納されたシフト画像に係るR色画像とB色画像に基づき、画素ずれ検出部33によりラインA上におけるずれ量を演算する。 Then, based on the R-color image and the B-color image related to the shift image stored in the memory 32, the pixel shift detection unit 33 calculates the shift amount on the line A.
 The shift image comparison operation unit 38 compares the shift amount obtained from the original image with the shift amount obtained from the shift image, and determines whether the difference between the two is within the range of -1 pixel to +1 pixel.
 When the difference obtained by the shift image comparison operation unit 38 is not less than -1 pixel and not more than +1 pixel, the system controller 31 determines that no inadvertent movement has occurred, and causes the pixel shift detection unit 33 to generate an interpolated color image by combining the original image and the shift image and to detect the shift amount with high precision based on the interpolated color image. Furthermore, the system controller 31 causes the distance calculation unit 34 to generate information on the subject distance based on the detected shift amount.
 On the other hand, when the difference obtained by the shift image comparison operation unit 38 is smaller than -1 pixel or larger than +1 pixel, the system controller 31 determines that an inadvertent movement has occurred and notifies the user to that effect via the control application 41.
 The system controller 31 then cancels the generation of the interpolated color image by the pixel shift detection unit 33 and the detection of the shift amount based on the interpolated color image. Consequently, generation by the distance calculation unit 34 of information on the subject distance based on the shift amount obtained from the interpolated color image is also automatically canceled.
 Furthermore, instead of the high-precision processing based on the interpolated color image, the system controller 31 causes the pixel shift detection unit 33 and the distance calculation unit 34 to perform their processing based on the original image alone, thereby generating information on the subject distance.
 When notifying the user that an inadvertent movement has occurred, information on the subject distance obtained from the original image alone and information on the subject distance obtained from the shift image alone may also be reported together as reference values. In this case, the system controller 31 further causes the pixel shift detection unit 33 and the distance calculation unit 34 to perform their processing based on the shift image alone, thereby generating information on the subject distance.
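 The decision logic described above can be summarized as follows. This is a minimal sketch under the assumption that the shift amounts on line A have already been estimated from each image; the function name and threshold constant are illustrative, not taken from the patent:

```python
TOLERANCE_PX = 1.0  # ±1 pixel, reflecting that G pixels are not
                    # used for distance detection in this embodiment

def choose_processing_mode(shift_from_original, shift_from_shifted):
    """Return the processing path chosen by the consistency check.

    shift_from_original / shift_from_shifted: R-B shift amounts
    (in pixels) estimated on line A from the original image and
    from the pixel shift image, respectively.
    """
    difference = shift_from_shifted - shift_from_original
    if -TOLERANCE_PX <= difference <= TOLERANCE_PX:
        # No inadvertent movement: combine both exposures into an
        # interpolated color image and detect the shift precisely.
        return "high-precision (interpolated color image)"
    # Inadvertent movement: cancel interpolation, notify the user,
    # and fall back to the original image alone (optionally also
    # reporting a shift-image-only result as a reference value).
    return "fallback (original image only)"
```

 For example, choose_processing_mode(2.0, 2.4) selects the high-precision path, while choose_processing_mode(2.0, 3.5) selects the fallback path with user notification.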
 In the above description, whether the difference is within the range of -1 pixel to +1 pixel is determined in consideration of the fact that the G pixels in the pixel array are not used for distance detection; however, the determination condition is not limited to this. Other determination conditions that take subject conditions and the like into account may be adopted, and the determination condition may be variably controlled according to the subject conditions.
 Also, in the above description, whether an inadvertent movement occurred is determined from the shift amount calculation results at the same coordinates in the original image and the shift image; however, the determination is not limited to this technique, and other known techniques, such as a comparison method that detects the movement vector of the subject by image matching or the like, may be used as appropriate.
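 As an example of such an image-matching alternative, a movement vector between the two exposures can be estimated by block matching. The sketch below is a generic sum-of-absolute-differences search, given only as an illustration of a known technique, not as the patent's method:

```python
import numpy as np

def estimate_motion_vector(original, shifted, block, search=4):
    """Estimate the (dy, dx) movement of one block between two frames
    by minimizing the sum of absolute differences (SAD).

    original, shifted: 2-D grayscale arrays of equal shape.
    block: (y0, y1, x0, x1) region of `original` to track.
    """
    y0, y1, x0, x1 = block
    template = original[y0:y1, x0:x1].astype(np.float64)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip candidate windows that fall outside the frame.
            if (y0 + dy < 0 or x0 + dx < 0 or
                    y1 + dy > shifted.shape[0] or
                    x1 + dx > shifted.shape[1]):
                continue
            candidate = shifted[y0 + dy:y1 + dy,
                                x0 + dx:x1 + dx].astype(np.float64)
            sad = np.abs(candidate - template).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```

 A detected vector exceeding the intentional one-pixel shift would likewise indicate inadvertent movement of the subject 6 relative to the imaging element 11.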
 According to the fifth embodiment as described above, substantially the same effects as those of the second embodiment are obtained. In addition, since it is determined whether relative movement of the subject 6 with respect to the imaging element 11 occurred inadvertently, and, when it is determined that an inadvertent movement occurred, combination of the images concerned is canceled and the subsequent processing based on the interpolated color image is also canceled, an increase in wasteful load due to unnecessary arithmetic processing is suppressed, and the user is prevented from being given erroneous information on the subject distance.
 In addition, since the user is notified when it is determined that an inadvertent movement has occurred, the user can be given an opportunity to perform a precise measurement again.
 Of course, the configurations of the third to fifth embodiments described above, which are based on the second embodiment, may be combined as appropriate.
 Furthermore, although the above description mainly concerns a distance measurement apparatus, the present invention may also take the form of a control method for controlling the distance measurement apparatus as described above, a control program for causing a computer to control the distance measurement apparatus as described above, a non-transitory computer-readable recording medium that records the control program, and the like.
 The present invention is not limited to the embodiments described above as they are, and in the implementation stage the constituent elements can be modified and embodied without departing from the gist of the invention. In addition, various aspects of the invention can be formed by appropriate combinations of the plurality of constituent elements disclosed in the above embodiments. For example, some constituent elements may be deleted from all the constituent elements shown in an embodiment. Furthermore, constituent elements in different embodiments may be combined as appropriate. It goes without saying that various modifications and applications are possible within a scope not departing from the gist of the invention.
 This application is filed claiming priority based on Japanese Patent Application No. 2012-120044, filed in Japan on May 25, 2012, the disclosure of which is incorporated by reference into the specification, claims, and drawings of the present application.

Claims (8)

  1.  A distance measurement apparatus comprising:
     an imaging optical system that forms a subject image;
     a pupil color division optical system that is disposed on the optical path of the imaging optical system and color-divides the pupil of the imaging optical system by providing a plurality of partial pupils of the imaging optical system with mutually different spectral characteristics;
     a color imaging element that photoelectrically converts the subject image formed by the imaging optical system via the pupil color division optical system and outputs an image in which a plurality of pixels are arranged;
     a pixel interpolation unit that generates interpolation pixels for the pixels arranged in a plurality of color images relating to different partial pupils, among a plurality of color images obtained by color-separating the image output from the imaging element; and
     a distance information generation unit that generates a plurality of interpolated color images by combining the plurality of color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation unit, detects a relative shift amount of the subject image in the plurality of interpolated color images, and generates information on a subject distance based on the shift amount,
     wherein the pixel interpolation unit generates the interpolation pixels at least in a shift detection direction, which is the direction in which the shift amount is detected.
  2.  The distance measurement apparatus according to claim 1, wherein the pixel interpolation unit comprises an element moving unit that translates at least one of the imaging optical system and the imaging element in the shift detection direction within a plane perpendicular to the optical axis of the imaging optical system, and generates the interpolation pixels by translating at least one of the imaging optical system and the imaging element with the element moving unit and acquiring from the imaging element a plurality of images whose positions differ in the shift detection direction,
     and wherein the distance information generation unit generates the interpolated color images by combining, among the plurality of color images obtained by color-separating the plurality of acquired images, the color images relating to the same color.
  3.  The distance measurement apparatus according to claim 2, further comprising a movement amount calculation unit that calculates, based on optical conditions of the imaging optical system, the translation amount of at least one of the imaging optical system and the imaging element between acquisitions of the plurality of images,
     wherein the element moving unit translates at least one of the imaging optical system and the imaging element by the translation amount calculated by the movement amount calculation unit.
  4.  The distance measurement apparatus according to claim 3, wherein the optical conditions of the imaging optical system used by the movement amount calculation unit to calculate the translation amount include a focal depth of the imaging optical system.
  5.  The distance measurement apparatus according to claim 4, wherein the movement amount calculation unit calculates the translation amount of at least one of the imaging optical system and the imaging element based on a value obtained by multiplying the focal depth by a predetermined coefficient.
  6.  The distance measurement apparatus according to claim 2, further comprising a pixel area setting unit that sets a pixel area in which the distance information generation unit detects the shift amount,
     wherein, after at least one of the imaging optical system and the imaging element is translated, only the pixel area set by the pixel area setting unit is read out from the imaging element.
  7.  The distance measurement apparatus according to claim 2, further comprising an image comparison unit that compares, at least in the pixel area in which the distance information generation unit detects the shift amount, an image acquired before at least one of the imaging optical system and the imaging element is translated with a shift image, which is an image acquired after the translation,
     wherein, when the image comparison unit compares the image with the shift image and detects an image deviation that makes it inappropriate to generate the interpolated color images, generation of the information on the subject distance by the distance information generation unit based on the interpolated color images is canceled.
  8.  The distance measurement apparatus according to claim 1, wherein the pixel interpolation unit comprises a pixel interpolation calculation unit that generates the interpolation pixels by performing an interpolation calculation on a color image obtained by color-separating a single image.
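 To illustrate the kind of interpolation calculation referred to in claim 8, one simple possibility is linear interpolation of a sparsely sampled color plane along the shift detection direction. The sketch below assumes a plane whose valid samples occupy even columns, with odd columns missing; it is a minimal illustration, not necessarily the interpolation the patent performs:

```python
import numpy as np

def interpolate_along_x(color_plane):
    """Fill the missing samples of one color plane along the shift
    detection direction (here, x) by linear interpolation.

    color_plane: 2-D array whose even columns hold valid samples and
    whose odd columns are missing (e.g., NaN), as in a color mosaic.
    """
    out = color_plane.copy()
    # Each interior missing column becomes the average of its two
    # valid neighbors; boundary columns lacking two neighbors are
    # left unchanged.
    out[:, 1:-1:2] = 0.5 * (color_plane[:, 0:-2:2] +
                            color_plane[:, 2::2])
    return out
```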
PCT/JP2013/054021 2012-05-25 2013-02-19 Distance measurement apparatus WO2013175816A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012120044A JP2013246052A (en) 2012-05-25 2012-05-25 Distance measuring apparatus
JP2012-120044 2012-05-25

Publications (1)

Publication Number Publication Date
WO2013175816A1 (en)

Family

ID=49623521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/054021 WO2013175816A1 (en) 2012-05-25 2013-02-19 Distance measurement apparatus

Country Status (2)

Country Link
JP (1) JP2013246052A (en)
WO (1) WO2013175816A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10914960B2 (en) 2016-11-11 2021-02-09 Kabushiki Kaisha Toshiba Imaging apparatus and automatic control system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101566619B1 (en) * 2014-06-03 2015-11-09 중앙대학교 산학협력단 Apparatus and method for estimating distance using dual off-axis color filter aperture
JP2016102733A (en) 2014-11-28 2016-06-02 株式会社東芝 Lens and image capturing device
WO2018193544A1 (en) * 2017-04-19 2018-10-25 オリンパス株式会社 Image capturing device and endoscope device
JP6818702B2 (en) 2018-01-15 2021-01-20 株式会社東芝 Optical inspection equipment and optical inspection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001174696A (en) * 1999-12-15 2001-06-29 Olympus Optical Co Ltd Color image pickup unit
JP2006285094A (en) * 2005-04-04 2006-10-19 Nikon Corp Auto-focus camera and auto-focus device
JP2010139665A (en) * 2008-12-10 2010-06-24 Canon Inc Focus detecting device and control method for the same
JP2010210810A (en) * 2009-03-09 2010-09-24 Olympus Imaging Corp Focus detector
JP2012054867A (en) * 2010-09-03 2012-03-15 Olympus Imaging Corp Imaging apparatus
JP2012063456A (en) * 2010-09-14 2012-03-29 Olympus Corp Imaging apparatus
JP2012068761A (en) * 2010-09-21 2012-04-05 Toshiba Digital Media Engineering Corp Image processing device

Also Published As

Publication number Publication date
JP2013246052A (en) 2013-12-09

Similar Documents

Publication Publication Date Title
US11099459B2 (en) Focus adjustment device and method capable of executing automatic focus detection, and imaging optical system storing information on aberrations thereof
US9247227B2 (en) Correction of the stereoscopic effect of multiple images for stereoscope view
JP4699995B2 (en) Compound eye imaging apparatus and imaging method
JP6173156B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US9742982B2 (en) Image capturing apparatus and method for controlling image capturing apparatus
JP5947601B2 (en) FOCUS DETECTION DEVICE, ITS CONTROL METHOD, AND IMAGING DEVICE
US10122911B2 (en) Image pickup apparatus, control method, and non-transitory computer-readable storage medium with aberration and object information acquisition for correcting automatic focus detection
WO2013027504A1 (en) Imaging device
JP2010271670A (en) Imaging apparatus
WO2013175816A1 (en) Distance measurement apparatus
JP5882789B2 (en) Image processing apparatus, image processing method, and program
JP2016061609A (en) Distance measuring device, imaging apparatus, and distance measuring method
JP5784395B2 (en) Imaging device
WO2013005489A1 (en) Image capture device and image processing device
JP6357646B2 (en) Imaging device
JP5378283B2 (en) Imaging apparatus and control method thereof
JP2013097154A (en) Distance measurement device, imaging apparatus, and distance measurement method
JP6326631B2 (en) Imaging device
JP5786355B2 (en) Defocus amount detection device and electronic camera
JP2014215436A (en) Image-capturing device, and control method and control program therefor
JP6012396B2 (en) Image processing apparatus, image processing method, and program.
JP6370004B2 (en) Imaging apparatus and imaging method
KR20170015158A (en) Control apparatus, image pickup apparatus, and control method
WO2013133115A1 (en) Defocus amount detection device and camera
JP6331279B2 (en) Imaging apparatus, imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13793759

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13793759

Country of ref document: EP

Kind code of ref document: A1