WO2016042721A1 - Positional shift amount calculation apparatus and imaging apparatus - Google Patents

Positional shift amount calculation apparatus and imaging apparatus Download PDF

Info

Publication number
WO2016042721A1
Authority
WO
WIPO (PCT)
Prior art keywords
shift amount
positional shift
size
image
optical system
Prior art date
Application number
PCT/JP2015/004474
Other languages
French (fr)
Inventor
Kazuya Nobayashi
Original Assignee
Canon Kabushiki Kaisha
Priority date
Filing date
Publication date
Priority claimed from JP2015155151A (JP6642998B2)
Application filed by Canon Kabushiki Kaisha
Priority to US15/508,625 (US10339665B2)
Priority to EP15842093.5A (EP3194886A4)
Publication of WO2016042721A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C 3/02 Details
    • G01C 3/06 Use of electric means to obtain final indication
    • G01C 3/08 Use of electric radiation detectors
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28 Systems for automatic generation of focusing signals
    • G02B 7/34 Systems for automatic generation of focusing signals using different areas in a pupil plane
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 13/00 Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B 13/32 Means for focusing
    • G03B 13/34 Power focusing
    • G03B 13/36 Autofocus systems
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 35/00 Stereoscopic photography
    • G03B 35/02 Stereoscopic photography by sequential recording
    • G03B 35/06 Stereoscopic photography by sequential recording with axial movement of lens or gate between exposures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/571 Depth or shape recovery from multiple images from focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/579 Depth or shape recovery from multiple images from motion

Definitions

  • the present invention relates to a technique to calculate the positional shift amount between images.
  • a known depth measuring apparatus measures depth by calculating a positional shift amount (also called “parallax”), which is a relative positional shift amount between two images having different points of view (hereafter called “image A” and “image B”).
  • To calculate the positional shift amount between image A and image B, an area-based corresponding points search technique called “template matching” is often used.
  • In template matching, either image A or image B is set as a base image, and the other image which is not the base image is set as a reference image.
  • A base area around a target point (also called “base window”) is set on the base image, and a reference area around a reference point corresponding to the target point (also called “reference window”) is set on the reference image.
  • the base area and the reference area are collectively called “matching windows”.
  • a reference point at which the similarity of an image in the base area and an image in the reference area is highest (correlation thereof is highest) is searched for while sequentially moving the reference point, and the positional shift amount is calculated using the relative positional shift amount between the target point and the reference point.
  • Generally, a calculation error occurs in the positional shift amount due to a local mathematical operation if the size of the base area is small, hence a relatively large area size is used (an illustrative sketch of this corresponding-point search is given after this list).
  • the depth (distance) to an object can be calculated by converting the positional shift amount into a defocus amount or into an object depth using a conversion coefficient. This allows measuring the depth at high-speed and at high accuracy, since it is unnecessary to move the lens to measure the depth.
  • the depth measurement accuracy improves by accurately determining the positional shift amount.
  • Factors that cause an error in the positional shift amount are: changes of the positional shift amount in each pixel of the base area; and noise generated in the process of acquiring image data.
  • To minimize the influence of the changes of the positional shift amount in the base area, the base area must be small. If the base area is small, however, a calculation error in the positional shift amount may be generated by the influence of noise or because of the existence of similar image patterns.
  • In Patent Literature 1, the positional shift amount is calculated for each scanning line (e.g. horizontal line), and the positional shift amount at the adjacent scanning line is calculated based on the calculated positional shift amount data.
  • In this case, a method of setting a base area independently for each pixel, so that a boundary where the calculated positional shift amount changes is not included, has been proposed.
  • In Patent Literature 2, a method of decreasing the size of the base area in steps and gradually limiting the search range to search for a corresponding point is proposed.
  • However, a problem of the positional shift amount calculation method disclosed in Patent Literature 1 is that the memory amount and computation amount required for calculating the positional shift amount are large. This is because the positional shift amount of a spatially adjacent area is calculated and evaluated in advance to determine the size of the base area. Furthermore, when the positional shift amount changes continuously, the base area becomes small since the base area is set in a range where the positional shift amount is approximately the same, and a calculation error may occur in the positional shift amount. In other words, the depth may be miscalculated depending on how the object depth changes.
  • a problem of the positional shift amount calculating method disclosed in Patent Literature 2 is that the computation amount is large since a plurality of base areas is set at each pixel position, and the correlation degree is evaluated.
  • A first aspect of the present invention is a positional shift amount calculation apparatus that calculates a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the apparatus having: a calculation unit adapted to calculate the positional shift amount based on data within a predetermined area out of first image data representing the first image and second image data representing the second image; and a setting unit adapted to set a relative size of the area to the first and second image data, and in this positional shift amount calculation apparatus, the calculation unit is adapted to calculate a first positional shift amount using the first image data and the second image data in the area having a first size which is preset, the setting unit is adapted to set a second size of the area based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system, and the calculation unit is adapted to calculate a second positional shift amount using the first image data and the second image data in the area having the second size.
  • a second aspect of the present invention is a positional shift amount calculation method for a positional shift amount calculation apparatus to calculate a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the method having: a first calculation step of calculating a first positional shift amount based on data within an area having a predetermined first size, out of first image data representing the first image and second image data representing the second image; a setting step of setting a second size, which is a relative size of the area to the first and second image data, based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system; and a second calculation step of calculating a second positional shift amount using the first image data and the second image data in the area having the second size.
  • the positional shift amount can be calculated at high accuracy by an easy operation. Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
  • Fig. 1A and Fig. 1B are diagrams depicting a configuration of a digital camera that includes a depth calculation apparatus.
  • Fig. 2 is a diagram depicting a luminous flux that the digital camera receives.
  • Fig. 3 is a flow chart depicting a depth calculation procedure according to the first embodiment.
  • Fig. 4A to Fig. 4D are diagrams depicting a positional shift amount calculation method and a factor that generates a positional shift amount error.
  • Fig. 5A and Fig. 5B are diagrams depicting a base line length.
  • Fig. 6A and Fig. 6B are diagrams depicting a depth calculation unit according to the first embodiment.
  • Fig. 6C to Fig. 6F are diagrams depicting a size of the second base area according to the first positional shift amount.
  • Fig. 7A to Fig. 7D are diagrams depicting the reason why the positional shift amount can be accurately calculated in the first embodiment.
  • Fig. 8A and Fig. 8B are diagrams depicting the depth calculation unit according to a modification.
  • Fig. 8C to Fig. 8E are diagrams depicting the depth calculation unit according to a modification.
  • Fig. 9 is a flow chart depicting a general operation of the digital camera.
  • Fig. 10 is a diagram depicting a configuration of the digital camera according to the modification.
  • Fig. 11 is a flow chart depicting an example of the positional shift amount calculation procedure according to the first embodiment.
  • In the following description, a digital camera is described as an example of an imaging apparatus that includes a depth calculation apparatus (positional shift calculation apparatus), but application of the present invention is not limited to this.
  • the positional shift amount calculation apparatus of the present invention can be applied to a digital depth measuring instrument.
  • FIG. 1A is a diagram depicting a configuration of a digital camera 100 that includes a depth measurement apparatus.
  • an imaging optical system 120, an imaging element 101, a depth calculation unit 102, an image storage unit 104, an image generation unit (not illustrated), a lens driving control unit (not illustrated), and a control unit (not illustrated) are disposed inside a camera casing 130.
  • the imaging optical system 120, the imaging element 101, the depth calculation unit 102, and the image storage unit 104 constitute a depth calculation apparatus 110.
  • the depth calculation unit 102 can be constructed using a logic circuit.
  • the depth calculation unit 102 may be constituted by a central processing unit (CPU) and a memory storing arithmetic processing programs.
  • the depth calculation unit 102 corresponds to the positional shift amount calculation apparatus according to the present invention.
  • the imaging optical system 120 is a photographing lens of the digital camera 100, and has a function to form an image of the object on the imaging element 101, which is an imaging surface.
  • the imaging optical system 120 is constituted by a plurality of lens groups (not illustrated) and a diaphragm (not illustrated), and has an exit pupil 103 at a position apart from the imaging element 101 by a predetermined distance.
  • the reference number 140 in Fig. 1A denotes an optical axis of the imaging optical system 120, and in this description, the optical axis is assumed to be parallel with the z axis.
  • the x axis and the y axis are assumed to be orthogonal to each other, and are orthogonal to the optical axis.
  • Fig. 9 is a flow chart depicting an operation flow after the main power of the digital camera 100 is turned ON and the shutter button (not illustrated) is half depressed.
  • the control unit reads the information on the imaging optical system 120 (e.g. focal length, diaphragm value), and stores the information in the memory unit (not illustrated). Then the control unit executes the processing in steps S902, S903 and S904 to adjust the focal point.
  • In step S902, the depth calculation unit 102 calculates a defocus amount using the depth calculation procedure shown in Fig. 3.
  • In step S903, the control unit determines whether the imaging optical system 120 is in the focused state based on the calculated defocus amount. If not focused, the control unit drives the imaging optical system 120 to the focused position based on the defocus amount using the lens driving control unit, and then processing returns to step S902. If it is determined in step S903 that the imaging optical system 120 is in the focused state, the control unit determines in step S905 whether the shutter was released (fully depressed) by the operation of the shutter button (not illustrated). If not released, processing returns to step S902, and the above mentioned processing is repeated.
  • If the shutter was released, the control unit reads image data from the imaging element 101, and stores the image data in the image storage unit 104.
  • the image generation unit performs development processing on the image data stored in the image storage unit 104, whereby a final image can be generated. Further, an object depth image (object depth distribution) corresponding to the final image can be generated by applying the depth calculation procedure, which will be described later with reference to Fig. 3, to the image data stored in the image storage unit 104.
  • the imaging element 101 is constituted by a CMOS (Complementary Metal-Oxide Semiconductor) or a CCD (Charge-Coupled Device).
  • the object image is formed on the imaging element 101 via the imaging optical system 120, and the imaging element 101 performs photoelectric conversion on the received luminous flux, and generates image data based on the object image.
  • the imaging element 101 according to this embodiment will now be described in detail with reference to Fig. 1B.
  • Fig. 1B is an xy cross-sectional view of the imaging element 101.
  • the imaging element 101 has a configuration where a plurality of pixel groups (2 rows × 2 columns) is arranged.
  • Each pixel group 150 is constituted by green pixels 150G1 and 150G2, which are disposed in diagonal positions, and a red pixel 150R and a blue pixel 150B, which are the other two pixels.
  • Each pixel includes a first photoelectric conversion unit 161 and a second photoelectric conversion unit 162, which are disposed in the light receiving layer of the pixel (the light receiving layer 203 in Fig. 2).
  • the luminous flux received by the first photoelectric conversion unit 161 and the second photoelectric conversion unit 162 in the imaging element 101 will be described with reference to Fig. 2.
  • Fig. 2 is a schematic diagram depicting only the exit pupil 103 of the imaging optical system 120 and the green pixel 150G1 as an example representing the pixels disposed in the imaging element 101.
  • the pixel 150G1 shown in Fig. 2 is constituted by a color filter 201, a micro lens 202 and a light receiving layer 203, and the first photoelectric conversion unit 161 and the second photoelectric conversion unit 162 are included in the light receiving layer 203.
  • the micro lens 202 is disposed such that the exit pupil 103 and the light receiving layer 203 are in a conjugate relationship.
  • The luminous flux 210 that has passed through a first pupil area (261) of the exit pupil enters the first photoelectric conversion unit 161, and the luminous flux that has passed through a second pupil area (262) of the exit pupil enters the second photoelectric conversion unit 162.
  • the plurality of first photoelectric conversion units 161 disposed in each pixel performs photoelectric conversion on the received luminous flux and generates the first image data.
  • the plurality of second photoelectric conversion units 162 disposed in each pixel performs photoelectric conversion on the received luminous flux and generates the second image data.
  • the intensity distribution of the first image (image A), which the luminous flux that has mainly passed through the first pupil area forms on the imaging element 101 can be acquired.
  • the intensity distribution of the second image (image B) which the luminous flux that has mainly passed through the second pupil area forms on the imaging element 101, can be acquired. Therefore the relative positional shift amount of the first image and the second image is the positional shift amount of the image A and image B.
  • In step S1, the imaging element 101 acquires the first image data and the second image data, and transfers the acquired data to the depth calculation unit 102.
  • In step S2, light quantity balance correction processing is performed to correct the balance of the light quantity between the first image data and the second image data.
  • For the light quantity balance correction, a known method can be used. For example, a coefficient to correct the light quantity balance between the first image data and the second image data is calculated based on an image acquired by photographing a uniform surface light source in advance using the digital camera 100 (a sketch of one such correction is given after this list).
  • In step S3, the depth calculation unit 102 calculates the positional shift amount based on the first image data and the second image data.
  • the calculation method for the positional shift amount will be described later with reference to Fig. 4A to Fig. 4D and Fig. 6A to Fig. 6F.
  • In step S4, the depth calculation unit 102 converts the positional shift amount into an image-side defocus amount using a predetermined conversion coefficient.
  • the image-side defocus amount is a distance from an estimated focal position (imaging element surface) to the focal position of the imaging optical system 120.
  • Fig. 5A shows a light receiving sensitivity incident angle characteristic of each pixel.
  • the abscissa indicates the incident angle of the light that enters the pixel (angle formed by the ray projected to the xz plane and the z axis), and the ordinate indicates the light receiving sensitivity.
  • the solid line 501 indicates the light receiving sensitivity of the first photoelectric conversion unit, and the broken line 502 indicates the light receiving sensitivity of the second photoelectric conversion unit.
  • Fig. 5B shows the light receiving sensitivity distribution on the exit pupil 103 when this receiving sensitivity is projected onto the exit pupil 103.
  • Fig. 5B 511 indicates a center of gravity position of the light receiving sensitivity distribution of the first photoelectric conversion unit, and 512 indicates a center of gravity position of the light receiving sensitivity distribution of the second photoelectric conversion unit.
  • the distance 513 between the center of gravity position 511 and the center of gravity position 512 is called “base line length”, and is used as the conversion coefficient to convert the positional shift amount into the image side defocus amount.
  • the positional shift amount is converted into the image-side defocus amount using Expression 1, but the positional shift amount may be converted into the image side defocus amount by a different method. For example, based on the assumption that the base line length w is sufficiently larger than the positional shift amount r in Expression 1, a gain value Gain may be calculated using Expression 2, and the positional shift amount may be converted into the image side defocus amount based on Expression 3.
  • the positional shift amount can be easily converted into the image side defocus amount, and the computation amount to calculate the object depth can be reduced.
  • a lookup table for conversion may be used to convert the positional shift amount into the image side defocus amount. In this case as well, the computation amount to calculate the object depth can be reduced.
  • x is positive in the first pupil area, and x is negative in the second pupil area.
  • the actual light that reaches the light receiving layer 203 has a certain spread due to the light diffraction phenomenon, and therefore the first pupil area and the second pupil area overlap, as shown in the light receiving sensitivity distribution in Fig. 5B.
  • the first pupil area 261 and the second pupil area 262 are assumed to be clearly separated in the description of this embodiment.
  • In step S5, the image side defocus amount calculated in step S4 is converted into the object depth based on the image forming relationship of the imaging optical system (object depth calculation processing). Conversion into the object depth may be performed by a different method. For example, the image side defocus amount is converted into the object-side defocus amount, and the sum of the object-side defocus amount and the object-side focal position, which is calculated based on the focal length of the imaging optical system 120, is calculated, whereby the depth to the object is obtained (an illustrative sketch of such a conversion appears after this list).
  • the object-side defocus amount can be calculated using the image-side defocus amount and the longitudinal magnification of the imaging optical system 120.
  • the positional shift amount is converted into the image-side defocus amount in step S4, and then the image-side defocus amount is converted into the object depth in step S5.
  • the processing executed after calculating the positional shift amount may be other than the above mentioned processing.
  • the image-side defocus amount and the object-side defocus amount, or the image-side defocus amount and the object depth can be converted into each other using the image forming relationship of the imaging optical system 120. Therefore the positional shift amount may be directly converted into the object-side defocus amount or the object depth, without being converted into the image-side defocus amount. In either case, the defocus amount (image-side and/or object-side) and the object depth can be accurately calculated by accurately calculating the positional shift amount.
  • the image-side defocus amount is converted into the object depth in step S5, but step S5 need not always be executed, and the depth calculation procedure may complete in step S4.
  • the image-side defocus amount may be the final output.
  • the blur amount of the object in the final image depends on the image-side defocus amount, and as the image-side defocus amount of the object becomes greater, a more blurred image is photographed.
  • the image-side defocus amount can be converted into/from the object side defocus amount or the positional shift amount, hence the final output may be the object-side defocus amount or the positional shift amount.
  • Fig. 4A is a diagram depicting the calculation method for the positional shift amount, where the first image data 401, second image data 402 and photographing object 400 are shown.
  • a target point 410 is set for the first image data 401, and a base area 420 is set centering around the target point 410.
  • On the second image data 402, a reference point 411 is set at a position corresponding to the target point 410, and a reference area 421 is set centering around the reference point 411.
  • The sizes of the base area 420 and the reference area 421 are the same.
  • The positional shift amount searching range is determined based on the maximum depth and the minimum depth to be calculated. For example, the maximum depth is set to infinity, the minimum depth is set to the minimum photographing depth of the imaging optical system 120, and the range between the maximum positional shift amount and the minimum positional shift amount, which are determined by the maximum depth and the minimum depth respectively, is set as the positional shift amount searching range.
  • the positional shift amount is a relative positional shift amount between the target point 410 and the corresponding point.
  • the positional shift amount at each data position (each pixel position) in the first image data can be calculated.
  • For evaluating the correlation degree, a known method can be used, such as the SSD method, where the sum of squares of the differences between each pixel data (each pixel value) in the base area 420 and the corresponding pixel data in the reference area 421 is used as an evaluation value.
  • In Fig. 4B, the abscissa indicates the positional shift amount, and the ordinate indicates the correlation degree evaluation value based on SSD.
  • a curve that indicates the correlation degree evaluation value for each positional shift amount is hereafter called “correlation value curve”.
  • the solid line in Fig. 4B indicates the correlation value curve when the contrast of the object image is high, and the broken line indicates the correlation value curve when the contrast of the object image is low.
  • the correlation value curve has a minimum value 430.
  • The positional shift amount at which the correlation degree evaluation value takes the minimum value is regarded as the positional shift amount of highest correlation, that is, it is determined as the positional shift amount.
  • Contrast does not deteriorate very much in an area near the focal position of the imaging optical system 120, hence a high contrast object image can be acquired near the focal position.
  • Away from the focal position, however, the contrast drops, and the contrast of the acquired image also decreases. If the defocus amount is plotted on the abscissa and the positional shift amount error is plotted on the ordinate, as shown in Fig. 4C, the positional shift amount error increases as the defocus amount increases.
  • In some cases, for example when similar image patterns exist, a bimodal correlation value curve having two minimum values is acquired, as shown in Fig. 4D.
  • In this case, the positional shift amount is calculated based on the smaller of the two minimum values, which means that the positional shift amount may be miscalculated.
  • Fig. 6A is a diagram depicting a detailed configuration of the depth calculation unit 102, and Fig. 6B is a flow chart depicting the positional shift amount calculation procedure.
  • the depth calculation unit 102 is constituted by a positional shift amount calculation unit 602, a base area setting unit 603, and a depth conversion unit 604.
  • the positional shift amount calculation unit 602 calculates the positional shift amount of the first image data and the second image data stored in the image storage unit 104 using a base area having a predetermined size, or a base area having a size set by the base area setting unit 603.
  • the base area setting unit 603 receives the positional shift amount (first positional shift amount) from the positional shift amount calculation unit 602, and outputs the size of the base area corresponding to this positional shift amount to the positional shift amount calculation unit 602.
  • the first image data and the second image data, on which light quantity balance correction has been performed as described with reference to step S2 in Fig. 3, are stored in the image storage unit 104.
  • In step S3-1 in Fig. 6B, the positional shift amount calculation unit 602 calculates the first positional shift amount based on the first image data and the second image data acquired from the image storage unit 104.
  • the first positional shift amount is calculated by the corresponding point search method described with reference to Fig. 4A to Fig. 4D, using the base area (first base area) having a size which is set in advance (first size).
  • In step S3-2, the base area setting unit 603 sets the size of the second base area (second size) based on the first positional shift amount.
  • A second base area whose size is larger than that of the first base area is set when the absolute value of the first positional shift amount acquired by the positional shift amount calculation unit 602 exceeds a predetermined threshold.
  • Fig. 6C shows the relationship between the absolute value of the first positional shift amount and the size of the second base area; the abscissa indicates the absolute value of the first positional shift amount, and the ordinate indicates the size of the second base area.
  • the broken line 620 parallel with the abscissa indicates the area size of the first base area (first size). If the absolute value of the first positional shift amount is greater than the threshold 610, the size of the second base area (second size) becomes larger than the area size of the first base area (first size).
  • In step S3-3 in Fig. 6B, the positional shift amount calculation unit 602 searches for a corresponding point again using the second base area, and calculates the second positional shift amount. According to this embodiment, if the absolute value of the first positional shift amount is the predetermined threshold or less, the positional shift amount calculation unit 602 does not recalculate the positional shift amount, and regards the first positional shift amount as the second positional shift amount.
  • the positional shift amount calculation procedure S3 completes.
  • the depth conversion unit 604 converts the second positional shift amount into the object depth by the method described in step S4 and S5 in Fig. 3, and outputs the object depth information.
  • Fig. 7A is a diagram depicting acquisition of the images of an object 701 and an object 702 in the digital camera 100.
  • the object 701 is disposed in a focal position of the imaging optical system 120, and the blur size 711 on the imaging element 101 is small.
  • the object 702, on the other hand, is disposed in a position distant from the focal position of the imaging optical system 120, and the blur size 712 on the imaging element 101 is larger. If the object 701 and the object 702 have a brightness distribution shown in Fig. 7B, then the image of the object 701 becomes like Fig. 7C (with little blur), and the image of the object 702 becomes like Fig. 7D (with considerable blur).
  • If the defocus amount is large, as in the case of the object 702 (Fig. 7D), the acquired image is considerably blurred, hence the positional shift amount changes gently. Further, the acquired object image has low contrast. Therefore, in order to reduce an error of the positional shift amount, it is preferable to set a large base area. If the defocus amount is small, as in the case of the object 701 (Fig. 7C), the acquired image is not blurred very much, hence the positional shift amount may change sharply. Further, the acquired object image has high contrast; therefore, in order to reduce an error in the positional shift amount, it is preferable to set a small base area.
  • If the absolute value of the first positional shift amount is large, the depth calculation unit 102 of this embodiment sets a large base area (second base area), and calculates the positional shift amount again. In other words, if the contrast of the image acquired via the imaging optical system 120 is low and the changes of the positional shift amount are gentle, a large base area is set.
  • In short, as the absolute value of the first positional shift amount becomes larger, the depth calculation unit 102 of this embodiment sets a larger second base area. This makes it unnecessary to calculate the positional shift amount of spatially adjacent pixels, and both reducing the influence of the changes of the positional shift amount and reducing the influence of noise generated upon acquiring image signals can be achieved by a simple operation. Furthermore, the base area is set according to the optical characteristic of the imaging optical system 120, hence the dependency of the depth calculation on the way the object depth changes can be reduced, and the object depth can be accurately measured.
  • the depth calculation unit 102 of this embodiment sets a larger size for the second base area when the absolute value of the first positional shift amount is greater than a predetermined threshold 610, as shown in Fig. 6C, but the size of the second base area may be determined by a different method. For example, as shown in Fig. 6D, the size of the second base area may be determined using a linear function of the absolute value of the first positional shift amount. Or, as a more standard approach, the size of the second base area may be determined, not based on the absolute value of the first positional shift amount, but based on the first positional shift amount itself.
  • the inclination in the graph may be changed depending on whether the first positional shift amount is 0 or more (solid line 630) or the first positional shift amount is smaller than 0 (broken line 640), as shown in Fig. 6E.
  • As the relationship between the defocus amount and the error of the positional shift amount shown in Fig. 4C indicates, the error of the positional shift amount increases quadratically as the defocus amount increases.
  • the second area size may be set as an increasing function so that the error of the defocus amount is confined within a predetermined target value, as shown in Fig. 6E.
  • the changes of the positional shift amount in the base area and the influence of the noise generated upon acquiring image signals can be reduced by setting the size of the second base area considering the optical characteristic of the imaging optical system 120.
  • the depth calculation unit 102 of this embodiment need not calculate the first positional shift amount by setting the target point 410 for all the pixel positions in the first image data.
  • the depth calculation unit 102 may calculate the first positional shift amount by sequentially moving the target point 410 by a predetermined space.
  • For example, the first positional shift amount is calculated at intervals of ten pixels in the horizontal direction and the vertical direction, and the two-dimensional distribution of the first positional shift amount is expanded by a known expansion method (e.g. bilinear interpolation, nearest neighbor interpolation) and is referred to in order to set the second base area.
  • In this embodiment, the changes of the positional shift amount in the base area and the influence of noise generated upon acquiring the image signals are reduced by setting the size of the second base area considering the optical characteristic of the imaging optical system 120. Therefore it is only required that the ratio of the area of the first image data covered by the base area can be changed in order to set the size of the base area. If the size of the base area is enlarged to increase the number of pixels included in the base area, the influence of the noise generated upon acquiring the image signals can be reduced. The influence of this noise can also be suppressed by reducing (thinning out) the image data while keeping the number of pixels included in the base area constant.
  • the positional shift amount calculation unit 602 shown in Fig. 6A uses the positional shift amount calculation procedure shown in Fig. 11.
  • In step S3-6, the sizes of the first image and the second image are changed using a reduction ratio set in accordance with the size of the second base area. For example, if the size of the second base area is double the size of the first base area (double in the horizontal direction and the vertical direction respectively, that is, four times in terms of area), 0.5 is set as the reduction ratio (0.5 times in the horizontal and vertical directions respectively, that is, 1/4 in terms of the number of pixels). A sketch of this image-reduction variant is given after this list.
  • For changing the image sizes, a known method, such as the bilinear method, can be used.
  • In step S3-3, by using the first image (first reduced image) and the second image (second reduced image) after the size change, the second positional shift amount is calculated based on the second base area, in which the number of pixels included in the base area is the same as in the first base area.
  • the reduction ratio is set in accordance with the size of the second base area, but the base area setting unit 603 may output the reduction ratio of the image to the positional shift amount calculation unit 602 in accordance with the positional shift amount (first positional shift amount) received from the positional shift amount calculation unit 602.
  • both the size of the base area and the size of the image data may be changed.
  • The processing executed by the base area setting unit 603 is not limited to a specific manner, as long as the relative sizes of the base areas to the first image data and the second image data are changed.
  • the influence of noise can be reduced if the relative sizes of the base areas to the first and second image data upon calculating the second positional shift amount are larger than the relative sizes upon calculating the first positional shift amount.
  • the configuration shown in Fig. 8A may be used as a modification of the depth calculation unit 102 of this embodiment.
  • the depth calculation unit 102 in Fig. 8A includes a PSF size storage unit 804 in addition to the above mentioned configuration.
  • In the PSF size storage unit 804, a size of the point spread function (hereafter called “PSF”) of the imaging optical system 120 is stored so as to correspond to the first positional shift amount.
  • the general flow of the depth calculation procedure of this modification is the same as above, but in the step of setting the size of the second base area in step S3-2 in Fig. 6B, the size of the second base area is set based on the PSF size outputted from the PSF size storage unit 804.
  • the base area setting unit 603 acquires the first positional shift amount from the positional shift amount calculation unit 602. Then the base area setting unit 603 acquires the PSF size corresponding to the first positional shift amount from the PSF size storage unit 804.
  • the base area setting unit 603 sets the size of the second base area in accordance with the PSF size acquired from the PSF size storage unit 804.
  • the area size of the second base area can be more appropriately set by setting the size of the second base area in accordance with the PSF size based on the defocus of the imaging optical system 120.
  • the area size of the second base area is not set too large (or too small), and an increase of computation amount and positional shift amount error can be prevented.
  • The blur size of the imaging optical system 120 can be expressed by 3σ (three times the standard deviation σ) of the PSF, for example. Therefore the PSF size storage unit 804 outputs 3σ of the PSF of the imaging optical system 120 as the size of the PSF. It is sufficient if the PSF size storage unit 804 stores the PSF size only for the central angle of view, but it is preferable to store the PSF size of the peripheral angle of view as well if the aberration of the imaging optical system 120 at the peripheral angle of view is large.
  • Instead of storing the PSF size itself, the PSF size may be expressed as a function representing the relationship between the PSF size and the positional shift amount, so that only the coefficients of the function are stored in the PSF size storage unit 804.
  • For example, the PSF size may be calculated using a linear function in which the reciprocal of the diaphragm value (F value) of the imaging optical system is a coefficient, as shown in Expression 4, and the coefficients k1 and k2 may be stored in the PSF size storage unit 804 (a reconstruction of Expressions 4 and 5 is sketched after this list).
  • In Expression 4, PSFsize is the PSF size, r is the first positional shift amount, F is the F value of the imaging optical system 120, and k1 and k2 are predetermined coefficients.
  • Or, the PSF size may be determined as shown in Expression 5, considering that the ratio of the base line length (distance 513) described with reference to Fig. 5B to the diameter of the exit pupil 103 of the imaging optical system 120 is approximately the same as the ratio of the absolute value of the first positional shift amount to the PSF size.
  • In Expression 5, w is the base line length, D is the diameter of the exit pupil, and k1 and k2 are predetermined coefficients.
  • the size of PSF and the defocus amount have an approximate proportional relationship. If it is considered that the defocus amount and the positional shift amount have an approximate proportional relationship, as shown in Expression 3, the coefficient k2 in Expression 4 and Expression 5 is not essentially required.
  • the base area setting unit 603 sets the size of the second base area in accordance with the PSF size acquired from the PSF size storage unit 804.
  • Fig. 8B is a graph whose abscissa indicates the PSF size and whose ordinate indicates the size of the second base area. As the solid line in Fig. 8B indicates, a size of the second base area that matches the blur of the acquired image can be set by setting the second base area and the PSF size to have a proportional relationship.
  • Alternatively, the PSF size and the size of the second base area may be set to have a proportional relationship if the PSF size exceeds the threshold 810, and the size of the second base area may be set to be constant if the PSF size is less than the threshold 810, as the broken line in Fig. 8B indicates.
  • the second base area is more appropriately set by setting the second base area in accordance with the change of the PSF size due to the defocus of the imaging optical system 120.
  • the area size of the second base area is not set too large (or too small), and an increase of computation amount and positional shift amount error can be prevented.
  • the depth calculation unit 102 may include an imaging performance value storage unit instead of the PSF size storage unit 804. From the imaging performance value storage unit, a value representing the imaging performance of the object image formed by the imaging optical system 120 is outputted.
  • The imaging performance can be expressed, for example, by the absolute value of the optical transfer function (that is, the modulation transfer function, hereafter called “MTF”), which indicates the imaging performance of the imaging optical system 120.
  • the base area setting unit 603 may acquire the MTF corresponding to the first positional shift amount from the imaging performance value storage unit, as shown in Fig. 8D, and set a smaller second base area as the MTF is higher.
  • A base area that takes the optical characteristic of the imaging optical system 120 into account can be set by setting the second base area based on the MTF corresponding to the first positional shift amount. Thereby the second base area is not set too large (or too small), and an increase of the computation amount and positional shift amount error can be prevented.
  • The information stored in the storage unit is not limited to the information indicating the size of the PSF or the information indicating the MTF shown in modification 1 and modification 2 of this embodiment; it is only required that information indicating an optical characteristic of the imaging optical system 120 is stored.
  • <Modification 3 of depth calculation unit> As another modification of this embodiment, the procedure shown in Fig. 8E may be used. In the following description, it is assumed that the depth calculation unit 102 includes a base area setting determination unit in addition to the configuration shown in Fig. 6A.
  • In step S3-4, determination processing is executed: processing advances to step S3-2 if the absolute value of the first positional shift amount is greater than a predetermined threshold, and otherwise advances to step S3-5.
  • In step S3-5, processing to set the first positional shift amount as the second positional shift amount is executed.
  • If the defocus amount is large, the second base area is set based on the first positional shift amount and then the second positional shift amount is calculated. If the defocus amount is small, the first positional shift amount is set as the second positional shift amount.
  • In the above description, the depth calculation unit 102 includes the base area setting determination unit in addition to the configuration shown in Fig. 6A, but it may include the base area setting determination unit in addition to the configuration shown in Fig. 8A. In this case as well, the number of pixels for which the positional shift amount is calculated twice is decreased, and the computation amount can be further decreased.
  • As for the method of acquiring the first image data and the second image data, in the above description two image data having different points of view are acquired by splitting the luminous flux of one imaging optical system, but the two image data may be acquired using two imaging optical systems.
  • For example, the stereo camera 1000 shown in Fig. 10 may be used.
  • In the stereo camera 1000, two imaging optical systems 1020 and 1021, two imaging elements 1010 and 1011, a depth calculation unit 102, an image storage unit 104, an image generation unit (not illustrated), and a lens driving control unit (not illustrated) are disposed inside a camera casing 130.
  • the imaging optical systems 1020 and 1021, the imaging elements 1010 and 1011, the depth calculation unit 102 and the image storage unit 104 constitute a depth calculation apparatus 110.
  • the depth calculation unit 102 can calculate the depth to the object according to the depth calculation procedure described with reference to Fig. 3.
  • The base line length in the case of the stereo camera 1000 can be the distance between the center position of the exit pupil of the imaging optical system 1020 and the center position of the exit pupil of the imaging optical system 1021.
  • the changes of the positional shift amount in the base area and the influence of noise generated upon acquiring images can be reduced by setting the size of the second base area considering the optical characteristic of the imaging optical systems 1020 and 1021.
  • the F values of the imaging optical systems 1020 and 1021 must be small in order to acquire high resolution images.
  • When the F values are small, a drop in contrast due to defocus becomes conspicuous, hence the depth calculation apparatus that includes the depth calculation unit 102 according to this embodiment is particularly suitable for calculating the depth to the object.
  • The above mentioned depth calculation apparatus can be implemented by software (programs) or by hardware.
  • For example, a computer program is stored in the memory of a computer (e.g. microcomputer, CPU, MPU, FPGA) included in the imaging apparatus or an image processing apparatus, and the computer executes the program to implement each processing. It is also preferable to dispose a dedicated processor, such as an ASIC, to implement all or a part of the processing of the present invention using logic circuits.
  • the present invention is also applicable to a server in a cloud environment.
  • the present invention may be implemented by a method constituted by steps to be executed by a computer of a system or an apparatus, which implements the above mentioned functions of the embodiment by reading and executing a program recorded in a storage apparatus.
  • This program is provided to the computer via a network, or via various types of recording media that can function as a storage apparatus (that is, a computer readable recording medium that holds data non-temporarily), for example. Therefore, this computer (including such devices as a CPU and an MPU), this method, this program (including program codes and program products), and the computer readable recording medium that non-temporarily stores this program are all included within the scope of the present invention.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD) TM ), a flash memory device, a memory card, and the like.
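
The following sketches illustrate several of the operations described in the list above. They are minimal illustrations written for this document, not the patented implementation, and every function name, window size, and numeric value in them is an assumption. The first sketch shows the SSD-based corresponding-point search referred to above: a base window is fixed around the target point, and a reference window is slid along the same scan line until the sum of squared differences is smallest.

```python
import numpy as np

def ssd_shift(base_img, ref_img, ty, tx, half_win, search_range):
    """Search the reference image along scan line ty for the shift that
    minimizes the SSD against the base window centred at (ty, tx).

    Minimal illustrative sketch: the window size, search range, and SSD
    criterion are assumptions, and the caller is expected to keep the
    base window inside the image.
    """
    base_win = base_img[ty - half_win:ty + half_win + 1,
                        tx - half_win:tx + half_win + 1].astype(np.float64)
    best_shift, best_ssd = 0, np.inf
    for s in range(-search_range, search_range + 1):
        rx = tx + s
        if rx - half_win < 0 or rx + half_win + 1 > ref_img.shape[1]:
            continue  # reference window would fall outside the image
        ref_win = ref_img[ty - half_win:ty + half_win + 1,
                          rx - half_win:rx + half_win + 1].astype(np.float64)
        ssd = np.sum((base_win - ref_win) ** 2)
        if ssd < best_ssd:  # highest correlation corresponds to smallest SSD
            best_ssd, best_shift = ssd, s
    return best_shift
```

For example, ssd_shift(image_a, image_b, 100, 200, half_win=8, search_range=32) returns the integer shift within ±32 pixels that best matches a 17 × 17 window; the sub-pixel refinement of the minimum that a practical implementation would add is omitted here.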
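Next, a sketch of the light quantity balance correction of step S2 (see the corresponding bullet above). The description only states that a correction coefficient is derived from an image of a uniform surface light source; the simple per-pixel ratio below is an assumed form of that coefficient.

```python
import numpy as np

def light_balance_coefficient(flat_a, flat_b, eps=1e-12):
    """Derive a correction coefficient from flat-field captures of a uniform
    surface light source (assumed form: per-pixel ratio of image A to B)."""
    return (flat_a.astype(np.float64) + eps) / (flat_b.astype(np.float64) + eps)

def correct_light_balance(img_b, coeff):
    """Scale image B so that its light quantity matches image A before the
    corresponding-point search."""
    return img_b.astype(np.float64) * coeff
```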
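The next sketch illustrates the conversion of the image-side defocus amount into an object depth (steps S4 and S5 above). The text only says that the "image forming relationship" of the imaging optical system 120 is used; the thin-lens relation 1/a + 1/b = 1/f assumed below is one common such relationship, and the sign convention is an illustrative choice, not the patented formula.

```python
def object_depth_from_defocus(image_defocus, focal_length, focused_object_depth):
    """Convert an image-side defocus amount into an object depth via the
    thin-lens imaging relation (an assumed stand-in for the 'image forming
    relationship'); all quantities share one unit, e.g. millimetres.

    focused_object_depth is the object distance the lens is focused on, and
    positive image_defocus is taken to move the sharp image plane away from
    the lens (an illustrative sign convention).
    """
    # Image distance conjugate to the in-focus object plane: 1/b = 1/f - 1/a
    focused_image_dist = 1.0 / (1.0 / focal_length - 1.0 / focused_object_depth)
    # Sharp image plane of the defocused object
    image_dist = focused_image_dist + image_defocus
    # Invert the thin-lens equation to recover the object distance
    return 1.0 / (1.0 / focal_length - 1.0 / image_dist)
```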
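The following sketch corresponds to the image-reduction variant of step S3-6: instead of enlarging the base area, both images are reduced so that a window with an unchanged pixel count covers a larger portion of the scene. Integer decimation stands in for the bilinear reduction mentioned in the text, and scaling the measured shift back to original-image units is an added assumption; ssd_shift is the hypothetical routine sketched above.

```python
def reduced_image_shift(base_img, ref_img, ty, tx, half_win,
                        search_range, reduction=2):
    """Compute the second positional shift amount on reduced images so that
    the window pixel count matches the first base area (illustrative sketch)."""
    small_base = base_img[::reduction, ::reduction]   # stand-in for bilinear
    small_ref = ref_img[::reduction, ::reduction]     # reduction
    shift_small = ssd_shift(small_base, small_ref,
                            ty // reduction, tx // reduction,
                            half_win, max(1, search_range // reduction))
    return shift_small * reduction  # back to original-image units (assumed)
```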
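Finally, a reconstruction of Expressions 4 and 5 and of the PSF-size-to-window mapping of the modification described above. The expressions appear only as figures in the original filing; the linear forms below, PSFsize = k1·|r|/F + k2 and PSFsize = k1·(D/w)·|r| + k2, are inferred from the accompanying text and should be read as assumptions, as should the proportional-with-floor mapping to the second base area (cf. Fig. 8B).

```python
def psf_size_from_f_number(shift, f_number, k1, k2):
    """Assumed form of Expression 4: linear in |shift| with a slope that
    contains the reciprocal of the F value; k1, k2 are predetermined
    coefficients."""
    return k1 * abs(shift) / f_number + k2

def psf_size_from_pupil(shift, pupil_diameter, baseline_length, k1, k2):
    """Assumed form of Expression 5: the ratio of PSF size to |shift| is
    taken to equal the ratio of exit-pupil diameter D to base line length w."""
    return k1 * (pupil_diameter / baseline_length) * abs(shift) + k2

def second_window_from_psf(psf_size, scale=1.0, min_window=9):
    """Map the PSF size to the second base-area size: proportional above a
    floor, constant below it; scale and min_window are placeholders."""
    return max(min_window, int(round(scale * psf_size)))
```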

Abstract

The positional shift amount calculation apparatus for calculating a positional shift amount includes: a calculation unit for calculating the positional shift amount based on data within a predetermined area out of first image data representing a first image and second image data representing a second image; and a setting unit for setting a relative size of the area to the first and second image data. The calculation unit calculates a first shift amount using the first and second image data in the area having a first preset size, the setting unit sets a second size of the area based on the size of the first shift amount and an optical characteristic of a first imaging optical system, and the calculation unit calculates a second shift amount using the first and second image data in the area having the second size.

Description

POSITIONAL SHIFT AMOUNT CALCULATION APPARATUS AND IMAGING APPARATUS
The present invention relates to a technique to calculate the positional shift amount between images.
A known depth measuring apparatus measures depth by calculating a positional shift amount (also called “parallax”), which is a relative positional shift amount between two images having different points of view (hereafter called “image A” and “image B”). To calculate the positional shift amount, an area-based corresponding points search technique called “template matching” is often used. In template matching, either image A or image B is set as a base image, and the other image which is not the base image is set as a reference image. A base area around a target point (also called “base window”) is set on the base image, and a reference area around a reference point corresponding to the target point (also called “reference window”) is set on the reference image. The base area and the reference area are collectively called “matching windows”. A reference point at which the similarity between an image in the base area and an image in the reference area is highest (correlation thereof is highest) is searched for while sequentially moving the reference point, and the positional shift amount is calculated using the relative positional shift amount between the target point and the reference point. Generally, a calculation error occurs in the positional shift amount due to a local mathematical operation if the size of the base area is small, hence a relatively large area size is used.
The depth (distance) to an object can be calculated by converting the positional shift amount into a defocus amount or into an object depth using a conversion coefficient. This allows measuring the depth at high-speed and at high accuracy, since it is unnecessary to move the lens to measure the depth.
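The conversion expressions referenced elsewhere in this document (Expressions 1 to 3) are reproduced only as figures in the original filing. As a hedged illustration of the linear approximation described there, in which the base line length is assumed to be much larger than the positional shift amount, the sketch below converts a shift into an image-side defocus amount with a precomputed gain; the specific form of the gain as the ratio of exit-pupil distance to base line length is an assumption, not the patented formula.

```python
def defocus_gain(pupil_distance, baseline_length):
    """One plausible gain value: exit-pupil distance divided by the base line
    length (an assumption for illustration; see Fig. 5B for the base line)."""
    return pupil_distance / baseline_length

def shift_to_defocus(shift, gain):
    """Linear approximation: image-side defocus = gain * positional shift."""
    return gain * shift
```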
The depth measurement accuracy improves by accurately determining the positional shift amount. Factors that cause an error in the positional shift amount are: changes of the positional shift amount in each pixel of the base area; and noise generated in the process of acquiring image data. To minimize the influence of the changes of the positional shift amount in the base area, the base area must be small. If the base area is small, however, a calculation error in the positional shift amount may be generated by the influence of noise or because of the existence of similar image patterns.
In Patent Literature 1, the positional shift amount is calculated for each scanning line (e.g. horizontal line), and the positional shift amount at the adjacent scanning line is calculated based on the calculated positional shift amount data. In this case, a method of setting a base area independently for each pixel, so that a boundary where the calculated positional shift amount changes is not included, has been proposed.
In Patent Literature 2, a method of decreasing the size of the base area in steps and gradually limiting the search range to search for a corresponding point is proposed.
Japanese Patent Application Laid-Open No. 2011-013706
Japanese Patent Application Laid-Open No. H10-283474
However, a problem of the positional shift amount calculation method disclosed in Patent Literature 1 is that the memory amount and computation amount required for calculating the positional shift amount are large. This is because the positional shift amount of a spatially adjacent area is calculated and evaluated in advance to determine the size of the base area. Furthermore, when the positional shift amount changes continuously, the base area becomes small since the base area is set in a range where the positional shift amount is approximately the same, and a calculation error may occur in the positional shift amount. In other words, the depth may be miscalculated depending on how the object depth changes.
A problem of the positional shift amount calculating method disclosed in Patent Literature 2 is that the computation amount is large since a plurality of base areas is set at each pixel position, and the correlation degree is evaluated.
With the foregoing in view, it is an object of the present invention to provide a technique that can calculate the positional shift amount at high accuracy by an easy operation.
A first aspect of the present invention is a positional shift amount calculation apparatus that calculates a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the apparatus having: a calculation unit adapted to calculate the positional shift amount based on data within a predetermined area out of first image data representing the first image and second image data representing the second image; and a setting unit adapted to set a relative size of the area to the first and second image data, and in this positional shift amount calculation apparatus, the calculation unit is adapted to calculate a first positional shift amount using the first image data and the second image data in the area having a first size which is preset, the setting unit is adapted to set a second size of the area based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system, and the calculation unit is adapted to calculate a second positional shift amount using the first image data and the second image data in the area having the second size.
A second aspect of the present invention is a positional shift amount calculation method for a positional shift amount calculation apparatus to calculate a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the method having: a first calculation step of calculating a first positional shift amount based on data within an area having a predetermined first size, out of first image data representing the first image and second image data representing the second image; a setting step of setting a second size, which is a relative size of the area to the first and second image data, based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system; and a second calculation step of calculating a second positional shift amount using the first image data and the second image data in the area having the second size.
According to the present invention, the positional shift amount can be calculated with high accuracy by a simple operation.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Fig. 1A and Fig. 1B are diagrams depicting a configuration of a digital camera that includes a depth calculation apparatus.
Fig. 2 is a diagram depicting a luminous flux that the digital camera receives.
Fig. 3 is a flow chart depicting a depth calculation procedure according to the first embodiment.
Fig. 4A to Fig. 4D are diagrams depicting a positional shift amount calculation method and a factor that generates a positional shift amount error.
Fig. 5A and Fig. 5B are diagrams depicting a base line length.
Fig. 6A and Fig. 6B are diagrams depicting a depth calculation unit according to the first embodiment.
Fig. 6C to Fig. 6F are diagrams depicting a size of the second base area according to the first positional shift amount.
Fig. 7A to Fig. 7D are diagrams depicting the reason why the positional shift amount can be accurately calculated in the first embodiment.
Fig. 8A and Fig. 8B are diagrams depicting the depth calculation unit according to a modification.
Fig. 8C to Fig. 8E are diagrams depicting the depth calculation unit according to modifications.
Fig. 9 is a flow chart depicting a general operation of the digital camera.
Fig. 10 is a diagram depicting a configuration of the digital camera according to the modification.
Fig. 11 is a flow chart depicting an example of the positional shift amount calculation procedure according to the first embodiment.
Description of the Embodiments
Embodiments of the present invention will now be described with reference to the drawings. In the following description, a digital camera is described as an example of an imaging apparatus that includes a depth calculation apparatus (positional shift calculation apparatus), but application of the present invention is not limited to this. For example, the positional shift amount calculation apparatus of the present invention can be applied to a digital depth measuring instrument.
In the description with reference to the drawings, as a rule the same element is denoted by the same reference number even if the figure number differs, and redundant description is minimized.
(First Embodiment)
<Configuration of digital camera>
Fig. 1A is a diagram depicting a configuration of a digital camera 100 that includes a depth calculation apparatus. In the digital camera 100, an imaging optical system 120, an imaging element 101, a depth calculation unit 102, an image storage unit 104, an image generation unit (not illustrated), a lens driving control unit (not illustrated), and a control unit (not illustrated) are disposed inside a camera casing 130. The imaging optical system 120, the imaging element 101, the depth calculation unit 102, and the image storage unit 104 constitute a depth calculation apparatus 110. The depth calculation unit 102 can be constructed using a logic circuit. As another mode, the depth calculation unit 102 may be constituted by a central processing unit (CPU) and a memory storing arithmetic processing programs. The depth calculation unit 102 corresponds to the positional shift amount calculation apparatus according to the present invention.
The imaging optical system 120 is a photographing lens of the digital camera 100, and has a function to form an image of the object on the imaging element 101, which is an imaging surface. The imaging optical system 120 is constituted by a plurality of lens groups (not illustrated) and a diaphragm (not illustrated), and has an exit pupil 103 at a position apart from the imaging element 101 by a predetermined distance. The reference number 140 in Fig. 1A denotes an optical axis of the imaging optical system 120, and in this description, the optical axis is assumed to be parallel with the z axis. The x axis and the y axis are assumed to be orthogonal to each other, and are orthogonal to the optical axis.
An operation example of this digital camera 100 will now be described with reference to Fig. 9. The following is merely an example, and operation of the digital camera 100 is not limited to this example. Fig. 9 is a flow chart depicting an operation flow after the main power of the digital camera 100 is turned ON and the shutter button (not illustrated) is half depressed. First in step S901, the control unit reads the information on the imaging optical system 120 (e.g. focal length, diaphragm value), and stores the information in the memory unit (not illustrated). Then the control unit executes the processing in steps S902, S903 and S904 to adjust the focal point. In other words, in step S902, the depth calculation unit 102 calculates a defocus amount using the depth calculation procedure shown in Fig. 3, based on the image data outputted from the imaging element 101. The depth calculation procedure will be described in detail later. In step S903, the control unit determines whether the imaging optical system 120 is in the focused state or not based on the calculated defocus amount. If not focused, the control unit drives the imaging optical system 120 to the focused position based on the defocus amount using the lens driving control unit, and then processing returns to step S902. If it is determined that the imaging optical system 120 is in the focused state in step S903, the control unit determines whether the shutter was released (fully depressed) by the operation of the shutter button (not illustrated) in step S905. If not released, processing returns to step S902, and the above mentioned processing is repeated. If it is determined that the shutter is released in step S905, the control unit reads image data from the imaging element 101, and stores the image data in the image storage unit 104. The image generation unit performs development processing on the image data stored in the image storage unit 104, whereby a final image can be generated. Further, an object depth image (object depth distribution) corresponding to the final image can be generated by applying the depth calculation procedure, which will be described later with reference to Fig. 3, to the image data stored in the image storage unit 104.
<Configuration of imaging element>
The imaging element 101 is constituted by a CMOS (Complementary Metal-Oxide Semiconductor) or a CCD (Charge-Coupled Device). The object image is formed on the imaging element 101 via the imaging optical system 120, and the imaging element 101 performs photoelectric conversion on the received luminous flux, and generates image data based on the object image. The imaging element 101 according to this embodiment will now be described in detail with reference to Fig. 1B.
Fig. 1B is an xy cross-sectional view of the imaging element 101. The imaging element 101 has a configuration where a plurality of pixel groups (2 rows × 2 columns) is arranged. Each pixel group 150 is constituted by green pixels 150G1 and 150G2, which are disposed in diagonal positions, and a red pixel 150R and a blue pixel 150B, which are the other two pixels.
<Principle of depth measurement>
In each pixel constituting the pixel group 150 of this embodiment, two photoelectric conversion units (first photoelectric conversion unit 161, and second photoelectric conversion unit 162), of which shapes are symmetric in the xy cross-section, are disposed in the light receiving layer (203 in Fig. 2) in the pixel. The luminous flux received by the first photoelectric conversion unit 161 and the second photoelectric conversion unit 162 in the imaging element 101 will be described with reference to Fig. 2.
Fig. 2 is a schematic diagram depicting only the exit pupil 103 of the imaging optical system 120 and the green pixel 150G1 as an example representing the pixels disposed in the imaging element 101. The pixel 150G1 shown in Fig. 2 is constituted by a color filter 201, a micro lens 202 and a light receiving layer 203, and the first photoelectric conversion unit 161 and the second photoelectric conversion unit 162 are included in the light receiving layer 203. The micro lens 202 is disposed such that the exit pupil 103 and the light receiving layer 203 are in a conjugate relationship. As a result, the luminous flux 210 that has passed through a first pupil area (261) of the exit pupil enters the first photoelectric conversion unit 161, and the luminous flux 220 that has passed through a second pupil area (262) thereof enters the second photoelectric conversion unit 162, as shown in Fig. 2.
The plurality of first photoelectric conversion units 161 disposed in each pixel performs photoelectric conversion on the received luminous flux and generates the first image data. In the same manner, the plurality of second photoelectric conversion units 162 disposed in each pixel performs photoelectric conversion on the received luminous flux and generates the second image data. From the first image data, the intensity distribution of the first image (image A), which the luminous flux that has mainly passed through the first pupil area forms on the imaging element 101, can be acquired. From the second image data, the intensity distribution of the second image (image B), which the luminous flux that has mainly passed through the second pupil area forms on the imaging element 101, can be acquired. Therefore the relative positional shift amount of the first image and the second image is the positional shift amount of the image A and image B. By calculating this positional shift amount according to a later mentioned method and converting the calculated positional shift amount into the defocus amount using a conversion coefficient, the depth (distance) to the object can be calculated.
<Description on depth calculation procedure>
The depth calculation procedure of this embodiment will now be described with reference to Fig. 3.
In step S1, the imaging element 101 acquires the first image data and the second image data, and transfers the acquired data to the depth calculation unit 102.
In step S2, light quantity balance correction processing is performed to correct the balance of light quantity between the first image data and the second image data. A known method can be used for this correction. For example, a coefficient to correct the light quantity balance between the first image data and the second image data is calculated in advance from an image acquired by photographing a uniform surface light source with the digital camera 100.
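As a purely illustrative sketch of such a correction (the patent does not give a concrete formula; the gain-map approach, the array names and the clipping constant below are assumptions), a per-pixel gain estimated from images of a uniform surface light source could be applied to the second image data:

```python
import numpy as np

def light_balance_gain(uniform_a, uniform_b, eps=1e-6):
    """Per-pixel gain that maps image-B levels onto image-A levels,
    estimated from images of a uniform surface light source."""
    return uniform_a / np.maximum(uniform_b, eps)

def correct_light_balance(image_b, gain):
    """Apply the precomputed gain map to the second (B) image data."""
    return image_b * gain

# usage with synthetic data: the B image is assumed to be 10% darker
uniform_a = np.full((8, 8), 100.0)
uniform_b = np.full((8, 8), 90.0)
gain = light_balance_gain(uniform_a, uniform_b)
image_b = np.random.default_rng(0).uniform(50, 150, size=(8, 8))
image_b_corrected = correct_light_balance(image_b, gain)
```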
In step S3, the depth calculation unit 102 calculates the positional shift amount based on the first image data and the second image data. The calculation method for the positional shift amount will be described later with reference to Fig. 4A to Fig. 4D and Fig. 6A to Fig. 6F.
In step S4, the depth calculation unit 102 converts the positional shift amount into an image-side defocus amount using a predetermined conversion coefficient. The image-side defocus amount is a distance from an estimated focal position (imaging element surface) to the focal position of the imaging optical system 120.
The calculation method for the conversion coefficient that is used for converting the positional shift amount into the image side defocus amount will now be described with reference to Fig. 5A and Fig. 5B. Fig. 5A shows a light receiving sensitivity incident angle characteristic of each pixel. The abscissa indicates the incident angle of the light that enters the pixel (angle formed by the ray projected to the xz plane and the z axis), and the ordinate indicates the light receiving sensitivity. The solid line 501 indicates the light receiving sensitivity of the first photoelectric conversion unit, and the broken line 502 indicates the light receiving sensitivity of the second photoelectric conversion unit. Fig. 5B shows the light receiving sensitivity distribution on the exit pupil 103 when this receiving sensitivity is projected onto the exit pupil 103. The darker the color, the higher the light receiving sensitivity. In Fig. 5B, 511 indicates a center of gravity position of the light receiving sensitivity distribution of the first photoelectric conversion unit, and 512 indicates a center of gravity position of the light receiving sensitivity distribution of the second photoelectric conversion unit. The distance 513 between the center of gravity position 511 and the center of gravity position 512 is called “base line length”, and is used as the conversion coefficient to convert the positional shift amount into the image side defocus amount. When r is the positional shift amount, w is the base line length and L is the pupil distance from the imaging element 101 to the exit pupil 103, the positional shift amount can be converted into the image side defocus amount ΔL using the following Expression 1.
Expression 1:  $\Delta L = \dfrac{r \, L}{w - r}$
In this embodiment, the positional shift amount is converted into the image-side defocus amount using Expression 1, but the positional shift amount may be converted into the image side defocus amount by a different method. For example, based on the assumption that the base line length w is sufficiently larger than the positional shift amount r in Expression 1, a gain value Gain may be calculated using Expression 2, and the positional shift amount may be converted into the image side defocus amount based on Expression 3.
Expression 2:  $\mathrm{Gain} = \dfrac{L}{w}$
Expression 3:  $\Delta L = \mathrm{Gain} \cdot r$
By using Expression 3, the positional shift amount can be easily converted into the image side defocus amount, and the computation amount to calculate the object depth can be reduced. A lookup table for conversion may be used to convert the positional shift amount into the image side defocus amount. In this case as well, the computation amount to calculate the object depth can be reduced.
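For illustration only, here is a minimal numeric sketch of this conversion, assuming the forms of Expressions 1 to 3 as reconstructed above (exact conversion, gain value, and linear approximation); the numerical values are made up and do not describe any particular lens:

```python
def defocus_exact(r, w, L):
    """Expression 1 (as reconstructed): image-side defocus from positional
    shift r, base line length w and pupil distance L (same length unit)."""
    return r * L / (w - r)

def defocus_approx(r, w, L):
    """Expressions 2 and 3: linear approximation valid when w >> r."""
    gain = L / w          # Expression 2
    return gain * r       # Expression 3

# illustrative values in mm: pupil distance 60, base line length 2, shift 0.05
r, w, L = 0.05, 2.0, 60.0
print(defocus_exact(r, w, L))   # about 1.54 mm
print(defocus_approx(r, w, L))  # 1.5 mm
```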
In Fig. 2, it is assumed that x is positive in the first pupil area, and x is negative in the second pupil area. However the actual light that reaches the light receiving layer 203 has a certain spread due to the light diffraction phenomenon, and therefore the first pupil area and the second pupil area overlap, as shown in the light receiving sensitivity distribution in Fig. 5B. For convenience however, the first pupil area 261 and the second pupil area 262 are assumed to be clearly separated in the description of this embodiment.
In step S5, the image-side defocus amount calculated in step S4 is converted into the object depth based on the image forming relationship of the imaging optical system (object depth calculation processing). Conversion into the object depth may also be performed by a different method: for example, the image-side defocus amount is converted into the object-side defocus amount, and the depth to the object is calculated as the sum of the object-side defocus amount and the object-side focal position, which is determined from the focal length of the imaging optical system 120. The object-side defocus amount can be calculated using the image-side defocus amount and the longitudinal magnification of the imaging optical system 120.
In the depth calculation procedure of this embodiment, the positional shift amount is converted into the image-side defocus amount in step S4, and then the image-side defocus amount is converted into the object depth in step S5. However, the processing executed after calculating the positional shift amount may be other than the above mentioned processing. As mentioned above, the image-side defocus amount and the object-side defocus amount, or the image-side defocus amount and the object depth can be converted into each other using the image forming relationship of the imaging optical system 120. Therefore the positional shift amount may be directly converted into the object-side defocus amount or the object depth, without being converted into the image-side defocus amount. In either case, the defocus amount (image-side and/or object-side) and the object depth can be accurately calculated by accurately calculating the positional shift amount.
In this embodiment, the image-side defocus amount is converted into the object depth in step S5, but step S5 need not always be executed, and the depth calculation procedure may complete in step S4. In other words, the image-side defocus amount may be the final output. The blur amount of the object in the final image depends on the image-side defocus amount, and as the image-side defocus amount of the object becomes greater, a more blurred image is photographed. To perform refocusing processing for adjusting the focal position in the image processing in a subsequent step, it is sufficient if the image-side defocus amount is known, and conversion into the object depth is unnecessary. As mentioned above, the image-side defocus amount can be converted into/from the object side defocus amount or the positional shift amount, hence the final output may be the object-side defocus amount or the positional shift amount.
<Factor for generating positional shift amount error>
A calculation method for the positional shift amount will be described first with reference to Fig. 4A to Fig. 4D. Fig. 4A is a diagram depicting the calculation method for the positional shift amount, where the first image data 401, the second image data 402 and the photographing object 400 are shown. A target point 410 is set in the first image data 401, and a base area 420 is set centering around the target point 410. In the second image data 402, on the other hand, a reference point 411 is set at a position corresponding to the target point 410, and a reference area 421 is set centering around the reference point 411. The base area 420 and the reference area 421 have the same size. While the reference point 411 is sequentially moved within a predetermined positional shift amount search range, a correlation value between the first image data in the base area 420 and the second image data in the reference area 421 is calculated, and the reference point 411 at which the highest correlation is acquired is regarded as the corresponding point of the target point 410. The positional shift amount search range is determined from the maximum depth and the minimum depth to be calculated. For example, the maximum depth is set to infinity, the minimum depth is set to the minimum photographing depth of the imaging optical system 120, and the range between the maximum and minimum positional shift amounts determined by these depths is set as the positional shift amount search range. The positional shift amount is the relative positional shift between the target point 410 and the corresponding point. By searching for the corresponding point while sequentially moving the target point 410, the positional shift amount at each data position (each pixel position) in the first image data can be calculated. To calculate the correlation value, a known method can be used, such as the SSD method, where the sum of squared differences between the pixel values in the base area 420 and those in the reference area 421 is used as the evaluation value.
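As a concrete sketch of this corresponding-point search (the window size, search range and test images below are arbitrary assumptions, and the search is restricted to the x direction for brevity), an SSD-based block matching could look like this:

```python
import numpy as np

def ssd(a, b):
    """Correlation degree evaluation value: sum of squared differences
    (smaller value = higher correlation)."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))

def positional_shift(img_a, img_b, target_yx, half_size, search_range):
    """Find the reference point in img_b that best matches the base area
    centered at target_yx in img_a; return the shift along x in pixels."""
    ty, tx = target_yx
    base = img_a[ty - half_size:ty + half_size + 1,
                 tx - half_size:tx + half_size + 1]
    best_shift, best_score = 0, np.inf
    for s in range(-search_range, search_range + 1):
        ref = img_b[ty - half_size:ty + half_size + 1,
                    tx + s - half_size:tx + s + half_size + 1]
        if ref.shape != base.shape:
            continue  # shifted reference area falls outside the image
        score = ssd(base, ref)
        if score < best_score:
            best_score, best_shift = score, s
    return best_shift

# usage: img_b is img_a shifted by 3 pixels along x
rng = np.random.default_rng(1)
img_a = rng.uniform(0, 255, size=(64, 64))
img_b = np.roll(img_a, 3, axis=1)
print(positional_shift(img_a, img_b, (32, 32), half_size=4, search_range=10))  # 3
```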
Now a factor that generates the positional shift amount error will be described. In Fig. 4B, the abscissa indicates the positional shift amount, and the ordinate indicates the correlation degree evaluation value based on SSD. A curve that indicates the correlation degree evaluation value for each positional shift amount is hereafter called a "correlation value curve". The solid line in Fig. 4B indicates the correlation value curve when the contrast of the object image is high, and the broken line indicates the correlation value curve when the contrast is low. The correlation value curve has a minimum value 430. The positional shift amount at which the correlation degree evaluation value takes this minimum value is regarded as the positional shift amount with the highest correlation, that is, as the calculated positional shift amount. The more sharply the correlation value curve changes, the smaller the influence of noise and hence the smaller the calculation error of the positional shift amount. Therefore, if the contrast of the object image is high, the positional shift amount can be accurately calculated. If the contrast of the object image is low, on the other hand, the change of the correlation value curve becomes gentle, and the calculation error of the positional shift amount increases.
Contrast does not deteriorate very much in an area near the focal position of the imaging optical system 120, hence a high contrast object image can be acquired near the focal position. As the position of the object moves away from the focal position of the imaging optical system 120 (as the object is defocused), the contrast drops, and the contrast of the acquired image also decreases. If the defocus amount is plotted on the abscissa and the positional shift amount error is plotted on the ordinate, as shown in Fig. 4C, the positional shift amount error increases as the defocus amount increases.
If the positional shift amount changes within the base area, a bimodal correlation value curve, having two minimum values, is acquired, as shown in Fig. 4D. In this case, the positional shift amount is calculated based on the smaller of the two minimum values, which means that the positional shift amount may be miscalculated.
<Detailed description on positional shift amount calculation method>
The depth calculation unit 102 of this embodiment and the positional shift amount calculation procedure S3 will now be described in detail with reference to Fig. 6A to Fig. 6C. Fig. 6A is a diagram depicting a detailed configuration of the depth calculation unit 102, and Fig. 6B is a flow chart depicting the positional shift amount calculation procedure.
The depth calculation unit 102 is constituted by a positional shift amount calculation unit 602, a base area setting unit 603, and a depth conversion unit 604. The positional shift amount calculation unit 602 calculates the positional shift amount of the first image data and the second image data stored in the image storage unit 104 using a base area having a predetermined size, or a base area having a size set by the base area setting unit 603. The base area setting unit 603 receives the positional shift amount (first positional shift amount) from the positional shift amount calculation unit 602, and outputs the size of the base area corresponding to this positional shift amount to the positional shift amount calculation unit 602. The first image data and the second image data, on which light quantity balance correction has been performed as described with reference to step S2 in Fig. 3, are stored in the image storage unit 104.
In step S3-1 in Fig. 6B, the positional shift amount calculation unit 602 calculates the first positional shift amount based on the first image data and the second image data acquired from the image storage unit 104. In concrete terms, the first positional shift amount is calculated by the corresponding point search method described with reference to Fig. 4A to Fig. 4D, using the base area (first base area) having a size which is set in advance (first size).
In step S3-2 in Fig. 6B, the base area setting unit 603 sets a size of the second base area (second size) based on the first positional shift amount. In this embodiment, a second base area whose size is larger than that of the first base area is set when the absolute value of the first positional shift amount acquired by the positional shift amount calculation unit 602 exceeds a predetermined threshold. Fig. 6C shows the relationship between the absolute value of the first positional shift amount and the size of the second base area. In Fig. 6C, the abscissa indicates the absolute value of the first positional shift amount, and the ordinate indicates the size of the second base area. The broken line 620 parallel with the abscissa indicates the area size of the first base area (first size). If the absolute value of the first positional shift amount is greater than the threshold 610, the size of the second base area (second size) is set larger than the area size of the first base area (first size).
In step S3-3 in Fig. 6B, the positional shift amount calculation unit 602 searches for a corresponding point again using the second base area, and calculates the second positional shift amount. According to this embodiment, if the absolute value of the first positional shift amount is a predetermined threshold or less, the positional shift amount calculation unit 602 does not recalculate the positional shift amount, and regards the first positional shift amount as the second positional shift amount.
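Condensing steps S3-1 to S3-3 into one routine, a hedged sketch could look as follows; it reuses the `positional_shift` helper from the earlier sketch, and the sizes and the threshold are assumed values, not values disclosed in this document:

```python
FIRST_HALF_SIZE = 4     # half width of the first base area (assumed first size)
SECOND_HALF_SIZE = 8    # half width of the enlarged second base area (assumed)
SHIFT_THRESHOLD = 5     # counterpart of threshold 610, in pixels (assumed)

def two_pass_shift(img_a, img_b, target_yx, search_range=16):
    # Step S3-1: first positional shift amount with the preset first base area.
    r1 = positional_shift(img_a, img_b, target_yx, FIRST_HALF_SIZE, search_range)

    # Step S3-2: set the second base area size from |r1| (step function of Fig. 6C).
    if abs(r1) <= SHIFT_THRESHOLD:
        return r1   # the first shift amount is reused as the second one

    # Step S3-3: recalculate with the enlarged second base area.
    return positional_shift(img_a, img_b, target_yx, SECOND_HALF_SIZE, search_range)
```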
By the above processing, the positional shift amount calculation procedure S3 completes. Then the depth conversion unit 604 converts the second positional shift amount into the object depth by the method described in steps S4 and S5 in Fig. 3, and outputs the object depth information.
<Reason why changes of positional shift amount and influence of noise can be reduced>
The reason why the depth calculation method executed by the depth calculation unit 102 of this embodiment can reduce the influence of changes of the positional shift amount within the base area and of noise generated when acquiring image signals will be described with reference to Fig. 7A to Fig. 7D.
Fig. 7A is a diagram depicting acquisition of the images of an object 701 and an object 702 in the digital camera 100. Here the object 701 is disposed in a focal position of the imaging optical system 120, and the blur size 711 on the imaging element 101 is small. The object 702, on the other hand, is disposed in a position distant from the focal position of the imaging optical system 120, and the blur size 712 on the imaging element 101 is larger. If the object 701 and the object 702 have a brightness distribution shown in Fig. 7B, then the image of the object 701 becomes like Fig. 7C (with little blur), and the image of the object 702 becomes like Fig. 7D (with considerable blur).
If the defocus amount is large, as in the case of the object 702 (Fig. 7D), the acquired image is considerably blurred, hence the positional shift amount gently changes. Further, the acquired object image has low contrast. Therefore in order to reduce an error of the positional shift amount, it is preferable to set a large base area. If the defocus amount is small, as in the case of the object 701 (Fig. 7C), the acquired image is not blurred very much, hence the positional shift amount may sharply change. Further, the acquired object image has high contrast, therefore in order to reduce an error in the positional shift amount, it is preferable to set a small base area.
If the absolute value of the first positional shift amount is large (that is, if the defocus amount is large), the depth calculation unit 102 of this embodiment sets a large base area (second base area), and calculates the positional shift amount again. In other words, if the contrast of the image acquired via the imaging optical system 120 is low and the changes of the positional shift amount are gentle, a large base area is set.
If the absolute value of the first positional shift amount is greater than a predetermined threshold, the depth calculation unit 102 of this embodiment sets a larger second base area. This makes it unnecessary to calculate the positional shift amount of spatially adjacent pixels in advance, so that both the influence of changes of the positional shift amount and the influence of noise generated when acquiring image signals can be reduced by a simple operation. Furthermore, since the base area is set according to the optical characteristic of the imaging optical system 120, the dependency of the calculated object depth on the changes of the positional shift amount can be reduced, and the object depth can be accurately measured.
The depth calculation unit 102 of this embodiment sets a larger size for the second base area when the absolute value of the first positional shift amount is greater than the predetermined threshold 610, as shown in Fig. 6C, but the size of the second base area may be determined by a different method. For example, as shown in Fig. 6D, the size of the second base area may be determined using a linear function of the absolute value of the first positional shift amount. Alternatively, the size of the second base area may be determined based not on the absolute value of the first positional shift amount but on the first positional shift amount itself. Furthermore, considering the case when the object is closer to the digital camera 100 than the focal position and the case when the object is more distant from the digital camera 100 than the focal position, the inclination of the function may be changed depending on whether the first positional shift amount is 0 or more (solid line 630) or smaller than 0 (broken line 640), as shown in Fig. 6E. As the relationship between the defocus amount and the error of the positional shift amount in Fig. 4C shows, the error of the positional shift amount increases quadratically as the defocus amount increases. Therefore, the second area size may be set as an increasing function only when the absolute value of the first positional shift amount exceeds the threshold 650, so that the error of the defocus amount is confined within a predetermined target value, as shown in Fig. 6F. In any case, the changes of the positional shift amount within the base area and the influence of the noise generated when acquiring image signals can be reduced by setting the size of the second base area in consideration of the optical characteristic of the imaging optical system 120.
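The mappings of Fig. 6C to Fig. 6E could be written, for instance, as in the following sketch; all thresholds, slopes and window sizes are illustrative assumptions, not values taken from the figures:

```python
def second_size_step(r1_abs, first_size=9, large_size=17, threshold=5.0):
    """Fig. 6C style: enlarge the base area only above the threshold."""
    return large_size if r1_abs > threshold else first_size

def second_size_linear(r1_abs, first_size=9, slope=1.5):
    """Fig. 6D style: linear function of |first positional shift amount|."""
    return int(round(first_size + slope * r1_abs))

def second_size_signed(r1, first_size=9, slope_positive=1.5, slope_negative=2.0):
    """Fig. 6E style: different inclinations for r1 >= 0 and r1 < 0."""
    slope = slope_positive if r1 >= 0 else slope_negative
    return int(round(first_size + slope * abs(r1)))
```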
The depth calculation unit 102 of this embodiment need not calculate the first positional shift amount by setting the target point 410 at every pixel position in the first image data. The depth calculation unit 102 may calculate the first positional shift amount while moving the target point 410 at a predetermined interval. For example, the first positional shift amount is calculated at intervals of ten pixels in the horizontal and vertical directions, and the two-dimensional distribution of the first positional shift amount is then expanded by a known interpolation method (e.g. bilinear interpolation, nearest neighbor interpolation) and referred to in order to set the second base area. By decreasing the number of target points set for calculating the first positional shift amount, the computation amount required for calculating the first positional shift amount can be reduced.
In the present embodiment, the changes of the positional shift amount within the base area and the influence of noise generated when acquiring the image signals are reduced by setting the size of the second base area in consideration of the optical characteristic of the imaging optical system 120. It is therefore only required that the relative size of the base area with respect to the first image data can be changed. The influence of the noise generated when acquiring the image signals can be reduced either by enlarging the base area to increase the number of pixels it includes, or by reducing (thinning out) the image data while keeping the number of pixels included in the base area constant.
In the case of reducing the image data while keeping the number of pixels included in the base area constant, the positional shift amount calculation unit 602 shown in Fig. 6A uses the positional shift amount calculation procedure shown in Fig. 11. In step S3-6, the sizes of the first image and the second image are changed using a reduction ratio in accordance with the size of the second base area. For example, if the size of the second base area is double the size of the first base area (double in each of the horizontal and vertical directions, that is, four times in terms of area), the reduction ratio is set to 0.5 (0.5 times in each of the horizontal and vertical directions, that is, 1/4 in terms of the number of pixels). To change the size of the image, a known method, such as the bilinear method, can be used. In step S3-3, using the first image (first reduced image) and the second image (second reduced image) after the size change, the second positional shift amount is calculated based on the second base area, in which the number of pixels is the same as in the first base area. In Fig. 11, the reduction ratio is set in accordance with the size of the second base area, but the base area setting unit 603 may instead output the reduction ratio of the image to the positional shift amount calculation unit 602 in accordance with the positional shift amount (first positional shift amount) received from the positional shift amount calculation unit 602.
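One possible realization of the reduction-based variant of Fig. 11 is sketched below; the nearest-neighbour resize stands in for the bilinear method mentioned above, the routine reuses the `positional_shift` helper from the earlier sketch, and all sizes are assumptions:

```python
import numpy as np

def reduce_image(img, ratio):
    """Simple nearest-neighbour reduction (a stand-in for a bilinear resize)."""
    h, w = img.shape
    ys = (np.arange(int(h * ratio)) / ratio).astype(int)
    xs = (np.arange(int(w * ratio)) / ratio).astype(int)
    return img[np.ix_(ys, xs)]

def second_shift_by_reduction(img_a, img_b, target_yx, half_size, enlargement):
    """Steps S3-6 and S3-3: reduce both images by 1/enlargement so that the base
    area keeps the same pixel count, then rescale the shift back to
    full-resolution pixels."""
    ratio = 1.0 / enlargement
    a_small = reduce_image(img_a, ratio)
    b_small = reduce_image(img_b, ratio)
    ty, tx = target_yx
    shift_small = positional_shift(a_small, b_small,
                                   (int(ty * ratio), int(tx * ratio)),
                                   half_size, search_range=8)
    return shift_small * enlargement
```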
To calculate the second positional shift amount, both the size of the base area and the size of the image data may be changed. In other words, the processing executed by the base area setting unit 603 is not limited to a specific manner, as long as the relative sizes of the base areas with respect to the first image data and the second image data are changed. The influence of noise can be reduced if the relative sizes of the base areas with respect to the first and second image data when calculating the second positional shift amount are larger than the relative sizes when calculating the first positional shift amount.
<Modification 1 of depth calculation unit>
The configuration shown in Fig. 8A may be used as a modification of the depth calculation unit 102 of this embodiment. The depth calculation unit 102 in Fig. 8A includes a PSF size storage unit 804 in addition to the above mentioned configuration. In the PSF size storage unit 804, a size of a point spread function (hereafter called “PSF”) of the imaging optical system 120 is stored, so as to correspond to the first positional shift amount.
The general flow of the depth calculation procedure of this modification is the same as above, but in the step of setting the size of the second base area in step S3-2 in Fig. 6B, the size of the second base area is set based on the PSF size outputted from the PSF size storage unit 804. In concrete terms, the base area setting unit 603 acquires the first positional shift amount from the positional shift amount calculation unit 602. Then the base area setting unit 603 acquires the PSF size corresponding to the first positional shift amount from the PSF size storage unit 804. The base area setting unit 603 sets the size of the second base area in accordance with the PSF size acquired from the PSF size storage unit 804.
In the depth calculation unit 102 shown in Fig. 8A, the area size of the second base area can be more appropriately set by setting the size of the second base area in accordance with the PSF size based on the defocus of the imaging optical system 120. As a result, the area size of the second base area is not set too large (or too small), and an increase of computation amount and positional shift amount error can be prevented.
The blur size of the imaging optical system 120 can be expressed, for example, by 3σ (three times the standard deviation σ) of the PSF. The PSF size storage unit 804 therefore outputs 3σ of the PSF of the imaging optical system 120 as the PSF size. It is sufficient if the PSF size storage unit 804 stores the PSF size only for the central angle of view, but it is preferable to also store the PSF size for peripheral angles of view if the aberration of the imaging optical system 120 at the peripheral angles of view is large. The PSF size may also be expressed as a function representing the relationship between the PSF size and the positional shift amount, in which case only the coefficients are stored in the PSF size storage unit 804. For example, the PSF size may be calculated using a linear function in which the reciprocal of the diaphragm value (F value) of the imaging optical system is a coefficient, as shown in Expression 4, and the coefficients k1 and k2 may be stored in the PSF size storage unit 804.
Expression 4:  $\mathrm{PSF_{size}} = k_1 \, \dfrac{|r|}{F} + k_2$
Here PSFsize is a PSF size, r is a first positional shift amount, F is an F value of the imaging optical system 120, and k1 and k2 are predetermined coefficients.
The PSF size may also be determined as shown in Expression 5, based on the fact that the ratio of the base line length (distance 513) described with reference to Fig. 5B to the diameter of the exit pupil 103 of the imaging optical system 120 is approximately equal to the ratio of the absolute value of the first positional shift amount to the PSF size.
Expression 5:  $\mathrm{PSF_{size}} = k_1 \, \dfrac{D}{w} \, |r| + k_2$
Here w is the base line length, D is the diameter of the exit pupil, and k1 and k2 are predetermined coefficients. The PSF size and the defocus amount have an approximately proportional relationship. If the defocus amount and the positional shift amount are also regarded as approximately proportional, as shown in Expression 3, the coefficient k2 in Expression 4 and Expression 5 is not essential.
The base area setting unit 603 according to this modification sets the size of the second base area in accordance with the PSF size acquired from the PSF size storage unit 804. Fig. 8B is a graph in which the abscissa indicates the PSF size and the ordinate indicates the size of the second base area. As the solid line in Fig. 8B indicates, a size of the second base area that matches the blur of the acquired image can be set by giving the second base area size and the PSF size a proportional relationship. Alternatively, as the broken line in Fig. 8B indicates, the PSF size and the size of the second base area may have a proportional relationship when the PSF size exceeds the threshold 810, while the size of the second base area is kept constant when the PSF size is less than the threshold 810.
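Combining the reconstructed Expressions 4 and 5 with the Fig. 8B mapping, a possible sketch is shown below; the coefficients k1 and k2, the threshold and the scale factor are assumptions:

```python
def psf_size_from_f_number(r1, f_number, k1=2.0, k2=0.0):
    """Expression 4 (as reconstructed): PSF size proportional to |r1| / F."""
    return k1 * abs(r1) / f_number + k2

def psf_size_from_pupil(r1, pupil_diameter, base_line_length, k1=1.0, k2=0.0):
    """Expression 5 (as reconstructed): PSF size scaled by D / w."""
    return k1 * (pupil_diameter / base_line_length) * abs(r1) + k2

def second_size_from_psf(psf_size, scale=2.0, min_size=9, threshold=4.0):
    """Fig. 8B: proportional above the threshold, constant below it (broken line)."""
    if psf_size < threshold:
        return min_size
    return int(round(scale * psf_size)) | 1   # keep the window size odd
```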
In either case, the second base area is more appropriately set by setting the second base area in accordance with the change of the PSF size due to the defocus of the imaging optical system 120. Thereby the area size of the second base area is not set too large (or too small), and an increase of computation amount and positional shift amount error can be prevented.
<Modification 2 of depth calculation unit>
As another modification of this embodiment, the depth calculation unit 102 may include an imaging performance value storage unit instead of the PSF size storage unit 804. The imaging performance value storage unit outputs a value representing the imaging performance of the object image formed by the imaging optical system 120. The imaging performance can be expressed, for example, by the absolute value of the optical transfer function (that is, the modulation transfer function, hereafter called "MTF"), which indicates the imaging performance of the imaging optical system 120. In Fig. 8C, the abscissa indicates the first positional shift amount, and the ordinate indicates the MTF of the imaging optical system 120 at a predetermined spatial frequency. The smaller the absolute value of the first positional shift amount, the smaller the defocus amount and the higher the MTF. The base area setting unit 603 may acquire the MTF corresponding to the first positional shift amount from the imaging performance value storage unit, as shown in Fig. 8D, and set a smaller second base area as the MTF becomes higher. The higher the MTF of the imaging optical system 120, the less the contrast of the object image deteriorates and the clearer the acquired image. Therefore, a base area that takes the optical characteristic of the imaging optical system 120 into account can be set by setting the second base area based on the MTF corresponding to the first positional shift amount. Thereby the second base area is not set too large (or too small), and an increase of the computation amount and the positional shift amount error can be prevented. As shown in modification 1 and modification 2 of this embodiment, the stored information need not be limited to information indicating the size of the PSF or information indicating the MTF; it is only required that information indicating an optical characteristic of the imaging optical system 120 is stored. By setting the second base area in accordance with the blur size of the imaging optical system 120, using the information that indicates the optical characteristic of the imaging optical system 120 and the first positional shift amount, an increase of the computation amount and the positional shift amount error can be prevented, and the depth to the object can be calculated with high accuracy.
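A comparable sketch for the MTF-based variant follows; the lookup-table values and the size mapping are illustrative assumptions, not measured data of any actual lens:

```python
import numpy as np

# assumed lookup: |first positional shift| (pixels) -> MTF at one spatial frequency
SHIFT_SAMPLES = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
MTF_SAMPLES = np.array([0.80, 0.65, 0.45, 0.25, 0.10])

def mtf_for_shift(r1):
    """Imaging performance value storage unit: interpolate the MTF for |r1|."""
    return float(np.interp(abs(r1), SHIFT_SAMPLES, MTF_SAMPLES))

def second_size_from_mtf(mtf, min_size=9, max_size=33):
    """Higher MTF (clearer image) -> smaller second base area."""
    size = min_size + (max_size - min_size) * (1.0 - mtf)
    return int(round(size)) | 1   # keep the window size odd
```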
<Modification 3 of depth calculation unit>
As another modification of this embodiment, the procedure shown in Fig. 8E may be used. In the following description, it is assumed that the depth calculation unit 102 includes a base area setting determination unit in addition to the configuration shown in Fig. 6A.
In Fig. 8E, a base area setting determination step (step S3-4) and a step of setting the first positional shift amount as the second positional shift amount (step S3-5) are added to the procedure in Fig. 6B. In step S3-4, determination processing is executed: processing advances to step S3-2 if the absolute value of the first positional shift amount is greater than a predetermined threshold, and otherwise advances to step S3-5. In step S3-5, the first positional shift amount is set as the second positional shift amount.
In the procedure shown in Fig. 8E, the second base area is set based on the first positional shift amount and the second positional shift amount is calculated only when the contrast of the image drops because the defocus amount is large. If the defocus amount is small, the first positional shift amount is set as the second positional shift amount. By following this procedure, the number of pixels for which the positional shift amount is calculated twice is decreased, so the computation amount can be further reduced while keeping the positional shift amount error small.
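To illustrate how the determination of step S3-4 limits the amount of recomputation over a whole map of first positional shift amounts, here is a hedged sketch; the synthetic map, the threshold and the dummy recalculation callback are assumptions:

```python
import numpy as np

def gated_second_pass(shift_map1, recalc_fn, threshold=5.0):
    """Steps S3-4 and S3-5: recalculate only where |first shift| exceeds the
    threshold; elsewhere reuse the first shift amount as the second one."""
    shift_map2 = shift_map1.copy()
    needs_recalc = np.abs(shift_map1) > threshold
    for y, x in zip(*np.nonzero(needs_recalc)):
        shift_map2[y, x] = recalc_fn(y, x)
    return shift_map2, int(needs_recalc.sum())

# usage with a synthetic first-shift map and a dummy recalculation callback
rng = np.random.default_rng(2)
shift_map1 = rng.integers(-8, 9, size=(16, 16)).astype(float)
shift_map2, n = gated_second_pass(shift_map1, lambda y, x: shift_map1[y, x])
print(n, "of", shift_map1.size, "pixels recalculated")
```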
In the above description, it is assumed that the depth calculation unit 102 includes the base area setting determination unit in addition to the configuration shown in Fig. 6A, but it may instead include the base area setting determination unit in addition to the configuration shown in Fig. 8A. In this case as well, the number of pixels for which the positional shift amount is calculated twice is decreased, and the computation amount can be further reduced.
<Other examples of first image data and second image data acquisition method>
In the first embodiment, two sets of image data having different viewpoints are acquired by splitting the luminous flux of one imaging optical system, but the two sets of image data may instead be acquired using two imaging optical systems. For example, the stereo camera 1000 shown in Fig. 10 may be used. In the stereo camera 1000, two imaging optical systems 1020 and 1021, two imaging elements 1010 and 1011, a depth calculation unit 102, an image storage unit 104, an image generation unit (not illustrated), and a lens driving control unit (not illustrated) are disposed inside a camera casing 130. The imaging optical systems 1020 and 1021, the imaging elements 1010 and 1011, the depth calculation unit 102 and the image storage unit 104 constitute a depth calculation apparatus 110.
In the case of the stereo camera, it is assumed that the image data generated by the imaging element 1010 is the first image data, and the image data generated by the imaging element 1011 is the second image data. The optical characteristics of the imaging optical system 1020 and the imaging optical system 1021 are preferably similar. The depth calculation unit 102 can calculate the depth to the object according to the depth calculation procedure described with reference to Fig. 3. The base line length in the case of the stereo camera 1000 can be the distance between the center position of the exit pupil of the imaging optical system 1020 and the center position of the exit pupil of the imaging optical system 1021. To convert the second positional shift amount, which is calculated according to the positional shift amount calculation procedure described with reference to Fig. 6B, into the depth to the object, a known method can be used.
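One such known method is the rectified-stereo triangulation relation, depth = f × B / d; this relation is general knowledge rather than something spelled out in this document, and the numerical values below are purely illustrative:

```python
def stereo_depth(disparity_px, focal_length_mm, base_line_mm, pixel_pitch_mm):
    """Rectified-stereo triangulation: depth = f * B / d, with the disparity
    converted from pixels to millimetres via the pixel pitch."""
    disparity_mm = disparity_px * pixel_pitch_mm
    return focal_length_mm * base_line_mm / disparity_mm

# illustrative values: 35 mm lens, 60 mm base line, 4 um pixels, 12 px disparity
print(stereo_depth(12, 35.0, 60.0, 0.004))   # 43750.0 mm, about 43.8 m
```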
In this modification as well, the changes of the positional shift amount within the base area and the influence of noise generated when acquiring images can be reduced by setting the size of the second base area in consideration of the optical characteristics of the imaging optical systems 1020 and 1021. In particular, in the case of the stereo camera 1000, the F values of the imaging optical systems 1020 and 1021 must be small in order to acquire high resolution images. In this case, the drop in contrast due to defocus becomes conspicuous, hence a depth calculation apparatus that includes the depth calculation unit 102 according to this embodiment is well suited to calculating the depth to the object.
The above mentioned depth calculation apparatus according to the first embodiment can be implemented by software (programs) or by hardware. For example, a computer program is stored in a memory of a computer (e.g. microcomputer, CPU, MPU, FPGA) included in the imaging apparatus or an image processing apparatus, and the computer executes the program to implement each type of processing. It is also preferable to dispose a dedicated processor, such as an ASIC, to implement all or a part of the processing of the present invention using logic circuits. The present invention is also applicable to a server in a cloud environment.
The present invention may also be implemented as a method constituted by steps executed by a computer of a system or an apparatus, which implements the above mentioned functions of the embodiment by reading and executing a program recorded in a storage apparatus. For this purpose, the program is provided to the computer via a network, or via various types of recording media that can function as a storage apparatus (that is, a computer-readable recording medium that holds data non-transitorily), for example. Therefore, the computer (including such devices as a CPU and an MPU), the method, the program (including program codes and program products), and the computer-readable recording medium that non-transitorily stores the program are all included within the scope of the present invention.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2014-188472, filed on September 17, 2014, and Japanese Patent Application No. 2015-155151, filed on August 5, 2015, which are hereby incorporated by reference herein in their entirety.

Claims (15)

  1. A positional shift amount calculation apparatus that calculates a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the apparatus comprising:
    a calculation unit adapted to calculate the positional shift amount based on data within a predetermined area out of first image data representing the first image and second image data representing the second image; and
    a setting unit adapted to set a relative size of the area to the first and second image data, wherein
    the calculation unit is adapted to calculate a first positional shift amount using the first image data and the second image data in the area having a first size which is preset,
    the setting unit is adapted to set a second size of the area based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system, and
    the calculation unit is adapted to calculate a second positional shift amount using the first image data and the second image data in the area having the second size.
  2. The positional shift amount calculation apparatus according to Claim 1, wherein
    the setting unit sets the second size by changing sizes of the first and second images.
  3. The positional shift amount calculation apparatus according to Claim 1 or 2, wherein
    when an absolute value of the first positional shift amount is greater than a predetermined threshold, the setting unit sets the second size to be larger as the absolute value of the first positional shift amount is greater.
  4. The positional shift amount calculation apparatus according to Claim 3, wherein
    when the absolute value of the first positional shift amount is greater than the predetermined threshold, the setting unit sets the second size and the calculation unit calculates the second positional shift amount, and
    when the absolute value of the first positional shift amount is the predetermined threshold or less, the first positional shift amount is set as the second positional shift amount.
  5. The positional shift amount calculation apparatus according to any one of Claims 1 to 4, wherein
    the first image is an image based on a luminous flux that has passed through a first pupil area in an exit pupil of the first imaging optical system, and
    the second image is an image based on a luminous flux that has passed through a second pupil area, which is different from the first pupil area, in the exit pupil of the first imaging optical system.
  6. The positional shift amount calculation apparatus according to any one of Claims 1 to 4, wherein
    the first image is an image based on the luminous flux that has passed through the first imaging optical system, and
    the second image is an image based on a luminous flux that has passed through a second imaging optical system which is different from the first imaging optical system.
  7. The positional shift amount calculation apparatus according to Claim 5 or 6, further comprising a storage unit in which a PSF size, which is a size of a point spread function of the first imaging optical system, is stored, wherein
    the setting unit acquires the PSF size corresponding to the first positional shift amount from the storage unit, and sets the second size based on the acquired PSF size.
  8. The positional shift amount calculation apparatus according to Claim 7, wherein
    the PSF size is given by the following expression, where r is the first positional shift amount, F is a diaphragm value of the first imaging optical system, and k1 and k2 are predetermined coefficients.
    $\mathrm{PSF_{size}} = k_1 \, \dfrac{|r|}{F} + k_2$
  9. The positional shift amount calculation apparatus according to Claim 7, wherein
    the PSF size is three times a standard deviation of the point spread function corresponding to the first positional shift amount.
  10. The positional shift amount calculation apparatus according to Claim 5, further comprising a storage unit in which a PSF size, which is a size of a point spread function of the first imaging optical system, is stored, wherein
    the setting unit acquires the PSF size corresponding to the first positional shift amount from the storage unit, and sets the second size based on the acquired PSF size, and
    the PSF size is given by the following expression, where r is the first positional shift amount, w is a distance between a center of gravity position of the first pupil area and a center of gravity position of the second pupil area, D is a diameter of the exit pupil of the imaging optical system, and k1 and k2 are predetermined coefficients.
    $\mathrm{PSF_{size}} = k_1 \, \dfrac{D}{w} \, |r| + k_2$
  11. The positional shift amount calculation apparatus according to Claim 5, further comprising a storage unit in which an evaluation value representing imaging performance of the first imaging optical system is stored, wherein
    the setting unit acquires the evaluation value corresponding to the first positional shift amount from the storage unit, and sets the second size based on the acquired evaluation value.
  12. The positional shift amount calculation apparatus according to Claim 11, wherein
    the evaluation value indicates an optical transfer function of the first imaging optical system at a predetermined spatial frequency.
  13. The positional shift amount calculation apparatus according to any one of Claims 1 to 12, further comprising a depth conversion unit adapted to convert the second positional shift amount, calculated by the calculation unit, into a defocus amount that is a distance from an estimated focal position to a focal position of an imaging optical system, based on a predetermined conversion coefficient.
  14. An imaging apparatus, comprising:
    an imaging optical system;
    an imaging element adapted to acquire image data based on a luminous flux that has passed through the imaging optical system; and
    the positional shift amount calculation apparatus according to any one of Claims 1 to 13.
  15. A positional shift amount calculation method for a positional shift amount calculation apparatus to calculate a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the method comprising:
    a first calculation step of calculating a first positional shift amount based on data within an area having a predetermined first size, out of first image data representing the first image and second image data representing the second image;
    a setting step of setting a second size, which is a relative size of the area to the first and second image data, based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system; and
    a second calculation step of calculating a second positional shift amount using the first image data and the second image data in the area having the second size.
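  The two-stage procedure recited in the method of Claim 15 can be summarised with a short sketch. The Python code below is a minimal illustration under stated assumptions rather than the claimed implementation: the function names (block_matching_shift, psf_size_model, two_stage_shift), the sum-of-absolute-differences matching criterion, and the linear blur model k1*|r|/F + k2 are all assumptions introduced here for illustration; in particular, the blur model merely stands in for the expressions referenced in Claims 8 and 10, which are not reproduced in this text. The sketch searches once with the predetermined first window, widens the window according to the magnitude of the first shift and the assumed blur size, and searches again to obtain the second positional shift amount; a sub-pixel refinement and the conversion to a defocus amount by a predetermined conversion coefficient (Claim 13) would follow in a fuller implementation.

    import numpy as np

    def block_matching_shift(img_a, img_b, center, half_window, search_range):
        """Integer shift of img_b relative to img_a (1-D signals) that minimises
        the sum of absolute differences inside a window of 2*half_window + 1
        samples centred on `center` (assumed to lie away from the borders)."""
        lo, hi = center - half_window, center + half_window + 1
        reference = img_a[lo:hi].astype(np.float64)
        best_shift, best_cost = 0, np.inf
        for s in range(-search_range, search_range + 1):
            if lo + s < 0 or hi + s > len(img_b):
                continue  # candidate window would fall outside the image
            candidate = img_b[lo + s:hi + s].astype(np.float64)
            cost = np.abs(reference - candidate).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, s
        return best_shift

    def psf_size_model(shift, f_number, k1=1.0, k2=2.0):
        """Assumed illustrative blur-size model (not the claimed expressions):
        blur grows roughly linearly with the shift magnitude and shrinks as
        the F-number increases."""
        return k1 * abs(shift) / f_number + k2

    def two_stage_shift(img_a, img_b, center, first_half_window, search_range, f_number):
        # First calculation step: search with the predetermined first window size.
        first_shift = block_matching_shift(img_a, img_b, center,
                                           first_half_window, search_range)
        # Setting step: widen the window where the estimated blur is large, so
        # the window stays larger than the blurred image structure.
        second_half_window = max(first_half_window,
                                 int(round(psf_size_model(first_shift, f_number))))
        # Second calculation step: repeat the search with the adapted window.
        return block_matching_shift(img_a, img_b, center,
                                    second_half_window, search_range)

    if __name__ == "__main__":
        # Toy check: a smooth edge shifted by 4 samples between the two signals.
        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 200)
        edge = np.tanh(10.0 * x)
        img_a = edge + rng.normal(scale=0.01, size=x.size)
        img_b = np.roll(edge, 4) + rng.normal(scale=0.01, size=x.size)
        print(two_stage_shift(img_a, img_b, center=100, first_half_window=8,
                              search_range=20, f_number=2.8))  # expected: 4

  Whether the window is grown symmetrically, clipped at the image boundary, or combined with a reliability check is left open here; the claims only require that the second size be set from the first positional shift amount and an optical characteristic of the first imaging optical system.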
PCT/JP2015/004474 2014-09-17 2015-09-03 Positional shift amount calculation apparatus and imaging apparatus WO2016042721A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/508,625 US10339665B2 (en) 2014-09-17 2015-09-03 Positional shift amount calculation apparatus and imaging apparatus
EP15842093.5A EP3194886A4 (en) 2014-09-17 2015-09-03 Positional shift amount calculation apparatus and imaging apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2014188472 2014-09-17
JP2014-188472 2014-09-17
JP2015-155151 2015-08-05
JP2015155151A JP6642998B2 (en) 2014-09-17 2015-08-05 Image shift amount calculating apparatus, imaging apparatus, and image shift amount calculating method

Publications (1)

Publication Number Publication Date
WO2016042721A1 (en) 2016-03-24

Family

ID=55532779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/004474 WO2016042721A1 (en) 2014-09-17 2015-09-03 Positional shift amount calculation apparatus and imaging apparatus

Country Status (1)

Country Link
WO (1) WO2016042721A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4812869A (en) 1986-07-10 1989-03-14 Canon Kabushiki Kaisha Focus detecting apparatus
JP2008134641A (en) * 1997-08-27 2008-06-12 Nikon Corp Interchangeable lens
JP2007233032A (en) 2006-03-01 2007-09-13 Nikon Corp Focusing device and imaging apparatus
JP2010117593A (en) * 2008-11-13 2010-05-27 Olympus Corp Device for acquiring distance information, imaging apparatus, and program
US20140247344A1 (en) * 2011-06-24 2014-09-04 Konica Minolta, Inc. Corresponding point search device and distance measurement device
JP2014038151A (en) * 2012-08-13 2014-02-27 Olympus Corp Imaging apparatus and phase difference detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KANADE T ET AL.: "A stereo matching algorithm with an adaptive window: theory and experiment", Proceedings of the International Conference on Robotics and Automation, Sacramento, vol. 7, 9 April 1991, IEEE Comp. Soc. Press, pages 1088-1095
MASATOSHI OKUTOMI ET AL.: "A locally adaptive window for signal matching", International Journal of Computer Vision, vol. 7, 1 January 1992, Kluwer Academic Publishers, pages 143-162
See also references of EP3194886A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4067813A1 (en) * 2021-03-30 2022-10-05 Canon Kabushiki Kaisha Distance measurement device, moving device, distance measurement method, control method for moving device, and storage medium

Similar Documents

Publication Title
US10061182B2 (en) Systems and methods for autofocus trigger
JP5173665B2 (en) Image capturing apparatus, distance calculation method thereof, and focused image acquisition method
CN109255810B (en) Image processing apparatus and image processing method
US10070038B2 (en) Image processing apparatus and method calculates distance information in a depth direction of an object in an image using two images whose blur is different
JP6786225B2 (en) Image processing equipment, imaging equipment and image processing programs
US10356381B2 (en) Image output apparatus, control method, image pickup apparatus, and storage medium
WO2016079965A1 (en) Depth detection apparatus, imaging apparatus and depth detection method
US10339665B2 (en) Positional shift amount calculation apparatus and imaging apparatus
US10204400B2 (en) Image processing apparatus, imaging apparatus, image processing method, and recording medium
JP6353233B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US20190297267A1 (en) Control apparatus, image capturing apparatus, control method, and storage medium
JP2017134561A (en) Image processing device, imaging apparatus and image processing program
US10084978B2 (en) Image capturing apparatus and image processing apparatus
US10122939B2 (en) Image processing device for processing image data and map data with regard to depth distribution of a subject, image processing system, imaging apparatus, image processing method, and recording medium
US20150145988A1 (en) Image processing apparatus, imaging apparatus, and image processing method
US10326951B2 (en) Image processing apparatus, image processing method, image capturing apparatus and image processing program
WO2016042721A1 (en) Positional shift amount calculation apparatus and imaging apparatus
US9936121B2 (en) Image processing device, control method of an image processing device, and storage medium that stores a program to execute a control method of an image processing device
JP2018081378A (en) Image processing apparatus, imaging device, image processing method, and image processing program
JP2013141192A (en) Depth-of-field expansion system, and depth-of-field expansion method
JP2021071793A (en) Image processing device, image processing method, imaging device, program, and storage medium
US20190089891A1 (en) Image shift amount calculation apparatus and method, image capturing apparatus, defocus amount calculation apparatus, and distance calculation apparatus
JP6639155B2 (en) Image processing apparatus and image processing method
JP2015203756A (en) Parallax amount calculation device, distance calculation device, imaging apparatus, and parallax amount calculation method
JP2009237652A (en) Image processing apparatus and method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15842093

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15508625

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2015842093

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015842093

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE