WO2013005602A1 - Image capture device and image processing device - Google Patents


Info

Publication number
WO2013005602A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blur
image signal
center
band
Application number
PCT/JP2012/066187
Other languages
French (fr)
Japanese (ja)
Inventor
Takahiro Yano
Kazuya Yamanaka
Original Assignee
Olympus Corporation
Application filed by Olympus Corporation
Publication of WO2013005602A1 publication Critical patent/WO2013005602A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/34Systems for automatic generation of focusing signals using different areas in a pupil plane
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • G03B35/12Stereoscopic photography by simultaneous recording involving recording of different viewpoint images in different colours on a colour film
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/672Focus control based on electronic image sensor signals based on the phase difference signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B11/00Filters or other obturators specially adapted for photographic purposes
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems

Definitions

  • The present invention relates to an imaging apparatus and an image processing apparatus that can acquire distance information based on an image obtained from an imaging element.
  • The distance information can be used for AF processing by an automatic focus adjustment (AF) mechanism, for creating a stereoscopic image, or for image processing (for example, subject extraction, background extraction, or blur amount control), and thus serves to realize various functions in the imaging apparatus.
  • Various distance measurement methods have been proposed: an active method, in which illumination light is projected and the light reflected from the subject is received to measure distance; a triangulation method, in which images are acquired by a plurality of image pickup devices (for example, stereo cameras) arranged a baseline length apart, in accordance with the principle of triangulation; and a contrast AF method, in which the focus lens is driven so as to increase the contrast of the image acquired by the imaging apparatus itself.
  • However, the active method requires a dedicated distance measurement member such as a light projection device, and the triangulation method requires a plurality of image pickup devices, so the cost and size of the apparatus will increase.
  • Since the contrast AF method uses an image acquired by the imaging apparatus itself, no dedicated distance measuring member is required.
  • However, in the contrast AF method the contrast value peak is found by performing imaging multiple times while changing the position of the focus lens; it therefore takes time to search for the peak corresponding to the in-focus position, and high-speed AF is difficult.
  • Accordingly, a technique has been proposed in which the light beam passing through the pupil of the lens is divided into a plurality of light beams, and distance information to the subject is obtained by performing a correlation operation between the pixel signal obtained from the light beam that has passed through one pupil region of the lens and the pixel signal obtained from the light beam that has passed through another pupil region of the lens.
  • As a technique for simultaneously acquiring such pupil-divided images, there is, for example, a technique of disposing a light shielding plate (mask) over pixels used for distance detection.
  • Japanese Patent Laid-Open No. 2001-174696 describes a technique in which a pupil color division filter having different spectral characteristics for each partial pupil is provided in the imaging optical system, and the subject image from the photographic optical system is received by a color image sensor to perform pupil division by color.
  • The image signal output from the color image sensor is color-separated, and the relative shift amount of the same subject between the color images is detected, thereby acquiring two pieces of focusing information: the focus shift direction, that is, whether the image is shifted from the in-focus position toward the near side or the far side, and the focus shift amount, that is, the amount of shift from the in-focus position in that direction.
  • Japanese Patent Application Laid-Open No. 11-344661 describes an AF diaphragm in which different color filters (a G filter and an M filter) are arranged in openings located at different eccentric positions with respect to the center of the AF lens.
  • A cross-correlation calculating unit calculates the cross-correlation between the image data of each color, and a distance/direction calculation unit calculates the distance and direction to the in-focus position of the AF lens based on the cross-correlation and drives the AF lens to the in-focus position.
  • The present invention has been made in view of the above circumstances, and its object is to provide an imaging apparatus and an image processing apparatus that can turn an image, captured with light that has passed through different pupil regions of the imaging optical system depending on the band, into an image more suitable for viewing.
  • An imaging apparatus according to one aspect of the present invention includes: a color imaging device that receives and photoelectrically converts at least light in a first band and light in a second band to generate a first image signal and a second image signal, respectively; an imaging optical system that forms a subject image on the imaging device; a band limiting filter, disposed on the optical path of the photographic light flux from the subject side of the imaging optical system to the imaging device, that performs a first band limitation blocking light in the first band and passing light in the second band on light attempting to pass through one part of the pupil region of the imaging optical system, and a second band limitation blocking light in the second band and passing light in the first band on light attempting to pass through another part of the pupil region; a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal; and an image correction unit that corrects, based on the phase difference amount, the positional deviation between the blur centroid position of the first image signal and the blur centroid position of the second image signal.
  • An image processing apparatus according to another aspect of the present invention processes an image obtained by an imaging device that includes: a color imaging device that receives at least light in the first band and light in the second band and photoelectrically converts them to generate a first image signal and a second image signal; and a band limiting filter that performs a first band limitation blocking light in the first band and passing light in the second band on light attempting to pass through one part of the pupil region of the imaging optical system, and a second band limitation blocking light in the second band and passing light in the first band on light attempting to pass through another part of the pupil region. The image processing apparatus includes a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal, and an image correction unit that corrects, based on the phase difference amount calculated by the calculation unit, the positional deviation between the blur centroid position of the first image signal and the blur centroid position of the second image signal.
  • FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a diagram for explaining the pixel array of the image sensor in Embodiment 1.
  • FIG. 3 is a diagram for explaining a configuration example of the band limiting filter in Embodiment 1.
  • FIG. 5 is a plan view showing how the subject light flux is condensed when imaging a subject closer than the in-focus position in Embodiment 1.
  • FIG. 6 is a diagram showing the blur shape formed by light from one point on a subject closer than the in-focus position in Embodiment 1.
  • FIG. 9 is a diagram showing the blur shape formed by light from one point on a subject farther than the in-focus position in Embodiment 1.
  • FIG. 11 is a diagram showing the images obtained when imaging a subject at the in-focus position and subjects at near and far distances.
  • FIG. 15 is a diagram showing a fourth modification of the band limiting filter in Embodiment 1.
  • FIG. 16 is a diagram showing an outline of the blur shape after the colorization process in Embodiment 1.
  • FIG. 18 is a diagram showing the shape of the B filter kernel applied to the B image of a subject on the far side of the in-focus position in the colorization process of Embodiment 1.
  • FIG. 23 is a flowchart showing the colorization process performed by the color image generation unit in Embodiment 1.
  • A block diagram illustrating the configuration of an image processing apparatus according to Embodiment 1.
  • A diagram illustrating how the blur diffusion partial region of the original R image is copied and added to the R copy image in Embodiment 2.
  • A diagram illustrating how the blur diffusion partial region of the original B image is copied and added to the B copy image in Embodiment 2.
  • A diagram illustrating an outline of the per-color PSF table according to the phase difference amount in Embodiment 3 of the present invention.
  • A diagram illustrating an outline of the colorization process performed by the color image generation unit in Embodiment 3.
  • A flowchart illustrating the colorization process performed by the color image generation unit in Embodiment 3.
  • FIG. 1 is a block diagram showing a configuration of an imaging apparatus.
  • the imaging apparatus of the present embodiment is configured as a digital still camera, for example.
  • Here, a digital still camera is taken as an example, but the imaging apparatus may be any device that has a color imaging device and an imaging function.
  • the imaging apparatus includes a lens unit 1 and a body unit 2 that is a main body portion to which the lens unit 1 is detachably attached via a lens mount.
  • A case where the lens unit 1 is detachable is described here as an example, but of course it need not be detachable.
  • the lens unit 1 includes an imaging optical system 9 including a lens 10 and a diaphragm 11, a band limiting filter 12, a lens control unit 14, and a lens side communication connector 15.
  • The body unit 2 includes a shutter 21, an imaging element 22, an imaging circuit 23, an imaging drive unit 24, an image processing unit 25, an image memory 26, a display unit 27, an interface (IF) 28, a system controller 30, a sensor unit 31, an operation unit 32, a strobe control circuit 33, a strobe 34, and a body-side communication connector 35.
  • Although a recording medium 29 is also depicted inside the body unit 2 in FIG. 1, the recording medium 29 is configured as a memory card (SmartMedia, an SD card, an xD-Picture Card, or the like) that is detachable from the imaging apparatus, and therefore need not be a configuration unique to the imaging apparatus.
  • the imaging optical system 9 in the lens unit 1 is for forming a subject image on the imaging element 22.
  • the lens 10 of the imaging optical system 9 includes a focus lens for performing focus adjustment.
  • Although the lens 10 is generally composed of a plurality of lenses, only one lens is shown in FIG. 1 for simplicity.
  • the diaphragm 11 of the imaging optical system 9 is for adjusting the brightness of the subject image formed on the imaging element 22 by regulating the passage range of the subject luminous flux that passes through the lens 10.
  • The band limiting filter 12 is disposed on the optical path of the photographic light flux from the subject side of the imaging optical system 9 to the imaging element 22 (preferably at, or in the vicinity of, the position of the diaphragm 11 of the imaging optical system 9). It is a filter that performs a first band limitation, blocking light in the first band and passing light in the second band, on light attempting to pass through one part of the pupil region of the imaging optical system 9, and a second band limitation, blocking light in the second band and passing light in the first band, on light attempting to pass through another part of the pupil region of the imaging optical system 9.
  • More specifically, the band limiting filter 12 of the present embodiment performs a first band limitation on the imaging light flux attempting to pass through a first region, which is one part of the pupil region of the imaging optical system 9, blocking light in the first band and passing light in the second band and the third band, and performs a second band limitation on the imaging light flux attempting to pass through a second region, which is another part of the pupil region of the imaging optical system 9, blocking light in the second band and passing light in the first band and the third band.
  • FIG. 3 is a diagram for explaining a configuration example of the band limiting filter 12.
  • In the configuration example shown in FIG. 3, the band limiting filter 12 divides the pupil region of the imaging optical system 9 into a first region and a second region.
  • When viewed from the imaging element 22 with the imaging apparatus in the standard posture (the so-called lateral position, in which the camera is held normally), the left half of the band limiting filter 12 is an RG filter 12r that passes the G (green) component and the R (red) component and blocks the B (blue) component (the B component being one of the first band and the second band), and the right half is a GB filter 12b that passes the G component and the B component and blocks the R component.
  • Accordingly, the band limiting filter 12 passes the entire G component contained in the light passing through the aperture of the diaphragm 11 of the imaging optical system 9 (that is, the G component is the third band), passes the R component through only one half region of the aperture, and passes the B component through only the remaining half region of the aperture. It is desirable that the RGB spectral transmission characteristics of the band limiting filter 12 be the same as, or as close as possible to, the RGB spectral transmission characteristics of the element filter of the imaging element 22 (see FIG. 2). Other configuration examples of the band limiting filter 12 will be described later with reference to FIGS. 12 to 15.
  • The lens control unit 14 controls the lens unit 1. That is, based on commands received from the system controller 30 via the lens-side communication connector 15 and the body-side communication connector 35, the lens control unit 14 drives the focus lens in the lens 10 to perform focusing, and drives the diaphragm 11 to change its aperture diameter.
  • The lens-side communication connector 15 is a connector that, when the lens unit 1 and the body unit 2 are coupled by the lens mount and it is connected to the body-side communication connector 35, enables communication between the lens control unit 14 and the system controller 30.
  • the shutter 21 in the body unit 2 is an optical shutter for adjusting the exposure time of the image sensor 22 by regulating the passage time of the subject light beam reaching the image sensor 22 from the lens 10.
  • Although an optical shutter is used here, an element shutter (electronic shutter) of the imaging element 22 may be used instead of, or in addition to, the optical shutter.
  • The imaging element 22 is a color image sensor (for example, a CCD or a CMOS sensor) that receives the subject image formed by the imaging optical system 9, photoelectrically converts it for each of a plurality of wavelength bands (for example, RGB, though the bands are not limited to these), and outputs the resulting signal.
  • The color image sensor may be a single-plate image sensor provided with an on-chip element color filter, a three-plate type using a dichroic prism that color-separates the light into RGB color lights, a type that acquires RGB imaging information according to the position in the depth direction of the semiconductor at the same pixel position, or any other imaging element capable of acquiring imaging information in a plurality of wavelength bands.
  • FIG. 2 is a diagram for explaining the pixel arrangement of the image sensor 22.
  • In the configuration example shown in FIG. 2, a single-plate image sensor is configured in which the plurality of wavelength bands transmitted through the on-chip element color filter are R, G, and B. Therefore, when the imaging element 22 has the configuration shown in FIG. 2, only one color component is obtained per pixel, so the image processing unit 25 performs a demosaicing process to generate a color image in which the three RGB colors are available for every pixel.
  • In the present embodiment, the imaging element 22 receives and photoelectrically converts at least R light, for example, as the light in the first band and B light, for example, as the light in the second band, generating an R image signal as the first image signal and a B image signal as the second image signal.
  • The imaging circuit 23 amplifies (gain-adjusts) the image signal output from the imaging element 22 and, when the imaging element 22 is an analog imaging element that outputs an analog image signal, performs A/D conversion to produce a digital image signal (hereinafter also referred to as image information); when the imaging element 22 is a digital imaging element, the signal input to the imaging circuit 23 is already digital, so A/D conversion is not performed.
  • the imaging circuit 23 outputs an image signal to the image processing unit 25 in a format corresponding to the imaging mode switched by the imaging driving unit 24 as will be described later.
  • The imaging drive unit 24 supplies timing signals and power to the imaging element 22 and the imaging circuit 23 based on commands from the system controller 30, causing the imaging element 22 to perform exposure, readout, element shuttering, and the like, and controls the imaging circuit 23 to execute gain adjustment and A/D conversion in synchronization with these operations. The imaging drive unit 24 also performs control to switch the imaging mode of the imaging element 22.
  • The image processing unit 25 performs digital image processing such as WB (white balance) adjustment, black level correction, γ correction, defective pixel correction, demosaicing, color information conversion of the image information, and pixel-count conversion of the image information.
  • The image processing unit 25 further includes an inter-color correction unit 36 and a color image generation unit 37, which serves as both the image correction unit and the setting unit.
  • Since the band limiting filter 12 passes the G component through the entire pupil region but passes the R component and the B component each through only part of the pupil region, a difference in brightness arises between the bands (between the colors); the inter-color correction unit 36 corrects this difference in brightness between the bands (colors).
  • The brightness difference between the bands (colors) can be corrected simply according to the area of the pass region for each band; however, more detailed correction according to the optical characteristics of the imaging optical system 9 may be performed, taking into account the tendency of the amount of light to decrease in the periphery of the image relative to the center.
  • Moreover, the correction value is not limited to being calculated within the imaging apparatus; it may instead be held in advance as table data or the like.
  • the color image generation unit 37 performs a colorization process that is a digital process for forming color image information.
  • Since a spatial positional deviation may occur between the R image and the B image, this spatial positional deviation is corrected in the colorization process.
  • FIG. 4 is a plan view showing how the subject light flux is condensed when imaging a subject at the in-focus position; FIG. 5 is a plan view showing how the subject light flux is condensed when imaging a subject closer than the in-focus position; FIG. 6 is a diagram showing the shape of the blur formed by light from one point on a subject closer than the in-focus position; FIG. 7 is a diagram showing, for each color component, the shape of the blur formed by light from one point on a subject closer than the in-focus position; FIG. 8 is a plan view showing how the subject light flux is condensed when imaging a subject farther than the in-focus position; FIG. 9 is a diagram showing the shape of the blur formed by light from one point on a subject farther than the in-focus position; FIG. 10 is a diagram showing, for each color component, the shape of the blur formed by light from one point on a subject farther than the in-focus position; and FIG. 11 is a diagram showing the images obtained when imaging a subject at the in-focus position and subjects at near and far distances.
  • Here, a case where the aperture of the diaphragm 11 has a circular shape will be described as an example.
  • When a subject OBJc at the in-focus position is imaged, the light emitted from one point on the subject OBJc is condensed at one point on the imaging element 22 to form a point image IMGrgb: this holds for the G component, which passes through the entire band limiting filter 12 as shown in FIG. 4, for the R component, which passes only through the RG filter 12r half of the band limiting filter 12, and for the B component, which passes only through the other, GB filter 12b, half. Accordingly, as shown in FIG. 11, a point image IMGrgb without color blur is formed.
  • When the subject OBJn is, for example, closer than the in-focus position, the light emitted from one point on the subject OBJn forms, as shown in FIGS. 5 to 7, a subject image IMGg with a circular blur for the G component, a subject image IMGr with a semicircular blur of the left half for the R component, and a subject image IMGb with a semicircular blur of the right half for the B component. Therefore, when the subject OBJn closer than the in-focus position is imaged, a blurred image is obtained in which the R component subject image IMGr is shifted to the left and the B component subject image IMGb is shifted to the right, as shown in FIG. 11.
  • The left-right positions of the R component and the B component in this blurred image are the same as the left-right positions of the R component transmission region (RG filter 12r) and the B component transmission region (GB filter 12b) of the band limiting filter 12 as viewed from the imaging element 22 (in FIG. 11, the deviation is exaggerated, and the blur shape actually produced is not illustrated; the same applies to the far-side case below).
  • As the subject OBJn moves farther away from the in-focus position, the blur increases, and the distance between the centroid of the R component subject image IMGr and the centroid of the B component subject image IMGb increases.
  • When the subject OBJf is, for example, farther than the in-focus position, the light emitted from one point on the subject OBJf forms, as shown in FIGS. 8 to 10, a subject image IMGg with a circular blur for the G component, a subject image IMGr with a semicircular blur of the right half for the R component, and a subject image IMGb with a semicircular blur of the left half for the B component. Therefore, when the subject OBJf farther than the in-focus position is imaged, a blurred image is obtained in which the R component subject image IMGr is shifted to the right and the B component subject image IMGb is shifted to the left, as shown in FIG. 11.
  • In this case, the left-right positions of the R component and the B component in the blurred image are the opposite of the left-right positions of the R component transmission region (RG filter 12r) and the B component transmission region (GB filter 12b) of the band limiting filter 12 as viewed from the imaging element 22.
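  • The blur geometry described above can be modeled numerically. The following sketch is our own illustration, not part of the patent (the function name and parameters are ours): it builds per-color point spread functions for one defocused point, where G sees the full circular aperture while R and B each see one half, and the halves swap sides between the near and far cases.

```python
import numpy as np

def color_psfs(radius, near_side=True, size=None):
    """Model PSFs for one defocused point behind the band limiting filter.

    G passes the whole circular pupil; R and B pass only one half each.
    For a subject nearer than the in-focus position, the R blur sits on
    the left and the B blur on the right; on the far side the halves swap.
    """
    size = size or (2 * radius + 1)
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    disk = ((x - c) ** 2 + (y - c) ** 2) <= radius ** 2
    left, right = disk & (x <= c), disk & (x >= c)
    r_half, b_half = (left, right) if near_side else (right, left)
    norm = lambda m: m.astype(float) / m.sum()
    return {"R": norm(r_half), "G": norm(disk), "B": norm(b_half)}

psfs = color_psfs(radius=8, near_side=True)
# Centroid x-offsets: R negative (left), G zero, B positive (right).
for name, psf in psfs.items():
    xs = np.arange(psf.shape[1]) - psf.shape[1] // 2
    print(name, round(float((psf.sum(axis=0) * xs).sum()), 2))
```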
  • The image memory 26 is a memory capable of high-speed writing and reading, composed of, for example, SDRAM (Synchronous Dynamic Random Access Memory), and is used as a work area for image processing. For example, the image memory 26 not only stores the final image processed by the image processing unit 25 but also appropriately stores intermediate images at multiple stages of processing by the image processing unit 25.
  • The display unit 27 includes an LCD or the like and displays images processed for display by the image processing unit 25 (including images read from the recording medium 29 and then processed for display by the image processing unit 25). Specifically, the display unit 27 performs live view display, confirmation display when recording a still image, playback display of still images and moving images read from the recording medium 29, and the like.
  • the interface (IF) 28 is detachably connected to the recording medium 29, and transmits information to be recorded on the recording medium 29 and information read from the recording medium 29.
  • the recording medium 29 is for recording an image processed for recording by the image processing unit 25 and various data related to the image, and is configured as a memory card or the like as described above.
  • The sensor unit 31 includes, for example, a camera shake sensor (configured with an acceleration sensor or the like) for detecting shake of the imaging apparatus, a temperature sensor for measuring the temperature of the imaging element 22, and a brightness sensor for measuring the brightness around the imaging apparatus.
  • the detection result by the sensor unit 31 is input to the system controller 30.
  • the detection result by the camera shake sensor is used to drive the image pickup device 22 and the lens 10 to perform camera shake correction or to perform camera shake correction by image processing.
  • the detection result by the temperature sensor is used to control the drive clock by the imaging drive unit 24 and to estimate the amount of noise in the image obtained from the image sensor 22.
  • the detection result by the brightness sensor is used, for example, to appropriately control the luminance of the display unit 27 according to the ambient brightness.
  • The operation unit 32 includes a power switch for turning the power of the imaging apparatus on and off, a release button with a two-stage press for inputting imaging operations for still images, moving images, and the like, a mode button for changing the imaging mode and the like, and cross keys used to change selected items, numerical values, and the like.
  • a signal generated by the operation of the operation unit 32 is input to the system controller 30.
  • the strobe control circuit 33 controls the light emission amount and the light emission timing of the strobe 34 based on a command from the system controller 30.
  • the strobe 34 is a light source that emits illumination light to the subject under the control of the strobe control circuit 33.
  • The body-side communication connector 35 is a connector that, when the lens unit 1 and the body unit 2 are coupled by the lens mount and it is connected to the lens-side communication connector 15, enables communication between the lens control unit 14 and the system controller 30.
  • the system controller 30 controls the body unit 2 and also controls the lens unit 1 via the lens control unit 14, and is a control unit that integrally controls the imaging apparatus.
  • the system controller 30 reads a basic control program of the image pickup apparatus from a non-illustrated non-volatile memory such as a flash memory, and controls the entire image pickup apparatus in accordance with an input from the operation unit 32.
  • The system controller 30 also controls the aperture adjustment of the diaphragm 11 via the lens control unit 14, controls and drives the shutter 21, and controls a shake correction mechanism (not shown) based on the detection result of the acceleration sensor of the sensor unit 31 to perform camera shake correction and the like.
  • Further, the system controller 30 sets the mode of the imaging apparatus (a still image mode for capturing still images, a moving image mode for capturing moving images, a 3D mode for capturing stereoscopic images, and the like) in response to input from the mode button of the operation unit 32.
  • the system controller 30 includes a contrast AF control unit 38 and a distance calculation unit 39 as a calculation unit.
  • The system controller 30 performs AF control by the contrast AF control unit 38, or controls the lens unit 1 to perform AF based on the distance information calculated by the distance calculation unit 39.
  • The contrast AF control unit 38 generates a contrast value (also referred to as an AF evaluation value) from an image signal output from the image processing unit 25 (this image signal may be the G image, which contains a high proportion of the luminance component, or a luminance signal image derived from an image whose color misregistration has been corrected by the colorization process described later), and controls the focus lens in the lens 10 via the lens control unit 14. That is, the contrast AF control unit 38 obtains the contrast value by extracting high-frequency components from the image signal with a filter, for example a high-pass filter. The contrast AF control unit 38 then acquires the contrast value while changing the focus lens position, moves the focus lens in the direction in which the contrast value increases, and acquires the contrast value again. By repeating this process, the focus lens is driven to the focus lens position (in-focus position) at which the maximum contrast value is obtained.
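  • As an illustration of this search, here is a minimal hill-climbing sketch in Python (our own, not from the patent; `capture_at`, the step size, and the lens-position bounds are hypothetical):

```python
import numpy as np

def contrast_value(img):
    # AF evaluation value: energy of high-frequency components,
    # here approximated by horizontal first differences.
    return float(np.sum(np.abs(np.diff(img.astype(float), axis=1))))

def contrast_af(capture_at, pos=0.5, step=0.05, lo=0.0, hi=1.0):
    """Hill-climb the focus lens position toward maximum contrast.

    capture_at(pos) -> 2-D image captured with the focus lens at pos.
    """
    best = contrast_value(capture_at(pos))
    direction = 1.0
    while True:
        nxt = min(max(pos + direction * step, lo), hi)
        val = contrast_value(capture_at(nxt))
        if val > best:
            pos, best = nxt, val   # contrast still rising: keep climbing
        elif direction > 0:
            direction = -1.0       # try the opposite direction once
        else:
            return pos             # peak (in-focus position) found
```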
  • The distance calculation unit 39 calculates the phase difference amount between the first image in the first band and the second image in the second band obtained from the imaging element 22, and calculates the distance to the subject based on the calculated phase difference amount.
  • Specifically, the distance calculation unit 39 calculates distance information according to the lens formula from the amount of deviation between the R component and the B component as shown in FIGS. 7, 10, and 11. First, the distance calculation unit 39 extracts the R component and the B component from the RGB color components obtained as the captured image. Then, by calculating the correlation between the R component and the B component, it calculates the direction and magnitude of the deviation occurring between the R component image and the B component image (the present invention is not limited to this; it is also possible to calculate the direction and magnitude of the deviation occurring between the R image and the G image, or between the B image and the G image).
  • the direction of deviation is information for determining whether the subject of interest is closer or farther than the in-focus position.
  • the magnitude of the deviation is information for determining how far the subject of interest is away from the in-focus position.
  • Based on the calculated direction and magnitude of the deviation, the distance calculation unit 39 calculates how far the subject of interest is from the in-focus position, and on which side (near or far).
  • The distance calculation unit 39 performs the distance calculation described above for a distance calculation area determined by the system controller 30 based on input from the operation unit 32 or on the basic control program of the imaging apparatus (for example, the entire captured image, or a partial region of the captured image for which distance information is desired).
  • a technique described in Japanese Patent Application Laid-Open No. 2001-16611 can be used as such a technique for obtaining the deviation amount and calculating the distance information.
  • the distance information acquired by the distance calculation unit 39 can be used for, for example, autofocus (AF).
  • That is, the distance calculation unit 39 acquires distance information based on the deviation between the R component and the B component, and the system controller 30 drives the focus lens of the lens 10 via the lens control unit 14 based on the acquired distance information; in other words, phase difference AF can be performed. High-speed AF based on a single captured image is thereby possible.
  • AF control may be performed based on the calculation result of the distance calculation unit 39, or may be performed by the contrast AF control unit 38.
  • Contrast AF by the contrast AF control unit 38 has high focusing accuracy, but it requires a plurality of captured images, so the focusing speed cannot be said to be fast.
  • On the other hand, since the calculation of the subject distance by the distance calculation unit 39 can be performed from a single captured image, the focusing speed is fast, but the focusing accuracy may be inferior to that of contrast AF.
  • Therefore, the AF assist control unit 38a provided in the contrast AF control unit 38 may perform AF by combining the contrast AF control unit 38 and the distance calculation unit 39, as sketched below. That is, the distance calculation unit 39 first performs a distance calculation based on the deviation between the R component and the B component of the image acquired through the band limiting filter 12, and acquires whether the subject is on the far side or the near side of the current focus position (or, further, how far the subject is from the current focus position). Next, the AF assist control unit 38a controls the contrast AF control unit 38 to drive the focus lens toward the acquired far or near side (by the acquired distance) and perform contrast AF. By performing such processing, high focusing accuracy can be obtained at a fast focusing speed.
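  • The assist logic can be summarized in a short sketch (ours; the shift-to-lens-travel gain and the helper signatures are hypothetical stand-ins for the distance calculation unit 39 and the contrast AF control unit 38):

```python
import numpy as np

def signed_phase_difference(r_img, b_img, max_shift=16):
    """Horizontal shift of the B image relative to the R image that
    maximizes correlation; the sign indicates near side vs. far side."""
    r = r_img.astype(float) - r_img.mean()
    best_s, best_c = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        b = np.roll(b_img.astype(float), s, axis=1) - b_img.mean()
        c = float((r * b).sum())
        if c > best_c:
            best_s, best_c = s, c
    return best_s

def af_assist(capture_rb_at, refine_with_contrast_af, pos, gain=0.01):
    """Drive the lens coarsely from one R/B phase measurement, then hand
    over to a contrast AF routine (e.g. the hill-climb sketched above)."""
    r_img, b_img = capture_rb_at(pos)
    shift = signed_phase_difference(r_img, b_img)
    # gain is a hypothetical conversion from pixel shift to lens travel.
    return refine_with_contrast_af(pos + gain * shift)
```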
  • The R image and B image obtained from the imaging element 22 can also be used, for example, as a stereoscopic image (3D image).
  • For a 3D image, it suffices that the image from the left pupil is observed by the left eye and the image from the right pupil by the right eye.
  • An anaglyph method has been conventionally known as such a 3D image observation method (see also Japanese Patent Laid-Open No. 4-251239 described above).
  • In the anaglyph method, generally, a red left-eye image and a blue right-eye image are generated and displayed simultaneously, and the result is observed through red/blue glasses in which a red transmission filter is arranged on the left eye side and a blue transmission filter on the right eye side, so that a monochrome stereoscopic image can be observed.
  • If an R image and a B image obtained from the imaging element 22 with the imaging apparatus in the standard posture (images on which the color processing for correcting the positional deviation between the R component and the B component in the color image generation unit 37 has not been performed) are displayed and observed through anaglyph-type red/blue glasses, stereoscopic viewing is possible as-is. That is, as described with reference to FIG. 3, when the imaging apparatus is in the standard posture, the RG filter 12r of the band limiting filter 12 is arranged on the left side as the subject is viewed from the imaging element 22, and the GB filter 12b is arranged on the right side.
  • Accordingly, the R component light transmitted through the left RG filter 12r is observed only by the left eye, and the B component light transmitted through the right GB filter 12b is observed only by the right eye, so stereoscopic viewing is possible.
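  • Since the uncorrected R and B images already correspond to the left and right pupil halves, composing an anaglyph frame only requires routing the channels. A minimal sketch (our illustration of the principle stated above):

```python
import numpy as np

def anaglyph(r_image, b_image):
    """Pack the left-pupil R image and right-pupil B image into one RGB
    frame for red/blue glasses: the red filter (left eye) sees only the
    red channel, the blue filter (right eye) sees only the blue channel."""
    out = np.zeros(r_image.shape + (3,), dtype=r_image.dtype)
    out[..., 0] = r_image   # red channel -> left-eye view
    out[..., 2] = b_image   # blue channel -> right-eye view
    return out
```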
  • FIG. 12 is a diagram showing a first modification of the band limiting filter 12.
  • FIG. 13 is a diagram showing a second modification of the band limiting filter 12.
  • The band limiting filter 12 shown in FIG. 13 has a configuration in which a region through which the imaging light flux passes without band limitation is sandwiched between the first region and the second region of the pupil region of the imaging optical system 9. That is, the band limiting filter 12 is a filter in which a W filter 12w that passes the components of all RGB colors (that is, white light W) is disposed between the left RG filter 12r and the right GB filter 12b.
  • FIG. 14 is a diagram showing a third modification of the band limiting filter 12.
  • In the band limiting filter 12 shown in FIG. 14, the first region and the second region are arranged at positions that differ both vertically and horizontally when the imaging apparatus is in the standard posture. That is, the band limiting filter 12 divides a circular filter into four quadrants, with the RG filter 12r arranged at the lower left (third quadrant), the GB filter 12b at the upper right (first quadrant), a first G filter 12g1 at the upper left (second quadrant), and a second G filter 12g2 at the lower right (fourth quadrant).
  • FIG. 15 is a diagram showing a fourth modification of the band limiting filter 12.
  • While the band limiting filters 12 shown in FIG. 3 and FIGS. 12 to 14 are normal color filters, the band limiting filter 12 shown in FIG. 15 is configured by a color selective transmission element 18.
  • The color selective transmission element 18 is realized by combining a plurality of members, such as a member capable of rotating the polarization transmission axis according to color (wavelength) and a member, such as an LCD, capable of selectively controlling whether or not to rotate the polarization transmission axis, and it can change the color distribution.
  • A specific example of the color selective transmission element 18 is the color switch manufactured by Color Link, as disclosed in SID '00 Digest, Vol. 31, p. 92 (SID 2000, April 2000).
  • When viewed from the imaging element 22 with the imaging apparatus in the standard posture (lateral position), the color selective transmission element 18 is configured with its left half as a first color selective transmission element 18L and its right half as a second color selective transmission element 18R.
  • The first color selective transmission element 18L can selectively take an RG state that passes the G and R components and blocks the B component, and a W state that passes all the RGB components (W). Similarly, the second color selective transmission element 18R can selectively take a GB state that passes the G and B components and blocks the R component, and a W state that passes all the RGB components (W).
  • The first color selective transmission element 18L and the second color selective transmission element 18R are driven independently by a color selection drive unit (not shown); by setting the first color selective transmission element 18L to the RG state and the second color selective transmission element 18R to the GB state, the same function as the band limiting filter 12 shown in FIG. 3 can be achieved. By switching the states, the color selective transmission element 18 can also perform the same function as the band limiting filters 12 of the other configurations described above, and the band limiting filter 12 can thus also be configured in this way.
  • The colorization process is a process for correcting a blur having the shape shown in FIG. 7 or FIG. 10 into a blur having the shape shown in FIG. 16.
  • FIG. 16 is a diagram showing an outline of the blur shape after the colorization process; FIG. 17 is a diagram showing the shape of the R filter kernel applied to the R image of a subject on the far side of the in-focus position in the colorization process; and FIG. 18 is a diagram showing the shape of the B filter kernel applied to the B image of a subject on the far side of the in-focus position in the colorization process.
  • In the example shown here, the color misregistration correction and the blur shape correction are performed by filtering processing (processing that convolves a filter kernel with the image). That is, by applying the R filtering process to the R image, the blur shape of the R image and its centroid position (Ch, Cr) are brought close to the ideal circular blur shape and its centroid position (Ch, Cg); in the example shown in FIG. 17, the centroid position (Ch, Cr) is the peak position of the filter coefficients of the R filter kernel, and in the examples shown in FIGS. 17 and 18, the centroid position (Ch, Cg) is the coordinate center of the R filter kernel and of the B filter kernel. Similarly, by applying the B filtering process to the B image, the blur shape of the B image and its centroid position (Ch, Cb) (in the example shown in FIG. 18, this centroid position (Ch, Cb) is likewise the peak position of the filter coefficients of the B filter kernel) are brought close to the ideal circular blur shape and the centroid position (Ch, Cg) mentioned above.
  • In this way, an ideal blur centroid position for the first and second image signals is set by the color image generation unit 37 serving as the setting means.
  • The ideal centroid position of the blur shape set by the color image generation unit 37 serving as the setting means can be set to any desired position. Possible examples include setting it so that the geometric positional relationship between the centroid position of the blur shape of the R image and the centroid position of the blur shape of the B image is kept constant, and setting it to the centroid position of the blur shape of the G image, which serves as the standard image.
  • As a more specific example of keeping the former geometric positional relationship constant, the ideal centroid position may be set to an internally dividing point at a predetermined ratio between the centroid position of the blur shape of the R image and the centroid position of the blur shape of the B image (for example, the midpoint).
  • In particular, since the GB filter 12b and the RG filter 12r of the band limiting filter 12 shown in FIG. 3 each occupy half of the pupil region and have line-symmetric shapes as illustrated, setting the centroid position of the ideal blur shape to the midpoint between the centroid position of the blur shape of the R image and the centroid position of the blur shape of the B image can be considered an appropriate setting.
  • The reason for setting the centroid position of the ideal circular blur shape to the centroid position of the blur shape of the G image, which serves as the standard image, is that the G image is an actually acquired image and exhibits a natural circular blur shape corresponding to the entire pupil region of the band limiting filter 12. When the blur shapes of the R image and the B image are matched to the blur shape of the G image, the blur of the R image and the B image takes on a circular shape equivalent to that of the G image. Furthermore, since the blur of the G image is larger than the blur of the R image and the B image, matching the blur shapes of the R image and the B image to that of the G image also brings advantages such as easier processing.
  • The filter kernel is determined so that the centroid position (Ch, Cg) of the ideal circular blur shape set as described above becomes the filter center, and so that the centroid position (Ch, Cr) of the blur shape of the R image, or the centroid position (Ch, Cb) of the blur shape of the B image, becomes the peak position of the filter coefficients.
  • The centroid positions of the R image and the B image are brought close together in order to correct the color shift, and the blur shapes of the R image and the B image, which differ from color to color, are corrected to be similar in order to give the blur in the color image a natural shape.
  • The R filter kernel shown in FIG. 17 is for performing a convolution operation on the R image; a Gaussian filter is arranged in it as an example of a blur filter. The filter has a shape in which the peak of the filter coefficients of the Gaussian filter (corresponding approximately to the centroid position of the filter coefficients) is shifted by the phase difference from the kernel center position (the position (Ch, Cg) where the horizontal line Ch, which divides the kernel into upper and lower halves, intersects the vertical line Cg) to the position (Ch, Cr) where the horizontal line Ch intersects the vertical line Cr passing through the centroid of the blur shape of the R image.
  • The B filter kernel shown in FIG. 18 is for performing a convolution operation on the B image; as with the R filter kernel, a Gaussian filter is arranged in it as an example of a blur filter. The filter has a shape in which the peak of the filter coefficients of the Gaussian filter (corresponding approximately to the centroid position of the filter coefficients) is shifted, by the phase difference between the image with the ideal circular blur shape and the B image, from the kernel center position (the position (Ch, Cg) where the horizontal line Ch and the vertical line Cg intersect) to the position (Ch, Cb) where the horizontal line Ch intersects the vertical line Cb passing through the centroid of the blur shape of the B image.
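  • The following sketch builds such a kernel (our construction; the patent specifies only a Gaussian whose coefficient peak is displaced from the kernel center by the phase difference). Letting the two standard deviations differ also covers the elliptical Gaussian variant discussed below. Note that when the kernel is applied as a correlation (without flipping), a kernel whose mass sits at offset +d pulls the image centroid by -d, that is, from (Ch, Cr) or (Ch, Cb) back toward the kernel center (Ch, Cg):

```python
import numpy as np

def shifted_gaussian_kernel(size, sigma_x, sigma_y, peak_dx):
    """Kernel whose Gaussian coefficient peak sits peak_dx pixels to the
    right of the kernel center, as in FIG. 17 (peak at (Ch, Cr)) and
    FIG. 18 (peak at (Ch, Cb)); the kernel center is (Ch, Cg)."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    k = np.exp(-((x - c - peak_dx) ** 2 / (2.0 * sigma_x ** 2)
                 + (y - c) ** 2 / (2.0 * sigma_y ** 2)))
    return k / k.sum()

def correlate_at(image, kernel, py, px):
    """Apply the kernel at one pixel as a correlation (no kernel flip);
    assumes (py, px) lies at least half a kernel away from the border."""
    h = kernel.shape[0] // 2
    patch = image[py - h:py + h + 1, px - h:px + h + 1]
    return float((patch * kernel).sum())

# Far-side subject: the R blur centroid lies to the right of (Ch, Cg),
# so the R kernel peak is also placed to the right; sigma_x > sigma_y
# gives the elliptical variant of FIGS. 21 and 22.
k_r = shifted_gaussian_kernel(size=21, sigma_x=3.0, sigma_y=2.0, peak_dx=4.0)
```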
  • FIG. 19 is a diagram showing the R image and the B image of a subject located farther than the in-focus position in the colorization process of another example, and FIG. 20 is a diagram showing the shape of the filter applied to the R image and the B image in the colorization process of that example.
  • In this other example, the colorization process is performed by correcting the color misregistration by translation (shift) and correcting the blur shape by filtering processing.
  • The filter kernel shown in FIG. 20 is for performing a convolution operation on the R image and the B image; a Gaussian filter is arranged in it as an example of a blur filter. This filter has a shape in which the peak of the filter coefficients of the Gaussian filter (corresponding approximately to the centroid position of the filter coefficients) is at the kernel center position (the position (Ch, Cg) where the horizontal line Ch and the vertical line Cg intersect).
  • Since the band limiting filter 12 assumed here has a symmetric first region and second region as shown in FIG. 3, the same filtering process is applied to the R image and the B image; needless to say, if the first region and the second region were asymmetric, different filtering processes could be applied to the R image and the B image.
  • In the above, a Gaussian filter (circular Gaussian filter) was taken as an example of the blur filter, but the present invention is not limited to this. With the filter shape shown in FIG. 3, the blur produced in the R image and the B image is a vertically oriented semicircle, as shown in FIGS. 7 and 10 (that is, shorter in the horizontal direction than a circle). Therefore, if an elliptical Gaussian filter whose major axis lies in the horizontal direction (more generally, in the direction in which the phase difference occurs), as shown in FIG. 21 or FIG. 22, is used, correction processing that brings the blur closer to a circular shape can be performed more accurately.
  • FIG. 21 is a diagram showing the shape of an elliptical Gaussian filter with a large standard deviation in the horizontal direction, and FIG. 22 is a diagram showing the shape of an elliptical Gaussian filter with a small standard deviation in the vertical direction.
  • FIGS. 21 and 22 show examples in which the peak of the filter coefficients (corresponding approximately to the centroid position of the filter coefficients) is located at the center of the filter kernel, corresponding to FIG. 20; needless to say, in the cases corresponding to FIGS. 17 and 18, the peak of the filter coefficients would be shifted from the center of the filter kernel.
  • Furthermore, the blur filter is not limited to the circular Gaussian filter and the elliptical Gaussian filter; any filter that can bring the blur shape of the R image or the B image close to the ideal circular blur shape can be widely applied.
  • FIG. 23 is a flowchart showing the colorization process performed by the color image generation unit 37.
  • Step S1: When this process starts, initial settings are made. First, the RGB images to be processed (that is, the R image, the G image, and the B image) are read. Next, an R copy image, which is a copy of the R image, and a B copy image, which is a copy of the B image, are created.
  • Step S2: A target pixel for phase difference detection is set. The target pixel is set in one of the R image and the B image; here, for example, in the R image.
  • Step S3: The phase difference with respect to the target pixel set in step S2 is detected.
  • The phase difference is detected by taking a partial area of the R image containing the target pixel at its center as the base image, and a partial area of the same size in the B image as the comparison image (see FIG. 26, etc.). The distance calculation unit 39 performs a correlation calculation between the base image and the comparison image while shifting the position of the comparison image in the direction in which the phase difference occurs; the amount of positional deviation between the base image and the comparison image judged to have the highest correlation (the sign of the deviation amount gives the direction of the deviation) is the phase difference amount.
  • The partial area can be set to an arbitrary size; however, in order to detect the phase difference stably, it is preferable to use partial areas of 30 [pixel] or more in both the vertical and horizontal directions, for example an area of 51 × 51 [pixel].
  • The correlation calculation in the distance calculation unit 39 is performed, for example, by a ZNCC calculation, or by an SAD calculation applied to images that have been subjected to filtering in advance. When the ZNCC calculation is used, the correlation value R_ZNCC is calculated by Equation 1 below.
    $$R_{\mathrm{ZNCC}} = \frac{\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(I(i,j)-\bar{I}\bigr)\bigl(T(i,j)-\bar{T}\bigr)}{\sqrt{\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(I(i,j)-\bar{I}\bigr)^{2}\,\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(T(i,j)-\bar{T}\bigr)^{2}}}\qquad\text{(Equation 1)}$$
  • In Equation 1, $I$ is a partial area of the R image, $T$ is a partial area of the B image (a partial area of the same size as $I$), $\bar{I}$ is the average value of $I$, $\bar{T}$ is the average value of $T$, $M$ is the horizontal width [pixel] of the partial area, and $N$ is the vertical width [pixel] of the partial area.
  • When the SAD calculation is used, filtering such as a differentiation filter typified by a Sobel filter, or a band-pass filter such as a LoG filter, is first applied to the R image and the B image. After that, the correlation calculation is performed by the SAD calculation shown in Equation 2 below.
    $$R_{\mathrm{SAD}} = \sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl|I'(i,j)-T'(i,j)\bigr|\qquad\text{(Equation 2)}$$
  • In Equation 2, $I'$ is a partial region of the R image after filtering, $T'$ is a partial region of the B image after filtering (a partial region of the same size as $I'$), $M$ is the horizontal width [pixel] of the partial area, and $N$ is the vertical width [pixel] of the partial area. $R_{\mathrm{SAD}}$ is the correlation value obtained as a result of the correlation calculation (for SAD, a smaller value indicates a higher correlation).
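  • A minimal sketch of the phase difference search of steps S2 and S3 using Equation 1 (ours; the window size follows the 51 × 51 [pixel] example above, and the shift range is a hypothetical parameter):

```python
import numpy as np

def zncc(i_patch, t_patch):
    """Equation 1: zero-mean normalized cross-correlation."""
    i = i_patch.astype(float) - i_patch.mean()
    t = t_patch.astype(float) - t_patch.mean()
    denom = np.sqrt((i * i).sum() * (t * t).sum())
    return float((i * t).sum() / denom) if denom > 0 else 0.0

def phase_difference(r_img, b_img, py, px, win=51, max_shift=20):
    """Slide a same-size B window horizontally and return the signed
    shift with the highest ZNCC against the R base window; assumes the
    windows stay inside the image bounds. For the SAD variant, pre-filter
    both images (e.g. with a Sobel filter) and minimize Equation 2."""
    h = win // 2
    base = r_img[py - h:py + h + 1, px - h:px + h + 1]
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        ref = b_img[py - h:py + h + 1, px + s - h:px + s + h + 1]
        c = zncc(base, ref)
        if c > best_corr:
            best_shift, best_corr = s, c
    return best_shift
```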
  • Step S4: It is determined whether the phase difference detection processing has been completed for all target pixels in the image; until it is completed, the processing of steps S2 and S3 is repeated while shifting the position of the target pixel.
  • Here, "all target pixels in the image" means all pixels in the image for which the phase difference can be detected. Pixels for which the phase difference cannot be detected may be left undetected, or their phase difference amounts may be calculated by interpolation or the like based on the detection results of surrounding target pixels.
  • Step S5 the target pixel for performing the correction process is set for the pixel for which the phase difference amount is obtained.
  • the pixel position of the target pixel is the same (that is, common) pixel position in the R image and the B image.
• As the correction process, a case will be described here in which the colorization process is performed using only a blur filter, without translation (shift), as described with reference to FIGS. 17 and 18.
  • Step S6 In accordance with the phase difference amount of the pixel of interest, the shape of the R image filter for performing the filtering process so that the R image has an ideal circular blur shape is acquired.
  • the relationship between the phase difference amount and the filter shape for the R image is held in advance in the imaging apparatus as a table as shown in Table 1 below, for example.
• The filter shape is determined by the size of the filter kernel, the deviation of the Gaussian filter from the center of the filter kernel, and the standard deviation σ of the Gaussian filter (this standard deviation σ indicates the degree of spread of the blur filter). Therefore, the shape of the R image filter can be acquired by referring to the table based on the phase difference amount of the target pixel.
  • Step S7 a filtering process is performed on a neighboring region including the target pixel and its neighboring pixels in the R image, and a filter output value at the target pixel is acquired. Then, the acquired filter output value is copied to the target pixel position of the R copy image, and the R copy image is updated.
  • Step S8 In accordance with the phase difference amount of the target pixel, the shape of the B image filter for performing the filtering process so that the B image has an ideal circular blur shape is acquired.
  • the relationship between the phase difference amount and the filter shape for the B image is held in advance in the imaging apparatus as a table as shown in Table 2 below, for example.
  • the filter shape is determined by the size of the filter kernel, the deviation of the Gaussian filter from the center of the filter kernel, and the standard deviation ⁇ of the Gaussian filter. Therefore, the shape of the B image filter can be acquired by referring to the table based on the phase difference amount of the target pixel.
  • Step S9 a filtering process is performed on a neighboring region including the target pixel and its neighboring pixels in the B image, and a filter output value at the target pixel is acquired. Then, the acquired filter output value is copied to the target pixel position of the B copy image, and the B copy image is updated.
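• As an illustration of steps S6 to S9, the following Python/NumPy sketch builds an off-center Gaussian kernel from table parameters and produces the filter output at one pixel; the table contents and helper names are hypothetical, not the values or implementation of the patent.

    import numpy as np

    def offset_gaussian_kernel(size, dx, dy, sigma):
        # Gaussian blur kernel whose peak is displaced (dx, dy) from the
        # kernel center, as parameterized in Tables 1 and 2 (assumed form).
        r = size // 2
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        k = np.exp(-((xs - dx) ** 2 + (ys - dy) ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    def filter_pixel(img, y, x, params):
        # Steps S6/S7 (or S8/S9): apply the looked-up kernel to the
        # neighborhood of (y, x) and return the filter output value,
        # which is then written to the copy image at (y, x).
        size, dx, dy, sigma = params
        r = size // 2
        patch = img[y - r:y + r + 1, x - r:x + r + 1]
        return float((patch * offset_gaussian_kernel(size, dx, dy, sigma)).sum())

    # Hypothetical table: phase difference -> (kernel size, dx, dy, sigma).
    R_TABLE = {0: (1, 0, 0, 0.1), 2: (5, 1, 0, 1.0), 4: (9, 2, 0, 2.0)}
    # e.g. out = filter_pixel(r_image, y, x, R_TABLE[phase])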
  • Step S10 It is determined whether or not the filtering process for all the target pixels in the image has been completed. Until the processing is completed, the processing in steps S5 to S9 is repeated while shifting the position of the target pixel.
• When completed, the R copy image and the B copy image are output as the corrected images for the R image and the B image, and the colorization process ends.
• When an elliptical Gaussian filter as shown in FIG. 21 or FIG. 22 is used in step S6 or step S8 instead of the circular Gaussian filter, the standard deviation of the elliptical Gaussian filter needs to be set separately for the x direction and the y direction, so the parameter table relating to the filter shape held in the imaging apparatus is, for example, as shown in Table 3 below (the deviation from the filter kernel center is omitted there).
• The deviation from the filter kernel center when the elliptical Gaussian filter is used in step S6 may be the same as in Table 1, and the deviation when it is used in step S8 may be the same as in Table 2, for example.
• Although an example has been described in which the filter shape corresponding to the phase difference amount is acquired from tables such as Tables 1 and 2, the method is not limited thereto. For example, the correspondence between each parameter determining the filter shape and the phase difference amount may be held as an equation, and the filter shape may be determined by substituting the phase difference amount into the equation and calculating.
• The processing for detecting the phase difference from the image captured through the band limiting filter 12 and the colorization processing for correcting the color misregistration do not necessarily have to be performed in the imaging device; they may be performed by an image processing apparatus 41 separate from the imaging device (for example, a computer that executes an image processing program).
  • FIG. 24 is a block diagram showing a configuration of the image processing apparatus 41.
• The image processing apparatus 41 has a configuration in which the following elements of the imaging device shown in FIG. 1 are removed: the lens mount (not shown), the shutter 21, the image sensor 22, the imaging circuit 23, the imaging drive unit 24, the lens unit 1, the contrast AF control unit 38 (including the AF assist control unit 38a) related to AF control, the body side communication connector 35 related to communication with the lens unit 1, the strobe control circuit 33 and strobe 34 related to subject illumination, and the sensor unit 31 for obtaining the state of the imaging device. A recording unit 42 for recording information input from the interface (IF) 28 is further provided.
• The recording unit 42 outputs information input and recorded via the interface 28 to the image processing unit 25; information can also be recorded from the image processing unit 25 to the recording unit 42.
• The recording unit 42, together with the interface 28, is controlled by the system controller 30A (the reference numeral of the system controller is 30A in accordance with the removal of the contrast AF control unit 38; similarly, the reference numeral of the operation unit is 32A).
  • a series of processing in the image processing apparatus 41 is performed as follows, for example. First, an image is captured using an imaging device including the band limiting filter 12 and is recorded on the recording medium 29 as a RAW image output from the imaging circuit 23. Further, information related to the shape of the band limiting filter 12, lens data of the imaging optical system 9, and the like are also recorded on the recording medium 29.
  • the recording medium 29 is connected to the interface 28 of the image processing apparatus 41, and images and various information are recorded in the recording unit 42.
  • the recording medium 29 may be removed from the interface 28.
• The image and various types of information recorded in the recording unit 42 are read out, and the phase difference calculation by the distance calculation unit 39 and the color misregistration correction by the color image generation unit 37 are performed in the same manner as in the imaging device described above.
  • the colorized image processed by the image processing device 41 is recorded in the recording unit 42 again. Further, the colorized image recorded in the recording unit 42 is displayed on the display unit or transmitted to an external device via the interface 28. Therefore, in the external device, the colorized image can be used for various purposes.
• As described above, according to Embodiment 1, the blurs of the R image and the B image are converted into the ideal circular blur shape by the blur filter itself, or by translating (shifting) the image before applying the blur filter, so that the blur center-of-gravity positions of the R image, the B image, and the G image coincide with one another; the color misregistration is thereby reduced, and a more preferable image for viewing can be obtained.
  • FIGS. 25 to 30 show the second embodiment of the present invention
  • FIG. 25 is a diagram showing an outline of the colorization processing performed by the color image generation unit 37.
• The colorization processing in the color image generation unit 37 differs from that in Embodiment 1: in Embodiment 1, correction of color misregistration was performed using a blur filter, whereas in the present embodiment it is performed by image copy addition processing.
  • the blur of the R image has a shape in which the left half of the blur of the G image that is an ideal circular blur shape is missing.
  • the blur of the B image is a shape in which the right half of the blur of the G image, which is an ideal circular blur shape, is missing.
• Here, the ideal circular blur shape is that of the G image, that is, the circular blur shape that complements the half-missing blur shapes of the R image and the B image.
• In the present embodiment, a part of the blur of the R image is copied and added to the missing portion of the blur of the R image, and a part of the blur of the B image is copied and added to the missing portion of the blur of the B image.
• Ideally, the shape of the partial area to be copied and added matches the shape of the missing portion of the blur in the R image and the B image, but here, in order to simplify the processing, a rectangular (for example, square) area is used.
• In FIG. 25, the blur diffusion partial region G1 and the blur diffusion partial region G2 are shown within the circular blur diffusion region of the G image, which has the ideal circular blur shape. The blur diffusion partial region G1 and the blur diffusion partial region G2 are positioned symmetrically with respect to the vertical line Cg passing through the center of gravity of the circular blur diffusion region of the G image.
  • the sizes of the partial areas G1 and G2 are preferably about the size of the radius of the circular blur diffusion area in consideration of fulfilling the function of color misregistration correction.
• The blur diffusion partial region R1 shown for the R image is a region having the same size and the same position as the blur diffusion partial region G2; in the R image, the blur diffusion partial region R2, which would have the same size and the same position as the blur diffusion partial region G1, is missing. Likewise, the blur diffusion partial region B1 shown for the B image has the same size and the same position as the blur diffusion partial region G1, and the B image lacks the blur diffusion partial region B2, which would have the same size and the same position as the blur diffusion partial region G2.
• Therefore, the blur diffusion partial region R1 of the R image is moved by the movement amount that would make the blur diffusion partial region G2 overlap the blur diffusion partial region G1 completely (for example, a movement amount of about the radius of the circular blur diffusion region), and is copy-added (to an R copy image that is a copy of the R image, as described later), thereby generating the blur diffusion partial region R2 of the R image. This blur diffusion partial region R2 is the region corresponding to the blur diffusion partial region G1 of the circular blur diffusion region of the G image having the ideal circular blur shape (or to the blur diffusion partial region B1 of the B image).
• Similarly, the blur diffusion partial region B1 of the B image is moved by the movement amount (same as above) that would make the blur diffusion partial region G1 overlap the blur diffusion partial region G2 completely, and is copy-added (to a B copy image that is a copy of the B image, as described later), thereby generating the blur diffusion partial region B2 of the B image. This blur diffusion partial region B2 is the region corresponding to the blur diffusion partial region G2 of the circular blur diffusion region of the G image having the ideal circular blur shape (or to the blur diffusion partial region R1 of the R image).
• By such copy addition, the center of gravity of the blur diffusion region of the R image approaches the center of gravity of the blur diffusion region of the G image having the ideal circular blur shape ("approaches" here includes coinciding with), and likewise the center of gravity of the blur diffusion region of the B image approaches the center of gravity of the blur diffusion region of the G image.
  • FIG. 26 is a diagram showing a partial region set in the R image and the B image when performing phase difference detection
  • FIG. 27 is a diagram showing a state in which the blur diffused partial region of the original R image is copied and added to the R copy image
  • FIG. 28 is a diagram showing a state in which the blur diffusion partial area of the original B image is copied and added to the B copy image
  • FIG. 29 is a diagram showing an example of changing the size of the blur diffusion partial area according to the phase difference amount.
• FIG. 30 is a flowchart showing the colorization processing performed by the color image generation unit 37. The description proceeds along FIG. 30, referring to FIGS. 26 to 29 as appropriate.
• Step S21 When this process is started, initialization is performed. In this initialization, first, the RGB image to be processed (that is, an R image, a G image, and a B image) is read. When the input image is a Bayer image, the demosaicing process is performed in advance in the image processing unit 25. However, when high accuracy of the acquired phase difference is not required, when speed is prioritized over accuracy, or when the processing load is to be reduced, the R image, G image, and B image may be obtained by simply separating the Bayer image into each color (that is, color images without the demosaicing process).
• Next, the color difference amount between the R image and the G image is calculated to generate a Cr image, which is a color difference image, and the color difference amount between the B image and the G image is calculated to generate a Cb image, which is also a color difference image.
• Since the definitions Cr = 0.50000R − 0.41869G − 0.08131B and Cb = −0.16874R − 0.33126G + 0.50000B are widely known, Equation 4 (these definitions) may be used instead of Equation 3.
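• As an illustration of this step, the following is a minimal Python/NumPy sketch of building the color difference images; the exact form of Equation 3 does not survive in this text, so the simple differences against G are an assumption, while the Equation 4 coefficients are the ones quoted above.

    import numpy as np

    def color_difference_images(r, g, b, use_ycbcr=False):
        # Step S21: build the Cr and Cb color difference images.
        if not use_ycbcr:
            cr = r - g          # assumed form of Equation 3
            cb = b - g
        else:
            cr = 0.50000 * r - 0.41869 * g - 0.08131 * b   # Equation 4
            cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
        return cr, cb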
• In addition, a Cr copy image Cr1, which is a copy of the original Cr image Cr0, and a Cb copy image Cb1 (see FIG. 28), which is a copy of the original Cb image Cb0, are created. Further, a Cr count image and a Cb count image having the same size as the Cr copy image Cr1 and the Cb copy image Cb1 are generated (in these count images, the initial pixel value is set to 1 for all pixels).
• Step S22 Subsequently, a partial region for performing phase difference detection is set. The partial region is set in one of the R image and the B image; here, for example, in the R image.
• Step S23 A phase difference with respect to the partial region set in step S22 is detected. This phase difference is detected by using the partial area set in the R image as the standard image and a partial area of the B image having the same size as the reference image, as shown in FIG. 26; phase difference detection is thereby performed between the R image and the B image.
• Step S24 Based on the phase difference amount obtained by the processing in step S23, the radius of the circular blur of the G image corresponding to the ideal blur shape (or the radius of the semicircular blur of the R image and the B image) is acquired. The radius is given here as an example; it may be a diameter or any other quantity that can represent the ideal circular blur size.
  • the relationship between the phase difference amount and the blurring radius of the G image that is an ideal circular blur shape is held in advance in the imaging apparatus as a table, a mathematical expression, or the like. Therefore, the blur radius can be acquired by referring to a table or performing a calculation using a mathematical formula based on the phase difference amount.
• Note that step S24 may be omitted, and a simple method that uses the phase difference amount acquired in step S23 in place of the blur radius may be applied; in that case, the relationship between the phase difference amount and the blur radius need not be held in the imaging apparatus in advance. A minimal sketch of the step-S24 mapping is given below.
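• The following Python/NumPy sketch illustrates the step-S24 mapping from phase difference amount to blur radius; the sample values are hypothetical, since the patent holds the real relationship as a table or mathematical formula specific to the optical system.

    import numpy as np

    PHASE_SAMPLES = np.array([0.0, 2.0, 4.0, 8.0])    # phase difference [pixel]
    RADIUS_SAMPLES = np.array([0.0, 2.5, 5.0, 10.0])  # blur radius [pixel]

    def blur_radius(phase_abs):
        # Linear interpolation of the (hypothetical) table entries.
        return float(np.interp(phase_abs, PHASE_SAMPLES, RADIUS_SAMPLES))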
  • Step S25 a partial region is read from the original Cr image Cr0, shifted by a predetermined amount corresponding to the phase difference detected in step S23, and then copied and added to the Cr copy image Cr1.
  • the predetermined amount for shifting the partial area is an amount including the shifting direction, and the size thereof is, for example, the blur radius acquired in step S24.
• The reason why the Cr copy image Cr1 is created in step S21 described above is that the original Cr image Cr0 must be held separately from the Cr copy image Cr1, whose pixel values are changed by copy addition (the same applies to the Cb copy image Cb1). However, a copy image is unnecessary when, for example, the partial areas are processed in parallel rather than sequentially in raster-scan order.
  • Step S26 Subsequently, +1 is added to the region of “the position where the copy addition process has been performed in step S25” of the Cr count image so that the number of times of addition can be understood.
  • This Cr count image is used to perform pixel value normalization processing in the subsequent step S30.
• Step S27 Further, a partial area is read from the original Cb image Cb0 at the same position as "the position copied to in the Cr copy image Cr1 in step S25", and is copy-added to the Cb copy image Cb1 at "the position from which the copy-source data was acquired from the original Cr image Cr0 in step S25".
  • the predetermined amount for shifting the Cb image has the same absolute value as the predetermined amount for shifting the Cr image, but the direction is reversed.
• Step S28 Then, +1 is added to the region of the Cb count image at "the position where the copy-source data was acquired from the original Cr image Cr0 in step S25" (that is, the position where the copy addition was performed in step S27), so that the number of additions can be recorded.
  • This Cb count image is also used to perform pixel value normalization processing in the subsequent step S30.
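• The following Python/NumPy sketch illustrates steps S25 to S28 for one partial region; the function signature and slicing are illustrative assumptions, and boundary handling is omitted.

    import numpy as np

    def copy_add(cr0, cr1, cr_count, cb0, cb1, cb_count, y, x, size, shift):
        # 'shift' is the signed predetermined amount (e.g. the blur radius
        # from step S24, sign = direction of the phase difference).
        h = size // 2
        src = np.s_[y - h:y + h + 1, x - h:x + h + 1]
        dst = np.s_[y - h:y + h + 1, x - h + shift:x + h + 1 + shift]
        cr1[dst] += cr0[src]   # step S25: shifted Cr partial region added
        cr_count[dst] += 1     # step S26: record the addition
        cb1[src] += cb0[dst]   # step S27: Cb moved by the same amount,
                               # in the opposite direction
        cb_count[src] += 1     # step S28: record the addition
        # After all partial regions: normalized images are
        # cr1 / cr_count and cb1 / cb_count (step S30).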
• In steps S25 and S27 described above, the copy addition process is performed for each partial area of the image; this partial area may be the same as the partial area used for the phase difference detection in step S23, or it may be a partial area of a different size.
• The size of the partial area used for the copy addition process may be constant over the entire image (that is, a global size), but may also differ for each partial area set in the image (that is, a local size).
  • the size of the partial region used in steps S25 to S28 may be changed as shown in FIG. 29 according to the phase difference amount detected in step S23.
  • the processing may be branched depending on whether or not the phase difference amount is 0, and no processing may be performed when the phase difference amount is 0.
• That is, the size of the partial area is increased in proportion to the phase difference amount; since the slope of this straight line is set appropriately according to the configuration of the optical system, no specific scale is shown in FIG. 29. Although FIG. 29 shows an example in which the relationship between the phase difference amount and the size of the partial area is proportional, the relationship is of course not limited to a proportional one; any design that makes the size of the partial region appropriate for the phase difference amount may be used.
• The amount of diffusion (the point spread amount of the PSF (Point Spread Function)) when the light beam from a point light source is blurred is not necessarily uniform within the blurred area; for example, it is conceivable that the amount of diffusion is smaller (that is, the luminance is lower) in the peripheral portion of the blur than in the central portion. Therefore, when performing the copy addition of partial areas as described above, a weighting factor corresponding to the amount of blur diffusion may be applied. For example, each pixel in the peripheral part of the partial region is multiplied by a weighting factor of 1/2, each pixel in the central part by a weighting factor of 1, and then the copy addition is performed. In that case, also in the count images of steps S26 and S28, 1/2 is added in the peripheral part of the partial area and 1 is added in the central part.
  • Step S29 Thereafter, it is determined whether or not the processing for all the partial areas in the image is completed. Until the processing is completed, the processing in steps S22 to S28 is repeated while shifting the position of the partial area.
  • an arbitrary value can be set as the step for shifting the partial area, but it is preferably a value smaller than the width of the partial area.
• Step S30 When it is determined in step S29 that the processing for all the partial areas has been completed, a normalized Cr copy image is obtained by dividing the pixel value of the Cr copy image by the pixel value of the Cr count image at each pixel position, and a normalized Cb copy image is obtained by dividing the pixel value of the Cb copy image by the pixel value of the Cb count image.
  • Step S31 an R image, a B image, and a G image are generated using the G image (or Y image) and the Cr copy image and the Cb copy image normalized in step S30.
• When Equation 4 has been used to calculate the color difference images, the widely known inverse conversion R = Y + 1.40200Cr, G = Y − 0.34414Cb − 0.71414Cr, B = Y + 1.77200Cb is used to generate the R image, the B image, and the G image (when Equation 3 has been used, the corresponding inverse of Equation 3 is used). A minimal code sketch of this reconstruction follows.
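• The following Python sketch illustrates step S31; since the exact form of Equation 3 does not survive in this text, the simple-difference inverse against G is an assumption, while the YCbCr inverse uses the widely known coefficients quoted above.

    def rgb_from_color_difference(g_or_y, cr, cb, used_ycbcr=False):
        # Step S31: rebuild R, G, B from G (or Y) and the normalized
        # Cr/Cb copy images.
        if not used_ycbcr:
            r = cr + g_or_y          # inverse of the assumed Cr = R - G
            b = cb + g_or_y          # inverse of the assumed Cb = B - G
            g = g_or_y
        else:
            r = g_or_y + 1.40200 * cr
            g = g_or_y - 0.34414 * cb - 0.71414 * cr
            b = g_or_y + 1.77200 * cb
        return r, g, b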
  • the RGB image calculated in step S31 is an image obtained by the colorization process in the color image generation unit 37.
• According to Embodiment 2 as described above, the same effects as those of Embodiment 1 can be obtained by correcting the color misregistration through the image copy addition process.
• FIGS. 31 to 34 show Embodiment 3 of the present invention: FIG. 31 is a diagram showing an outline of a PSF (Point Spread Function) table for each color according to the phase difference amount; FIG. 32 is a diagram showing an outline of the colorization processing performed by the color image generation unit 37; FIG. 33 is a diagram showing an outline of the colorization processing with blur amount control performed by the color image generation unit 37; and FIG. 34 is a flowchart showing the colorization processing performed by the color image generation unit 37.
• In the present embodiment, the colorization process in the color image generation unit 37 differs from those in Embodiments 1 and 2. That is, in the present embodiment, the colorization process is performed by combining a restoration process (inverse filtering process), which restores a blurred image to a non-blurred image, with a filtering process that gives the restored non-blurred image a circular blur shape corresponding to the subject distance.
• PSFr, which is the PSF of the R image, shows a larger semicircular shape as the absolute value of the phase difference amount increases, and converges to one point at a phase difference of 0, that is, at the in-focus position. When the subject is closer than the in-focus position, the shape is a left semicircle, as shown in the upper half of FIG. 31; when the subject is farther than the in-focus position, it becomes a right semicircle, as shown in the lower half of FIG. 31.
• Similarly, PSFb, which is the PSF of the B image, shows a larger semicircular shape as the absolute value of the phase difference amount increases, and converges to one point at a phase difference of 0, that is, at the in-focus position. When the subject is closer than the in-focus position, the shape is a right semicircle, as shown in the upper half of FIG. 31; when the subject is farther, it becomes a left semicircle, as shown in the lower half of FIG. 31.
  • PSFg which is the PSF of the G image, shows a larger circular shape as the absolute value of the phase difference amount increases, and converges to one point when the phase difference amount is 0, that is, at the in-focus position. Further, the PSFg is always an ideally defocused circular shape regardless of whether it is closer or farther than the in-focus position except when it converges to one point.
• A PSF table for each color corresponding to the phase difference amount, as shown in FIG. 31, is assumed to be stored in advance in, for example, a nonvolatile memory (not shown) in the color image generation unit 37 (alternatively, a table stored in the lens control unit 14 shown in FIG. 1 may be received by communication and used).
  • FIG. 32 shows an example of the case where the subject is farther than the in-focus position, but the same processing can be applied when the subject is closer than the in-focus position.
• By performing the restoration processing on the R image, its right semicircular blur is converted into a blur-free image in which the point light source converges to one point, and by performing the restoration processing on the B image, its left semicircular blur is likewise converted into a blur-free image in which the point light source converges to one point (yielding the restored first image and the restored second image).
• Next, a PSF for generating the ideal circular blur shape is applied to the restored R image and the restored B image. The PSF applied here is PSFg, the PSF of the G image, which corresponds to the ideal circular blur shape. The specific processing is performed as follows.
• Let the blur PSF of the R image at a certain pixel position be Pr1, the blur PSF at the same pixel position of the B image be Pb1, and the blur PSF at the same pixel position of the G image be Pg1. The blur PSF differs depending on the phase difference between the R image and the B image, and the phase difference basically differs for each pixel position; therefore, the PSF is determined for each pixel position. Further, the PSF is assumed to be defined over a partial region including a plurality of neighboring pixels centered on the pixel position of interest (see FIG. 31).
• First, Pr1, the PSF centered on the pixel of interest in the R image, Pb1, the PSF centered on the pixel of interest at the same position in the B image, and Pg1, the PSF centered on the pixel of interest at the same position in the G image, are acquired. The two-dimensional Fourier transform FFT2 is applied to Pr1, Pb1, and Pg1 and to the partial regions of the R image and the B image centered on the pixel of interest, yielding PR1, PB1, PG1, R, and B. Then, R is divided by PR1 and B is divided by PB1 to perform the restoration processing, and each result is multiplied by a value for generating the ideal blur shape to perform the filtering process; specifically, the value for generating the ideal blur shape here is PG1 described above.
• The two-dimensional inverse Fourier transform IFFT2 is applied to the results to calculate an R image r′ and a B image b′ that have the same blur as the G image corresponding to the ideal blur shape (Equation 9): r′ = IFFT2( R × PG1 / (PR1 + ε) ), b′ = IFFT2( B × PG1 / (PB1 + ε) ), where ε in Equation 9 is an arbitrary constant appropriately set according to the shapes of PR1 and PB1 (for example, according to the relationship between their absolute values).
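• As an illustration of Equation 9, the following Python/NumPy sketch applies the restoration and filtering to one partial region; it assumes the PSFs have been zero-padded to the partial-region size, and the names and the eps value (standing in for the constant ε, whose symbol is garbled in this text) are assumptions, not the patent's implementation.

    import numpy as np

    def refilter_patch(r_patch, b_patch, pr1, pb1, pg1, eps=1e-3):
        # Divide out each semicircular PSF (restoration) and multiply in
        # the circular G PSF (filtering), all in the frequency domain.
        R, B = np.fft.fft2(r_patch), np.fft.fft2(b_patch)
        PR1, PB1, PG1 = (np.fft.fft2(p) for p in (pr1, pb1, pg1))
        r_prime = np.real(np.fft.ifft2(R * PG1 / (PR1 + eps)))
        b_prime = np.real(np.fft.ifft2(B * PG1 / (PB1 + eps)))
        return r_prime, b_prime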
• The restoration process and the filtering process described above are performed for each partial area of the image: a partial area is designated while the position is shifted slightly, and the same restoration and filtering are performed again on the newly designated partial area. By repeating this, restoration and filtering are performed over the entire region of the image. Since the partial areas overlap, each pixel position is generally processed multiple times; the sum of the corrected pixel values obtained at each pixel position is divided by the number of corrections, that is, averaged, to obtain a normalized corrected image.
• Note that the size of the partial area on which the restoration and filtering are performed needs to be larger than the shape of the blur. Therefore, it is conceivable to adaptively change the size of the partial region in accordance with the size of the blur; alternatively, when the range over which the blur shape changes with the phase difference is known in advance, a partial region of a fixed size larger than the maximum blur size may be used.
• Step S41 When this process is started, initialization is performed. In this initialization, first, the RGB image to be processed (that is, an R image, a G image, and a B image) is read. Next, an R copy image that is a copy of the R image and a B copy image that is a copy of the B image are created. Further, an R count image and a B count image having the same size as the R copy image and the B copy image are generated (in these count images, the initial pixel value is set to 1 for all pixels).
• Step S42 Subsequently, a partial region for performing phase difference detection is set. The partial region is set in one of the R image and the B image; here, for example, in the R image.
• Step S43 A phase difference with respect to the partial region set in step S42 is detected. This phase difference is detected by using the partial area set in the R image as the standard image and a partial area of the B image having the same size as the reference image, as shown in FIG. 26; phase difference detection is thereby performed between the R image and the B image.
• Step S44 Based on the phase difference amount obtained by the processing in step S43, the radius of the circular blur of the G image corresponding to the ideal blur shape (or the radius of the semicircular blur of the R image and the B image) is obtained in the same manner as in step S24 described above.
  • Step S45 Next, the restoration process and the filtering process as described above are performed on the partial region specified in step S42 of the original R image.
  • the processing result obtained in this way is copied and added to the same position as the partial area of the original R image in the R copy image.
• Here, a case is described in which the partial area for the colorization process is the same as the partial area for the phase difference detection; of course, a different area may be set, and as described above, a partial area of adaptive size according to the detected phase difference may be used (the same applies to the B image described later).
  • Step S46 Subsequently, +1 is added to the “partial region designated in step S42” of the R count image so that the number of times of addition can be understood.
  • This R count image is used to perform pixel value normalization processing in the subsequent step S50.
  • Step S47 Further, the restoration process and the filtering process as described above are performed on the partial region specified in the above-described step S42 of the original B image.
  • the processing result obtained in this way is copied and added at the same position as the partial area of the original B image in the B copy image.
• Step S48 Then, +1 is added to the "partial region designated in step S42" of the B count image so that the number of additions can be recorded.
  • This B count image is also used to perform pixel value normalization processing in the subsequent step S50.
  • Step S49 Thereafter, it is determined whether or not the processing for all the partial areas in the image is completed. Then, the processes in steps S42 to S48 are repeated while shifting the position of the partial area until the process is completed.
  • an arbitrary value can be set as the step for shifting the partial area, but it is preferably a value smaller than the width of the partial area.
• Step S50 When it is determined in step S49 that the processing for all the partial areas has been completed, a normalized R copy image is obtained by dividing the pixel value of the R copy image by the pixel value of the R count image at each pixel position, and a normalized B copy image is obtained by dividing the pixel value of the B copy image by the pixel value of the B count image.
  • the RGB image calculated in step S50 is an image obtained by the colorization process in the color image generation unit 37.
• When the process of step S50 is completed, the process shown in FIG. 34 ends.
• In the above description, the restoration processing and the filtering processing are performed after transforming from real space to frequency space using the Fourier transform; however, the present invention is not limited to this, and restoration or filtering processing in real space (for example, MAP estimation processing) may be applied.
• In the above description, the blur shapes of the R image and the B image are matched to the blur shape of the G image; in addition to this, blur amount control may be performed.
• That is, in addition to applying the inverse operation PSFr⁻¹ of PSFr to the R image and the inverse operation PSFb⁻¹ of PSFb to the B image to obtain the restored first image and the restored second image, the inverse operation PSFg⁻¹ of PSFg is applied to the G image to convert its circular blur into a blur-free image in which the point light source converges to one point (the restored third image); the R image, the B image, and the G image are thus all restored.
• Then, a PSF for generating the ideal circular blur is applied to the restored R image, B image, and G image. Here, PSF′g, a desired PSF for the G image, is applied as the PSF for generating the ideal circular blur.
• In the specific processing, in addition to the processing described above, a desired Pg1′ is acquired as the PSF centered on the pixel of interest at the same position in the G image, and the two-dimensional Fourier transform FFT2 is applied to the acquired Pg1′ and to the partial region g centered on the pixel of interest in the G image, as shown in Equation 10 below, to obtain the transformed values PG1′ and G: PG1′ = FFT2(Pg1′), G = FFT2(g).
• Then, R is divided by PR1, B is divided by PB1, and G is divided by PG1 to perform the restoration processing, and each result is multiplied by PG1′ to perform the filtering process. The two-dimensional inverse Fourier transform IFFT2 is applied to the results to calculate an R image r″, a B image b″, and a G image g″ that have the desired amount of blur (Equation 11):
• r″ = IFFT2( R × PG1′ / (PR1 + ε) ), b″ = IFFT2( B × PG1′ / (PB1 + ε) ), g″ = IFFT2( G × PG1′ / (PG1 + ε) ), where ε is an arbitrary constant appropriately set according to the shapes of PR1, PB1, and PG1.
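• As an illustration of Equations 10 and 11, the following Python/NumPy sketch performs the blur amount control for one partial region; as with the sketch of Equation 9, the PSF padding, names, and eps value are assumptions, not the patent's implementation.

    import numpy as np

    def control_blur_patch(r_p, b_p, g_p, pr1, pb1, pg1, pg1_desired,
                           eps=1e-3):
        # Restore R, B, and G with their own PSFs, then impose the
        # desired circular PSF Pg1' on all three (Equations 10 and 11).
        R, B, G = (np.fft.fft2(p) for p in (r_p, b_p, g_p))
        PR1, PB1, PG1 = (np.fft.fft2(p) for p in (pr1, pb1, pg1))
        PG1d = np.fft.fft2(pg1_desired)
        r2 = np.real(np.fft.ifft2(R * PG1d / (PR1 + eps)))
        b2 = np.real(np.fft.ifft2(B * PG1d / (PB1 + eps)))
        g2 = np.real(np.fft.ifft2(G * PG1d / (PG1 + eps)))
        return r2, b2, g2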
• According to Embodiment 3 as described above, the same effects as those of Embodiments 1 and 2 can be obtained by performing the color shift correction through restoration and filtering using PSFs, and in addition, the amount of blur of the RGB color image can be controlled as desired.
• In each of the embodiments described above, when the R image and the B image are corrected, the color misalignment between them can be eliminated; when the G image is also taken into account, the color deviation among the R image, the B image, and the G image can be eliminated as well, further improving the color shift. Moreover, when all the blur shapes of the R image, the B image, and the G image are processed so as to have the same ideal circular shape, a natural and preferable image suitable for viewing can be obtained.
• The present invention is not limited to the embodiments described above as they are; in the implementation stage, the constituent elements can be modified and embodied without departing from the scope of the invention. Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments; for example, some constituent elements may be deleted from all the constituent elements shown in the embodiments, and constituent elements across different embodiments may be combined as appropriate.

Abstract

This image capture device is provided with: a color image sensor (22) that generates at least an R image and a B image; an imaging optical system (9) that forms a subject image on the image sensor (22); a band-limiting filter (12) that is disposed on the optical path of the imaging light beam and that performs a first band limitation whereby, for light attempting to pass through one part of the aperture area of the imaging optical system (9), B light is blocked and R light is passed, and a second band limitation whereby, for light attempting to pass through the other part of the aperture area, R light is blocked and B light is passed; a distance computation unit (39) that computes the amount of phase difference between the R image and the B image; and a color image generation unit (37) that, on the basis of the computed amount of phase difference, corrects the deviation between the position of the center of gravity of the blur of the R image and the position of the center of gravity of the blur of the B image.

Description

Imaging device and image processing device
 The present invention relates to an imaging apparatus and an image processing apparatus that can acquire distance information based on an image obtained from an imaging element.
 Distance information can be used to realize various functions in an imaging apparatus, such as performing AF processing by an automatic focus adjustment (AF) mechanism, creating a stereoscopic image, or performing image processing (for example, subject extraction processing, background extraction processing, or image processing for blur amount control).
 Various methods for acquiring such distance information have conventionally been proposed: for example, an active ranging method in which illumination light is projected and the reflected light from the subject is received to measure distance; a method in which distance is measured by the principle of triangulation from images acquired by a plurality of imaging devices arranged with a baseline length (for example, a stereo camera); and a contrast AF method in which the focus lens is driven so that the contrast of the image acquired by the imaging device itself becomes high.
 However, the active ranging method requires members dedicated to ranging, such as a ranging light projector, and the triangulation method requires a plurality of imaging devices; both therefore increase the size and cost of the imaging apparatus. On the other hand, the contrast AF method uses an image acquired by the imaging apparatus itself and thus needs no dedicated ranging member, but because it searches for the peak of the contrast value by capturing images multiple times while changing the focus lens position, it takes time to find the peak corresponding to the in-focus position, and high-speed AF is difficult.
 Against this background, as a technique for acquiring distance information while suppressing increases in size and cost, a technique has been proposed in which the light flux passing through the pupil of a lens is divided into a plurality of parts and received, and distance information to the subject is acquired by performing a correlation calculation between a pixel signal obtained from the light flux that passed through one pupil region of the lens and a pixel signal obtained from the light flux that passed through another pupil region. As a technique for simultaneously acquiring such pupil-divided images, there is, for example, a technique of disposing a light shielding plate (mask) over pixels for distance detection.
 As a technique for performing pupil division without a light shielding plate (mask), for example, Japanese Patent Laid-Open No. 2001-174696 describes a technique in which a pupil color division filter having different spectral characteristics for each partial pupil is interposed in the photographing optical system, and the subject image from the photographing optical system is received by a color image sensor so that pupil division is performed by color. That is, the image signal output from the color image sensor is color-separated, and by detecting the relative shift amount between the same subject in the respective color images, two pieces of focusing information are acquired: the focusing shift direction, that is, whether the focus deviates toward the near side or the far side of the in-focus position, and the focusing shift amount, that is, the amount of deviation from the in-focus position in that direction.
 Japanese Patent Laid-Open No. 4-251239 describes a technique of providing partial pupils with the spectral characteristics (blue and orange) shown in FIG. 7 of that publication, simultaneously acquiring a plurality of images with parallax from the different partial pupils, and creating a stereoscopic image.
 Furthermore, Japanese Patent Laid-Open No. 11-344662 describes an AF diaphragm in which different color filters (a G filter and an M filter) are respectively disposed in openings arranged at different eccentric positions with respect to the center of the AF lens. In imaging for AF, image data of the AF area of the image frame corresponding to each light flux passing through each color filter of the AF diaphragm is acquired for each color, a cross-correlation calculation unit calculates the cross-correlation between these color-separated image data, and a distance/direction calculation unit calculates the distance and direction to the in-focus position of the AF lens based on the cross-correlation, driving the AF lens to the in-focus position.
 Incidentally, Japanese Patent Laid-Open No. 2001-16611 describes a technique of acquiring the shift amount between a first image that has passed through a first opening and a second image that has passed through a second opening, and calculating distance information by the lens formula.
 However, when techniques such as those described in the above publications are used, a shift occurs between the color images obtained from the image sensor, so the use of the obtained image is limited to AF and the image is unsuitable for viewing.
 The present invention has been made in view of the above circumstances, and an object thereof is to provide an imaging apparatus and an image processing apparatus capable of turning an image captured by light that has passed through different pupil regions of the imaging optical system depending on the band into a more preferable image for viewing.
 In order to achieve the above object, an imaging apparatus according to one aspect of the present invention comprises: a color imaging element that receives and photoelectrically converts at least light in a first band and light in a second band to generate a first image signal and a second image signal; an imaging optical system that forms a subject image on the imaging element; a band limiting filter that is disposed on the optical path of the photographing light flux from the subject side of the imaging optical system to the imaging element, and that performs a first band limitation of blocking the light in the first band and passing the light in the second band with respect to light attempting to pass through a part of the pupil region of the imaging optical system, and a second band limitation of blocking the light in the second band and passing the light in the first band with respect to light attempting to pass through another part of the pupil region of the imaging optical system; a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal; and an image correction unit that corrects, based on the phase difference amount calculated by the calculation unit, a positional deviation between the blur center-of-gravity position of the first image signal and the blur center-of-gravity position of the second image signal.
 An image processing apparatus according to another aspect of the present invention is an image processing apparatus for processing an image obtained by an imaging apparatus having: a color imaging element that receives and photoelectrically converts at least light in a first band and light in a second band to generate a first image signal and a second image signal; an imaging optical system that forms a subject image on the imaging element; and a band limiting filter that is disposed on the optical path of the photographing light flux from the subject side of the imaging optical system to the imaging element, and that performs a first band limitation of blocking the light in the first band and passing the light in the second band with respect to light attempting to pass through a part of the pupil region of the imaging optical system, and a second band limitation of blocking the light in the second band and passing the light in the first band with respect to light attempting to pass through another part of the pupil region. The image processing apparatus comprises: a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal; and an image correction unit that corrects, based on the phase difference amount calculated by the calculation unit, a positional deviation between the blur center-of-gravity position of the first image signal and the blur center-of-gravity position of the second image signal.
FIG. 1 is a block diagram showing the configuration of the imaging apparatus according to Embodiment 1 of the present invention.
FIG. 2 is a diagram for explaining the pixel array of the imaging element in Embodiment 1.
FIG. 3 is a diagram for explaining a configuration example of the band limiting filter in Embodiment 1.
FIG. 4 is a plan view showing how the subject light flux is condensed when imaging a subject at the in-focus position in Embodiment 1.
FIG. 5 is a plan view showing how the subject light flux is condensed when imaging a subject closer than the in-focus position in Embodiment 1.
FIG. 6 is a diagram showing the blur shape formed by light from one point on a subject closer than the in-focus position in Embodiment 1.
FIG. 7 is a diagram showing, for each color component, the blur shape formed by light from one point on a subject closer than the in-focus position in Embodiment 1.
FIG. 8 is a plan view showing how the subject light flux is condensed when imaging a subject farther than the in-focus position in Embodiment 1.
FIG. 9 is a diagram showing the blur shape formed by light from one point on a subject farther than the in-focus position in Embodiment 1.
FIG. 10 is a diagram showing, for each color component, the blur shape formed by light from one point on a subject farther than the in-focus position in Embodiment 1.
FIG. 11 is a diagram showing images obtained when imaging subjects at the in-focus position and at nearer and farther distances in Embodiment 1.
FIG. 12 is a diagram showing a first modification of the band limiting filter in Embodiment 1.
FIG. 13 is a diagram showing a second modification of the band limiting filter in Embodiment 1.
FIG. 14 is a diagram showing a third modification of the band limiting filter in Embodiment 1.
FIG. 15 is a diagram showing a fourth modification of the band limiting filter in Embodiment 1.
FIG. 16 is a diagram showing an outline of the blur shape after the colorization processing in Embodiment 1.
FIG. 17 is a diagram showing the shape of the R filter kernel applied to the R image of a subject farther than the in-focus position in the colorization processing of Embodiment 1.
FIG. 18 is a diagram showing the shape of the B filter kernel applied to the B image of a subject farther than the in-focus position in the colorization processing of Embodiment 1.
FIG. 19 is a diagram showing how the R image and the B image of a subject farther than the in-focus position are shifted in another example of the colorization processing in Embodiment 1.
FIG. 20 is a diagram showing the shape of the filter applied to the R image and the B image in the other example of the colorization processing of Embodiment 1.
FIG. 21 is a diagram showing the shape of an elliptical Gaussian filter with an enlarged horizontal standard deviation in Embodiment 1.
FIG. 22 is a diagram showing the shape of an elliptical Gaussian filter with a reduced vertical standard deviation in Embodiment 1.
FIG. 23 is a flowchart showing the colorization processing performed by the color image generation unit in Embodiment 1.
FIG. 24 is a block diagram showing the configuration of the image processing apparatus in Embodiment 1.
FIG. 25 is a diagram showing an outline of the colorization processing performed by the color image generation unit in Embodiment 2 of the present invention.
FIG. 26 is a diagram showing the partial regions set in the R image and the B image when performing phase difference detection in Embodiment 2.
FIG. 27 is a diagram showing how the blur diffusion partial region of the original R image is copy-added to the R copy image in Embodiment 2.
FIG. 28 is a diagram showing how the blur diffusion partial region of the original B image is copy-added to the B copy image in Embodiment 2.
FIG. 29 is a diagram showing an example of changing the size of the blur diffusion partial region according to the phase difference amount in Embodiment 2.
FIG. 30 is a flowchart showing the colorization processing performed by the color image generation unit in Embodiment 2.
FIG. 31 is a diagram showing an outline of the PSF table for each color according to the phase difference amount in Embodiment 3 of the present invention.
FIG. 32 is a diagram showing an outline of the colorization processing performed by the color image generation unit in Embodiment 3.
FIG. 33 is a diagram showing an outline of the colorization processing with blur amount control performed by the color image generation unit in Embodiment 3.
FIG. 34 is a flowchart showing the colorization processing performed by the color image generation unit in Embodiment 3.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[Embodiment 1]
 FIGS. 1 to 24 show Embodiment 1 of the present invention, and FIG. 1 is a block diagram showing the configuration of the imaging apparatus.
 The imaging apparatus of the present embodiment is configured as, for example, a digital still camera. Although a digital still camera is taken as an example here, the imaging apparatus may be any device that includes a color imaging element and has an imaging function; examples include digital still cameras, video cameras, camera-equipped mobile phones, camera-equipped personal digital assistants (PDAs with camera), camera-equipped personal computers, surveillance cameras, and endoscopes.
 The imaging apparatus includes a lens unit 1 and a body unit 2, which is a main body to which the lens unit 1 is detachably attached via a lens mount. Although the case where the lens unit 1 is detachable is described here as an example, it need not, of course, be detachable.
 The lens unit 1 includes an imaging optical system 9 including a lens 10 and a diaphragm 11, a band limiting filter 12, a lens control unit 14, and a lens-side communication connector 15.
 The body unit 2 includes a shutter 21, an imaging element 22, an imaging circuit 23, an imaging drive unit 24, an image processing unit 25, an image memory 26, a display unit 27, an interface (IF) 28, a system controller 30, a sensor unit 31, an operation unit 32, a strobe control circuit 33, a strobe 34, and a body-side communication connector 35. Although a recording medium 29 is also shown inside the body unit 2 in FIG. 1, this recording medium 29 is, for example, a memory card (SmartMedia, SD card, xD-Picture Card, etc.) that is detachable from the imaging apparatus, and therefore need not be a component unique to the imaging apparatus.
First, the imaging optical system 9 in the lens unit 1 forms a subject image on the image sensor 22. The lens 10 of the imaging optical system 9 includes a focus lens for performing focus adjustment. The lens 10 is generally composed of a plurality of lens elements, but only a single lens is shown in FIG. 1 for simplicity.
The diaphragm 11 of the imaging optical system 9 adjusts the brightness of the subject image formed on the image sensor 22 by restricting the passage range of the subject light flux passing through the lens 10.
The band limiting filter 12 is disposed on the optical path of the imaging light flux traveling from the subject side of the imaging optical system 9 to the image sensor 22 (desirably at or near the position of the diaphragm 11 of the imaging optical system 9). It is a filter that performs a first band limitation, blocking light of a first band and passing light of a second band for light about to pass through one part of the pupil region of the imaging optical system 9, and a second band limitation, blocking light of the second band and passing light of the first band for light about to pass through another part of the pupil region of the imaging optical system 9. In particular, the band limiting filter 12 of the present embodiment performs a first band limitation that blocks light of the first band and passes light of the second and third bands in the imaging light flux about to pass through a first region forming part of the pupil region of the imaging optical system 9, and a second band limitation that blocks light of the second band and passes light of the first and third bands in the imaging light flux about to pass through a second region forming another part of the pupil region of the imaging optical system 9.
Here, FIG. 3 is a diagram for explaining one configuration example of the band limiting filter 12. In the configuration example shown in FIG. 3, the pupil region of the imaging optical system 9 is divided in two, into a first region and a second region. That is, when the band limiting filter 12 is viewed from the image sensor 22 with the imaging apparatus in the standard posture (the so-called normal horizontal camera position), its left half is an RG filter 12r that passes the G (green) and R (red) components and blocks the B (blue) component (the B component thus being one of the first and second bands), and its right half is a GB filter 12b that passes the G and B components and blocks the R component (the R component thus being the other of the first and second bands). Accordingly, the band limiting filter 12 passes all of the G component contained in the light passing through the aperture of the diaphragm 11 of the imaging optical system 9 (the G component thus being the third band), passes the R component only through half the aperture area, and passes the B component only through the remaining half. Note that if the RGB spectral transmission characteristics of the band limiting filter 12 differ from those of the on-chip element filter of the image sensor 22 (see FIG. 2), the accuracy of the positional information obtained, as described later, from images based on the spatial separation of the RG filter 12r and the GB filter 12b deteriorates, and light loss due to the spectral mismatch occurs. The spectral transmission characteristics of the band limiting filter 12 are therefore desirably identical to, or as close as possible to, those of the element filter of the image sensor 22. Other configuration examples of the band limiting filter 12 are described later with reference to FIGS. 12 to 15 and elsewhere.
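As an illustration of this pupil split, the following is a minimal sketch under an assumed idealized geometry, not the patent's exact optics; the grid resolution and all names are illustrative:

```python
# A sketch of the FIG. 3 pupil split: G passes the whole circular aperture,
# R only its left half (RG filter 12r), B only its right half (GB filter 12b),
# as seen from the image sensor.
import numpy as np

def pupil_masks(n=128):
    """Return boolean transmission masks (r, g, b) over an n x n pupil grid."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    aperture = x**2 + y**2 <= 1.0      # circular diaphragm opening
    g = aperture                       # G: whole pupil (the third band)
    r = aperture & (x < 0)             # R: left half only
    b = aperture & (x >= 0)            # B: right half only
    return r, g, b

r, g, b = pupil_masks()
# R and B each pass roughly half the light of G, the imbalance that the
# inter-color correction described later compensates for.
print(r.sum() / g.sum(), b.sum() / g.sum())   # both close to 0.5
```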
The lens control unit 14 controls the lens unit 1. That is, based on commands received from the system controller 30 via the lens-side communication connector 15 and the body-side communication connector 35, the lens control unit 14 drives the focus lens in the lens 10 to achieve focus, and drives the diaphragm 11 to change its aperture diameter.
The lens-side communication connector 15 is a connector that enables communication between the lens control unit 14 and the system controller 30 when the lens unit 1 and the body unit 2 are coupled by the lens mount and the connector is joined to the body-side communication connector 35.
Next, the shutter 21 in the body unit 2 is an optical shutter that adjusts the exposure time of the image sensor 22 by restricting the passage time of the subject light flux traveling from the lens 10 to the image sensor 22. Although an optical shutter is used here, an element shutter (electronic shutter) of the image sensor 22 may be used instead of, or in addition to, the optical shutter.
The image sensor 22 is a color image sensor, configured for example as a CCD or CMOS sensor, that receives the subject image formed by the imaging optical system 9 separately for each of a plurality of wavelength bands (for example, though not limited to, RGB), photoelectrically converts it, and outputs it as an electrical signal. The color image sensor may be a single-chip sensor with an on-chip element color filter, a three-chip system using a dichroic prism that separates the light into RGB color components, a sensor that acquires RGB imaging information at the same pixel position according to depth within the semiconductor, or any other configuration capable of acquiring imaging information for a plurality of wavelength bands.
For example, a configuration example of a single-chip color image sensor, often used in general digital still cameras, is described with reference to FIG. 2, which illustrates the pixel arrangement of the image sensor 22. In the present embodiment, the plurality of wavelength bands transmitted by the on-chip element color filter are R, G, and B, and as shown in FIG. 2, a single-chip color image sensor with a primary-color Bayer arrangement is configured. Accordingly, when the image sensor 22 has the configuration shown in FIG. 2, only one color component is obtained per pixel, so the image processing unit 25 performs demosaicing to generate a color image in which all three RGB colors are present at each pixel. The image sensor 22 thus receives and photoelectrically converts at least light of the first band, for example R light, and light of the second band, for example B light, and generates an R image signal as a first image signal and a B image signal as a second image signal.
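For reference, a minimal sketch of the first step implied by this Bayer arrangement, separating the mosaic into sparse per-color planes before demosaicing interpolation; an RGGB tile is assumed here, which may differ from the exact layout of FIG. 2:

```python
# A sketch of separating a Bayer mosaic into sparse per-color planes,
# the step that precedes demosaicing interpolation.
import numpy as np

def split_bayer_rggb(raw):
    """raw: (H, W) mosaic with R at (0,0), G at (0,1) and (1,0), B at (1,1)."""
    planes = {c: np.zeros_like(raw, dtype=np.float64) for c in "rgb"}
    planes["r"][0::2, 0::2] = raw[0::2, 0::2]
    planes["g"][0::2, 1::2] = raw[0::2, 1::2]
    planes["g"][1::2, 0::2] = raw[1::2, 0::2]
    planes["b"][1::2, 1::2] = raw[1::2, 1::2]
    return planes["r"], planes["g"], planes["b"]
```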
The imaging circuit 23 amplifies (gain-adjusts) the image signal output from the image sensor 22 and, when the image sensor 22 is an analog sensor outputting an analog image signal, performs A/D conversion to generate a digital image signal (hereinafter also called "image information"); when the image sensor 22 is a digital sensor, the signal is already digital when input to the imaging circuit 23, so no A/D conversion is performed. The imaging circuit 23 outputs the image signal to the image processing unit 25 in a format corresponding to the imaging mode switched by the imaging drive unit 24, as described later.
Based on commands from the system controller 30, the imaging drive unit 24 supplies timing signals and power to the image sensor 22 and the imaging circuit 23, causes the image sensor to perform exposure, readout, element shuttering, and the like, and controls the imaging circuit 23 to perform gain adjustment and A/D conversion in synchronization with the operation of the image sensor 22. The imaging drive unit 24 also performs control to switch the imaging mode of the image sensor 22.
The image processing unit 25 performs digital image processing such as WB (white balance) adjustment, black level correction, γ correction, defective pixel correction, demosaicing, color information conversion of image information, and pixel count conversion of image information. The image processing unit 25 further includes an inter-color correction unit 36 and a color image generation unit 37, which serves as an image correction unit and as setting means.
 上述したように、帯域制限フィルタ12は通過させる光の帯域(色)によって、撮像光学系9の瞳領域における通過領域を制限しているために、通過する光は、帯域によって明るさが異なることになる。色間補正部36は、このような帯域間(色間)の明るさの違いを補正するためのものである。この帯域間(色間)の明るさの違いの補正は、簡易的には、帯域毎の通過領域の面積に応じて補正することが考えられるが、画像の中心よりも周辺の方が光量が低下する傾向があることを考慮して、撮像光学系9の光学特性に応じたより詳細な補正を行うようにしても勿論構わない。このときには、撮像装置内で補正値を算出するに限るものではなく、テーブルデータ等として補正値を保持するようにしても構わない。具体例として、帯域制限フィルタ12が図3に示したように構成されている場合には、RとBはGの半分の通過光量となるために、Rの色信号とBの色信号を2倍にする処理を簡易的な色間補正として行うことが考えられる(同様に、後述する図12~図15の構成の場合には、それぞれの帯域毎の通過領域の面積に応じて補正することになる)。このような処理を行うことにより、光量バランスの点においては図3の帯域制限フィルタ12が挿入されていない撮像装置と同様な画像として扱うことが可能となり、各種機能(例えば画像生成処理など)を利用することが可能となる。 As described above, since the band limiting filter 12 limits the pass region in the pupil region of the imaging optical system 9 by the band (color) of the light that passes through, the brightness of the light that passes through the band varies. become. The inter-color correction unit 36 is for correcting such a difference in brightness between bands (between colors). The brightness difference between bands (between colors) can be easily corrected according to the area of the pass region for each band, but the amount of light in the periphery is larger than the center of the image. Of course, more detailed correction according to the optical characteristics of the imaging optical system 9 may be performed in consideration of the tendency to decrease. At this time, the correction value is not limited to be calculated in the imaging apparatus, and the correction value may be held as table data or the like. As a specific example, when the band limiting filter 12 is configured as shown in FIG. 3, R and B have half the amount of light passing through G, so that the R color signal and the B color signal are 2 It is conceivable to perform the doubling process as a simple inter-color correction (similarly, in the case of the configuration shown in FIGS. 12 to 15 described later, correction is performed according to the area of the pass region for each band. become). By performing such processing, it becomes possible to handle the image as an image similar to that of the imaging device in which the band limiting filter 12 of FIG. 3 is not inserted in terms of light quantity balance, and various functions (for example, image generation processing, etc.) can be performed. It can be used.
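A minimal sketch of the simple inter-color correction just described, assuming the FIG. 3 filter (R and B each pass half the pupil area of G) and a normalized floating-point image; the array layout and gain values are illustrative:

```python
# A sketch of the simple inter-color correction: scale R and B by the
# inverse of their pupil pass-area ratio (here a plain 2x gain).
import numpy as np

def correct_color_balance(rgb, gain_r=2.0, gain_b=2.0):
    """rgb: float image of shape (H, W, 3) in R, G, B order, values in [0, 1]."""
    out = rgb.astype(np.float64).copy()
    out[..., 0] *= gain_r          # compensate R for its half-pupil area
    out[..., 2] *= gain_b          # compensate B likewise
    return np.clip(out, 0.0, 1.0)  # keep the normalized range
```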
The color image generation unit 37 performs colorization processing, digital processing for forming color image information. When the band limiting filter 12 shown in FIG. 3 is used, a spatial positional displacement may occur between the R image and the B image; correcting this spatial displacement is one example of the colorization processing performed by the color image generation unit 37. Because this displacement occurs in blurred (out-of-focus) portions and not in in-focus portions, more specifically, the color image generation unit 37 performs processing that corrects, based on the phase difference amount computed by a distance calculation unit 39 (described later) serving as a calculation unit, the positional displacement between the blur centroid position of the first image signal (for example, the R image signal) and the blur centroid position of the second image signal (for example, the B image signal).
First, the spatial displacement between the color images is described with reference to FIGS. 4 to 11. FIG. 4 is a plan view showing how the subject light flux converges when imaging a subject at the in-focus position; FIG. 5 is a plan view showing how the subject light flux converges when imaging a subject closer than the in-focus position; FIG. 6 shows the shape of the blur formed by light from one point on a subject closer than the in-focus position; FIG. 7 shows, for each color component, the shape of the blur formed by light from one point on a subject closer than the in-focus position; FIG. 8 is a plan view showing how the subject light flux converges when imaging a subject farther than the in-focus position; FIG. 9 shows the shape of the blur formed by light from one point on a subject farther than the in-focus position; FIG. 10 shows, for each color component, the shape of the blur formed by light from one point on a subject farther than the in-focus position; and FIG. 11 shows the appearance of images captured of subjects at the in-focus position and at nearer and farther distances. In the description of the blur shapes, the aperture of the diaphragm 11 is assumed to be circular.
When the subject OBJc is at the in-focus position, the light radiated from one point on the subject OBJc is, as shown in FIG. 4, focused to one point on the image sensor 22 for the G component passing through the whole band limiting filter 12, for the R component passing only through the RG filter 12r half of the band limiting filter 12, and for the B component passing only through the GB filter 12b half, forming a point image IMGrgb; therefore, although the light amounts differ according to the pass-region areas as described above, no positional displacement arises between the colors. Accordingly, when the subject OBJc at the in-focus position is imaged, a subject image IMGrgb free of color bleeding is formed, as shown in FIG. 11.
In contrast, when the subject OBJn is, for example, closer than the in-focus position, the light radiated from one point on the subject OBJn forms, as shown in FIGS. 5 to 7, a circular blur subject image IMGg for the G component, a left-half semicircular blur subject image IMGr for the R component, and a right-half semicircular blur subject image IMGb for the B component. Accordingly, when the subject OBJn closer than the in-focus position is imaged, a blurred image is formed in which, as shown in FIG. 11, the R-component subject image IMGr is displaced to the left and the B-component subject image IMGb to the right; the left-right positions of the R and B components in this blurred image are the same as the left-right positions of the R-component transmission region (RG filter 12r) and the B-component transmission region (GB filter 12b) of the band limiting filter 12 as seen from the image sensor 22 (in FIG. 11 the displacement is exaggerated and the actual blur shapes are omitted; the same applies to the far-distance case below). The farther the subject OBJn moves from the in-focus position toward the near side, the larger the blur becomes and the larger the separation between the centroid position of the R-component subject image IMGr and that of the B-component subject image IMGb.
Conversely, when the subject OBJf is, for example, farther than the in-focus position, the light radiated from one point on the subject OBJf forms, as shown in FIGS. 8 to 10, a circular blur subject image IMGg for the G component, a right-half semicircular blur subject image IMGr for the R component, and a left-half semicircular blur subject image IMGb for the B component. Accordingly, when the subject OBJf farther than the in-focus position is imaged, a blurred image is formed in which, as shown in FIG. 11, the R-component subject image IMGr is displaced to the right and the B-component subject image IMGb to the left; the left-right positions of the R and B components in this blurred image are the reverse of the left-right positions of the R-component transmission region (RG filter 12r) and the B-component transmission region (GB filter 12b) of the band limiting filter 12 as seen from the image sensor 22. The farther the subject OBJf moves from the in-focus position toward the far side, the larger the blur becomes and the larger the separation between the centroid position of the R-component subject image IMGr and that of the B-component subject image IMGb.
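As a quantitative aside not stated in the patent text: if the blur of a defocused point is modeled as a uniform disc of radius r that the FIG. 3 filter splits into two half-discs, the centroid of each half-disc lies at 4r/(3π) from the disc center, so the R-B centroid separation is

```latex
% Centroid offset of a uniform half-disc of radius r from the disc center:
\bar{x}
  = \frac{\int_{-\pi/2}^{\pi/2}\int_{0}^{r} (\rho\cos\theta)\,\rho\,d\rho\,d\theta}
         {\pi r^{2}/2}
  = \frac{4r}{3\pi},
\qquad
d_{RB} = 2\bar{x} = \frac{8r}{3\pi} \approx 0.85\, r .
```

Under this idealization the measured separation grows linearly with the blur radius, consistent with the observation above that the R and B centroids move farther apart as the subject moves away from the in-focus position.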
The image memory 26 is a memory capable of high-speed writing and reading, configured for example as SDRAM (Synchronous Dynamic Random Access Memory), and is used as a work area for image processing and also as a work area for the system controller 30. For example, the image memory 26 stores not only the final image processed by the image processing unit 25 but also, as appropriate, each intermediate image produced in the multiple processing stages of the image processing unit 25.
The display unit 27 includes an LCD or the like and displays images processed for display by the image processing unit 25 (including images read from the recording medium 29 and processed for display by the image processing unit 25). Specifically, the display unit 27 performs live-view display, confirmation display when recording a still image, playback display of still or moving images read from the recording medium 29, and the like.
The interface (IF) 28 detachably connects the recording medium 29 and conveys information to be recorded on the recording medium 29 and information read from it.
The recording medium 29 records images processed for recording by the image processing unit 25 and various data related to those images, and is configured, as described above, as a memory card or the like.
The sensor unit 31 includes, for example, a camera-shake sensor composed of an acceleration sensor or the like for detecting shake of the imaging apparatus, a temperature sensor for measuring the temperature of the image sensor 22, and a brightness sensor for measuring the brightness around the imaging apparatus. The detection results of the sensor unit 31 are input to the system controller 30. The detection result of the camera-shake sensor is used to drive the image sensor 22 or the lens 10 for shake correction, or to perform shake correction by image processing. The detection result of the temperature sensor is used to control the drive clock of the imaging drive unit 24 and to estimate the amount of noise in images obtained from the image sensor 22. Further, the detection result of the brightness sensor is used, for example, to appropriately control the luminance of the display unit 27 according to the ambient brightness.
The operation unit 32 includes a power switch for turning the imaging apparatus on and off, a release button consisting of a two-stage push button for instructing still-image and moving-image capture, a mode button for changing the imaging mode and the like, and a cross key used to change selection items, numerical values, and so on. Signals generated by operating the operation unit 32 are input to the system controller 30.
The strobe control circuit 33 controls the light emission amount and emission timing of the strobe 34 based on commands from the system controller 30.
The strobe 34 is a light-emitting source that irradiates the subject with illumination light under the control of the strobe control circuit 33.
As described above, the body-side communication connector 35 is a connector that enables communication between the lens control unit 14 and the system controller 30 when the lens unit 1 and the body unit 2 are coupled by the lens mount and the connector is joined to the lens-side communication connector 15.
The system controller 30 controls the body unit 2 and also controls the lens unit 1 via the lens control unit 14; it is the control unit that controls the imaging apparatus as a whole. The system controller 30 reads a basic control program of the imaging apparatus from a nonvolatile memory (not shown) such as a flash memory, and controls the entire imaging apparatus in response to inputs from the operation unit 32.
For example, the system controller 30 controls the aperture adjustment of the diaphragm 11 via the lens control unit 14, controls and drives the shutter 21, and drives a camera-shake correction mechanism (not shown) based on the detection result of the acceleration sensor in the sensor unit 31 to perform shake correction. Further, in response to input from the mode button of the operation unit 32, the system controller 30 sets the mode of the imaging apparatus (a still-image mode for capturing still images, a moving-image mode for capturing moving images, a 3D mode for capturing stereoscopic images, and so on).
Further, the system controller 30 includes a contrast AF control unit 38 and a distance calculation unit 39 serving as a calculation unit; it performs AF control by means of the contrast AF control unit 38, or performs AF by controlling the lens unit 1 based on distance information computed by the distance calculation unit 39.
The contrast AF control unit 38 generates a contrast value (also called an AF evaluation value) from the image signal output by the image processing unit 25 (this image signal may be the G image, which contains a high proportion of the luminance component, or a luminance-signal image derived from an image whose color displacement has been corrected by the colorization processing described later), and controls the focus lens in the lens 10 via the lens control unit 14. That is, the contrast AF control unit 38 applies a filter, for example a high-pass filter, to the image signal to extract high-frequency components and uses them as the contrast value. The contrast AF control unit 38 then acquires contrast values at different focus lens positions, moves the focus lens in the direction in which the contrast value increases, and acquires further contrast values. By repeating this process, it controls the focus lens so that it is driven to the focus lens position (in-focus position) at which the maximum contrast value is obtained.
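A minimal sketch of this contrast AF loop under assumed interfaces: get_image(pos) captures a frame with the focus lens at position pos, and the AF evaluation value is the energy of a high-pass-filtered (here Laplacian) image; the step sizes and hill-climbing policy are illustrative, not the patent's algorithm:

```python
# A sketch of contrast AF: hill-climb the focus position until the
# high-frequency energy of the captured image peaks.
import numpy as np
from scipy.ndimage import laplace

def contrast_value(img):
    """AF evaluation value: energy of the high-pass-filtered image."""
    return float(np.sum(laplace(img.astype(np.float64)) ** 2))

def contrast_af(get_image, pos, step=1.0, min_step=0.125):
    """Climb toward increasing contrast; halve the step once the peak is bracketed."""
    best = contrast_value(get_image(pos))
    while step >= min_step:
        for cand in (pos + step, pos - step):
            v = contrast_value(get_image(cand))
            if v > best:
                best, pos = v, cand   # move toward increasing contrast
                break
        else:
            step /= 2.0               # no improvement: refine the search
    return pos
```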
Next, the distance calculation unit 39 computes the phase difference amount between the first image of the first band and the second image of the second band obtained from the image sensor 22, and computes the distance to the subject based on the computed phase difference amount. Specifically, the distance calculation unit 39 computes distance information from the displacement between the R component and the B component, as shown in FIGS. 7, 10, 11, and elsewhere, using the lens formula.
That is, the distance calculation unit 39 extracts the R component and the B component from the RGB color components obtained as the captured image. By performing a correlation operation between the R component and the B component, it computes the direction and magnitude of the displacement arising between the R-component image and the B-component image (this is not limiting, however; it is also possible to compute the direction and magnitude of the displacement arising between the R image and the G image, or between the B image and the G image).
Here, when the band limiting filter 12 shown in FIG. 3, for example, is used, the direction of the displacement between the R and B components in the image reverses depending on whether the subject is nearer or farther than the in-focus position, as described with reference to FIGS. 5 to 11. The displacement direction is therefore information for determining whether the subject of interest is nearer or farther than the in-focus position.
Also, the farther the subject is from the in-focus position, toward either the near side or the far side, the larger the displacement between the R and B components. The displacement magnitude is therefore information for determining how far the subject of interest is from the in-focus position.
In this way, the distance calculation unit 39 computes, based on the calculated displacement direction and magnitude, how far the subject of interest is from the in-focus position, and on which side, nearer or farther.
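A minimal sketch of this correlation-based displacement estimation, assuming the FIG. 3 geometry so that the R-B displacement is horizontal; the window size, the search range, and the use of normalized cross-correlation are illustrative choices, not the patent's exact method:

```python
# A sketch of the R-B correlation: the horizontal displacement of the B patch
# relative to the R patch is estimated by maximizing normalized
# cross-correlation over candidate shifts. The sign of the result indicates
# near vs. far side; its magnitude grows with the defocus.
import numpy as np

def estimate_shift(r_patch, b_patch, max_shift=16):
    """Patches are 2-D arrays of equal shape, wider than 2 * max_shift."""
    r = r_patch.astype(np.float64)
    b = b_patch.astype(np.float64)
    fixed = r[:, max_shift:-max_shift]            # fixed comparison window
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        moving = b[:, max_shift + s : b.shape[1] - max_shift + s]
        f0 = fixed - fixed.mean()
        m0 = moving - moving.mean()
        denom = np.sqrt((f0**2).sum() * (m0**2).sum()) + 1e-12
        score = (f0 * m0).sum() / denom           # normalized cross-correlation
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```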
The distance calculation unit 39 performs the distance computation described above for a distance computation region determined by the system controller 30 (for example, the entire captured image, or a partial region of the captured image for which distance information is desired) based on input from the operation unit 32 or on the basic control program of the imaging apparatus. As a technique for obtaining the displacement amount and computing distance information, the technique described in, for example, Japanese Patent Application Laid-Open No. 2001-16611 can be used.
The distance information acquired by the distance calculation unit 39 can be used, for example, for autofocus (AF).
That is, the distance calculation unit 39 acquires distance information based on the displacement between the R and B components, and based on the acquired distance information the system controller 30 drives the focus lens of the lens 10 via the lens control unit 14, performing AF, namely phase difference AF. This enables high-speed AF based on a single captured image.
However, since the distance calculation unit 39 and the contrast AF control unit 38 are both provided in the system controller 30 as described above, AF control may be performed based on the computation result of the distance calculation unit 39, or it may be performed by the contrast AF control unit 38.
Here, contrast AF by the contrast AF control unit 38 has high focusing accuracy, but because it requires multiple captured images, its focusing speed cannot be called fast. On the other hand, the subject distance computation by the distance calculation unit 39 can be performed from a single captured image, so while its focusing speed is fast, its focusing accuracy can be inferior to that of contrast AF.
Therefore, the AF assist control unit 38a provided in the contrast AF control unit 38 may perform AF by combining the contrast AF control unit 38 and the distance calculation unit 39. That is, the distance calculation unit 39 is made to perform the distance computation based on the displacement between the R and B components of the image acquired through the band limiting filter 12, to determine whether the subject is on the far side or the near side of the current focus position, or further, to obtain the distance by which the subject is separated from the current focus position. The AF assist control unit 38a then drives the focus lens toward the obtained far or near side (by the obtained distance) and controls the contrast AF control unit 38 to perform contrast AF. By performing such processing, high focusing accuracy can be obtained at a fast focusing speed, as sketched below.
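A minimal sketch of this AF-assist combination, reusing the estimate_shift and contrast_af sketches above; get_rb_patches and shift_to_lens_offset are assumed interfaces (a single-frame capture and a calibration mapping from phase difference to lens displacement), not APIs from the patent:

```python
# A sketch of hybrid AF: one phase-difference measurement coarsely positions
# the focus lens, then contrast AF refines around that position.
def hybrid_af(get_image, get_rb_patches, pos, shift_to_lens_offset):
    r_patch, b_patch = get_rb_patches()        # a single captured frame suffices
    shift = estimate_shift(r_patch, b_patch)   # signed R-B phase difference
    pos += shift_to_lens_offset(shift)         # coarse jump toward focus
    return contrast_af(get_image, pos)         # fine contrast AF search
```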
Also, the R image and B image obtained from the image sensor 22 can be used, for example, as a stereoscopic image pair (3D image).
For a 3D image, it suffices that the image from the left pupil can be observed with the left eye and the image from the right pupil with the right eye. As such a 3D image viewing method, the anaglyph method has long been known (see also Japanese Patent Laid-Open No. 4-251239 mentioned above). In the anaglyph method, generally, a red left-eye image and a blue right-eye image are generated and displayed together, and by viewing this image through anaglyph red/blue glasses having a red transmission filter on the left-eye side and a blue transmission filter on the right-eye side, a monochrome stereoscopic image can be observed.
Therefore, in the present embodiment, the R image and B image obtained from the image sensor 22 in the standard posture (images to which the colorization processing of the color image generation unit 37 that corrects the positional displacement between the R and B components has not been applied) are configured so that stereoscopic viewing is possible simply by observing them through these anaglyph red/blue glasses. That is, as described with reference to FIG. 3, with the imaging apparatus in the standard posture, the RG filter 12r of the band limiting filter 12 is arranged on the left side when the subject is viewed from the image sensor 22, and the GB filter 12b on the right side. Thus, when red/blue glasses are worn, the R-component light transmitted through the left RG filter 12r is observed only by the left eye, and the B-component light transmitted through the right GB filter 12b only by the right eye, enabling stereoscopic viewing.
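A minimal sketch of composing such an anaglyph from the two pupil views; feeding the R image (left pupil view) to the red channel and the B image (right pupil view) to the blue channel is all that is required, and leaving the green channel empty is an illustrative simplification:

```python
# A sketch of anaglyph composition: red/blue glasses then route each
# pupil view to the matching eye.
import numpy as np

def make_anaglyph(r_image, b_image):
    """r_image, b_image: grayscale float images of identical (H, W) shape."""
    out = np.zeros(r_image.shape + (3,), dtype=np.float64)
    out[..., 0] = r_image   # left-eye view -> red channel
    out[..., 2] = b_image   # right-eye view -> blue channel
    return out
```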
Next, modifications of the band limiting filter 12 are described with reference to FIGS. 12 to 15.
First, FIG. 12 shows a first modification of the band limiting filter 12.
In the band limiting filter 12 shown in FIG. 12, the pupil region of the imaging optical system 9 is configured such that a region through which imaging light flux subject to both the first band limitation and the second band limitation passes is sandwiched between the first region and the second region. That is, in the band limiting filter 12 shown in FIG. 3, when viewed from the image sensor 22 with the imaging apparatus in the standard posture, the left half was the RG filter 12r and the right half the GB filter 12b; in the band limiting filter 12 shown in FIG. 12, by contrast, a G filter 12g that passes only the G component is arranged between the left RG filter 12r and the right GB filter 12b.
In this configuration too, all of the G component contained in the light passing through the aperture of the diaphragm 11 of the imaging optical system 9 passes, as with the band limiting filter 12 of FIG. 3. However, because the RG filter 12r and the GB filter 12b are arranged apart from each other, the positional displacement between the R and B components of the image formed on the image sensor 22 of a subject at an out-of-focus position becomes clearer, which has the advantage of improving the distance computation accuracy of the distance calculation unit 39. Therefore, when highly accurate distance information is desired, the band limiting filter 12 configured as shown in FIG. 12 should be used.
FIG. 13 shows a second modification of the band limiting filter 12.
The band limiting filter 12 shown in FIG. 13 is configured such that a region of the pupil of the imaging optical system 9 through which imaging light flux not subject to band limitation passes is sandwiched between the first region and the second region. That is, the band limiting filter 12 has a W filter 12w, which passes the components of all RGB colors (that is, white light W), arranged between the left RG filter 12r and the right GB filter 12b.
In this configuration too, all of the G component contained in the light passing through the aperture of the diaphragm 11 of the imaging optical system 9 passes, as with the band limiting filter 12 of FIG. 3. The R component, by contrast, passes through the RG filter 12r and the W filter 12w while being blocked by the GB filter 12b. Similarly, the B component passes through the GB filter 12b and the W filter 12w while being blocked by the RG filter 12r.
With this configuration, not only the G component but also the R and B components are transmitted through the W filter 12w portion, so the amounts of R-component and B-component light transmitted through the band limiting filter 12 increase, with the advantage that a brighter captured image can be obtained (although the accuracy of the distance information computed by the distance calculation unit 39 may decrease). Therefore, when very high accuracy is not required of the distance information, using this configuration yields the necessary effect while also making it possible to obtain, for example, a bright, high-quality 3D image.
Further, FIG. 14 shows a third modification of the band limiting filter 12.
In the band limiting filter 12 shown in FIG. 14, the first region and the second region are arranged at differing vertical and horizontal positions with the imaging apparatus in the standard posture. That is, the band limiting filter 12 divides, for example, a circular filter into four by a cross shape, arranging the RG filter 12r at the lower left (the third quadrant of a graph) and the GB filter 12b at the upper right (the first quadrant), together with a first G filter 12g1 at the upper left (the second quadrant) and a second G filter 12g2 at the lower right (the fourth quadrant).
With the configurations of the band limiting filter 12 shown in FIGS. 3, 12, and 13 described above, acquiring distance information for a subject with vertical edges is easy, but acquiring it for a subject with horizontal edges is difficult. If, in contrast, the configuration of the band limiting filter 12 shown in FIG. 14 is adopted, distance information can easily be acquired both for subjects with vertical edges and for subjects with horizontal edges. By thus differentiating the horizontal and vertical positions of the RG filter 12r and the GB filter 12b, the subject-edge direction dependence (in particular, the horizontal/vertical direction dependence) of distance information acquisition can be improved.
Note that even when the band limiting filter 12 is configured as shown in FIGS. 12 to 14 and elsewhere, only the magnitude of the displacement between the R image and the B image and the shape of the blur PSF (Point Spread Function), described later, change; therefore, the colorization processing of the color image generation unit 37 that corrects the positional displacement between the R and B components, described below, can be applied in the same way to generate an image with the color displacement corrected.
Next, FIG. 15 shows a fourth modification of the band limiting filter 12.
Whereas the band limiting filters 12 shown in FIG. 3 and FIGS. 12 to 14 were ordinary color filters, the band limiting filter 12 shown in FIG. 15 is composed of a color selective transmission element 18.
Here, the color selective transmission element 18 is an element whose color distribution can be changed, realized by combining a plurality of members: a member whose polarization transmission axis can be rotated according to color (wavelength), and a member, such as an LCD, that can selectively control whether or not the polarization transmission axis is rotated. A specific example of the color selective transmission element 18 is ColorLink's color switch, as disclosed at SID 2000 in April 2000 as "SID '00 Digest, Vol. 31, p. 92".
The color selective transmission element 18 is configured, when viewed from the image sensor 22 with the imaging apparatus in the standard posture (horizontal position), with its left half as a first color selective transmission element 18L and its right half as a second color selective transmission element 18R. The first color selective transmission element 18L can selectively take an RG state that passes the G and R components and blocks the B component, and a W state that passes all RGB components (W). The second color selective transmission element 18R can selectively take a GB state that passes the G and B components and blocks the R component, and a W state that passes all RGB components (W). The first color selective transmission element 18L and the second color selective transmission element 18R are driven independently by a color selection drive unit (not shown); by putting the first color selective transmission element 18L in the RG state and the second color selective transmission element 18R in the GB state, the same function as the band limiting filter 12 shown in FIG. 3 can be achieved.
Furthermore, by changing the configuration, the color selective transmission element 18 can be made to perform the same functions as the band limiting filters 12 shown in FIGS. 12 to 14, and band limiting filters 12 of still other configurations can also be constructed.
Next, the colorization processing performed in the color image generation unit 37 to correct the displacement between the color images (color displacement) is described.
When a subject closer than the in-focus position is photographed using the band limiting filter 12 shown in FIG. 3, a blur as shown in FIG. 7 is formed, and when a subject farther than the in-focus position is photographed, a blur as shown in FIG. 10 is formed.
The colorization processing is processing for correcting a blur of the shape shown in FIG. 7 or FIG. 10 into a blur of the shape shown in FIG. 16. FIG. 16 is a diagram outlining the blur shape after colorization processing.
In the following, to simplify the explanation, the case where the subject is farther than the in-focus position (the case shown in FIG. 10) is taken as an example. When the subject is closer than the in-focus position, as a comparison of FIG. 7 with FIG. 10 shows, the shapes and positions of the R-image blur and the B-image blur are merely mirrored left-to-right, so the following processing can be applied similarly with appropriate modification.
FIG. 17 shows the shape of the R filter kernel applied, in the colorization processing, to the R image of a subject farther than the in-focus position, and FIG. 18 shows the shape of the B filter kernel applied, in the colorization processing, to the B image of a subject farther than the in-focus position.
This colorization processing performs the correction of the color displacement and the correction of the blur shape by filtering (convolving a filter kernel with the image). That is, by applying R filtering to the R image, the blur shape and centroid position (Ch, Cr) of the R image (this centroid position (Ch, Cr) is, in the example shown in FIG. 17, also the peak position of the filter coefficients of the R filter kernel) are brought close to the shape and centroid position (Ch, Cg) of the ideal circular blur (this centroid position (Ch, Cg) is, in the examples shown in FIGS. 17 and 18, also the coordinate center of the R and B filter kernels); and by applying B filtering to the B image, the blur shape and centroid position (Ch, Cb) of the B image (this centroid position (Ch, Cb) is, in the example shown in FIG. 18, also the peak position of the filter coefficients of the B filter kernel) are brought close to the shape and centroid position (Ch, Cg) of the ideal circular blur described above.
First, it is assumed that the centroid position of the ideal blur shape for the first and second image signals has been set by the color image generation unit 37 serving as setting means. Although the centroid position of the ideal blur shape set by the color image generation unit 37 can be any desired position, specific examples include setting it so that the geometric positional relationship between the centroid of the R-image blur shape and the centroid of the B-image blur shape is constant, and setting it as the centroid of the blur shape of the G image serving as the standard image. A more specific instance of the former, keeping the geometric relationship constant, is to set it as an internal division point at a predetermined ratio between the centroid of the R-image blur shape and the centroid of the B-image blur shape (preferably, for example, their midpoint).
Because the GB filter 12b and the RG filter 12r of the band limiting filter 12 shown in FIG. 3 each occupy half of the pupil region and are mirror-symmetric in shape, as illustrated, setting the ideal blur-shape centroid to the midpoint between the centroid of the R-image blur shape and the centroid of the B-image blur shape can be considered an appropriate choice.
The reason for setting the centroid of the ideal circular blur shape to the centroid of the blur shape of the G image, the standard image, is that the G image is an actually acquired image and exhibits a natural circular blur shape corresponding to the full pupil region of the band limiting filter 12. Further advantages are obtained by matching the blur shapes of the R and B images to the blur shape of the G image: the blurs of the R and B images take on a circular shape equivalent to the blur of the G image, and because the blur of the G image is larger than the blurs of the R and B images, matching the R-image and B-image blur shapes to the G-image blur shape makes the processing easier.
Next, the filter kernels are determined so that the centroid position (Ch, Cg) of the ideal circular blur shape set as described above becomes the filter center, and the centroid position (Ch, Cr) of the R-image blur shape or the centroid position (Ch, Cb) of the B-image blur shape becomes the peak position of the filter coefficients.
Bringing the centroid positions of the R image and the B image close together corrects the color misregistration, while making the blur shapes of the R image and the B image similar corrects the difference in blur shape between the colors so that the blur in the color image has a natural shape.
The R filter kernel shown in FIG. 17 is for convolution with the R image, and uses a Gaussian filter as an example of a blur filter. In this R filter kernel, the peak of the Gaussian filter coefficients (which roughly corresponds to the centroid position of the filter coefficients) is shifted from the kernel center position (Ch, Cg) — the intersection of the horizontal line Ch that bisects the kernel vertically and the vertical line Cg that bisects the kernel horizontally and passes through the centroid of the ideal circular blur shape — by the phase difference between the image obtained from the ideal circular blur shape and the R image, to the position (Ch, Cr) where the horizontal line Ch intersects the vertical line Cr passing through the centroid of the R-image blur shape.
The B filter kernel shown in FIG. 18 is for convolution with the B image and, like the R filter kernel, uses a Gaussian filter as an example of a blur filter. In this B filter kernel, the peak of the Gaussian filter coefficients (which roughly corresponds to the centroid position of the filter coefficients) is shifted from the kernel center position (Ch, Cg), where the horizontal line Ch and the vertical line Cg intersect, by the phase difference between the image obtained from the ideal circular blur shape and the B image, to the position (Ch, Cb) where the horizontal line Ch intersects the vertical line Cb passing through the centroid of the B-image blur shape.
By applying R and B filter kernels of these shapes to the R image and the B image respectively, the correction of color misregistration and the correction of blur shape can be performed simultaneously.
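As an illustration of this kernel construction, the following is a minimal sketch in Python (numpy and scipy assumed available). It shows a Gaussian kernel whose peak is offset horizontally from the kernel center, applied as a single, spatially uniform convolution; the kernel size, the phase-difference offsets, and the standard deviation sigma are hypothetical example values, not values from the embodiment.

```python
import numpy as np
from scipy.ndimage import convolve

def offset_gaussian_kernel(size, dx, sigma):
    """Gaussian blur kernel whose coefficient peak is shifted
    horizontally by dx pixels from the kernel center (Ch, Cg)."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    k = np.exp(-((x - (c + dx)) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()  # normalize so overall brightness is preserved

# Hypothetical values: a 15x15 kernel, peak offsets of +3 pixels for R
# and -3 pixels for B (the phase difference), sigma = 2.5.
kr = offset_gaussian_kernel(15, +3, 2.5)   # peak at (Ch, Cr)
kb = offset_gaussian_kernel(15, -3, 2.5)   # peak at (Ch, Cb)

r_img = np.random.rand(64, 64)  # placeholder R image plane
b_img = np.random.rand(64, 64)  # placeholder B image plane
r_corrected = convolve(r_img, kr)  # moves the R blur centroid toward (Ch, Cg)
b_corrected = convolve(b_img, kb)  # moves the B blur centroid toward (Ch, Cg)
```

In the embodiment the kernel parameters vary per pixel with the detected phase difference; this sketch fixes them globally only for brevity.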
Next, another example of colorization processing will be described with reference to FIGS. 19 and 20. FIG. 19 shows how the R image and the B image of a subject located farther than the in-focus position are shifted in this other example of colorization processing, and FIG. 20 shows the shape of the filter applied to the R image and the B image in this example.
In this example, the colorization processing corrects color misregistration by translation (shifting) and corrects blur shape by filtering.
That is, first, an R shift corresponding to the phase difference is applied to the R image shown in FIG. 10, bringing the centroid position of the R image close to the centroid position of the ideal circular blur shape as shown in FIG. 19, and a B shift corresponding to the phase difference is applied to the B image shown in FIG. 10, bringing the centroid position of the B image close to the centroid position of the ideal circular blur shape as shown in FIG. 19.
After that, the same filtering shown in FIG. 20 is applied to the R image and the B image, so that the blur shapes of the R image and the B image approximate the ideal circular blur shape (see FIG. 16).
The filter kernel shown in FIG. 20 is for convolution with the R image and the B image, and uses a Gaussian filter as an example of a blur filter. In this filter kernel, the peak of the Gaussian filter coefficients (corresponding to the centroid position of the filter coefficients) is located at the kernel center position (Ch, Cg), where the horizontal line Ch and the vertical line Cg intersect.
Note that the same filtering is applied to the R image and the B image here because a band limiting filter 12 whose first and second regions are left-right symmetric, as shown in FIG. 3, is assumed; if the first and second regions are asymmetric, different filtering may of course be applied to the R image and the B image.
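The shift-then-filter variant can be sketched as follows (again a non-authoritative illustration in Python; the half-phase shift amounts, taking the ideal centroid as the midpoint, and the sigma value are assumptions for the example, and scipy.ndimage.shift stands in for the translation step):

```python
import numpy as np
from scipy.ndimage import shift, convolve

def centered_gaussian_kernel(size, sigma):
    """Gaussian kernel whose peak lies at the kernel center (Ch, Cg)."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    k = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

phase = 6.0  # hypothetical phase difference between R and B, in pixels
r_img = np.random.rand(64, 64)
b_img = np.random.rand(64, 64)

# Step 1: translate R and B toward each other so their blur centroids
# land on the ideal centroid (here taken as the midpoint of the two).
r_shifted = shift(r_img, (0, -phase / 2))
b_shifted = shift(b_img, (0, +phase / 2))

# Step 2: apply one common centered blur filter to both images.
k = centered_gaussian_kernel(15, 2.5)
r_corrected = convolve(r_shifted, k)
b_corrected = convolve(b_shifted, k)
```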
In the description above, a Gaussian filter (a circular Gaussian filter) was given as an example of a blur filter, but the blur filter is of course not limited to this. For example, with the filter shape shown in FIG. 3, the blur arising in the R image and the B image is a vertically oriented semicircle (that is, a shape shorter in the horizontal direction than a circle), as shown in FIGS. 7 and 10. Accordingly, using an elliptical Gaussian filter whose major axis lies in the horizontal direction (more generally, in the direction in which the phase difference arises), as shown in FIGS. 21 and 22, makes it possible to correct the blur toward a circular shape with higher accuracy. FIG. 21 shows the shape of an elliptical Gaussian filter with an enlarged horizontal standard deviation, and FIG. 22 shows the shape of an elliptical Gaussian filter with a reduced vertical standard deviation. FIGS. 21 and 22 show examples corresponding to FIG. 20, in which the peak of the filter coefficients (corresponding to the centroid position of the filter coefficients) is located at the center of the filter kernel; when corresponding to FIG. 17 or FIG. 18, the peak of the filter coefficients (roughly corresponding to the centroid position of the filter coefficients) is of course offset from the center of the filter kernel.
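An anisotropic (elliptical) Gaussian kernel of this kind is a small variation of the circular one; the sketch below, with assumed example values sigma_x > sigma_y, is one way to realize the filters of FIGS. 21 and 22 in Python:

```python
import numpy as np

def elliptical_gaussian_kernel(size, sigma_x, sigma_y, dx=0):
    """Elliptical Gaussian blur kernel; the major axis is horizontal
    when sigma_x > sigma_y. dx offsets the peak from the kernel center,
    as in the kernels of FIGS. 17 and 18 (dx=0 matches FIG. 20)."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    k = np.exp(-((x - (c + dx)) ** 2 / (2.0 * sigma_x ** 2)
                 + (y - c) ** 2 / (2.0 * sigma_y ** 2)))
    return k / k.sum()

# Hypothetical values: horizontal spread twice the vertical spread.
k_ellipse = elliptical_gaussian_kernel(15, sigma_x=3.0, sigma_y=1.5)
```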
Furthermore, the blur filter is of course not limited to circular or elliptical Gaussian filters; any blur filter capable of approximating the blur shape of the R image or the B image to the ideal circular blur shape can be widely applied.
Next, FIG. 23 is a flowchart showing the colorization processing performed by the color image generation unit 37.
(Step S1)
When this processing starts, initialization is performed. In this initialization, first, the RGB images to be processed (that is, the R image, G image, and B image) are read. Next, an R copy image, which is a copy of the R image, and a B copy image, which is a copy of the B image, are created.
(Step S2)
Next, a pixel of interest for phase difference detection is set. Here, the pixel of interest is set in one of the R image and the B image; in this example, the R image.
(Step S3)
The phase difference for the pixel of interest set in step S2 is detected. For this detection, a partial region containing the pixel of interest at its center is set in the R image as the base image, and a partial region of the same size is set in the B image as the reference image (see FIG. 26 and elsewhere). The distance calculation unit 39 computes the correlation between the base image and the reference image while shifting the position of the reference image in the direction in which the phase difference arises, and the positional displacement between the reference image judged to have the highest correlation value and the base image (the sign of the displacement gives the direction of the displacement) is taken as the phase difference amount.
The partial region can be set to any size; however, to detect the phase difference stably, it is preferable to use a partial region of at least 30 pixels in each of the vertical and horizontal directions, for example a 51 × 51 pixel region.
The correlation computation in the distance calculation unit 39 is specifically performed by processing such as a ZNCC computation, or an SAD computation on images that have been filtered in advance.
First, the ZNCC correlation computation is performed according to Equation 1 below.
[Equation 1]

$$R_{ZNCC} = \frac{\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(I(i,j)-\bar{I}\bigr)\bigl(T(i,j)-\bar{T}\bigr)}{\sqrt{\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(I(i,j)-\bar{I}\bigr)^{2}\,\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl(T(i,j)-\bar{T}\bigr)^{2}}}$$

Here, I is the partial region of the R image, T is the partial region of the B image (a partial region of the same size as I), Ī is the mean value of I, T̄ is the mean value of T, M is the width of the partial region in pixels, and N is the height of the partial region in pixels. The ZNCC computation is performed according to Equation 1, and the absolute value |R_ZNCC| of the result is taken as the correlation value obtained from the correlation computation.
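As a concrete illustration, the ZNCC correlation of Equation 1 and the displacement search might be sketched as follows in Python (numpy assumed; the patch contents and search range are placeholders):

```python
import numpy as np

def zncc(patch_i, patch_t):
    """Zero-mean normalized cross-correlation of two same-size patches,
    per Equation 1; the caller takes abs() of the result as the
    correlation value."""
    di = patch_i - patch_i.mean()
    dt = patch_t - patch_t.mean()
    denom = np.sqrt((di ** 2).sum() * (dt ** 2).sum())
    if denom == 0:
        return 0.0  # flat patches carry no meaningful correlation
    return (di * dt).sum() / denom

# Sweep the reference patch along the phase-difference direction and
# keep the displacement with the highest |ZNCC| value.
base = np.random.rand(51, 51)        # patch from the R image (base image)
strip = np.random.rand(51, 51 + 20)  # strip from the B image
scores = [abs(zncc(base, strip[:, d:d + 51])) for d in range(21)]
best_shift = int(np.argmax(scores))  # displacement = phase difference amount
```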
When performing an SAD computation on images that have been filtered in advance, a filtering process such as a differential filter typified by a Sobel filter, or a band-pass filter such as a LoG filter, is first applied to the R image and the B image. After that, the correlation is computed by the SAD computation shown in Equation 2 below.
[Equation 2]

$$R_{SAD} = \sum_{j=0}^{N-1}\sum_{i=0}^{M-1}\bigl|I'(i,j)-T'(i,j)\bigr|$$

Here, I′ is the partial region of the R image after filtering, T′ is the partial region of the B image after filtering (a partial region of the same size as I′), M is the width of the partial region in pixels, and N is the height of the partial region in pixels. In this case, R_SAD is the correlation value obtained from the correlation computation.
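A matching sketch of the pre-filtered SAD computation follows, with scipy's Sobel operator standing in for the differential filter (an assumption; the embodiment names it only as one example). Note that for SAD, unlike ZNCC, the best match is the displacement with the smallest value:

```python
import numpy as np
from scipy.ndimage import sobel

def sad(patch_i, patch_t):
    """Sum of absolute differences of two same-size patches, per Equation 2."""
    return np.abs(patch_i - patch_t).sum()

r_img = np.random.rand(64, 96)  # placeholder image planes
b_img = np.random.rand(64, 96)
r_edges = sobel(r_img, axis=1)  # horizontal-gradient (edge) images
b_edges = sobel(b_img, axis=1)

base = r_edges[10:61, 10:61]
scores = [sad(base, b_edges[10:61, d:d + 51]) for d in range(30)]
best_shift = int(np.argmin(scores))  # minimum SAD = best match
```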
(Step S4)
It is determined whether phase difference detection has been completed for all pixels of interest in the image. The processing of steps S2 and S3 is repeated, shifting the position of the pixel of interest, until it is complete. Here, "all pixels of interest in the image" means all pixels in the image for which the phase difference can be detected. Pixels for which the phase difference cannot be detected may either be left undetected, or their phase difference amounts may be computed by interpolation or the like based on the detection results of surrounding pixels of interest.
(Step S5)
Next, a pixel of interest for the correction processing is set among the pixels for which a phase difference amount has been obtained. The pixel position of the pixel of interest is the same (that is, common) pixel position in the R image and the B image. In the following, the correction processing is described for the case of colorization using only filters, without translation (shifting), as explained with reference to FIGS. 17 and 18.
(Step S6)
According to the phase difference amount of the pixel of interest, the shape of the R-image filter used to filter the R image toward the ideal circular blur shape is obtained. The relationship between the phase difference amount and the R-image filter shape is held in advance in the image capture device, for example as a table such as Table 1 below. In the example shown in Table 1, the filter shape is determined by the size of the filter kernel, the offset of the Gaussian filter from the filter kernel center, and the standard deviation σ of the Gaussian filter (this standard deviation σ indicates the degree of spread of the blur filter). Accordingly, the shape of the R-image filter can be obtained by looking up the table using the phase difference amount of the pixel of interest.
[Table 1]
(Table image not reproduced: for each phase difference amount, the R-image filter parameters — filter kernel size, offset of the Gaussian filter from the kernel center, and standard deviation σ.)
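Such a table lookup could be sketched as follows; the parameter values below are invented placeholders, not the contents of Table 1, which is not reproduced here, and the nearest-neighbor lookup policy is likewise an assumption:

```python
# Hypothetical lookup table: phase difference (pixels) -> R-filter shape.
R_FILTER_TABLE = {
    0: {"kernel_size": 1,  "peak_offset": 0, "sigma": 0.0},
    2: {"kernel_size": 7,  "peak_offset": 1, "sigma": 1.0},
    4: {"kernel_size": 11, "peak_offset": 2, "sigma": 1.8},
    6: {"kernel_size": 15, "peak_offset": 3, "sigma": 2.5},
}

def r_filter_shape(phase_diff):
    """Return the filter-shape parameters for the nearest tabulated
    phase difference amount."""
    key = min(R_FILTER_TABLE, key=lambda p: abs(p - phase_diff))
    return R_FILTER_TABLE[key]
```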
(Step S7)
Next, filtering is applied to the neighborhood region consisting of the pixel of interest and its neighboring pixels in the R image, and the filter output value at the pixel of interest is obtained. The obtained filter output value is then copied to the position of the pixel of interest in the R copy image, updating the R copy image.
(Step S8)
According to the phase difference amount of the pixel of interest, the shape of the B-image filter used to filter the B image toward the ideal circular blur shape is obtained. The relationship between the phase difference amount and the B-image filter shape is held in advance in the image capture device, for example as a table such as Table 2 below. In the example shown in Table 2 as well, the filter shape is determined by the size of the filter kernel, the offset of the Gaussian filter from the filter kernel center, and the standard deviation σ of the Gaussian filter. Accordingly, the shape of the B-image filter can be obtained by looking up the table using the phase difference amount of the pixel of interest.
[Table 2]
(Table image not reproduced: for each phase difference amount, the B-image filter parameters — filter kernel size, offset of the Gaussian filter from the kernel center, and standard deviation σ.)
(Step S9)
Next, filtering is applied to the neighborhood region consisting of the pixel of interest and its neighboring pixels in the B image, and the filter output value at the pixel of interest is obtained. The obtained filter output value is then copied to the position of the pixel of interest in the B copy image, updating the B copy image.
(Step S10)
It is determined whether filtering has been completed for all pixels of interest in the image. The processing of steps S5 to S9 is repeated, shifting the position of the pixel of interest, until it is complete.
When it is determined that filtering has been completed for all pixels of interest, the R copy image and the B copy image are output as the corrected images for the R image and the B image, and the colorization processing ends.
When an elliptical Gaussian filter as shown in FIGS. 21 and 22 is used in steps S6 and S8 instead of a circular Gaussian filter, it is advisable to set the standard deviation of the elliptical Gaussian filter separately for the x and y directions; the parameter table for the filter shape held in the image capture device is then, for example, as shown in Table 3 below, apart from the offset from the filter kernel center.
[Table 3]
(Table image not reproduced: for each phase difference amount, the elliptical Gaussian filter parameters — filter kernel size and the standard deviations in the x and y directions.)
Although the details are omitted here, when an elliptical Gaussian filter is used, the offset from the filter kernel center may be, for example, the same as in Table 1 for step S6 and the same as in Table 2 for step S8.
Furthermore, although an example was described above, with reference to Tables 1 and 2, of obtaining the filter shape corresponding to the phase difference amount from a table, the invention is not limited to this. For example, the correspondence between each parameter determining the filter shape and the phase difference amount may be held as a mathematical expression or the like, and the filter shape may be determined by substituting the phase difference amount into the expression and computing the result.
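For instance, a linear model of each parameter is one conceivable form of such an expression; the sketch below is purely hypothetical, and the coefficients a, b, and c would have to be fitted to the actual optics:

```python
def r_filter_shape_from_formula(phase_diff, a=2.0, b=1.0, c=0.4):
    """Compute filter-shape parameters from the phase difference amount
    with a hypothetical linear model instead of a lookup table."""
    size = int(2 * round(a * abs(phase_diff)) + 1)  # odd kernel size
    peak_offset = b * phase_diff / 2.0              # signed peak shift
    sigma = max(c * abs(phase_diff), 1e-6)          # blur spread
    return {"kernel_size": size, "peak_offset": peak_offset, "sigma": sigma}
```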
The processing of detecting the phase difference from an image captured through the band limiting filter 12 and the colorization processing of correcting color misregistration do not necessarily have to be performed in the form of an image capture device; they may be performed in an image processing device 41 separate from the image capture device (for example, a computer that executes an image processing program).
FIG. 24 is a block diagram showing the configuration of the image processing device 41.
The image processing device 41 is obtained from the body unit 2 of the image capture device shown in FIG. 1 by removing the components related to the image capture mechanism — the lens mount (not shown), the shutter 21, the image sensor 22, the imaging circuit 23, and the imaging drive unit 24 — as well as the contrast AF control unit 38 (including the AF assist control unit 38a) related to AF control of the lens unit 1, the body-side communication connector 35 related to communication with the lens unit 1, the strobe control circuit 33 and strobe 34 related to subject illumination, and the sensor unit 31 for acquiring the state of the image capture device, and by further providing a recording unit 42 for recording information input from the interface (IF) 28. The recording unit 42 outputs information input and recorded via the interface 28 to the image processing unit 25. Information can also be recorded from the image processing unit 25 to the recording unit 42. The recording unit 42, together with the interface 28, is controlled by the system controller 30A (the system controller is designated 30A because the contrast AF control unit 38 has been removed). In addition, since a release button and the like for image capture are unnecessary, the operation unit is designated 32A.
A series of processing steps in the image processing device 41 is performed, for example, as follows. First, an image is captured using an image capture device provided with the band limiting filter 12 and recorded on the recording medium 29 as a RAW image as output from the imaging circuit 23. Information on the shape and other properties of the band limiting filter 12, lens data of the imaging optical system 9, and the like are also recorded on the recording medium 29.
Next, the recording medium 29 is connected to the interface 28 of the image processing device 41, and the image and the various pieces of information are recorded in the recording unit 42. Once recording is finished, the recording medium 29 may be removed from the interface 28.
After that, the image and the various pieces of information recorded in the recording unit 42 are read out, and, in the same way as in the image capture device described above, the phase difference is computed by the distance calculation unit 39 and the colorization processing that corrects color misregistration is performed by the color image generation unit 37.
The colorized image processed by the image processing device 41 is then recorded again in the recording unit 42. The colorized image recorded in the recording unit 42 can be displayed on the display unit or transmitted to an external device via the interface 28. The external device can therefore use the colorized image for various purposes.
According to such an embodiment 1, the centroid positions of the R image and the B image are brought close to the centroid position of the ideal circular blur shape, either by the blur filter itself or by translation (shifting) before applying the blur filter. Since the positional displacement is corrected so that the blur centroid positions of the R, B, and G images coincide, an image with reduced color misregistration that is more suitable for viewing can be obtained.
Moreover, since the blur filter also approximates the blur shapes of the R image and the B image to the ideal circular blur shape, an image with a natural blur shape that is more suitable for viewing can be obtained.
[Embodiment 2]
FIGS. 25 to 30 show embodiment 2 of the present invention; FIG. 25 is a diagram outlining the colorization processing performed by the color image generation unit 37.
In this embodiment 2, parts that are the same as in embodiment 1 described above are given the same reference numerals and their description is omitted as appropriate; mainly the differences are described.
In this embodiment, the colorization processing in the color image generation unit 37 differs from that of embodiment 1 described above. That is, whereas embodiment 1 corrected color misregistration using a blur filter, this embodiment does so by a copy-addition process on the images.
With reference to FIG. 25, the concept of the colorization processing in this embodiment is described for the case where the subject is farther than the in-focus position.
When the subject is farther than the in-focus position, as shown in FIG. 10, the blur of the R image has the shape of the blur of the G image — the ideal circular blur shape — with its left half missing, and the blur of the B image has the shape of the blur of the G image with its right half missing. With the color filter 12 used in this embodiment 2 (see FIG. 3), the G image yields an ideal circular blur shape, so for ease of explanation this embodiment is described treating the ideal circular blur shape as that of the G image; in general, however, the ideal circular blur shape is the circular blur shape obtained by filling in the missing half of the blur shapes of the R image and the B image.
Accordingly, as shown in FIG. 25, in order to approximate the blur of the R image to the ideal blur shape, the blur of the R image is copy-added into the missing portion of the R-image blur, and the blur of the B image is copy-added into the missing portion of the B-image blur.
It is desirable for the shape of the partial region to be copy-added to match the shape of the missing blur portion in the R and B images, but here, to simplify the processing, a rectangular (for example, square) region is used.
Within the circular blur diffusion region of the G image, which has the ideal circular blur shape, a blur diffusion partial region G1 and a blur diffusion partial region G2 are shown. The blur diffusion partial regions G1 and G2 are arranged with the same size at positions symmetric about the vertical line Cg passing through the centroid of the circular blur diffusion region of the G image. Considering their role in color misregistration correction, the size of the partial regions G1 and G2 is desirably about the radius of the circular blur diffusion region.
Meanwhile, the blur diffusion partial region R1 shown for the R image is a region of the same size and at the same position as the blur diffusion partial region G2. The blur diffusion region of the R image lacks a blur diffusion partial region R2 of the same size and at the same position as the blur diffusion partial region G1.
Similarly, the blur diffusion partial region B1 shown for the B image is a region of the same size and at the same position as the blur diffusion partial region G1. The blur diffusion region of the B image lacks a blur diffusion partial region B2 of the same size and at the same position as the blur diffusion partial region G2.
Therefore, the blur diffusion partial region R1 of the R image is moved by the amount that would make the blur diffusion partial region G2 completely overlap the blur diffusion partial region G1 (considered to be about the radius of the circular blur diffusion region) and copy-added (to the R copy image, a copy of the R image, as described later), thereby generating the blur diffusion partial region R2 of the R image. This blur diffusion partial region R2 corresponds to the blur diffusion partial region G1 of the circular blur diffusion region of the G image, which has the ideal circular blur shape (or to the blur diffusion partial region B1 of the B image).
Similarly, the blur diffusion partial region B1 of the B image is moved by the amount that would make the blur diffusion partial region G1 completely overlap the blur diffusion partial region G2 (the same amount as above) and copy-added (to the B copy image, a copy of the B image, as described later), thereby generating the blur diffusion partial region B2 of the B image. This blur diffusion partial region B2 corresponds to the blur diffusion partial region G2 of the circular blur diffusion region of the G image, which has the ideal circular blur shape (or to the blur diffusion partial region R1 of the R image).
By such processing, the blur diffusion partial regions lacking in the R image and the B image can each be supplemented, and as a result the blur diffusion regions of the G, R, and B images can be made to approximate one another. Consequently, the centroid of the blur diffusion region of the R image comes close to the centroid of the blur diffusion region of the G image, which has the ideal circular blur shape (here "close" includes coincidence; the same applies below), and likewise the centroid of the blur diffusion region of the B image comes close to that of the G image.
By performing such processing on the entire R image and B image, a color image with corrected color misregistration can be obtained.
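The core move — copying a sub-region of the blur and adding it back at a displaced position — can be illustrated as follows (a conceptual sketch only; the region size and the displacement, taken equal to the blur radius per the discussion above, are toy assumptions):

```python
import numpy as np

def copy_add_region(img, top, left, size, dx):
    """Add a size x size sub-region of img back into img, displaced
    horizontally by dx pixels, to fill in the missing half of the blur."""
    out = img.copy()
    src = img[top:top + size, left:left + size]
    out[top:top + size, left + dx:left + dx + size] += src
    return out

r_img = np.zeros((64, 64))
r_img[24:32, 32:40] = 1.0   # toy stand-in for the surviving half of an R blur
radius = 8                  # assumed displacement = blur radius, in pixels
r_filled = copy_add_region(r_img, top=24, left=32, size=8, dx=-radius)
```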
Next, FIG. 26 shows the partial regions set in the R image and the B image when detecting the phase difference; FIG. 27 shows how a blur diffusion partial region of the original R image is copy-added to the R copy image; FIG. 28 shows how a blur diffusion partial region of the original B image is copy-added to the B copy image; FIG. 29 is a graph showing an example of changing the size of the blur diffusion partial region according to the phase difference amount; and FIG. 30 is a flowchart showing the colorization processing performed by the color image generation unit 37. The description follows FIG. 30, with reference to FIGS. 26 to 29 as appropriate.
(Step S21)
When this processing starts, initialization is performed. In this initialization, first, the RGB images to be processed (that is, the R image, G image, and B image) are read. If the input image is a Bayer image, demosaicing is performed in advance in the image processing unit 25 (however, when high accuracy is not required for the phase difference to be obtained, or when speed is to be prioritized over accuracy, or when the processing load is to be reduced, the R, G, and B images obtained by simply separating the Bayer image into its colors — that is, color images without demosaicing — may be used).
Next, to correct the phase difference between the R image and the B image, the color differences Cr and Cb are used instead of the R image and the B image themselves. This is because, for typical subjects, pixel values vary more smoothly in color difference images than in RGB images, which has the advantage that the correction processing can be performed stably (however, the R image and the B image may be used as they are; in that case, Cr should be read as R and Cb as B in the following processing).
That is, for example according to Equation 3 below, the color difference between the R image and the G image is computed to generate the Cr image, a color difference image, and the color difference between the B image and the G image is computed to generate the Cb image, a color difference image.
[Equation 3]
Cr = R − G
Cb = B − G

As another computation method for obtaining the color difference signals Cr and Cb (and the luminance signal Y) from the RGB signals,

[Equation 4]
Y = 0.29900R + 0.58700G + 0.11400B
Cr = 0.50000R − 0.41869G − 0.08131B
Cb = −0.16874R − 0.33126G + 0.50000B

is widely known, so Equation 4 may be used instead of Equation 3.
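As a small sketch, the Equation 4 conversion applied to whole image planes might look as follows in Python (numpy assumed; the inputs are float arrays of identical shape):

```python
import numpy as np

def rgb_to_ycrcb(r, g, b):
    """Per-pixel conversion of Equation 4 applied to whole image planes."""
    y  =  0.29900 * r + 0.58700 * g + 0.11400 * b
    cr =  0.50000 * r - 0.41869 * g - 0.08131 * b
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    return y, cr, cb
```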
Next, a Cr copy image Cr1 (see FIG. 27), a copy of the original Cr image Cr0, and a Cb copy image Cb1 (see FIG. 28), a copy of the original Cb image Cb0, are created. In addition, a Cr count image and a Cb count image of the same size as the Cr copy image Cr1 and the Cb copy image Cb1 are generated (the pixel values of these count images are initialized to 1 for all pixels).
(Step S22)
Next, a partial region for phase difference detection is set. Here, the partial region is set in one of the R image and the B image; in this example, the R image.
(Step S23)
The phase difference for the partial region set in step S22 is detected. This detection is performed between the R image and the B image by taking the partial region set in the R image as the base image and a partial region of the same size in the B image as the reference image, and carrying out the same processing as in step S3 described above, as shown in FIG. 26.
(Step S24)
Based on the phase difference amount obtained in step S23, the radius of the circular blur of the G image corresponding to the ideal blur shape (which can also be called the radius of the semicircular blur of the R and B images) is obtained (the radius is given here as an example, but the diameter, or any other quantity capable of expressing the size of the ideal circular blur, may be used instead). The relationship between the phase difference amount and the blur radius of the G image, which has the ideal circular blur shape, is held in advance in the image capture device as a table, a mathematical expression, or the like. Accordingly, the blur radius can be obtained from the phase difference amount by looking up the table or computing the expression. Alternatively, step S24 may be omitted and a simplified method applied in which the phase difference amount obtained in step S23 is used in place of the blur radius; in that case, the relationship between the phase difference amount and the blur radius need not be held in the image capture device in advance.
(Step S25)
Next, a partial region is read from the original Cr image Cr0, displaced by a predetermined amount corresponding to the phase difference amount detected in step S23, and copy-added to the Cr copy image Cr1. The predetermined amount by which the partial region is displaced includes the direction of displacement, and its magnitude is, for example, the blur radius obtained in step S24. The Cr copy image Cr1 was created in step S21 above because the original Cr image Cr0 must be kept separate from the Cr copy image Cr1, whose pixel values change through copy-addition (the same applies to the Cb copy image Cb1). However, when the partial regions are processed in parallel rather than sequentially in, for example, raster scan order, it is not necessary to prepare copy images.
(Step S26)
Next, +1 is added to the region of the Cr count image at the position where the copy-addition was performed in step S25, so that the number of additions is known. This Cr count image is used in the later step S30 to normalize the pixel values.
(Step S27)
In addition, a partial region is read from the original Cb image Cb0 at the same position as the position copied to in the Cr copy image Cr1 in step S25, and copy-added to the Cb copy image Cb1 at the position from which the copy source data was read from the original Cr image Cr0 in step S25. As a result, the predetermined amount by which the Cb image is displaced has the same absolute value as the predetermined amount by which the Cr image is displaced, but the opposite direction.
(Step S28)
Then, +1 is added to the region of the Cb count image at the position from which the copy source data was read from the original Cr image Cr0 in step S25 (that is, the position where the copy-addition was performed in step S27), so that the number of additions is known. This Cb count image is also used in the later step S30 to normalize the pixel values.
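Steps S25 to S28, together with the normalization of step S30 below, might be sketched as follows; the region placement, displacement, and sizes are assumptions for illustration only:

```python
import numpy as np

def copy_add_with_count(src, dst, cnt, top, left, size, dx):
    """Copy a size x size region of src, displaced horizontally by dx,
    add it into dst, and record the addition in the count image cnt."""
    block = src[top:top + size, left:left + size]
    dst[top:top + size, left + dx:left + dx + size] += block
    cnt[top:top + size, left + dx:left + dx + size] += 1

h, w, size, dx = 64, 64, 8, 4    # dx: signed displacement (blur radius)
cr0 = np.random.rand(h, w)        # original Cr image Cr0
cb0 = np.random.rand(h, w)        # original Cb image Cb0
cr1, cb1 = cr0.copy(), cb0.copy()                    # copy images
cr_cnt = np.ones((h, w)); cb_cnt = np.ones((h, w))   # count images, init 1

top, left = 20, 20
copy_add_with_count(cr0, cr1, cr_cnt, top, left, size, +dx)       # S25/S26
copy_add_with_count(cb0, cb1, cb_cnt, top, left + dx, size, -dx)  # S27/S28

# Step S30: normalize by the number of additions at each pixel.
cr_norm = cr1 / cr_cnt
cb_norm = cb1 / cb_cnt
```

Note how the Cb region is read at the position that Cr was copied to and added back at the Cr source position, realizing the equal-magnitude, opposite-direction displacement of step S27.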
In steps S25 and S27 described above, the copy-addition is performed for each partial region of the image; this partial region may be the same as the partial region used for phase difference detection in step S23, or may be a partial region of a different size from that used for phase difference detection.
The size of the partial region used for copy-addition may be constant over the whole image (that is, a global size), or may differ for each partial region set within the image (that is, a local size).
For example, the size of the partial region used in steps S25 to S28 may be varied as shown in FIG. 29 according to the phase difference amount detected in step S23.
In the example shown in FIG. 29, when the phase difference amount is 0, the vertical and horizontal sizes of the partial region are both 1, so the partial region is a single pixel. In this case, since the phase difference amount is 0, the copy-addition described above is not performed either, so essentially no processing is performed. The processing may therefore branch depending on whether the phase difference amount is 0, performing no processing when it is 0.
In the example shown in FIG. 29, the size of the partial region increases in proportion to the phase difference amount. Since the slope of the line is set appropriately according to the configuration of the optical system, no specific scale is shown in FIG. 29.
Although FIG. 29 shows an example in which the relationship between the phase difference amount and the size of the partial region is proportional, the relationship is of course not limited to a proportional one; for example, the size of the partial region relative to the phase difference amount may be designed to be appropriate according to subjective image quality evaluation.
Furthermore, the amount of diffusion when light rays from a point light source become blurred (the point spread of the PSF (Point Spread Function)) is not necessarily uniform within the blurred region; for example, the diffusion may be smaller (that is, the luminance lower) at the periphery of the blur than at its center. Therefore, when performing the copy-addition of partial regions as described above, a weighting coefficient corresponding to the amount of blur diffusion may be applied. For example, each pixel in the periphery of the partial region may be multiplied by a weighting coefficient of 1/2 and each pixel in the center of the partial region by a weighting coefficient of 1 before copy-addition. In this case, the count images in steps S26 and S28 are likewise incremented by 1/2 at the periphery of the partial region and by 1 at its center.
(Step S29)
It is then determined whether the processing has been completed for all partial regions in the image. The processing of steps S22 to S28 is repeated, shifting the position of the partial region, until it is complete. The step by which the partial region is shifted can be set to any value, but is preferably smaller than the width of the partial region.
(Step S30)
When it is determined in step S29 that the processing has been completed for all partial regions, a normalized Cr copy image is obtained by dividing, at each identical pixel position, the pixel value of the Cr copy image by the pixel value of the Cr count image, and a normalized Cb copy image is obtained by dividing the pixel value of the Cb copy image by the pixel value of the Cb count image.
(Step S31)
Then, using the G image (or Y image) and the Cr copy image and Cb copy image normalized in step S30, the R, B, and G images are generated. When Equation 3 was used to compute the color difference images,

[Equation 5]
R = Cr + G
B = Cb + G

is used to generate the R image and the B image (the G image of step S21 is used as it is).
When Equation 4 was used to compute the color difference images,

[Equation 6]
R = Y + 1.40200Cr
G = Y − 0.71414Cr − 0.34414Cb
B = Y + 1.77200Cb

is used to generate the R, B, and G images.
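Correspondingly, a sketch of the Equation 6 inverse conversion, matching the rgb_to_ycrcb sketch given earlier:

```python
def ycrcb_to_rgb(y, cr, cb):
    """Per-pixel inverse conversion of Equation 6 applied to image planes."""
    r = y + 1.40200 * cr
    g = y - 0.71414 * cr - 0.34414 * cb
    b = y + 1.77200 * cb
    return r, g, b
```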
The RGB images computed in step S31 are the images obtained by the colorization processing in the color image generation unit 37.
According to such an embodiment 2, correcting color misregistration by the image copy-addition process achieves substantially the same effects as embodiment 1 described above.
In addition, copy-addition has the advantage that the processing load can be reduced compared with filtering. Accordingly, the cost and power consumption of the processing circuit can also be reduced.
[Embodiment 3]
FIGS. 31 to 34 show embodiment 3 of the present invention; FIG. 31 is a diagram outlining the table of PSFs (Point Spread Functions) of each color according to the phase difference amount, FIG. 32 is a diagram outlining the colorization processing performed by the color image generation unit 37, FIG. 33 is a diagram outlining the colorization processing with blur amount control performed by the color image generation unit 37, and FIG. 34 is a flowchart showing the colorization processing performed by the color image generation unit 37.
In this embodiment 3, parts that are the same as in embodiments 1 and 2 described above are given the same reference numerals and their description is omitted as appropriate; mainly the differences are described.
In this embodiment, the colorization processing in the color image generation unit 37 differs from that of embodiments 1 and 2 described above. That is, in this embodiment, the colorization processing combines a restoration process (inverse filtering) that restores a blurred image to an unblurred image with a filtering process that gives the restored unblurred image a circular blur shape corresponding to the subject distance.
When the band limiting filter 12 shown in FIG. 3 is used, the PSF (Point Spread Function) of each of the RGB colors changes with the phase difference amount as shown in FIG. 31. In FIG. 31, the dark portions that light from the point light source does not reach are hatched.
For example, PSFr, the PSF of the R image, shows a larger semicircular shape as the absolute value of the phase difference amount increases, and converges to a single point when the phase difference amount is 0, that is, at the in-focus position. Furthermore, it is a left semicircle when the subject is closer than the in-focus position, as shown in the upper half of FIG. 31, and a right semicircle when the subject is farther than the in-focus position, as shown in the lower half of FIG. 31.
Similarly, PSFb, the PSF of the B image, shows a larger semicircular shape as the absolute value of the phase difference amount increases, and converges to a single point when the phase difference amount is 0, that is, at the in-focus position. Furthermore, it is a right semicircle when the subject is closer than the in-focus position, as shown in the upper half of FIG. 31, and a left semicircle when the subject is farther than the in-focus position, as shown in the lower half of FIG. 31.
Furthermore, PSFg, the PSF of the G image, shows a larger circular shape as the absolute value of the phase difference amount increases, and converges to a single point when the phase difference amount is 0, that is, at the in-focus position. Except when it converges to a single point, PSFg is always a circle — the ideal blur shape — regardless of whether the subject is closer or farther than the in-focus position.
A table of the PSFs of each color according to the phase difference amount, as shown in FIG. 31, is assumed to be stored in advance in, for example, a nonvolatile memory (not shown) in the color image generation unit 37 (in the case of a lens-interchangeable image capture device as shown in FIG. 1, a table stored in the lens control unit 14 may of course be received by communication and used).
The restoration and filtering using such PSFs are described with reference to FIG. 32. FIG. 32 shows the case where the subject is farther than the in-focus position, but the same processing can be applied when the subject is closer than the in-focus position.
First, by applying the inverse operation PSFr⁻¹ of PSFr to the R image, the right-semicircular blur is converted into an unblurred image in which the point light source converges to a single point (one of the restored first image and the restored second image). Similarly, by applying the inverse operation PSFb⁻¹ of PSFb to the B image, the left-semicircular blur is converted into an unblurred image in which the point light source converges to a single point (the other of the restored first image and the restored second image). Through such processing, the restoration of the R image and the B image is performed.
Next, a PSF for generating the ideal circular blur shape is applied to the restored R image and the restored B image. Specifically, here the applied PSF is PSFg, the PSF of the G image, which corresponds to the ideal circular blur shape. A circular blur like that of the G image is thereby generated, filling in the missing semicircles of the R image and the B image.
 The concrete processing is performed as follows.
 First, let the blur PSF of the R image at a given pixel position be Pr1, the blur PSF of the B image at the same pixel position be Pb1, and the blur PSF of the G image at the same pixel position be Pg1. As shown in FIG. 31, the blur PSF depends on the phase difference between the R image and the B image, and since the phase difference generally differs from pixel to pixel, the PSF is determined for each pixel position. The PSF is assumed to be defined over a partial region comprising the pixel of interest at its center and a plurality of neighboring pixels (see FIG. 31).
 When the processing starts, the phase difference obtained for the pixel of interest is acquired, and by referring to a table such as that shown in FIG. 31, the following are obtained: Pr1, the PSF centered on the pixel of interest in the R image; Pb1, the PSF centered on the pixel of interest at the same position in the B image; and Pg1, the PSF centered on the pixel of interest at the same position in the G image.
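 A minimal sketch of this per-pixel lookup, reusing the kernel helpers above: the quantization step, the phase-difference range, the sign convention (positive taken here to mean farther than the in-focus position), and the radius-equals-phase-difference mapping are all assumptions for illustration, not values from the embodiment.

import numpy as np

STEP, MAX_D = 1.0, 8.0
psf_table = {}
for d in np.arange(-MAX_D, MAX_D + STEP, STEP):
    r = max(abs(d), 0.5)  # near zero phase difference the PSFs collapse to a point
    sides = ('right', 'left') if d >= 0 else ('left', 'right')
    psf_table[float(d)] = (half_psf(r, sides[0]),   # Pr1
                           half_psf(r, sides[1]),   # Pb1
                           circular_psf(r))         # Pg1

def lookup_psfs(phase_diff):
    """Quantize the per-pixel phase difference and fetch (Pr1, Pb1, Pg1)."""
    key = float(np.clip(round(phase_diff / STEP) * STEP, -MAX_D, MAX_D))
    return psf_table[key]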
 Next, as shown in Equation 7 below, the two-dimensional Fourier transform FFT2 is applied to the acquired Pr1, Pb1, and Pg1, to the partial region r centered on the pixel of interest in the R image, and to the partial region b centered on the pixel of interest in the B image, yielding the transformed values PR1, PB1, PG1, R, and B.
[Equation 7]
            PR1 = FFT2(Pr1)
            PB1 = FFT2(Pb1)
            PG1 = FFT2(Pg1)
              R = FFT2(r)
              B = FFT2(b)
 Next, as shown in Equation 8 below, restoration processing is performed by dividing R by PR1 and B by PB1, and filtering processing is then performed by multiplying each result by a value that generates the ideal blur shape. Specifically, the value that generates the ideal blur shape here is PG1 described above. The two-dimensional inverse Fourier transform IFFT2 is then applied to the results, yielding an R image r′ and a B image b′ with the same blur as the G image, which corresponds to the ideal blur shape.
[Equation 8]
          r′ = IFFT2(R × PG1 / PR1)
          b′ = IFFT2(B × PG1 / PB1)
 To make the restoration by the inverse filtering of Equation 8 more stable, the restoration amount may instead be controlled as in Equation 9 below (the Wiener filter method).
[Equation 9]
 r′ = IFFT2(R × PG1/PR1 × |PR1|²/(|PR1|² + Γ))
 b′ = IFFT2(B × PG1/PB1 × |PB1|²/(|PB1|² + Γ))
 Here, Γ in Equation 9 is an arbitrary constant set appropriately according to the shapes of PR1 and PB1.
 By adopting processing such as Equation 9, noise amplification during the restoration processing is suppressed, and more preferable R and B images can be generated.
 As a method of setting Γ, for example, the relationship between the absolute values |PR1| and |PB1| of the frequency coefficients and the preferred value of Γ for those absolute values may be held in advance in the image capture device, and Γ designated for each frequency coefficient based on this relationship. As another method, the amount of noise contained in the image may be estimated from various parameters of the image capture device (ISO value, focal length, aperture value, and so on), and Γ varied according to the estimated noise amount (for example, Γ = 0.01 for ISO 200 and Γ = 0.04 for ISO 800).
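 Gathering Equations 7 to 9 into one routine, the following is a minimal NumPy sketch of the restore-and-refilter step for one partial region. The function name, the zero-padding of the PSFs to the patch size, and the ISO-to-Γ mapping are illustrative assumptions rather than part of the embodiment, and PSF centering bookkeeping is omitted for brevity.

import numpy as np

# Illustrative ISO -> gamma values, following the example in the text.
GAMMA_BY_ISO = {200: 0.01, 800: 0.04}

def restore_and_refilter(patch, psf_src, psf_ideal, gamma=0.01):
    """Equations 7-9 for one partial region: divide out the source PSF
    (Wiener-regularized) and multiply in the ideal PSF, in frequency space.

    patch     -- partial region r or b centered on the pixel of interest
    psf_src   -- Pr1 or Pb1 (the half-disk blur of that region)
    psf_ideal -- Pg1 (the ideal circular blur)
    """
    shape = patch.shape
    # Equation 7: two-dimensional FFTs (PSFs zero-padded to the patch size;
    # the spatial shift this introduces is ignored in this sketch).
    P_src = np.fft.fft2(psf_src, s=shape)
    P_ideal = np.fft.fft2(psf_ideal, s=shape)
    X = np.fft.fft2(patch)
    # Equation 9: X * P_ideal/P_src * |P_src|^2 / (|P_src|^2 + gamma),
    # rewritten as X * P_ideal * conj(P_src) / (|P_src|^2 + gamma) so that
    # no division by (near-)zero frequency coefficients occurs.
    Y = X * P_ideal * np.conj(P_src) / (np.abs(P_src) ** 2 + gamma)
    # Equations 8/9: back to real space with the inverse FFT.
    return np.real(np.fft.ifft2(Y))

 With gamma set to 0, this reduces (wherever PR1 or PB1 is nonzero) to the pure inverse filtering of Equation 8.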
 The restoration and filtering processing described above is performed for each partial region of the image; when the correction processing is completed for one partial region, the next partial region is designated at a slightly shifted position and the same restoration and filtering processing is performed on it. By repeating this, the restoration and filtering processing covers the entire area of the image. Afterwards, for pixel positions that were processed more than once, a normalized corrected image is obtained by averaging: the sum of the corrected pixel values computed at that pixel position is divided by the number of corrections.
 The partial region on which the restoration and filtering processing is performed is preferably larger than the blur shape. It is therefore conceivable to adapt the size of the partial region to the size of the blur. Alternatively, if the range over which the blur shape varies with the phase difference is known in advance, a fixed partial region at least as large as the maximum blur size may be used.
 Next, with reference to FIG. 34, the flow of the colorization processing performed by the color image generation unit 37 using the restoration and filtering processing described above will be explained.
(Step S41)
 When this processing starts, initialization is performed. In this initialization, first, the RGB images to be processed (that is, the R image, the G image, and the B image) are read in. Next, an R copy image, which is a copy of the R image, and a B copy image, which is a copy of the B image, are created.
 Subsequently, an R count image and a B count image of the same size as the R copy image and the B copy image are generated (the pixel values of these count images are initialized to 1 for all pixels).
(Step S42)
 Next, a partial region for phase difference detection is set. Here, the partial region is set in one of the R image and the B image; in this example, the R image.
(Step S43)
 The phase difference for the partial region set in step S42 is detected. This detection takes the partial region set in the R image as the base image and a partial region of the same size in the B image as the reference image, and performs processing similar to step S3 described above, as shown in FIG. 26, thereby detecting the phase difference between the R image and the B image (a sketch of such a search is given below).
(Step S44)
 Based on the phase difference amount obtained in step S43, the radius of the circular blur of the G image corresponding to the ideal blur shape (equivalently, the radius of the semicircular blur of the R image and the B image) is acquired in the same manner as in step S24 described above.
(Step S45)
 Next, the restoration and filtering processing described above is performed on the partial region of the original R image designated in step S42. The result is copy-added to the R copy image at the same position as the partial region of the original R image. Here, an example is described in which the partial region for the colorization processing is the same as the partial region for the phase difference detection; of course, a different region may be set, or, as described above, a partial region whose size adapts to the detected phase difference may be used (the same applies to the B image described later).
(Step S46)
 Then, 1 is added to the partial region of the R count image designated in step S42 so that the number of additions is recorded. This R count image is used in the later step S50 to normalize the pixel values.
(Step S47)
 Likewise, the restoration and filtering processing described above is performed on the partial region of the original B image designated in step S42. The result is copy-added to the B copy image at the same position as the partial region of the original B image.
(Step S48)
 Then, 1 is added to the partial region of the B count image designated in step S42 so that the number of additions is recorded. This B count image is also used in the later step S50 to normalize the pixel values.
(Step S49)
 It is then determined whether or not all partial regions in the image have been processed. Until processing is complete, steps S42 to S48 are repeated while the position of the partial region is shifted. Any value may be set as the step by which the partial region is shifted, but it is preferably smaller than the width of the partial region.
(Step S50)
 When it is determined in step S49 that all partial regions have been processed, a normalized R copy image is obtained by dividing, at each pixel position, the pixel value of the R copy image by the pixel value of the R count image, and a normalized B copy image is obtained by dividing the pixel value of the B copy image by the pixel value of the B count image. The RGB images calculated in this step S50 are the images obtained by the colorization processing in the color image generation unit 37.
 When the processing of step S50 is finished, the processing shown in FIG. 34 ends.
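 Tying the earlier sketches together, the following is a compact rendering of the loop of FIG. 34 (steps S41 to S50). It reuses detect_phase_diff, lookup_psfs, and restore_and_refilter from the sketches above, and the patch size and stride are arbitrary assumptions.

import numpy as np

def colorize(r_img, b_img, patch=32, stride=16, gamma=0.01):
    """Steps S41-S50: scan overlapping partial regions, restore and refilter
    each, copy-add into the copy images, then normalize by the counts."""
    # Step S41: copy images to accumulate into; count images initialized to 1.
    r_copy, b_copy = r_img.astype(float), b_img.astype(float)
    r_cnt, b_cnt = np.ones(r_img.shape), np.ones(b_img.shape)
    H, W = r_img.shape
    for top in range(0, H - patch + 1, stride):          # Steps S42/S49
        for left in range(0, W - patch + 1, stride):
            r_part = r_img[top:top + patch, left:left + patch].astype(float)
            b_part = b_img[top:top + patch, left:left + patch].astype(float)
            # Step S43: phase difference between the R and B partial regions.
            d = detect_phase_diff(r_part, b_img, top, left)
            # Step S44: PSFs corresponding to this phase difference.
            Pr1, Pb1, Pg1 = lookup_psfs(d)
            # Steps S45/S47: restore and refilter, copy-add the results.
            r_copy[top:top + patch, left:left + patch] += \
                restore_and_refilter(r_part, Pr1, Pg1, gamma)
            b_copy[top:top + patch, left:left + patch] += \
                restore_and_refilter(b_part, Pb1, Pg1, gamma)
            # Steps S46/S48: record the number of additions per pixel.
            r_cnt[top:top + patch, left:left + patch] += 1
            b_cnt[top:top + patch, left:left + patch] += 1
    # Step S50: normalize each pixel by its count.
    return r_copy / r_cnt, b_copy / b_cnt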
 In the description above, the restoration and filtering processing is performed after transforming from real space to frequency space using the Fourier transform. However, the processing is not limited to this; restoration and filtering processing in real space (for example, MAP estimation processing) may be applied instead.
 Also, the description above matches the blur shapes of the R image and the B image to the blur shape of the G image; in addition to this, blur amount control may be performed.
 In this case, as shown in FIG. 33, first, the inverse operation PSFr⁻¹ of PSFr is applied to the R image and the inverse operation PSFb⁻¹ of PSFb is applied to the B image to obtain the restored first image and the restored second image, and furthermore the inverse operation PSFg⁻¹ of PSFg is applied to the G image, converting its circular blur into a blur-free image in which a point light source converges to a single point (a restored third image). The R image, the B image, and the G image are thereby restored.
 Next, a PSF for generating an ideal circular blur is applied to each of the restored R, B, and G images. Here, PSF′g, a desired PSF for the G image, is applied as the PSF for generating the ideal circular blur. This generates a circular blur of the desired size in the R image, the B image, and the G image, enabling blur amount control.
 Specifically, by referring to a table such as that shown in FIG. 31, a desired Pg1′ is additionally acquired as the PSF centered on the pixel of interest at the same position in the G image.
 Next, in addition to the processing of Equation 7 described above, the two-dimensional Fourier transform FFT2 is applied to the acquired Pg1′ and to the partial region g centered on the pixel of interest in the G image, as shown in Equation 10 below, yielding the transformed values PG1′ and G.
[Equation 10]
           PG1′ = FFT2(Pg1′)
              G = FFT2(g)
 Then, as shown in Equation 11 below, restoration processing is performed by dividing R by PR1, B by PB1, and G by PG1, filtering processing is performed by multiplying each result by PG1′, and the two-dimensional inverse Fourier transform IFFT2 is applied to the results, yielding an R image r″, a B image b″, and a G image g″ with the desired amount of blur.
[Equation 11]
         r″ = IFFT2(R × PG1′ / PR1)
         b″ = IFFT2(B × PG1′ / PB1)
         g″ = IFFT2(G × PG1′ / PG1)
 As with Equation 9 described above, a Wiener filter method as shown in Equation 12 below may be adopted in place of Equation 11.
[Equation 12]
 r″ = IFFT2(R × PG1′/PR1 × |PR1|²/(|PR1|² + Γ))
 b″ = IFFT2(B × PG1′/PB1 × |PB1|²/(|PB1|² + Γ))
 g″ = IFFT2(G × PG1′/PG1 × |PG1|²/(|PG1|² + Γ))
 Here, Γ is an arbitrary constant set appropriately according to the shapes of PR1, PB1, and PG1.
 According to this third embodiment, by performing color misregistration correction through restoration and filtering processing using PSFs, substantially the same effects as in the first and second embodiments described above can be obtained.
 Furthermore, if restoration processing is also performed on the G image, and filtering processing using an arbitrary PSF for the G image is then performed on the R image, the B image, and the G image, the amount of blur of the RGB color image can be controlled as desired.
 According to each embodiment of the present invention, because the positional deviation between the blur centroid positions of the R image and the B image is corrected based on the phase difference amount between the R image and the B image, the color misregistration between the R image and the B image can be eliminated. Moreover, by aligning the blur centroid positions of the R image and the B image with the blur centroid position of the G image, the color misregistration among the R image, the B image, and the G image can also be eliminated. In addition, because the blur shapes of the R image, the B image, and the G image are all matched (made identical), the color misregistration can be improved further. And because all the blur shapes of the R image, the B image, and the G image are processed so as to become the same ideal circular shape, a natural, preferable image suitable for viewing can be obtained.
 In all of the embodiments described above, the ideal blur shape need not be set based on an actually acquired image such as the G image; a virtually set ideal blur shape may be used instead.
 The present invention is not limited to the embodiments described above as they are; in the implementation stage, the constituent elements can be modified and embodied without departing from the gist of the invention. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments. For example, some constituent elements may be deleted from all the constituent elements shown in an embodiment. Furthermore, constituent elements spanning different embodiments may be combined as appropriate. It goes without saying that various modifications and applications are thus possible without departing from the spirit of the invention.
 This application is filed claiming priority based on Japanese Patent Application No. 2011-148592, filed in Japan on July 4, 2011, the disclosure of which is incorporated into the specification, claims, and drawings of the present application.

Claims (12)

  1.  An image capture device comprising:
     a color image sensor that receives and photoelectrically converts at least light in a first band and light in a second band to generate a first image signal and a second image signal;
     an imaging optical system that forms a subject image on the image sensor;
     a band-limiting filter disposed on the optical path of the imaging light flux from the subject side of the imaging optical system to the image sensor, the filter performing a first band limitation that blocks light in the first band and passes light in the second band for light attempting to pass through one part of the pupil region of the imaging optical system, and a second band limitation that blocks light in the second band and passes light in the first band for light attempting to pass through another part of the pupil region of the imaging optical system;
     a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal; and
     an image correction unit that corrects, based on the phase difference amount calculated by the calculation unit, a positional deviation between the blur centroid position of the first image signal and the blur centroid position of the second image signal.
  2.  The image capture device according to claim 1, further comprising setting means for setting a centroid position of an ideal blur shape for the first and second image signals,
     wherein the image correction unit corrects the positional deviation by bringing the blur centroid position of the first image signal and the blur centroid position of the second image signal close to the centroid position of the ideal blur shape.
  3.  The image capture device according to claim 2, wherein the setting means sets the centroid position of the ideal blur shape so that its geometric positional relationship to the blur centroid position of the first image signal and the blur centroid position of the second image signal is constant.
  4.  The image capture device according to claim 2, wherein
     the image sensor further receives light in a third band to generate a third image signal,
     the band-limiting filter passes light in the third band in both the first band limitation and the second band limitation, and
     the setting means sets the centroid position of the ideal blur shape to the blur centroid position of the third image signal.
  5.  The image capture device according to claim 2, wherein the image correction unit approximates the blur shapes by
     applying to the first image signal a first filter kernel that approximates the blur shape of the first image signal to the ideal blur shape, and
     applying to the second image signal a second filter kernel that approximates the blur shape of the second image signal to the ideal blur shape.
  6.  The image capture device according to claim 5, wherein the first filter kernel and the second filter kernel are configured as blur filters.
  7.  The image capture device according to claim 6, wherein the image correction unit corrects the positional deviation by
     making the first filter kernel a kernel in which the centroid position of the filter coefficients of the blur filter is shifted from the kernel center, by an amount corresponding to the phase difference amount, in the direction of the deviation of the blur centroid position of the first image signal from the centroid position of the ideal blur shape, and
     making the second filter kernel a kernel in which the centroid position of the filter coefficients is shifted from the kernel center, by an amount corresponding to the phase difference amount, in the direction of the deviation of the blur centroid position of the second image signal from the centroid position of the ideal blur shape.
  8.  The image capture device according to claim 6, wherein
     the first filter kernel and the second filter kernel are filters in which the centroid position of the filter coefficients of the blur filter is located at the kernel center, and
     the image correction unit corrects the positional deviation by
     shifting the first image signal so that its blur centroid position coincides with the centroid position of the ideal blur shape before applying the first filter kernel to the first image signal, and
     shifting the second image signal so that its blur centroid position coincides with the centroid position of the ideal blur shape before applying the second filter kernel to the second image signal.
  9.  The image capture device according to claim 2, wherein the image correction unit corrects the positional deviation by
     shifting the first image signal so that its blur centroid position approaches the centroid position of the ideal blur shape, and
     shifting the second image signal so that its blur centroid position approaches the centroid position of the ideal blur shape.
  10.  The image capture device according to claim 9, wherein
     the first band is one of a red (R) band and a blue (B) band, and the second band is the other of the red (R) band and the blue (B) band, and
     the image correction unit corrects the positional deviation by
     calculating a first color difference image signal in which the contribution ratio of one of the R component and the B component is high and a second color difference image signal in which the contribution ratio of the other of the R component and the B component is high, and
     shifting the first color difference image signal in place of the first image signal and shifting the second color difference image signal in place of the second image signal.
  11.  The image capture device according to claim 2, wherein the image correction unit corrects the positional deviation by
     creating a restored first image signal that is blur-free or has a reduced blur size by applying to the first image signal the inverse operation of a PSF (Point Spread Function) of the imaging optical system corresponding to the phase difference for the first image signal, and
     creating a restored second image signal that is blur-free or has a reduced blur size by applying to the second image signal the inverse operation of a PSF of the imaging optical system corresponding to the phase difference for the second image signal.
  12.  An image processing device for processing an image obtained by an image capture device having:
     a color image sensor that receives and photoelectrically converts at least light in a first band and light in a second band to generate a first image signal and a second image signal;
     an imaging optical system that forms a subject image on the image sensor; and
     a band-limiting filter disposed on the optical path of the imaging light flux from the subject side of the imaging optical system to the image sensor, the filter performing a first band limitation that blocks light in the first band and passes light in the second band for light attempting to pass through one part of the pupil region of the imaging optical system, and a second band limitation that blocks light in the second band and passes light in the first band for light attempting to pass through another part of the pupil region of the imaging optical system,
     the image processing device comprising:
     a calculation unit that calculates a phase difference amount based on the first image signal and the second image signal; and
     an image correction unit that corrects, based on the phase difference amount calculated by the calculation unit, a positional deviation between the blur centroid position of the first image signal and the blur centroid position of the second image signal.
PCT/JP2012/066187 2011-07-04 2012-06-25 Image capture device and image processing device WO2013005602A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011148592 2011-07-04
JP2011-148592 2011-07-04

Publications (1)

Publication Number Publication Date
WO2013005602A1 true WO2013005602A1 (en) 2013-01-10

Family

ID=47436957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/066187 WO2013005602A1 (en) 2011-07-04 2012-06-25 Image capture device and image processing device

Country Status (1)

Country Link
WO (1) WO2013005602A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010079298A (en) * 2008-09-12 2010-04-08 Sharp Corp Camera and imaging system
JP2010145693A (en) * 2008-12-18 2010-07-01 Sanyo Electric Co Ltd Image display device and imaging device
JP2011095026A (en) * 2009-10-28 2011-05-12 Kyocera Corp Object distance estimation apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BANDO ET AL.: "Extracting Depth and Matte using a Color-Filtered Aperture", ACM SIGGRAPH ASIA 2008 PAPERS, vol. 27, no. 5, December 2008 (2008-12-01), pages 134.1 - 134.9 *

Similar Documents

Publication Publication Date Title
WO2013027504A1 (en) Imaging device
US9247227B2 (en) Correction of the stereoscopic effect of multiple images for stereoscope view
JP5066851B2 (en) Imaging device
US8885026B2 (en) Imaging device and imaging method
JP6478457B2 (en) Focus adjustment device and focus adjustment method
JP6429546B2 (en) Imaging apparatus, control method, program, and storage medium
WO2013005489A1 (en) Image capture device and image processing device
JP6036829B2 (en) Image processing apparatus, imaging apparatus, and control program for image processing apparatus
KR20100093134A (en) Image processing for supporting a stereoscopic presentation
US10992854B2 (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium
US9911183B2 (en) Image processing method, image processing apparatus, image pickup apparatus, and non-transitory computer-readable storage medium
JP6951917B2 (en) Imaging device
JPWO2012108099A1 (en) Imaging apparatus and imaging method
WO2014192300A1 (en) Imaging element, imaging device, and image processing device
JP2016038414A (en) Focus detection device, control method thereof, and imaging apparatus
JP2014026051A (en) Image capturing device and image processing device
JP5348258B2 (en) Imaging device
JP2013003159A (en) Imaging apparatus
JP2016018012A (en) Imaging device and control method of the same
JP2015046019A (en) Image processing device, imaging device, imaging system, image processing method, program, and storage medium
JP2013097154A (en) Distance measurement device, imaging apparatus, and distance measurement method
JP2010026011A (en) Imaging apparatus
JP2014026050A (en) Image capturing device and image processing device
JP2013037294A (en) Image pickup apparatus
WO2013005602A1 (en) Image capture device and image processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12807207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12807207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP