WO2013005489A1 - Image capture device and image processing device

Info

Publication number
WO2013005489A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blur
phase difference
filter
color
Prior art date
Application number
PCT/JP2012/063057
Other languages
English (en)
Japanese (ja)
Inventor
高宏 矢野
一哉 山中
Original Assignee
Olympus Corporation (オリンパス株式会社)
Priority date
Filing date
Publication date
Application filed by Olympus Corporation (オリンパス株式会社)
Publication of WO2013005489A1

Classifications

    • A - HUMAN NECESSITIES
      • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
            • A61B 1/00002 - Operational features of endoscopes
              • A61B 1/00004 - characterised by electronic signal processing
                • A61B 1/00009 - of image signals during a use of endoscope
                  • A61B 1/000095 - for image enhancement
            • A61B 1/00163 - Optical arrangements
              • A61B 1/00186 - Optical arrangements with imaging filters
              • A61B 1/00193 - Optical arrangements adapted for stereoscopic vision
            • A61B 1/04 - combined with photographic or television appliances
              • A61B 1/045 - Control thereof
    • G - PHYSICS
      • G02 - OPTICS
        • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
          • G02B 5/00 - Optical elements other than lenses
            • G02B 5/20 - Filters
              • G02B 5/201 - Filters in the form of arrays
      • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
        • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; ACCESSORIES THEREFOR
          • G03B 11/00 - Filters or other obturators specially adapted for photographic purposes
          • G03B 35/00 - Stereoscopic photography
            • G03B 35/08 - Stereoscopic photography by simultaneous recording
              • G03B 35/12 - involving recording of different viewpoint images in different colours on a colour film
    • H - ELECTRICITY
      • H04 - ELECTRIC COMMUNICATION TECHNIQUE
        • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
            • H04N 13/20 - Image signal generators
              • H04N 13/204 - using stereoscopic image cameras
                • H04N 13/207 - using a single 2D image sensor
                  • H04N 13/214 - using spectral multiplexing
              • H04N 13/257 - Colour aspects
              • H04N 13/271 - wherein the generated image signals comprise depth maps or disparity maps
            • H04N 2013/0074 - Stereoscopic image analysis
              • H04N 2013/0077 - Colour aspects
          • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60 - Control of cameras or camera modules
              • H04N 23/67 - Focus control based on electronic image sensor signals
                • H04N 23/672 - based on the phase difference signals

Definitions

  • the present invention relates to an imaging apparatus and an image processing apparatus that can acquire distance information based on an image obtained from an imaging element.
  • the distance information is used for AF processing by an automatic focus adjustment (AF) mechanism, for creating a stereoscopic image, or for image processing (for example, subject extraction processing, background extraction processing, or processing for blur amount control), and can thus be used to realize various functions in the imaging apparatus.
  • conventionally, various distance measurement methods have been proposed: an active method, in which illumination light is irradiated and the light reflected from the subject is received; a triangulation method, in which a plurality of image pickup devices (for example, stereo cameras) are arranged with a baseline length between them; and a contrast AF method, in which the focus lens is driven so as to increase the contrast of the image acquired by the image pickup device itself.
  • the active distance measurement method requires a dedicated member for distance measurement such as a light projection device, and the triangulation method requires a plurality of image pickup devices, so the size and cost of the apparatus will increase.
  • since the contrast AF method uses an image acquired by the imaging apparatus itself, a dedicated distance measuring member or the like is not required.
  • however, the contrast value peak is found by imaging multiple times while changing the position of the focus lens, so it takes time to search for the peak corresponding to the in-focus position, and it is difficult to perform high-speed AF.
  • a method has therefore been proposed in which the light beam passing through the pupil of the lens is divided into a plurality of beams, and distance information to the subject is obtained by performing a correlation operation between the pixel signal obtained from the light beam that has passed through one pupil region of the lens and the pixel signal obtained from the light beam that has passed through another pupil region.
  • as a technique for simultaneously acquiring such pupil-divided images, there is, for example, a technique of disposing a light shielding plate (mask) on pixels used for distance detection.
  • Japanese Patent Laid-Open No. 2001-174696 describes a technique in which a pupil color division filter having different spectral characteristics for each partial pupil is provided in the imaging optical system, and the subject image from the photographic optical system is received by a color image sensor to perform pupil division by color.
  • the image signal output from the color image sensor is color-separated, and the relative shift amount between the same subject in each color image is detected, whereby two pieces of focusing information are acquired: the focusing shift direction, i.e., whether the image is shifted from the in-focus position to the short distance side or the long distance side, and the focusing shift amount, i.e., the amount of shift from the in-focus position in that direction.
  • there is also described a technique in which partial pupils having different spectral characteristics (blue and orange) as shown in FIG. 7 of that publication are provided, a plurality of images with parallax are acquired simultaneously from the different partial pupils, and a stereoscopic image is created.
  • a technique is also described in which partial pupils having different polarizations are provided, a plurality of images with parallax are acquired from different partial pupils, and a stereoscopic image is created.
  • Japanese Patent Application Laid-Open No. 11-344661 describes an AF diaphragm in which different color filters (a G filter and an M filter) are arranged in openings located at different eccentric positions with respect to the center of the AF lens. A cross-correlation calculating unit calculates the cross-correlation between the image data of each color, a distance direction calculation unit calculates the distance and direction to the in-focus position of the AF lens based on the cross-correlation, and the AF lens is driven to the in-focus position.
  • in such techniques, however, each color image obtained from the image pickup device is shifted, so the use of the obtained image is limited to AF, and the image is inappropriate for viewing as a color stereoscopic image.
  • the present invention has been made in view of the above circumstances, and an object thereof is to provide an imaging apparatus and an image processing apparatus capable of obtaining a stereoscopic image preferable for viewing from an image captured by light that has passed through different pupil regions of the imaging optical system depending on the wavelength band.
  • an imaging device according to one aspect includes: a color imaging element that receives and photoelectrically converts light in a plurality of wavelength bands to generate a first image in a first band, a second image in a second band, and a third image in a third band; an imaging optical system that forms a subject image on the imaging element; a band limiting filter that is disposed on the optical path of the photographic light flux extending from the imaging optical system to the imaging element and performs a first band limitation by blocking the light in the first band within the photographic light flux attempting to pass through a first region that is a part of the pupil region of the imaging optical system; a distance calculation unit that calculates a phase difference amount between the first image and the second image; and a stereo image generation unit that, based on the phase difference amount calculated by the distance calculation unit, generates one single-eye color image by moving the blur centroid position of the second image and the third image in the direction of the blur centroid position of the first image, and generates another single-eye color image in which the blur centroid position of the first image and the third image is moved in the direction of the blur centroid position of the second image, thereby generating a color stereoscopic image.
  • An image processing apparatus according to another aspect processes an image from a system including: a color imaging element that receives and photoelectrically converts light in a plurality of wavelength bands to generate a first image in a first band, a second image in a second band, and a third image in a third band; an imaging optical system that forms a subject image on the imaging element; and a band limiting filter that is disposed on the optical path of the photographic light flux extending from the imaging optical system to the imaging element and blocks the light in the first band within the photographic light flux attempting to pass through a first region that is a part of the pupil region of the imaging optical system while passing the second band and the third band. The apparatus includes a distance calculation unit that calculates a phase difference amount between the first image and the second image, and a stereo image generation unit that, based on the phase difference amount calculated by the distance calculation unit, generates one single-eye color image by moving the blur centroid position of the second image and the third image in the direction of the blur centroid position of the first image, and generates another single-eye color image in which the blur centroid position of the first image and the third image is moved in the direction of the blur centroid position of the second image, thereby generating a color stereoscopic image.
  • FIG. 1 is a block diagram illustrating a configuration of an imaging apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a diagram for explaining the pixel array of the image sensor in Embodiment 1, and FIG. 3 is a diagram for explaining one configuration example of the band limiting filter in Embodiment 1.
  • FIG. 4 is a plan view showing the state of subject light flux condensation when a subject at the in-focus position is imaged in Embodiment 1, and FIG. 5 is a plan view showing the state of subject light flux condensation when a subject closer than the in-focus position is imaged.
  • FIG. 6 is a diagram showing the blur shape formed by light from one point on a subject closer than the in-focus position, and FIG. 7 is a diagram showing that blur shape for each color component.
  • FIG. 8 is a plan view showing the state of subject light flux condensation when a subject farther than the in-focus position is imaged; FIG. 9 is a diagram showing the blur shape formed by light from one point on such a subject, and FIG. 10 is a diagram showing that blur shape for each color component.
  • FIG. 11 is a diagram showing the appearance of an image obtained when subjects at the in-focus position, at a near distance, and at a far distance are imaged.
  • FIG. 17 is a diagram showing the shape of the R filter applied to the R image of a subject farther than the in-focus position in the colorization process of Example 1 of Embodiment 1, and FIG. 18 is a diagram showing the shape of the B filter applied to the B image of such a subject.
  • FIG. 23 is a flowchart illustrating the colorization process performed by the color image generation unit in Example 1 of Embodiment 1.
  • A block diagram showing the configuration of an image processing apparatus in Example 1 of Embodiment 1.
  • A diagram illustrating an outline of the colorization processing performed by the color image generation unit in Example 2 of Embodiment 1, and a diagram showing the partial area set in that processing.
  • A diagram illustrating the state in which the blur diffusion partial area of the original R image is copied and added to the R copy image in Example 2 of Embodiment 1.
  • A diagram showing the state in which the blur diffusion partial area of the original B image is copied and added to the B copy image in Example 2 of Embodiment 1.
  • A diagram illustrating an example in which the size of the blur diffusion partial region is changed according to the phase difference amount in Example 2 of Embodiment 1.
  • A flowchart illustrating the colorization processing performed by the color image generation unit in Example 2 of Embodiment 1.
  • A diagram illustrating an outline of the PSF table for each color according to the phase difference amount in Example 3 of Embodiment 1.
  • A diagram illustrating an outline of the colorization processing performed by the color image generation unit in Example 3 of Embodiment 1.
  • A diagram illustrating an outline of the colorization processing with blur amount control performed by the color image generation unit in Example 3 of Embodiment 1.
  • A flowchart illustrating the colorization processing performed by the color image generation unit in Example 3 of Embodiment 1.
  • A flowchart showing the stereoscopic image generation processing by the stereo image generation unit in Embodiment 1.
  • A diagram showing the manner of shifting when generating the stereoscopic image in Embodiment 2 of the present invention.
  • FIG. 1 is a block diagram showing a configuration of an imaging apparatus.
  • the imaging apparatus of the present embodiment is configured as a digital still camera, for example.
  • a digital still camera is taken as an example, but the imaging device may be any device that has a color imaging device and has an imaging function.
  • the imaging apparatus includes a lens unit 1 and a body unit 2 that is a main body portion to which the lens unit 1 is detachably attached via a lens mount.
  • a case where the lens unit 1 is detachable will be described as an example, but of course, it may not be detachable.
  • the lens unit 1 includes an imaging optical system 9 including a lens 10 and a diaphragm 11, a band limiting filter 12, a lens control unit 14, and a lens side communication connector 15.
  • the body unit 2 includes a shutter 21, an imaging element 22, an imaging circuit 23, an imaging drive unit 24, an image processing unit 25, an image memory 26, a display unit 27, an interface (IF) 28, a system controller 30, a sensor unit 31, an operation unit 32, a strobe control circuit 33, a strobe 34, and a body side communication connector 35.
  • although the recording medium 29 is also depicted within the body unit 2 in FIG. 1, the recording medium 29 is configured as a memory card (SmartMedia, SD card, xD-Picture Card, etc.) that is detachable from the imaging device, so it need not be a configuration unique to the imaging apparatus.
  • the imaging optical system 9 in the lens unit 1 is for forming a subject image on the imaging element 22.
  • the lens 10 of the imaging optical system 9 includes a focus lens for performing focus adjustment.
  • although the lens 10 is generally composed of a plurality of lenses, only one lens is shown in FIG. 1 for simplicity.
  • the diaphragm 11 of the imaging optical system 9 is for adjusting the brightness of the subject image formed on the imaging element 22 by regulating the passage range of the subject luminous flux that passes through the lens 10.
  • the band limiting filter 12 is disposed on the optical path of the photographic light flux from the imaging optical system 9 to the imaging element 22 (preferably at or near the position of the diaphragm 11 of the imaging optical system 9). It is a filter that performs a first band limitation, blocking the light in the first band within the photographic light flux attempting to pass through a first region that is a part of the pupil region while passing light in the second band and the third band, and a second band limitation, blocking the light in the second band within the photographic light flux attempting to pass through a second region that is another part of the pupil region of the imaging optical system 9 while passing light in the first band and the third band.
  • FIG. 3 is a diagram for explaining a configuration example of the band limiting filter 12.
  • the pupil region of the imaging optical system 9 is divided into a first region and a second region.
  • when viewed from the imaging element 22 with the imaging apparatus in the standard posture (the so-called lateral position in which the camera is held normally), the left half of the band limiting filter 12 is an RG filter 12r that passes the G (green) and R (red) components and blocks the B (blue) component, and the right half is a GB filter 12b that passes the G and B components and blocks the R component.
  • therefore, the band limiting filter 12 passes all of the G component contained in the light passing through the aperture of the diaphragm 11 of the imaging optical system 9 (that is, the G component is the third band), passes the R component only through one half of the aperture, and passes the B component only through the remaining half of the aperture. It is desirable that the RGB spectral transmission characteristics of the band limiting filter 12 be the same as, or as close as possible to, the RGB spectral transmission characteristics of the element color filter of the image sensor 22 (see FIG. 2). Other configuration examples of the band limiting filter 12 will be described later with reference to FIGS. 12 to 15.
  • the lens control unit 14 controls the lens unit 1. That is, based on commands received from the system controller 30 via the lens-side communication connector 15 and the body-side communication connector 35, the lens control unit 14 drives the focus lens in the lens 10 to perform focusing, or drives the diaphragm 11 to change the aperture diameter.
  • the lens-side communication connector 15 is a connector that, when the lens unit 1 and the body unit 2 are coupled by the lens mount, connects to the body-side communication connector 35 and enables communication between the lens control unit 14 and the system controller 30.
  • the shutter 21 in the body unit 2 is an optical shutter for adjusting the exposure time of the image sensor 22 by regulating the passage time of the subject light beam reaching the image sensor 22 from the lens 10.
  • although an optical shutter is used here, an element shutter (electronic shutter) of the image sensor 22 may be used instead of, or in addition to, the optical shutter.
  • the imaging element 22 is a color image sensor (for example, a CCD or CMOS sensor) that receives and photoelectrically converts the subject image formed by the imaging optical system 9 for each of a plurality of wavelength bands (for example, RGB, although not limited thereto) and outputs the converted signal.
  • the color image sensor may be a single-plate sensor provided with an on-chip element color filter, a three-plate system using a dichroic prism that separates the light into RGB color components, or a sensor that acquires RGB imaging information at the same pixel position according to the depth position within the semiconductor; any imaging element that can acquire imaging information in a plurality of wavelength bands may be used.
  • FIG. 2 is a diagram for explaining the pixel arrangement of the image sensor 22.
  • here, the image sensor is configured so that the plurality of wavelength bands transmitted through the on-chip element color filter are R, G, and B. Therefore, when the image pickup device 22 has the configuration shown in FIG. 2, only one color component is obtained per pixel, so the image processing unit 25 performs a demosaicing process to generate a color image in which all three RGB colors are available at each pixel.
  • the image pickup circuit 23 amplifies (gain-adjusts) the image signal output from the image pickup device 22, performs A/D conversion when the image pickup device 22 is an analog device that outputs an analog image signal, and outputs a digital image signal (hereinafter also referred to as "image information"); when the image pickup device 22 is a digital device, the signal is already digital when input to the image pickup circuit 23, so A/D conversion is not performed.
  • the imaging circuit 23 outputs an image signal to the image processing unit 25 in a format corresponding to the imaging mode switched by the imaging driving unit 24 as will be described later.
  • based on commands from the system controller 30, the imaging drive unit 24 supplies timing signals and power to the imaging element 22 and the imaging circuit 23, causes the imaging element 22 to perform exposure, readout, element shuttering, and the like, and controls the imaging circuit 23 to execute gain adjustment and A/D conversion in synchronization with the operation of the imaging element 22. The imaging drive unit 24 also performs control to switch the imaging mode of the image sensor 22.
  • the image processing unit 25 performs digital image processing such as WB (white balance) adjustment, black level correction, γ correction, defective pixel correction, demosaicing, color information conversion of image information, and pixel number conversion of image information.
  • the image processing unit 25 further includes an inter-color correction unit 36, a color image generation unit 37 serving as an image correction unit, and a stereo image generation unit 40.
  • the inter-color correction unit 36 corrects the difference in brightness between bands (between colors) that arises because the R and B components pass through only part of the aperture while the G component passes through the whole aperture.
  • since the brightness difference between bands (between colors) is determined by the area of the pass region for each band, it can easily be corrected according to that area; however, taking into account the tendency of the amount of light to decrease in the periphery relative to the center of the image, more detailed correction according to the optical characteristics of the imaging optical system 9 may be performed, as in the sketch below.
  • the correction value is not limited to being calculated in the imaging apparatus; the correction value may be held in advance as table data or the like.
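  • As a rough illustration (hypothetical names, not the patent's exact procedure), a per-band gain correction of this kind might look like the following sketch. It assumes the simple area ratio of the pass regions, i.e. a factor of 2 for R and B when each passes through only half of the aperture, with optional per-pixel gain maps standing in for the finer optical-characteristic (vignetting) correction mentioned above:

```python
import numpy as np

def inter_color_correct(r, g, b, falloff_r=None, falloff_b=None):
    """Equalize band brightness (sketch, assumed area-ratio model).

    r, g, b: float arrays of one demosaiced frame.
    The R and B pupils each cover half the aperture area, so a first-order
    correction simply doubles them relative to G. Optional per-pixel gain
    maps (falloff_r, falloff_b) model position-dependent light falloff.
    """
    area_gain = 2.0  # full aperture area / half aperture area
    r_corr = r * area_gain
    b_corr = b * area_gain
    if falloff_r is not None:
        r_corr *= falloff_r  # table data measured for the optical system
    if falloff_b is not None:
        b_corr *= falloff_b
    return r_corr, g, b_corr
```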
  • the color image generation unit 37 performs a colorization process that is a digital process for forming color image information.
  • because the R component and the B component pass through different pupil regions, a spatial positional deviation may occur between the R image and the B image; correcting this spatial positional deviation is one example of the colorization process performed by the color image generation unit 37.
  • FIG. 4 is a plan view showing the state of subject light flux condensation when imaging a subject at the in-focus position, and FIG. 5 is a plan view showing the state of subject light flux condensation when imaging a subject closer than the in-focus position.
  • FIG. 6 is a diagram showing the shape of the blur formed by light from one point on a subject closer than the in-focus position, and FIG. 7 is a diagram showing that blur shape for each color component.
  • FIG. 8 is a plan view showing the state of subject light flux condensation when imaging a subject farther than the in-focus position; FIG. 9 is a diagram showing the shape of the blur formed by light from one point on such a subject, and FIG. 10 is a diagram showing that blur shape for each color component.
  • FIG. 11 is a diagram showing the appearance of an image obtained by imaging subjects at the in-focus position, at a near distance, and at a far distance.
  • here, a case where the aperture of the diaphragm 11 has a circular shape will be described as an example.
  • when the subject OBJc is at the in-focus position, the light emitted from one point on the subject OBJc, as shown in FIG. 4, has its G component passing through the entire band limiting filter 12, its R component passing only through the RG filter 12r half, and its B component passing only through the other GB filter 12b half; all are condensed at one point on the image sensor 22 to form a point image IMGrgb. Accordingly, a point image IMGrgb with no color blur is formed, as shown in FIG. 11.
  • on the other hand, when the subject OBJn is, for example, closer than the in-focus position, the light emitted from one point on the subject OBJn forms, as shown in FIGS. 5 to 7, a subject image IMGg with a circular blur for the G component, a subject image IMGr with a semicircular blur of the left half for the R component, and a subject image IMGb with a semicircular blur of the right half for the B component. Therefore, when the subject OBJn closer than the in-focus position is imaged, a blurred image is obtained in which, as shown in FIG. 11, the R component subject image IMGr is shifted to the left and the B component subject image IMGb is shifted to the right.
  • the left and right positions of the R and B components in this blurred image are the same as the left and right positions of the R component transmission region (RG filter 12r) and the B component transmission region (GB filter 12b) of the band limiting filter 12 when viewed from the image sensor 22 (in FIG. 11, the deviation is exaggerated and the actually generated blur shape is not illustrated; the same applies to the far-side case below).
  • here, the G component subject image IMGg is a blurred image extending over both the R component subject image IMGr and the B component subject image IMGb, as shown in FIGS. 6 and 7. Since the G component subject image IMGg carries the same information as the blurred image obtained without the band limiting filter 12, it can be used as the standard image in the colorization processing performed by the color image generation unit 37 described above.
  • when the subject OBJf is, for example, farther than the in-focus position, the light emitted from one point on the subject OBJf forms, as shown in FIGS. 8 to 10, a subject image IMGg with a circular blur for the G component, a subject image IMGr with a semicircular blur of the right half for the R component, and a subject image IMGb with a semicircular blur of the left half for the B component. Therefore, when the subject OBJf farther than the in-focus position is imaged, a blurred image is obtained in which, as shown in FIG. 11, the R component subject image IMGr is shifted to the right and the B component subject image IMGb is shifted to the left.
  • the left and right positions of the R and B components in this blurred image are opposite to the left and right positions of the R component transmission region (RG filter 12r) and the B component transmission region (GB filter 12b) of the band limiting filter 12 when viewed from the image sensor 22.
  • as the distance from the in-focus position increases, the blur increases, and the distance between the centroid of the R component subject image IMGr and the centroid of the B component subject image IMGb also increases.
  • on the far side as well, the G component subject image IMGg is a blurred image straddling the R component subject image IMGr and the B component subject image IMGb (see FIGS. 9 and 10), and since it carries the same information as the blurred image obtained without the band limiting filter 12, it can be used as the standard image in the colorization processing performed by the color image generation unit 37 described above.
  • as the colorization process, the color image generation unit 37 takes the G component image, which has the largest number of pixels in the Bayer array illustrated in FIG. 2 and the largest contribution to the luminance signal, as the standard image, and corrects the color shift of each of the R component image and the B component image, as will be described in detail later. By generating an image based on the G color, for which information on the entire aperture of the diaphragm 11 can be acquired, a high-quality image that can withstand viewing is obtained.
  • based on a single frame of image data acquired from the image sensor 22 and on the phase difference amount calculated by the distance calculation unit 39 described later, the stereo image generation unit 40 generates one single-eye color image (one of the left-eye image and the right-eye image) by moving the blur centroid position of the second image and the third image in the direction of the blur centroid position of the first image, and generates the other single-eye color image by moving the blur centroid position of the first image and the third image in the direction of the blur centroid position of the second image, thereby generating a color stereoscopic image (see the sketch below).
  • the processing of the stereo image generation unit 40 will be described in detail later.
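  • To make the centroid-moving idea concrete, here is a minimal per-pixel sketch (hypothetical helper names). It assumes a dense phase-difference map and approximates "moving the blur centroid" by locally translating each channel by the per-pixel disparity, a simplification of the filter-based processing described later:

```python
import numpy as np

def shift_rows(img, disparity):
    """Shift each pixel horizontally by its signed per-pixel disparity.

    Nearest-neighbor warp; stands in for moving the blur centroid.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    xs = np.arange(w)
    for y in range(h):
        src = np.clip(xs - np.rint(disparity[y]).astype(int), 0, w - 1)
        out[y] = img[y, src]
    return out

def make_stereo_pair(r, g, b, phase):
    """phase: per-pixel R-to-B phase difference map (pixels), signed.

    Left eye:  move the B and G centroids toward the R centroid.
    Right eye: move the R and G centroids toward the B centroid.
    G sits midway between the two pupils, so it moves by half the phase.
    """
    left = np.dstack([r, shift_rows(g, -phase / 2), shift_rows(b, -phase)])
    right = np.dstack([shift_rows(r, phase), shift_rows(g, phase / 2), b])
    return left, right
```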
  • the image memory 26 is a memory capable of high-speed writing and reading.
  • the image memory 26 is composed of, for example, SDRAM (Synchronous Dynamic Random Access Memory), and is used as a work area for image processing. For example, the image memory 26 not only stores the final image processed by the image processing unit 25 but also appropriately stores the intermediate images of the plural processing steps performed by the image processing unit 25.
  • the display unit 27 includes an LCD or the like, and displays images processed for display by the image processing unit 25 (including images read from the recording medium 29 and processed for display by the image processing unit 25). Specifically, the display unit 27 performs live view display, confirmation display when recording a still image, playback display of still images and moving images read from the recording medium 29, and the like. In the present embodiment, the display unit 27 is configured to be capable of displaying stereoscopic images.
  • the interface (IF) 28 is detachably connected to the recording medium 29, and transmits information to be recorded on the recording medium 29 and information read from the recording medium 29.
  • the recording medium 29 is for recording an image processed for recording by the image processing unit 25 and various data related to the image, and is configured as a memory card or the like as described above.
  • the sensor unit 31 includes, for example, a camera shake sensor (an acceleration sensor or the like) for detecting shake of the imaging device, a temperature sensor for measuring the temperature of the imaging element 22, and a brightness sensor for measuring the brightness around the imaging device.
  • the detection result by the sensor unit 31 is input to the system controller 30.
  • the detection result by the camera shake sensor is used to drive the image pickup device 22 and the lens 10 to perform camera shake correction or to perform camera shake correction by image processing.
  • the detection result by the temperature sensor is used to control the drive clock by the imaging drive unit 24 and to estimate the amount of noise in the image obtained from the image sensor 22.
  • the detection result by the brightness sensor is used, for example, to appropriately control the luminance of the display unit 27 according to the ambient brightness.
  • the operation unit 32 includes a power switch for turning the imaging device on and off, a release button with a two-stage press for instructing capture of a still image or moving image, mode buttons for changing the imaging mode and the like, and cross keys used to change selected items, numerical values, and the like.
  • a signal generated by the operation of the operation unit 32 is input to the system controller 30.
  • the strobe control circuit 33 controls the light emission amount and the light emission timing of the strobe 34 based on a command from the system controller 30.
  • the strobe 34 is a light source that emits illumination light to the subject under the control of the strobe control circuit 33.
  • the body side communication connector 35 is connected between the lens control unit 14 and the system controller 30 when the lens unit 1 and the body unit 2 are coupled by the lens mount and connected to the lens side communication connector 15. It is a connector that enables communication.
  • the system controller 30 controls the body unit 2 and also controls the lens unit 1 via the lens control unit 14, and is a control unit that integrally controls the imaging apparatus.
  • the system controller 30 reads a basic control program of the image pickup apparatus from a non-illustrated non-volatile memory such as a flash memory, and controls the entire image pickup apparatus in accordance with an input from the operation unit 32.
  • the system controller 30 controls the aperture adjustment of the diaphragm 11 via the lens control unit 14, drives and controls the shutter 21, and controls a shake correction mechanism (not shown) to perform camera shake correction based on the detection result of the acceleration sensor of the sensor unit 31.
  • in response to input from the mode buttons of the operation unit 32, the system controller 30 sets the mode of the imaging device (a still image mode for capturing still images, a moving image mode for capturing moving images, a 3D mode for capturing stereoscopic images, and the like).
  • the system controller 30 includes a contrast AF control unit 38 and a distance calculation unit 39, and performs AF either by causing the contrast AF control unit 38 to perform AF control or by controlling the lens unit 1 based on the distance information calculated by the distance calculation unit 39.
  • the contrast AF control unit 38 generates a contrast value (also referred to as an AF evaluation value) from an image signal output from the image processing unit 25 (this image signal may be the G image, which contains a high proportion of the luminance component, or a luminance-signal image derived from an image whose color misregistration has been corrected by the colorization process described later), and controls the focus lens in the lens 10 via the lens control unit 14. That is, the contrast AF control unit 38 applies a filter, for example a high-pass filter, to the image signal to extract high-frequency components and obtain the contrast value. The contrast AF control unit 38 then acquires contrast values while changing the focus lens position, moves the focus lens in the direction in which the contrast value increases, and acquires the contrast value again. By repeating this processing, the focus lens is driven to the focus lens position (in-focus position) at which the maximum contrast value is obtained.
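  • A minimal hill-climbing sketch of this loop might look as follows (the camera interface capture_g/move_focus is hypothetical, and the Laplacian stands in for the high-pass filter mentioned above):

```python
import numpy as np
from scipy.ndimage import laplace

def contrast_value(g_image):
    """AF evaluation value: energy of the high-pass (Laplacian) response."""
    return float(np.sum(laplace(g_image.astype(float)) ** 2))

def contrast_af(capture_g, move_focus, step=10, max_iters=100):
    """Hill-climb the focus position until the contrast value peaks.

    capture_g():      grabs one G (luminance-like) frame at the current focus.
    move_focus(delta): moves the focus lens by delta steps (hypothetical API).
    """
    best = contrast_value(capture_g())
    direction = +1
    for _ in range(max_iters):
        move_focus(direction * step)
        val = contrast_value(capture_g())
        if val < best:                 # passed the peak: reverse, shrink step
            direction, step = -direction, step // 2
            if step == 0:
                break
        best = max(best, val)
    return best
```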
  • the distance calculation unit 39 calculates the phase difference amount between the first image in the first band and the second image in the second band obtained from the image sensor 22, and calculates the distance to the subject based on the calculated phase difference amount. Specifically, the distance calculation unit 39 calculates distance information according to the lens formula from the amount of deviation between the R component and the B component as shown in FIGS. 7, 10, and 11.
  • first, the distance calculation unit 39 extracts the R component and the B component from the RGB components obtained as the captured image. Then, by calculating the correlation between the R component and the B component, it calculates the direction and magnitude of the deviation occurring between the R component image and the B component image (the present invention is not limited to this; it is also possible to calculate the direction and magnitude of the deviation occurring between the R image and the G image, or between the B image and the G image).
  • the direction of deviation is information for determining whether the subject of interest is closer or farther than the in-focus position.
  • the magnitude of the deviation is information for determining how far the subject of interest is away from the in-focus position.
  • based on the calculated direction and magnitude of the deviation, the distance calculation unit 39 calculates how far the subject of interest is, on the near side or the far side, from the in-focus position.
  • the distance calculation unit 39 performs the distance calculation described above for the distance calculation area determined by the system controller 30 based on input from the operation unit 32 or on the basic control program of the imaging apparatus (for example, the entire captured image, or a partial region of the captured image for which distance information is desired).
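  • As a rough numerical illustration of turning a phase difference into a distance, the sketch below applies the thin lens formula; the conversion factor from phase difference to image-side defocus is a hypothetical calibration constant (it would depend on the pupil baseline and pixel pitch, which this excerpt does not specify):

```python
def subject_distance(phase_px, f_mm, v0_mm, k_mm_per_px):
    """Estimate subject distance from an R-B phase difference (sketch).

    phase_px:    signed phase difference between R and B images (pixels).
    f_mm:        focal length of the imaging optical system.
    v0_mm:       current image distance (lens to sensor) for this focus setting.
    k_mm_per_px: hypothetical calibration constant converting phase difference
                 to image-side defocus (depends on pupil baseline, pixel pitch).
    """
    v = v0_mm + k_mm_per_px * phase_px       # defocused image distance
    inv_u = 1.0 / f_mm - 1.0 / v             # thin lens formula: 1/f = 1/u + 1/v
    return float('inf') if inv_u <= 0 else 1.0 / inv_u

# Example: f = 50 mm, focused image distance 52 mm, 0.02 mm defocus per pixel:
# subject_distance(+5, 50.0, 52.0, 0.02) -> roughly 1.24 m (vs 1.30 m in focus)
```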
  • a technique described in Japanese Patent Application Laid-Open No. 2001-16611 can be used as such a technique for obtaining the deviation amount and calculating the distance information.
  • the distance information acquired by the distance calculation unit 39 can be used for, for example, autofocus (AF).
  • that is, the distance calculation unit 39 acquires distance information based on the deviation between the R component and the B component, and the system controller 30 drives the focus lens of the lens 10 via the lens control unit 14 based on the acquired distance information; in other words, phase difference AF can be performed. This enables high-speed AF based on a single captured image.
  • AF control may be performed based on the calculation result of the distance calculation unit 39, or may be performed by the contrast AF control unit 38.
  • contrast AF by the contrast AF control unit 38 has high focusing accuracy, but requires a plurality of captured images, so the focusing speed cannot be said to be fast.
  • in contrast, since the calculation of the subject distance by the distance calculation unit 39 can be performed from a single captured image, the focusing speed is fast, but the focusing accuracy may be inferior to that of contrast AF.
  • therefore, the AF assist control unit 38a provided in the contrast AF control unit 38 may perform AF by combining the contrast AF control unit 38 and the distance calculation unit 39. That is, the distance calculation unit 39 first performs a distance calculation based on the deviation between the R component and the B component of the image acquired through the band limiting filter 12, and determines whether the subject is on the far side or the near side of the current focus position (or how far the subject is from the current focus position). Next, the AF assist control unit 38a controls the contrast AF control unit 38 to drive the focus lens toward the acquired far side or near side (by the acquired distance) and perform contrast AF. By performing such processing, high focusing accuracy can be obtained at a fast focusing speed.
  • the R image and B image obtained from the image sensor 22 can be used as, for example, a stereo stereoscopic image (3D image).
  • the 3D image only needs to be able to observe the image from the left pupil with the left eye and the image from the right pupil with the right eye.
  • An anaglyph method has been conventionally known as such a 3D image observation method (see also Japanese Patent Laid-Open No. 4-251239 described above).
  • in the anaglyph method, generally, a red left-eye image and a blue right-eye image are generated and both are displayed, and the viewer observes them through anaglyph red-and-blue glasses having a red transmission filter on the left eye side and a blue transmission filter on the right eye side, so that a monochrome stereoscopic image can be observed.
  • if the R image and the B image obtained from the image sensor 22 in the standard posture (images on which the colorization processing for correcting the positional deviation between the R component and the B component in the color image generation unit 37 has not been performed) are displayed and observed through anaglyph-type red-and-blue glasses, stereoscopic viewing is possible as-is.
  • that is, as described with reference to FIG. 3, when the imaging apparatus is in the standard posture, the RG filter 12r of the band limiting filter 12 is arranged on the left side as seen when viewing the subject from the imaging element 22, and the GB filter 12b is arranged on the right side.
  • consequently, the R component light transmitted through the left RG filter 12r is observed only by the left eye, and the B component light transmitted through the right GB filter 12b is observed only by the right eye, enabling stereoscopic viewing.
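  • As a small illustration of this observation scheme, an anaglyph frame can be composed directly from the two pupil images (a sketch; it assumes the uncorrected R and B images are already available as 2-D arrays):

```python
import numpy as np

def make_anaglyph(r_image, b_image):
    """Compose a red/blue anaglyph from the two pupil-divided images.

    r_image: R image (left pupil, seen by the left eye through the red filter).
    b_image: B image (right pupil, seen by the right eye through the blue filter).
    Returns an RGB frame; the green channel is left empty in this simple scheme.
    """
    h, w = r_image.shape
    out = np.zeros((h, w, 3), dtype=r_image.dtype)
    out[..., 0] = r_image   # red channel  -> left-eye image
    out[..., 2] = b_image   # blue channel -> right-eye image
    return out
```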
  • furthermore, the stereo image generation unit 40 can also generate a color stereoscopic image that does not rely on the anaglyph method.
  • FIG. 12 is a diagram showing a first modification of the band limiting filter 12.
  • FIG. 13 is a diagram showing a second modification of the band limiting filter 12.
  • the band limiting filter 12 shown in FIG. 13 has a configuration in which a region through which the photographic light flux passes without band limitation is sandwiched between the first region and the second region in the pupil region of the imaging optical system 9.
  • that is, this band limiting filter 12 is a filter in which a W filter 12w that passes the components of all RGB colors (that is, white light W) is disposed between the left RG filter 12r and the right GB filter 12b.
  • FIG. 14 is a diagram showing a third modification of the band limiting filter 12.
  • in the band limiting filter 12 shown in FIG. 14, the first region and the second region are arranged at positions that differ both vertically and horizontally when the imaging apparatus is in the standard posture. That is, the band limiting filter 12 divides a circular filter into four quadrants in a cross shape, arranging the RG filter 12r at the lower left (the third quadrant), the GB filter 12b at the upper right (the first quadrant), the first G filter 12g1 at the upper left (the second quadrant), and the second G filter 12g2 at the lower right (the fourth quadrant).
  • the direction of the phase difference acquired by the band limiting filter shown in FIG. 14 is oblique, at 45 degrees.
  • in this case, the subsequent color image generation unit performs the colorization processing in the 45-degree oblique phase difference direction (whereas with a normal band limiting filter, i.e., the examples other than FIG. 14, a horizontal phase difference is used), and the stereo image generation unit then performs the stereo image generation processing using a normal horizontal phase difference (the horizontal phase difference is easily obtained by extracting only the horizontal component of the 45-degree oblique phase difference, as sketched below).
  • in this way, even when the phase difference acquired by the band limiting filter is in an oblique direction, it is possible to generate a stereo image having a phase difference in the horizontal direction, which is natural for a human viewer.
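  • Extracting the horizontal component from the oblique phase difference is a one-line projection; a sketch (assuming the oblique phase difference is measured along the +45 degree axis):

```python
import math

def horizontal_component(oblique_phase, axis_deg=45.0):
    """Project a phase difference measured along an oblique axis onto the
    horizontal direction (for the 45-degree filter of FIG. 14)."""
    return oblique_phase * math.cos(math.radians(axis_deg))
```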
  • even when the band limiting filter 12 is configured as shown in FIGS. 12 to 14 and the like, only the amount of deviation between the R image and the B image and the shape of the blur PSF, described later, change. Therefore, an image with corrected color misalignment can be generated by similarly applying the colorization processing for correcting the positional deviation between the R component and the B component in the color image generation unit 37, as described below.
  • FIG. 15 is a diagram showing a fourth modification of the band limiting filter 12.
  • while the band limiting filter 12 shown in FIG. 3 and FIGS. 12 to 14 is a normal color filter, the band limiting filter 12 shown in FIG. 15 is configured by a color selective transmission element 18.
  • the color selective transmission element 18 is realized by combining a plurality of members, such as a member capable of rotating the polarization transmission axis according to color (wavelength) and a member, such as an LCD, capable of selectively controlling whether the polarization transmission axis is rotated, and it can change the color distribution.
  • a specific example of the color selective transmission element 18 is the color switch manufactured by Color Link, as disclosed in "SID '00 Digest, Vol. 31, p. 92" at SID 2000 in April 2000.
  • the color selective transmission element 18 is configured so that, when viewed from the imaging element 22 with the imaging apparatus in the standard posture (lateral position), the left half is a first color selective transmission element 18L and the right half is a second color selective transmission element 18R.
  • the first color selective transmission element 18L can selectively take an RG state, which passes the G and R components and blocks the B component, and a W state, which passes all the RGB components (W).
  • the second color selective transmission element 18R can selectively take a GB state, which passes the G and B components and blocks the R component, and a W state, which passes all the RGB components (W).
  • the first color selective transmission element 18L and the second color selective transmission element 18R are driven independently by a color selection drive unit (not shown); by setting the first color selective transmission element 18L to the RG state and the second color selective transmission element 18R to the GB state, the same function as the band limiting filter 12 shown in FIG. 3 can be achieved.
  • likewise, by controlling the states of the two elements, the color selective transmission element 18 can perform the same function as the band limiting filters 12 shown in the other figures; the band limiting filter 12 can thus also be configured in this manner.
  • FIGS. 16 to 24 are diagrams showing Example 1 of the colorization process.
  • the colorization process is a process for correcting the blur having the shape shown in FIG. 7 or FIG. 10 into the blur having the shape shown in FIG. 16.
  • FIG. 16 is a diagram showing an outline of the blurred shape after the colorization process.
  • FIG. 17 is a diagram illustrating the shape of the R filter applied to the R image of a subject farther than the in-focus position in the colorization process of Example 1, and FIG. 18 is a diagram illustrating the shape of the B filter applied to the B image of such a subject.
  • in Example 1, color misregistration correction and blur shape correction are performed by filtering processing (processing that convolves a filter kernel with an image). That is, by performing the filtering process for R on the R image, the blur shape and centroid position of the R image are approximated to the blur shape and centroid position of the G image, which is the standard image; likewise, by performing the filtering process for B on the B image, the blur shape and centroid position of the B image are approximated to those of the G image.
  • the centroid position is approximated in order to correct the color misregistration, and the blur shape is approximated in order to correct the blur shape, which differs for each color, so that the blur in the color image becomes natural.
  • the blur shapes of the R image and the B image are matched to the blur shape of the G image, which is the standard image, because the blur of the G image is circular and the blur size of the G image is larger than those of the R image and the B image, which makes it easier to process the blur shapes of the R image and the B image toward the blur shape of the G image.
  • the R filter kernel shown in FIG. 17 is for performing a convolution operation on the R image, and a Gaussian filter is arranged as an example of a blur filter. The peak of the filter coefficients of this Gaussian filter (corresponding approximately to the centroid position of the filter coefficients) is placed not at the kernel center (the position where the horizontal line Ch, which divides the kernel vertically into two equal parts, intersects the vertical line Cg, which passes through the blur centroid of the G image) but at a position shifted by the phase difference between the G image and the R image (the position where the horizontal line Ch intersects the vertical line Cr passing through the blur centroid of the R image).
  • the B filter kernel shown in FIG. 18 is for performing a convolution operation on the B image; similarly to the R filter kernel, a Gaussian filter is arranged as an example of a blur filter. The peak of the filter coefficients of this Gaussian filter is placed at a position shifted from the kernel center (the position where the horizontal line Ch and the vertical line Cg intersect) by the phase difference between the G image and the B image (the position where the horizontal line Ch intersects the vertical line Cb passing through the blur centroid of the B image), as in the sketch below.
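  • A sketch of building such a shifted Gaussian kernel and applying it (parameter names are hypothetical; the kernel peak is offset horizontally by the per-region phase difference, approximating the R and B filter kernels of FIGS. 17 and 18):

```python
import numpy as np
from scipy.ndimage import convolve

def shifted_gaussian_kernel(size, sigma, shift_x):
    """Gaussian blur kernel whose coefficient peak sits shift_x pixels
    horizontally off the kernel center (as in the R/B filter kernels)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(((x - shift_x) ** 2) + y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()   # normalize so overall brightness is preserved

# Rough use for one image region, with phase = signed G-to-R phase difference
# (the B image shifts the opposite way for a symmetric filter like FIG. 3):
# r_corrected = convolve(r_region, shifted_gaussian_kernel(15, 3.0, phase))
# b_corrected = convolve(b_region, shifted_gaussian_kernel(15, 3.0, -phase))
```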
  • FIG. 19 is a diagram showing the shifted state of the R image and the B image of a subject farther than the in-focus position in the colorization process of a modification of Example 1, and FIG. 20 is a diagram showing the shape of the filter applied to the R image and the B image in the colorization process of that modification.
  • in this modification, the colorization processing is performed by correcting the color misregistration by parallel movement (shift) and correcting the blur shape by filtering processing. That is, by performing a shift corresponding to the phase difference on the R image shown in FIG. 10, the blur centroid position of the R image is moved to the blur centroid position of the G image, which is the standard image, as shown in FIG. 19; similarly, by performing a shift corresponding to the phase difference on the B image shown in FIG. 10, the blur centroid of the B image is moved to the blur centroid of the G image, as shown in FIG. 19.
  • the filter kernel shown in FIG. 20 is for performing a convolution operation on the shifted R image and B image, and a Gaussian filter is arranged as an example of the blur filter. This filter kernel has a filter shape in which the peak of the filter coefficients of the Gaussian filter (corresponding to the centroid position of the filter coefficients) is at the kernel center (the position where the horizontal line Ch and the vertical line Cg intersect).
  • since the band limiting filter 12 assumed here has the first region and the second region arranged symmetrically as shown in FIG. 3, the same filtering process is applied to the R image and the B image; if the first region and the second region are asymmetric, different filtering processes may of course be applied to the R image and the B image.
  • In the above description, a Gaussian filter (circular Gaussian filter) is taken as an example of the blur filter, but the present invention is not limited to this.
  • With the filter shape shown in FIG. 3, the blur generated in the R image and the B image has a half-moon shape elongated in the vertical direction (that is, a shape shorter in the horizontal direction than a circle), as shown in FIGS. 7 and 10. Therefore, if an elliptical Gaussian filter as shown in FIG. 21 or FIG. 22, whose major axis lies in the horizontal direction (more generally, in the direction in which the phase difference is generated), is used, a correction process that brings the blur closer to a circular shape can be performed more accurately.
  • FIG. 21 is a diagram showing the shape of an elliptical Gaussian filter in which the horizontal standard deviation is increased, and FIG. 22 is a diagram showing the shape of an elliptical Gaussian filter in which the vertical standard deviation is reduced.
  • FIGS. 21 and 22 show examples in which the peak of the filter coefficients (corresponding to the position of the center of gravity of the filter coefficients) is located at the center of the filter kernel, corresponding to FIG. 20; in the cases corresponding to FIGS. 17 and 18, it goes without saying that the peak of the filter coefficients is shifted from the center of the filter kernel.
  • Furthermore, the blur filter is not limited to the circular Gaussian filter and the elliptical Gaussian filter; any blur filter that can bring the blur shape of the R image or the B image close to the blur shape of the G image can be widely applied.
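  • As a hedged illustration, the elliptical kernel differs from the circular one only in having separate horizontal and vertical standard deviations; the values below are assumptions:

      import numpy as np

      def elliptical_gaussian_kernel(size, sigma_x, sigma_y, dx=0.0):
          # Elliptical Gaussian kernel: sigma_x > sigma_y stretches the blur
          # along the horizontal axis, the direction of the phase difference.
          c = size // 2
          y, x = np.mgrid[-c:c + 1, -c:c + 1]
          k = np.exp(-((x - dx) ** 2 / (2.0 * sigma_x ** 2)
                       + y ** 2 / (2.0 * sigma_y ** 2)))
          return k / k.sum()

      # FIG. 21-style kernel (enlarged horizontal sigma) and FIG. 22-style
      # kernel (reduced vertical sigma); the numbers are illustrative only.
      k_fig21 = elliptical_gaussian_kernel(15, sigma_x=3.0, sigma_y=2.0)
      k_fig22 = elliptical_gaussian_kernel(15, sigma_x=2.0, sigma_y=1.2)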
  • FIG. 23 is a flowchart illustrating the colorization process performed by the color image generation unit 37 in the first embodiment.
  • Step S1: When this process is started, initial setting is performed. First, the RGB images to be processed (that is, the R image, the G image, and the B image) are read. Next, an R copy image that is a copy of the R image and a B copy image that is a copy of the B image are created.
  • Step S2: A target pixel for performing phase difference detection is set.
  • The target pixel is set in one of the R image and the B image — here, for example, the R image.
  • Step S3: The phase difference for the target pixel set in step S2 is detected.
  • This phase difference is detected by setting a partial area of the R image centered on the target pixel as the standard image, and a partial area of the same size in the B image as the reference image (see FIG. 26, etc.).
  • The distance calculation unit 39 performs a correlation calculation between the standard image and the reference image while shifting the position of the reference image in the direction in which the phase difference is generated; the positional displacement of the reference image judged to have the highest correlation with respect to the standard image is the phase difference amount (note that the sign of the displacement gives the direction of the misalignment).
  • The partial area can be set to an arbitrary size; however, in order to detect the phase difference stably, it is preferable to use partial areas of 30 [pixel] or more in both the vertical and horizontal directions — an area of 51 × 51 [pixel] is one example.
  • The correlation calculation in the distance calculation unit 39 is performed by a process such as a ZNCC calculation, or an SAD calculation applied to images that have been filtered in advance.
  • In the ZNCC calculation, the correlation value R_ZNCC is obtained by the following Equation 1:

      R_ZNCC = { Σ_{j=1..N} Σ_{i=1..M} ( I(i,j) − Ī )( T(i,j) − T̄ ) }
               / √{ Σ_{j=1..N} Σ_{i=1..M} ( I(i,j) − Ī )² × Σ_{j=1..N} Σ_{i=1..M} ( T(i,j) − T̄ )² }   (Equation 1)

  • Here, I is the partial area of the R image, T is the partial area of the B image (a partial area of the same size as I), Ī is the average value of I, T̄ is the average value of T, M is the horizontal width [pixel] of the partial area, and N is the vertical width [pixel] of the partial area.
  • In the SAD calculation, a filtering process such as a differential filter typified by a Sobel filter, or a band-pass filter such as a LoG filter, is first applied to the R image and the B image; the correlation calculation is then performed by the SAD of the following Equation 2.
  • R_SAD = Σ_{j=1..N} Σ_{i=1..M} | I′(i,j) − T′(i,j) |   (Equation 2)

  • Here, I′ is the partial region of the R image after the filtering process, T′ is the partial region of the B image after the filtering process (a partial region of the same size as I′), M is the horizontal width [pixel] of the partial area, N is the vertical width [pixel] of the partial area, and R_SAD is the correlation value obtained as a result of the correlation calculation (the smaller the value, the higher the correlation).
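  • For reference, the two correlation measures and the shift search can be sketched directly from the definitions above; the window and search sizes are assumed values, and the helper names are hypothetical:

      import numpy as np

      def zncc(i_patch, t_patch):
          # Equation 1: zero-mean normalized cross-correlation of two patches.
          i = i_patch - i_patch.mean()
          t = t_patch - t_patch.mean()
          denom = np.sqrt((i * i).sum() * (t * t).sum())
          return (i * t).sum() / denom if denom > 0 else 0.0

      def detect_phase(r_img, b_img, cy, cx, half=25, search=20):
          # Scan horizontal shifts of the B-image window against the R-image
          # window centered at (cy, cx); the best-scoring shift (sign included)
          # is the phase difference amount. Borders are ignored in this sketch.
          ref = r_img[cy - half:cy + half + 1, cx - half:cx + half + 1]
          best_shift, best_score = 0, -np.inf
          for s in range(-search, search + 1):
              cand = b_img[cy - half:cy + half + 1,
                           cx - half + s:cx + half + 1 + s]
              score = zncc(ref, cand)
              if score > best_score:
                  best_score, best_shift = score, s
          return best_shift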
  • Step S4: It is determined whether or not the phase difference detection processing for all the target pixels in the image is completed; until it is, the processing of steps S2 and S3 is repeated while shifting the position of the target pixel.
  • Here, "all the target pixels in the image" means all pixels in the image for which the phase difference can be detected. Pixels whose phase difference cannot be detected may be left undetected, or values may be calculated for them by interpolation or the like based on the detection results of surrounding target pixels.
  • Step S5: A target pixel for performing the correction process is set among the pixels for which the phase difference amount has been obtained.
  • The pixel position of the target pixel is the same (that is, common) pixel position in the R image and the B image.
  • As the correction process, the case described with reference to FIGS. 17 and 18 — a colorization process using only a filter, without parallel movement (shift) — will be described here.
  • Step S6: The shape of the R-image filter for filtering the R image is acquired according to the phase difference amount of the target pixel.
  • The relationship between the phase difference amount and the filter shape for the R image is held in advance in the imaging apparatus as a table, for example as shown in Table 1 below.
  • The filter shape is determined by the size of the filter kernel, the displacement of the Gaussian filter from the center of the filter kernel, and the standard deviation σ of the Gaussian filter (this standard deviation σ indicates the degree of spread of the blur of the blur filter). Therefore, the shape of the R-image filter can be acquired by referring to the table based on the phase difference amount of the target pixel.
  • Step S7: A filtering process is performed on a neighborhood region consisting of the target pixel and its neighboring pixels in the R image, and the filter output value at the target pixel is acquired. The acquired filter output value is then copied to the target pixel position of the R copy image, updating the R copy image.
  • Step S8: The shape of the B-image filter for filtering the B image is acquired according to the phase difference amount of the target pixel.
  • The relationship between the phase difference amount and the filter shape for the B image is held in advance in the imaging apparatus as a table, for example as shown in Table 2 below.
  • The filter shape is determined by the size of the filter kernel, the displacement of the Gaussian filter from the center of the filter kernel, and the standard deviation σ of the Gaussian filter. Therefore, the shape of the B-image filter can be acquired by referring to the table based on the phase difference amount of the target pixel.
  • Step S9: A filtering process is performed on a neighborhood region consisting of the target pixel and its neighboring pixels in the B image, and the filter output value at the target pixel is acquired. The acquired filter output value is then copied to the target pixel position of the B copy image, updating the B copy image.
  • Step S10: It is determined whether or not the filtering process for all the target pixels in the image has been completed; until it is, the processing of steps S5 to S9 is repeated while shifting the position of the target pixel.
  • When it is completed, the R copy image and the B copy image are output as the corrected images for the R image and the B image, and the colorization process is terminated.
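  • Putting steps S5 to S10 together, the per-pixel correction amounts to looking up a kernel for each pixel's phase difference and writing the filter response into the copy image. A minimal sketch follows; kernel_for_phase is a hypothetical stand-in for the Table 1 / Table 2 lookup:

      import numpy as np

      def correct_image(img, phase_map, kernel_for_phase):
          # img: the R image or the B image; phase_map: per-pixel phase
          # difference. kernel_for_phase(phase) returns the blur kernel
          # for that phase difference (e.g., from a table).
          out = img.copy()
          h, w = img.shape
          for y in range(h):
              for x in range(w):
                  k = kernel_for_phase(phase_map[y, x])
                  r = k.shape[0] // 2
                  if r <= y < h - r and r <= x < w - r:
                      region = img[y - r:y + r + 1, x - r:x + r + 1]
                      out[y, x] = (region * k).sum()  # filter output at pixel
          return out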
  • When an elliptical Gaussian filter as shown in FIG. 21 or FIG. 22 is used in step S6 or step S8 instead of the circular Gaussian filter, the standard deviation of the elliptical Gaussian filter must be set separately for the x direction and the y direction, so the parameter table relating to the filter shape held in the imaging apparatus becomes, for example, as shown in Table 3 below, excluding the displacement from the filter kernel center.
  • The displacement from the filter kernel center when the elliptical Gaussian filter is used in step S6 may be, for example, the same as in Table 1, and when it is used in step S8, for example, the same as in Table 2.
  • Tables 1 and 2 illustrate an example in which the shape of the filter corresponding to the phase difference amount is acquired using a table, but the method is not limited to this.
  • For example, the correspondence between each parameter determining the filter shape and the phase difference amount may be held as an equation, and the filter shape may be determined by substituting the phase difference amount into the equation and calculating.
  • The processing for detecting the phase difference from the image captured through the band limiting filter 12 and the colorization processing for correcting the color misregistration do not necessarily have to be performed inside the imaging device; they may be performed by an image processing apparatus 41 separate from the imaging device (for example, a computer that executes an image processing program).
  • FIG. 24 is a block diagram illustrating the configuration of the image processing apparatus 41 according to the first embodiment.
  • Compared with the configuration shown in FIG. 1, the image processing apparatus 41 omits the lens unit 1, the lens mount (not shown), the shutter 21, the image sensor 22, the imaging circuit 23, the imaging drive unit 24, the contrast AF control unit 38 (including the AF assist control unit 38a) related to AF control, the body-side communication connector 35 related to communication with the lens unit 1, the strobe control circuit 33 and the strobe 34 related to subject illumination, and the sensor unit 31 for obtaining the state of the imaging device, and is further provided with a recording unit 42 for recording information input from the interface (IF) 28.
  • The recording unit 42 is configured to output information input and recorded via the interface 28 to the image processing unit 25; information can also be recorded from the image processing unit 25 into the recording unit 42.
  • The recording unit 42, together with the interface 28, is controlled by the system controller 30A (the reference numeral of the system controller is 30A in accordance with the removal of the contrast AF control unit 38; similarly, the reference numeral of the operation unit is 32A).
  • A series of processing in the image processing apparatus 41 is performed, for example, as follows. First, an image is captured using an imaging device including the band limiting filter 12 and is recorded on the recording medium 29 as a RAW image output from the imaging circuit 23; information on the shape of the band limiting filter 12, the lens data of the imaging optical system 9, and the like are also recorded on the recording medium 29.
  • Next, the recording medium 29 is connected to the interface 28 of the image processing apparatus 41, and the images and the various information are recorded in the recording unit 42; thereafter, the recording medium 29 may be removed from the interface 28.
  • Then, the images and the various information recorded in the recording unit 42 are read out, and, in the same manner as in the imaging device described above, the calculation of the phase difference by the distance calculation unit 39, the colorization processing by the color image generation unit 37 that corrects the color misregistration, and the stereoscopic image generation processing by the stereo image generation unit 40 described later are performed.
  • The colorized image and the stereoscopic image processed by the image processing apparatus 41 are recorded in the recording unit 42 again; they can further be displayed on the display unit or transmitted to an external device via the interface 28, where they can be used for various purposes.
  • According to such Example 1, the center-of-gravity positions of the blurs of the R image and the B image can be brought close to that of the G image by the blur filter itself, or by parallel movement (shift) before the blur filter is applied; it is therefore possible to obtain an image more preferable for viewing, with reduced color misregistration.
  • FIGS. 25 to 30 show Example 2 of the colorization process; FIG. 25 is a diagram showing an outline of the colorization process performed by the color image generation unit 37 in Example 2.
  • In this example, the colorization processing in the color image generation unit 37 is different from that in Example 1: whereas in Example 1 the correction of the color misregistration is performed using a blur filter, in this example it is performed by image copy-addition processing.
  • As shown in FIG. 25, for a subject farther than the in-focus position, the blur of the R image has a shape in which the left half of the blur of the G image is missing, and the blur of the B image has a shape in which the right half of the blur of the G image is missing.
  • Ideally, the shape of the partial area to be copy-added would match the shape of the missing portion of the blur in the R image and the B image, but here, in order to simplify the processing, a rectangular (for example, square) area is used.
  • In the circular blur diffusion region of the G image, a blur diffusion partial region G1 and a blur diffusion partial region G2 are shown.
  • The blur diffusion partial region G1 and the blur diffusion partial region G2 have the same size and are arranged at positions symmetric with respect to the vertical line Cg passing through the center of gravity of the circular blur diffusion region of the G image.
  • Considering that they must fulfill the function of color misregistration correction, the sizes of the partial regions G1 and G2 are preferably about the size of the radius of the circular blur diffusion region.
  • The blur diffusion partial region R1 shown for the R image is a region having the same size and position as the blur diffusion partial region G2; in the R image, a blur diffusion partial region R2 having the same size and position as the blur diffusion partial region G1 is missing.
  • Similarly, the blur diffusion partial region B1 shown for the B image has the same size and position as the blur diffusion partial region G1; in the blur diffusion region of the B image, a blur diffusion partial region B2 having the same size and position as the blur diffusion partial region G2 is missing.
  • Therefore, the blur diffusion partial region R1 of the R image is copy-added (to an R copy image that is a copy of the R image, as will be described later) after being moved by the movement amount necessary to bring the blur diffusion partial region G2 exactly onto the blur diffusion partial region G1, thereby generating the blur diffusion partial region R2 of the R image.
  • This blur diffusion partial region R2 is a region corresponding to the blur diffusion partial region G1 of the G image (or the blur diffusion partial region B1 of the B image).
  • Similarly, the blur diffusion partial region B1 of the B image is copy-added (to a B copy image that is a copy of the B image) after being moved by the movement amount necessary to bring the blur diffusion partial region G1 exactly onto the blur diffusion partial region G2, thereby generating the blur diffusion partial region B2 of the B image.
  • This blur diffusion partial region B2 is a region corresponding to the blur diffusion partial region G2 of the G image (or the blur diffusion partial region R1 of the R image).
  • As a result, the center of gravity of the blur diffusion region of the R image approaches the center of gravity of the blur diffusion region of the G image, and the center of gravity of the blur diffusion region of the B image likewise approaches the center of gravity of the blur diffusion region of the G image, so the color misregistration is reduced.
  • FIG. 26 is a diagram showing the partial areas set in the R image and the B image when performing phase difference detection in Example 2; FIG. 27 is a diagram showing the state in which the blur diffusion partial area of the original R image is copy-added to the R copy image; FIG. 28 is a diagram showing the state in which the blur diffusion partial area of the original B image is copy-added to the B copy image; FIG. 29 is a diagram showing how the size of the blur diffusion partial region is changed according to the phase difference amount; and FIG. 30 is a flowchart showing the colorization process performed by the color image generation unit 37 in Example 2. The description follows FIG. 30, referring to FIGS. 26 to 29 as appropriate.
  • Step S21: When this process is started, initial setting is performed.
  • First, the RGB images to be processed (that is, the R image, the G image, and the B image) are read; when the input image is a Bayer image, the demosaicing process is performed in advance in the image processing unit 25.
  • Next, the color difference amount between the R image and the G image is calculated to generate a Cr image that is a color difference image, and the color difference amount between the B image and the G image is calculated to generate a Cb image that is a color difference image, using for example the simple color differences of Equation 3: Cr = R − G, Cb = B − G.
  • Since the conversion of Equation 4:

      Cr = 0.50000R − 0.41869G − 0.08131B
      Cb = −0.16874R − 0.33126G + 0.50000B   (Equation 4)

    is widely known, Equation 4 may be used instead of Equation 3.
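  • Written out as code, the two alternatives are as follows; the reading of Equation 3 as a plain difference against G is an assumption based on the description above, while the Equation 4 coefficients are the widely known BT.601 values:

      import numpy as np

      def color_diff_eq3(r, g, b):
          # Assumed reading of Equation 3: plain color differences against G.
          return r - g, b - g                      # Cr, Cb

      def color_diff_eq4(r, g, b):
          # Equation 4: the widely known YCbCr (BT.601) color-difference form.
          cr = 0.50000 * r - 0.41869 * g - 0.08131 * b
          cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
          return cr, cb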
  • Further, a Cr copy image Cr1 that is a copy of the original Cr image Cr0 and a Cb copy image Cb1 (see FIG. 28) that is a copy of the original Cb image Cb0 are created, and a Cr count image and a Cb count image having the same size as Cr1 and Cb1 are generated (in these count images, the pixel values are initialized to 1 for all pixels).
  • Step S22: Subsequently, a partial region for performing phase difference detection is set.
  • The partial region is set in one of the R image and the B image — here, for example, the R image.
  • Step S23: The phase difference for the partial region set in step S22 is detected. As shown in FIG. 26, the partial region set in the R image is used as the standard image and a partial region of the same size in the B image as the reference image, and phase difference detection is thereby performed between the R image and the B image.
  • Step S24: Based on the phase difference amount obtained by the processing of step S23, the radius of the circular blur of the G image (or the radius of the semicircular blur of the R image and the B image) is acquired.
  • The relationship between the phase difference amount and the blur radius of the G image is held in advance in the imaging apparatus as a table, a mathematical expression, or the like, so the blur radius can be acquired by referring to the table or calculating from the expression based on the phase difference amount.
  • Alternatively, the processing of step S24 may be omitted and a simple method applied in which the phase difference amount acquired in step S23 is used in place of the blur radius; in this case, the relationship between the phase difference amount and the blur radius need not be held in the imaging apparatus in advance.
  • Step S25: A partial region is read from the original Cr image Cr0, shifted by a predetermined amount corresponding to the phase difference detected in step S23, and then copy-added to the Cr copy image Cr1.
  • The predetermined amount for shifting the partial region is an amount including the shift direction, and its magnitude is, for example, the blur radius acquired in step S24.
  • The reason the Cr copy image Cr1 is created in step S21 is that the original Cr image Cr0 must be held separately from the Cr copy image Cr1, whose pixel values are changed by the copy addition (the same applies to the Cb copy image Cb1); however, when the partial regions are processed in parallel rather than sequentially in raster-scan order, it is not necessary to prepare a copy image.
  • Step S26: Subsequently, +1 is added to the region of the Cr count image at the position where the copy-addition processing was performed in step S25, so that the number of additions can be known; this Cr count image is used to perform the pixel value normalization processing in step S30 below.
  • Step S27: Further, a partial region is read from the original Cb image Cb0 at the same position as the position copied to the Cr copy image Cr1 in step S25, and is copy-added to the Cb copy image Cb1 at the position from which the copy-source data was acquired from the original Cr image Cr0 in step S25.
  • That is, the predetermined amount for shifting the Cb image has the same absolute value as the predetermined amount for shifting the Cr image, but the opposite direction.
  • Step S28: Then, +1 is added to the region of the Cb count image at the position where the copy-addition processing was performed in step S27 (that is, the position from which the copy-source data was acquired from the original Cr image Cr0 in step S25), so that the number of additions can be known; this Cb count image is also used for the pixel value normalization processing in step S30.
  • In steps S25 and S27 described above, the copy-addition processing is performed for each partial region of the image; this partial region may be the same as the partial region for which the phase difference detection was performed in step S23, or a partial region of a different size.
  • The size of the partial region used for the copy-addition processing may be constant over the entire image (that is, a partial region of global size), or may differ for each partial region set in the image (that is, a partial region of local size).
  • Further, the size of the partial region used in steps S25 to S28 may be changed according to the phase difference amount detected in step S23, as shown in FIG. 29; the processing may also be branched depending on whether or not the phase difference amount is 0, with no processing performed when it is 0.
  • In FIG. 29, the size of the partial region is increased in proportion to the phase difference amount; since the slope of the straight line is set appropriately according to the configuration of the optical system, no specific scale is shown in FIG. 29.
  • FIG. 29 shows an example in which the relationship between the phase difference amount and the size of the partial region is proportional, but the relationship is of course not limited to a proportional one; the design may be any that gives an appropriate partial region size for each phase difference amount.
  • The amount of diffusion (the point spread amount of the PSF (Point Spread Function)) when the light beam from a point light source is blurred is not necessarily uniform within the blur area; for example, it is conceivable that the diffusion is smaller (that is, the luminance is lower) in the peripheral portion of the blur than in the central portion. Therefore, when performing the copy addition of partial regions as described above, a weighting factor corresponding to the amount of blur diffusion may be multiplied — for example, each pixel in the peripheral part of the partial region is multiplied by a weighting factor of 1/2 and each pixel in the central part by a weighting factor of 1 before the copy addition. In that case, in the count images of steps S26 and S28 as well, 1/2 is added in the peripheral part of the partial region and 1 in the central part.
  • Step S29: Thereafter, it is determined whether or not the processing for all the partial regions in the image is completed; until it is, the processing of steps S22 to S28 is repeated while shifting the position of the partial region.
  • An arbitrary value can be set as the step by which the partial region is shifted, but it is preferably a value smaller than the width of the partial region.
  • Step S30: When it is determined in step S29 that the processing for all the partial regions has been completed, a normalized Cr copy image is obtained by dividing the pixel value of the Cr copy image by the pixel value of the Cr count image at each same pixel position, and a normalized Cb copy image is obtained by dividing the pixel value of the Cb copy image by the pixel value of the Cb count image.
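  • The copy-addition of steps S25 to S30 follows a single pattern — add shifted patches into a copy image, track how many additions land on each pixel, and divide at the end. A minimal sketch, with the region list supplied by the caller:

      import numpy as np

      def copy_add(src, regions):
          # src: original color-difference image (Cr0 or Cb0).
          # regions: iterable of (y, x, h, w, dy, dx) -- the h-by-w patch at
          # (y, x) is added into the copy image at (y + dy, x + dx).
          acc = src.copy()                 # copy image, steps S25/S27
          count = np.ones_like(src)        # count image, initialized to 1
          for y, x, h, w, dy, dx in regions:
              patch = src[y:y + h, x:x + w]
              acc[y + dy:y + dy + h, x + dx:x + dx + w] += patch
              count[y + dy:y + dy + h, x + dx:x + dx + w] += 1  # S26/S28
          return acc / count               # normalization, step S30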
  • Step S31: An R image, a B image, and a G image are generated using the G image (or Y image) and the Cr copy image and the Cb copy image normalized in step S30.
  • When Equation 3 was used to calculate the color difference images, its inverse is used; when Equation 4 was used, the widely known inverse conversion R = Y + 1.40200Cr, G = Y − 0.34414Cb − 0.71414Cr, B = Y + 1.77200Cb is used to generate the R image, the B image, and the G image.
  • The RGB image calculated in step S31 is the image obtained by the colorization process in the color image generation unit 37.
  • FIGS. 31 to 34 show Example 3 of the colorization processing.
  • FIG. 31 is a diagram showing an outline of the PSF table of each color according to the phase difference amount in Example 3; FIG. 32 is a diagram showing an outline of the colorization processing performed by the color image generation unit 37; FIG. 33 is a diagram showing an outline of the colorization processing with blur amount control performed by the color image generation unit 37; and FIG. 34 is a flowchart illustrating the colorization processing performed by the color image generation unit 37 in Example 3.
  • In this example, the colorization process in the color image generation unit 37 differs from those in Examples 1 and 2. That is, in this example the colorization process is performed by combining a restoration process (inverse filtering process) that restores a blurred image to a non-blurred image with a filtering process that gives the restored non-blurred image a circular blur shape corresponding to the subject distance.
  • Through the band limiting filter 12, the PSF of each of the RGB colors changes according to the phase difference amount as shown in FIG. 31 (the dark parts that the light from the point light source does not reach are hatched).
  • PSFr, the PSF of the R image, shows a larger half-moon shape as the absolute value of the phase difference amount increases, and converges to one point at a phase difference amount of 0, that is, at the in-focus position. When the subject is closer than the in-focus position, PSFr has a left half-moon shape as shown in the upper half of FIG. 31; when it is farther than the in-focus position, it has a right half-moon shape as shown in the lower half of FIG. 31.
  • PSFb, the PSF of the B image, likewise shows a larger half-moon shape as the absolute value of the phase difference amount increases, and converges to one point at the in-focus position. When the subject is closer than the in-focus position, PSFb has a right half-moon shape as shown in the upper half of FIG. 31; when it is farther, it has a left half-moon shape as shown in the lower half of FIG. 31.
  • PSFg, the PSF of the G image, shows a larger full-moon shape as the absolute value of the phase difference increases, and converges to one point at a phase difference of 0, that is, at the in-focus position. Except when it converges to one point, PSFg is always a full moon, regardless of whether the subject is closer or farther than the in-focus position.
  • A PSF table of each color corresponding to the phase difference amount, as shown in FIG. 31, is assumed to be stored in advance, for example, in a non-illustrated nonvolatile memory in the color image generation unit 37 (in the configuration of FIG. 1, a table stored in the lens control unit 14 may instead be received by communication and used).
  • FIG. 32 shows an example in which the subject is farther than the in-focus position, but the same processing can be applied when the subject is closer than the in-focus position.
  • As shown in FIG. 32, the inverse operation of PSFr (the −1 on the right shoulder denotes the inverse operation; the same applies hereinafter) is performed on the R image, so that the right-semicircular blur is converted into a non-blurred image in which the point light source converges to one point (one of the restored first image and the restored second image).
  • Similarly, the inverse operation of PSFb is performed on the B image, so that the left-semicircular blur is converted into a non-blurred image in which the point light source converges to one point (the other of the restored first image and the restored second image).
  • In this way, the restoration processing of the R image and the B image is performed.
  • Next, PSFg, the PSF for the G image, is applied to the restored R image and the restored B image, so that a full-moon-shaped blur similar to that of the G image is generated in the R image and the B image.
  • The concrete processing is performed as follows.
  • Let Pr1 be the blur PSF of the R image at a certain pixel position, Pb1 the blur PSF at the same pixel position of the B image, and Pg1 the blur PSF at the same pixel position of the G image.
  • The blur PSF differs depending on the phase difference between the R image and the B image, and the phase difference basically differs for each pixel position; therefore the PSF is determined for each pixel position. Further, each PSF is assumed to be defined over a partial region consisting of a plurality of neighboring pixels centered on the pixel position of interest (see FIG. 31).
  • First, Pr1, the PSF centered on the pixel of interest in the R image, Pb1, the PSF centered on the pixel of interest at the same position in the B image, and Pg1, the PSF centered on the pixel of interest at the same position in the G image, are acquired, and two-dimensional Fourier transforms FFT2 are applied to them and to the partial regions r and b centered on the pixel of interest in the R image and the B image, giving the transformed values PR1, PB1, PG1, R, and B.
  • Then R is divided by PR1 and B is divided by PB1 to perform the restoration processing, each result is multiplied by PG1 to perform the filtering processing, and a two-dimensional inverse Fourier transform IFFT2 is applied, calculating an R image r′ and a B image b′ in which the same blur as in the G image is obtained, as in Equation 9:

      r′ = IFFT2( R × PG1 / (PR1 + γ) )
      b′ = IFFT2( B × PG1 / (PB1 + γ) )   (Equation 9)

  • Here, γ in Equation 9 is an arbitrary constant appropriately set according to the shapes of PR1 and PB1 (for example, according to the relationship between their absolute values), so that the division remains stable.
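  • In frequency space, the restoration and the filtering thus collapse into one multiplication: divide by the transform of the color's own PSF (stabilized by the constant) and multiply by the transform of the target PSF. A minimal sketch for one partial region follows; the padding and centering conventions are assumptions:

      import numpy as np

      def convert_blur(region, psf_own, psf_target, gamma=1e-3):
          # Replace the blur of `region` (whose PSF is psf_own) with
          # psf_target. Both PSFs are assumed zero-padded to the region size
          # and centered at the origin; gamma plays the role of the
          # stabilizing constant of Equation 9.
          f = np.fft.fft2
          spec = f(region) * f(psf_target) / (f(psf_own) + gamma)
          return np.real(np.fft.ifft2(spec))

      # r_prime = convert_blur(r_region, psf_r, psf_g)  # R gains G-shaped blur
      # b_prime = convert_blur(b_region, psf_b, psf_g)  # B gains G-shaped blur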
  • The restoration processing and the filtering processing described above are performed for each partial region of the image: a partial region is designated and processed, then the designated position is shifted slightly and the same restoration and filtering are performed again, until the entire region of the image has been processed.
  • For a pixel position processed a plurality of times, the sum of the corrected pixel values is divided by the number of corrections and averaged, giving a normalized corrected image.
  • It is necessary that the size of the partial region on which the restoration and filtering are performed be larger than the shape of the blur. It is therefore conceivable to adaptively change the size of the partial region according to the size of the blur; alternatively, when the range over which the blur shape changes with the phase difference is known in advance, a partial region of a fixed size larger than the maximum size of the blur may be used.
  • Step S41: When this process is started, initial setting is performed.
  • First, the RGB images to be processed (that is, the R image, the G image, and the B image) are read; next, an R copy image that is a copy of the R image and a B copy image that is a copy of the B image are created; and an R count image and a B count image having the same size as the R copy image and the B copy image are generated (in these count images, the pixel values are initialized to 1 for all pixels).
  • Step S42: Subsequently, a partial region for performing phase difference detection is set in one of the R image and the B image — here, for example, the R image.
  • Step S43: The phase difference for the partial region set in step S42 is detected, by using the partial region set in the R image as the standard image and a partial region of the same size in the B image as the reference image, as shown in FIG. 26; phase difference detection is thereby performed between the R image and the B image.
  • Step S44: Based on the phase difference amount obtained by the processing of step S43, the radius of the circular blur of the G image (or the radius of the semicircular blur of the R image and the B image) is acquired in the same manner as in step S24 described above.
  • Step S45: Next, the restoration processing and the filtering processing described above are performed on the partial region designated in step S42 in the original R image, and the processing result is copy-added to the R copy image at the same position as the partial region of the original R image.
  • Here, the case where the partial region for the colorization processing is the same as the partial region for the phase difference detection is described; as a matter of course, a different region may be set, as described above — for example, a partial region of adaptive size according to the detected phase difference may be used (the same applies to the B image described below).
  • Step S46: Subsequently, +1 is added to the region of the R count image corresponding to the partial region designated in step S42, so that the number of additions can be known; this R count image is used to perform the pixel value normalization processing in step S50 below.
  • Step S47: In addition, the restoration processing and the filtering processing described above are performed on the partial region designated in step S42 in the original B image, and the processing result is copy-added to the B copy image at the same position as the partial region of the original B image.
  • Step S48: Then, +1 is added to the region of the B count image corresponding to the partial region designated in step S42, so that the number of additions can be known; this B count image is also used for the pixel value normalization processing in step S50.
  • Step S49: Thereafter, it is determined whether or not the processing for all the partial regions in the image is completed; until it is, the processing of steps S42 to S48 is repeated while shifting the position of the partial region.
  • An arbitrary value can be set as the step by which the partial region is shifted, but it is preferably a value smaller than the width of the partial region.
  • Step S50: When it is determined in step S49 that the processing for all the partial regions has been completed, a normalized R copy image is obtained by dividing the pixel value of the R copy image by the pixel value of the R count image at each same pixel position, and a normalized B copy image is obtained by dividing the pixel value of the B copy image by the pixel value of the B count image.
  • The RGB image calculated in step S50 is the image obtained by the colorization processing in the color image generation unit 37; when the processing of step S50 is completed, the processing shown in FIG. 34 is terminated.
  • In the above description, the restoration processing and the filtering processing are performed after transforming from real space to frequency space using the Fourier transform, but the present invention is not limited to this; restoration processing or filtering processing in real space (for example, MAP estimation processing) may be applied instead.
  • In the above, the blur shapes of the R image and the B image are matched to the blur shape of the G image; in addition to this, blur amount control may be performed as follows.
  • As shown in FIG. 33, the inverse operation of PSFr is performed on the R image, the inverse operation of PSFb on the B image, and the inverse operation of PSFg on the G image, so that each blur shape is converted into a non-blurred image in which the point light source converges to one point (for the G image, the restored third image). The R image, the B image, and the G image are thereby restored.
  • Next, PSF′g, a desired PSF for the G image, is applied to the restored R image, B image, and G image.
  • In this way, a full-moon-shaped blur of a desired size can be generated in the R image, the B image, and the G image, and the blur amount can be controlled.
  • Concretely, in addition to the processing described above, a desired Pg1′ is further acquired as a PSF centered on the pixel of interest at the same position in the G image, and two-dimensional Fourier transforms FFT2 are performed on the acquired Pg1′ and on the partial region g centered on the pixel of interest in the G image, obtaining the transformed values as shown in Equation 10:

      PG1′ = FFT2(Pg1′)
      G = FFT2(g)   (Equation 10)
  • Then R is divided by PR1, B by PB1, and G by PG1 to perform the restoration processing, each result is multiplied by PG1′ to perform the filtering processing, and the results are subjected to a two-dimensional inverse Fourier transform IFFT2, calculating an R image r″, a B image b″, and a G image g″ in which a desired amount of blur is obtained, as in Equation 11:

      r″ = IFFT2( R × PG1′ / (PR1 + γ) )
      b″ = IFFT2( B × PG1′ / (PB1 + γ) )
      g″ = IFFT2( G × PG1′ / (PG1 + γ) )   (Equation 11)

  • Here, γ is an arbitrary constant appropriately set according to the shapes of PR1, PB1, and PG1, as in Equation 9.
  • By performing such processing, the blur amount of the RGB color image can also be controlled as desired.
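  • Under the same assumptions, blur amount control reuses the convert_blur sketch above with a desired target PSF for all three colors; psf_g_desired and the region names are hypothetical:

      # All three colors are deconvolved with their own PSF and re-blurred
      # with the chosen desired PSF (for example, a larger disc).
      r2 = convert_blur(r_region, psf_r, psf_g_desired)
      b2 = convert_blur(b_region, psf_b, psf_g_desired)
      g2 = convert_blur(g_region, psf_g, psf_g_desired)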
  • A two-dimensional image with corrected color misregistration is generated by performing any of the processes of Examples 1 to 3 described above.
  • The stereo image generation unit 40 of the present embodiment then generates a color stereoscopic image (3D image) based on this two-dimensional image in which the color shift has been corrected and on the phase difference information.
  • FIG. 35 is a diagram illustrating a state in which a left-eye image and a right-eye image are generated from a two-dimensional image after colorization processing
  • FIG. 36 is a flowchart illustrating a stereoscopic image generation process performed by the stereo image generation unit 40.
  • Step S61: When this process is started, initial setting is performed. First, the color image RGB0 (see FIG. 35), in which the color misregistration has been corrected by the colorization process, is read. Next, information on the phase difference amount corresponding to each partial area of the image to be processed is prepared. Since the phase difference amounts have already been acquired in the colorization process of Example 1, 2, or 3, this information is used (however, when there are pixels for which no phase difference amount has been acquired — for example, when the phase difference was acquired from the image before demosaicing, or when there are pixels for which it could not be acquired — interpolation processing or the like is performed so that a phase difference amount is generated for all pixels).
  • Further, a left-eye image RGB-L and a right-eye image RGB-R, as well as a left-eye count image and a right-eye count image, are generated (each of these four images has its pixel values initialized to 0 for all pixels).
  • Step S62: Next, a partial region is set in the color image RGB0 with corrected color misregistration, as shown in FIG. 35.
  • The size of the partial region may be the same as that of the partial region used for measuring the phase difference amount, but may be an arbitrary size; for example, as described with reference to FIG. 29, the size of the partial region may be increased (for example, in proportion) according to the phase difference amount.
  • Step S63: Subsequently, the phase difference amount corresponding to the partial region set in step S62 is acquired from the phase difference information prepared in step S61.
  • Step S64: Then, according to the phase difference amount generated between the R image and the B image (the direction and magnitude of the phase difference), the partial region of the color image RGB0 is copy-added to the left-eye image RGB-L at a partial region position shifted by half of the phase difference amount (see FIG. 35).
  • Step S65: Further, +1 is added to the pixel value of each pixel in the same area of the left-eye count image as the partial area copy-added in step S64; this left-eye count image is used to perform the pixel value normalization processing in step S69 below.
  • Step S66: Similarly, according to the phase difference amount generated between the R image and the B image, the partial region of the color image RGB0 is copy-added to the right-eye image RGB-R at a partial region position shifted by half of the phase difference amount in the direction opposite to that of step S64 (see FIG. 35).
  • Step S67: Likewise, +1 is added to the pixel value of each pixel in the same area of the right-eye count image as the partial area copy-added in step S66; this right-eye count image is used to perform the pixel value normalization processing in step S69.
  • Step S68: Thereafter, it is determined whether or not the processing for all the partial regions in the color image RGB0 is completed; until it is, the processing of steps S62 to S67 is repeated while shifting the position of the partial region.
  • An arbitrary value can be set as the step by which the partial region is shifted, but it is preferably smaller than the width of the partial region (that is, successive partial regions should be set while overlapping) — for example, a partial region of 51 × 51 [pixel] may be set while being shifted by 10 [pixel]. The regions do not have to overlap, however; they may also be set so as to tile the image (for example, a partial region of 51 × 51 [pixel] set while being shifted by 51 [pixel]).
  • Step S69: When it is determined in step S68 that the processing for all the partial regions has been completed, a normalized left-eye image is obtained by dividing the pixel value of the left-eye image RGB-L by the pixel value of the left-eye count image at each same pixel position, and a normalized right-eye image is obtained by dividing the pixel value of the right-eye image RGB-R by the pixel value of the right-eye count image. If pixels to which no pixel value has been given remain after this processing, their pixel values are provided by interpolation processing.
  • When step S69 is completed, the processing shown in FIG. 36 is terminated.
  • When the partial regions of the color image RGB0 are copy-added to the left-eye image RGB-L and the right-eye image RGB-R, they are shifted by half of the phase difference amount because this is faithful to the parallax amount of the actually observed stereoscopic image and is therefore preferable.
  • However, the shift is not limited to this. For example, if a value obtained by multiplying the acquired phase difference amount by a predetermined constant greater than 1 is used as the corrected phase difference amount, a stereoscopic image with a more pronounced stereoscopic effect can be generated; with a constant smaller than 1, a stereoscopic image with a more suppressed stereoscopic effect can be generated.
  • This predetermined constant by which the phase difference amount is multiplied may be adjusted according to preference by the user operating the operation unit 32 of the imaging apparatus while viewing the stereoscopic image displayed on the display unit 27 (a desired adjustment by the user via the GUI), or it may be set automatically by the system controller 30 according to the subject to be imaged (for example, depending on whether the subject is a person, the distance to the subject, the composition, and so on).
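  • Steps S62 to S69 can be condensed into the same copy-add-and-normalize pattern, with each partial region pushed half the phase difference one way for the left eye and the other way for the right. A minimal sketch under assumed sizes, with phase_of a hypothetical lookup and gain the predetermined constant discussed above:

      import numpy as np

      def make_stereo(rgb0, phase_of, size=51, step=10, gain=1.0):
          # rgb0: corrected color image (H x W x 3); phase_of(y, x) returns
          # the phase difference for the region at (y, x); gain scales the
          # stereoscopic effect (1.0 = half-phase shifts, as above).
          h, w, _ = rgb0.shape
          left = np.zeros_like(rgb0, dtype=float)
          right = np.zeros_like(rgb0, dtype=float)
          cl = np.zeros((h, w, 1))
          cr = np.zeros((h, w, 1))
          for y in range(0, h - size + 1, step):
              for x in range(0, w - size + 1, step):
                  d = int(round(gain * phase_of(y, x) / 2.0))
                  patch = rgb0[y:y + size, x:x + size]
                  xl, xr = x + d, x - d
                  if 0 <= xl <= w - size:
                      left[y:y + size, xl:xl + size] += patch
                      cl[y:y + size, xl:xl + size] += 1
                  if 0 <= xr <= w - size:
                      right[y:y + size, xr:xr + size] += patch
                      cr[y:y + size, xr:xr + size] += 1
          # Pixels never covered keep count 0; they would be filled by
          # interpolation afterward, as in step S69.
          return left / np.maximum(cl, 1), right / np.maximum(cr, 1)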
  • According to such an embodiment, a stereoscopic image having colors preferable for viewing can be obtained from an image photographed with light that has passed through different pupil regions of the imaging optical system depending on the wavelength band.
  • Further, since the color stereoscopic image is generated using the result of the colorization processing, the colorized image obtained in the course of the processing can also be stored and used.
  • FIGS. 37 to 40 show Embodiment 2 of the present invention. FIG. 37 is a diagram illustrating the shift state when a left-eye image is generated from an image of a subject farther than the in-focus position in the stereoscopic image generation process; FIG. 38 is a diagram illustrating the shift state when a right-eye image is generated from an image of a subject farther than the in-focus position; FIG. 39 is a diagram illustrating the state after performing filter processing to bring the blur shapes of the colors close together in the shifted left-eye image; and FIG. 40 is a diagram illustrating the state after performing filter processing to bring the blur shapes of the colors close together in the shifted right-eye image.
  • In Embodiment 1 described above, a stereoscopic image is generated using the result of the colorization processing by the color image generation unit 37; in this embodiment, a stereoscopic image is generated without requiring the result of the colorization processing.
  • In the present embodiment, the RG filter 12r is located on the left-eye side and the GB filter 12b on the right-eye side. Therefore, for the left-eye image, the center-of-gravity position Cr of the R-component subject image IMGr shown in FIG. 10 is the position that the center of gravity of each color component should take.
  • Accordingly, after the phase difference amount is detected, the B-component subject image IMGb is moved (shifted) by the phase difference amount so that its center of gravity coincides with the center-of-gravity position Cr of the R-component subject image IMGr, and the G-component subject image IMGg is moved by 1/2 of the phase difference amount so that its center of gravity coincides with (or approaches) the center-of-gravity position Cr of the R-component subject image IMGr, as shown in FIG. 37.
  • For the right-eye image, as shown in FIG. 38, the R-component subject image IMGr is moved (shifted) by the phase difference amount so that its center of gravity coincides with the center-of-gravity position Cb of the B-component subject image IMGb, and the G-component subject image IMGg is moved by 1/2 of the phase difference amount so that its center of gravity coincides with (or approaches) the center-of-gravity position Cb of the B-component subject image IMGb.
  • The color images after the shifts shown in FIGS. 37 and 38 are the shifted color images.
  • Next, a blur filter having a filter kernel such as the circular Gaussian filter shown in FIG. 20 or the elliptical Gaussian filters shown in FIGS. 21 and 22 is applied to the R image and the B image in the shifted left-eye image, so that their blur shapes are approximated to that of the G image, as shown in FIG. 39.
  • Similarly, the circular Gaussian filter of FIG. 20 or the elliptical Gaussian filters of FIGS. 21 and 22 are applied to the shifted right-eye image, so that the blur shapes are approximated to the G image as shown in FIG. 40.
  • The filtering process is performed while changing the filter shape according to the phase difference, as described above.
  • Alternatively, as in Example 3 of the colorization processing (see FIGS. 31 to 34), a combination of the restoration processing and the filtering processing may be used to approximate the shape of the blur, or further to control the size of the blur.
  • Also in this embodiment, the movement amount of the shift may be increased or decreased based on a corrected phase difference amount (a value obtained by multiplying the phase difference amount by a predetermined constant), so that the stereoscopic effect is made more pronounced or more suppressed.
  • According to such Embodiment 2, effects similar to those of Embodiment 1 described above can be obtained, and since it is not necessary to perform the colorization processing, the processing load can be reduced when only a stereoscopic image is to be acquired.
  • The present invention is not limited to the above-described embodiments as they stand; in the implementation stage, the constituent elements can be modified and embodied without departing from the scope of the invention.
  • Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments; for example, some constituent elements may be deleted from all the constituent elements shown in an embodiment, and constituent elements across different embodiments may be appropriately combined.

Abstract

The present invention relates to an image capture device comprising: an image sensor (22) that generates RGB images; an imaging optical system (9) that forms a subject image on the image sensor (22); a band limiting filter (12) that blocks B light in the imaging light flux passing through a first partial aperture while passing R and G light, and blocks R light in the imaging light flux passing through a second partial aperture while passing B and G light; a distance calculation unit (39) that calculates the phase difference amount between an R image and a B image; and a stereoscopic image generation unit (40) that, on the basis of the phase difference amount, generates a color stereoscopic image by generating a color image for one eye in which the center-of-gravity position of the blur in the B and G images has been shifted toward the center-of-gravity position of the blur in the R image, and a color image for the other eye in which the center-of-gravity position of the blur in the R and G images has been shifted toward the center-of-gravity position of the blur in the B image.
PCT/JP2012/063057 2011-07-06 2012-05-22 Image capture device and image processing device WO2013005489A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011150331A 2011-07-06 2011-07-06 Imaging device, image processing device
JP2011-150331 2011-07-06

Publications (1)

Publication Number Publication Date
WO2013005489A1 true WO2013005489A1 (fr) 2013-01-10

Family

ID=47436847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/063057 WO2013005489A1 (fr) Image capture device and image processing device

Country Status (2)

Country Link
JP (1) JP2013017138A (fr)
WO (1) WO2013005489A1 (fr)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101591172B1 (ko) * 2014-04-23 2016-02-03 주식회사 듀얼어퍼처인터네셔널 Method and apparatus for determining the distance between an image sensor and a subject
JP2016102733A (ja) * 2014-11-28 2016-06-02 株式会社東芝 Lens and imaging device
WO2016194178A1 (fr) * 2015-06-03 2016-12-08 オリンパス株式会社 Imaging device, endoscope, and imaging method
TWI669538B 2018-04-27 2019-08-21 點晶科技股份有限公司 Stereoscopic image capturing module and stereoscopic image capturing method


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001174696A * 1999-12-15 2001-06-29 Olympus Optical Co Ltd Color imaging device
JP2002344999A * 2001-05-21 2002-11-29 Asahi Optical Co Ltd Stereo image capturing device
JP2005062729A * 2003-08-20 2005-03-10 Olympus Corp Camera
JP2006105771A * 2004-10-05 2006-04-20 Canon Inc Imaging device and topographic map creation device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10257405B2 (en) 2015-12-11 2019-04-09 Nanning Fugui Precision Industrial Co., Ltd. Automatic focusing method and automatic focusing system
US10914960B2 (en) 2016-11-11 2021-02-09 Kabushiki Kaisha Toshiba Imaging apparatus and automatic control system
CN117315210A (zh) * 2023-11-29 2023-12-29 深圳优立全息科技有限公司 Image blurring method based on stereoscopic imaging and related device
CN117315210B (zh) * 2023-11-29 2024-03-05 深圳优立全息科技有限公司 Image blurring method based on stereoscopic imaging and related device

Also Published As

Publication number Publication date
JP2013017138A (ja) 2013-01-24

Similar Documents

Publication Publication Date Title
WO2013027504A1 (fr) Imaging device
US9247227B2 (en) Correction of the stereoscopic effect of multiple images for stereoscope view
US8885026B2 (en) Imaging device and imaging method
JP5066851B2 (ja) Imaging device
CN102934025B (zh) Imaging device and imaging method
US8520059B2 (en) Stereoscopic image taking apparatus
US8786676B2 (en) Imaging device for generating stereoscopic image
WO2013005489A1 (fr) Image capture device and image processing device
JP5469258B2 (ja) Imaging device and imaging method
JP6036829B2 (ja) Image processing device, imaging device, and control program for image processing device
US10992854B2 (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium
JP6951917B2 (ja) Imaging device
WO2014192300A1 (fr) Imaging element, imaging device, and image processing device
JP2014026051A (ja) Imaging device, image processing device
JP5348258B2 (ja) Imaging device
JP2013057761A (ja) Distance measuring device, imaging device, distance measuring method
JP2013003159A (ja) Imaging device
JP2013097154A (ja) Distance measuring device, imaging device, distance measuring method
US9106900B2 (en) Stereoscopic imaging device and stereoscopic imaging method
JP2014026050A (ja) Imaging device, image processing device
JP2013037294A (ja) Imaging device
JP2012124650A (ja) Imaging device and imaging method
JP7071165B2 (ja) Display control device and method, and imaging device
WO2013005602A1 (fr) Image capture device and image processing device
JP7019442B2 (ja) Imaging device and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12808051

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12808051

Country of ref document: EP

Kind code of ref document: A1