CN113596431A - Image processing apparatus, image capturing apparatus, image processing method, and storage medium - Google Patents

Image processing apparatus, image capturing apparatus, image processing method, and storage medium

Info

Publication number
CN113596431A
Authority
CN
China
Prior art keywords
image
parallax
image processing
pixels
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110790632.2A
Other languages
Chinese (zh)
Other versions
CN113596431B (en)
Inventor
福田浩一 (Koichi Fukuda)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2016-080328 (see also JP6746359B2)
Application filed by Canon Inc
Priority to CN202110790632.2A
Publication of CN113596431A
Application granted
Publication of CN113596431B
Legal status: Active
Anticipated expiration

Classifications

    • H04N13/128 Adjusting depth or disparity
    • G03B13/36 Autofocus systems
    • G02B7/34 Systems for automatic generation of focusing signals using different areas in a pupil plane
    • G03B35/10 Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
    • G06T5/77
    • G06T7/50 Depth or shape recovery
    • H04N13/106 Processing image signals
    • H04N13/20 Image signal generators
    • H04N13/225 Image signal generators using stereoscopic image cameras using a single 2D image sensor using parallax barriers
    • H04N13/232 Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
    • H04N13/257 Colour aspects
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on the phase difference signals
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N23/843 Demosaicing, e.g. interpolating colour pixel values
    • H04N23/951 Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N25/13 Arrangement of colour filter arrays [CFA] characterised by the spectral characteristics of the filter elements
    • H04N25/134 Colour filter arrays based on three different wavelength filter elements
    • H04N25/61 Noise processing for noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/68 Noise processing applied to defects
    • H04N25/704 Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H04N5/265 Mixing
    • H04N9/64 Circuits for processing colour signals
    • G06T2207/10024 Color image
    • G06T2207/10052 Images from lightfield camera
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Abstract

The present invention provides an image processing apparatus, an image capturing apparatus, an image processing method, and a storage medium capable of generating a parallax image with improved quality. The image processing apparatus includes: an acquisition unit (125a) for acquiring a parallax image generated based on a signal of one of a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil areas in an imaging optical system, and acquiring a captured image generated by synthesizing a plurality of signals of the plurality of photoelectric converters; and an image processing unit (125b) for performing correction processing based on the captured image to reduce defects included in the parallax image.

Description

Image processing apparatus, image capturing apparatus, image processing method, and storage medium
(This application is a divisional application of the application filed on April 21, 2016 under application number 201680026847.4 and entitled "Image processing apparatus, image pickup apparatus, image processing method, and storage medium".)
Technical Field
The present invention relates to an image processing apparatus capable of correcting a parallax image.
Background
Conventionally, there is known an image pickup apparatus capable of dividing an exit pupil of an image pickup lens into a plurality of pupil areas and simultaneously photographing a plurality of parallax images corresponding to the divided pupil areas. The photographed parallax image (viewpoint image) is equivalent to LF (light field) data which is information on the spatial distribution and angular distribution of light intensity.
Patent document 1 discloses an image pickup apparatus using a two-dimensional image pickup element including one microlens and a plurality of divided photoelectric converters for one pixel. The divided photoelectric converters receive light beams passing through different partial pupil areas of the exit pupil of the imaging lens via one microlens, thereby performing pupil division. A plurality of parallax images corresponding to the divided partial pupil areas may be generated based on the divided light reception signals of the respective photoelectric converters. Patent document 2 discloses an image pickup apparatus that adds up respective light reception signals of divided photoelectric converters and generates a captured image.
Citation List
Patent Documents
Patent Document 1: U.S. Patent No. 4,410,804
Patent Document 2: Japanese Patent Laid-Open No. 2001-083407
Disclosure of Invention
Problems to be solved by the invention
However, in some of the parallax images (viewpoint images) obtained by the image pickup apparatuses disclosed in Patent Documents 1 and 2, defects (defect signals) such as point defects and line defects, as well as shading and saturation signals caused by pupil division, may occur, and the parallax images may be degraded.
Accordingly, an object of the present invention is to provide an image processing apparatus, an image capturing apparatus, an image processing method, a program, and a storage medium, each of which can generate a parallax image with improved quality.
Means for solving the problems
As one aspect of the present invention, an image processing apparatus includes: an acquisition unit configured to acquire a parallax image generated based on a signal from one of a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil areas in an imaging optical system, and acquire a captured image generated by synthesizing a plurality of signals from the plurality of photoelectric converters; and an image processing unit configured to correct the parallax image based on the captured image.
As another aspect of the present invention, an image pickup apparatus includes: an image pickup element including a plurality of arrayed pixels provided with a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil regions in an imaging optical system; an acquisition unit configured to acquire a parallax image generated based on a signal from one of the plurality of photoelectric converters and acquire a captured image generated by synthesizing a plurality of signals from the plurality of photoelectric converters; and an image processing unit configured to correct the parallax image based on the captured image.
As another aspect of the present invention, an image processing method includes the steps of: acquiring a parallax image generated based on a signal from one of a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil areas in an imaging optical system, and acquiring a captured image generated by synthesizing a plurality of signals from the plurality of photoelectric converters; and correcting the parallax image based on the captured image.
As another aspect of the present invention, a program for causing a computer to execute a process including the steps of: acquiring a parallax image generated based on a signal from one of a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil areas in an imaging optical system, and acquiring a captured image generated by synthesizing a plurality of signals from the plurality of photoelectric converters; and correcting the parallax image based on the captured image.
As another aspect of the present invention, a storage medium stores the above-described program.
Other features and aspects of the present invention will become apparent from the following description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
Advantageous Effects of Invention
The present invention can provide an image processing apparatus, an image capturing apparatus, an image processing method, a program, and a storage medium capable of generating a parallax image with improved quality.
Drawings
Fig. 1 is a block diagram of an image pickup apparatus in each embodiment.
Fig. 2 is a diagram showing a pixel array according to the first embodiment.
Fig. 3A is a diagram showing a pixel structure according to the first embodiment.
Fig. 3B is a diagram showing a pixel structure according to the first embodiment.
Fig. 4 is an explanatory diagram of an image pickup element and a pupil division function in each embodiment.
Fig. 5 is an explanatory diagram of an image pickup element and a pupil division function in each embodiment.
Fig. 6 is a diagram of the relationship between the defocus amount and the image shift amount in each embodiment.
Fig. 7A is a configuration diagram of a parallax image in each embodiment.
Fig. 7B is a configuration diagram of a parallax image in each embodiment.
Fig. 8 is a typical parallax image before correction processing is performed in the first embodiment.
Fig. 9 is a typical parallax image after correction processing in the first embodiment.
Fig. 10 is another typical parallax image before the correction processing is performed in the first embodiment.
Fig. 11 is another typical parallax image after the correction processing is performed in the first embodiment.
Fig. 12 is a diagram showing a pixel array in the second embodiment.
Fig. 13A is a diagram showing a pixel structure in the second embodiment.
Fig. 13B is a diagram showing a pixel structure in the second embodiment.
Fig. 14 is a schematic explanatory diagram of refocus processing in each embodiment.
Fig. 15A is an explanatory view of a light intensity distribution in the case where light is incident on the microlens formed on each pixel in the third embodiment.
Fig. 15B is an explanatory view of a light intensity distribution in the case where light is incident on the microlens formed on each pixel in the third embodiment.
Fig. 16 shows a light receiving rate distribution depending on the light incident angle in the third embodiment.
Fig. 17 is a schematic diagram of the flow of correction processing in the third embodiment.
Fig. 18 is a schematic diagram of the flow of the correction processing in the third embodiment.
Fig. 19A is an explanatory diagram of shading in the third embodiment.
Fig. 19B is an explanatory diagram of shading in the third embodiment.
Fig. 19C is an explanatory diagram of shading in the third embodiment.
Fig. 20A is an explanatory diagram of a projection signal of a captured image in the third embodiment.
Fig. 20B is an explanatory diagram of a projection signal of the first viewpoint image in the third embodiment.
Fig. 20C is an explanatory diagram of a shading function of a captured image in the third embodiment.
Fig. 21 is an example of a captured image (after demosaicing) in the third embodiment.
Fig. 22 is an example of the first viewpoint image before shading correction (after demosaicing) in the third embodiment.
Fig. 23 is an example of a shading-corrected first-viewpoint (first-corrected) image (after demosaicing) in the third embodiment.
Fig. 24 is an example of a first-viewpoint (first-corrected) image (after demosaicing and after shading correction) before defect correction in the third embodiment.
Fig. 25 is an example of a defect-corrected first-viewpoint (second-corrected) image (after demosaicing and after shading correction) in the third embodiment.
Fig. 26 is an example of the second viewpoint image before shading correction (after demosaicing) in the third embodiment.
Fig. 27 is an example of the shading-corrected second-viewpoint (first-corrected) image (after demosaicing) in the third embodiment.
Description of reference numerals
125 image processing circuit (image processing apparatus)
125a acquisition unit
125b image processing unit
Detailed Description
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings.
First embodiment
A schematic structure of an image pickup apparatus according to a first embodiment of the present invention will now be described with reference to fig. 1. Fig. 1 is a block diagram of an image pickup apparatus 100 (camera) in the present embodiment. The image pickup apparatus 100 is a digital camera system including a camera body and an interchangeable lens (an imaging optical system or an image pickup optical system) removably mounted to the camera body. However, the present embodiment is not limited to this example, and is applicable to an image pickup apparatus including a camera body and a lens integrated with each other.
The first lens unit 101 is arranged on the frontmost side (object side) among a plurality of lens units constituting an imaging lens (imaging optical system), and is held on a lens barrel so as to reciprocate in the direction of the optical axis OA (optical axis direction). The stop/shutter 102 (aperture stop) adjusts its opening diameter to control the light amount when capturing an image, and also functions as a shutter to control the exposure time when capturing a still image. The second lens unit 103 reciprocates in the optical axis direction integrally with the stop/shutter 102, and has a zoom function of performing a magnification operation in conjunction with the reciprocation of the first lens unit 101. The third lens unit 105 is a focus lens unit that reciprocates in the optical axis direction to perform focusing (focusing operation). The optical low-pass filter 106 is an optical element for reducing false colors or moiré of a captured image.
The image pickup element 107 (image sensor) performs photoelectric conversion of an object image (optical image) formed by the imaging optical system, and includes, for example, a CMOS sensor or a CCD sensor and its peripheral circuits. As the image pickup element 107, for example, a two-dimensional single-plate color sensor is used in which an on-chip primary color mosaic filter in a Bayer array is formed on light receiving pixels of m columns in the horizontal direction × n rows in the vertical direction.
The zoom actuator 111 rotationally moves (drives) an unillustrated cam barrel to move the first lens unit 101 and the second lens unit 103 in the optical axis direction, thereby performing a magnification varying operation. The stop/shutter actuator 112 controls the aperture diameter of the stop/shutter 102 to adjust the light amount (imaging light amount), and also controls the exposure time at the time of still image shooting. The focus actuator 114 moves the third lens unit 105 in the optical axis direction to perform focusing.
The electronic flash 115 is an illumination device to be used for illuminating an object. The electronic flash 115 may use a flash illumination device including a xenon tube or an illumination device including a continuously emitting LED (light emitting diode). The AF auxiliary light unit 116 projects an image of a mask having a predetermined opening pattern onto an object via a projection lens. This configuration can improve the focus detection capability for a dark object or an object with low contrast.
The CPU 121 is a control apparatus (control unit or controller) that manages various controls on the image capturing apparatus 100. The CPU 121 includes a processor, ROM, RAM, a/D converter, D/a converter, communication interface circuit, and the like. The CPU 121 reads out and executes a predetermined program stored in the ROM to drive various circuits of the image pickup apparatus 100, and performs a series of operations such as focus detection (AF), image pickup (shooting), image processing, or recording.
The electronic flash control circuit 122 controls the turning on and off of the electronic flash 115 in synchronization with the image capturing operation. The auxiliary light driving circuit 123 controls the turning on and off of the AF auxiliary light unit 116 in synchronization with the focus detection operation. The image pickup element driving circuit 124 controls the image pickup operation of the image pickup element 107, and also performs A/D conversion of the acquired image signal and sends it to the CPU 121.
An image processing circuit 125 (image processing apparatus) performs processing such as γ (gamma) conversion, color interpolation, or JPEG (joint photographic experts group) compression on the image data output from the image pickup element 107. In the present embodiment, the image processing circuit 125 includes an acquisition unit 125a and an image processing unit 125b (corrector). The acquisition unit 125a acquires a captured image and at least one parallax image (viewpoint image) from the image pickup element 107. The captured image is an image generated by synthesizing a plurality of signals (first and second signals) from a plurality of photoelectric converters (first and second sub-pixels) for receiving light beams passing through different partial pupil areas of the imaging optical system. The parallax image (viewpoint image) is an image generated based on signals from photoelectric converters (first sub-pixels or second sub-pixels) of the plurality of photoelectric converters. The image processing unit 125b performs correction processing (defect correction) based on the captured image so that defects contained in the parallax image are reduced.
A focus drive circuit 126 (focus driver) drives the focus actuator 114 based on the focus detection result to move the third lens unit 105 in the optical axis direction, thereby performing focusing. The stop/shutter drive circuit 128 drives the stop/shutter actuator 112 to control the aperture diameter of the stop/shutter 102. The zoom drive circuit 129 (zoom driver) drives the zoom actuator 111 in response to a zoom operation by the user.
The display device 131 (display unit) includes, for example, an LCD (liquid crystal display). The display device 131 displays information on the image capturing mode of the image capturing apparatus 100, a preview image before capturing, a confirmation image after capturing, a focus state display image at the time of focus detection, and the like. The operation member 132 (operation switch unit) includes a power switch, a release (image capturing trigger) switch, a zoom switch, an image capturing mode selection switch, and the like. The release switch is a two-stage switch having a half-pressed state (state in which SW1 is on) and a fully-pressed state (state in which SW2 is on). The recording medium 133 is, for example, a flash memory removable from the image capturing apparatus 100, and records captured images (image data).
Now, a pixel array and a pixel structure of the image pickup element 107 according to the present embodiment will be described with reference to fig. 2, 3A, and 3B. Fig. 2 is a diagram illustrating a pixel array of the image pickup element 107. Fig. 3A and 3B are diagrams illustrating the pixel structure of the image pickup element 107: fig. 3A illustrates a plan view of the pixel 200G of the image pickup element 107 (viewed in the +z direction), and fig. 3B illustrates an a-a sectional view of fig. 3A (viewed in the -z direction).
Fig. 2 shows a pixel array (array of image pickup pixels) on the image pickup element 107 (two-dimensional CMOS sensor) in a range of 4 columns × 4 rows. In the present embodiment, each image pickup pixel (pixels 200R, 200G, and 200B) includes two sub-pixels 201 and 202. Thus, fig. 2 shows an array of sub-pixels of 8 columns × 4 rows.
As shown in fig. 2, a pixel group 200 of 2 columns × 2 rows includes pixels 200R, 200G, and 200B in a Bayer array. In the pixel group 200, the pixel 200R having the spectral sensitivity for R (red) is arranged at the upper left, the pixels 200G having the spectral sensitivity for G (green) are arranged at the upper right and lower left, and the pixel 200B having the spectral sensitivity for B (blue) is arranged at the lower right. Each of the pixels 200R, 200G, and 200B (each image pickup pixel) includes sub-pixels 201 and 202 arranged in 2 columns × 1 row. The sub-pixel 201 is a pixel for receiving a light beam passing through a first pupil area in the imaging optical system. The sub-pixel 202 is a pixel for receiving a light beam passing through a second pupil area in the imaging optical system.
As shown in fig. 2, the image pickup element 107 includes many image pickup pixels of 4 columns × 4 rows (sub-pixels of 8 columns × 4 rows) arranged on its surface, and outputs image pickup signals (sub-pixel signals). In the image pickup element 107 of the present embodiment, the period P of the pixels (image pickup pixels) is 4 μm, and the number N of pixels (image pickup pixels) is 5575 columns in the horizontal direction × 3725 rows in the vertical direction, i.e., approximately 20.75 million pixels. In the image pickup element 107, the period P_SUB of the sub-pixels in the column direction is 2 μm, and the number of sub-pixels N_SUB is 11150 columns in the horizontal direction × 3725 rows in the vertical direction, i.e., approximately 41.5 million pixels. Alternatively, the image pickup element 107 may have a pixel period P of 6 μm and a number N of pixels (image pickup pixels) of 6000 columns in the horizontal direction × 4000 rows in the vertical direction, i.e., approximately 24 million pixels. Alternatively, in the image pickup element 107, the period P_SUB of the sub-pixels in the column direction may be 3 μm, and the number of sub-pixels N_SUB may be 12000 columns in the horizontal direction × 4000 rows in the vertical direction, i.e., approximately 48 million pixels.
As shown in fig. 3B, the pixel 200G of the present embodiment is provided with a microlens 305 on the light receiving surface side of the pixel to condense incident light. The plurality of microlenses 305 are arranged in a two-dimensional manner, and each microlens 305 is arranged at a position separated from the light receiving surface by a predetermined distance in the z-axis direction (direction of the optical axis OA). In the pixel 200G, the photoelectric converter 301 and the photoelectric converter 302 are formed by dividing the pixel into N_H (two) parts in the x direction and N_V (one) part in the y direction. The photoelectric converter 301 and the photoelectric converter 302 correspond to the sub-pixel 201 and the sub-pixel 202, respectively.
The photoelectric converters 301 and 302 are each configured as a photodiode having a p-i-n structure including a p-type layer and an n-type layer and an intrinsic layer between the p-type layer and the n-type layer. The intrinsic layer may be omitted and a photodiode having a p-n junction may be applied, as desired. The pixel 200G (each pixel) is provided with a color filter 306 between the microlens 305 and each of the photoelectric converters 301 and 302. The spectral transmittance of the color filter 306 may be changed for each sub-pixel, or alternatively, the color filter may be removed, as desired.
As shown in fig. 3A and 3B, light incident on the pixel 200G is condensed by the microlens 305, spectrally separated by the color filter 306, and then received by the photoelectric converters 301 and 302. In each of the photoelectric converters 301 and 302, pairs of electrons and holes are generated according to the amount of received light and are separated in the depletion layer; the electrons, having negative charge, are accumulated in the n-type layer. The holes, on the other hand, are discharged to the outside of the image pickup element 107 via a p-type layer connected to a constant voltage source (not shown). The electrons accumulated in the n-type layers of the photoelectric converters 301 and 302 are transferred to a capacitance portion (FD: floating diffusion) via a transfer gate and converted into a voltage signal.
Now, the pupil division function of the image pickup element 107 will be described with reference to fig. 4. Fig. 4 is an explanatory diagram of the pupil division function of the image pickup element 107, and shows pupil division of one pixel portion. Fig. 4 shows a-a cross-sectional view of the pixel structure shown in fig. 3A and an exit pupil of an imaging optical system, viewed from the + y direction. In fig. 4, the x-axis and the y-axis in the sectional view are inverted with respect to the x-axis and the y-axis of fig. 3A and 3B in order to correspond to the coordinate axes of the exit pupil plane.
In fig. 4, a partial pupil area 501 (first partial pupil area) of the sub-pixel 201 (first sub-pixel) has a substantially conjugate relationship, via the microlens 305, with the light receiving surface of the photoelectric converter 301 whose center of gravity is shifted (decentered) in the -x direction. Thus, the partial pupil area 501 represents a pupil area over which the sub-pixel 201 can receive light. The center of gravity of the partial pupil area 501 of the sub-pixel 201 is shifted (decentered) in the +x direction on the pupil plane. Likewise, a partial pupil area 502 (second partial pupil area) of the sub-pixel 202 (second sub-pixel) has a substantially conjugate relationship, via the microlens 305, with the light receiving surface of the photoelectric converter 302 whose center of gravity is shifted (decentered) in the +x direction. Thus, the partial pupil area 502 represents a pupil area over which the sub-pixel 202 can receive light. The center of gravity of the partial pupil area 502 of the sub-pixel 202 is shifted (decentered) in the -x direction on the pupil plane. The pupil area 500 is a pupil area over which the entire pixel 200G can receive light when the photoelectric converters 301 and 302 (sub-pixels 201 and 202) are all combined.
Incident light is condensed at the focal position by the microlens 305. However, due to diffraction caused by the wave nature of light, the diameter of the condensed spot cannot become smaller than the diffraction limit Δ and has a finite size. While the light receiving surfaces of the photoelectric converters 301 and 302 each have a length of about 1 to 2 μm, the condensed spot of the microlens 305 is about 1 μm. Therefore, the partial pupil areas 501 and 502 in fig. 4, which have a conjugate relationship with the light receiving surfaces of the photoelectric converters 301 and 302 via the microlens 305, are not clearly divided due to diffraction blur, and a light receiving rate distribution (pupil intensity distribution) is obtained.
Fig. 5 is an explanatory diagram of the image pickup element 107 and the pupil division function. Light beams that have passed through different partial pupil areas 501 and 502 in the pupil area of the imaging optical system are incident on respective pixels of the image pickup element 107 on the image pickup plane 600 of the image pickup element 107 at angles different from each other, and are received by the sub-pixels 201 and 202 divided into 2 × 1 pieces. The present embodiment describes an example in which the pupil region is divided into two in the horizontal direction, but the present invention is not limited to the present embodiment, and pupil division may be performed in the vertical direction as needed.
In the present embodiment, the image pickup element 107 includes a plurality of sub-pixels that share one microlens and receive a plurality of light fluxes passing through different regions (a first partial pupil region and a second partial pupil region) in a pupil of an imaging optical system (image pickup lens). The image pickup element 107 includes a first sub-pixel (a plurality of sub-pixels 201) and a second sub-pixel (a plurality of sub-pixels 202) as a plurality of sub-pixels.
In this embodiment, signals of the sub-pixels 201 and 202 are added (synthesized) to each other for each pixel of the image pickup element 107 and read out so that a captured image having a resolution of the effective number of pixels N is generated. As described above, a captured image is generated by synthesizing the light reception signals of a plurality of sub-pixels (sub-pixels 201 and 202 in the present embodiment) for each pixel.
In the present embodiment, the light reception signals of the plurality of sub-pixels 201 are collected to generate the first parallax image. The first parallax image is subtracted from the captured image to generate a second parallax image. However, the present invention is not limited to this example, and the light reception signals of the plurality of sub-pixels 202 may be collected to generate the second parallax image. Thus, the parallax image is generated based on the light reception signals of the plurality of sub-pixels for the respective partial pupil regions different from each other.
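For concreteness, the per-pixel synthesis and the subtraction described above can be sketched in a few lines of NumPy (an illustrative sketch only; the array names and the toy image size are assumptions, not part of the embodiment):

```python
import numpy as np

# Hypothetical light reception signals, one value per image pickup pixel:
# sub_a for the sub-pixels 201, sub_b for the sub-pixels 202.
rng = np.random.default_rng(0)
sub_a = rng.random((4, 4))
sub_b = rng.random((4, 4))

# Captured image: the sub-pixel signals are added (synthesized) per pixel.
captured = sub_a + sub_b

# First parallax image: the collected signals of the sub-pixels 201.
parallax_1 = sub_a

# Second parallax image: obtained by subtracting the first parallax image
# from the captured image (equivalently, the sub-pixel 202 signals).
parallax_2 = captured - parallax_1
assert np.allclose(parallax_2, sub_b)
```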
In the present embodiment, the first parallax image, the second parallax image, and the captured image are each images employing a Bayer array. Each of the first parallax image, the second parallax image, and the captured image may be demosaiced as necessary.
Now, the relationship between the defocus amount and the image shift amount of the first parallax image acquired from the sub-pixel 201 of the image pickup element 107 and the second parallax image acquired from the sub-pixel 202 will be described with reference to fig. 6. Fig. 6 shows a relationship between the defocus amount and the image shift amount. Fig. 6 illustrates the image pickup element 107 arranged on the image pickup plane 600, and as in fig. 4 and 5, the exit pupil of the imaging optical system is divided into two partial pupil regions 501 and 502.
The defocus amount d is defined such that the distance from the imaging position of the object to the image pickup surface 600 is |d|; the front focus state, in which the imaging position is located closer to the object side than the image pickup surface 600, has a negative sign (d < 0), and the back focus state, in which the imaging position is located on the opposite side of the object with respect to the image pickup surface 600, has a positive sign (d > 0). In the in-focus state, in which the imaging position of the object is located on the image pickup surface 600 (focus position), the defocus amount d = 0. Fig. 6 illustrates an object 601 in the in-focus state (d = 0) and an object 602 in the front focus state (d < 0). The front focus state (d < 0) and the back focus state (d > 0) are collectively referred to as a defocus state (|d| > 0).
In the front focus state (d < 0), the light flux that has passed through the partial pupil area 501 (or the partial pupil area 502) among the light fluxes from the object 602 is once condensed. Then, the light flux spreads to a width Γ1 (Γ2) centered at the center-of-gravity position G1 (G2) of the light flux, and a blurred image is formed on the image pickup surface 600. The blurred image is received by the sub-pixels 201 (sub-pixels 202) constituting each pixel arranged in the image pickup element 107, and the first parallax image (second parallax image) is generated. Accordingly, the first parallax image (second parallax image) is recorded as a blurred object image in which the object 602 has the blur width Γ1 (Γ2) at the barycentric position G1 (G2) on the image pickup surface 600. The blur width Γ1 (Γ2) of the object image increases substantially in proportion to the increase in the absolute value |d| of the defocus amount d. Likewise, the absolute value |p| of the image shift amount p of the object image between the first parallax image and the second parallax image, i.e., the difference (G1 - G2) between the barycentric positions of the light fluxes, increases substantially with the increase in the absolute value |d| of the defocus amount d. The same applies to the back focus state (d > 0), but the image shift direction of the object image between the first parallax image and the second parallax image is opposite to that in the front focus state.
As described above, according to the present embodiment, as the absolute value of the defocus amount of the first and second parallax images (or of the image signal obtained by adding the first and second parallax images) increases, the absolute value of the image shift amount between the first parallax image and the second parallax image increases.
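Since |p| grows roughly in proportion to |d|, the defocus amount can be inferred from the measured image shift. The sketch below shows one simple way such a shift might be estimated from a pair of parallax signals; the SSD search and the conversion coefficient K are illustrative assumptions, not the embodiment's actual focus detection procedure:

```python
import numpy as np

def estimate_image_shift(a: np.ndarray, b: np.ndarray, max_shift: int = 8) -> int:
    """Return the relative shift p (in pixels, along the last axis) that
    minimizes the sum of squared differences between the two parallax
    signals a and b."""
    best_p, best_cost = 0, np.inf
    for p in range(-max_shift, max_shift + 1):
        shifted = np.roll(b, p, axis=-1)
        # Exclude the wrapped-around border samples from the comparison.
        cost = float(np.sum((a[..., max_shift:-max_shift]
                             - shifted[..., max_shift:-max_shift]) ** 2))
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p

# The defocus amount d is then roughly d ≈ K * p, where the conversion
# coefficient K depends on the imaging optical system (aperture value,
# exit pupil distance, etc.) and is assumed to be known here.
```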
Now, the correction processing of the parallax image in the present embodiment will be described. In the present embodiment, the image pickup element 107 can output a captured image and at least one parallax image (at least one of the first parallax image and the second parallax image). The image processing circuit 125 (acquisition unit 125a) acquires the captured image and the parallax image output from the image pickup element 107. Then, the image processing circuit 125 (image processing unit 125b) corrects the parallax image based on the captured image. As necessary, the acquisition unit 125a may store the acquired captured image and the acquired at least one parallax image in a memory such as the recording medium 133 or the memory 134, and acquire the captured image and the parallax image stored in the memory.
Depending on the circuit configuration or the driving manner of the image pickup element 107, a defect signal may be generated in the parallax image (the first parallax image or the second parallax image) even if the captured image is normal, for example due to a short circuit of the transfer gate, and defects such as point defects and line defects may be included in the parallax image. As necessary, defect information such as point defect information and line defect information inspected in a mass production process or the like may be stored in the memory in advance. In this case, the image processing circuit 125 (image processing unit 125b) performs the correction processing of the parallax image by using the stored defect information. As necessary, the image processing circuit 125 (checker) may also inspect the parallax image in real time (i.e., while the user is using the image capturing apparatus 100) and determine defects such as point defects and line defects.
Now, the correction processing of a parallax image according to the present embodiment will be described with reference to fig. 7A and 7B. Fig. 7A is an array diagram of a parallax image (first parallax image) employing a Bayer array. Fig. 7B is an array diagram of a captured image employing a Bayer array. In fig. 7A and 7B, the pixel values (pixel signals) of the first parallax image and the captured image at the position (j, i) of the j-th pixel in the row direction and the i-th pixel in the column direction are defined as A(j, i) and I(j, i), respectively.
If the first parallax image has a defect (line defect) on the jth line and the captured image is normal on the jth line, the jth line of the first parallax image needs to be corrected. In the present embodiment, the image processing circuit 125 (image processing unit 125b) corrects the first parallax image (pixel value at the position to be corrected in the first parallax image) based on the captured image. The second parallax image may be corrected similarly as necessary.
In the present embodiment, the correction value (correction signal) of the first parallax image at the position (j, i) where the defect occurs, i.e., the position to be corrected (first position), is defined as Ac(j, i). The image processing unit 125b calculates the correction value Ac(j, i) according to the following expression (1), and corrects the first parallax image by using the calculated correction value Ac(j, i) as the pixel value A(j, i) of the first parallax image.
Expression (1)

Ac(j, i) = I(j, i) × (A0 + Σ A(j2, i2)) / (I0 + Σ I(j2, i2)) ... (1)

where each sum is taken over positions (j2, i2) ≠ (j, i) near the position (j, i) to be corrected.
In expression (1), the parameters A0 and I0 are values for stabilizing the calculated values and suppressing an increase in noise in the case where the pixel value A of the first parallax image and the pixel value I of the captured image have low luminance (low luminance signals).
As described above, in the present embodiment, the image processing unit 125b performs the correction processing of the parallax image based on the captured image, i.e., replaces the pixel value A(j, i) of the parallax image at the position to be corrected with the correction value Ac(j, i). Specifically, the image processing unit 125b determines the correction value Ac(j, i) of the parallax image by using the pixel value I(j, i) of the captured image and the pixel values I(j2, i2) and A(j2, i2) of the captured image and the parallax image at positions (j2, i2) ≠ (j, i) near the position to be corrected.
In expression (1), the specific values of the parameters A0 and I0 may be set as appropriate. For example, if the pupil division number Np is 2, the parameters A0 and I0 may be set so that A0 = I0/Np. The values of the parameters A0 and I0 may be changed according to the position (j, i) to be corrected and image capturing conditions such as the ISO sensitivity, the aperture value of the imaging optical system, and the exit pupil distance. The values of the parameters A0 and I0 may also be set based on the pixel values A of the first parallax image and the pixel values I of the captured image near (peripheral to) the position to be corrected.
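A minimal sketch of the correction of expression (1) follows, assuming that the neighborhood of the position (j, i) consists of the same-color Bayer rows j - 2 and j + 2; that neighborhood choice, and the function name, are illustrative assumptions:

```python
import numpy as np

def correct_line_defect(A: np.ndarray, I: np.ndarray, j: int,
                        A0: float, I0: float) -> np.ndarray:
    """Replace row j of the parallax image A with the correction values
    Ac(j, i) of expression (1), computed from the (normal) captured
    image I. In a Bayer array, same-color rows lie two rows apart,
    hence the j - 2 and j + 2 neighbors used here (an assumption)."""
    A = A.astype(np.float64).copy()
    I = I.astype(np.float64)
    neighbors = [j2 for j2 in (j - 2, j + 2) if 0 <= j2 < A.shape[0]]
    num = A0 + sum(A[j2] for j2 in neighbors)  # A0 + sum of A(j2, i)
    den = I0 + sum(I[j2] for j2 in neighbors)  # I0 + sum of I(j2, i)
    A[j] = I[j] * num / den                    # Ac(j, i), expression (1)
    return A
```

The design rationale is that the ratio of the parallax image to the captured image varies slowly even where the captured image itself has fine detail, so scaling the normal captured row I(j, i) by the neighborhood ratio restores the defective parallax row without blurring that detail; with the pupil division number Np = 2, I0 can be set to a small positive constant and A0 = I0/Np, as noted above.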
Fig. 8 shows an example of the first parallax image (after demosaicing) in the in-focus state before the correction processing according to the present embodiment is performed. Fig. 9 shows an example of the first parallax image (after demosaicing) in the in-focus state after the correction processing according to the present embodiment is performed. Likewise, fig. 10 shows an example of the first parallax image (after demosaicing) in a defocused state before the correction processing is performed. Fig. 11 shows an example of the first parallax image in a defocused state after the correction processing is performed (after demosaicing). It is to be understood that the defect of the parallax image is corrected by the correction processing according to the present embodiment in each of the in-focus state and the out-of-focus state.
Now, refocus processing according to the present embodiment will be described with reference to fig. 14. The refocus processing is performed by the image processing circuit 125 (the image processing unit 125b serving as a refocus unit) based on an instruction from the CPU 121. Fig. 14 is an explanatory diagram of refocus processing in the one-dimensional direction (column direction or horizontal direction) on the first signal (light reception signal of the first sub-pixel forming the first parallax image) and the second signal (light reception signal of the second sub-pixel forming the second parallax image) acquired by the image pickup element 107 according to the present embodiment. In fig. 14, i denotes an integer, and Ai and Bi schematically denote the first signal and the second signal, respectively, of the i-th pixel in the column direction of the image pickup element 107 arranged on the image pickup plane 600. The first signal Ai is a light reception signal output based on the light beam incident on the i-th pixel at the chief ray angle θa (corresponding to the partial pupil area 501 in fig. 5). The second signal Bi is a light reception signal output based on the light beam incident on the i-th pixel at the chief ray angle θb (corresponding to the partial pupil area 502 in fig. 5).
The first signal Ai and the second signal Bi each have incident angle information as well as light intensity distribution information. Therefore, the first signal Ai is parallel-shifted to the virtual imaging plane 610 along the angle θa, the second signal Bi is parallel-shifted to the virtual imaging plane 610 along the angle θb, and the two are then added to generate a refocus signal on the virtual imaging plane 610. The parallel shift of the first signal Ai to the virtual imaging plane 610 along the angle θa corresponds to a shift of +0.5 pixels in the column direction, and the parallel shift of the second signal Bi to the virtual imaging plane 610 along the angle θb corresponds to a shift of -0.5 pixels in the column direction. Accordingly, when the first signal Ai and the second signal Bi are relatively shifted by +1 pixel and the first signal Ai is added to the corresponding second signal B(i+1), a refocus signal on the virtual imaging plane 610 is generated. Likewise, when the first signal Ai and the second signal Bi are shifted by an integral multiple of the pixel pitch (i.e., integer-shifted) and added to each other, a shift addition signal (refocus signal) on the virtual imaging plane corresponding to the integer shift amount can be generated.
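As a toy illustration of this integer shift-and-add (a sketch under the stated one-dimensional assumptions; wraparound at the array ends is ignored for brevity):

```python
import numpy as np

def refocus(first: np.ndarray, second: np.ndarray, shift: int) -> np.ndarray:
    """Generate a shift addition (refocus) signal on a virtual imaging
    plane: the second signal is relatively shifted by an integer number
    of pixels in the column direction and added to the first signal.
    shift = +1 yields Ai + B(i+1), the virtual imaging plane 610 above."""
    return first + np.roll(second, -shift, axis=-1)

# One row of first/second signals (Ai, Bi) refocused with a +1 pixel
# relative shift:
a = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
b = np.array([0.0, 2.0, 3.0, 2.0, 0.0])
virtual_plane_610 = refocus(a, b, shift=1)  # Ai + B(i+1)
```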
In the present embodiment, the influence of the defect included in at least one of the parallax images (at least one of the first parallax image and the second parallax image) is removed or reduced by the correction processing. Therefore, refocus processing can be performed based on the corrected parallax image. Therefore, refocus processing by using the respective signals (the first signal and the second signal) forming the parallax image can be performed with high accuracy.
Second embodiment
An image pickup apparatus according to a second embodiment of the present invention will now be described with reference to fig. 12 and fig. 13A and 13B. The present embodiment is different from the first embodiment in that: instead of generating the captured image based on the first parallax image and the second parallax image, the captured image is generated based on the first parallax image to the fourth parallax image which are a plurality of parallax images.
Fig. 12 illustrates a pixel array of the image pickup element 107 according to the present embodiment. Fig. 13A and 13B are diagrams illustrating the pixel structure of the image pickup element 107: fig. 13A is a plan view of the pixel 200G of the image pickup element 107 (viewed in the +z direction), and fig. 13B is a sectional view along the line a-a of fig. 13A (viewed in the -z direction).
Fig. 12 shows a pixel array (array of image pickup pixels) of the image pickup element 107 (two-dimensional CMOS sensor) in a range of 4 columns × 4 rows. In the present embodiment, each image pickup pixel (pixels 200R, 200G, and 200B) includes four sub-pixels 201, 202, 203, and 204. Thus, fig. 12 shows an array of sub-pixels of 8 columns × 8 rows.
As shown in fig. 12, a pixel group 200 of 2 columns × 2 rows includes pixels 200R, 200G, and 200B in a Bayer array. In other words, in the pixel group 200, the pixel 200R having the spectral sensitivity for R (red) is arranged at the upper left, the pixels 200G having the spectral sensitivity for G (green) are arranged at the upper right and lower left, and the pixel 200B having the spectral sensitivity for B (blue) is arranged at the lower right. Each of the pixels 200R, 200G, and 200B (each image pickup pixel) includes sub-pixels 201, 202, 203, and 204 arranged in 2 columns × 2 rows. The sub-pixel 201 is a pixel for receiving a light beam passing through the first pupil area of the imaging optical system. The sub-pixel 202 is a pixel for receiving a light beam passing through the second pupil area of the imaging optical system. The sub-pixel 203 is a pixel for receiving a light beam passing through the third pupil area of the imaging optical system. The sub-pixel 204 is a pixel for receiving a light beam passing through the fourth pupil area of the imaging optical system.
As shown in fig. 12, the image pickup element 107 includes many image pickup pixels of 4 columns × 4 rows (sub-pixels of 8 columns × 8 rows) arranged on its surface, and outputs image pickup signals (sub-pixel signals). In the image pickup element 107 of the present embodiment, the period P of the pixels (image pickup pixels) is 4 μm, and the number N of pixels (image pickup pixels) is 5575 columns in the horizontal direction × 3725 rows in the vertical direction, i.e., approximately 20.75 million pixels. In the image pickup element 107, the period P_SUB of the sub-pixels in the column direction is 2 μm, and the number of sub-pixels N_SUB is 11150 columns in the horizontal direction × 7450 rows in the vertical direction, i.e., approximately 83 million pixels. Alternatively, the image pickup element 107 may have a pixel (image pickup pixel) period P of 6 μm and a number N of pixels (image pickup pixels) of 6000 columns in the horizontal direction × 4000 rows in the vertical direction, i.e., approximately 24 million pixels. Optionally, the period P_SUB of the sub-pixels in the column direction may be 3 μm, and the number of sub-pixels N_SUB may be 12000 columns in the horizontal direction × 4000 rows in the vertical direction, i.e., approximately 48 million pixels.
As shown in fig. 13B, the pixel 200G of the present embodiment is provided with a microlens 305 on the light receiving surface side of the pixel to condense incident light. Each microlens 305 is arranged at a position separated from the light receiving surface by a predetermined distance in the z-axis direction (direction of the optical axis OA). In the pixel 200G, the photoelectric converters 301, 302, 303, and 304 are formed by dividing the pixel into N_H (two) parts in the x direction and N_V (two) parts in the y direction. The photoelectric converters 301 to 304 correspond to the sub-pixels 201 to 204, respectively.
In the present embodiment, the image pickup element 107 includes a plurality of sub-pixels that share one microlens and receive a plurality of light fluxes that pass through mutually different regions (first to fourth partial pupil regions) in the pupil of the imaging optical system (image pickup lens). The image pickup element 107 includes a first sub-pixel (a plurality of sub-pixels 201), a second sub-pixel (a plurality of sub-pixels 202), a third sub-pixel (a plurality of sub-pixels 203), and a fourth sub-pixel (a plurality of sub-pixels 204) as a plurality of sub-pixels.
In the present embodiment, signals of the sub-pixels 201, 202, 203, and 204 are added (synthesized) and read out for each pixel of the image pickup element 107 so that a captured image having a resolution of the effective number of pixels N is generated. As described above, the captured image is generated by combining the light reception signals of a plurality of sub-pixels (sub-pixels 201 to 204 in the present embodiment) for each pixel.
In the present embodiment, the light reception signals of the plurality of sub-pixels 201 are collected to generate the first parallax image. Likewise, light reception signals of the plurality of sub-pixels 202 are collected to generate a second parallax image, and light reception signals of the plurality of sub-pixels 203 are collected to generate a third parallax image. Further, in the present embodiment, the first parallax image, the second parallax image, and the third parallax image are subtracted from the captured image to generate the fourth parallax image. However, the present embodiment is not limited to this example, and light reception signals of the plurality of sub-pixels 204 may be collected to generate the fourth parallax image. As described above, the parallax image is generated based on the light reception signals of the plurality of sub-pixels for each of the partial pupil regions that are different from each other.
In the present embodiment, the captured image and the first to third parallax images (and the fourth parallax image) are all images employing a bayer array. Each of the captured image and the first to third parallax images (and the fourth parallax image) may be demosaiced as necessary. The correction processing (defect correction) of the parallax image according to the present embodiment is the same as that of the first embodiment, and therefore, the description thereof is omitted.
Third embodiment
Next, a third embodiment according to the present invention is explained. The present embodiment is different from the first embodiment in that: the image processing unit 125b performs light amount correction processing (shading correction) of the parallax image based on the captured image. In addition to the light amount correction processing according to the present embodiment, as in the first embodiment, correction processing of a parallax image may be performed based on a captured image to reduce defects contained in the parallax image.
The pupil area 500 shown in fig. 4 has a substantially optically conjugate relationship, via the microlens, with the light receiving surfaces of the photoelectric converters 301 and 302 (first to N_LF-th photoelectric converters divided into N_x × N_y parts) of the 2 × 1-divided pixel. The pupil area 500 is thus the pupil area over which light can be received by the whole pixel including all of the sub-pixels 201 and 202 (first to N_LF-th sub-pixels).
Figs. 15A and 15B are explanatory diagrams of the light intensity distribution formed when light is incident on the microlens of each pixel. Fig. 15A shows the light intensity distribution in a cross section parallel to the optical axis of the microlens, and fig. 15B shows the light intensity distribution in a cross section perpendicular to the optical axis of the microlens. Incident light is condensed to the focal position by the microlens. However, due to diffraction caused by the wave nature of light, the diameter of the condensed spot cannot become smaller than the diffraction limit Δ and has a finite size. While the light receiving surface of the photoelectric converter has a length of about 1 to 2 μm, the condensed spot of the microlens is about 1 μm. Therefore, the partial pupil regions 501 and 502 of fig. 4, which are conjugate with the light receiving surfaces of the photoelectric converters via the microlens, are not clearly pupil-divided because of diffraction blur, and instead have a light reception rate distribution (pupil intensity distribution) that depends on the angle of incidence of the light.
Fig. 16 is a diagram of a light reception rate distribution (pupil intensity distribution) depending on a light incident angle. The horizontal axis represents pupil coordinates, and the vertical axis represents light reception rate. A graph line L1 indicated by a solid line in fig. 16 represents a pupil intensity distribution along the x-axis in the partial pupil area 501 (first partial pupil area) of fig. 4. The light reception rate indicated by a graph line L1 rises sharply from the left end to a peak, then falls off gradually at a gentle rate, and then reaches the right end. A graph line L2 indicated by a broken line of fig. 16 represents a pupil intensity distribution along the x-axis of the partial pupil region 502 (second partial pupil region). In contrast to the graph line L1, the light reception rate indicated by the graph line L2 sharply increases from the right end, reaches its peak, gradually decreases at a gentle rate of change, and then reaches the left end. It is to be understood that, as shown in fig. 16, gentle pupil division is performed.
As shown in fig. 5, the photoelectric converters 301 and 302 (first to N_LF-th photoelectric converters) correspond to the sub-pixels 201 and 202 (first to N_LF-th sub-pixels), respectively. In each pixel of the image pickup element, the 2 × 1-divided sub-pixels 201 and 202 (first to N_LF-th sub-pixels divided into N_x × N_y parts) receive light that has passed through the mutually different partial pupil areas 501 and 502. LF data (an input image) representing the spatial distribution and the angular distribution of the light intensity is obtained from the signals received by the sub-pixels.
Based on the LF data (input image), the signals of the 2 × 1-divided sub-pixels 201 and 202 (first to N_LF-th photoelectric converters divided into N_x × N_y parts) can be combined with each other for each pixel to generate a captured image having a resolution of the number of pixels N.
Also based on the LF data (input image), the signal of a specific sub-pixel out of the 2 × 1-divided sub-pixels 201 and 202 (first to N_LF-th photoelectric converters divided into N_x × N_y parts) is selected for each pixel. Thereby, a viewpoint image corresponding to a specific one of the partial pupil areas 501 and 502 (first to N_LF-th partial pupil areas) can be generated. For example, by selecting the signal of the sub-pixel 201, a first viewpoint image (first parallax image) having a resolution of the number of pixels N and corresponding to the partial pupil area 501 of the imaging optical system can be generated. The same applies to the other sub-pixels.
As discussed, the image pickup element according to the present embodiment includes a plurality of arrayed pixels each having a plurality of photoelectric converters configured to receive light beams that have passed through different partial pupil areas of the imaging optical system, and can acquire LF data (an input image). The present embodiment performs image processing (correction processing) such as flaw correction and shading correction on the first viewpoint image and the second viewpoint image (first to N_LF-th viewpoint images), and generates an output image.
Now, a method of performing correction processing on the first viewpoint image and the second viewpoint image (first to N_LF-th viewpoint images) based on the captured image and the LF data (input image) acquired by the image pickup element 107 to generate an output image will be described with reference to figs. 17 and 18. Figs. 17 and 18 are schematic diagrams of the flow of the correction processing according to the present embodiment. The processing of figs. 17 and 18 is mainly executed by the image processing circuit 125 (the acquisition unit 125a and the image processing unit 125b) based on commands of the CPU 121.
First, at a stage before step S1 of fig. 17 (or at step S0 not shown), the image processing circuit 125 (acquisition unit 125a) generates (acquires) a captured image and at least one viewpoint image based on LF data (input data) acquired by the image pickup element 107. The captured image is an image generated from pupil areas obtained by combining different partial pupil areas in the imaging optical system. The viewpoint images are images generated for respective different partial pupil areas in the imaging optical system.
In step S0, first, the image processing circuit 125 inputs LF data (input image) acquired by the image pickup element 107. Alternatively, the image processing circuit 125 may use the LF data (input image) captured in advance by the image pickup element 107 and stored in a recording medium.
Next, in step S0, the image processing circuit 125 generates a captured image corresponding to the pupil area obtained by combining the different partial pupil areas (the first partial pupil area and the second partial pupil area) of the imaging optical system. The LF data (input image) is denoted by LF. In each pixel signal of LF, the sub-pixel signal that is the i_S-th (1 ≤ i_S ≤ N_x) in the column direction and the j_S-th (1 ≤ j_S ≤ N_y) in the row direction is called the k-th sub-pixel signal, where k = N_x(j_S − 1) + i_S (1 ≤ k ≤ N_LF). The image processing circuit 125 generates a captured image I(j, i), the i-th in the column direction and the j-th in the row direction, as a composite image expressed by the following expression (2).
Expression 2
I(j, i) = Σ_{j_S=1}^{N_y} Σ_{i_S=1}^{N_x} LF(N_y(j − 1) + j_S, N_x(i − 1) + i_S) … (2)
In order to maintain a good S/N of the captured image I(j, i), the present embodiment combines the sub-pixel signals of expression (2) with one another in the capacitance section (floating diffusion: FD) of the image pickup element before analog-to-digital (A/D) conversion is performed on the sub-pixel signals. As necessary, the sub-pixel signals of expression (2) may instead be combined when the charges accumulated in the capacitance section (FD) of the image pickup element are converted into voltage signals, still before A/D conversion, or after A/D conversion of the sub-pixel signals.
In the present embodiment, each pixel is divided into two in the x direction (for example, N_x = 2, N_y = 1, and N_LF = 2). Based on an input image (LF data) corresponding to the exemplary pixel array of fig. 2, the signals of the sub-pixels 201 and 202 (first to N_LF-th sub-pixels divided into N_x × N_y parts) obtained by the two-division in the x direction are combined for each pixel. This generates a captured image as RGB signals of a Bayer array having a resolution of the number of pixels N (the number of horizontal pixels N_H × the number of vertical pixels N_V). Since the correction processing of the viewpoint image according to the present embodiment uses the captured image as the reference image serving as the correction standard, shading (light amount) correction processing, point flaw correction processing, and the like are performed on the captured image I(j, i). Other processing may be performed as needed.
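A minimal sketch of the synthesis of expression (2), under the assumption that the LF data is stored as a two-dimensional array in which each pixel occupies an N_y × N_x block of sub-pixel samples (the function name and layout are illustrative, not from the patent):

```python
import numpy as np

def synthesize_captured_image(lf: np.ndarray, ny: int, nx: int) -> np.ndarray:
    """Expression (2): sum the Ny x Nx sub-pixel signals of each pixel.

    lf has shape (Ny*NV, Nx*NH); the result has shape (NV, NH).
    """
    nv, nh = lf.shape[0] // ny, lf.shape[1] // nx
    # Group the sub-pixel samples by pixel and sum over the sub-pixel axes.
    return lf.reshape(nv, ny, nh, nx).sum(axis=(1, 3))

lf = np.arange(8 * 8, dtype=np.float64).reshape(8, 8)  # toy LF data
captured = synthesize_captured_image(lf, ny=1, nx=2)   # Nx=2, Ny=1 division
print(captured.shape)  # (8, 4)
```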
Next, in step S0, the image processing circuit 125 generates, as shown in the following expression (3), a k-th viewpoint image I_k(j, i), the i-th in the column direction and the j-th in the row direction, corresponding to the k-th partial pupil area of the imaging optical system.
Expression 3
I_k(j, i) = LF(N_y(j − 1) + j_S, N_x(i − 1) + i_S), where k = N_x(j_S − 1) + i_S … (3)
In the present embodiment, each pixel is divided into two in the x direction (for example, N_x = 2, N_y = 1, and N_LF = 2), and k = 1. Based on the LF data (input image) corresponding to the pixel array shown in fig. 2, the signal of the sub-pixel 201 obtained by the two-division in the x direction is selected for each pixel. Thereby, out of the partial pupil areas 501 and 502 (first to N_LF-th partial pupil areas), a first viewpoint image I_1(j, i) is generated as RGB signals of a Bayer array having a resolution of the number of pixels N (the number of horizontal pixels N_H × the number of vertical pixels N_V) corresponding to the partial pupil area 501 of the imaging optical system. As necessary, k = 2 may be selected to generate a second viewpoint image I_2(j, i) corresponding to the partial pupil area 502 of the imaging optical system.
As described above, the image processing unit 125b generates a captured image corresponding to the pupil area obtained by combining different partial pupil areas, based on an input image acquired by an image pickup element in which each of a plurality of pixels has a plurality of photoelectric converters configured to receive light beams that have passed through different partial pupil areas of the imaging optical system. In addition, the image processing unit 125b generates at least one viewpoint image for each of the different partial pupil areas.
The present embodiment generates, based on the LF data (input image) acquired by the image pickup element 107, the captured image I(j, i) as RGB signals of a Bayer array and the first viewpoint image I_1(j, i) as RGB signals of a Bayer array, and stores them in a recording medium. In addition, the present embodiment generates the second viewpoint image I_2(j, i) from the captured image I(j, i) and the first viewpoint image I_1(j, i). With this configuration, the captured image I(j, i) can be subjected to the same image processing as a captured image acquired by a conventional image pickup element in which the photoelectric converter of each pixel is not divided. As necessary, the first viewpoint image I_1(j, i) and the second viewpoint image I_2(j, i) may instead be generated and stored in the recording medium so that each viewpoint image is processed in the same manner.
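Continuing the sketch above under the same assumed LF layout, expression (3) selects one sub-pixel per pixel instead of summing them, and the second viewpoint image can then be obtained by subtraction:

```python
import numpy as np

def extract_viewpoint(lf: np.ndarray, ny: int, nx: int, k: int) -> np.ndarray:
    """Expression (3): pick the k-th sub-pixel, k = Nx*(jS-1)+iS (1-based)."""
    js, i_s = divmod(k - 1, nx)            # 0-based sub-pixel row/column
    nv, nh = lf.shape[0] // ny, lf.shape[1] // nx
    return lf.reshape(nv, ny, nh, nx)[:, js, :, i_s]

lf = np.arange(8 * 8, dtype=np.float64).reshape(8, 8)
I = lf.reshape(8, 1, 4, 2).sum(axis=(1, 3))       # captured image (Nx=2, Ny=1)
I1 = extract_viewpoint(lf, ny=1, nx=2, k=1)       # first viewpoint image
I2 = I - I1                                       # second viewpoint image
```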
Next, in step S1 of fig. 17, the image processing unit 125b performs shading correction processing (light amount correction processing) for each of RGB on the first viewpoint image I_1 (k-th viewpoint image I_k) based on the captured image I(j, i).
Now, shading of the first viewpoint image and the second viewpoint image (first to N_LF-th viewpoint images) will be described with reference to figs. 19A to 19C. Figs. 19A to 19C are explanatory diagrams of shading, showing the relationship among the partial pupil area 501 through which the photoelectric converter 301 receives light, the partial pupil area 502 through which the photoelectric converter 302 receives light, and the exit pupil 400 of the imaging optical system at a peripheral image height of the image pickup element 107. The same reference numerals designate elements corresponding to those in fig. 4. The photoelectric converters 301 and 302 (first to N_LF-th photoelectric converters) correspond to the sub-pixels 201 and 202 (first to N_LF-th sub-pixels), respectively.
Fig. 19A illustrates the case where the exit pupil distance Dl of the imaging optical system is equal to the set pupil distance Ds of the image pickup element 107. In this case, the exit pupil 400 of the imaging optical system is divided approximately equally by the partial pupil areas 501 and 502. In contrast, as shown in fig. 19B, in the case where the exit pupil distance Dl of the imaging optical system is shorter than the set pupil distance Ds of the image pickup element 107, a pupil shift occurs between the exit pupil 400 and the entrance pupil of the image pickup element 107 at the peripheral image height of the image pickup element 107, and the exit pupil 400 is not divided equally. Likewise, as shown in fig. 19C, in the case where the exit pupil distance Dl is longer than the set pupil distance Ds, a pupil shift occurs between the exit pupil 400 and the entrance pupil of the image pickup element 107 at the peripheral image height, and the exit pupil 400 is again not divided equally. With unequal pupil division at the peripheral image height, the first viewpoint image and the second viewpoint image have unequal intensities, producing shading in which, for each of RGB (each color), one of the first viewpoint image and the second viewpoint image has high intensity while the other has low intensity.
In order to generate each viewpoint image with good image quality, in step S1 of fig. 17, the image processing unit 125b according to the present embodiment performs shading correction (light amount correction) for each of RGB on the first viewpoint image I_1 (k-th viewpoint image I_k) by using the captured image I(j, i) as the base or reference image.
In step S1 of fig. 17, the image processing circuit 125 first detects the effective pixels V_1(j, i), at which both the captured image I(j, i) and the first viewpoint image I_1(j, i) are unsaturated and non-defective. An effective pixel at which both the captured image I(j, i) and the first viewpoint image I_1(j, i) are unsaturated and non-defective satisfies V_1(j, i) = 1, while an invalid pixel at which either the captured image I(j, i) or the first viewpoint image I_1(j, i) is saturated or defective satisfies V_1(j, i) = 0. Similarly, for the shading (light amount) correction of the k-th viewpoint image I_k, a pixel at which both the captured image I(j, i) and the k-th viewpoint image I_k(j, i) are unsaturated and non-defective satisfies V_k(j, i) = 1.
Assume integers j_2 (1 ≤ j_2 ≤ N_V/2) and i_2 (1 ≤ i_2 ≤ N_H/2). Assume further that the captured image I employing the Bayer array of fig. 2 consists of the captured images RI, GrI, GbI, and BI of R, Gr, Gb, and B. The captured image of R is written RI(2j_2 − 1, 2i_2 − 1) = I(2j_2 − 1, 2i_2 − 1), and the captured image of Gr is written GrI(2j_2 − 1, 2i_2) = I(2j_2 − 1, 2i_2). The captured image of Gb is written GbI(2j_2, 2i_2 − 1) = I(2j_2, 2i_2 − 1), and the captured image of B is written BI(2j_2, 2i_2) = I(2j_2, 2i_2). Likewise, the k-th viewpoint image I_k of fig. 2 consists of the images RI_k, GrI_k, GbI_k, and BI_k of R, Gr, Gb, and B: RI_k(2j_2 − 1, 2i_2 − 1) = I_k(2j_2 − 1, 2i_2 − 1), GrI_k(2j_2 − 1, 2i_2) = I_k(2j_2 − 1, 2i_2), GbI_k(2j_2, 2i_2 − 1) = I_k(2j_2, 2i_2 − 1), and BI_k(2j_2, 2i_2) = I_k(2j_2, 2i_2).
In step S1, the image processing unit 125b then performs projection processing on the captured images RI(2j_2 − 1, 2i_2 − 1), GrI(2j_2 − 1, 2i_2), GbI(2j_2, 2i_2 − 1), and BI(2j_2, 2i_2). More specifically, with expressions (4A) to (4D), projection processing is performed in the direction (y direction) perpendicular to the pupil division direction (x direction), generating the projection signals RP(2i_2 − 1), GrP(2i_2), GbP(2i_2 − 1), and BP(2i_2) of the captured image. Saturated or defective signal values contain no information useful for detecting the RGB shading of the captured image. Therefore, the product of the captured image and the effective pixels V_k is taken so that the projection processing excludes saturated and defective signal values (the numerator in the upper row of expressions (4A) to (4D)), and the result is normalized by the number of effective pixels used for the projection (the denominator in the upper row of expressions (4A) to (4D)). In the case where the number of effective pixels used for the projection processing is 0, the projection signal of the captured image is set to 0 by the lower row of expressions (4A) to (4D). In the case where a projection signal of the captured image becomes negative due to the influence of noise, it is also set to 0. Likewise, with expressions (4E) to (4H), projection processing is performed on the k-th viewpoint images RI_k(2j_2 − 1, 2i_2 − 1), GrI_k(2j_2 − 1, 2i_2), GbI_k(2j_2, 2i_2 − 1), and BI_k(2j_2, 2i_2) in the direction (y direction) perpendicular to the pupil division direction (x direction). This configuration generates the projection signals RP_k(2i_2 − 1), GrP_k(2i_2), GbP_k(2i_2 − 1), and BP_k(2i_2) of the k-th viewpoint image.
Expressions 4A to 4D
RP(2i_2 − 1) = Σ_{j_2} RI(2j_2 − 1, 2i_2 − 1) V_k(2j_2 − 1, 2i_2 − 1) / Σ_{j_2} V_k(2j_2 − 1, 2i_2 − 1), or 0 when the number of effective pixels Σ_{j_2} V_k(2j_2 − 1, 2i_2 − 1) is 0 … (4A)
GrP(2i_2) = Σ_{j_2} GrI(2j_2 − 1, 2i_2) V_k(2j_2 − 1, 2i_2) / Σ_{j_2} V_k(2j_2 − 1, 2i_2), or 0 when the number of effective pixels is 0 … (4B)
GbP(2i_2 − 1) = Σ_{j_2} GbI(2j_2, 2i_2 − 1) V_k(2j_2, 2i_2 − 1) / Σ_{j_2} V_k(2j_2, 2i_2 − 1), or 0 when the number of effective pixels is 0 … (4C)
BP(2i_2) = Σ_{j_2} BI(2j_2, 2i_2) V_k(2j_2, 2i_2) / Σ_{j_2} V_k(2j_2, 2i_2), or 0 when the number of effective pixels is 0 … (4D)

Expressions 4E to 4H
The projection signals RP_k(2i_2 − 1), GrP_k(2i_2), GbP_k(2i_2 − 1), and BP_k(2i_2) of the k-th viewpoint image are defined in the same manner as expressions (4A) to (4D), with RI_k, GrI_k, GbI_k, and BI_k in place of RI, GrI, GbI, and BI … (4E)–(4H)

In all of expressions (4A) to (4H), the sums over j_2 run from 1 to N_V/2, and any projection signal that becomes negative is set to 0.
After the projection processing of expressions (4A) to (4H), low-pass filter processing is performed on the projection signals RP(2i_2 − 1), GrP(2i_2), GbP(2i_2 − 1), and BP(2i_2) of the captured image and on the projection signals RP_k(2i_2 − 1), GrP_k(2i_2), GbP_k(2i_2 − 1), and BP_k(2i_2) of the k-th viewpoint image. Thereby, the projection signals are smoothed. The low-pass filter processing may be omitted as necessary.
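The projection processing can be pictured as a masked column average followed by smoothing. The sketch below is illustrative NumPy operating on a single color plane; the mask handling follows the description above, while the 5-tap box filter is an assumed stand-in for the unspecified low-pass filter:

```python
import numpy as np

def project_masked(plane: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Project one color plane along y (perpendicular to pupil division),
    excluding saturated/defective pixels via the boolean mask `valid`."""
    counts = valid.sum(axis=0)
    sums = (plane * valid).sum(axis=0)
    # Normalize by the number of effective pixels; 0 where none are valid.
    proj = np.divide(sums, counts, out=np.zeros_like(sums, dtype=np.float64),
                     where=counts > 0)
    proj[proj < 0] = 0.0                      # negative (noise) -> 0
    kernel = np.ones(5) / 5.0                 # simple low-pass filter
    return np.convolve(proj, kernel, mode="same")

rng = np.random.default_rng(1)
plane = rng.uniform(0, 1000, (100, 80))       # one Bayer color plane
valid = rng.uniform(size=plane.shape) > 0.05  # ~5% invalid pixels
rp = project_masked(plane, valid)
```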
Figs. 20A to 20C are explanatory diagrams of the projection signals of the captured image, the projection signals of the first viewpoint image, and the shading functions. Fig. 20A shows examples of the projection signals RP(R), GrP(G), GbP(G), and BP(B) of the captured image. Fig. 20B shows examples of the projection signals RP_1(R), GrP_1(G), GbP_1(G), and BP_1(B) of the first viewpoint image. Each projection signal has a plurality of peaks and valleys that depend on the subject. To perform shading correction on the first viewpoint image I_1 (k-th viewpoint image I_k) with high accuracy, the RGB shading components of the first viewpoint image I_1 (k-th viewpoint image I_k) caused by pupil shift must be separated from the RGB signal components of the subject.
In step S1, the image processing unit 125b then calculates, by expressions (5A) to (5D), the RGB shading signals RS_k(2i_2 − 1), GrS_k(2i_2), GbS_k(2i_2 − 1), and BS_k(2i_2) of the k-th viewpoint image I_k relative to the captured image serving as the base.
Expressions 5A to 5D
RS_k(2i_2 − 1) = N_LF × RP_k(2i_2 − 1) / RP(2i_2 − 1) when RP(2i_2 − 1) > RP_k(2i_2 − 1) > 0; otherwise RS_k(2i_2 − 1) = 0 … (5A)
GrS_k(2i_2) = N_LF × GrP_k(2i_2) / GrP(2i_2) when GrP(2i_2) > GrP_k(2i_2) > 0; otherwise GrS_k(2i_2) = 0 … (5B)
GbS_k(2i_2 − 1) = N_LF × GbP_k(2i_2 − 1) / GbP(2i_2 − 1) when GbP(2i_2 − 1) > GbP_k(2i_2 − 1) > 0; otherwise GbS_k(2i_2 − 1) = 0 … (5C)
BS_k(2i_2) = N_LF × BP_k(2i_2) / BP(2i_2) when BP(2i_2) > BP_k(2i_2) > 0; otherwise BS_k(2i_2) = 0 … (5D)
To calculate a shading component, the light reception amount of the pixel must be larger than that of the sub-pixel, and the light reception amount of the sub-pixel must be larger than 0. Therefore, when the condition RP(2i_2 − 1) > RP_k(2i_2 − 1) > 0 is satisfied, the ratio of the R projection signal RP_k(2i_2 − 1) of the k-th viewpoint image to the R projection signal RP(2i_2 − 1) of the captured image is obtained by expression (5A) and multiplied by the pupil division number N_LF for normalization, generating the R shading signal RS_k(2i_2 − 1) of the k-th viewpoint image I_k. Thereby, the R signal component of the subject is canceled, and the R shading component of the k-th viewpoint image I_k can be separated. When the condition RP(2i_2 − 1) > RP_k(2i_2 − 1) > 0 is not satisfied, the R shading signal RS_k(2i_2 − 1) of the k-th viewpoint image I_k is set to 0.

Likewise, when the condition GrP(2i_2) > GrP_k(2i_2) > 0 is satisfied, the ratio of the Gr projection signal GrP_k(2i_2) of the k-th viewpoint image to the Gr projection signal GrP(2i_2) of the captured image is obtained by expression (5B) and multiplied by the pupil division number N_LF for normalization, generating the Gr shading signal GrS_k(2i_2) of the k-th viewpoint image I_k. Thereby, the Gr signal component of the subject is canceled, and the Gr shading component of the k-th viewpoint image I_k can be separated. When the condition is not satisfied, the Gr shading signal GrS_k(2i_2) is set to 0.

Likewise, when the condition GbP(2i_2 − 1) > GbP_k(2i_2 − 1) > 0 is satisfied, the ratio of the Gb projection signal GbP_k(2i_2 − 1) of the k-th viewpoint image to the Gb projection signal GbP(2i_2 − 1) of the captured image is obtained by expression (5C) and multiplied by the pupil division number N_LF for normalization, generating the Gb shading signal GbS_k(2i_2 − 1) of the k-th viewpoint image I_k. Thereby, the Gb signal component of the subject is canceled, and the Gb shading component can be separated. When the condition is not satisfied, the Gb shading signal GbS_k(2i_2 − 1) is set to 0.

Likewise, when the condition BP(2i_2) > BP_k(2i_2) > 0 is satisfied, the ratio of the B projection signal BP_k(2i_2) of the k-th viewpoint image to the B projection signal BP(2i_2) of the captured image is obtained by expression (5D) and multiplied by the pupil division number N_LF for normalization, generating the B shading signal BS_k(2i_2) of the k-th viewpoint image I_k. Thereby, the B signal component of the subject is canceled, and the B shading component can be separated. When the condition is not satisfied, the B shading signal BS_k(2i_2) is set to 0.
For high-precision shading correction, the shading correction may be performed only when the number of effective shading signals satisfying RS_k(2i_2 − 1) > 0, GrS_k(2i_2) > 0, GbS_k(2i_2 − 1) > 0, and BS_k(2i_2) > 0 is equal to or larger than a predetermined number.
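A sketch of the ratio computation of expressions (5A) to (5D) for one color plane, following the conventions above (multiplication by N_LF for normalization, zero wherever the validity condition fails); the names are illustrative:

```python
import numpy as np

def shading_signal(proj_view: np.ndarray, proj_cap: np.ndarray,
                   n_lf: int) -> np.ndarray:
    """Expressions (5A)-(5D): per-column shading signal of one color plane.

    Valid only where proj_cap > proj_view > 0; zero elsewhere.
    """
    valid = (proj_cap > proj_view) & (proj_view > 0)
    shade = np.zeros_like(proj_cap, dtype=np.float64)
    # Ratio of viewpoint projection to captured-image projection,
    # multiplied by the pupil division number N_LF for normalization.
    shade[valid] = n_lf * proj_view[valid] / proj_cap[valid]
    return shade

proj_cap = np.array([100.0, 90.0, 0.0, 80.0])
proj_view = np.array([45.0, 50.0, 0.0, 90.0])       # last column invalid
print(shading_signal(proj_view, proj_cap, n_lf=2))  # [0.9 1.111... 0. 0.]
```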
Next, in step S1, the image processing unit 125b performs the calculation processing represented by expressions (6A) to (6D). The RGB shading functions RSF_k(2i_2 − 1), GrSF_k(2i_2), GbSF_k(2i_2 − 1), and BSF_k(2i_2) of the k-th viewpoint image I_k are each set as a polynomial function of degree N_SF that is smooth with respect to positional variation in the pupil division direction (x direction). The effective shading signals generated by expressions (5A) to (5D) and satisfying RS_k(2i_2 − 1) > 0, GrS_k(2i_2) > 0, GbS_k(2i_2 − 1) > 0, and BS_k(2i_2) > 0 are set as the data points. The coefficients RSC_k(μ), GrSC_k(μ), GbSC_k(μ), and BSC_k(μ) in expressions (6A) to (6D) are calculated by fitting the parameters to these data points by the least squares method.
Expressions 6A to 6D
RSF_k(2i_2 − 1) = Σ_{μ=0}^{N_SF} RSC_k(μ) × (2i_2 − 1)^μ … (6A)
GrSF_k(2i_2) = Σ_{μ=0}^{N_SF} GrSC_k(μ) × (2i_2)^μ … (6B)
GbSF_k(2i_2 − 1) = Σ_{μ=0}^{N_SF} GbSC_k(μ) × (2i_2 − 1)^μ … (6C)
BSF_k(2i_2) = Σ_{μ=0}^{N_SF} BSC_k(μ) × (2i_2)^μ … (6D)
As described above, the RGB shading functions RSF_k(2i_2 − 1), GrSF_k(2i_2), GbSF_k(2i_2 − 1), and BSF_k(2i_2) of the k-th viewpoint image I_k relative to the captured image serving as the base are generated.
Fig. 20C shows the RGB shading functions RSF_1(R), GrSF_1(G), GbSF_1(G), and BSF_1(B) of the first viewpoint image I_1 relative to the captured image serving as the base. The projection signals of the first viewpoint image in fig. 20B and the projection signals of the captured image in fig. 20A each fluctuate depending on the subject. However, by taking the ratio of the projection signal of the first viewpoint image to the projection signal of the captured image, the subject-dependent fluctuation (the signal value of the subject for each of RGB) is canceled, and the smooth shading function of the first viewpoint image I_1 can be separated and generated for each of RGB. Although the present embodiment uses polynomial functions as the shading functions, the present invention is not limited to this, and a more general function suited to the shading shape may be used as necessary.
Next, in step S1 of fig. 17, the image processing unit 125b performs shading (light amount) correction processing on the k-th viewpoint image I_k(j, i) with expressions (7A) to (7D), using the respective RGB shading functions. Thereby, a shading-corrected k-th viewpoint (first corrected) image M_1I_k(j, i) is generated. The k-th viewpoint (first corrected) image of the Bayer array is written for each of R, Gr, Gb, and B as follows: the k-th viewpoint (first corrected) image of R is RM_1I_k(2j_2 − 1, 2i_2 − 1) = M_1I_k(2j_2 − 1, 2i_2 − 1), that of Gr is GrM_1I_k(2j_2 − 1, 2i_2) = M_1I_k(2j_2 − 1, 2i_2), that of Gb is GbM_1I_k(2j_2, 2i_2 − 1) = M_1I_k(2j_2, 2i_2 − 1), and that of B is BM_1I_k(2j_2, 2i_2) = M_1I_k(2j_2, 2i_2). As necessary, the shading-corrected k-th viewpoint (first corrected) image M_1I_k(j, i) may be used as an output image.
Expressions 7A to 7D
RM_1I_k(2j_2 − 1, 2i_2 − 1) = RI_k(2j_2 − 1, 2i_2 − 1) / RSF_k(2i_2 − 1) … (7A)
GrM_1I_k(2j_2 − 1, 2i_2) = GrI_k(2j_2 − 1, 2i_2) / GrSF_k(2i_2) … (7B)
GbM_1I_k(2j_2, 2i_2 − 1) = GbI_k(2j_2, 2i_2 − 1) / GbSF_k(2i_2 − 1) … (7C)
BM_1I_k(2j_2, 2i_2) = BI_k(2j_2, 2i_2) / BSF_k(2i_2) … (7D)
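The fit of expressions (6A) to (6D) and the division of expressions (7A) to (7D) might be sketched as follows, with numpy.polyfit standing in for the patent's least-squares parameter fitting and only the valid shading signals (values greater than 0) entering the fit:

```python
import numpy as np

def fit_shading_function(columns: np.ndarray, shade: np.ndarray,
                         degree: int) -> np.ndarray:
    """Expressions (6A)-(6D): fit a degree-N_SF polynomial to the valid
    shading signals (shade > 0) by least squares, and return it
    evaluated at every column position."""
    valid = shade > 0
    coeffs = np.polyfit(columns[valid], shade[valid], degree)
    return np.polyval(coeffs, columns)

columns = np.arange(1, 101, 2, dtype=np.float64)   # e.g. positions 2*i2 - 1
shade = 1.0 - 0.004 * columns \
        + 0.02 * np.random.default_rng(2).normal(size=columns.size)
sf = fit_shading_function(columns, shade, degree=3)

view_plane = np.random.default_rng(3).uniform(100, 200, (40, 50))
# Expressions (7A)-(7D): divide each column of the viewpoint color plane
# by the smoothed shading function to obtain the first corrected image.
corrected = view_plane / sf[np.newaxis, :]
```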
Now, the shading correction processing (light amount correction processing) for each of RGB of the first viewpoint image I_1(j, i), shown in step S1 of fig. 17, will be described with reference to figs. 21 to 23. Fig. 21 shows an example of the captured image I (after demosaicing) according to the present embodiment; this captured image has good image quality. Fig. 22 shows an example of the first viewpoint image I_1 (after demosaicing) before shading correction. Shading arises for each of RGB due to the pupil shift between the exit pupil of the imaging optical system and the entrance pupil of the image pickup element, so that on the right side of the first viewpoint image I_1(j, i) the luminance decreases and the RGB ratio is modulated. Fig. 23 shows an example of the first viewpoint (first corrected) image M_1I_1 (after demosaicing) after shading correction. With the shading correction based on each of RGB of the captured image, the decrease in luminance and the modulation of the RGB ratio are corrected, and a shading-corrected first viewpoint (first corrected) image M_1I_1(j, i) (after demosaicing) of good image quality, comparable to the captured image, is generated.
The present embodiment generates a captured image corresponding to the pupil area obtained by combining different partial pupil areas, based on an input image acquired by an image pickup element including a plurality of pixels each having a plurality of photoelectric converters configured to receive light beams that have passed through different partial pupil areas of the imaging optical system. The present embodiment then generates a plurality of viewpoint images for the respective partial pupil areas, performs image processing for correcting the viewpoint images based on the captured image, and generates an output image. The present embodiment performs image processing for correcting the light amount (shading) for each of the RGB colors based on the captured image. The present embodiment performs the image processing for correcting the light amount of the viewpoint image based on the projection signals of the captured image and the projection signals of the viewpoint image. This configuration can provide viewpoint images of good quality.
Next, in step S2 of fig. 17, the image processing unit 125b corrects defects in the shading-corrected k-th viewpoint (first corrected) image M_1I_k based on the captured image I. The present embodiment describes an example in which k = 1, but the present invention is not limited to this example.
In the present embodiment, although the captured image I is normal, a defect signal such as a point defect or a line defect may occur in only part of the k-th viewpoint image I_k (first viewpoint image I_1) due to a short circuit of a transfer gate or the like caused by the circuit configuration or the driving method of the image pickup element. As necessary, point defect information and line defect information detected in a mass production process or the like may be stored in advance in the image processing circuit 125 or the like, and the defect correction processing for the k-th viewpoint image I_k (first viewpoint image I_1) may be performed based on the recorded point and line defect information. In addition, the k-th viewpoint image I_k (first viewpoint image I_1) may be inspected in real time to perform point defect determination or line defect determination.
Now, the defect correction according to the present embodiment (step S2 of fig. 17) will be described. The present embodiment assumes that an odd-numbered row 2j_D − 1 or an even-numbered row 2j_D of the k-th viewpoint image I_k is determined to have a line defect in the horizontal direction (x direction), while the same odd-numbered row 2j_D − 1 or even-numbered row 2j_D of the captured image I is not determined to have a line defect.
In the defect correction of step S2 of the present embodiment, the normal captured image I is used as the reference image, and the defects of the k-th viewpoint (first corrected) image M_1I_k are corrected based on the captured image I. The defect correction of the present embodiment compares the signal values of the k-th viewpoint (first corrected) image M_1I_k at positions not determined to be defective with the signal values of the captured image I at those positions. For this comparison to yield high-precision defect correction, the RGB shading components of the k-th viewpoint image I_k caused by pupil shift must be removed so that the RGB signal components of the subject in the k-th viewpoint image I_k and in the captured image I can be compared accurately. The present embodiment therefore corrects the RGB shading (light amount) of the k-th viewpoint image in advance in step S1, generating a k-th viewpoint (first corrected) image M_1I_k whose shading state matches that of the captured image I, with the influence of the shading components removed. Then, in step S2, the present embodiment corrects the defects of the shading-corrected k-th viewpoint (first corrected) image M_1I_k based on the captured image I with high accuracy.
In step S2 of fig. 17, the present embodiment performs defect correction processing on the signals of the shading-corrected k-th viewpoint (first corrected) image M_1I_k(j, i) determined to be defective, based on the normal signals of the captured image I and of the k-th viewpoint (first corrected) image M_1I_k. A defect-corrected k-th viewpoint (second corrected) image M_2I_k(j, i) is thereby generated. Here, the k-th viewpoint (second corrected) image of the Bayer array is written for each of R, Gr, Gb, and B as follows: the k-th viewpoint (second corrected) image of R is RM_2I_k(2j_2 − 1, 2i_2 − 1) = M_2I_k(2j_2 − 1, 2i_2 − 1), that of Gr is GrM_2I_k(2j_2 − 1, 2i_2) = M_2I_k(2j_2 − 1, 2i_2), that of Gb is GbM_2I_k(2j_2, 2i_2 − 1) = M_2I_k(2j_2, 2i_2 − 1), and that of B is BM_2I_k(2j_2, 2i_2) = M_2I_k(2j_2, 2i_2).
In step S2, assume that a first position (2j_D − 1, 2i_D − 1) of R in the k-th viewpoint (first corrected) image M_1I_k is determined to be defective. Defect correction processing is then performed by the following expression (8A) based on the captured image RI(2j_D − 1, 2i_D − 1) at the first position, the k-th viewpoint (first corrected) image RM_1I_k at second positions of R not determined to be defective, and the captured image RI at those second positions. This generates the defect-corrected k-th viewpoint (second corrected) image RM_2I_k(2j_D − 1, 2i_D − 1) at the first position.

Assume that a first position (2j_D − 1, 2i_D) of Gr in the k-th viewpoint (first corrected) image M_1I_k is determined to be defective. Defect correction processing is then performed by the following expression (8B) based on the captured image GrI(2j_D − 1, 2i_D) at the first position, the k-th viewpoint (first corrected) image GbM_1I_k at second positions of Gb not determined to be defective, and the captured image GbI at those second positions. This generates the defect-corrected k-th viewpoint (second corrected) image GrM_2I_k(2j_D − 1, 2i_D) at the first position.

Assume that a first position (2j_D, 2i_D − 1) of Gb in the k-th viewpoint (first corrected) image M_1I_k is determined to be defective. Defect correction processing is then performed by the following expression (8C) based on the captured image GbI(2j_D, 2i_D − 1) at the first position, the k-th viewpoint (first corrected) image GrM_1I_k at second positions of Gr not determined to be defective, and the captured image GrI at those second positions. This generates the defect-corrected k-th viewpoint (second corrected) image GbM_2I_k(2j_D, 2i_D − 1) at the first position.

Assume that a first position (2j_D, 2i_D) of B in the k-th viewpoint (first corrected) image M_1I_k is determined to be defective. Defect correction processing is then performed by the following expression (8D) based on the captured image BI(2j_D, 2i_D) at the first position, the k-th viewpoint (first corrected) image BM_1I_k at second positions of B not determined to be defective, and the captured image BI at those second positions. This generates the defect-corrected k-th viewpoint (second corrected) image BM_2I_k(2j_D, 2i_D) at the first position.
Expressions 8A to 8D
(The equation images for expressions (8A) to (8D) are not reproduced here. They define the above defect correction for R, Gr, Gb, and B, respectively, in terms of the captured-image signal value at the defective first position and the signal values of the k-th viewpoint (first corrected) image and of the captured image at the non-defective second positions.)
At the positions (j, i) not determined to be defective, the k-th viewpoint (second corrected) image M_2I_k(j, i) has the same signal value as the k-th viewpoint (first corrected) image M_1I_k(j, i), that is, M_2I_k(j, i) = M_1I_k(j, i) holds. As necessary, the defect-corrected k-th viewpoint (second corrected) image M_2I_k(j, i) may be used as an output image.
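Since expressions (8A) to (8D) are not reproduced above, the sketch below implements one plausible reading of the described correction, replacing a defective row by the captured-image row scaled by the viewpoint-to-captured ratio observed at neighboring non-defective rows; it ignores the Bayer same-color row offsets for brevity and is not the patent's exact formula:

```python
import numpy as np

def correct_line_defect(view: np.ndarray, cap: np.ndarray,
                        defect_row: int) -> np.ndarray:
    """Replace a defective row of the (shading-corrected) viewpoint image
    using the normal captured image as a reference.

    The local viewpoint/captured ratio is estimated from the two
    neighboring non-defective rows and applied to the captured-image
    row at the defect."""
    out = view.copy()
    above, below = defect_row - 1, defect_row + 1
    ratio = ((view[above] + view[below]) /
             np.maximum(cap[above] + cap[below], 1e-6))
    out[defect_row] = cap[defect_row] * ratio
    return out

rng = np.random.default_rng(4)
cap = rng.uniform(100, 200, (10, 12))
view = cap / 2.0                     # ideal half-pupil viewpoint image
view[5] = 0.0                        # simulated horizontal line defect
fixed = correct_line_defect(view, cap, defect_row=5)
print(np.allclose(fixed[5], cap[5] / 2.0))  # True
```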
Now, the defect correction of the first viewpoint (first corrected) image M_1I_1 based on the normal captured image I, shown in step S2 of fig. 17, will be described with reference to figs. 24 and 25. Fig. 24 shows an example of the first viewpoint (first corrected) image M_1I_1 (after shading correction and demosaicing) before defect correction, in which a line defect occurs in the horizontal direction (x direction) in the central portion of M_1I_1(j, i). Fig. 25 shows the first viewpoint (second corrected) image M_2I_1 (after shading correction and demosaicing) after defect correction. With the defect correction based on the normal captured image I, the line defect in the horizontal direction (x direction) is corrected, and a defect-corrected first viewpoint (second corrected) image M_2I_1(j, i) of good quality, comparable to the captured image, is generated.
The present embodiment generates a captured image corresponding to a pupil area obtained by combining different partial pupil areas based on an input image acquired by an image pickup element including a plurality of pixels having a plurality of photoelectric converters configured to receive light beams having passed through the different partial pupil areas in an imaging optical system. Then, the present embodiment generates a plurality of viewpoint images for respective different partial pupil areas, performs image processing for correcting the viewpoint images based on the captured images, and generates an output image. The present embodiment performs image processing based on a captured image to correct and reduce defects contained in a viewpoint image. The present embodiment performs image correction processing for correcting the signal value of the viewpoint image at the first position determined to be defective based on the signal value of the captured image at the first position. The present embodiment performs signal processing for correcting a signal value of a viewpoint image at a first position based on a signal value of a captured image at a first position determined to be defective, a signal value of a viewpoint image at a second position not determined to be defective, and a signal value of a captured image at the second position.
In the present embodiment, the image processing unit 125b performs correction processing (image processing) based on the captured image after performing light amount correction processing for the viewpoint image based on the captured image to reduce defects contained in the viewpoint image. This configuration can generate a viewpoint image with good quality.
Next, in step S2 of fig. 17, the image processing unit 125b performs re-shading processing (multiplication by the shading functions) on the defect-corrected k-th viewpoint (second corrected) image M_2I_k(j, i) by using the following expressions (9A) to (9D). Thereby, a k-th viewpoint (third corrected) image M_3I_k(j, i) is generated.
Expression 9A
RM_3I_k(2j_2 − 1, 2i_2 − 1) = RSF_k(2i_2 − 1) × RM_2I_k(2j_2 − 1, 2i_2 − 1) … (9A)
Expression 9B
GrM_3I_k(2j_2 − 1, 2i_2) = GrSF_k(2i_2) × GrM_2I_k(2j_2 − 1, 2i_2) … (9B)
Expression 9C
GbM_3I_k(2j_2, 2i_2 − 1) = GbSF_k(2i_2 − 1) × GbM_2I_k(2j_2, 2i_2 − 1) … (9C)
Expression 9D
BM_3I_k(2j_2, 2i_2) = BSF_k(2i_2) × BM_2I_k(2j_2, 2i_2) … (9D)
Here, the k-th viewpoint (third corrected) image M_3I_k of the Bayer array is obtained for each of R, Gr, Gb, and B: the k-th viewpoint (third corrected) image of R is RM_3I_k(2j_2 − 1, 2i_2 − 1) = M_3I_k(2j_2 − 1, 2i_2 − 1), that of Gr is GrM_3I_k(2j_2 − 1, 2i_2) = M_3I_k(2j_2 − 1, 2i_2), that of Gb is GbM_3I_k(2j_2, 2i_2 − 1) = M_3I_k(2j_2, 2i_2 − 1), and that of B is BM_3I_k(2j_2, 2i_2) = M_3I_k(2j_2, 2i_2).
In step S3 of fig. 18, saturation signal processing is performed on the captured image I(j, i) and the k-th viewpoint (third corrected) image M_3I_k(j, i). This example discusses the case where k = 1 and N_LF = 2.
In step S3, first, for the captured image I(j, i), in which the maximum value of the image pickup signal is Imax, saturation signal processing is performed using the following expression (10), generating a corrected captured image MI(j, i).
Expression 10
MI(j, i) = Imax when I(j, i) > Imax; MI(j, i) = I(j, i) otherwise … (10)
Next, in step S3, the image processing unit 125b performs, on the k-th viewpoint (third corrected) image M_3I_k(j, i), saturation signal processing corresponding to the shading state as in the following expression (11), where the shading function of the Bayer array is SF_k(j, i). Thereby, a k-th viewpoint (fourth corrected) image M_4I_k(j, i) can be generated. Here, the Bayer-array shading function SF_k(j, i) is assembled from the shading functions RSF_k(2i_2 − 1), GrSF_k(2i_2), GbSF_k(2i_2 − 1), and BSF_k(2i_2) of R, Gr, Gb, and B generated by expressions (6A) to (6D). In other words, SF_k(2j_2 − 1, 2i_2 − 1) = RSF_k(2i_2 − 1), SF_k(2j_2 − 1, 2i_2) = GrSF_k(2i_2), SF_k(2j_2, 2i_2 − 1) = GbSF_k(2i_2 − 1), and SF_k(2j_2, 2i_2) = BSF_k(2i_2).
Expression 11
M_4I_k(j, i) = SF_k(j, i) × Imax / N_LF when M_3I_k(j, i) > SF_k(j, i) × Imax / N_LF; M_4I_k(j, i) = M_3I_k(j, i) otherwise … (11)
In step S4 of fig. 18, the image processing unit 125b generates the second viewpoint image I_2(j, i) from the corrected captured image MI(j, i) and the first viewpoint (fourth corrected) image M_4I_1(j, i), based on expression (12).
Expression 12
I_2(j, i) = MI(j, i) − M_4I_1(j, i) … (12)
In the present embodiment, depending on the driving method of the image pickup element 107 and the configuration of the A/D conversion circuit, the maximum signal value of the first viewpoint (third corrected) image M_3I_1(j, i) at saturation may be equal to the maximum signal value Imax of the captured image I(j, i) at saturation. In that case, if the second viewpoint image were generated by subtracting the first viewpoint (third corrected) image from the captured image as in expression (12) without saturation signal processing, a pixel that should have a saturated signal value in the second viewpoint image could take the erroneous signal value 0. To prevent this problem, step S3 performs in advance the saturation signal processing corresponding to the shading state on the captured image I(j, i) and the k-th viewpoint (third corrected) image M_3I_k(j, i), generating the corrected captured image MI(j, i) and the saturation-processed first viewpoint (fourth corrected) image M_4I_1(j, i). Step S4 can then generate, by expression (12), a second viewpoint image I_2 having correct saturation signal values.
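Read this way, expressions (10) to (12) amount to clamping both images to their respective saturation levels before subtracting. The sketch below follows that reading and, for brevity, assumes a uniform shading function SF_k = 1 in the clamp of expression (11):

```python
import numpy as np

def generate_second_viewpoint(cap: np.ndarray, view1: np.ndarray,
                              imax: float, n_lf: int = 2,
                              sf1: float = 1.0) -> np.ndarray:
    """Expressions (10)-(12): saturation-aware generation of I2 = MI - M4I1."""
    mi = np.minimum(cap, imax)                    # expression (10)
    m4i1 = np.minimum(view1, sf1 * imax / n_lf)   # expression (11), SF_k const
    return mi - m4i1                              # expression (12)

cap = np.array([[4095.0, 3000.0]])    # saturated pixel / normal pixel
view1 = np.array([[4095.0, 1200.0]])  # viewpoint also hit the same ceiling
i2 = generate_second_viewpoint(cap, view1, imax=4095.0)
print(i2)  # [[2047.5 1800. ]] -- the saturated pixel no longer collapses to 0
```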
In step S5 of fig. 18, the image processing unit 125b performs shading correction (light amount correction) on the first viewpoint (fourth corrected) image M_4I_1(j, i) and the second viewpoint image I_2(j, i). More specifically, for the first viewpoint (fourth corrected) image M_4I_1(j, i), shading correction (light amount correction) is performed as in expressions (7A) to (7D) using the shading functions RSF_1, GrSF_1, GbSF_1, and BSF_1 already generated by expressions (6A) to (6D). Thereby, a first viewpoint (fifth corrected) image M_5I_1(j, i) is generated. Next, in step S5, the image processing unit 125b performs shading correction (light amount correction) on the second viewpoint image I_2(j, i) based on the corrected captured image MI(j, i), as in expressions (4A) to (7D). Thereby, a second viewpoint (fifth corrected) image M_5I_2(j, i) is generated.
Finally, in step S6 of fig. 18, the image processing unit 125b performs saturation signal processing on the first viewpoint (fifth corrected) image M_5I_1(j, i) and the second viewpoint (fifth corrected) image M_5I_2(j, i) by the following expression (13). Thereby, a first corrected viewpoint image MI_1(j, i) and a second corrected viewpoint image MI_2(j, i) are generated as output images.
Expression 13
MI_k(j, i) = Imax / N_LF when M_5I_k(j, i) > Imax / N_LF; MI_k(j, i) = M_5I_k(j, i) otherwise … (13)
Now, the shading correction processing (light amount correction processing) for each of RGB of the second viewpoint image I_2(j, i), shown in step S5 of fig. 18, will be described with reference to figs. 26 and 27. Fig. 26 shows an example of the second viewpoint image I_2 (after demosaicing) before shading correction. The pupil shift between the exit pupil of the imaging optical system and the entrance pupil of the image pickup element causes shading for each of RGB, so that on the left side of the second viewpoint image I_2(j, i) the luminance decreases and the RGB ratio is modulated. Fig. 27 shows an example of the shading-corrected second corrected viewpoint image MI_2 (after demosaicing). With the shading correction based on each of RGB of the captured image, the decrease in luminance and the modulation of the RGB ratio are corrected, and a shading-corrected second corrected viewpoint image MI_2(j, i) of good quality, comparable to the captured image, is generated.
The image processing apparatus according to the present embodiment is an image processing apparatus having an image processing unit for performing the above-described image processing method. An image pickup apparatus according to the present embodiment is an image pickup apparatus including an image processing unit for performing the above-described image processing method and an image pickup element including a plurality of arrayed pixels having a plurality of sub-pixels configured to receive light beams having passed through different partial pupil regions in an imaging optical system. The structure of the present embodiment can generate a viewpoint image with good quality.
Fourth embodiment
Next, a fourth embodiment according to the present invention will be described. The present embodiment detects an image shift amount distribution by a phase difference detection method, based on the correlation (degree of coincidence of signals) between the first corrected viewpoint image and the second corrected viewpoint image (first to N_LF-th corrected viewpoint images) generated in the third embodiment.
In generating the image shift amount distribution, first, based on the k-th corrected viewpoint image MI_k (k = 1 to N_LF), which is RGB signals of a Bayer array, the color barycenters of RGB are made to coincide with one another at each position (j, i), and the k-th viewpoint luminance signal Y_k is generated by the following expression (14).
Expression 14
(The equation image for expression (14) is not reproduced here. It computes the k-th viewpoint luminance signal Y_k(j, i) from the Bayer-array RGB values of the k-th corrected viewpoint image MI_k so that the RGB color barycenters coincide at each position (j, i).)
Next, in generating the image shift amount distribution, a one-dimensional band-pass filter process is applied in the pupil division direction (column direction) to the first viewpoint luminance signal Y_1 generated by expression (14) from the first corrected viewpoint image MI_1, which is RGB signals of a Bayer array, to generate a first focus detection signal dYA. Likewise, a one-dimensional band-pass filter process is applied in the pupil division direction (column direction) to the second viewpoint luminance signal Y_2 generated by expression (14) from the second corrected viewpoint image MI_2, to generate a second focus detection signal dYB. As the one-dimensional band-pass filter, for example, a first-order differential filter such as [1, 5, 8, 8, 8, 8, 5, 1, −1, −5, −8, −8, −8, −8, −5, −1] may be used. The passband of the one-dimensional band-pass filter can be adjusted as needed.
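A sketch of this one-dimensional band-pass filtering, applying the 16-tap first-order differential filter quoted above along the pupil division direction (the luminance arrays are illustrative test data):

```python
import numpy as np

# First-order differential (band-pass) filter from the text.
BPF = np.array([1, 5, 8, 8, 8, 8, 5, 1,
                -1, -5, -8, -8, -8, -8, -5, -1], dtype=np.float64)

def focus_detection_signal(y: np.ndarray) -> np.ndarray:
    """Filter each row of the luminance image along the column
    (pupil division) direction to produce dYA or dYB."""
    return np.apply_along_axis(
        lambda row: np.convolve(row, BPF, mode="same"), axis=1, arr=y)

rng = np.random.default_rng(5)
y1 = rng.uniform(0, 255, (32, 64))   # first-viewpoint luminance Y1
y2 = rng.uniform(0, 255, (32, 64))   # second-viewpoint luminance Y2
dYA, dYB = focus_detection_signal(y1), focus_detection_signal(y2)
```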
Next, in generating the image shift amount distribution, the present embodiment shifts the first focus detection signal dYA and the second focus detection signal dYB relative to each other in the pupil division direction (column direction), calculates a correlation amount representing the degree of coincidence of the signals, and generates the image shift amount distribution M_DIS(j, i) based on the correlation amount. Let dYA(j + j_2, i + i_2) be the first focus detection signal and dYB(j + j_2, i + i_2) be the second focus detection signal, taken around the position (j, i), where j_2 (−n_2 ≤ j_2 ≤ n_2) indexes the row direction and i_2 (−m_2 ≤ i_2 ≤ m_2) indexes the column direction, which is the pupil division direction. With the shift amount s (−n_s ≤ s ≤ n_s), the correlation amount COR_EVEN(j, i, s) at each position (j, i) is calculated by the following expression (15A), and the correlation amount COR_ODD(j, i, s) by the following expression (15B).
Expression 15A
COR_EVEN(j, i, s) = Σ_{j_2=−n_2}^{n_2} Σ_{i_2=−m_2}^{m_2} | dYA(j + j_2, i + i_2 + s) − dYB(j + j_2, i + i_2 − s) | … (15A)
Expression 15B
COR_ODD(j, i, s) = Σ_{j_2=−n_2}^{n_2} Σ_{i_2=−m_2}^{m_2} | dYA(j + j_2, i + i_2 + s) − dYB(j + j_2, i + i_2 − 1 − s) | … (15B)
The correlation amount COR_ODD(j, i, s) is obtained by offsetting the shift amount between the first focus detection signal dYA and the second focus detection signal dYB by a half phase (−1) relative to the correlation amount COR_EVEN(j, i, s). From the correlation amounts COR_EVEN(j, i, s) and COR_ODD(j, i, s), the real-valued shift amounts that minimize each correlation amount are calculated by sub-pixel calculation and averaged, and the image shift amount distribution M_DIS(j, i) is detected.
In detecting the image shift amount by the phase difference method, the present embodiment evaluates the correlation amounts of expressions (15A) and (15B), and detects the image shift amount based on the correlation (the degree of coincidence of signals) between the first focus detection signal and the second focus detection signal. The present embodiment generates the first focus detection signal and the second focus detection signal from the first corrected viewpoint image and the second corrected viewpoint image after shading (light amount) correction is performed on each of RGB based on the captured image. Thus, the present embodiment can improve the correlation (the degree of coincidence of signals) between the first focus detection signal and the second focus detection signal, and detect the amount of image shift with high accuracy.
When performing automatic focus detection and driving the lens to the in-focus position in accordance with the detected defocus amount, the image shift amount distribution M_DIS(j, i) is multiplied by a conversion coefficient K from image shift amount to defocus amount, which depends on lens information such as the aperture value F and the exit pupil distance of the imaging lens (imaging optical system). Thereby, the defocus amount distribution M_Def(j, i) can be detected. This calculation may be performed for each image height position in the focus detection area.
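The correlation search and the conversion to defocus can be pictured as in the simplified sketch below, which evaluates a SAD-style correlation in the spirit of expression (15A) over a single window; the COR_ODD half-phase term, the sub-pixel refinement, and a real conversion coefficient K are omitted or assumed:

```python
import numpy as np

def image_shift(dYA: np.ndarray, dYB: np.ndarray, ns: int) -> int:
    """Integer shift minimizing a SAD correlation amount over one window
    (wrap-around at the edges is ignored for this illustration)."""
    shifts = range(-ns, ns + 1)
    cor = [np.abs(np.roll(dYA, s, axis=1) - np.roll(dYB, -s, axis=1)).sum()
           for s in shifts]
    return list(shifts)[int(np.argmin(cor))]

rng = np.random.default_rng(6)
base = rng.uniform(-1, 1, (16, 128))
dYA = np.roll(base, -3, axis=1)       # simulated 6-sample relative shift
dYB = np.roll(base, 3, axis=1)
s = image_shift(dYA, dYB, ns=8)       # detected shift: 3
K = 10.0                              # assumed shift-to-defocus coefficient
print(s, K * 2 * s)                   # shift and a toy defocus estimate
```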
The present embodiment can generate a plurality of viewpoint images with good quality. The present embodiment can improve the detection accuracy of the image shift amount by using a plurality of viewpoint images with good quality.
Fifth embodiment
Next, a fifth embodiment according to the present invention will be described. This example discusses a four-division case where N_x = 2, N_y = 2, and N_LF = 4. In the present embodiment, based on an input image (LF data) corresponding to the pixel array shown in fig. 12 and on expression (2), the signals of the four-divided sub-pixels 201 to 204 (first to N_LF-th sub-pixels divided into N_x × N_y parts) are combined for each pixel. A captured image I is thereby generated as RGB signals of a Bayer array having a resolution of the number of pixels N (the number of horizontal pixels N_H × the number of vertical pixels N_V).
This example discusses the four-division case where N_x = 2, N_y = 2, N_LF = 4, and k = 1 to 3. Based on the LF data (input image) corresponding to the pixel array shown in fig. 12 and on expression (3), the signal of the sub-pixel 201 (first sub-pixel) is selected for each pixel from the four-divided sub-pixels 201 to 204, and a first viewpoint image I_1(j, i) is generated as RGB signals of a Bayer array having a resolution of the number of pixels N corresponding to the partial pupil area 501 of the imaging optical system. Likewise, based on the LF data and expression (3), the signal of the sub-pixel 202 (second sub-pixel) is selected for each pixel, and a second viewpoint image I_2(j, i) is generated as RGB signals of a Bayer array having a resolution of the number of pixels N corresponding to the partial pupil area 502 of the imaging optical system. Further, based on the LF data and expression (3), the signal of the sub-pixel 203 (third sub-pixel) is selected for each pixel, and a third viewpoint image I_3(j, i) is generated as RGB signals of a Bayer array having a resolution of the number of pixels N corresponding to the partial pupil area 503 of the imaging optical system.
The present embodiment generates a captured image corresponding to the pupil area obtained by combining different partial pupil areas, based on an input image (LF data) acquired by an image pickup element including a plurality of arrayed pixels each having a plurality of photoelectric converters configured to receive light beams that have passed through different partial pupil areas of the imaging optical system. The present embodiment then generates a plurality of viewpoint images for the respective partial pupil areas. In order to generate each viewpoint image with good quality, the present embodiment performs image processing such as flaw correction and shading correction on the first to fourth viewpoint images (first to N_LF-th viewpoint images) based on the captured image, as in the third embodiment, and generates an output image.
In step S1 of fig. 17, the present embodiment uses the captured image I(j, i) as the base or reference image and performs shading (light amount) correction on the first viewpoint image I_1 to the third viewpoint image I_3 (k-th viewpoint images I_k, k = 1 to N_LF − 1). This example discusses the four-division case where N_x = 2, N_y = 2, N_LF = 4, and k = 1 to 3.
First, in step S1, shading (light amount) correction is performed on the k-th viewpoint image I_k (k = 1 to N_LF − 1) in the x direction by expressions (4A) to (7D). Next, with the x direction replaced by the y direction in expressions (4A) to (7D), shading (light amount) correction processing is performed in the y direction, generating the k-th viewpoint (first corrected) image M_1I_k (k = 1 to N_LF − 1). When shading correction is thus performed in two stages, in the x direction and then in the y direction, multiplying by the pupil division number N_LF in both stages would normalize by one factor of N_LF more than necessary in expressions (5A) to (5D). Therefore, in the second shading correction in the y direction, the multiplication by the pupil division number N_LF for normalization in expressions (5A) to (5D) is omitted.
By expressions (8A) to (10), the procedure of the present embodiment is the same as that of the third embodiment up to the generation of the k-th viewpoint (fourth corrected) images M_4I_k (k = 1 to N_LF − 1). In step S4 of fig. 18, the N_LF-th viewpoint image I_{N_LF}(j, i) is generated from the corrected captured image MI(j, i) and the k-th viewpoint (fourth corrected) images M_4I_k (k = 1 to N_LF − 1) by the following expression (16). This example discusses the four-division case where N_x = 2, N_y = 2, and N_LF = 4.
Expression 16
I_{N_LF}(j, i) = MI(j, i) − Σ_{k=1}^{N_LF−1} M_4I_k(j, i) … (16)
Step S5 and subsequent steps in fig. 18 are the same as those in the third embodiment.
The present embodiment can generate viewpoint images of good quality. In the photoelectric converter of each pixel of the image pickup element, other embodiments may increase the division number, such as a nine-division arrangement with N_x = 3, N_y = 3, and N_LF = 9, or a sixteen-division arrangement with N_x = 4, N_y = 4, and N_LF = 16.
As described above, the image processing apparatus (image processing circuit 125) in each embodiment includes the acquisition unit 125a and the image processing unit 125b (correction unit). The acquisition unit 125a acquires a parallax image generated based on a signal of one of a plurality of photoelectric converters for receiving light beams passing through mutually different partial pupil areas in the imaging optical system, and acquires a captured image generated by synthesizing the signals from the plurality of photoelectric converters. The image processing unit 125b performs correction processing based on the captured image so that defects (such as point flaws and line flaws) contained in the parallax image are reduced.
Preferably, the image processing unit 125b corrects the pixel value (pixel signal) of the parallax image at the first position (the position to be corrected), which is determined to be defective, by using the pixel value (pixel signal) of the captured image at the first position. More preferably, the image processing unit 125b corrects the pixel value of the parallax image at the first position based on the pixel value of the captured image at the first position, the pixel value of the parallax image at a second position that is not determined to be defective, and the pixel value of the captured image at the second position. The second position is the position of a pixel near (or in the periphery of) the first position. More preferably, the second position is a position adjacent to the first position in a predetermined direction (a vertical, horizontal, or oblique direction on the pixel array).
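A minimal sketch of this neighbor-ratio correction, assuming the scaling structure just described (the embodiments' exact expression may differ):

```python
import numpy as np

def correct_defect_pixel(parallax, captured, first, seconds):
    """Rebuild the parallax pixel at the defective `first` position from
    the captured image at that position, scaled by the parallax/captured
    ratio observed at the non-defective neighboring `seconds` positions."""
    eps = 1e-8
    num = sum(float(parallax[p]) for p in seconds)
    den = sum(float(captured[p]) for p in seconds) + eps
    out = parallax.copy()
    out[first] = captured[first] * (num / den)
    return out

# Usage: first = (j, i) flagged as a point flaw;
# seconds = [(j, i - 1), (j, i + 1)] for horizontally adjacent second positions.
```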
Preferably, in the case where the pixel value of the parallax image or the captured image at the second position is lower than a predetermined luminance value (parameter a0 or I0), the image processing unit 125b replaces the pixel value with the predetermined luminance value. More preferably, the predetermined luminance value is set to change in accordance with the number of partial pupil areas. Preferably, the predetermined luminance value is set to change in accordance with the first position (position to be corrected). Preferably, the predetermined luminance value is set to change according to the image capturing condition information. The imaging condition information includes at least one of an ISO sensitivity, an aperture value of the imaging optical system, and an exit pupil distance.
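The text only states that this luminance floor varies with the number of partial pupil areas, the first position, and the imaging conditions; the specific dependence in the following sketch is invented purely for illustration.

```python
def luminance_floor(n_lf, iso, f_number, exit_pupil_distance_mm):
    """Hypothetical floor (e.g., the parameter a0 or I0): a brightness
    below which second-position pixel values are replaced so that the
    correction ratio stays stable in dark regions.  Every coefficient
    here is an assumption, not a value from the patent."""
    base = 64.0 / n_lf                               # more pupil divisions -> darker viewpoints
    noise = 1.0 + iso / 3200.0                       # higher ISO -> noisier -> raise the floor
    vignette = 1.0 + 10.0 / exit_pupil_distance_mm   # shorter exit pupil -> stronger shading
    return base * noise * vignette * (f_number / 2.8)
```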
Preferably, the image processing apparatus includes a memory (memory 134) for storing the defect information about the first location, or an inspector for inspecting the defect information about the first location. The image processing unit 125b performs correction processing based on the defect information stored in the memory or the defect information obtained as the inspection result of the inspector.
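One way to organize these two sources of defect information (a minimal sketch; the inspection heuristic below is an assumption, not the patent's inspector):

```python
import numpy as np

class DefectTable:
    """Defect positions either loaded from storage written at manufacture
    (the memory 134 in the text) or produced by an inspection pass."""

    def __init__(self, stored_positions=None):
        self.positions = list(stored_positions or [])

    def inspect(self, captured, threshold=0.5):
        # Hypothetical inspector: flag pixels that deviate strongly from
        # the mean of their horizontal neighbors.
        neigh = (np.roll(captured, 1, axis=1) + np.roll(captured, -1, axis=1)) / 2.0
        mask = np.abs(captured - neigh) > threshold * (neigh + 1e-8)
        self.positions = list(zip(*np.nonzero(mask)))
        return self.positions
```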
Preferably, the parallax image is generated by collecting light reception signals of a plurality of sub-pixels (a plurality of first sub-pixels, a plurality of second sub-pixels, a plurality of third sub-pixels, and a plurality of fourth sub-pixels) included in one photoelectric converter for respective partial pupil regions different from each other in the imaging optical system. The captured image is generated by collecting light reception signals of all the sub-pixels (the plurality of first sub-pixels and the plurality of second sub-pixels, and further, the plurality of third sub-pixels and the plurality of fourth sub-pixels) included in the plurality of photoelectric converters.
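For the N_x = N_y = 2 case, collecting the sub-pixel signals might look like the following; the (H, W, 2, 2) layout is an assumption about how the LF data is stored, not the patent's format:

```python
import numpy as np

def viewpoints_and_captured(lf):
    """Each viewpoint (parallax) image collects one sub-pixel per
    microlens; the captured image synthesizes (sums) all sub-pixels."""
    views = [lf[:, :, y, x] for y in range(2) for x in range(2)]
    captured = lf.sum(axis=(2, 3))
    return views, captured
```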
Preferably, the parallax images include first and second parallax images corresponding to light fluxes that have passed through mutually different partial pupil areas in the imaging optical system. The acquisition unit 125a acquires the captured image and the first parallax image from the image pickup element 107 including the plurality of photoelectric converters. The image processing unit 125b performs the correction processing on the first parallax image, and generates the second parallax image by subtracting the corrected first parallax image from the captured image. Preferably, the image processing unit 125b (refocus unit) performs refocus processing on the captured image based on the corrected parallax image.
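The subtraction for the second parallax image follows expression (16) above; for the refocus step, a standard shift-and-add sketch conveys the idea (the embodiments' exact refocus expressions are not reproduced here, and edge wrap-around is ignored for brevity):

```python
import numpy as np

def refocus(view_a, view_b, shift):
    """Shift the two corrected viewpoint images against each other along
    the pupil-division direction and sum them, approximating an image
    focused at a different depth."""
    return np.roll(view_a, -shift, axis=1) + np.roll(view_b, shift, axis=1)
```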
Preferably, the image processing unit performs light amount correction (shading correction) of the parallax image based on the captured image. More preferably, the image processing unit performs the light amount correction processing of the parallax image for each color of the parallax image based on the captured image. More preferably, the image processing unit performs the light amount correction processing of the parallax image based on a projection signal of the captured image and a projection signal of the parallax image. More preferably, after performing the light amount correction processing on the parallax image, the image processing unit corrects the parallax image based on the captured image so that defects remaining in the light-amount-corrected parallax image are reduced.
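A sketch of the per-color light amount correction using projection signals, assuming an RGGB Bayer layout (the layout and the projection-ratio gain are assumptions for illustration):

```python
import numpy as np

def shading_correct_per_color(view, captured):
    """Apply a projection-ratio gain separately to each Bayer color
    plane: project each plane along y, form the captured/viewpoint
    ratio, and broadcast it back over the plane."""
    eps = 1e-8
    out = np.empty(view.shape, dtype=float)
    for dy in (0, 1):
        for dx in (0, 1):
            v = view[dy::2, dx::2]
            c = captured[dy::2, dx::2]
            gain = c.sum(axis=0) / (v.sum(axis=0) + eps)  # 1D gain over x
            out[dy::2, dx::2] = v * gain[None, :]
    return out
```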
OTHER EMBODIMENTS
The present invention can also be implemented by supplying a program that implements one or more functions of the above-described embodiments to a system or apparatus via a network or a storage medium, and by reading and executing the program with one or more processors in a computer of that system or apparatus. The present invention can also be implemented by a circuit (such as an ASIC) that performs one or more functions.
Embodiments may provide an image processing apparatus, an image capturing apparatus, an image processing method, a program, and a non-transitory computer-readable storage medium capable of generating a parallax image with improved quality.
Although the present invention has been described with reference to the exemplary embodiments, the present invention is not limited to the disclosed exemplary embodiments, and various changes and modifications may be made without departing from the scope of the present invention.

Claims (21)

1. An image processing apparatus comprising:
an acquisition unit configured to acquire a parallax image generated based on a signal from one of a plurality of photoelectric converters corresponding to the same microlens of an image pickup element including a plurality of pixels provided with the plurality of photoelectric converters and each of the plurality of pixels corresponding to each of the plurality of microlenses, and acquire a composite image which is an image generated by synthesizing a plurality of signals from the plurality of photoelectric converters including the one photoelectric converter; and
an image processing unit for calculating a correction value by using a signal of the synthesized image as a reference image and correcting the parallax image using the correction value,
wherein the image processing unit corrects the pixel value of the parallax image at a first position based on the pixel value of the synthesized image at the first position, the pixel value of the parallax image at a second position, and the pixel value of the synthesized image at the second position.
2. The image processing apparatus according to claim 1, wherein the first position is a position determined to be defective, and the second position is a position not determined to be defective.
3. The image processing apparatus according to claim 1, wherein the second position is a position adjacent to the first position in a predetermined direction.
4. The image processing apparatus according to claim 1, wherein in a case where a pixel value of the parallax image or the synthesized image at the second position is lower than a predetermined luminance value, the image processing unit replaces the pixel value with the predetermined luminance value.
5. The image processing apparatus according to claim 4, wherein the predetermined luminance value is set to change in accordance with the number of partial pupil areas.
6. The image processing apparatus according to claim 4, wherein the predetermined luminance value is set to change in accordance with the first position.
7. The image processing apparatus according to claim 4, wherein the predetermined luminance value is set to change in accordance with image capturing condition information.
8. The image processing apparatus according to claim 7, wherein the imaging condition information includes at least one of an ISO sensitivity, an aperture value of an imaging optical system, and an exit pupil distance.
9. The image processing apparatus according to claim 1, further comprising a memory for storing defect information relating to the first position,
wherein the image processing unit corrects the parallax image based on the defect information.
10. The image processing apparatus according to claim 1, further comprising an inspector that inspects defect information relating to the first position,
wherein the image processing unit corrects the parallax image based on the defect information.
11. The image processing apparatus according to any one of claims 1 to 10, wherein
the parallax image is generated by collecting light reception signals from a plurality of sub-pixels included in the one photoelectric converter for respective partial pupil regions different from each other in an imaging optical system, and
the composite image is generated by collecting light reception signals from all the sub-pixels included in the plurality of photoelectric converters.
12. The image processing apparatus according to claim 1, wherein the parallax images include first and second parallax images corresponding to light fluxes passing through mutually different partial pupil regions in an imaging optical system, respectively,
wherein the acquisition unit acquires the composite image and the first parallax image of the parallax images from the image pickup element including the plurality of photoelectric converters, and
the image processing unit performs the following operations:
correcting the first parallax image of the parallax images, and
generating the second parallax image by subtracting the corrected first parallax image from the composite image.
13. The image processing apparatus according to claim 1, wherein the image processing unit performs refocus processing on the synthesized image based on the corrected parallax image.
14. The image processing apparatus according to claim 1, wherein the image processing unit performs light amount correction processing on the parallax image based on the synthesized image.
15. The image processing apparatus according to claim 14, wherein the image processing unit performs the light amount correction processing on the parallax image for each color of the parallax image based on the synthesized image.
16. The image processing apparatus according to claim 14, wherein the image processing unit performs the light amount correction processing on the parallax image based on a projection signal of the synthesized image and a projection signal of the parallax image.
17. The image processing apparatus according to any one of claims 14 to 16, wherein the image processing unit corrects the parallax image based on the synthesized image after the light amount correction processing on the parallax image is performed, to reduce a defect included in the parallax image after the light amount correction processing is performed.
18. An image pickup apparatus comprising:
an image pickup element including a plurality of arrayed pixels, wherein the plurality of arrayed pixels are provided with a plurality of photoelectric converters, and each arrayed pixel of the plurality of arrayed pixels corresponds to each microlens of a plurality of microlenses;
an acquisition unit configured to acquire a parallax image generated based on a signal from one of the plurality of photoelectric converters corresponding to the same microlens of the image pickup element including a plurality of pixels provided with the plurality of photoelectric converters and acquire a composite image which is an image generated by synthesizing signals from the plurality of photoelectric converters including the one photoelectric converter; and
an image processing unit for calculating a correction value by using a signal of the synthesized image as a reference image and correcting the parallax image using the correction value,
wherein the image processing unit corrects the pixel value of the parallax image at a first position based on the pixel value of the synthesized image at the first position, the pixel value of the parallax image at a second position, and the pixel value of the synthesized image at the second position.
19. The image pickup apparatus according to claim 18, wherein the image pickup element includes the plurality of photoelectric converters for one microlens, and the microlenses are arranged in a two-dimensional manner.
20. An image processing method comprising the steps of:
acquiring a parallax image generated based on a signal from one of a plurality of photoelectric converters corresponding to the same microlens of an image pickup element including a plurality of pixels provided with the plurality of photoelectric converters and each of the plurality of pixels corresponding to each of the plurality of microlenses, and acquiring a composite image which is an image generated by synthesizing a plurality of signals from the plurality of photoelectric converters including the one photoelectric converter; and
calculating a correction value by using a signal of the synthesized image as a reference image, and correcting the parallax image using the correction value,
wherein the pixel value of the parallax image at a first position is corrected based on the pixel value of the synthesized image at the first position, the pixel value of the parallax image at a second position, and the pixel value of the synthesized image at the second position.
21. A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process comprising:
acquiring a parallax image generated based on a signal from one of a plurality of photoelectric converters corresponding to the same microlens of an image pickup element including a plurality of pixels provided with the plurality of photoelectric converters and each of the plurality of pixels corresponding to each of the plurality of microlenses, and acquiring a composite image which is an image generated by synthesizing a plurality of signals from the plurality of photoelectric converters including the one photoelectric converter; and
calculating a correction value by using a signal of the synthesized image as a reference image, and correcting the parallax image using the correction value,
wherein the pixel value of the parallax image at a first position is corrected based on the pixel value of the synthesized image at the first position, the pixel value of the parallax image at a second position, and the pixel value of the synthesized image at the second position.
CN202110790632.2A 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium Active CN113596431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110790632.2A CN113596431B (en) 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
JP2015-095348 2015-05-08
JP2015095348 2015-05-08
JP2016080328A JP6746359B2 (en) 2015-05-08 2016-04-13 Image processing device, imaging device, image processing method, program, and storage medium
JP2016-080328 2016-04-13
PCT/JP2016/002144 WO2016181620A1 (en) 2015-05-08 2016-04-21 Image processing device, imaging device, image processing method, program, and storage medium
CN202110790632.2A CN113596431B (en) 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium
CN201680026847.4A CN107960120B (en) 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201680026847.4A Division CN107960120B (en) 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium

Publications (2)

Publication Number Publication Date
CN113596431A true CN113596431A (en) 2021-11-02
CN113596431B CN113596431B (en) 2023-11-17

Family

ID=57248930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110790632.2A Active CN113596431B (en) 2015-05-08 2016-04-21 Image processing apparatus, image capturing apparatus, image processing method, and storage medium

Country Status (2)

Country Link
CN (1) CN113596431B (en)
WO (1) WO2016181620A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3387177B2 (en) * 1993-11-11 2003-03-17 ソニー株式会社 Shading correction circuit
JP2014086863A (en) * 2012-10-23 2014-05-12 Sony Corp Imaging device, image processing method, and program

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040145664A1 (en) * 2003-01-17 2004-07-29 Hirokazu Kobayashi Method and imaging apparatus for correcting defective pixel of solid-state image sensor, and method for creating pixel information
JP2007124056A (en) * 2005-10-25 2007-05-17 Canon Inc Image processor, control method and program
US20090129663A1 (en) * 2007-11-20 2009-05-21 Quanta Computer Inc. Method and circuit for correcting defect pixels in image signal
US20120113301A1 (en) * 2010-11-09 2012-05-10 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, and image processing method
CN103828361A (en) * 2011-09-21 2014-05-28 富士胶片株式会社 Image processing device, method, program and recording medium, stereoscopic image capture device, portable electronic apparatus, printer, and stereoscopic image player device
WO2013069445A1 (en) * 2011-11-11 2013-05-16 富士フイルム株式会社 Three-dimensional imaging device and image processing method
CN104221370A (en) * 2012-03-29 2014-12-17 富士胶片株式会社 Image processing device, imaging device, and image processing method
JP2014033415A (en) * 2012-08-06 2014-02-20 Canon Inc Image processor, image processing method and imaging apparatus
JP2015002400A (en) * 2013-06-14 2015-01-05 キヤノン株式会社 Shading correction apparatus, focus detection apparatus, imaging apparatus, shading correction method, program, and storage medium
US20140368696A1 (en) * 2013-06-18 2014-12-18 Canon Kabushiki Kaisha Image pickup apparatus, image pickup system, signal processing method, and non-transitory computer-readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fu Liqin; Han Yan; Chen Shuyue: "DR image positioning technology based on stereo vision", Journal of Basic Science and Engineering, no. 1
Gao Meng: "Camera settings in stereoscopic CG production", Radio & TV Broadcast Engineering, no. 12

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024016288A1 (en) * 2022-07-21 2024-01-25 北京小米移动软件有限公司 Photographic apparatus and control method

Also Published As

Publication number Publication date
CN113596431B (en) 2023-11-17
WO2016181620A1 (en) 2016-11-17

Similar Documents

Publication Publication Date Title
CN107465866B (en) Image processing apparatus and method, image capturing apparatus, and computer-readable storage medium
CN107960120B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
JP6239857B2 (en) Imaging apparatus and control method thereof
JP6381266B2 (en) IMAGING DEVICE, CONTROL DEVICE, CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
CN107431755B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
JP6700986B2 (en) Image processing device, imaging device, image processing method, and program
US11184521B2 (en) Focus detection apparatus, focus detection method, and storage medium
JP2015194736A (en) Imaging device and method for controlling the same
JP6254843B2 (en) Image processing apparatus and control method thereof
JP6285683B2 (en) Imaging apparatus and control method thereof
CN113596431B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
JP2015210285A (en) Imaging device, manufacturing method of the same, program thereof and recording medium
JP6789810B2 (en) Image processing method, image processing device, and imaging device
JP2015225310A (en) Image capturing device, control method therefor, program, and storage medium
JP2019114912A (en) Image processing device and image processing method, and imaging apparatus
JP6735621B2 (en) Image processing apparatus, control method thereof, program, and imaging apparatus
JP6765829B2 (en) Image processing device, control method of image processing device, imaging device
JP2015215395A (en) Imaging device, control device, control method, program, and storage medium
JP2019092215A (en) Image processing apparatus, imaging apparatus, image processing method, program, and storage medium
JP2017097142A (en) Control device, imaging apparatus, control method, program, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant