WO2015046147A1 - Imaging Apparatus and Image Processing Method - Google Patents

Imaging Apparatus and Image Processing Method

Info

Publication number
WO2015046147A1
WO2015046147A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
focus detection
imaging
output
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2014/075089
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
淳郎 岡澤
輝彬 山崎
武史 福冨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Priority to EP14849257.2A priority Critical patent/EP3051799A4/en
Priority to CN201480052931.4A priority patent/CN105580354B/zh
Publication of WO2015046147A1 publication Critical patent/WO2015046147A1/ja
Priority to US15/081,409 priority patent/US9503661B2/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/672 — Focus control based on the phase difference signals
    • H04N25/704 — Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H04N23/12 — Generating image signals from different wavelengths with one sensor only
    • H04N23/673 — Focus control based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H04N23/843 — Demosaicing, e.g. interpolating colour pixel values
    • H04N25/134 — Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements

Definitions

  • The present invention relates to an imaging apparatus and an image processing method for processing the pixel output of an image sensor that detects the focus state using some of its pixels as phase-difference focus detection elements.
  • Japanese Patent No. 3592147 proposes an imaging apparatus that detects a focus state by using a part of pixels of an imaging element as a focus detection element.
  • Japanese Patent No. 3592147 discloses an imaging device in which some pixels of the image sensor serve as focus detection pixels: subject light fluxes that have passed through different pupil regions, symmetric with respect to the optical axis center of the photographing lens, are imaged onto a plurality of focus detection pixels, and the focus state of the photographing lens is detected from the phase difference between the subject light fluxes.
  • In a focus detection pixel, for example, a part of the region is shielded from light so that the pixel receives only one of the subject light fluxes passing through the different pupil regions of the photographing lens. The focus detection pixel is therefore a defective pixel that cannot be used in an image as it is. The imaging apparatus disclosed in Japanese Patent Application Laid-Open No. 2010-062640 makes such pixels usable for recording and display by adjusting the gain of the focus detection pixel and interpolating it with surrounding pixels.
  • The pixel output correction method disclosed in Japanese Patent Application Laid-Open No. 2010-062640 determines whether the output contains high-frequency components from the standard deviation of the outputs of the pixels around the focus detection pixel, and changes, according to the result, the application ratio between the pixel output of the interpolation pixel and the pixel output of the focus detection pixel. With this method, however, the application ratio may not be calculated correctly for a subject containing a high-frequency repetitive pattern whose pitch is close to that of the pixel array of the image sensor. In that case, the focus detection pixels cannot produce a pixel output reflecting the structure of the subject, and image quality deteriorates significantly.
  • The present invention has been made in view of the above circumstances, and an object thereof is to provide an imaging apparatus that processes the pixel output from an image sensor having focus detection pixels, and an image processing method, capable of further suppressing the deterioration in image quality caused by the focus detection pixels.
  • An imaging apparatus according to one aspect of the present invention includes: an image sensor having imaging pixels and focus detection pixels; a high-frequency pattern detection unit that detects, based on the pixel outputs of a plurality of imaging pixels and focus detection pixels located around a focus detection pixel, the degree to which the subject image pattern received by the image sensor is high-frequency; an interpolation processing unit that obtains, by interpolation using the pixel outputs of the imaging pixels located around the focus detection pixel, an interpolation output corresponding to the pixel output of the focus detection pixel, and mixes the obtained interpolation output and the pixel output of the focus detection pixel with weighting; and an application determining unit that determines the mixing ratio used in the weighted mixing by the interpolation processing unit.
  • An image processing method according to another aspect processes the pixel output of an image sensor having imaging pixels and focus detection pixels: based on the pixel outputs of a plurality of imaging pixels and focus detection pixels located around a focus detection pixel, the degree to which the subject image pattern received by the image sensor is high-frequency is detected, and the mixing ratio for the interpolation output is determined based on that degree.
  • FIG. 1 is a block diagram illustrating a configuration of a digital camera as an example of an imaging apparatus according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of a pixel array of the image sensor.
  • FIG. 3 is a diagram illustrating a detailed configuration of the image processing unit.
  • FIG. 4 is a diagram illustrating a configuration of the interpolation determination processing unit.
  • FIG. 5A is a first diagram for explaining the phase shift.
  • FIG. 5B is a second diagram for explaining the phase shift.
  • FIG. 6 is a flowchart showing the moving image recording process.
  • FIG. 7A is a first diagram for describing high-frequency detection processing.
  • FIG. 7B is a second diagram for describing the high-frequency detection processing.
  • FIG. 1 is a block diagram illustrating a configuration of a digital camera (hereinafter simply referred to as a camera) as an example of an imaging apparatus according to an embodiment of the present invention.
  • a solid line with an arrow indicates a data flow
  • a broken line with an arrow indicates a control signal flow.
  • The camera 1 shown in FIG. 1 includes a photographic lens 11, an aperture 13, a mechanical shutter 15, a drive unit 17, an operation unit 19, an image sensor 21, an imaging control circuit 23, an A-AMP 25, an analog-to-digital converter (ADC) 27, a CPU 29, an image processing unit 31, a focus detection circuit 33, a video encoder 35, a display unit 37, a bus 39, a DRAM (Dynamic Random Access Memory) 41, a ROM (Read Only Memory) 43, and a recording medium 45.
  • the photographing lens 11 is a photographing optical system for forming an image from the subject 100 on the image sensor 21.
  • the taking lens 11 has a focus lens for adjusting the in-focus position, and may be configured as a zoom lens.
  • the diaphragm 13 is disposed on the optical axis of the photographic lens 11 and has a variable aperture. The diaphragm 13 limits the amount of light flux from the subject 100 that has passed through the photographing lens 11.
  • the mechanical shutter 15 is configured to be openable and closable. The mechanical shutter 15 adjusts the incident time of the subject light flux from the subject 100 to the image sensor 21 (exposure time of the image sensor 21). As the mechanical shutter 15, a known focal plane shutter, lens shutter, or the like can be employed.
  • the driving unit 17 controls driving of the photographing lens 11, the diaphragm 13, and the mechanical shutter 15 based on a control signal from the CPU 29.
  • the operation unit 19 includes various operation buttons such as a power button, a release button, a moving image button, a reproduction button, and a menu button, and various operation members such as a touch panel.
  • the operation unit 19 detects operation states of various operation members and outputs a signal indicating the detection result to the CPU 29.
  • the shooting mode of the camera 1 can be selected by the operation unit 19 of the present embodiment. That is, the user can select the shooting mode of the camera 1 from either the still image shooting mode or the moving image shooting mode by operating the operation unit 19.
  • the still image shooting mode is a shooting mode for shooting a still image
  • the moving image shooting mode is a shooting mode for shooting a moving image.
  • the image sensor 21 is disposed on the optical axis of the photographing lens 11, behind the mechanical shutter 15, and at a position where the subject light beam is imaged by the photographing lens 11.
  • the image sensor 21 is configured by two-dimensionally arranging photodiodes that constitute pixels.
  • the imaging device 21 in the present embodiment includes an imaging pixel for acquiring an image for recording and display and a focus detection pixel for performing focus detection.
  • the photodiode that constitutes the image sensor 21 generates a charge corresponding to the amount of received light.
  • the electric charge generated in the photodiode is accumulated in a capacitor connected to each photodiode.
  • the electric charge accumulated in this capacitor is read out as an image signal.
  • the image sensor 21 in the present embodiment has a plurality of different charge readout methods.
  • the charge accumulated in the image sensor 21 is read according to a control signal from the image capture control circuit 23.
  • a Bayer array color filter is disposed on the front surface of the photodiode constituting the pixel.
  • the Bayer array has a line in which R pixels and G (Gr) pixels are alternately arranged in the horizontal direction, and a line in which G (Gb) pixels and B pixels are alternately arranged.
  • The imaging control circuit 23 sets the drive mode of the image sensor 21 according to the control signal from the CPU 29 and controls readout of the image signal from the image sensor 21 by the method corresponding to the set drive mode. In drive modes that require real-time readout, such as live view display or moving image recording, the pixel data of a plurality of same-color pixels are mixed and read, or only the pixel data of specific pixels are read (thinning), so that readout is fast. In drive modes where image quality matters more than real-time performance, such as still image recording, the pixel data of all pixels are read without mixing or thinning, so that resolution is maintained.
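The two readout strategies can be sketched as follows. The function names, the `skip` factor, and the exact row/column selection are illustrative assumptions, not the patent's circuitry; the only point taken from the text is that fast modes read a subset of pixels while still-image mode reads everything.

```python
import numpy as np

def full_readout(sensor):
    """Still-image drive mode: read the pixel data of all pixels."""
    return sensor.copy()

def thinned_readout(sensor, skip=2):
    """Live-view / movie drive mode (illustrative): keep one pair of Bayer
    rows and columns out of every `skip` pairs, so readout is faster while
    the R/Gr/Gb/B colour-filter pattern is preserved."""
    h, w = sensor.shape
    rows = [y for y in range(h) if (y // 2) % skip == 0]
    cols = [x for x in range(w) if (x // 2) % skip == 0]
    return sensor[np.ix_(rows, cols)]
```

Keeping whole row/column pairs (rather than every other line) is what preserves the 2x2 Bayer unit in the thinned output.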
  • the A-AMP 25 amplifies the image signal read from the image sensor 21 according to the control of the imaging control circuit 23.
  • the ADC 27 that functions as an imaging unit together with the imaging device 21, the imaging control circuit 23, and the A-AMP 25 converts the image signal output from the A-AMP 25 into a digital image signal (pixel data).
  • In this specification, a collection of a plurality of pixel data is referred to as imaging data.
  • the CPU 29 performs overall control of the camera 1 in accordance with a program stored in the ROM 43.
  • the image processing unit 31 performs various types of image processing on the captured data to generate image data. For example, when recording a still image, the image processing unit 31 performs still image recording image processing to generate still image data. Similarly, when recording a moving image, the image processing unit 31 performs moving image recording image processing to generate moving image data. Furthermore, the image processing unit 31 performs display image processing during live view display to generate display image data. The configuration of the image processing unit 31 will be described in detail later.
  • The focus detection circuit 33 acquires pixel data from the focus detection pixels and, based on the acquired pixel data, calculates the defocus direction and defocus amount with respect to the in-focus position of the photographing lens 11 using a known phase difference method.
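A phase difference method of this kind can be sketched as a one-dimensional correlation search between the signals of the two pixel groups. The SAD (sum of absolute differences) criterion, the search range, and the signal layout are illustrative assumptions; the conversion from pixel shift to defocus amount is lens-dependent and omitted.

```python
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Estimate the phase difference (in pixels) between the signals of the
    left-aperture and right-aperture focus detection pixel rows by finding
    the shift that minimises the mean absolute difference.  The defocus
    amount would then be this shift times a lens-dependent conversion
    coefficient (not modelled here); the sign gives the defocus direction."""
    n = len(left)
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        i0, i1 = max(0, -s), min(n, n - s)       # overlapping sample range
        err = np.abs(left[i0:i1] - right[i0 + s:i1 + s]).mean()
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```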
  • the video encoder 35 converts the display image data generated by the image processing unit 31 into video data, inputs the video data to the display unit 37, and causes the display unit 37 to display an image.
  • the display unit 37 is a display unit such as a liquid crystal display or an organic EL display, and is disposed on the back of the camera 1, for example.
  • the display unit 37 displays an image according to the operation of the video encoder 35.
  • the display unit 37 is used for live view display, display of recorded images, and the like.
  • the bus 39 is connected to the ADC 27, the CPU 29, the image processing unit 31, the focus detection circuit 33, the DRAM 41, the ROM 43, and the recording medium 45, and functions as a transfer path for transferring various data generated in these blocks.
  • the DRAM 41 is an electrically rewritable memory, and temporarily stores various data such as the above-described imaging data (pixel data), recording image data, display image data, and processing data in the CPU 29.
  • An SDRAM (Synchronous Dynamic Random Access Memory) may also be used as the DRAM 41.
  • the ROM 43 is a non-volatile memory such as a mask ROM or a flash memory.
  • the ROM 43 stores various data such as a program used by the CPU 29 and adjustment values of the camera 1.
  • the recording medium 45 is configured to be built in or loaded in the camera 1 and records recording image data as an image file of a predetermined format.
  • FIG. 2 is a diagram illustrating an example of a pixel array of the image sensor 21. Further, on the right side of FIG. 2, a part of the pixels is shown in an enlarged manner.
  • FIG. 2 shows an example of the Bayer arrangement, but the arrangement of the color filters is not limited to the Bayer arrangement, and various arrangements can be applied.
  • the Bayer array image sensor 21 includes a pixel row in which R pixels and G (Gr) pixels are alternately arranged in the horizontal direction, and a pixel row in which G (Gb) pixels and B pixels are alternately arranged. And have.
  • a set of four pixels that is, a Gr pixel, an R pixel, a Gb pixel, and a B pixel shown in the enlarged view on the right side, is repeatedly arranged in the horizontal and vertical directions.
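The repeating four-pixel pattern can be expressed as a small lookup. The phase (R in the top-left corner of the 2x2 unit) is an assumption for illustration; the text only specifies the two alternating line types.

```python
def bayer_color(row, col):
    """Colour of the pixel at (row, col) in a Bayer array like FIG. 2:
    even rows alternate R and Gr, odd rows alternate Gb and B.
    The choice of which colour leads each row is assumed."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "Gr"
    return "Gb" if col % 2 == 0 else "B"
```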
  • focus detection pixels 21b are arranged at positions of some imaging pixels 21a.
  • the focus detection pixel is, for example, a pixel in which one of the left and right regions is shielded by a light shielding film.
  • In FIG. 2, rows of focus detection pixels whose left region is shielded (hereinafter referred to as right-aperture focus detection pixels) and rows of focus detection pixels whose right region is shielded (hereinafter referred to as left-aperture focus detection pixels) are arranged.
  • the phase difference can be detected by the pair of the focus detection pixels in the A row and the focus detection pixels in the B row in FIG.
  • the phase difference can also be detected by a pair of focus detection pixels in the C row and focus detection pixels in the D row.
  • In FIG. 2, the light-shielded region of each focus detection pixel is the left or right region, which makes it possible to detect a horizontal phase difference. A vertical or oblique phase difference can likewise be detected by making the shielded region an upper, lower, or oblique region. As long as it has a certain area, the shielded region need not be half of the pixel region. Although the focus detection pixels in FIG. 2 are arranged at G pixels, they may instead be arranged at R pixels or B pixels. The example of FIG. 2 performs pupil division by shielding a partial area of each focus detection pixel.
  • The focus detection pixel need only selectively receive one of the pair of subject light fluxes that have passed through different pupil regions of the photographing lens 11. For this reason, instead of shielding a partial region, pupil division may be performed using, for example, a pupil-division microlens.
  • FIG. 2 shows an example in which focus detection pixels are arranged in a cycle of four pixels along the horizontal direction. The period in which the focus detection pixels are arranged is not limited to a specific period.
  • FIG. 3 is a diagram showing a detailed configuration of the image processing unit 31.
  • The image processing unit 31 includes a white balance (WB) correction processing unit 311, a gain amount estimation unit 312, a gain correction unit 313, an interpolation determination processing unit 314, an interpolation processing unit 315, a synchronization processing unit 316, a luminance characteristic conversion unit 317, an edge enhancement processing unit 318, a noise reduction (NR) processing unit 319, and a color reproduction processing unit 320.
  • the WB correction processing unit 311 performs white balance correction processing for correcting the color balance of the image by amplifying each color component of the image data with a predetermined gain amount.
  • the gain amount estimation unit 312 estimates a gain amount for correcting the pixel output of the focus detection pixel in the gain correction unit 313. This gain amount is estimated according to the light amount reduction amount of the focus detection pixel with respect to the imaging pixel. The light amount reduction amount of the focus detection pixel is calculated based on the ratio between the pixel output of the focus detection pixel and the pixel output of the imaging pixel in the vicinity of the focus detection pixel. The gain correction unit 313 corrects the pixel output of the focus detection pixel according to the gain amount estimated by the gain amount estimation unit 312.
  • the interpolation determination processing unit 314 determines the application ratio of the pixel output of the focus detection pixel whose gain has been corrected by the gain correction unit 313.
  • the application ratio is, for example, a weighting coefficient when performing weighted addition between the pixel output of the focus detection pixel subjected to gain correction and the pixel output of the imaging pixels around the focus detection pixel.
  • the peripheral imaging pixels are, for example, four imaging pixels of the same color (the same component in the case of the Bayer array) around the focus detection pixel. Of course, the number of surrounding imaging pixels is not limited to four.
  • the application ratio is determined according to, for example, variation (standard deviation) in pixel output of imaging pixels around the phase difference detection pixel.
  • the interpolation processing unit 315 performs an interpolation process in which the pixel output of the focus detection pixel gain-corrected by the gain correction unit 313 and the pixel output of the surrounding imaging pixels are weighted and added according to the application ratio determined by the interpolation determination processing unit 314. Do.
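The weighted addition, with a hypothetical variation-based application ratio, can be sketched as follows. The linear mapping from standard deviation to weight and the `scale` parameter are illustrative assumptions; the text only states that the ratio is determined from the variation of the surrounding imaging-pixel outputs.

```python
import statistics

def application_ratio(neighbor_outputs, scale=0.01):
    """Map the variation (standard deviation) of the surrounding same-color
    imaging-pixel outputs to a mixing weight in [0, 1]: flat surroundings
    favour the interpolated value, strong variation favours the
    gain-corrected focus detection pixel.  The mapping is illustrative."""
    sigma = statistics.pstdev(neighbor_outputs)
    return min(1.0, sigma * scale)

def interpolate_af_pixel(af_output, neighbor_outputs, ratio):
    """Weighted addition of the gain-corrected focus detection pixel output
    and the interpolation output of the surrounding imaging pixels."""
    interpolated = sum(neighbor_outputs) / len(neighbor_outputs)
    return ratio * af_output + (1.0 - ratio) * interpolated
```

With four identical neighbours the ratio is 0 and the pixel is fully replaced by the interpolated value; as local variation grows, more of the focus detection pixel's own (gain-corrected) output is kept.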
  • The synchronization processing unit 316 converts imaging data in which each pixel corresponds to one color component, such as the imaging data output via the Bayer-array image sensor 21, into image data in which each pixel corresponds to a plurality of color components.
  • the luminance characteristic conversion unit 317 converts the luminance characteristic (gamma characteristic) of the image data so as to be suitable for display and recording.
  • The edge enhancement processing unit 318 emphasizes edge (contour) components in the image data by multiplying an edge signal, extracted from the image data with a bandpass filter or the like, by an edge enhancement coefficient and adding the result to the original image data.
  • the NR processing unit 319 removes noise components in the image data using a coring process or the like.
  • the color reproduction processing unit 320 performs various processes for making color reproduction of image data appropriate. As this process, for example, there is a color matrix calculation process.
  • the color matrix calculation process is a process of multiplying image data by a color matrix coefficient corresponding to, for example, a white balance mode.
  • the color reproduction processing unit 320 corrects saturation and hue.
  • FIG. 4 is a diagram illustrating a configuration of the interpolation determination processing unit 314.
  • the interpolation determination processing unit 314 includes a high frequency pattern detection unit 3141 and an application determination unit 3142.
  • the high frequency pattern detection unit 3141 detects the degree to which the subject image pattern in the imaging data has a high frequency.
  • The application determination unit 3142 calculates the application ratio of the pixel output of the focus detection pixel whose gain has been corrected by the gain correction unit 313, based on the degree to which the subject image pattern detected by the high frequency pattern detection unit 3141 is high-frequency. The method of detecting this degree and of calculating the application ratio will be described later.
  • FIG. 5A shows an imaging state of an image in the imaging pixel 21a.
  • FIG. 5B shows an image formation state of the image in the focus detection pixel 21b.
  • When the photographing lens 11 is in focus, the pair of subject light fluxes emitted from the subject and passing through different pupil regions symmetric with respect to the optical axis center of the photographing lens 11 forms an image at the same position on the image sensor 21.
  • the peak position of the subject image formed on the imaging pixel 21a matches the peak position of the subject image formed on the focus detection pixel 21b.
  • On the imaging pixel 21a, both of the paired subject light fluxes that have passed through the different pupil regions are incident, so there is no reduction in its amount of light.
  • On the focus detection pixel 21b, as shown in FIG. 5B, only one of the paired subject light fluxes is incident, so its amount of light is reduced.
  • When the photographing lens 11 is not in focus, the pair of subject light fluxes emitted from the subject and passing through different pupil regions forms images at different positions on the image sensor 21; that is, a phase difference arises between the subject images formed by the paired light fluxes.
  • By detecting this phase difference from the pixel outputs of the right-aperture and left-aperture focus detection pixels, the defocus amount and defocus direction of the photographing lens 11 are obtained.
  • Both subject light fluxes are incident on the imaging pixel 21a, so although its amount of light does not decrease, the image is blurred because the paired fluxes are incident at different positions.
  • The phenomenon in which the peak position of the subject image is shifted in this way is referred to as a phase shift.
  • the image processing unit 31 corrects the influence of such moire.
  • FIG. 6 is a flowchart showing a moving image recording process by the imaging apparatus.
  • the process of the flowchart shown in FIG. 6 is executed by the CPU 29 based on a program stored in the ROM 43.
  • the process shown in FIG. 6 can also be applied to a still image recording process and a live view display process.
  • The CPU 29 causes the image sensor 21 to perform imaging (exposure) (step S101).
  • An image signal obtained by imaging is read out from the image sensor 21 in accordance with a readout method corresponding to a preset drive mode.
  • the read image signal is amplified by the A-AMP 25, digitized by the ADC 27, and temporarily stored in the DRAM 41 as imaging data.
  • the CPU 29 performs focus detection processing (step S102).
  • the CPU 29 causes the focus detection circuit 33 to execute focus detection processing.
  • The focus detection circuit 33 reads out the pixel data corresponding to the focus detection pixels from the imaging data temporarily stored in the DRAM 41, and calculates the defocus direction and defocus amount of the photographing lens 11 from this pixel data using a known phase difference method.
  • the CPU 29 controls the drive unit 17 based on the defocus direction and the defocus amount of the photographing lens 11 detected by the focus detection circuit 33 to focus the photographing lens 11.
  • the CPU 29 causes the image processing unit 31 to execute image processing.
  • the WB correction processing unit 311 of the image processing unit 31 performs white balance correction processing on the pixel data (step S103).
  • the gain amount estimation unit 312 performs gain estimation processing (step S104).
  • the gain amount is estimated from, for example, the ratio or difference between the pixel output of the focus detection pixel and the pixel output of the same color imaging pixels around the focus detection pixel.
  • the pixel output ratio Dif_p is calculated according to the following (Equation 1).
  • Gr1 in (Equation 1) denotes the pixel output of the imaging pixel, and Gr2 denotes the pixel output of the focus detection pixel.
  • The imaging pixel Gr1 is an imaging pixel of the same color located in the direction orthogonal to the phase-difference detection direction of the focus detection pixel: for example, the pixel two pixels above or below the focus detection pixel Gr2.
  • the amount of pixel shift is not limited to two pixels.
  • The gain correction process multiplies the pixel output of each focus detection pixel by the value obtained by (Equation 1); this correction compensates the light-amount decrease in the pixel output of each focus detection pixel.
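Since (Equation 1) itself is not reproduced in this excerpt, the sketch below assumes Dif_p = Gr1 / Gr2, which is consistent with the statements that Dif_p is a ratio of the two pixel outputs and that multiplying the focus detection pixel output by it compensates the light-amount decrease; the actual patented formula may differ.

```python
def estimate_gain(gr1, gr2):
    """Assumed form of (Equation 1): the ratio of the output of a nearby
    same-color imaging pixel (Gr1) to that of the focus detection pixel
    (Gr2).  The exact formula is not reproduced in this excerpt."""
    return gr1 / gr2

def gain_correct(gr2, dif_p):
    """Multiply the focus detection pixel output by the estimated gain,
    compensating the light-amount drop caused by the light shielding."""
    return gr2 * dif_p
```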
  • the high frequency pattern detection unit 3141 of the interpolation determination processing unit 314 performs high frequency pattern detection processing for detecting the degree to which the subject image pattern in the imaging data is high frequency (step S106).
  • An example of the high-frequency pattern detection process will be described.
  • The state in which “the subject image pattern has a high degree of high frequency” refers to a state in which a repetitive pattern due to moire or the like occurs in the subject image.
  • As shown in FIG. 7A, the high frequency pattern detection unit 3141 first calculates the arithmetic mean of the pixel outputs of the two focus detection pixels Gr (AF) that straddle, in the vertical direction, the focus detection pixel Gr (AF) to be interpolated.
  • It then calculates, for the focus detection pixels Gr (AF) at positions shifted by two pixels to the right and to the left along the horizontal direction (the phase-difference detection direction), the arithmetic mean of the pixel outputs of the two focus detection pixels Gr (AF) straddling each of them in the vertical direction.
  • the high frequency pattern detection unit 3141 integrates the three arithmetic average values obtained by the above calculation.
  • Each arithmetic average value represents an average change amount of the pixel output of the focus detection pixel viewed in a direction perpendicular to the detection direction of the phase difference.
  • the result of integrating these arithmetic average values represents the pattern of the subject image formed on the focus detection pixel.
  • Similarly, the high frequency pattern detection unit 3141 calculates the arithmetic average value of the pixel outputs of the two imaging pixels Gb straddling, in the vertical direction, the imaging pixel Gb near the focus detection pixel Gr (AF) to be interpolated (slightly to the lower right in the drawing). Further, for each of the imaging pixels Gb at positions shifted by two pixels to the right and to the left along the horizontal direction from the imaging pixel Gb for which the arithmetic average value was calculated first, the high frequency pattern detection unit 3141 calculates the arithmetic average value of the pixel outputs of the two imaging pixels Gb straddling that pixel in the vertical direction.
  • the high frequency pattern detection unit 3141 integrates the three arithmetic average values obtained by the above calculation.
  • Each arithmetic mean value represents an average change amount of the pixel output of the imaging pixel viewed in a direction perpendicular to the phase difference detection direction.
  • the result of integrating these arithmetic average values represents the pattern of the subject image formed on the imaging pixel.
  • The high frequency pattern detection unit 3141 then calculates, as an evaluation value indicating the degree to which the subject image pattern is high frequency, the absolute difference between the integrated value calculated for the focus detection pixels Gr (AF) and the integrated value calculated for the imaging pixels Gb.
  • This evaluation value increases as the difference between the change in pixel output at the focus detection pixels Gr (AF) and the change in pixel output at the imaging pixels Gb increases. In the case of a repetitive pattern, the pixel output change of the focus detection pixels differs greatly from that of the surrounding imaging pixels, so a higher evaluation value indicates a higher degree of high frequency in the subject image pattern, that is, a higher possibility that the subject image is a repetitive pattern.
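The detection scheme described above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the NumPy planes, the helper names, and the fixed two-pixel shifts are assumptions, with one plane standing in for the focus detection pixel outputs and the other for the imaging pixel Gb outputs.

```python
import numpy as np

def vertical_pair_average(plane, row, col, dy=2):
    """Arithmetic average of the two pixel outputs straddling (row, col)
    in the vertical direction."""
    return (plane[row - dy, col] + plane[row + dy, col]) / 2.0

def high_frequency_evaluation(af_plane, gb_plane, row, col, shift=2):
    """Evaluation value for the degree of high frequency: three
    vertical-pair averages (at the target column and at columns shifted
    left and right) are integrated for the focus detection pixels and
    for the imaging pixels, and the absolute difference of the two
    integrated values is the evaluation value."""
    af_integral = sum(vertical_pair_average(af_plane, row, col + dx)
                      for dx in (-shift, 0, shift))
    gb_integral = sum(vertical_pair_average(gb_plane, row, col + dx)
                      for dx in (-shift, 0, shift))
    return abs(af_integral - gb_integral)
```

A flat subject yields an evaluation value of zero, while a striped (repetitive) pattern that affects the two pixel groups differently yields a positive value.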
  • If the pixel output of a focus detection pixel is corrected using the surrounding pixels in such a case, a large error occurs in the corrected pixel output due to the influence of the surrounding pixels. Therefore, in the present embodiment, when the subject image has a high degree of high frequency, the proportion of interpolation processing using the surrounding pixels is reduced.
  • After the high-frequency pattern detection process, the application determination unit 3142 of the interpolation determination processing unit 314 performs gain application determination processing for determining the application ratio of the pixel output of the focus detection pixel whose gain has been corrected by the gain correction unit 313 (step S107).
  • In this process, the application determination unit 3142 first calculates a provisional application ratio according to the variation (standard deviation) of the pixel outputs of the imaging pixels around the focus detection pixel. The application determination unit 3142 then adjusts this ratio so that the application ratio of the pixel output of the gain-corrected focus detection pixel increases as the evaluation value increases, that is, as the degree of high frequency of the subject image pattern increases. For example, the application ratio of the pixel output of the gain-corrected focus detection pixel is varied linearly with the magnitude of the evaluation value.
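One way to realize such a determination is sketched below. The standard-deviation scaling, the linear coefficient applied to the evaluation value, and the clamp to the range [0, 1] are illustrative assumptions, not values taken from the embodiment.

```python
import numpy as np

def application_ratio(surrounding_outputs, evaluation_value,
                      sigma_scale=0.01, eval_scale=0.001):
    """Application ratio (0..1) of the gain-corrected focus detection
    pixel output: a provisional ratio derived from the variation
    (standard deviation) of the surrounding imaging pixel outputs is
    increased linearly with the high-frequency evaluation value, then
    clamped to [0, 1]."""
    provisional = np.std(surrounding_outputs) * sigma_scale
    ratio = provisional + evaluation_value * eval_scale
    return float(min(max(ratio, 0.0), 1.0))
```

Uniform surroundings with a zero evaluation value give a ratio of 0, while a large evaluation value saturates the ratio at 1 so that the gain-corrected output is used as-is.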
  • After the gain application determination process, the interpolation processing unit 315 performs interpolation processing in which the pixel output of the focus detection pixel gain-corrected by the gain correction unit 313 and the pixel outputs of the surrounding imaging pixels are weighted and added according to the application ratio determined by the interpolation determination processing unit 314 (step S108).
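The weighted addition in step S108 amounts to blending two candidate outputs. In the sketch below the interpolation from the surrounding imaging pixels is simplified to a plain average, which is an assumption for illustration:

```python
def interpolate_af_pixel(gain_corrected_output, neighbor_outputs, ratio):
    """Weighted addition: `ratio` weights the gain-corrected focus
    detection pixel output, and (1 - ratio) weights the interpolation
    value computed from the surrounding imaging pixels (here a plain
    average stands in for that interpolation)."""
    neighbor_interp = sum(neighbor_outputs) / len(neighbor_outputs)
    return ratio * gain_corrected_output + (1.0 - ratio) * neighbor_interp
```

With a ratio of 1 the gain-corrected output is used unchanged; with a ratio of 0 the pixel is fully replaced by the neighbor interpolation.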
  • the image processing unit 31 executes image processing after the interpolation processing (step S109).
  • the CPU 29 records the image data temporarily stored in the DRAM 41 as a result of the image processing on the recording medium 45 (step S110).
  • the CPU 29 determines whether or not to stop moving image recording (step S111).
  • the CPU 29 determines the operation state of the release button of the operation unit 19. That is, when the release button is pressed again, the CPU 29 determines to stop moving image recording.
  • When it is determined in step S111 that the moving image recording is not to be stopped, the CPU 29 returns the process to step S101 and continues the moving image recording. On the other hand, when it is determined in step S111 that the moving image recording is to be stopped, the CPU 29 ends the process of FIG.
  • As described above, in the present embodiment, the degree to which the subject image pattern around a focus detection pixel is a high-frequency pattern is determined, and when the subject image pattern is a high-frequency pattern to a certain degree, the application ratio of the pixel output of the gain-corrected focus detection pixel is increased relative to the interpolation output from the imaging pixels around the focus detection pixel. As a result, it is possible to reduce the image quality degradation caused by applying interpolation processing when the subject image is a repetitive pattern.
  • the arithmetic average value of the pixel outputs of the focus detection pixels two pixels above and two pixels below the focus detection pixels to be subjected to the interpolation processing is calculated.
  • the pixel shift amount from the focus detection pixel to be interpolated is not limited to two pixels.
  • For example, the arithmetic average value of the pixel outputs of the focus detection pixels four pixels above and four pixels below may be calculated.
  • the focus detection pixel for obtaining the arithmetic mean value may be a focus detection pixel straddling another focus detection pixel shifted in the horizontal direction from the focus detection pixel to be interpolated.
  • In the above embodiment, the arithmetic average value is calculated for the imaging pixel shifted by one pixel in the horizontal direction from the focus detection pixel to be interpolated.
  • However, the amount of horizontal pixel shift is not limited to one pixel. Further, as a result of the pixel shift in the horizontal direction, the color of the pixel for which the arithmetic average value is obtained may differ from that of the focus detection pixel, or the pixel for which the arithmetic average value is obtained may itself be a focus detection pixel.
  • the pixel shift amount may be adaptively changed according to various conditions such as the drive mode of the image sensor 21.
  • For example, when the drive mode of the image sensor 21 is the moving image recording drive mode, or during live view display, the degree to which the subject image pattern is high frequency can be determined with reasonable accuracy by reducing the pixel shift amount.
  • an arithmetic average value of pixel outputs of pixels arranged in the vertical direction is calculated.
  • This is because the phase difference detection direction is the horizontal direction.
  • If the phase difference detection direction is the vertical direction, the degree to which the subject image pattern is high frequency is determined by calculating the arithmetic average value of pixels arranged in the horizontal direction. That is, it suffices to calculate the arithmetic average value of pixels arranged in the direction perpendicular to the phase difference detection direction.
  • In the above embodiment, the arithmetic average value of pixels arranged in the direction perpendicular to the phase difference detection direction is calculated; however, the value is not limited to an arithmetic average, and a weighted average value may be used instead.
  • Similarly, the evaluation value may be a squared difference value instead of an absolute difference value. In this way, the evaluation value and the like are calculated by appropriately combining the four arithmetic operations.
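These variants amount to swapping the averaging and difference operators; the weighting coefficients below are arbitrary illustrative values, not values from the embodiment:

```python
def weighted_vertical_average(upper, lower, w_upper=0.5, w_lower=0.5):
    """Variant: a weighted average in place of the plain arithmetic
    average of the two vertically straddling pixel outputs."""
    return w_upper * upper + w_lower * lower

def squared_difference_evaluation(af_integral, gb_integral):
    """Variant: a squared difference in place of the absolute
    difference between the two integrated values."""
    return (af_integral - gb_integral) ** 2
```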
  • Each process according to the above-described embodiment can be stored as a program that can be executed by the CPU 29.
  • This program can be distributed by being stored in storage media of external storage devices, such as memory cards (ROM cards, RAM cards, etc.), magnetic disks (floppy disks, hard disks, etc.), optical disks (CD-ROM, DVD, etc.), and semiconductor memories.
  • the CPU 29 can execute the above-described processing by reading a program stored in the storage medium of the external storage device and controlling the operation by the read program.

PCT/JP2014/075089 2013-09-27 2014-09-22 Imaging device and image processing method Ceased WO2015046147A1 (ja)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP14849257.2A EP3051799A4 (en) 2013-09-27 2014-09-22 Imaging device and image processing method
CN201480052931.4A CN105580354B (zh) 2013-09-27 2014-09-22 摄像装置和图像处理方法
US15/081,409 US9503661B2 (en) 2013-09-27 2016-03-25 Imaging apparatus and image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013202540A JP5775918B2 (ja) 2013-09-27 Imaging device, image processing method, and image processing program
JP2013-202540 2013-09-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/081,409 Continuation US9503661B2 (en) 2013-09-27 2016-03-25 Imaging apparatus and image processing method

Publications (1)

Publication Number Publication Date
WO2015046147A1 (ja) 2015-04-02

Family

ID=52743283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/075089 Ceased WO2015046147A1 (ja) 2013-09-27 2014-09-22 Imaging device and image processing method

Country Status (5)

Country Link
US (1) US9503661B2 (en)
EP (1) EP3051799A4 (en)
JP (1) JP5775918B2 (ja)
CN (1) CN105580354B (zh)
WO (1) WO2015046147A1 (ja)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102346622B1 (ko) * 2014-08-04 2022-01-04 LG Innotek Co., Ltd. Image sensor and imaging device including the same
JP6584059B2 (ja) * 2014-09-26 2019-10-02 Canon Inc Imaging apparatus, control method therefor, program, and storage medium
JP6981410B2 (ja) * 2016-06-28 2021-12-15 Sony Group Corp Solid-state imaging device, electronic apparatus, lens control method, and vehicle
WO2018181164A1 (ja) 2017-03-30 2018-10-04 Fujifilm Corp Imaging device and image processing method
EP4518311A4 (en) * 2022-04-25 2025-05-14 Beijing Xiaomi Mobile Software Co., Ltd. PHOTOGRAPHY DEVICES AND METHOD FOR CONTROLLING THEM

Citations (7)

Publication number Priority date Publication date Assignee Title
JP3592147B2 (ja) 1998-08-20 2004-11-24 Canon Inc Solid-state imaging device
JP2010062640A (ja) 2008-09-01 2010-03-18 Canon Inc Imaging apparatus, method of controlling imaging apparatus, and program
JP2010091848A (ja) * 2008-10-09 2010-04-22 Nikon Corp Focus detection device and imaging device
JP2010181485A (ja) * 2009-02-03 2010-08-19 Nikon Corp Imaging device and image sensor
JP2010271670A (ja) * 2009-05-25 2010-12-02 Canon Inc Imaging apparatus
JP2011081271A (ja) * 2009-10-08 2011-04-21 Canon Inc Imaging apparatus
JP2011124704A (ja) * 2009-12-09 2011-06-23 Canon Inc Image processing apparatus

Family Cites Families (23)

Publication number Priority date Publication date Assignee Title
US20020135683A1 (en) * 1999-12-20 2002-09-26 Hideo Tamama Digital still camera system and method
JP4310317B2 (ja) * 2006-02-06 2009-08-05 Canon Inc Visible component ratio calculation method and optical apparatus using the same
KR100976284B1 (ko) * 2007-06-07 2010-08-16 Toshiba Corp Imaging apparatus
JP4823167B2 (ja) * 2007-08-10 2011-11-24 Canon Inc Imaging apparatus
WO2009022634A1 (en) * 2007-08-10 2009-02-19 Canon Kabushiki Kaisha Image-pickup apparatus and control method thereof
KR101544914B1 (ko) * 2009-02-12 2015-08-17 Samsung Electronics Co., Ltd. Method of generating an interpolated pixel
JP5476810B2 (ja) * 2009-06-23 2014-04-23 Nikon Corp Imaging apparatus
JP5045801B2 (ja) * 2009-09-09 2012-10-10 Nikon Corp Focus detection device, photographing lens unit, imaging device, and camera system
JP5212396B2 (ja) * 2010-02-10 2013-06-19 Nikon Corp Focus detection device
US8702372B2 (en) * 2010-05-03 2014-04-22 Bha Altair, Llc System and method for adjusting compressor inlet fluid temperature
JP5642433B2 (ja) * 2010-06-15 2014-12-17 Fujifilm Corp Imaging device and image processing method
JP5473977B2 (ja) * 2011-04-14 2014-04-16 Canon Inc Imaging apparatus and camera system
JP5956782B2 (ja) * 2011-05-26 2016-07-27 Canon Inc Image sensor and imaging apparatus
US20130002936A1 (en) * 2011-06-30 2013-01-03 Nikon Corporation Image pickup apparatus, image processing apparatus, and storage medium storing image processing program
CN103842879B (zh) * 2011-09-30 2016-06-01 Fujifilm Corp Imaging device and method for calculating sensitivity ratio of phase difference pixels
JP5619294B2 (ja) * 2011-09-30 2014-11-05 Fujifilm Corp Imaging device and focusing parameter value calculation method
JP5888940B2 (ja) * 2011-11-11 2016-03-22 Olympus Corp Imaging device, control method of imaging device, and program
JP6239820B2 (ja) * 2011-12-19 2017-11-29 Canon Inc Imaging apparatus and control method thereof
CN108401114A (zh) * 2012-05-07 2018-08-14 Nikon Corp Focus detection device
WO2013190899A1 (ja) * 2012-06-19 2013-12-27 Fujifilm Corp Imaging device and automatic focus adjustment method
CN104854496B (zh) * 2012-11-22 2017-04-12 Fujifilm Corp Imaging device, defocus amount calculation method, and imaging optical system
WO2014091854A1 (ja) * 2012-12-11 2014-06-19 Fujifilm Corp Image processing device, imaging device, image processing method, and image processing program
JP2014179939A (ja) * 2013-03-15 2014-09-25 Sony Corp Signal processing device and signal processing method


Non-Patent Citations (1)

Title
See also references of EP3051799A4

Also Published As

Publication number Publication date
JP5775918B2 (ja) 2015-09-09
US20160212364A1 (en) 2016-07-21
EP3051799A1 (en) 2016-08-03
EP3051799A4 (en) 2017-05-03
US9503661B2 (en) 2016-11-22
JP2015070432A (ja) 2015-04-13
CN105580354B (zh) 2018-10-30
CN105580354A (zh) 2016-05-11

Similar Documents

Publication Publication Date Title
JP5529928B2 (ja) Imaging device and method of controlling imaging device
JP6460653B2 (ja) Image processing device, imaging device provided with the same, image processing method, and image processing program
CN104333680B (zh) Imaging device and image processing method
JP6021622B2 (ja) Image processing device and image processing method
JP6099536B2 (ja) Image processing device, image processing method, and image processing program
JP5826901B2 (ja) Imaging device and optical axis position calculation method
JP5775918B2 (ja) Imaging device, image processing method, and image processing program
JP5701942B2 (ja) Imaging device, camera system, and image processing method
JP6364259B2 (ja) Imaging device, image processing method, and image processing program
JP6270400B2 (ja) Image processing device, image processing method, and image processing program
JP5792349B2 (ja) Imaging device and method of controlling imaging device
US20170155882A1 (en) Image processing apparatus, image processing method, imaging apparatus, and recording medium

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201480052931.4

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14849257

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014849257

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014849257

Country of ref document: EP