WO2018196704A1 - Dual-core focus image sensor, focus control method therefor, and imaging device - Google Patents

Dual-core focus image sensor, focus control method therefor, and imaging device

Info

Publication number
WO2018196704A1
Authority
WO
WIPO (PCT)
Prior art keywords
focus
photosensitive
dual
core
output value
Prior art date
Application number
PCT/CN2018/084023
Other languages
English (en)
French (fr)
Inventor
曾元清
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP18790152.5A (published as EP3618425A4)
Publication of WO2018196704A1
Priority to US16/664,323 (published as US10893187B2)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N 25/704 Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28 Systems for automatic generation of focusing signals
    • G02B 7/285 Systems for automatic generation of focusing signals including two or more different focus detection devices, e.g. both an active and a passive focus detecting device
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28 Systems for automatic generation of focusing signals
    • G02B 7/36 Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 13/00 Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B 13/32 Means for focusing
    • G03B 13/34 Power focusing
    • G03B 13/36 Autofocus systems
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/14603 Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/14625 Optical elements or arrangements associated with the device
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/14625 Optical elements or arrangements associated with the device
    • H01L 27/14627 Microlenses
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14643 Photodiode arrays; MOS imagers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • The present application relates to the field of imaging device technologies, and in particular to a dual-core in-focus image sensor, a focus control method thereof, and an imaging device.
  • Dual-core full-pixel focusing technology has become the most advanced focusing technology on the market. Compared with contrast focusing, laser focusing, and phase focusing, dual-core full-pixel focusing offers a faster focusing speed and a wider focusing range. In addition, because dual-core full-pixel focusing "combines" the two "dual-core" photodiodes into one pixel for output during imaging, focusing performance can be ensured without affecting image quality.
  • The purpose of the present application is to solve at least one of the above technical problems at least to some extent.
  • The first object of the present application is to provide a focus control method for a dual-core in-focus image sensor, which can increase the amount of light reaching the focus pixels, effectively improve the focus speed in low-light environments, and ensure the accuracy of image color reproduction.
  • A second object of the present application is to propose a dual-core in-focus image sensor.
  • A third object of the present application is to propose an imaging apparatus.
  • A fourth object of the present application is to propose a mobile terminal.
  • To this end, a first aspect of the present application provides a focus control method for a dual-core in-focus image sensor. The dual-core in-focus image sensor includes: a photosensitive unit array; a filter unit array disposed on the photosensitive unit array; and a microlens array positioned above the filter unit array. The microlens array includes first microlenses and second microlenses; the first microlens is elliptical, one first microlens covers one white filter unit, and the white filter unit covers one focus photosensitive unit. The area of one white filter unit is one half of the focus photosensitive unit; the other half of the focus photosensitive unit is covered by a plurality of second microlenses, and one second microlens covers one dual-core focusing photosensitive pixel. The method includes: controlling the photosensitive unit array to enter a focus mode; reading first phase difference information of the focus photosensitive unit and second phase difference information of the dual-core focusing photosensitive pixels; and performing focus control based on the first phase difference information and the second phase difference information.
  • The focus control method of the dual-core in-focus image sensor of the embodiments of the present application is based on an elliptical first microlens covering one white filter unit: half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to the first phase difference information and the second phase difference information, the amount of light reaching the focus pixels can be increased and the focus speed in low-light environments effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • A second aspect of the present application provides a dual-core in-focus image sensor, including: a photosensitive unit array; a filter unit array disposed on the photosensitive unit array; and a microlens array positioned above the filter unit array. The microlens array includes first microlenses and second microlenses; the first microlens is elliptical, one first microlens covers one white filter unit, and the white filter unit covers one focus photosensitive unit. The area of one white filter unit is one half of the focus photosensitive unit; the other half of the focus photosensitive unit is covered by a plurality of second microlenses, and one second microlens covers one dual-core focusing photosensitive pixel.
  • By providing a microlens array including first microlenses and second microlenses, setting the first microlens to an elliptical shape, covering one half of the focus photosensitive unit with the white filter unit and the other half with a plurality of second microlenses, each of which covers one dual-core focusing photosensitive pixel, the amount of light reaching the focus pixels can be increased, providing a hardware basis for improving the focus speed in low-light environments while ensuring the accuracy of image color reproduction.
  • A third aspect of the present application provides an imaging apparatus, comprising: the dual-core in-focus image sensor according to the second aspect; and a control module. The control module controls the photosensitive unit array to enter a focus mode, reads the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performs focus control according to the first phase difference information and the second phase difference information.
  • The imaging apparatus of the embodiments of the present application is based on an elliptical first microlens covering one white filter unit: half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to the first phase difference information and the second phase difference information, the amount of light reaching the focus pixels can be increased and the focus speed in low-light environments effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • A fourth aspect of the present application provides a mobile terminal, which includes a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the mobile terminal; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the focus control method of the dual-core in-focus image sensor proposed in the first aspect.
  • The mobile terminal of the embodiments of the present application is based on an elliptical first microlens covering one white filter unit: half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to the first phase difference information and the second phase difference information, the amount of light reaching the focus pixels can be increased and the focus speed in low-light environments effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • FIG. 1 is a schematic structural view of a conventional dual-core in-focus image sensor
  • FIG. 2 is a cross-sectional view of a dual-core in-focus image sensor in accordance with one embodiment of the present application
  • FIG. 3 is a top plan view of a dual-core in-focus image sensor in accordance with an embodiment of the present application
  • FIG. 4 is a density distribution diagram of the first microlens arrangement
  • FIG. 5 is a flowchart of a focus control method of a dual-core in-focus image sensor according to an embodiment of the present application
  • FIG. 6 is a flowchart of a focus control method of a dual-core in-focus image sensor according to another embodiment of the present application.
  • FIG. 7 is a schematic diagram of acquiring a pixel value of a focus photosensitive unit by using an interpolation algorithm
  • FIG. 8 is a schematic structural diagram of an image forming apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • Dual-core full-pixel focusing technology is the most advanced focusing technology on the market.
  • The dual-core focusing sensor structure used in this focusing technology is shown in FIG. 1: each microlens corresponds to two photodiodes, "1" and "2". During imaging, the values of "1" and "2" are added to obtain a single pixel value; during focusing, the values of "1" and "2" are read out separately, and the driving amount and driving direction of the lens can be calculated from the phase difference between them. However, because each pixel's photodiode is split in two, the amount of light it receives is reduced, which makes dual-core focusing difficult in a low-light environment.
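The conventional dual-core readout just described can be sketched in Python; the patent contains no code, so the function names and sample values here are purely illustrative:

```python
# Illustrative sketch of conventional dual-core (dual-photodiode) readout:
# each pixel sits under one microlens and has two photodiodes, "1" and "2".

def imaging_value(diode_1, diode_2):
    """During imaging, the two photodiode values are added into one pixel value."""
    return diode_1 + diode_2

def phase_difference(diode_1, diode_2):
    """During focusing, the two values are read out separately; their difference
    carries the phase information used to drive the lens."""
    return diode_1 - diode_2

# Hypothetical readings from one dual-core pixel:
d1, d2 = 130, 110
print(imaging_value(d1, d2))     # combined pixel value for the image
print(phase_difference(d1, d2))  # signed phase signal for focusing
```

The sign of the difference indicates the driving direction of the lens and its magnitude relates to the driving amount, as the text above describes.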
  • Therefore, the present application proposes a focus control method for a dual-core in-focus image sensor, which can increase the amount of light reaching the focus pixels and effectively improve the focus speed in a low-light environment while ensuring the accuracy of image color reproduction.
  • the dual-core in-focus image sensor required for implementing the focus control method of the dual-core in-focus image sensor proposed in the present application is first described below.
  • FIG. 2 is a cross-sectional view of a dual-core in-focus image sensor in accordance with an embodiment of the present application
  • FIG. 3 is a top plan view of a dual-core in-focus image sensor in accordance with an embodiment of the present application.
  • the dual-core in-focus image sensor 100 includes a photosensitive cell array 10, a filter unit array 20, and a microlens array 30.
  • the filter unit array 20 is disposed on the photosensitive cell array 10, and the microlens array 30 is disposed on the filter unit array 20.
  • the microlens array 30 includes a first microlens 31 and a second microlens 32, and the first microlens 31 is provided in an elliptical shape.
  • a first microlens 31 covers a white filter unit 21, and a white filter unit 21 covers a focus photosensitive unit 11.
  • The area of one white filter unit 21 is one half of a focus photosensitive unit 11; the other half of the focus photosensitive unit 11 is covered by a plurality of second microlenses 32, one second microlens 32 covers one filter unit 22, and one filter unit 22 covers one dual-core focusing photosensitive pixel 12.
  • the arrangement of the dual-core focus photosensitive pixels 12 is a Bayer pattern.
  • Using the Bayer pattern allows conventional Bayer-pattern algorithms to be used for image signal processing, so no major adjustment of the hardware structure is required.
  • The dual-core focusing photosensitive pixel 12 has two photodiodes, a first photodiode 121 and a second photodiode 122, corresponding respectively to the "1" and "2" of each dual-core focusing photosensitive pixel 12 in FIG. 3.
  • The focus photosensitive unit 11 includes N*N photosensitive pixels 110, and the white filter unit 21 may cover the upper half, the lower half, the left half, or the right half of the focus photosensitive unit 11; the present application does not limit the position covered by the white filter unit 21 within the focus photosensitive unit 11.
  • As shown in FIG. 3, the focus photosensitive unit 11 includes 2*2 photosensitive pixels 110; the white filter unit 21 (W in the figure) covers the upper half of the focus photosensitive unit 11, and the lower half of the focus photosensitive unit 11 is covered by two second microlenses 32, one of which covers a red filter unit and the other a blue filter unit.
  • In other words, the N*N photosensitive pixels 110 are divided into two halves: one half is covered by one first microlens 31, and the filter unit covered by that first microlens 31 is the white filter unit 21; the other half is covered by a plurality of second microlenses 32, and the portion covered by any one second microlens 32 corresponds to one dual-core focusing photosensitive pixel.
  • The microlens array 30 has a horizontal center line and a vertical center line, and there are a plurality of first microlenses 31.
  • the plurality of first microlenses 31 include a first group of first microlenses 31 disposed at a horizontal center line and a second group of first microlenses 31 disposed at a vertical center line.
  • the microlens array 30 may further include four edge lines.
  • the plurality of first microlenses 31 further include a third group of first microlenses 31 disposed on the four side lines.
  • The lens density of the first group of first microlenses 31 and of the second group of first microlenses 31 is greater than the lens density of the third group of first microlenses 31.
  • FIG. 4 is a density distribution diagram of the first microlens arrangement.
  • As shown in FIG. 4, the white filter units 21 covered by the first microlenses 31 (W in FIG. 4) are scattered throughout the dual-core in-focus image sensor, accounting for 3% to 5% of the total number of pixels. W is distributed more densely along the horizontal and vertical center lines of the microlens array 30 and more sparsely along the four edge lines, prioritizing the focus accuracy and speed of the middle portion of the picture and effectively increasing the focus speed without affecting image quality.
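A rough Python sketch of this density distribution follows. The grid size and the spacings (`center_step`, `edge_step`) are hypothetical, chosen only so that the center lines come out denser than the edge lines, as described; the patent itself only states the 3%–5% overall proportion and the dense-center/sparse-edge layout:

```python
# Hypothetical placement of W (white-filter) focus units: dense along the
# horizontal/vertical center lines, sparse along the four edge lines.

def place_w_units(rows, cols, center_step=8, edge_step=24):
    w = set()
    cr, cc = rows // 2, cols // 2
    for c in range(0, cols, center_step):   # horizontal center line: dense
        w.add((cr, c))
    for r in range(0, rows, center_step):   # vertical center line: dense
        w.add((r, cc))
    for c in range(0, cols, edge_step):     # top/bottom edge lines: sparse
        w.add((0, c)); w.add((rows - 1, c))
    for r in range(0, rows, edge_step):     # left/right edge lines: sparse
        w.add((r, 0)); w.add((r, cols - 1))
    return w

grid = place_w_units(240, 320)
fraction = len(grid) / (240 * 320)  # overall W fraction stays small
```

With these illustrative spacings, the center row ends up carrying roughly three times as many W positions as an edge row, mirroring the dense/sparse distribution in FIG. 4.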
  • W in FIGS. 3 and 4 indicates that the filter unit covered by the first microlens 31 in the dual-core in-focus image sensor is the white filter unit 21, and a larger amount of light can be obtained when the white filter unit 21 is used.
  • the filter unit covered by the first microlens 31 may also be a green filter unit, that is, W in FIGS. 3 and 4 may be replaced with G, and when the green filter unit is used, more information is available at the time of image processing. It should be understood that, in the embodiment of the present application, only the white filter unit is taken as an example, and the present invention is not limited thereto.
  • FIG. 5 is a flowchart of a focus control method of a dual-core in-focus image sensor according to an embodiment of the present application. As shown in FIG. 5, the method includes the following steps:
  • the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual core focus sensitive pixel may be further read.
  • reading the first phase difference information of the focus photosensitive unit may include: reading an output value of a part of the photosensitive pixels in the focus photosensitive unit as a first output value; reading the focus And outputting a value of another portion of the photosensitive pixels in the photosensitive unit as a second output value; acquiring first phase difference information according to the first output value and the second output value.
  • Specifically, reading the first phase difference information of the focus photosensitive unit means reading the first phase difference information of the part of the focus photosensitive unit covered by the white filter unit.
  • As shown in FIG. 3, the upper half of the focus photosensitive unit is covered by a white filter unit (W in the figure); the left part of the white filter unit can be taken as one part of the focus photosensitive unit, and the right part as the other part. The output value at "1" in the white filter unit is the output value of one part of the photosensitive pixels in the focus photosensitive unit and serves as the first output value; the output value at "2" in the white filter unit is the output value of the other part of the photosensitive pixels and serves as the second output value.
  • the first phase difference information is obtained according to the first output value and the second output value. For example, the difference between the first output value and the second output value may be used as the first phase difference information.
  • the white filter unit may cover the upper half, the lower half, the left half or the right half of the focus photosensitive unit.
  • In each case, the process of acquiring the first phase difference information is the same: the output values at "1" and "2" in the white filter unit are acquired and used to obtain the first phase difference information.
  • The present application takes only the case where the white filter unit covers the upper half of the focus photosensitive unit as an example; the cases where the white filter unit covers other positions of the focus photosensitive unit are not described in detail.
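As a hedged sketch, the first phase difference information could be computed as below; the pixel lists and sample values are illustrative, not taken from the patent:

```python
# Sketch of reading the first phase difference information from the part of
# the focus photosensitive unit covered by the white filter unit.

def first_phase_difference(w_left, w_right):
    """w_left / w_right: output values of the photosensitive pixels under the
    left ("1") and right ("2") parts of the white filter unit."""
    first_output = sum(w_left)    # first output value
    second_output = sum(w_right)  # second output value
    # As in the text, the difference between the two output values serves as
    # the first phase difference information.
    return first_output - second_output

print(first_phase_difference([210], [180]))  # → 30
```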
  • reading the second phase difference information of the dual-core focus photosensitive pixel may include: reading an output value of the first photodiode as a third output value; and reading the second photoelectric The output value of the diode is used as a fourth output value; the second phase difference information is obtained according to the third output value and the fourth output value.
  • The second phase difference information of all the dual-core focusing photosensitive pixels is calculated in the same manner; the calculation of the second phase difference information at Gr in FIG. 3 is taken as an example. The output value of "1" at Gr is read as the third output value, and the output value of "2" at Gr is read as the fourth output value; the second phase difference information is obtained according to the third output value and the fourth output value, for example as the difference between them. In other words, the second phase difference information of the dual-core focusing photosensitive pixels can be obtained by calculating the difference between "1" and "2" in each dual-core focusing photosensitive pixel.
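The per-pixel computation can be sketched in the same illustrative style; the `pixels` mapping and its photodiode values are hypothetical:

```python
# Sketch of the second phase difference information: for each dual-core
# focusing photosensitive pixel (e.g. Gr), "1" and "2" are its two photodiodes.

def second_phase_differences(pixels):
    """pixels: mapping like {"Gr": (third_output, fourth_output), ...} of the
    two photodiode output values per dual-core focusing photosensitive pixel.
    Returns the per-pixel difference, as in the text above."""
    return {name: v1 - v2 for name, (v1, v2) in pixels.items()}

print(second_phase_differences({"Gr": (95, 88), "R": (40, 44)}))
```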
  • After that, the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels may be used for focus control.
  • Conventionally, the phase difference is calculated from the output values of the two photodiodes in each dual-core focusing photosensitive pixel, from which the driving amount and driving direction of the lens are calculated to achieve focusing; in a low-light environment, however, the focus speed of this approach is slower. In the present application, an elliptical first microlens covers a white filter unit, and the white filter unit covers a focus photosensitive unit; even in a low-light environment, first phase difference information with a larger amount of light can still be obtained from the white filter unit and used for focusing, thereby improving the focus speed in low-light environments.
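The patent does not specify how the phase difference maps to a lens drive, nor how the two kinds of phase information are weighted, so the sketch below assumes a simple proportional model with a hypothetical calibration constant `gain` and plain averaging:

```python
# Hedged sketch of focus control from phase difference information.
# `gain` and the averaging scheme are assumptions, not from the patent.

def lens_drive(phase_diff, gain=0.5):
    """Return (drive_amount, drive_direction) for the focus motor."""
    amount = abs(phase_diff) * gain
    direction = "near" if phase_diff > 0 else "far" if phase_diff < 0 else "hold"
    return amount, direction

def focus_signal(first_pd, second_pds):
    """Combine the first phase difference info (from the W focus units) with
    the second phase difference info (from the dual-core pixels)."""
    all_pds = [first_pd] + list(second_pds)
    return sum(all_pds) / len(all_pds)

pd = focus_signal(30, [7, 5])  # → 14.0
print(lens_drive(pd))
```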
  • The focus control method of the dual-core in-focus image sensor of the embodiments of the present application is based on an elliptical first microlens covering one white filter unit: half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to the first phase difference information and the second phase difference information, the amount of light reaching the focus pixels can be increased and the focus speed in low-light environments effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • After focus control is completed, the method further includes: controlling the photosensitive unit array to enter an imaging mode.
  • The pixel values of the portion of the focus photosensitive unit covered by the white filter unit are obtained by an interpolation restoration algorithm.
  • After the photosensitive unit array enters the imaging mode, the photosensitive unit array is controlled to perform exposure, and its output values are read to obtain the pixel values of the photosensitive unit array and generate an image.
  • Specifically, reading the output values of the photosensitive unit array to obtain its pixel values may include: reading the output values of the two photodiodes in each dual-core focusing photosensitive pixel and adding them to obtain the pixel value of that dual-core focusing photosensitive pixel; and, for the portion of the focus photosensitive unit covered by the white filter unit, using an interpolation restoration algorithm to obtain the pixel values of that portion, where the interpolation restoration algorithm may be any of the nearest-neighbor interpolation algorithm, the bilinear interpolation algorithm, and the cubic convolution interpolation algorithm.
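A minimal sketch of this imaging-mode readout, assuming a simple dictionary layout for the sensor data (the data structures are illustrative, not the patent's):

```python
# Sketch of imaging-mode readout: ordinary dual-core pixels sum their two
# photodiodes; positions under the white filter unit are flagged so their
# pixel values can be filled in later by interpolation restoration.

def read_imaging_frame(dual_core, w_positions):
    """dual_core: {(row, col): (d1, d2)} photodiode pairs per pixel;
    w_positions: set of (row, col) covered by the white filter unit."""
    frame = {}
    for pos, (d1, d2) in dual_core.items():
        frame[pos] = d1 + d2   # combine the two photodiodes into one value
    for pos in w_positions:
        frame[pos] = None      # placeholder: restore by interpolation
    return frame

frame = read_imaging_frame({(0, 0): (60, 70)}, {(0, 1)})
```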
  • For example, the nearest-neighbor interpolation algorithm can be used to obtain the pixel value of the focus photosensitive unit: the gray value of the input pixel closest to the position to which the focus photosensitive unit is mapped is selected as the interpolation result, i.e., as the pixel value of the focus photosensitive unit.
  • FIG. 7 is a schematic diagram of acquiring the pixel value of the focus photosensitive unit by using an interpolation algorithm.
  • As shown in FIG. 7, the upper half of the focus photosensitive unit is a white filter unit. To guarantee output image quality, the output values of the photosensitive pixels corresponding to "1" and "2" in the white filter unit need to be interpolated and restored; that is, the RGB values of each photosensitive pixel in the white filter unit need to be calculated. Since the lower half of the focus photosensitive unit retains the red (R) and blue (B) pixel information, when the photosensitive pixels in the white filter unit are interpolated and restored, their RGB values can be obtained from the pixel values of the pixels adjacent to the white filter unit.
  • Taking the photosensitive pixel at "1" as an example, its R pixel value is denoted as R1, its G pixel value as G1, and its B pixel value as B1; each is calculated by interpolation from the adjacent pixels of the corresponding color.
  • The interpolation restoration of the RGB values at "2" in the white filter unit is the same as that at "1": adjacent pixel points are selected for interpolation restoration. To avoid redundancy, it is not described in detail again.
  • It should be noted that interpolation restoration is not limited to the pixel values of immediately adjacent pixels; when more distant pixels are also used, pixel values at close distances are assigned higher weights and pixel values at far distances lower weights, i.e., the weight of a pixel value in the interpolation restoration algorithm is inversely proportional to its distance from the pixel being restored.
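An inverse-distance weighting of this kind might be sketched as follows; the neighbor coordinates and values are illustrative, not from the patent:

```python
# Sketch of distance-weighted interpolation restoration: each known neighbor
# contributes with weight inversely proportional to its distance from the
# pixel being restored.

def idw_restore(target, neighbors):
    """neighbors: list of ((row, col), value) with known pixel values of the
    colour plane being restored (e.g. the R plane)."""
    num = den = 0.0
    for (r, c), value in neighbors:
        dist = ((r - target[0]) ** 2 + (c - target[1]) ** 2) ** 0.5
        w = 1.0 / dist            # weight inversely proportional to distance
        num += w * value
        den += w
    return num / den

# Restoring an R value at (0, 0) from two hypothetical R neighbors;
# the result is pulled toward the nearer neighbor.
print(idw_restore((0, 0), [((0, 2), 100), ((0, 4), 130)]))
```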
  • an image can be generated according to the pixel values of the respective pixel points in the photosensitive cell array.
  • The focus control method of the dual-core in-focus image sensor of the embodiments of the present application controls the photosensitive unit array to enter the imaging mode after focus control is completed, controls the photosensitive unit array to perform exposure, and reads the output values of the photosensitive unit array to obtain its pixel values and thereby generate an image. By retaining the R and B pixels in the same array as the white filter unit, the accuracy of image color reproduction can be ensured and the picture quality improved.
  • FIG. 2 is a cross-sectional view of a dual-core focusing image sensor according to an embodiment of the present application, and FIG. 3 is a top view of a dual-core focusing image sensor according to an embodiment of the present application.
  • In the dual-core focusing image sensor of the embodiments of the present application, the microlens array includes first microlenses and second microlenses, the first microlens is elliptical, one half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. This structure increases the amount of light reaching the focusing pixels and provides a hardware basis for improving the focusing speed in low-light environments while ensuring the accuracy of image color reproduction.
  • FIG. 8 is a schematic structural view of an imaging device according to an embodiment of the present application.
  • As shown in FIG. 8, the imaging apparatus 800 includes the dual-core focusing image sensor 100 of the above embodiments and a control module 810.
  • The control module 810 controls the photosensitive cell array to enter the focus mode, reads the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performs focus control according to the first phase difference information and the second phase difference information.
  • Optionally, in an embodiment of the present application, the control module 810 is configured to read the output values of one part of the photosensitive pixels in the focus photosensitive unit as the first output value, read the output values of another part of the photosensitive pixels in the focus photosensitive unit as the second output value, and acquire the first phase difference information according to the first output value and the second output value.
  • It should be noted that, in the embodiments of the present application, reading the first phase difference information of the focus photosensitive unit specifically means reading the first phase difference information of the part of the focus photosensitive unit that is covered by the white filter unit.
  • In the embodiments of the present application, each dual-core focusing photosensitive pixel in the dual-core focusing image sensor 100 has two photodiodes, a first photodiode and a second photodiode. Accordingly, the control module 810 is further configured to read the output value of the first photodiode as a third output value, read the output value of the second photodiode as a fourth output value, and acquire the second phase difference information according to the third output value and the fourth output value.
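As an illustration, the two photodiode readouts can be reduced to a signed phase difference by comparing the "1" and "2" signals at candidate shifts. The shift-search sketch below is a common approach, not the patent's specified method; the signal layout, window, and search range are assumptions.

```python
import numpy as np

def phase_difference(left, right, max_shift=4):
    """Estimate the shift (in pixels) between the two photodiode signals.

    left, right: 1-D arrays of output values (e.g. the "1" and "2"
    readouts along a row of focusing pixels). Returns the signed shift
    that minimizes the mean absolute difference between the signals;
    its sign and magnitude indicate the lens drive direction and amount.
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):len(left) + min(0, s)]
        b = right[max(0, -s):len(right) + min(0, -s)]
        cost = np.mean(np.abs(a - b))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A right signal that is the left signal displaced by two pixels:
sig = np.array([0, 0, 1, 4, 9, 4, 1, 0, 0, 0], float)
shifted = np.roll(sig, 2)
```

In focus, the two signals align and the estimated shift is zero; out of focus, a nonzero shift is recovered and mapped to a lens drive command.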
  • Optionally, in an embodiment of the present application, the control module 810 is further configured to control the photosensitive cell array to enter the imaging mode, control the photosensitive cell array to perform exposure, and read the output values of the photosensitive cell array to obtain its pixel values and generate an image, wherein the pixel values of the part of the focus photosensitive unit covered by the white filter unit are obtained by an interpolation restoration algorithm.
  • In the imaging device of the embodiments of the present application, one elliptical first microlens covers one white filter unit, half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to both, the amount of light reaching the focusing pixels is increased and the focusing speed in low-light environments is effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • FIG. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • As shown in FIG. 9, the mobile terminal 90 includes a housing 91, a processor 92, a memory 93, a circuit board 94, and a power supply circuit 95. The circuit board 94 is disposed inside the space enclosed by the housing 91, and the processor 92 and the memory 93 are disposed on the circuit board 94; the power supply circuit 95 supplies power to the respective circuits or devices of the mobile terminal; the memory 93 stores executable program code; and the processor 92, by reading the executable program code stored in the memory 93, runs a program corresponding to that code so as to execute the focus control method of the dual-core focusing image sensor of the above embodiments.
  • In the mobile terminal of the embodiments of the present application, one elliptical first microlens covers one white filter unit, half of the focus photosensitive unit is covered by the white filter unit, the other half is covered by a plurality of second microlenses, and each second microlens covers one dual-core focusing photosensitive pixel. By reading the first phase difference information of the focus photosensitive unit and the second phase difference information of the dual-core focusing photosensitive pixels, and performing focus control according to both, the amount of light reaching the focusing pixels is increased and the focusing speed in low-light environments is effectively improved; by retaining half of the dual-core focusing photosensitive pixels in the focus photosensitive unit, the accuracy of image color reproduction can be ensured.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of the computer readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • It should be understood that portions of the present application can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Electromagnetism (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Automatic Focus Adjustment (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Solid State Image Pick-Up Elements (AREA)
  • Focusing (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The present application discloses a dual-core focusing image sensor, a focus control method therefor, and an imaging device. The dual-core focusing image sensor includes: a photosensitive cell array, a filter unit array disposed on the photosensitive cell array, and a microlens array located above the filter unit array, wherein the microlens array includes first microlenses and second microlenses, the first microlens is elliptical, one first microlens covers one white filter unit, one white filter unit covers one focus photosensitive unit, the area of one white filter unit is half that of one focus photosensitive unit, the other half of one focus photosensitive unit is covered by a plurality of second microlenses, and one second microlens covers one dual-core focusing photosensitive pixel. This can increase the amount of light passing through the focusing pixels, providing a hardware basis for improving the focusing speed in low-light environments and ensuring the accuracy of image color reproduction.

Description

双核对焦图像传感器及其对焦控制方法和成像装置
相关申请的交叉引用
本申请要求广东欧珀移动通信有限公司于2017年04月28日提交的、发明名称为“双核对焦图像传感器及其对焦控制方法和成像装置”的、中国专利申请号“201710296079.0”的优先权。
技术领域
本申请涉及图像设备技术领域,尤其涉及一种双核对焦图像传感器及其对焦控制方法和成像装置。
背景技术
在相关对焦技术中,双核全像素对焦技术已成为目前市场上最先进的对焦技术。相比较于反差对焦、激光对焦和相位对焦技术,双核全像素对焦技术的对焦速度更快,且对焦范围更广。此外,由于双核全像素对焦技术中,“双核”光电二极管在成像时“合并”为一个像素进行输出,能够在保证对焦性能的同时不影响画质。
发明内容
本申请的目的旨在至少在一定程度上解决上述的技术问题之一。
为此,本申请的第一个目的在于提出一种双核对焦图像传感器的对焦控制方法,该方法能够增加对焦像素的通光量,有效提升低光环境下的对焦速度,并保证图像颜色还原的准确性。
本申请的第二个目的在于提出一种双核对焦图像传感器。
本申请的第三个目的在于提出一种成像装置。
本申请的第四个目的在于提出一种移动终端。
为了实现上述目的,本申请第一方面实施例提出一种双核对焦图像传感器的对焦控制方法,其中,所述双核对焦图像传感器包括:感光单元阵列、设置在所述感光单元阵列上的滤光单元阵列和位于所述滤光单元阵列之上的微透镜阵列,其中,所述微透镜阵列包括第一微透镜和第二微透镜,所述第一微透镜为椭圆形,一个所述第一微透镜覆盖一个白色滤光单元,一个所述白色滤光单元覆盖一个对焦感光单元,一个所述白色滤光单元的面积为一个所述对焦感光单元的一半,一个所述对焦感光单元的另一半由多个第二微透镜覆盖, 一个第二微透镜覆盖一个双核对焦感光像素,所述方法包括以下步骤:
控制所述感光单元阵列进入对焦模式;
读取所述对焦感光单元的第一相位差信息和所述双核对焦感光像素的第二相位差信息;
根据所述第一相位差信息和所述第二相位差信息进行对焦控制。
本申请实施例的双核对焦图像传感器的对焦控制方法,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。为了实现上述目的,本申请第二方面实施例提出了一种双核对焦图像传感器,包括:感光单元阵列,设置在所述感光单元阵列上的滤光单元阵列,以及位于所述滤光单元阵列之上的微透镜阵列,其中,所述微透镜阵列包括第一微透镜和第二微透镜,所述第一微透镜为椭圆形,一个所述第一微透镜覆盖一个白色滤光单元,一个所述白色滤光单元覆盖一个对焦感光单元,一个所述白色滤光单元的面积为一个所述对焦感光单元的一半,一个所述对焦感光单元的另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素。
本申请实施例的双核对焦图像传感器,通过设置包括第一微透镜和第二微透镜的微透镜阵列,将第一微透镜设置为椭圆形,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,能够增加对焦像素的通光量,为提升低光环境下的对焦速度和保证图像颜色还原的准确度提供硬件基础。
为了实现上述目的,本申请第三方面实施例提出了一种成像装置,该成像装置包括:上述第二方面实施例提出的双核对焦图像传感器;和控制模块,所述控制模块控制所述感光单元阵列进入对焦模式;读取所述对焦感光单元的第一相位差信息和所述双核对焦感光像素的第二相位差信息;根据所述第一相位差信息和所述第二相位差信息进行对焦控制。
本申请实施例的成像装置,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。
为了实现上述目的,本申请第四方面实施例还提出了一种移动终端,该移动终端包括 壳体、处理器、存储器、电路板和电源电路,其中,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述移动终端的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行上述第一方面实施例提出的双核对焦图像传感器的对焦控制方法。
本申请实施例的移动终端,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。
本申请附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1是传统双核对焦图像传感器结构示意图;
图2是根据本申请的一个实施例的双核对焦图像传感器的剖面图;
图3是根据本申请的一个实施例的双核对焦图像传感器的俯视图;
图4是第一微透镜排布密度分布图;
图5是根据本申请一实施例的双核对焦图像传感器的对焦控制方法的流程图;
图6是根据本申请另一实施例的双核对焦图像传感器的对焦控制方法的流程图;
图7是采用插值算法获取对焦感光单元像素值的示意图;
图8是根据本申请一实施例的成像装置的结构示意图;
图9是根据本申请一实施例的移动终端的结构示意图。
具体实施方式
下面详细描述本申请的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本申请,而不能理解为对本申请的限制。
下面参考附图描述本申请实施例的双核对焦图像传感器及其对焦控制方法和成像装 置。
双核全像素对焦技术是目前市场上最先进的对焦技术,该对焦技术所采用的双核对焦传感器结构如图1所示,每个微透镜(图1中圆圈表示微透镜)下对应两个光电二极管。进行成像处理时,将“1”和“2”的值相加获得单分量像素值。进行对焦处理时,分别读出“1”和“2”的值,通过计算两者之间的相位差即可计算出镜头的驱动量和驱动方向。
能够理解的是,随着像素总数的增加,“1”和“2”对应的感光面积变小,使通过量减少,导致低光环境下的相位信息容易被噪声淹没,对焦困难。
基于上述分析可知，采用双核全像素对焦技术进行对焦时，每个像素的光电二极管被一分为二，使通光量减小，进而导致低光环境下双核对焦困难。
为了解决现有双核全像素对焦技术在低光环境下对焦困难的问题,本申请提出了一种双核对焦图像传感器的对焦控制方法,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度,同时保证图像颜色还原的准确性。
下面先对实现本申请提出的双核对焦图像传感器的对焦控制方法所需的双核对焦图像传感器进行介绍。
图2是根据本申请的一个实施例的双核对焦图像传感器的剖面图,图3是根据本申请的一个实施例的双核对焦图像传感器的俯视图。
如图2和图3所示,该双核对焦图像传感器100包括感光单元阵列10、滤光单元阵列20和微透镜阵列30。
其中,滤光单元阵列20设置在感光单元阵列10上,微透镜阵列30位于滤光单元阵列20之上。微透镜阵列30包括第一微透镜31和第二微透镜32,第一微透镜31被设置为椭圆形。一个第一微透镜31覆盖一个白色滤光单元21,一个白色滤光单元21覆盖一个对焦感光单元11,一个白色滤光单元21的面积为一个对焦感光单元11的一半,一个对焦感光单元11的另一半由多个第二微透镜32覆盖,一个第二微透镜32覆盖一个滤光单元22,一个滤光单元22覆盖一个双核对焦感光像素12。
在本申请的实施例中,双核对焦感光像素12的排列为拜耳阵列(Bayer pattern)。采用拜耳结构能采用传统针对拜耳结构的算法来处理图像信号,从而不需要硬件结构上做大的调整。双核对焦感光像素12具有两个光电二极管,分别为第一光电二极管121和第二光电二极管122,分别对应于图3中每个双核对焦感光像素12的“1”和“2”。
在本申请的实施例中,对焦感光单元11包括N*N个感光像素110,白色滤光单元21覆盖对焦感光单元11中的上半部分、下半部分、左半部分或者右半部分,本申请对白色滤光单元21在对焦感光单元11中所覆盖的位置不作限制。在如图3所示的双核对焦图像传感器结构中,对焦感光单元11包括2*2个感光像素110,白色滤光单元21即图中W覆盖 对焦感光单元11中的上半部分,对焦感光单元11的下半部分由两个第二微透镜32覆盖,其中一个第二微透镜32覆盖一个红色滤光单元,另一个第二微透镜覆盖一个蓝色滤光单元。
概括地说,本申请实施例的双核对焦图像传感器100中,N*N个感光像素110被一分为二,其中一半被一个第一微透镜31覆盖,且被第一微透镜31覆盖的部分对应一个白色滤光单元21,另一半被多个第二微透镜32覆盖,且被任一个第二微透镜32覆盖的部分对应一个双核对焦感光像素。
在本申请的一个实施例中,微透镜阵列30包括水平中心线和竖直中心线,第一微透镜31为多个。多个第一微透镜31包括设置在水平中心线的第一组第一微透镜31和设置在竖直中心线的第二组第一微透镜31。
在本申请的一个实施例中,微透镜阵列30还可以包括四个边线,此时,多个第一微透镜31还包括设置在四个边线的第三组第一微透镜31。
当微透镜阵列30包括水平中心线、竖直中心线和四个边线时,第一组第一微透镜31和第二组第一微透镜31的透镜密度大于第三组第一微透镜31的透镜密度。
为了便于理解,下面结合附图说明微透镜阵列30中第一微透镜31的排布方式。图4是第一微透镜排布密度分布图。如图4所示,由第一微透镜31覆盖的白色滤光单元21,即图4中W,在整个双核对焦图像传感器中零散分布,占总像素个数的3%~5%,在微透镜阵列30的水平中心线和竖直中心线上W分布更密集,在四个边线上W分布较为稀疏,优先考虑画面中间区域的对焦准确度和速度,在不影响画质的情况下,有效提升对焦速度。
需要说明的是,图3和图4中的W表示双核对焦图像传感器中第一微透镜31覆盖的滤光单元为白色滤光单元21,采用白色滤光单元21时能够获得更大的通光量。第一微透镜31覆盖的滤光单元也可以是绿色滤光单元,即图3和图4中的W可以替换为G,采用绿色滤光单元时,成像处理时的可用信息更多。应当理解的是,本申请实施例中仅以白色滤光单元为例进行说明,而不能作为对本申请的限制。
基于图2-图4中双核对焦图像传感器的结构,下面对本申请实施例的双核对焦图像传感器的对焦控制方法进行说明。图5是根据本申请一实施例的双核对焦图像传感器的对焦控制方法的流程图,如图5所示,该方法包括以下步骤:
S51,控制感光单元阵列进入对焦模式。
当使用相机进行拍照时,若感觉显示的照片清晰度不足,此时,可以控制感光单元阵列进入对焦模式,以通过对焦改善照片的清晰度。
S52,读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息。
在本申请的实施例中,在进入对焦模式之后,可以进一步读取对焦感光单元的第一相 位差信息和双核对焦感光像素的第二相位差信息。
可选地,在本申请的一个实施例中,读取对焦感光单元的第一相位差信息,可以包括:读取对焦感光单元中一部分感光像素的输出值并作为第一输出值;读取对焦感光单元中另一部分感光像素的输出值并作为第二输出值;根据第一输出值和第二输出值获取第一相位差信息。
需要说明的是,在本申请的实施例中,读取对焦感光单元的第一相位差信息,具体指的是读取对焦感光单元中被白色滤光单元所覆盖的那部分对焦感光单元的第一相位差信息。
为便于理解,下面结合图3进行解释说明。
如图3所示,对焦感光单元的上半部分被白色滤光单元(图中W)覆盖,可以将白色滤光单元的左边部分作为对焦感光单元的一部分,将白色滤光单元的右边部分作为对焦感光单元的另一部分,则白色滤光单元中“1”处的输出值即为对焦感光单元中一部分感光像素的输出值,并将其作为第一输出值,白色滤光单元中“2”处的输出值即为对焦感光单元中另一部分感光像素的输出值,并将其作为第二输出值。最终根据第一输出值和第二输出值获取第一相位差信息,比如,可以将第一输出值和第二输出值的差值作为第一相位差信息。
需要说明的是,白色滤光单元可以覆盖对焦感光单元的上半部分、下半部分、左半部分或右半部分。但应当理解的是,无论白色滤光单元覆盖对焦感光单元的哪一部分,获取第一相位差信息的过程都是相同的,都是通过获取白色滤光单元中“1”和“2”处的输出值来获取第一相位差信息,本申请仅以白色滤光单元覆盖对焦感光单元的上半部分为例进行说明,对白色滤光单元覆盖对焦感光单元的其他位置的情况不再详述。
可选地,在本申请的一个实施例中,读取双核对焦感光像素的第二相位差信息,可以包括:读取第一光电二极管的输出值,作为第三输出值;读取第二光电二极管的输出值,作为第四输出值;根据第三输出值和第四输出值获取第二相位差信息。
仍以图3为例,图3中,所有双核对焦感光像素的第二相位差信息的计算方式相同,此处仅以计算图3中Gr处的第二相位差信息为例加以说明。首先读取Gr处“1”的输出值作为第三输出值,再读取Gr处“2”的输出值作为第四输出值,根据第三输出值和第四输出值获取第二相位差信息,比如,可以通过计算第三输出值和第四输出值之间的差值作为第二相位差信息。概括地说,双核对焦感光像素的第二相位差信息可以通过计算双核对焦感光像素中“1”和“2”处的差值获得。
S53,根据第一相位差信息和第二相位差信息进行对焦控制。
在本申请的实施例中,在读取了对焦感光单元的第一相位差信息和双核对焦感光单元 的第二相位差信息之后,即可根据第一相位差信息和第二相位差信息进行对焦控制。
在相关双核对焦技术中,通常是根据双核对焦感光像素中两个光电二极管的输出值计算相位差,由此计算出镜头的驱动量和驱动方向,进而实现对焦。在低光环境下,对焦速度较慢。
而在本申请的实施例中,基于椭圆形的第一微透镜覆盖一个白色滤光单元,一个白色滤光单元覆盖一个对焦感光单元,通过采用白色滤光单元,在低光环境下仍能获得更大通光量的第一相位差信息供对焦处理,进而提升低光环境下的对焦速度。
本申请实施例的双核对焦图像传感器的对焦控制方法,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。
应当理解的是,对焦的目的是为了获得清晰度更高的照片。在实际应用中,在完成对焦处理之后,通常还包括进一步的成像过程,从而,如图6所示,在上述图5所示的基础上,步骤S53之后还包括:
S61,控制感光单元阵列进入成像模式。
在本申请的实施例中,在完成对焦控制之后,进一步控制感光单元阵列进入成像模式。
S62,控制感光单元阵列进行曝光,并读取感光单元阵列的输出值,以得到感光单元阵列的像素值从而生成图像。
其中,对焦感光单元被白色滤光单元覆盖的部分的像素值通过插值还原算法获得。
在本申请的实施例中,当感光单元阵列进入成像模式之后,控制感光单元阵列进行曝光,并读取感光单元阵列的输出值,进而得到感光单元阵列的像素值生成图像。
在本申请的一个实施例中,读取感光单元阵列的输出值进而得到感光单元阵列的像素值,可以包括:读取双核对焦感光像素中两个光电二极管的输出值后,将两个光电二极管的输出值相加获得该双核对焦感光像素的像素值;对于对焦感光单元中被白色滤光单元覆盖的部分,采用插值还原算法获得该部分的像素值,其中,插值还原算法可以是最近邻插值算法、双线性插值算法和立方卷积插值算法中的任意一种。
简单起见,可以采用最近邻插值算法获得对焦感光单元的像素值,即选择离该对焦感光单元所映射到的位置最近的输入像素的灰度值作为插值结果,也就是该对焦感光单元的像素值。
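A minimal sketch of the imaging-mode readout described above: each dual-core pixel's value is the sum of its two photodiode outputs ("1" + "2"). The array shapes and sample values are assumed for illustration.

```python
import numpy as np

def merge_dual_core(pd1, pd2):
    """Imaging mode: each dual-core pixel outputs the sum of its two
    photodiodes ("1" + "2"), yielding one pixel value per pixel."""
    return np.asarray(pd1, float) + np.asarray(pd2, float)

pd1 = np.array([[10.0, 12.0], [8.0, 9.0]])   # "1" photodiode outputs
pd2 = np.array([[11.0, 13.0], [7.0, 10.0]])  # "2" photodiode outputs
pixels = merge_dual_core(pd1, pd2)
```

The white-covered half of each focus photosensitive unit is then filled in by the interpolation restoration step, while the retained R and B pixels anchor the color reproduction.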
图7是采用插值算法获取对焦感光单元像素值的示意图,如图7所示,在2*2的对焦 感光单元中,对焦感光单元的上半部分为白色滤光单元,为了能够输出画质较好的图像,需要对白色滤光单元中“1”和“2”所对应的感光像素的输出值进行插值还原,即需要计算获得白色滤光单元中每个感光像素的RGB值。由于对焦感光单元的下半部分保留了红色(R)和蓝色(B)的像素信息,因此,在对白色滤光单元中的感光像素进行插值还原时,能够根据白色滤光单元临近的8个像素的像素值获取白色滤光单元中感光像素的RGB值。以计算白色滤光单元中“1”处的RGB值为例,为描述方便,将“1”处的R像素值记为R 1,G像素值记为G 1,B像素值记为B 1,计算公式分别如下:
（R 1、G 1、B 1 的三个插值计算公式在原公开文本中以图片形式给出，无法以文本还原，此处从略。）
需要说明的是,白色滤光单元中“2”处的RGB值的插值还原方法与“1”处的RGB值还原方法同理,均是选取相邻的像素点进行插值还原,为避免赘余,此处不再一一举例。
需要说明的是,上述对获取对焦感光单元像素值的算法的描述仅用于解释说明本申请,而不能作为对本申请的限制。实际处理时,为获取更精确的像素值,相邻几个像素的像素值都可以用于插值还原,而不仅限于相邻像素的像素值,其中,对距离近的像素值分配较高的权重,对距离远的像素值分配较低的权重,即像素值在插值还原算法中所占的权重与被还原像素的距离成反比。
在本申请的实施例中,将对焦感光单元的像素值还原后,即可根据感光单元阵列中各个像素点的像素值生成图像。
本申请实施例的双核对焦图像传感器的对焦控制方法,在完成对焦控制后控制感光单元阵列进入成像模式,并控制感光单元阵列进行曝光,通过读取感光单元阵列的输出值以得到感光单元阵列的像素值从而生成图像,通过保留与白色滤光单元同一阵列中的R像素和B像素,能够保证图像颜色还原的准确性,提高画面质量。
为了实现上述实施例,本申请还提出了一种双核对焦图像传感器,图2是根据本申请的一个实施例的双核对焦图像传感器的剖面图,图3是根据本申请的一个实施例的双核对焦图像传感器的俯视图。
需要说明的是,前述双核对焦图像传感器的对焦控制方法实施例中对双核对焦图像传感器的相关解释说明也适用于本申请实施例的双核对焦图像传感器,其实现原理类似,此处不再赘述。
本申请实施例的双核对焦图像传感器,通过设置包括第一微透镜和第二微透镜的微透镜阵列,将第一微透镜设置为椭圆形,对焦感光单元的一半由白色滤光单元覆盖,另一半 由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,能够增加对焦像素的通光量,为提升低光环境下的对焦速度和保证图像颜色还原的准确度提供硬件基础。
为了实现上述实施例,本申请还提出了一种成像装置,图8是根据本申请一实施例的成像装置的结构示意图。
如图8所示,该成像装置800包括上述实施例的双核对焦图像传感器100和控制模块810。其中,
控制模块810控制感光单元阵列进入对焦模式,读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,根据第一相位差信息和第二相位差信息进行对焦控制。
可选地,在本申请的一个实施例中,控制模块810用于读取对焦感光单元中一部分感光像素的输出值并作为第一输出值,读取对焦感光单元中另一部分感光像素的输出值并作为第二输出值,根据第一输出值和第二输出值获取第一相位差信息。
需要说明的是,在本申请的实施例中,读取对焦感光单元的第一相位差信息,具体指的是读取对焦感光单元中被白色滤光单元所覆盖的那部分对焦感光单元的第一相位差信息。
在本申请的实施例中,双核对焦图像传感器100中的双核对焦感光像素具有两个光电二极管,分别为第一光电二极管和第二光电二极管。因此,控制模块810还用于读取第一光电二极管的输出值,作为第三输出值,读取第二光电二极管的输出值,作为第四输出值,根据第三输出值和第四输出值获取第二相位差信息。
应当理解的是,对焦的目的是为了获得清晰度更高的照片。在实际应用中,在完成对焦处理之后,通常还包括进一步的成像过程,因此,在本申请的一个实施例中,控制模块810还用于控制感光单元阵列进入成像模式,控制感光单元阵列进行曝光,并读取感光单元阵列的输出值,以得到感光单元阵列的像素值从而生成图像,其中,对焦感光单元被白色滤光单元覆盖的部分的像素值通过插值还原算法获得。
本申请实施例的成像装置,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。
为了实现上述实施例,本申请还提出了一种移动终端,图9是根据本申请一实施例的移动终端的结构示意图。
如图9所示,该移动终端90包括壳体91、处理器92、存储器93、电路板94和电源电路95,其中,电路板94安置在壳体91围成的空间内部,处理器92和存储器93设置在电路板94上;电源电路95,用于为移动终端的各个电路或器件供电;存储器93用于存储可执行程序代码;处理器92通过读取存储器93中存储的可执行程序代码来运行与可执行程序代码对应的程序,以用于执行上述实施例中的双核对焦图像传感器的对焦控制方法。
本申请实施例的移动终端,基于一个椭圆形的第一微透镜覆盖一个白色滤光单元,对焦感光单元的一半由白色滤光单元覆盖,另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,通过读取对焦感光单元的第一相位差信息和双核对焦感光像素的第二相位差信息,并根据第一相位差信息和第二相位差信息进行对焦控制,能够增加对焦像素的通光量,有效提升低光环境下的对焦速度;通过在对焦感光单元中保留一半的双核对焦感光像素,能够保证图像颜色还原的准确性。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实 施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
需要说明的是,在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (20)

  1. 一种双核对焦图像传感器的对焦控制方法,其特征在于,所述双核对焦图像传感器包括:感光单元阵列、设置在所述感光单元阵列上的滤光单元阵列和位于所述滤光单元阵列之上的微透镜阵列,其中,所述微透镜阵列包括第一微透镜和第二微透镜,所述第一微透镜为椭圆形,一个所述第一微透镜覆盖一个白色滤光单元,一个所述白色滤光单元覆盖一个对焦感光单元,一个所述白色滤光单元的面积为一个所述对焦感光单元的一半,一个所述对焦感光单元的另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素,所述方法包括以下步骤:
    控制所述感光单元阵列进入对焦模式;
    读取所述对焦感光单元的第一相位差信息和所述双核对焦感光像素的第二相位差信息;
    根据所述第一相位差信息和所述第二相位差信息进行对焦控制。
  2. 如权利要求1所述的方法,其特征在于,读取所述对焦感光单元的第一相位差信息,包括:
    读取所述对焦感光单元中一部分感光像素的输出值并作为第一输出值;
    读取所述对焦感光单元中另一部分感光像素的输出值并作为第二输出值;
    根据所述第一输出值和第二输出值获取所述第一相位差信息。
  3. 如权利要求1或2所述的方法,其特征在于,所述双核对焦感光像素具有两个光电二极管,第一光电二极管和第二光电二极管,读取所述双核对焦感光像素的第二相位差信息,包括:
    读取所述第一光电二极管的输出值,作为第三输出值;
    读取所述第二光电二极管的输出值,作为第四输出值;
    根据所述第三输出值和所述第四输出值获取所述第二相位差信息。
  4. 如权利要求1-3任一项所述的方法,其特征在于,所述双核对焦感光像素排列为拜耳阵列。
  5. 如权利要求1-4任一项所述的方法,其特征在于,所述微透镜阵列包括水平中心线和竖直中心线,所述第一微透镜为多个,所述多个第一微透镜包括:
    设置在所述水平中心线的第一组第一微透镜;和
    设置在所述竖直中心线的第二组第一微透镜。
  6. 如权利要求5所述的方法,其特征在于,所述微透镜阵列包括四个边线,所述多个第一微透镜还包括:
    设置在所述四个边线的第三组第一微透镜。
  7. 如权利要求6所述的方法,其特征在于,所述第一组第一微透镜和所述第二组第一微透镜的透镜密度大于所述第三组第一微透镜的透镜密度。
  8. 如权利要求1-7任一项所述的方法,其特征在于,所述对焦感光单元包括N*N个感光像素,所述白色滤光单元覆盖所述对焦感光单元中上半部分或下半部分或左半部分或右半部分,所述方法还包括:
    控制所述感光单元阵列进入成像模式;
    控制所述感光单元阵列进行曝光,并读取所述感光单元阵列的输出值,以得到所述感光单元阵列的像素值从而生成图像,其中,所述对焦感光单元被所述白色滤光单元覆盖的部分的像素值通过插值还原算法获得。
  9. 一种双核对焦图像传感器,其特征在于,包括:
    感光单元阵列;
    设置在所述感光单元阵列上的滤光单元阵列;
    位于所述滤光单元阵列之上的微透镜阵列;
    其中,所述微透镜阵列包括第一微透镜和第二微透镜,所述第一微透镜为椭圆形,一个所述第一微透镜覆盖一个白色滤光单元,一个所述白色滤光单元覆盖一个对焦感光单元,一个所述白色滤光单元的面积为一个所述对焦感光单元的一半,一个所述对焦感光单元的另一半由多个第二微透镜覆盖,一个第二微透镜覆盖一个双核对焦感光像素。
  10. 如权利要求9所述的双核对焦图像传感器,其特征在于,所述双核对焦感光像素排列为拜耳阵列。
  11. 如权利要求9或10所述的双核对焦图像传感器,其特征在于,所述微透镜阵列包括水平中心线和竖直中心线,所述第一微透镜为多个,所述多个第一微透镜包括:
    设置在所述水平中心线的第一组第一微透镜;和
    设置在所述竖直中心线的第二组第一微透镜。
  12. 如权利要求11所述的双核对焦图像传感器,其特征在于,所述微透镜阵列包括四个边线,所述多个第一微透镜还包括:
    设置在所述四个边线的第三组第一微透镜。
  13. 如权利要求12所述的双核对焦图像传感器,其特征在于,所述第一组第一微透镜和所述第二组第一微透镜的透镜密度大于所述第三组第一微透镜的透镜密度。
  14. 如权利要求9-13任一项所述的双核对焦图像传感器,其特征在于,所述对焦感光单元包括N*N个感光像素,所述白色滤光单元覆盖所述对焦感光单元中上半部分或下半部分或左半部分或右半部分。
  15. 一种成像装置,其特征在于,包括:
    如权利要求9-14任一项所述的双核对焦图像传感器;和
    控制模块,所述控制模块控制所述感光单元阵列进入对焦模式;
    读取所述对焦感光单元的第一相位差信息和所述双核对焦感光像素的第二相位差信息;
    根据所述第一相位差信息和所述第二相位差信息进行对焦控制。
  16. 如权利要求15所述的成像装置,其特征在于,所述控制模块具体用于:
    读取所述对焦感光单元中一部分感光像素的输出值并作为第一输出值;
    读取所述对焦感光单元中另一部分感光像素的输出值并作为第二输出值;
    根据所述第一输出值和第二输出值获取所述第一相位差信息。
  17. 如权利要求15或16所述的成像装置,其特征在于,所述双核对焦感光像素具有两个光电二极管,第一光电二极管和第二光电二极管,所述控制模块具体用于:
    读取所述第一光电二极管的输出值,作为第三输出值;
    读取所述第二光电二极管的输出值,作为第四输出值;
    根据所述第三输出值和所述第四输出值获取所述第二相位差信息。
  18. 如权利要求15-17任一项所述的成像装置,其特征在于,所述控制模块还用于:
    控制所述感光单元阵列进入成像模式;
    控制所述感光单元阵列进行曝光,并读取所述感光单元阵列的输出值,以得到所述感光单元阵列的像素值从而生成图像,其中,所述对焦感光单元被所述白色滤光单元覆盖的部分的像素值通过插值还原算法获得。
  19. 一种移动终端,包括壳体、处理器、存储器、电路板和电源电路,其中,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述移动终端的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行如权利要求1至8中任一项所述的双核对焦图像传感器的对焦控制方法。
  20. 一种非临时性计算机可读存储介质,其上存储有计算机程序,其特征在于,当所述存储介质中的指令由用户设备的处理器执行时,实现如权利要求1至8中任一项所述的双核对焦图像传感器的对焦控制方法。
PCT/CN2018/084023 2017-04-28 2018-04-23 双核对焦图像传感器及其对焦控制方法和成像装置 WO2018196704A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18790152.5A EP3618425A4 (en) 2017-04-28 2018-04-23 TWO-CORE FOCUSING IMAGE SENSOR, FOCUSING CONTROL METHOD THEREFOR, AND IMAGING DEVICE
US16/664,323 US10893187B2 (en) 2017-04-28 2019-10-25 Dual-core focusing image sensor, control-focusing method therefor, and mobile terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710296079.0 2017-04-28
CN201710296079.0A CN107146797B (zh) 2017-04-28 2017-04-28 双核对焦图像传感器及其对焦控制方法和成像装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/664,323 Continuation US10893187B2 (en) 2017-04-28 2019-10-25 Dual-core focusing image sensor, control-focusing method therefor, and mobile terminal

Publications (1)

Publication Number Publication Date
WO2018196704A1 true WO2018196704A1 (zh) 2018-11-01

Family

ID=59775269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084023 WO2018196704A1 (zh) 2017-04-28 2018-04-23 双核对焦图像传感器及其对焦控制方法和成像装置

Country Status (4)

Country Link
US (1) US10893187B2 (zh)
EP (1) EP3618425A4 (zh)
CN (1) CN107146797B (zh)
WO (1) WO2018196704A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200069024A (ko) * 2018-12-06 2020-06-16 삼성전자주식회사 복수의 서브 픽셀들을 덮는 마이크로 렌즈를 통해 발생된 광의 경로 차에 의해 깊이 데이터를 생성하는 이미지 센서 및 그 이미지 센서를 포함하는 전자 장치

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146797B (zh) 2017-04-28 2020-03-27 Oppo广东移动通信有限公司 双核对焦图像传感器及其对焦控制方法和成像装置
CN108600712B (zh) 2018-07-19 2020-03-31 维沃移动通信有限公司 一种图像传感器、移动终端及图像拍摄方法
CN110087065A (zh) * 2019-04-30 2019-08-02 德淮半导体有限公司 半导体装置及其制造方法
CN114424517B (zh) * 2019-11-28 2024-03-01 Oppo广东移动通信有限公司 图像传感器、控制方法、摄像头组件及移动终端
CN111464733B (zh) * 2020-05-22 2021-10-01 Oppo广东移动通信有限公司 控制方法、摄像头组件和移动终端
WO2022000485A1 (zh) * 2020-07-03 2022-01-06 深圳市汇顶科技股份有限公司 光电转换单元、图像传感器及对焦方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010212649A (ja) * 2009-02-13 2010-09-24 Nikon Corp 撮像素子
US20110109776A1 (en) * 2009-11-10 2011-05-12 Fujifilm Corporation Imaging device and imaging apparatus
CN105378926A (zh) * 2014-06-12 2016-03-02 索尼公司 固态成像器件、固态成像器件的制造方法及电子装置
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
CN105979160A (zh) * 2015-03-11 2016-09-28 佳能株式会社 摄像装置及摄像装置的控制方法
CN107146797A (zh) * 2017-04-28 2017-09-08 广东欧珀移动通信有限公司 双核对焦图像传感器及其对焦控制方法和成像装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4578797B2 (ja) 2003-11-10 2010-11-10 パナソニック株式会社 撮像装置
JP4967296B2 (ja) * 2005-10-03 2012-07-04 株式会社ニコン 撮像素子、焦点検出装置、および、撮像システム
JP4720508B2 (ja) * 2006-01-05 2011-07-13 株式会社ニコン 撮像素子および撮像装置
CN101385332B (zh) 2006-03-22 2010-09-01 松下电器产业株式会社 摄像装置
JP5246424B2 (ja) 2009-05-11 2013-07-24 ソニー株式会社 撮像装置
JP5956782B2 (ja) * 2011-05-26 2016-07-27 キヤノン株式会社 撮像素子及び撮像装置
WO2013145886A1 (ja) * 2012-03-28 2013-10-03 富士フイルム株式会社 撮像素子及びこれを用いた撮像装置及び撮像方法
KR101373132B1 (ko) * 2013-09-03 2014-03-14 (주)실리콘화일 마이크로 렌즈를 이용한 위상차 검출 픽셀
JP6415140B2 (ja) * 2014-07-04 2018-10-31 キヤノン株式会社 撮像装置及びその制御方法
EP3104595A1 (en) * 2015-06-08 2016-12-14 Thomson Licensing Light field imaging device
KR102374112B1 (ko) * 2015-07-15 2022-03-14 삼성전자주식회사 오토 포커싱 픽셀을 포함하는 이미지 센서, 및 이를 포함하는 이미지 처리 시스템
US10044959B2 (en) * 2015-09-24 2018-08-07 Qualcomm Incorporated Mask-less phase detection autofocus
US9804357B2 (en) * 2015-09-25 2017-10-31 Qualcomm Incorporated Phase detection autofocus using masked and unmasked photodiodes
KR20180024604A (ko) * 2016-08-30 2018-03-08 삼성전자주식회사 이미지 센서 및 그 구동 방법
US10070042B2 (en) * 2016-12-19 2018-09-04 Intel Corporation Method and system of self-calibration for phase detection autofocus
US20180288306A1 (en) * 2017-03-30 2018-10-04 Qualcomm Incorporated Mask-less phase detection autofocus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010212649A (ja) * 2009-02-13 2010-09-24 Nikon Corp 撮像素子
US20110109776A1 (en) * 2009-11-10 2011-05-12 Fujifilm Corporation Imaging device and imaging apparatus
CN105378926A (zh) * 2014-06-12 2016-03-02 索尼公司 固态成像器件、固态成像器件的制造方法及电子装置
CN105979160A (zh) * 2015-03-11 2016-09-28 佳能株式会社 摄像装置及摄像装置的控制方法
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
CN107146797A (zh) * 2017-04-28 2017-09-08 广东欧珀移动通信有限公司 双核对焦图像传感器及其对焦控制方法和成像装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3618425A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200069024A (ko) * 2018-12-06 2020-06-16 삼성전자주식회사 복수의 서브 픽셀들을 덮는 마이크로 렌즈를 통해 발생된 광의 경로 차에 의해 깊이 데이터를 생성하는 이미지 센서 및 그 이미지 센서를 포함하는 전자 장치
EP3867956A4 (en) * 2018-12-06 2021-12-29 Samsung Electronics Co., Ltd. Image sensor for generating depth data by a path difference of light generated through micro lens covering a plurality of sub-pixels and electronic device including the image sensor
KR102624107B1 (ko) * 2018-12-06 2024-01-12 삼성전자주식회사 복수의 서브 픽셀들을 덮는 마이크로 렌즈를 통해 발생된 광의 경로 차에 의해 깊이 데이터를 생성하는 이미지 센서 및 그 이미지 센서를 포함하는 전자 장치

Also Published As

Publication number Publication date
EP3618425A1 (en) 2020-03-04
CN107146797A (zh) 2017-09-08
US10893187B2 (en) 2021-01-12
EP3618425A4 (en) 2020-03-11
US20200059592A1 (en) 2020-02-20
CN107146797B (zh) 2020-03-27

Similar Documents

Publication Publication Date Title
US11089201B2 (en) Dual-core focusing image sensor, focusing control method for the same, and electronic device
WO2018196704A1 (zh) 双核对焦图像传感器及其对焦控制方法和成像装置
US10764522B2 (en) Image sensor, output method, phase focusing method, imaging device, and terminal
JP6878604B2 (ja) 撮像方法および電子装置
CN106982328B (zh) 双核对焦图像传感器及其对焦控制方法和成像装置
WO2018196703A1 (zh) 图像传感器、对焦控制方法、成像装置和移动终端
WO2018099008A1 (zh) 控制方法、控制装置及电子装置
WO2018099011A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099012A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018098982A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
CN107105140B (zh) 双核对焦图像传感器及其对焦控制方法和成像装置
WO2018099009A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098984A1 (zh) 控制方法、控制装置、成像装置及电子装置
WO2018099030A1 (zh) 控制方法和电子装置
WO2018099006A1 (zh) 控制方法、控制装置及电子装置
CN107040702B (zh) 图像传感器、对焦控制方法、成像装置和移动终端
CN107124536B (zh) 双核对焦图像传感器及其对焦控制方法和成像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18790152

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018790152

Country of ref document: EP

Effective date: 20191128