WO2021223590A1 - Image sensor, control method, camera assembly and mobile terminal - Google Patents

Image sensor, control method, camera assembly and mobile terminal

Info

Publication number
WO2021223590A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub, pixels, pixel, color, panchromatic
Application number
PCT/CN2021/088404
Other languages
English (en)
French (fr)
Inventor
杨鑫
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2021223590A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/14603 Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/14625 Optical elements or arrangements associated with the device
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/44 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N 25/704 Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/76 Addressed sensors, e.g. MOS or CMOS sensors

Definitions

  • This application relates to the field of imaging technology, and more specifically, to an image sensor, a control method, a camera assembly, and a mobile terminal.
  • The focus methods used in mobile phone shooting mainly include contrast focus and phase detection auto focus (PDAF).
  • the embodiments of the present application provide an image sensor, a control method, a camera assembly, and a mobile terminal.
  • the image sensor of the embodiment of the present application includes a two-dimensional pixel array and a lens array.
  • the two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel includes at least two sub-pixels.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • the control method of the embodiment of the present application is used for an image sensor.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • the two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel includes at least two sub-pixels.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • The control method includes: controlling the exposure of the at least two sub-pixels to output at least two sub-pixel information; determining phase information for focusing according to the at least two sub-pixel information; and, in the in-focus state, controlling the exposure of the two-dimensional pixel array to obtain a target image.
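  • The three operations above map onto a simple focus-then-capture loop. The following Python sketch illustrates that flow only; `read_subpixels`, `estimate_phase`, `move_lens`, and `read_array` are hypothetical stand-ins for sensor/driver interfaces that the application does not define.

```python
# Minimal sketch of the claimed control flow (assumed driver API).
def autofocus_and_capture(read_subpixels, estimate_phase, move_lens,
                          read_array, threshold=0.5, max_steps=20):
    for _ in range(max_steps):
        sub_a, sub_b = read_subpixels()       # expose the at least two sub-pixels
        phase = estimate_phase(sub_a, sub_b)  # phase information for focusing
        if abs(phase) < threshold:            # in-focus state reached
            break
        move_lens(phase)                      # drive the lens by the phase offset
    return read_array()                       # expose the full array for the target image
```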
  • the camera assembly of the embodiment of the present application includes a lens and an image sensor.
  • the image sensor can receive light passing through the lens.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • the two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel includes at least two sub-pixels.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • the mobile terminal of the embodiment of the present application includes a casing and a camera assembly.
  • the camera assembly is installed on the casing.
  • the camera assembly includes a lens and an image sensor.
  • the image sensor can receive light passing through the lens.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • the two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel includes at least two sub-pixels.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • the mobile terminal of the embodiment of the present application includes a casing and a camera assembly.
  • the camera assembly is installed on the casing.
  • the camera assembly includes a lens and an image sensor.
  • the image sensor can receive light passing through the lens.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • the two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel includes at least two sub-pixels.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • The mobile terminal further includes a processor configured to implement a control method: controlling the exposure of the at least two sub-pixels to output at least two sub-pixel information; determining phase information for focusing according to the at least two sub-pixel information; and, in the in-focus state, controlling the exposure of the two-dimensional pixel array to obtain a target image.
  • FIG. 1 is a schematic diagram of an image sensor according to some embodiments of the present application;
  • FIGS. 2 to 8 are schematic diagrams of the distribution of sub-pixels in some embodiments of the present application;
  • FIG. 9 is a schematic diagram of a pixel circuit according to some embodiments of the present application;
  • FIG. 10 is a schematic diagram of the exposure saturation time of different color channels;
  • FIGS. 11 to 20 are schematic diagrams of the pixel arrangement of the minimum repeating unit and the lens covering manner in some embodiments of the present application;
  • FIG. 21 is a schematic flowchart of a control method of some embodiments of the present application;
  • FIG. 22 is a schematic diagram of a camera assembly according to some embodiments of the present application;
  • FIG. 23 is a schematic diagram of the principle of the control method of some embodiments of the present application;
  • FIG. 24 is a schematic flowchart of a control method of certain embodiments of the present application;
  • FIGS. 25 to 27 are schematic diagrams of the principle of the control method of some embodiments of the present application;
  • FIG. 28 is a schematic diagram of a mobile terminal according to some embodiments of the present application.
  • the image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17.
  • the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array 11 includes a minimum repeating unit. In the minimum repeating unit, panchromatic pixels are arranged in a first diagonal direction, and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel 101 includes at least two sub-pixels 102.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11.
  • each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11.
  • each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is inclined with respect to the length direction X or the width direction Y of the two-dimensional pixel array 11.
  • In some embodiments, each pixel 101 includes two sub-pixels 102. In some pixels 101, the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11; in other pixels 101, the dividing line is parallel to the width direction Y of the two-dimensional pixel array 11. The pixels 101 whose dividing line is parallel to the length direction X and the pixels 101 whose dividing line is parallel to the width direction Y are staggered.
  • each pixel 101 includes four sub-pixels 102, and the four sub-pixels 102 are distributed in a 2*2 matrix.
  • the image sensor 10 further includes a filter array 16, and the filter array 16 includes a plurality of filters 160, and each filter 160 covers one pixel 101.
  • In some embodiments, each pixel 101 includes two sub-pixels 102: one sub-pixel 102 includes a first photoelectric conversion element 1171, and the other sub-pixel 102 includes a second photoelectric conversion element 1172.
  • Each pixel 101 also includes a first exposure control circuit 1161 connected to the first photoelectric conversion element 1171 and a second exposure control circuit 1162 connected to the second photoelectric conversion element 1172.
  • The first exposure control circuit 1161 is used to transfer the charge generated after the first photoelectric conversion element 1171 receives light, so as to output pixel information; the second exposure control circuit 1162 is used to transfer the charge generated after the second photoelectric conversion element 1172 receives light, so as to output pixel information.
  • In some embodiments, each pixel 101 further includes a reset circuit, which is connected to both the first exposure control circuit 1161 and the second exposure control circuit 1162. When the two sub-pixels 102 are exposed to output pixel information separately, or to output pixel information and phase information, the first exposure control circuit 1161 first transfers the charge generated after the first photoelectric conversion element 1171 receives light, to output first sub-pixel information; after the reset circuit resets, the second exposure control circuit 1162 then transfers the charge generated after the second photoelectric conversion element 1172 receives light, to output second sub-pixel information.
  • When the two sub-pixels 102 are exposed to output combined pixel information, the second exposure control circuit 1162 can transfer the charge generated after the second photoelectric conversion element 1172 receives light at the same time as the first exposure control circuit 1161 transfers the charge generated after the first photoelectric conversion element 1171 receives light, to output combined pixel information. When the two sub-pixels 102 are exposed to output combined pixel information and phase information, the first exposure control circuit 1161 first transfers the charge generated after the first photoelectric conversion element 1171 receives light, to output the first sub-pixel information; the second exposure control circuit 1162 then transfers the charge generated after the second photoelectric conversion element 1172 receives light, to output the second sub-pixel information; and the first sub-pixel information and the second sub-pixel information are merged into combined pixel information.
  • In some embodiments, when the two sub-pixels 102 are exposed to output combined pixel information and phase information, the image sensor 10 further includes a buffer, which is used to store the first sub-pixel information and the second sub-pixel information so as to output the phase information.
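  • As a rough illustration of these readout modes, the Python sketch below treats the two sub-pixel readouts as plain numbers; the function names and the list-based buffer are assumptions for illustration, not the patent's circuit behavior.

```python
# Sketch of the three readout modes, with sub1/sub2 as digitised readouts.
def read_separate(sub1, sub2):
    # Sequential transfer with a reset in between: two individual values.
    return sub1, sub2

def read_combined(sub1, sub2):
    # Simultaneous transfer: the charges merge, equivalent to a sum.
    return sub1 + sub2

def read_combined_with_phase(sub1, sub2, buffer):
    # Sequential transfer; the buffer keeps the individual values so
    # phase information can still be output alongside the merged value.
    buffer.append((sub1, sub2))
    return sub1 + sub2
```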
  • the image sensor 10 further includes: a first exposure control line and a second exposure control line.
  • The first exposure control line is used to transmit a first exposure signal to control the first exposure time of the panchromatic pixels; the second exposure control line is used to transmit a second exposure signal to control the second exposure time of the color pixels; the first exposure time is less than or equal to the second exposure time.
  • the image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17.
  • the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array 11 includes a minimum repeating unit. In the minimum repeating unit, panchromatic pixels are arranged in a first diagonal direction, and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction.
  • each pixel 101 includes at least two sub-pixels 102.
  • The lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101. The control method includes: Step 01, controlling the exposure of the at least two sub-pixels 102 to output at least two sub-pixel information; Step 02, determining phase information for focusing according to the at least two sub-pixel information; and Step 03, in the in-focus state, controlling the exposure of the two-dimensional pixel array 11 to obtain a target image.
  • the present application also provides a mobile terminal 90.
  • the mobile terminal 90 includes a casing 80 and a camera assembly 40.
  • the camera assembly 40 is mounted on the casing 80.
  • the camera assembly 40 includes a lens 30 and an image sensor 10.
  • the image sensor 10 can receive light passing through the lens 30.
  • the image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17.
  • the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array 11 includes a minimum repeating unit.
  • panchromatic pixels are arranged in a first diagonal direction, and color pixels are arranged in a second diagonal direction.
  • the first diagonal direction is different from the second diagonal direction.
  • each pixel 101 includes at least two sub-pixels 102.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • In some embodiments, the mobile terminal 90 also includes a processor 60, and the processor 60 is used to implement the control method described above: controlling the exposure of the at least two sub-pixels 102 to output at least two sub-pixel information; determining phase information for focusing according to the at least two sub-pixel information; and, in the in-focus state, controlling the exposure of the two-dimensional pixel array 11 to obtain the target image.
  • The focus methods used in mobile phone shooting mainly include contrast focus and phase detection auto focus (PDAF). Contrast focusing is more accurate but slow; phase focusing is fast.
  • However, the current phase focusing on the market is implemented on color sensors (Bayer sensors), and the focusing performance is not good enough in dark-light environments.
  • the present application provides an image sensor 10 (shown in FIG. 1).
  • The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels. Compared with a general color sensor, more light is received and the signal-to-noise ratio is better, so the focusing performance in dark light is better, and the sensitivity of the sub-pixels 102 is also higher.
  • each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
  • the image sensor 10 may use a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge-coupled device (CCD, Charge-coupled Device) photosensitive element.
  • the two-dimensional pixel array 11 includes a plurality of pixels 101 two-dimensionally arranged in an array.
  • Each pixel 101 includes at least two sub-pixels 102.
  • each pixel 101 includes two sub-pixels 102, three sub-pixels 102, four sub-pixels 102, or more sub-pixels 102.
  • the filter array 16 includes a plurality of filters 160, and each filter 160 covers a corresponding pixel 101.
  • The spectral response of each pixel 101 (that is, the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers a corresponding pixel 101.
  • each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
  • FIG. 2 is a schematic diagram of the distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11.
  • the cross-sectional shape of each sub-pixel 102 is rectangular, and the cross-sectional areas of the two sub-pixels 102 in each pixel 101 are equal.
  • The cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10 (the same applies hereinafter).
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • the phase information can be determined according to the two sub-pixel information outputted by the exposure of the two sub-pixels 102 in each pixel 101, thereby achieving phase focusing.
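  • As one illustration of how phase information might be derived from the two sub-pixel outputs, the sketch below aligns the two sub-images with a sum-of-absolute-differences search. This is a generic disparity estimator under stated assumptions, not the algorithm specified by the application, and the wrap-around of `np.roll` is a simplification.

```python
import numpy as np

def estimate_phase(sub_a, sub_b, max_shift=8, axis=0):
    # Try candidate shifts along the axis perpendicular to the dividing
    # line; the best-aligning shift approximates the phase difference.
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cost = np.mean(np.abs(np.roll(sub_a, s, axis=axis) - sub_b))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```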
  • Each pixel 101 including two sub-pixels 102 whose dividing line is parallel to the length direction X of the two-dimensional pixel array 11 can increase the vertical resolution of the image sensor 10.
  • FIG. 3 is a schematic diagram of another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11.
  • the cross-sectional shape of each sub-pixel 102 is rectangular, and the cross-sectional areas of the two sub-pixels 102 in each pixel 101 are equal.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • the phase information can be determined according to the two sub-pixel information outputted by the exposure of the two sub-pixels 102 in each pixel 101, thereby achieving phase focusing.
  • Each pixel 101 including two sub-pixels 102 whose dividing line is parallel to the width direction Y of the two-dimensional pixel array 11 can improve the horizontal resolution of the image sensor 10.
  • FIG. 4 is a schematic diagram of yet another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is inclined with respect to the length direction X of the two-dimensional pixel array 11.
  • In FIG. 4, the cross-sectional shape of each sub-pixel 102 is a trapezoid: within the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid with a narrow top and a wide bottom, and the cross section of the other sub-pixel 102 is a trapezoid with a wide top and a narrow bottom.
  • The two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101. Taking the center point of each pixel 101 as the origin, the direction parallel to the length direction of the two-dimensional pixel array 11 as the horizontal axis, and the direction parallel to the width direction as the vertical axis, a rectangular coordinate system is established; each of the two sub-pixels 102 is then distributed on both the positive and negative semi-axes of the horizontal axis, and likewise on both the positive and negative semi-axes of the vertical axis.
  • each pixel 101 includes two sub-pixels 102 whose dividing line is inclined with respect to the length direction X of the two-dimensional pixel array 11, which can improve the horizontal resolution or the vertical resolution of the image sensor 10.
  • FIG. 5 is a schematic diagram of still another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • the dividing line between the two sub-pixels 102 is inclined with respect to the width direction Y of the two-dimensional pixel array 11.
  • In FIG. 5, the cross-sectional shape of each sub-pixel 102 is a trapezoid: within the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid with a wide top and a narrow bottom, and the cross section of the other sub-pixel 102 is a trapezoid with a narrow top and a wide bottom.
  • The two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101. Taking the center point of each pixel 101 as the origin, the direction parallel to the length direction of the two-dimensional pixel array 11 as the horizontal axis, and the direction parallel to the width direction as the vertical axis, a rectangular coordinate system is established; each of the two sub-pixels 102 is then distributed on both the positive and negative semi-axes of the horizontal axis, and likewise on both the positive and negative semi-axes of the vertical axis.
  • each pixel 101 includes two sub-pixels 102 whose dividing line is inclined with respect to the width direction Y of the two-dimensional pixel array 11, which can improve the horizontal resolution or the vertical resolution of the image sensor 10.
  • FIG. 6 is a schematic diagram of another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • In FIG. 6, in some pixels 101 the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11, while in other pixels 101 the dividing line is parallel to the width direction Y. The pixels 101 of the two types are distributed in alternate rows or in alternate columns.
  • In some pixels 101, the two sub-pixels 102 can obtain phase information in the vertical direction; in other pixels 101, the two sub-pixels 102 can obtain phase information in the horizontal direction. The image sensor 10 can therefore be applied to scenes containing a large number of pure horizontal stripes as well as to scenes containing a large number of pure vertical stripes.
  • the image sensor 10 has better scene adaptability and higher accuracy of phase focusing.
  • part of the pixels 101 includes two sub-pixels 102 with a dividing line parallel to the length direction X of the two-dimensional pixel array 11, and part of the pixels 101 includes two sub-pixels 102 with a dividing line parallel to the width direction Y of the two-dimensional pixel array 11.
  • the horizontal resolution and vertical resolution of the image sensor 10 are improved to a certain extent.
  • Moreover, distributing the pixels 101 whose dividing line is parallel to the length direction X of the two-dimensional pixel array 11 and the pixels 101 whose dividing line is parallel to the width direction Y in alternate rows or alternate columns does not increase the wiring difficulty of the two-dimensional pixel array 11 too much.
  • FIG. 7 is a schematic diagram of yet another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • In FIG. 7, in some pixels 101 the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11, while in other pixels 101 the dividing line is parallel to the width direction Y. The pixels 101 of the two types are distributed in a staggered manner.
  • In some pixels 101, the two sub-pixels 102 can obtain phase information in the vertical direction; in other pixels 101, the two sub-pixels 102 can obtain phase information in the horizontal direction. The image sensor 10 can therefore be applied to scenes containing a large number of pure horizontal stripes as well as to scenes containing a large number of pure vertical stripes.
  • the image sensor 10 has better scene adaptability and higher accuracy of phase focusing.
  • part of the pixels 101 includes two sub-pixels 102 with a dividing line parallel to the length direction X of the two-dimensional pixel array 11, and part of the pixels 101 includes two sub-pixels 102 with a dividing line parallel to the width direction Y of the two-dimensional pixel array 11.
  • the horizontal resolution and vertical resolution of the image sensor 10 are improved to a certain extent.
  • Staggering the pixels 101 whose dividing line is parallel to the length direction X of the two-dimensional pixel array 11 with the pixels 101 whose dividing line is parallel to the width direction Y can maximize the phase-focusing effect.
  • FIG. 8 is a schematic diagram of still another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes four sub-pixels 102, and the four sub-pixels 102 are distributed in a 2*2 matrix.
  • the phase information can be accurately determined according to the at least two sub-pixel information output by the exposure of any at least two sub-pixels 102 in each pixel 101, so as to achieve phase focusing.
  • Each pixel 101 including four sub-pixels 102 distributed in a 2*2 matrix can increase the horizontal resolution and the vertical resolution of the image sensor 10 at the same time.
  • In other embodiments, when each pixel 101 includes four sub-pixels 102, the four sub-pixels 102 may also be distributed along the width direction Y of the two-dimensional pixel array 11, with the dividing line between adjacent sub-pixels 102 parallel to the length direction X; or the four sub-pixels 102 may be distributed along the length direction X, with the dividing line between adjacent sub-pixels 102 parallel to the width direction Y; or the dividing line between adjacent sub-pixels 102 may be inclined with respect to the length direction X or the width direction Y; none of this is limited here.
  • the cross-sectional shape of the sub-pixel 102 may also be other regular or irregular shapes, which are not limited herein.
  • Besides the cases where each pixel 101 includes two sub-pixels 102 or where each pixel 101 includes four sub-pixels 102, it is also possible that some pixels 101 include two sub-pixels 102 while other pixels 101 include four, or that pixels 101 include any other number of at least two sub-pixels 102, which is not limited here.
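  • For the 2*2 case, a quad of sub-pixel values per pixel can be regrouped into left/right and top/bottom pairs, giving horizontal and vertical phase samples at once. The sketch below assumes a hypothetical `(H, W, 2, 2)` numpy array of sub-pixel readouts.

```python
import numpy as np

def split_quad(quad):
    # quad[..., r, c]: r is the row and c the column inside one pixel.
    left   = quad[:, :, :, 0].sum(axis=2)  # top-left + bottom-left
    right  = quad[:, :, :, 1].sum(axis=2)  # top-right + bottom-right
    top    = quad[:, :, 0, :].sum(axis=2)  # top-left + top-right
    bottom = quad[:, :, 1, :].sum(axis=2)  # bottom-left + bottom-right
    return (left, right), (top, bottom)    # horizontal and vertical pairs
```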
  • FIG. 9 is a schematic diagram of a pixel circuit 110 in an embodiment of the present application.
  • the embodiments of the present application are described by taking each pixel 101 including two sub-pixels 102 as an example, and the case where each pixel 101 includes more than two sub-pixels 102 will be described later.
  • the working principle of the pixel circuit 110 will be described below with reference to FIGS. 1 and 9.
  • The pixel circuit 110 includes a first photoelectric conversion element 1171 (e.g., a photodiode PD), a second photoelectric conversion element 1172 (e.g., a photodiode PD), a first exposure control circuit 1161 (e.g., a transfer transistor 112), a second exposure control circuit 1162 (e.g., a transfer transistor 112), a reset circuit (e.g., a reset transistor 113), an amplification circuit (e.g., an amplification transistor 114), and a selection circuit (e.g., a selection transistor 115).
  • one of the sub-pixels 102 includes the aforementioned first photoelectric conversion element 1171, and the other sub-pixel 102 includes the aforementioned second photoelectric conversion element 1172.
  • the transfer transistor 112, the reset transistor 113, the amplifying transistor 114, and the selection transistor 115 are, for example, MOS transistors, but are not limited thereto.
  • The gate TG of the transfer transistor 112 is connected to the vertical driving unit (not shown in the figure) of the image sensor 10 through an exposure control line (not shown in the figure); the gate RG of the reset transistor 113 is connected to the vertical driving unit through a reset control line (not shown in the figure); and the gate SEL of the selection transistor 115 is connected to the vertical driving unit through a selection line (not shown in the figure).
  • The first exposure control circuit 1161 in each pixel circuit 110 is electrically connected to the first photoelectric conversion element 1171, and is used to transfer the charge accumulated by the first photoelectric conversion element 1171 after being irradiated with light; the second exposure control circuit 1162 in each pixel circuit 110 is electrically connected to the second photoelectric conversion element 1172, and is used to transfer the charge accumulated by the second photoelectric conversion element 1172 after being irradiated with light.
  • the first photoelectric conversion element 1171 and the second photoelectric conversion element 1172 each include a photodiode PD, and the anode of the photodiode PD is connected to the ground, for example.
  • the photodiode PD converts the received light into electric charge.
  • the cathode of the photodiode PD is connected to the floating diffusion unit FD via the first exposure control circuit 1161 or the second exposure control circuit 1162 (for example, the transfer transistor 112).
  • the floating diffusion unit FD is connected to the gate of the amplifying transistor 114 and the source of the reset transistor 113.
  • both the first exposure control circuit 1161 and the second exposure control circuit 1162 may be the transfer transistor 112, and the control terminals TG of the first exposure control circuit 1161 and the second exposure control circuit 1162 are the gates of the transfer transistor 112.
  • When a pulse of an active level (for example, a VPIX level) is transmitted to the gate of the transfer transistor 112 through the exposure control line, the transfer transistor 112 is turned on.
  • the transfer transistor 112 transfers the charge photoelectrically converted by the photodiode PD to the floating diffusion unit FD.
  • the reset circuit is connected to the first exposure control circuit 1161 and the second exposure control circuit 1162 at the same time.
  • the reset circuit may be a reset transistor 113.
  • the drain of the reset transistor 113 is connected to the pixel power supply VPIX.
  • the source of the reset transistor 113 is connected to the floating diffusion unit FD.
  • a pulse of an effective reset level is transmitted to the gate of the reset transistor 113 via the reset line, and the reset transistor 113 is turned on.
  • the reset transistor 113 resets the floating diffusion unit FD to the pixel power supply VPIX.
  • the gate of the amplifying transistor 114 is connected to the floating diffusion unit FD.
  • the drain of the amplifying transistor 114 is connected to the pixel power supply VPIX.
  • After the floating diffusion unit FD is reset by the reset transistor 113, the amplifying transistor 114 outputs the reset level through the output terminal OUT via the selection transistor 115.
  • After the charge of the photodiode PD is transferred by the transfer transistor 112, the amplifying transistor 114 outputs a signal level through the output terminal OUT via the selection transistor 115.
  • the drain of the selection transistor 115 is connected to the source of the amplifying transistor 114.
  • the source of the selection transistor 115 is connected to the column processing unit (not shown in the figure) in the image sensor 10 through the output terminal OUT.
  • When a pulse of an effective level is transmitted to the gate of the selection transistor 115 through the selection line, the selection transistor 115 is turned on.
  • the signal output by the amplifying transistor 114 is transmitted to the column processing unit through the selection transistor 115.
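  • The sequence described above (reset level first, signal level after charge transfer) is the usual correlated-double-sampling order. A schematic sketch follows, with `pulse` and `sample_out` as hypothetical driver primitives rather than anything defined in the application:

```python
def read_pixel(pulse, sample_out):
    pulse("RG")                  # reset transistor on: FD reset to VPIX
    reset_level = sample_out()   # amplifier outputs the reset level via SEL
    pulse("TG")                  # transfer transistor on: PD charge moves to FD
    signal_level = sample_out()  # amplifier outputs the signal level via SEL
    # FD voltage drops as photo-charge arrives, so the net photo-signal is
    # the difference of the two samples (standard CDS; an assumption here).
    return reset_level - signal_level
```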
  • In some embodiments, the first exposure control circuit 1161 first transfers the charge generated after the first photoelectric conversion element 1171 receives light, to output the first sub-pixel information; after the reset circuit resets, the second exposure control circuit 1162 then transfers the charge generated after the second photoelectric conversion element 1172 receives light, to output the second sub-pixel information.
  • Alternatively, the second exposure control circuit 1162 can transfer the charge generated after the second photoelectric conversion element 1172 receives light at the same time as the first exposure control circuit 1161 transfers the charge generated after the first photoelectric conversion element 1171 receives light; or the first exposure control circuit 1161 may first transfer the charge generated after the first photoelectric conversion element 1171 receives light, and the second exposure control circuit 1162 may then transfer the charge generated after the second photoelectric conversion element 1172 receives light.
  • the first exposure control circuit 1161 first transfers the charge generated by the first photoelectric conversion element 1171 after receiving light to output the first sub-pixel information.
  • the second exposure control circuit 1162 then transfers the charge generated by the second photoelectric conversion element 1172 after receiving the light to output second sub-pixel information.
  • the first sub-pixel information and the second sub-pixel information are combined into combined pixel information.
  • the image sensor 10 may further include a buffer (not shown), and the buffer is used to store the first sub-pixel information and the second sub-pixel information to output phase information.
  • Pixel information and phase information can be output separately through the Mobile Industry Processor Interface (MIPI). Since the demand for phase information is small, one row of phase information can be output for every four rows of pixel information output by the pixel circuits 110. Of course, in practical applications, one row of phase information may also be output for every row of pixel information as required, which is not limited here.
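  • A sketch of that 4:1 output pattern, assuming `pixel_rows` and `phase_rows` are already-packed rows (the MIPI framing itself is out of scope here):

```python
def interleave(pixel_rows, phase_rows, ratio=4):
    # Emit one phase row after every `ratio` pixel rows.
    out, phase_iter = [], iter(phase_rows)
    for i, row in enumerate(pixel_rows, start=1):
        out.append(("pixel", row))
        if i % ratio == 0:
            out.append(("phase", next(phase_iter, None)))
    return out
```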
  • the pixel structure of the pixel circuit 110 in the embodiment of the present application is not limited to the structure shown in FIG. 9.
  • the pixel circuit 110 may have a three-transistor pixel structure, in which the functions of the amplifying transistor 114 and the selecting transistor 115 are performed by one transistor.
  • The first exposure control circuit 1161 and the second exposure control circuit 1162 are not limited to the manner of a single transfer transistor 112; other electronic devices or structures having the function of controlling conduction via a control terminal can also serve as the exposure control circuits in the embodiments of the present application.
  • the implementation of a single transfer transistor 112 is simple, low in cost, and easy to control.
  • When each pixel 101 includes more than two sub-pixels 102 (that is, more than two photoelectric conversion elements), the sub-pixels 102 in each pixel 101 can share the reset circuit, the amplifying circuit, and the selection circuit, which is beneficial to reducing the space occupied by the pixel circuit 110 and lowers the cost.
  • each sub-pixel 102 in each pixel 101 may have a corresponding reset circuit, amplifying circuit, selection circuit, etc., which is not limited herein.
  • Pixels of different colors receive different amounts of exposure per unit time; after some colors saturate, other colors have not yet reached the ideal exposure state. For example, exposure to 60%-90% of the saturated exposure may give a relatively good signal-to-noise ratio and accuracy, but the embodiments of the present application are not limited thereto.
  • FIG. 10 shows the exposure curves of RGBW (red, green, blue, panchromatic) pixels: the horizontal axis is the exposure time, the vertical axis is the exposure amount, Q is the saturated exposure amount, LW is the exposure curve of the panchromatic pixel W, LG is the exposure curve of the green pixel G, LR is the exposure curve of the red pixel R, and LB is the exposure curve of the blue pixel B.
  • the slope of the exposure curve LW of the panchromatic pixel W is the largest, that is, the panchromatic pixel W can obtain more exposure per unit time, and reach saturation at t1.
  • the slope of the exposure curve LG of the green pixel G is the second, and the green pixel is saturated at time t2.
  • The slope of the exposure curve LR of the red pixel R is the third largest, and the red pixel is saturated at time t3.
  • the slope of the exposure curve LB of the blue pixel B is the smallest, and the blue pixel is saturated at t4. It can be seen from FIG. 10 that the amount of exposure received by the panchromatic pixel W per unit time is greater than the amount of exposure received by the color pixel per unit time, that is, the sensitivity of the panchromatic pixel W is higher than the sensitivity of the color pixel.
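  • Numerically, each curve in FIG. 10 accumulates exposure linearly with a channel-specific slope, so the saturation time is simply Q divided by the slope. The slope values below are illustrative assumptions, ordered as the text requires (W > G > R > B):

```python
Q = 1.0                                              # saturated exposure amount
slopes = {"W": 1.0, "G": 0.5, "R": 0.4, "B": 0.3}    # hypothetical sensitivities
t_sat = {ch: Q / k for ch, k in slopes.items()}
# t_sat: W < G < R < B, i.e. t1 < t2 < t3 < t4 -- which is why the first
# exposure time of W may be set shorter than (or equal to) that of the colors.
```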
  • If an image sensor that only includes color pixels is used to achieve phase focusing, then in a high-brightness environment the R, G, and B color pixels receive more light and can output pixel information with a high signal-to-noise ratio, so the accuracy of phase focusing is high; but in a low-brightness environment the R, G, and B pixels receive less light, the signal-to-noise ratio of the output pixel information is low, and the accuracy of phase focusing is correspondingly low.
  • For this reason, the image sensor 10 of the embodiments of the present application arranges panchromatic pixels and color pixels in the two-dimensional pixel array 11 at the same time. Compared with a general color sensor, more light is received and the signal-to-noise ratio is better, so the focusing performance in dark light is better and the sensitivity of the sub-pixels 102 is also higher. In this way, the image sensor 10 can achieve accurate focusing in scenes with different environmental brightness, which improves the scene adaptability of the image sensor 10.
  • The spectral response of each pixel 101 (that is, the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101.
  • the color pixels and panchromatic pixels in the full text of this application refer to the pixels 101 that can respond to light whose color is the same as that of the corresponding filter 160.
  • The plurality of pixels 101 in the two-dimensional pixel array 11 may simultaneously include a plurality of panchromatic pixels W and a plurality of color pixels (for example, a plurality of first color pixels A, a plurality of second color pixels B, and a plurality of third color pixels C). The color pixels and the panchromatic pixels are distinguished by the wavelength band of light that can pass through the filter 160 (shown in FIG. 1) covering them; a color pixel has a narrower spectral response than the panchromatic pixel W, and the response spectrum of a color pixel is, for example, a part of the response spectrum of the panchromatic pixel W.
  • Each panchromatic pixel includes at least two sub-pixels 102, and each color pixel includes at least two sub-pixels 102.
  • the two-dimensional pixel array 11 is composed of a plurality of minimum repeating units (FIGS. 11 to 20 show examples of the minimum repeating unit in various image sensors 10 ), and the minimum repeating units are duplicated and arranged in rows and columns.
  • Each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color pixels and a plurality of full-color pixels.
  • For example, each minimum repeating unit includes four sub-units: one sub-unit includes multiple single-color pixels A (i.e., first color pixels A) and multiple panchromatic pixels W; two sub-units each include multiple single-color pixels B (i.e., second color pixels B) and multiple panchromatic pixels W; and the remaining sub-unit includes multiple single-color pixels C (i.e., third color pixels C) and multiple panchromatic pixels W.
  • the number of pixels 101 in the rows and columns of the smallest repeating unit is equal.
  • For example, the minimum repeating unit may be, but is not limited to, 4 rows and 4 columns, 6 rows and 6 columns, 8 rows and 8 columns, or 10 rows and 10 columns of pixels 101.
  • the number of pixels 101 in the rows and columns of the subunit is equal.
  • For example, the sub-unit may be, but is not limited to, 2 rows and 2 columns, 3 rows and 3 columns, 4 rows and 4 columns, or 5 rows and 5 columns of pixels 101. This setting helps to balance the resolution and color performance of the image in the row and column directions, improving the display effect.
  • In the minimum repeating unit, the panchromatic pixels W are arranged in a first diagonal direction D1, the color pixels are arranged in a second diagonal direction D2, and the first diagonal direction D1 is different from the second diagonal direction D2.
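  • One way to build a 4*4 minimum repeating unit satisfying these constraints is sketched below: four 2*2 sub-units (one with A, two with B, one with C), each with W on its first diagonal. The exact unit of FIG. 11 is not reproduced in this text, so this layout is an assumption consistent with the stated description, not the figure itself.

```python
def subunit(color):
    # W on the upper-left/lower-right diagonal of each 2*2 sub-unit.
    return [["W", color], [color, "W"]]

def minimum_repeating_unit():
    a, b, c = subunit("A"), subunit("B"), subunit("C")
    top    = [a[r] + b[r] for r in range(2)]  # A sub-unit | B sub-unit
    bottom = [b[r] + c[r] for r in range(2)]  # B sub-unit | C sub-unit
    return top + bottom                       # 4 rows x 4 columns

for row in minimum_repeating_unit():
    print(" ".join(row))
# W A W B
# A W B W
# W B W C
# B W C W
```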
  • FIG. 11 is a schematic diagram of the arrangement of the pixels 101 of the smallest repeating unit and the coverage of the lens 170 in the embodiment of the present application; the smallest repeating unit is 4 rows, 4 columns and 16 pixels, and the subunits are 2 rows, 2 columns and 4 pixels.
  • the specific arrangement is shown in FIG. 11, where:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 11), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 11); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • It should be noted that the first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonals themselves, but also include directions parallel to the diagonals. The "direction" here is not a single pointing; it can be understood as the concept of a "straight line" indicating the arrangement, and the straight line has two-way directions at both ends.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • FIG. 12 is a schematic diagram of another arrangement of pixels 101 of the smallest repeating unit and coverage of lenses 170 in an embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the specific arrangement is shown in FIG. 12, where:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 12), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 12); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • FIG. 13 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another arrangement of pixels 101 of the smallest repeating unit and coverage of lenses 170 in an embodiment of the present application.
  • the first color pixel A is the red pixel R
  • the second color pixel B is the green pixel G
  • the third color pixel C is the blue pixel Bu.
  • the response band of the panchromatic pixel W is the visible light band (for example, 400 nm-760 nm).
  • the panchromatic pixel W is provided with an infrared filter to filter out infrared light.
  • the response wavelength band of the panchromatic pixel W is the visible light wavelength band and the near-infrared wavelength band (for example, 400 nm-1000 nm), which matches the response wavelength band of the photoelectric conversion element (for example, photodiode PD) in the image sensor 10.
  • the panchromatic pixel W may not be provided with a filter, and the response band of the panchromatic pixel W is determined by the response band of the photodiode, that is, the two match.
  • the embodiments of the present application include, but are not limited to, the above-mentioned waveband range.
  • the first color pixel A may also be a red pixel R
  • the second color pixel B may also be a yellow pixel Y
  • the third color pixel C may be the blue pixel Bu.
  • the first color pixel A may also be a magenta pixel M
  • the second color pixel B may also be a cyan pixel Cy
  • the third color pixel C may also be the yellow pixel Y.
  • FIG. 15 is a schematic diagram of another arrangement of pixels 101 of the smallest repeating unit and coverage of lenses 170 in an embodiment of the present application.
  • the minimum repeating unit is 6 rows and 6 columns of 36 pixels 101, the sub-unit is 3 rows and 3 columns of 9 pixels 101, and the specific arrangement is shown in FIG. 15, where:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 15), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 15); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • FIG. 16 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the minimum repeating unit is 6 rows and 6 columns of 36 pixels 101, the sub-unit is 3 rows and 3 columns of 9 pixels 101, and the specific arrangement is shown in FIG. 16, where:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 16), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 16).
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • the first color pixel A in the minimum repeating unit shown in FIGS. 15 and 16 may be a red pixel R
  • the second color pixel B may be a green pixel G
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A in the minimum repeating unit shown in FIGS. 15 and 16 may be a red pixel R
  • the second color pixel B may be a yellow pixel Y
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A in the minimum repeating unit shown in FIGS. 15 and 16 may be a magenta pixel M
  • the second color pixel B may be a cyan pixel Cy
  • the third color pixel C may be a yellow pixel Y.
  • FIG. 17 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the smallest repeating unit is 8 rows, 8 columns and 64 pixels 101, and the sub-unit is 4 rows, 4 columns and 16 pixels 101.
  • the arrangement is:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • the panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 17), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 17); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • FIG. 18 is a schematic diagram of the arrangement of the pixels 101 of the smallest repeating unit and the coverage of the lens 170 in the embodiment of the present application.
  • the smallest repeating unit is 8 rows, 8 columns and 64 pixels 101, and the sub-unit is 4 rows, 4 columns and 16 pixels 101.
  • the arrangement is:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • the panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 18), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 18).
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • adjacent panchromatic pixels W are arranged diagonally, and adjacent color pixels are also arranged diagonally.
  • adjacent panchromatic pixels are arranged in the horizontal direction and adjacent color pixels are also arranged in the horizontal direction; or, adjacent panchromatic pixels are arranged in the vertical direction and adjacent color pixels are also arranged in the vertical direction.
  • the panchromatic pixels in adjacent subunits can be arranged in a horizontal direction or a vertical direction, and the color pixels in adjacent subunits can also be arranged in a horizontal direction or a vertical direction.
  • FIG. 19 is a schematic diagram of another minimal repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • adjacent panchromatic pixels W are arranged along the vertical direction, and adjacent color pixels are also arranged along the vertical direction.
  • One lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • FIG. 20 is a schematic diagram of the arrangement of pixels 101 of the smallest repeating unit and the coverage of lenses 170 in the embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • adjacent panchromatic pixels W are arranged along the horizontal direction, and adjacent color pixels are also arranged along the horizontal direction.
  • One lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • the first color pixel A may be a red pixel R
  • the second color pixel B may be a green pixel G
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A may be a red pixel R
  • the second color pixel B may be a yellow pixel Y
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A may be a magenta pixel M
  • the second color pixel B may be a cyan pixel Cy
  • the third color pixel C may be a yellow pixel Y.
  • each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • each panchromatic pixel includes four sub-pixels 102 and each color pixel includes four sub-pixels 102; or each panchromatic pixel includes four sub-pixels 102 and each color pixel includes two sub-pixels 102; and so on, which is not limited here.
  • the dividing line between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11.
  • the boundary line between the two sub-pixels 102 may instead be parallel to the length direction X of the two-dimensional pixel array 11, or take any of the foregoing boundary-line forms or combinations thereof.
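To make the tiling of a minimal repeating unit concrete, the sketch below (Python) repeats a unit matrix across the pixel array. The 4*4 unit used here, with panchromatic pixels W on one diagonal of every 2*2 subunit, is only an illustrative assumption; the actual arrangements are those shown in FIGS. 11 to 20 and are not reproduced in this text.

    import numpy as np

    # Hypothetical 4x4 minimal repeating unit (illustrative assumption):
    # W on one diagonal of each 2x2 subunit, color pixels A/B/C on the other.
    UNIT = np.array([["W", "A", "W", "B"],
                     ["A", "W", "B", "W"],
                     ["W", "B", "W", "C"],
                     ["B", "W", "C", "W"]])

    def tile_cfa(rows: int, cols: int) -> np.ndarray:
        """Tile the minimal repeating unit to cover a rows x cols pixel array."""
        reps = (-(-rows // UNIT.shape[0]), -(-cols // UNIT.shape[1]))  # ceiling division
        return np.tile(UNIT, reps)[:rows, :cols]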
  • the multiple panchromatic pixels and the multiple color pixels in any one of the two-dimensional pixel arrays 11 shown in FIGS. 11 to 20 can be controlled by different exposure control lines, so as to realize independent control of the exposure time of the panchromatic pixels and the exposure time of the color pixels.
  • the control terminals of the exposure control circuits of at least two panchromatic pixels adjacent in the first diagonal direction are electrically connected to the first exposure control line, and the control terminals of the exposure control circuits of at least two color pixels adjacent in the second diagonal direction are electrically connected to the second exposure control line.
  • alternatively, the control terminals of the exposure control circuits of the panchromatic pixels in the same row or column are electrically connected to the first exposure control line, and the control terminals of the exposure control circuits of the color pixels in the same row or column are electrically connected to the second exposure control line.
  • the first exposure control line can transmit a first exposure signal to control the first exposure time of the panchromatic pixel
  • the second exposure control line can transmit a second exposure signal to control the second exposure time of the color pixel.
  • the first exposure time of the panchromatic pixel may be less than the second exposure time of the color pixel.
  • the ratio of the first exposure time to the second exposure time may be one of 1:2, 1:3, or 1:4.
  • the ratio of the first exposure time to the second exposure time can be adjusted to 1:2, 1:3, or 1:4 according to the brightness of the environment.
  • the relative relationship between the first exposure time and the second exposure time can be determined according to the ambient brightness. For example, when the ambient brightness is less than or equal to the brightness threshold, the panchromatic pixels are exposed with a first exposure time equal to the second exposure time; when the ambient brightness is greater than the brightness threshold, the panchromatic pixels are exposed with a first exposure time less than the second exposure time.
  • the relative relationship between the first exposure time and the second exposure time can also be determined according to the brightness difference between the ambient brightness and the brightness threshold; for example, the greater the brightness difference, the smaller the ratio of the first exposure time to the second exposure time.
  • when the brightness difference is within the first range [a,b), the ratio of the first exposure time to the second exposure time is 1:2; when the brightness difference is within the second range [b,c), the ratio of the first exposure time to the second exposure time is 1:3; when the brightness difference is greater than or equal to c, the ratio of the first exposure time to the second exposure time is 1:4.
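The brightness-difference ranges above can be sketched as a simple lookup; the thresholds a, b, and c are the unspecified calibration values of those ranges, and the behavior below the first range is assumed here to fall back to equal exposure times:

    def exposure_ratio(brightness_diff: float, a: float, b: float, c: float) -> float:
        """Panchromatic-to-color exposure-time ratio for a given brightness difference."""
        if brightness_diff < a:
            return 1.0      # below the first range: equal exposure times (assumed)
        if brightness_diff < b:
            return 1 / 2    # first range [a, b): ratio 1:2
        if brightness_diff < c:
            return 1 / 3    # second range [b, c): ratio 1:3
        return 1 / 4        # brightness difference >= c: ratio 1:4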
  • The control method includes: controlling the at least two sub-pixels 102 to be exposed to output at least two sub-pixel information (step 01); determining phase information according to the at least two sub-pixel information for focusing (step 02); and, in the in-focus state, controlling the two-dimensional pixel array 11 to be exposed to obtain a target image (step 03).
  • the control method of the embodiment of the present application can be implemented by the camera assembly 40 of the embodiment of the present application.
  • the camera assembly 40 includes a lens 30, the image sensor 10 described in any one of the above embodiments, and a processing chip 20.
  • the image sensor 10 may receive light incident through the lens 30 and generate electrical signals.
  • the image sensor 10 is electrically connected to the processing chip 20.
  • the processing chip 20 and the image sensor 10 and the lens 30 may be packaged in the housing of the camera assembly 40; or, the image sensor 10 and the lens 30 are packaged in the housing of the camera assembly 40, and the processing chip 20 is arranged outside the housing. Step 01, step 02, and step 03 can all be implemented by the processing chip 20.
  • the processing chip 20 can be used to: control the exposure of at least two sub-pixels 102 to output at least two sub-pixel information; determine the phase information for focusing according to the at least two sub-pixel information; and, in the in-focus state, control the two-dimensional pixel array 11 to be exposed to obtain a target image.
  • the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels. Compared with a general color sensor, the light throughput is increased, the signal-to-noise ratio is better, the focusing performance in dark light is better, and the sensitivity of the sub-pixels 102 is also higher.
  • each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
  • the control method and the camera assembly 40 of the embodiment of the present application do not need shielded pixels 101 in the image sensor 10. All the pixels 101 can be used for imaging and no dead-pixel compensation is required, which is beneficial to improving the quality of the target image acquired by the camera assembly 40.
  • all the pixels 101 including at least two sub-pixels 102 in the control method and camera assembly 40 of the embodiments of the present application can be used for phase focusing, and the accuracy of phase focusing is higher.
  • the phase information may be a phase difference.
  • Determining phase information for focusing based on at least two sub-pixel information includes: (1) calculating a phase difference based only on panchromatic sub-pixel information for focusing; (2) calculating a phase difference based on only color sub-pixel information for focusing; (3) At the same time, the phase difference is calculated according to the panchromatic sub-pixel information and the color sub-pixel information to focus.
  • the control method and the camera assembly 40 of the embodiment of the present application use the image sensor 10 including panchromatic pixels and color pixels to achieve phase focusing, so that in an environment with low brightness (for example, brightness less than or equal to the first preset brightness) the more sensitive panchromatic pixels are used for phase focusing; in an environment with higher brightness (for example, brightness greater than or equal to the second preset brightness) the less sensitive color pixels are used for phase focusing; and in an environment with moderate brightness (for example, greater than the first preset brightness and less than the second preset brightness) at least one of the panchromatic pixels and the color pixels is used for phase focusing.
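A minimal sketch of this brightness-dependent choice of focusing signal, with the two preset brightness values as assumed parameter names:

    def choose_phase_source(brightness: float, first_preset: float, second_preset: float) -> str:
        """Pick which sub-pixel information drives phase focusing."""
        if brightness <= first_preset:
            return "panchromatic"            # dark: use the more sensitive W pixels
        if brightness >= second_preset:
            return "color"                   # bright: color pixels suffice
        return "panchromatic and/or color"   # moderate: at least one of the two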
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation of the lens 170 and the panchromatic sub-pixels located in the second orientation of the lens 170.
  • One first panchromatic sub-pixel information and a corresponding second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information.
  • the step of calculating the phase difference according to the panchromatic sub-pixel information for focusing includes: forming a first curve according to the first panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information; forming a second curve according to the second panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information; and calculating the phase difference according to the first curve and the second curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 23 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 23; for sub-pixels 102 distributed in other ways, the first orientation P1 and the second orientation P2 change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, the panchromatic sub-pixel W) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, the panchromatic sub-pixel W) is located in the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, etc. are located in the first orientation P1, and panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located in the second orientation P2.
  • the two panchromatic sub-pixels W in the same panchromatic pixel form a panchromatic sub-pixel pair, and the panchromatic sub-pixel information of these two panchromatic sub-pixels forms a pair of panchromatic sub-pixel information.
  • for example, the panchromatic sub-pixel information of panchromatic sub-pixel W11,P1 and the panchromatic sub-pixel information of panchromatic sub-pixel W11,P2 form a pair of panchromatic sub-pixel information; the panchromatic sub-pixel information of W13,P1 and of W13,P2 form a pair of panchromatic sub-pixel information; the panchromatic sub-pixel information of W15,P1 and of W15,P2 form a pair of panchromatic sub-pixel information; and so on.
  • after obtaining multiple pairs of panchromatic sub-pixel information, the processing chip 20 forms a first curve according to the first panchromatic sub-pixel information in the multiple pairs, forms a second curve according to the second panchromatic sub-pixel information in the multiple pairs, and then calculates the phase difference according to the first curve and the second curve.
  • a plurality of first panchromatic sub-pixel information can depict one histogram curve (ie, a first curve)
  • a plurality of second panchromatic sub-pixel information can depict another histogram curve (ie, a second curve).
  • the processing chip 20 can calculate the phase difference between the two histogram curves according to the positions of the peaks of the two histogram curves. Subsequently, the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters, and control the lens 30 to move that distance so that the lens 30 is in focus.
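A minimal sketch of this step, assuming the two curves are held as 1-D numpy arrays and that comparing peak positions, as described above, is sufficient; the single linear calibration parameter stands in for the pre-calibrated parameters:

    import numpy as np

    def phase_difference(first_curve: np.ndarray, second_curve: np.ndarray) -> int:
        """Phase difference (in samples) between the peaks of the two curves."""
        return int(np.argmax(second_curve) - np.argmax(first_curve))

    def lens_travel(phase_diff: int, calib_slope: float) -> float:
        """Distance the lens 30 needs to move, from a linear calibration (assumed)."""
        return calib_slope * phase_diff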
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output by the panchromatic sub-pixels in the first orientation of the lens 170 and the panchromatic sub-pixels in the second orientation of the lens 170, respectively.
  • the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information form a pair of panchromatic sub-pixel information.
  • Calculating the phase difference according to the panchromatic sub-pixel information for focusing includes: calculating third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair of panchromatic sub-pixel information; calculating fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair of panchromatic sub-pixel information; forming a first curve according to the multiple third panchromatic sub-pixel information; forming a second curve according to the multiple fourth panchromatic sub-pixel information; and calculating the phase difference according to the first curve and the second curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 23 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 23; for sub-pixels 102 distributed in other ways, the first orientation P1 and the second orientation P2 change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, the panchromatic sub-pixel W) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, the panchromatic sub-pixel W) is located in the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located in the second orientation P2.
  • a plurality of panchromatic sub-pixels W located in the first orientation P1 and a plurality of panchromatic sub-pixels W located in the second orientation P2 form a panchromatic sub-pixel pair; that is, the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information.
  • for example, the panchromatic sub-pixel information of panchromatic sub-pixels W11,P1 and W22,P1 and the panchromatic sub-pixel information of panchromatic sub-pixels W11,P2 and W22,P2 form a pair of panchromatic sub-pixel information; the panchromatic sub-pixel information of W13,P1 and W24,P1 and that of W13,P2 and W24,P2 form a pair of panchromatic sub-pixel information; the panchromatic sub-pixel information of W15,P1 and W26,P1 and that of W15,P2 and W26,P2 form a pair of panchromatic sub-pixel information; the panchromatic sub-pixel information of W17,P1 and W28,P1 and that of W17,P2 and W28,P2 form a pair of panchromatic sub-pixel information; and so on.
  • alternatively, the multiple first panchromatic sub-pixel information in the same minimal repeating unit and the multiple second panchromatic sub-pixel information in that minimal repeating unit may serve as a pair of panchromatic sub-pixel information.
  • the processing chip 20 calculates third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair of panchromatic sub-pixel information, and calculates fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair of panchromatic sub-pixel information.
  • for example, for the pair of panchromatic sub-pixel information composed of the panchromatic sub-pixel information of W11,P1 and W22,P1 and the panchromatic sub-pixel information of W11,P2 and W22,P2, the third panchromatic sub-pixel information is calculated from the first panchromatic sub-pixel information of W11,P1 and W22,P1, and the fourth panchromatic sub-pixel information is calculated from the second panchromatic sub-pixel information of W11,P2 and W22,P2; for the pair composed of the panchromatic sub-pixel information of W11,P1, W13,P1, W22,P1, W24,P1, W31,P1, W33,P1, W42,P1, and W44,P1 and the panchromatic sub-pixel information of W11,P2, W13,P2, W22,P2, W24,P2, W31,P2, W33,P2, W42,P2, and W44,P2, the third and fourth panchromatic sub-pixel information are calculated in the same way.
  • the processing chip 20 can obtain a plurality of third panchromatic sub-pixel information and a plurality of fourth panchromatic sub-pixel information.
  • the plurality of third panchromatic sub-pixel information can depict one histogram curve (ie, the first curve), and the plurality of fourth panchromatic sub-pixel information can depict another histogram curve (ie, the second curve).
  • the processing chip 20 can calculate the phase difference according to the two histogram curves.
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters.
  • the processing chip 20 can control the lens 30 to move the required distance so as to bring the lens 30 into focus.
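A sketch of forming the third and fourth panchromatic sub-pixel information; the text does not fix how the multiple values in each group are reduced, so averaging is assumed here:

    import numpy as np

    def curves_from_pairs(pair_list):
        """pair_list holds (first_infos, second_infos) tuples: the P1 and P2
        sub-pixel values of one panchromatic sub-pixel information pair.
        Each group is averaged (an assumption) into the third/fourth
        information, yielding the first and second curves."""
        first_curve = np.array([np.mean(first) for first, _ in pair_list])
        second_curve = np.array([np.mean(second) for _, second in pair_list])
        return first_curve, second_curve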
  • the color pixel includes two color sub-pixels.
  • the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information.
  • the first color sub-pixel information and the second color sub-pixel information are respectively output by the color sub-pixels located in the first orientation of the lens 170 and the color sub-pixels located in the second orientation of the lens 170.
  • One first color sub-pixel information and a corresponding second color sub-pixel information serve as a pair of color sub-pixel information.
  • the step of calculating the phase difference according to the color sub-pixel information for focusing includes: forming a third curve according to the first color sub-pixel information in the multiple pairs of color sub-pixel information; forming a fourth curve according to the second color sub-pixel information in the multiple pairs of color sub-pixel information; and calculating the phase difference according to the third curve and the fourth curve for focusing. This process is similar to the foregoing process of calculating the phase difference based only on the panchromatic sub-pixel information for focusing, and is not described again.
  • the color pixel includes two color sub-pixels.
  • the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information.
  • the first color sub-pixel information and the second color sub-pixel information are respectively output by the color sub-pixels in the first orientation of the lens 170 and the color sub-pixels in the second orientation of the lens 170.
  • the plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information serve as a pair of color sub-pixel information.
  • Calculating the phase difference according to the color sub-pixel information for focusing includes: calculating third color sub-pixel information according to the multiple first color sub-pixel information in each pair of color sub-pixel information; calculating fourth color sub-pixel information according to the multiple second color sub-pixel information in each pair of color sub-pixel information; forming a third curve according to the multiple third color sub-pixel information; forming a fourth curve according to the multiple fourth color sub-pixel information; and calculating the phase difference according to the third curve and the fourth curve for focusing. This process is similar to the foregoing process of calculating the phase difference based only on the panchromatic sub-pixel information for focusing, and is not described again.
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the color pixel includes two color sub-pixels.
  • the panchromatic subpixel information includes first panchromatic subpixel information and second panchromatic subpixel information, and the color subpixel information includes first color subpixel information and second color subpixel information.
  • the first panchromatic sub-pixel information, the second panchromatic sub-pixel information, the first color sub-pixel information, and the second color sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation of the lens 170, the panchromatic sub-pixels located in the second orientation of the lens 170, the color sub-pixels located in the first orientation of the lens 170, and the color sub-pixels located in the second orientation of the lens 170.
  • one first panchromatic sub-pixel information and a corresponding second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information, and one first color sub-pixel information and a corresponding second color sub-pixel information serve as a pair of color sub-pixel information.
  • Calculating the phase difference according to the panchromatic sub-pixel information and the color sub-pixel information for focusing includes: forming a first curve according to the first panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information; forming a second curve according to the second panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information; forming a third curve according to the first color sub-pixel information in the multiple pairs of color sub-pixel information; forming a fourth curve according to the second color sub-pixel information in the multiple pairs of color sub-pixel information; and calculating the phase difference according to the first curve, the second curve, the third curve, and the fourth curve for focusing.
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the color pixel includes two color sub-pixels.
  • the panchromatic subpixel information includes first panchromatic subpixel information and second panchromatic subpixel information, and the color subpixel information includes first color subpixel information and second color subpixel information.
  • the first panchromatic sub-pixel information, the second panchromatic sub-pixel information, the first color sub-pixel information, and the second color sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation of the lens 170, the panchromatic sub-pixels located in the second orientation of the lens 170, the color sub-pixels located in the first orientation of the lens 170, and the color sub-pixels located in the second orientation of the lens 170.
  • the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information, and the plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information As a pair of color sub-pixel information.
  • Calculating the phase difference according to the panchromatic sub-pixel information and the color sub-pixel information for focusing includes: calculating third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair of panchromatic sub-pixel information; calculating fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair of panchromatic sub-pixel information; calculating third color sub-pixel information according to the multiple first color sub-pixel information in each pair of color sub-pixel information; calculating fourth color sub-pixel information according to the multiple second color sub-pixel information in each pair of color sub-pixel information; forming a first curve according to the multiple third panchromatic sub-pixel information; forming a second curve according to the multiple fourth panchromatic sub-pixel information; forming a third curve according to the multiple third color sub-pixel information; forming a fourth curve according to the multiple fourth color sub-pixel information; and calculating the phase difference according to the first curve, the second curve, the third curve, and the fourth curve for focusing. This process is similar to the foregoing processes of calculating the phase difference according to the panchromatic sub-pixel information and according to the color sub-pixel information, and is not described again.
  • the control method further includes step 04: obtaining the ambient brightness.
  • Step 03, controlling the exposure of the two-dimensional pixel array 11 in the in-focus state to obtain a target image, includes:
  • step 04, step 031, and step 032 can all be implemented by the processing chip 20. That is to say, the processing chip 20 can be used to: obtain the ambient brightness; when the ambient brightness is greater than the first predetermined brightness, in the in-focus state, control the exposure of the at least two sub-pixels 102 in each panchromatic pixel to respectively output panchromatic sub-pixel information and control the exposure of the at least two sub-pixels 102 in each color pixel to respectively output color sub-pixel information; and generate a target image according to the panchromatic sub-pixel information and the color sub-pixel information.
  • the panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation P1 of the lens 170 and the panchromatic sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 25; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the second orientation P2 of the lens 170.
  • in each panchromatic pixel W, the panchromatic sub-pixel W located in the first orientation P1 of the lens 170 and the panchromatic sub-pixel W located in the second orientation P2 of the lens 170 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively.
  • for example, panchromatic sub-pixels W11,P1 and W11,P2 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively; panchromatic sub-pixels W22,P1 and W22,P2 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively; and so on.
  • the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information.
  • the first color sub-pixel information and the second color sub-pixel information are respectively output by the color sub-pixels located in the first orientation P1 of the lens 170 and the color sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 25; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each color pixel, one sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the second orientation P2 of the lens 170.
  • after exposure, the color sub-pixel in the first orientation P1 and the color sub-pixel in the second orientation P2 output the first color sub-pixel information and the second color sub-pixel information, respectively.
  • for example, color sub-pixels A12,P1 and A12,P2 are exposed to output first color sub-pixel information and second color sub-pixel information; color sub-pixels B14,P1 and B14,P2 are exposed to output first color sub-pixel information and second color sub-pixel information; color sub-pixels C34,P1 and C34,P2 are exposed to output first color sub-pixel information and second color sub-pixel information; and so on.
  • each panchromatic pixel and each color pixel are divided into multiple sub-pixels to output sub-pixel information, which can improve the resolution of the target image.
  • each panchromatic pixel and each color pixel are divided into two sub-pixels to output sub-pixel information, and the lateral resolution of the output target image can be doubled (when each panchromatic pixel and each color pixel are divided into four sub-pixels, the horizontal resolution and the vertical resolution of the output target image are both doubled; at this time, the phase focusing performance is also better).
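A minimal sketch of this full-sub-pixel readout, assuming the sub-pixel values are held in an array of shape (H, W, 2) with the P1 and P2 samples in the last axis:

    import numpy as np

    def full_resolution_readout(subpix: np.ndarray) -> np.ndarray:
        """Interleave the P1/P2 samples along the row axis: an H x W x 2
        array becomes an H x 2W image, doubling the lateral resolution."""
        h, w, _ = subpix.shape
        return subpix.reshape(h, 2 * w)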
  • the control method further includes step 04: obtaining the ambient brightness.
  • Step 03, controlling the exposure of the two-dimensional pixel array 11 in the in-focus state to obtain a target image, includes:
  • step 04, step 033, and step 034 can all be implemented by the processing chip 20. That is to say, the processing chip 20 can be used to: obtain the ambient brightness; when the ambient brightness is less than the second predetermined brightness, in the in-focus state, control the exposure of the at least two sub-pixels in each panchromatic pixel to respectively output panchromatic sub-pixel information and control the exposure of the at least two sub-pixels in each color pixel to combine and output color combined pixel information; and generate a target image according to the panchromatic sub-pixel information and the color combined pixel information.
  • the second predetermined brightness is less than or equal to the first predetermined brightness.
  • the panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation P1 of the lens 170 and the panchromatic sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 26; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the second orientation P2 of the lens 170.
  • in each panchromatic pixel W, the panchromatic sub-pixel W located in the first orientation P1 of the lens 170 and the panchromatic sub-pixel W located in the second orientation P2 of the lens 170 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively.
  • for example, panchromatic sub-pixels W11,P1 and W11,P2 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively; panchromatic sub-pixels W22,P1 and W22,P2 are exposed to output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively; and so on.
  • the color combined pixel information is output by combining the color sub-pixels located in the first orientation P1 of the lens 170 and the color sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 26; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each color pixel, one sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the second orientation P2 of the lens 170.
  • after exposure, the color sub-pixel located in the first orientation P1 of the lens 170 and the color sub-pixel located in the second orientation P2 of the lens 170 are combined to output color combined pixel information.
  • for example, color sub-pixels A12,P1 and A12,P2 are exposed and combined to output color combined pixel information; color sub-pixels B14,P1 and B14,P2 are exposed and combined to output color combined pixel information; and so on.
  • each panchromatic pixel is divided into multiple sub-pixels that output sub-pixel information separately, while the multiple sub-pixels in each color pixel are combined to output combined pixel information; this can improve the resolution of the target image to a certain extent. For example, in FIG. 26, each panchromatic pixel is divided into two sub-pixels that output sub-pixel information separately, and the lateral resolution of the output target image becomes 1.5 times the original.
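A sketch of this mixed mode under the same assumed (H, W, 2) layout: panchromatic sub-pixels stay separate while each color pixel's two sub-pixels are merged; summation is an assumption, as the text only says the sub-pixels are combined:

    import numpy as np

    def mixed_readout(pan_subpix: np.ndarray, color_subpix: np.ndarray):
        """FIG. 26-style mode: full sub-pixel resolution for W, binned color."""
        h, w, _ = pan_subpix.shape
        pan_plane = pan_subpix.reshape(h, 2 * w)   # keep both W samples
        color_plane = color_subpix.sum(axis=2)     # merge the two color samples
        return pan_plane, color_plane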
  • the control method further includes step 04: obtaining the ambient brightness.
  • Step 03, controlling the exposure of the two-dimensional pixel array 11 in the in-focus state to obtain a target image, includes:
  • step 04, step 035, and step 036 can all be implemented by the processing chip 20.
  • the processing chip 20 can be used to: obtain the ambient brightness; when the ambient brightness is less than the third predetermined brightness, in the in-focus state, control the exposure of the at least two sub-pixels in each panchromatic pixel to combine and output panchromatic combined pixel information and control the exposure of the at least two sub-pixels in each color pixel to combine and output color combined pixel information; and generate a target image according to the panchromatic combined pixel information and the color combined pixel information.
  • the third predetermined brightness is less than the second predetermined brightness.
  • the panchromatic combined pixel information is output by combining the panchromatic sub-pixels located in the first orientation P1 of the lens 170 and the panchromatic sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 27; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, a panchromatic sub-pixel W) is located in the second orientation P2 of the lens 170.
  • after exposure, the panchromatic sub-pixel W located in the first orientation P1 of the lens 170 and the panchromatic sub-pixel W located in the second orientation P2 of the lens 170 are combined to output panchromatic combined pixel information.
  • for example, panchromatic sub-pixels W11,P1 and W11,P2 are exposed and combined to output panchromatic combined pixel information; panchromatic sub-pixels W22,P1 and W22,P2 are exposed and combined to output panchromatic combined pixel information; and so on.
  • the color combined pixel information is output by combining the color sub-pixels located in the first orientation P1 of the lens 170 and the color sub-pixels located in the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not marked in FIG. 27; they can be divided with reference to the orientations of the lens 170 in FIG. 23).
  • the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170
  • the second orientation P2 is the position corresponding to the right half of the lens 170.
  • in each color pixel, one sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the first orientation P1 of the lens 170, and the other sub-pixel 102 (that is, color sub-pixel A, B, or C) is located in the second orientation P2 of the lens 170.
  • after exposure, the color sub-pixel located in the first orientation P1 of the lens 170 and the color sub-pixel located in the second orientation P2 of the lens 170 are combined to output color combined pixel information.
  • for example, color sub-pixels A12,P1 and A12,P2 are exposed and combined to output color combined pixel information; color sub-pixels B14,P1 and B14,P2 are exposed and combined to output color combined pixel information; and so on.
  • the multiple sub-pixels in each panchromatic pixel and the multiple sub-pixels in each color pixel are combined to output combined pixel information, which increases the signal amount and improves the signal-to-noise ratio.
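Putting the three in-focus readout modes together, a sketch of the brightness-based selection; the threshold names are assumed, with third_b < second_b <= first_b, and the behavior between second_b and first_b is left open by the embodiments:

    def choose_output_mode(brightness: float, first_b: float, second_b: float, third_b: float):
        """Select the in-focus readout mode from the ambient brightness."""
        if brightness > first_b:
            return "pan_sub + color_sub"        # FIG. 25: full sub-pixel readout
        if brightness < third_b:
            return "pan_binned + color_binned"  # FIG. 27: bin everything for SNR
        if brightness < second_b:
            return "pan_sub + color_binned"     # FIG. 26: 1.5x lateral resolution
        return None                             # not specified by the embodiments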
  • after the two-dimensional pixel array 11 is exposed and outputs an original image including panchromatic sub-pixel information and color sub-pixel information (as shown in FIG. 25), an original image including panchromatic sub-pixel information and color combined pixel information (as shown in FIG. 26), or an original image including panchromatic combined pixel information and color combined pixel information (as shown in FIG. 27), the processing chip 20 can apply a demosaicing algorithm (such as a bilinear interpolation algorithm) to each of these three original images of different resolutions, so as to complement the pixel information or sub-pixel information of each channel (such as the red channel, the green channel, and the blue channel), maintain a complete presentation of the image colors, and finally obtain the color target image.
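A toy sketch of the kind of fill step a bilinear-interpolation demosaicing algorithm performs on one channel; real pipelines are considerably more elaborate:

    import numpy as np

    def bilinear_fill(channel: np.ndarray, sampled: np.ndarray) -> np.ndarray:
        """Fill missing samples of one color channel (sampled == False) with
        the average of the available 4-neighbours."""
        out = channel.astype(float).copy()
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                if not sampled[y, x]:
                    neigh = [out[j, i]
                             for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= j < h and 0 <= i < w and sampled[j, i]]
                    if neigh:
                        out[y, x] = sum(neigh) / len(neigh)
        return out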
  • the processing chip 20 may also perform any one or more of black level correction, lens shading correction, dead pixel compensation, color correction, global tone mapping, and color conversion processing on the original image to achieve a better image effect.
  • the present application also provides a mobile terminal 90.
  • the mobile terminal 90 of the embodiment of the present application may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (such as a smart watch, a smart bracelet, smart glasses, or a smart helmet), a head-mounted display device, a virtual reality device, etc., which is not limited here.
  • the mobile terminal 90 of the embodiment of the present application includes the camera assembly 40, the processor 60, the memory 70, and the casing 80 of any of the above embodiments.
  • the camera assembly 40, the processor 60, and the memory 70 are all mounted on the casing 80, and the image sensor 10 in the camera assembly 40 is connected to the processor 60.
  • the processor 60 can perform the same functions as the processing chip 20 in the camera assembly 40. In other words, the processor 60 can implement the functions that can be implemented by the processing chip 20 in any one of the foregoing embodiments.
  • the memory 70 is connected to the processor 60, and the memory 70 can store data obtained after processing by the processor 60, such as a target image.
  • the processor 60 and the image sensor 10 may be mounted on the same substrate, in which case the image sensor 10 and the processor 60 can be regarded as a camera assembly 40. The processor 60 and the image sensor 10 may, of course, also be mounted on different substrates.
  • the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels. Compared with a general color sensor, the light throughput is increased, the signal-to-noise ratio is better, the focusing performance in dark light is better, and the sensitivity of the sub-pixels 102 is also higher.
  • each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.


Abstract

An image sensor (10), a control method, a camera assembly (40), and a mobile terminal (90). The image sensor (10) includes a two-dimensional pixel array (11) and a lens array (17). In the minimal repeating unit of the two-dimensional pixel array (11), panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. Each pixel (101) includes at least two sub-pixels (102). The lens array (17) includes a plurality of lenses (170), each lens (170) covering one pixel (101).

Description

Image sensor, control method, camera assembly, and mobile terminal
Priority Information
This application claims priority to and the benefit of Chinese patent application No. 202010377218.4, filed with the China National Intellectual Property Administration on May 7, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of imaging technology, and more particularly to an image sensor, a control method, a camera assembly, and a mobile terminal.
Background
With the development of electronic technology, terminals with photographing functions have become ubiquitous in daily life. The focusing methods currently used in mobile phone photography are mainly contrast detection autofocus and phase detection autofocus (PDAF).
Summary
Embodiments of the present application provide an image sensor, a control method, a camera assembly, and a mobile terminal.
The image sensor of the embodiments of the present application includes a two-dimensional pixel array and a lens array. The two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes minimal repeating units; in each minimal repeating unit, the panchromatic pixels are arranged in a first diagonal direction, the color pixels are arranged in a second diagonal direction, and the first diagonal direction is different from the second diagonal direction. Each pixel includes at least two sub-pixels. The lens array includes a plurality of lenses, and each lens covers one pixel.
The control method of the embodiments of the present application is applied to an image sensor as described in the preceding paragraph. The control method includes: controlling the at least two sub-pixels to be exposed to output at least two pieces of sub-pixel information; determining phase information according to the at least two pieces of sub-pixel information for focusing; and, in an in-focus state, controlling the two-dimensional pixel array to be exposed to obtain a target image.
The camera assembly of the embodiments of the present application includes a lens and an image sensor as described above; the image sensor can receive light passing through the lens.
The mobile terminal of the embodiments of the present application includes a casing and a camera assembly as described above, the camera assembly being mounted on the casing.
In a further embodiment, the mobile terminal of the present application additionally includes a processor configured to implement a control method: controlling the at least two sub-pixels to be exposed to output at least two pieces of sub-pixel information; determining phase information according to the at least two pieces of sub-pixel information for focusing; and, in an in-focus state, controlling the two-dimensional pixel array to be exposed to obtain a target image.
Additional aspects and advantages of embodiments of the present application will be given in part in the following description, will in part become apparent from the following description, or will be learned through practice of the embodiments of the present application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of an image sensor according to some embodiments of the present application;
FIGS. 2 to 8 are schematic diagrams of distributions of sub-pixels according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a pixel circuit according to some embodiments of the present application;
FIG. 10 is a schematic diagram of exposure saturation times of different color channels;
FIGS. 11 to 20 are schematic diagrams of pixel arrangements of minimal repeating units and lens coverage according to some embodiments of the present application;
FIG. 21 is a schematic flowchart of a control method according to some embodiments of the present application;
FIG. 22 is a schematic diagram of a camera assembly according to some embodiments of the present application;
FIG. 23 is a schematic diagram of the principle of a control method according to some embodiments of the present application;
FIG. 24 is a schematic flowchart of a control method according to some embodiments of the present application;
FIGS. 25 to 27 are schematic diagrams of the principle of control methods according to some embodiments of the present application;
FIG. 28 is a schematic diagram of a mobile terminal according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and are not to be construed as limiting the present application.
Referring to FIG. 1, the present application provides an image sensor 10. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes minimal repeating units; in each minimal repeating unit, the panchromatic pixels are arranged in a first diagonal direction, the color pixels are arranged in a second diagonal direction, and the first diagonal direction is different from the second diagonal direction. Each pixel 101 includes at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
Referring to FIG. 2, in some embodiments, each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11.
Referring to FIG. 3, in some embodiments, each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11.
Referring to FIGS. 4 and 5, in some embodiments, each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is inclined relative to the length direction X or the width direction Y of the two-dimensional pixel array 11.
Referring to FIG. 6, in some embodiments, each pixel 101 includes two sub-pixels 102; in some of the pixels 101, the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11, while in other pixels 101 it is parallel to the width direction Y of the two-dimensional pixel array 11.
Referring to FIG. 7, in some embodiments, pixels 101 in which the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11 and pixels 101 in which the dividing line is parallel to the width direction Y are distributed in a staggered manner.
Referring to FIG. 8, in some embodiments, each pixel 101 includes four sub-pixels 102 distributed in a 2*2 matrix.
Referring to FIG. 1, in some embodiments, the image sensor 10 further includes a filter array 16; the filter array 16 includes a plurality of filters 160, and each filter 160 covers one pixel 101.
Referring to FIGS. 1 and 9, in some embodiments, each pixel 101 includes two sub-pixels 102, one of which includes a first photoelectric conversion element 1171 and the other of which includes a second photoelectric conversion element 1172. Each pixel 101 further includes a first exposure control circuit 1161 connected to the first photoelectric conversion element 1171 and a second exposure control circuit 1162 connected to the second photoelectric conversion element 1172. The first exposure control circuit 1161 is configured to transfer the charge generated by the first photoelectric conversion element 1171 after receiving light so as to output pixel information, and the second exposure control circuit 1162 is configured to transfer the charge generated by the second photoelectric conversion element 1172 after receiving light so as to output pixel information.
In some embodiments, each pixel 101 further includes a reset circuit connected to both the first exposure control circuit 1161 and the second exposure control circuit 1162. When the two sub-pixels 102 are exposed to output pixel information separately, or to output pixel information separately together with phase information, the first exposure control circuit 1161 first transfers the charge generated by the first photoelectric conversion element 1171 after receiving light to output first sub-pixel information, and after the reset circuit performs a reset, the second exposure control circuit 1162 then transfers the charge generated by the second photoelectric conversion element 1172 after receiving light to output second sub-pixel information. When the two sub-pixels 102 are exposed to output combined pixel information, the second exposure control circuit 1162 can transfer the charge generated by the second photoelectric conversion element 1172 after receiving light at the same time as the first exposure control circuit 1161 transfers the charge generated by the first photoelectric conversion element 1171 after receiving light, so as to output combined pixel information. When the two sub-pixels 102 are exposed to output combined pixel information together with phase information, the first exposure control circuit 1161 first transfers the charge generated by the first photoelectric conversion element 1171 after receiving light to output first sub-pixel information, and after the reset circuit performs a reset, the second exposure control circuit 1162 then transfers the charge generated by the second photoelectric conversion element 1172 after receiving light to output second sub-pixel information; the first sub-pixel information and the second sub-pixel information are combined into combined pixel information.
In some embodiments, when the two sub-pixels 102 are exposed to output combined pixel information together with phase information, the image sensor 10 further includes a buffer configured to store the first sub-pixel information and the second sub-pixel information so as to output the phase information.
In some embodiments, the image sensor 10 further includes a first exposure control line and a second exposure control line. The first exposure control line is used to transmit a first exposure signal to control the first exposure time of the panchromatic pixels, and the second exposure control line is used to transmit a second exposure signal to control the second exposure time of the color pixels, where the first exposure time is less than or equal to the second exposure time.
Referring to FIGS. 1 and 21, the present application further provides a control method applied to the image sensor 10 described above (a two-dimensional pixel array 11 of color pixels and panchromatic pixels with the diagonal minimal-repeating-unit arrangement, each pixel 101 including at least two sub-pixels 102, and a lens array 17 in which each lens 170 covers one pixel 101). The control method includes:
01: controlling the at least two sub-pixels 102 to be exposed to output at least two pieces of sub-pixel information;
02: determining phase information according to the at least two pieces of sub-pixel information for focusing;
03: in an in-focus state, controlling the two-dimensional pixel array 11 to be exposed to obtain a target image.
Referring to FIGS. 1 and 24, in some embodiments, the control method further includes:
04: obtaining the ambient brightness;
and step 03, controlling the two-dimensional pixel array 11 to be exposed in the in-focus state to obtain the target image, includes:
031: when the ambient brightness is greater than a first predetermined brightness, in the in-focus state, controlling the at least two sub-pixels 102 in each panchromatic pixel to be exposed to respectively output panchromatic sub-pixel information, and controlling the at least two sub-pixels 102 in each color pixel to be exposed to respectively output color sub-pixel information;
032: generating the target image according to the panchromatic sub-pixel information and the color sub-pixel information.
Referring to FIGS. 1 and 24, in some embodiments, the control method further includes:
04: obtaining the ambient brightness;
and step 03, controlling the two-dimensional pixel array 11 to be exposed in the in-focus state to obtain the target image, includes:
033: when the ambient brightness is less than a second predetermined brightness, in the in-focus state, controlling the at least two sub-pixels in each panchromatic pixel to be exposed to respectively output panchromatic sub-pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed to combine and output color combined pixel information;
034: generating the target image according to the panchromatic sub-pixel information and the color combined pixel information.
Referring to FIGS. 1 and 24, in some embodiments, the control method further includes:
04: obtaining the ambient brightness;
and step 03, controlling the two-dimensional pixel array 11 to be exposed in the in-focus state to obtain the target image, includes:
035: when the ambient brightness is less than a third predetermined brightness, in the in-focus state, controlling the at least two sub-pixels in each panchromatic pixel to be exposed to combine and output panchromatic combined pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed to combine and output color combined pixel information;
036: generating the target image according to the panchromatic combined pixel information and the color combined pixel information.
Referring to FIGS. 1 and 22, the present application further provides a camera assembly 40. The camera assembly 40 includes a lens 30 and the image sensor 10 described above; the image sensor 10 can receive light passing through the lens 30.
Referring to FIGS. 1, 22, and 28, the present application further provides a mobile terminal 90. The mobile terminal 90 includes a casing 80 and the camera assembly 40 described above, and the camera assembly 40 is mounted on the casing 80.
Referring to FIGS. 1, 21, 22, and 28, in a further embodiment the mobile terminal 90 (a casing 80 with the camera assembly 40 mounted on it, as above) also includes a processor 60 configured to implement a control method:
01: controlling the at least two sub-pixels 102 to be exposed to output at least two pieces of sub-pixel information;
02: determining phase information according to the at least two pieces of sub-pixel information for focusing;
03: in an in-focus state, controlling the two-dimensional pixel array 11 to be exposed to obtain a target image.
With the development of electronic technology, terminals with photographing functions have become ubiquitous in daily life. The focusing methods currently used in mobile phone photography are mainly contrast detection autofocus and phase detection autofocus (PDAF). Contrast autofocus is relatively accurate but too slow. Phase detection autofocus is fast, but the implementations currently on the market are all built on color (Bayer) sensors, whose focusing performance in dark environments is not good enough.
For these reasons, the present application provides an image sensor 10 (shown in FIG. 1). In the image sensor 10 of the embodiments of the present application, the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels; compared with a general color sensor, the light throughput is increased, the signal-to-noise ratio is better, the focusing performance in dark light is better, and the sensitivity of the sub-pixels 102 is also higher. In addition, each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
The basic structure of the image sensor 10 is introduced next. Referring to FIG. 1, FIG. 1 is a schematic diagram of the image sensor 10 and of a pixel 101 according to an embodiment of the present application. The image sensor 10 includes a two-dimensional pixel array 11, a filter array 16, and a lens array 17. Along the light-receiving direction of the image sensor 10, the lens array 17, the filter array 16, and the two-dimensional pixel array 11 are arranged in sequence.
The image sensor 10 may use a complementary metal-oxide-semiconductor (CMOS) photosensitive element or a charge-coupled device (CCD) photosensitive element.
The two-dimensional pixel array 11 includes a plurality of pixels 101 arranged two-dimensionally in an array. Each pixel 101 includes at least two sub-pixels 102; for example, each pixel 101 may include two, three, four, or more sub-pixels 102.
The filter array 16 includes a plurality of filters 160, and each filter 160 covers a corresponding pixel 101. The spectral response of each pixel 101 (that is, the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101.
The lens array 17 includes a plurality of lenses 170, and each lens 170 covers a corresponding pixel 101.
FIGS. 2 to 8 show schematic distributions of the sub-pixels 102 in various image sensors 10. In the distributions of sub-pixels 102 shown in FIGS. 2 to 8, each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
For example, FIG. 2 is a schematic diagram of one distribution of sub-pixels 102 according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11. The cross-section of each sub-pixel 102 is rectangular, and the cross-sectional areas of the two sub-pixels 102 in each pixel 101 are equal; here, the cross-section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10 (the same applies below). The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. Phase information can be determined from the two pieces of sub-pixel information output when the two sub-pixels 102 in each pixel 101 are exposed, thereby achieving phase focusing; at the same time, since each pixel 101 includes two sub-pixels 102 whose dividing line is parallel to the length direction X of the two-dimensional pixel array 11, the vertical resolution of the image sensor 10 can be improved.
For example, FIG. 3 is a schematic diagram of another distribution of sub-pixels 102 according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11. The cross-section of each sub-pixel 102 is rectangular, and the cross-sectional areas of the two sub-pixels 102 in each pixel 101 are equal. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. Phase information can be determined from the two pieces of sub-pixel information output when the two sub-pixels 102 in each pixel 101 are exposed, thereby achieving phase focusing; at the same time, since each pixel 101 includes two sub-pixels 102 whose dividing line is parallel to the width direction Y of the two-dimensional pixel array 11, the horizontal resolution of the image sensor 10 can be improved.
For example, FIG. 4 is a schematic diagram of yet another distribution of sub-pixels 102 according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is inclined relative to the length direction X of the two-dimensional pixel array 11. The cross-section of each sub-pixel 102 is trapezoidal: in the same pixel 101, the cross-section of one sub-pixel 102 is a trapezoid that is narrow at the top and wide at the bottom, and the cross-section of the other sub-pixel 102 is a trapezoid that is wide at the top and narrow at the bottom. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. Taking the center point of each pixel 101 as the origin, the direction parallel to the length direction of the two-dimensional pixel array 11 as the horizontal axis, and the width direction as the vertical axis, a rectangular coordinate system is established: both sub-pixels 102 are distributed on the positive and negative halves of the horizontal axis, and both are distributed on the positive and negative halves of the vertical axis. One sub-pixel 102 in each pixel 101 is distributed in the first, second, and fourth quadrants, and the other is distributed in the second, third, and fourth quadrants. The two sub-pixels 102 can obtain phase information both in the horizontal direction and in the vertical direction, so that the image sensor 10 can be used both in scenes containing many solid-color horizontal stripes and in scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. At the same time, since each pixel 101 includes two sub-pixels 102 whose dividing line is inclined relative to the length direction X of the two-dimensional pixel array 11, the horizontal or vertical resolution of the image sensor 10 can be improved.
For example, FIG. 5 is a schematic diagram of yet another distribution of sub-pixels 102 according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102, and the dividing line between the two sub-pixels 102 is inclined relative to the width direction Y of the two-dimensional pixel array 11. The cross-section of each sub-pixel 102 is trapezoidal: in the same pixel 101, the cross-section of one sub-pixel 102 is a trapezoid that is wide at the top and narrow at the bottom, and the cross-section of the other sub-pixel 102 is a trapezoid that is narrow at the top and wide at the bottom. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. In the rectangular coordinate system defined as above, both sub-pixels 102 are distributed on the positive and negative halves of the horizontal axis and on the positive and negative halves of the vertical axis. One sub-pixel 102 in each pixel 101 is distributed in the first, second, and third quadrants, and the other is distributed in the first, third, and fourth quadrants. The two sub-pixels 102 can obtain phase information both in the horizontal direction and in the vertical direction, so that the image sensor 10 can be used both in scenes containing many solid-color horizontal stripes and in scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. At the same time, since each pixel 101 includes two sub-pixels 102 whose dividing line is inclined relative to the width direction Y of the two-dimensional pixel array 11, the horizontal or vertical resolution of the image sensor 10 can be improved.
For example, FIG. 6 shows yet another sub-pixel 102 distribution. Each pixel 101 includes two sub-pixels 102. In some pixels 101 the boundary between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11, and in other pixels 101 it is parallel to the width direction Y. Further, the pixels 101 whose boundary is parallel to the length direction X and the pixels 101 whose boundary is parallel to the width direction Y are distributed in alternate rows or alternate columns. In some pixels 101 the two sub-pixels 102 acquire phase information in the vertical direction, and in others in the horizontal direction, so the image sensor 10 can be used in scenes with many solid-color horizontal stripes as well as scenes with many solid-color vertical stripes; scene adaptability is good and phase-focusing accuracy is high. Meanwhile, since some pixels 101 are split parallel to the length direction X and others parallel to the width direction Y, both the horizontal and the vertical resolution of the image sensor 10 are increased to a certain extent. In addition, alternating the two boundary orientations row by row or column by column does not unduly complicate the routing of the two-dimensional pixel array 11.
For example, FIG. 7 shows yet another sub-pixel 102 distribution. Each pixel 101 includes two sub-pixels 102. In some pixels 101 the boundary between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11, and in other pixels 101 it is parallel to the width direction Y. Further, the pixels 101 with the two boundary orientations are distributed in a staggered (checkerboard) manner. As above, some pixels 101 acquire vertical phase information and some horizontal phase information, giving good scene adaptability and high phase-focusing accuracy, and both the horizontal and the vertical resolution of the image sensor 10 are increased to a certain extent. In addition, the staggered distribution of the two boundary orientations maximizes the phase-focusing performance.
For example, FIG. 8 shows yet another sub-pixel 102 distribution. Each pixel 101 includes four sub-pixels 102 arranged in a 2*2 matrix. Phase information can be accurately determined from the at least two pieces of sub-pixel information output when any at least two of the four sub-pixels 102 of a pixel 101 are exposed, enabling phase-detection autofocus; at the same time, the 2*2 arrangement increases both the horizontal and the vertical resolution of the image sensor 10. In other examples, when each pixel 101 includes four sub-pixels 102, the four sub-pixels 102 may instead be arranged along the width direction Y with the boundaries between adjacent sub-pixels 102 parallel to the length direction X, or along the length direction X with the boundaries parallel to the width direction Y, or the boundaries between adjacent sub-pixels 102 may be inclined relative to the length direction X or the width direction Y, among other options; no limitation is imposed here.
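For the 2*2 case, the horizontal and vertical phase pairs are formed by summing the sub-pixels column-wise and row-wise. A minimal numpy sketch of that bookkeeping (the array shape and all names are assumptions for illustration, not the patent's data layout):

```python
import numpy as np

def phase_pairs_2x2(subpixels):
    """subpixels: array of shape (H, W, 2, 2) holding the four sub-pixel
    values of each pixel, indexed [row, col, sub_row, sub_col].
    Returns (left, right, top, bottom) half-images: the left/right pair
    supports horizontal phase detection, the top/bottom pair vertical."""
    left   = subpixels[:, :, :, 0].sum(axis=2)   # the two left sub-pixels
    right  = subpixels[:, :, :, 1].sum(axis=2)   # the two right sub-pixels
    top    = subpixels[:, :, 0, :].sum(axis=2)   # the two top sub-pixels
    bottom = subpixels[:, :, 1, :].sum(axis=2)   # the two bottom sub-pixels
    return left, right, top, bottom
```

The left/right pair drives horizontal phase detection and the top/bottom pair vertical phase detection, matching the scene-adaptability point above.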
It should be noted that, besides the cross-sectional shapes shown in FIGS. 2 to 8, the cross-section of a sub-pixel 102 may take other regular or irregular shapes; no limitation is imposed here.
Moreover, in addition to the cases where every pixel 101 includes two sub-pixels 102 or every pixel 101 includes four sub-pixels 102, it is also possible for some pixels 101 to include two sub-pixels 102 and others four, or for pixels 101 to include any other number of at least two sub-pixels 102; no limitation is imposed here.
FIG. 9 is a schematic diagram of a pixel circuit 110 according to an embodiment of the present application. The description below takes the case where each pixel 101 includes two sub-pixels 102 as an example; the case of more than two sub-pixels 102 is described later. The working principle of the pixel circuit 110 is explained with reference to FIGS. 1 and 9.
As shown in FIGS. 1 and 9, the pixel circuit 110 includes a first photoelectric conversion element 1171 (for example, a photodiode PD), a second photoelectric conversion element 1172 (for example, a photodiode PD), a first exposure control circuit 1161 (for example, a transfer transistor 112), a second exposure control circuit 1162 (for example, a transfer transistor 112), a reset circuit (for example, a reset transistor 113), an amplification circuit (for example, an amplification transistor 114), and a selection circuit (for example, a selection transistor 115). One sub-pixel 102 includes the first photoelectric conversion element 1171 and the other sub-pixel 102 includes the second photoelectric conversion element 1172. In the embodiments of the present application, the transfer transistor 112, reset transistor 113, amplification transistor 114, and selection transistor 115 are, for example, MOS transistors, but are not limited thereto.
For example, referring to FIGS. 1 and 9, the gate TG of the transfer transistor 112 is connected to a vertical driving unit (not shown) of the image sensor 10 through an exposure control line (not shown); the gate RG of the reset transistor 113 is connected to the vertical driving unit through a reset control line (not shown); and the gate SEL of the selection transistor 115 is connected to the vertical driving unit through a selection line (not shown). In each pixel circuit 110, the first exposure control circuit 1161 is electrically connected to the first photoelectric conversion element 1171 to transfer the potential accumulated by the first photoelectric conversion element 1171 after illumination, and the second exposure control circuit 1162 is electrically connected to the second photoelectric conversion element 1172 to transfer the potential accumulated by the second photoelectric conversion element 1172 after illumination. For example, both the first photoelectric conversion element 1171 and the second photoelectric conversion element 1172 include a photodiode PD whose anode is connected, for example, to ground. The photodiode PD converts received light into charge. The cathode of the photodiode PD is connected to a floating diffusion FD via the first exposure control circuit 1161 or the second exposure control circuit 1162 (for example, the transfer transistor 112). The floating diffusion FD is connected to the gate of the amplification transistor 114 and to the source of the reset transistor 113.
For example, both the first exposure control circuit 1161 and the second exposure control circuit 1162 may be transfer transistors 112, whose gates serve as the control terminals TG. When a pulse of an active level (for example, the VPIX level) is transmitted to the gate of the transfer transistor 112 through the exposure control line, the transfer transistor 112 turns on and transfers the photoelectrically converted charge of the photodiode PD to the floating diffusion FD.
For example, the reset circuit is connected to both the first exposure control circuit 1161 and the second exposure control circuit 1162. The reset circuit may be a reset transistor 113 whose drain is connected to the pixel power supply VPIX and whose source is connected to the floating diffusion FD. Before charge is transferred from the photodiode PD to the floating diffusion FD, a pulse of an active reset level is transmitted to the gate of the reset transistor 113 via the reset line; the reset transistor 113 turns on and resets the floating diffusion FD to the pixel power supply VPIX.
For example, the gate of the amplification transistor 114 is connected to the floating diffusion FD and its drain to the pixel power supply VPIX. After the floating diffusion FD is reset by the reset transistor 113, the amplification transistor 114 outputs the reset level through the output terminal OUT via the selection transistor 115. After the charge of the photodiode PD is transferred by the transfer transistor 112, the amplification transistor 114 outputs the signal level through the output terminal OUT via the selection transistor 115.
For example, the drain of the selection transistor 115 is connected to the source of the amplification transistor 114, and its source is connected through the output terminal OUT to a column processing unit (not shown) of the image sensor 10. When a pulse of an active level is transmitted to the gate of the selection transistor 115 through the selection line, the selection transistor 115 turns on, and the signal output by the amplification transistor 114 is transmitted to the column processing unit through the selection transistor 115.
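The read-out order described above (reset, sample the reset level, transfer, sample the signal level) can be sketched as a toy event sequence. The class below is a hypothetical behavioral model in Python, not a circuit simulation; quantities and units are arbitrary.

```python
class ToyPixel4T:
    """Toy behavioral model of the FIG. 9 pixel: PD accumulates charge,
    RST ties FD to VPIX, TG dumps PD charge onto FD, SF/SEL read FD out."""
    def __init__(self):
        self.pd_charge = 0.0   # charge accumulated on photodiode PD
        self.fd = 0.0          # level on floating diffusion FD

    def integrate(self, photons, qe=0.6):
        self.pd_charge += photons * qe     # photoelectric conversion

    def reset(self, vpix=1.0):
        self.fd = vpix                     # RST on: FD tied to VPIX

    def transfer(self):
        self.fd -= self.pd_charge          # TG on: charge pulls FD level down
        self.pd_charge = 0.0

    def read(self):
        return self.fd                     # SF + SEL drive the column line OUT

pix = ToyPixel4T()
pix.integrate(photons=500)
pix.reset()
reset_level = pix.read()              # sampled before the transfer
pix.transfer()
signal_level = pix.read()             # sampled after the transfer
signal = reset_level - signal_level   # correlated double sampling (CDS)
```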
When the two sub-pixels 102 are exposed to output pixel information separately, or to output pixel information separately together with phase information, the first exposure control circuit 1161 first transfers the charge generated by the first photoelectric conversion element 1171 upon receiving light so as to output first sub-pixel information; after the reset circuit resets, the second exposure control circuit 1162 then transfers the charge generated by the second photoelectric conversion element 1172 so as to output second sub-pixel information.
When the two sub-pixels 102 are exposed so that their pixel information is output merged (without phase information), the second exposure control circuit 1162 is able to transfer the charge generated by the second photoelectric conversion element 1172 at the same time as the first exposure control circuit 1161 transfers the charge generated by the first photoelectric conversion element 1171, so that merged pixel information is output. That is, the two transfers may occur simultaneously, or the first exposure control circuit 1161 may transfer first and the second exposure control circuit 1162 afterwards.
When the two sub-pixels 102 are exposed so as to output merged pixel information together with phase information, the first exposure control circuit 1161 first transfers the charge generated by the first photoelectric conversion element 1171 to output first sub-pixel information; after the reset circuit resets, the second exposure control circuit 1162 then transfers the charge generated by the second photoelectric conversion element 1172 to output second sub-pixel information; the first sub-pixel information and the second sub-pixel information are then merged into merged pixel information. In this case the image sensor 10 may further include a buffer (not shown) for storing the first and second sub-pixel information so that phase information can be output.
In signal transmission, the pixel information and the phase information can each be output separately through a Mobile Industry Processor Interface (MIPI). Because relatively little phase information is needed, the pixel circuit 110 may output one row of phase information for every four rows of pixel information it outputs. In practice, one row of phase information may of course be output for every row of pixel information, as required; no limitation is imposed here.
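A minimal sketch of this 4:1 interleaving (row counts and the stream encoding are illustrative assumptions):

```python
def interleave_rows(pixel_rows, phase_rows, pixel_per_phase=4):
    """Merge pixel-information rows and phase-information rows into one
    output stream: after every pixel_per_phase pixel rows, emit one phase
    row (if any remain)."""
    out, p = [], 0
    for i, row in enumerate(pixel_rows, start=1):
        out.append(("pixel", row))
        if i % pixel_per_phase == 0 and p < len(phase_rows):
            out.append(("phase", phase_rows[p]))
            p += 1
    return out

stream = interleave_rows([f"P{i}" for i in range(8)], ["PH0", "PH1"])
# -> P0..P3, PH0, P4..P7, PH1
```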
It should be noted that the structure of the pixel circuit 110 in the embodiments of the present application is not limited to that shown in FIG. 9. For example, the pixel circuit 110 may have a three-transistor structure in which the functions of the amplification transistor 114 and the selection transistor 115 are performed by a single transistor. Likewise, the first exposure control circuit 1161 and the second exposure control circuit 1162 are not limited to a single transfer transistor 112: any electronic device or structure whose conduction can be controlled through a control terminal can serve as the exposure control circuit in the embodiments of the present application, the single transfer transistor 112 simply being a simple, low-cost, and easily controlled implementation.
Furthermore, when each pixel 101 includes more than two sub-pixels 102 (that is, more than two photoelectric conversion elements), it is only necessary to increase the number of exposure control circuits correspondingly: each exposure control circuit is connected to its corresponding photoelectric conversion element, and all exposure control circuits are connected to the reset circuit.
In the embodiments of the present application, the sub-pixels 102 of a pixel 101 share the reset circuit, the amplification circuit, and the selection circuit, which reduces the area occupied by the pixel circuit 110 and lowers cost. In other embodiments, each sub-pixel 102 of each pixel 101 may instead have its own reset circuit, amplification circuit, selection circuit, and so on; no limitation is imposed here.
In an image sensor containing pixels of multiple colors, pixels of different colors receive different exposure amounts per unit time. After some colors have saturated, others have not yet reached their ideal exposure state. For example, exposure to 60%-90% of the saturation exposure tends to give a good signal-to-noise ratio and good accuracy, but the embodiments of the present application are not limited thereto.
FIG. 10 illustrates this with RGBW (red, green, blue, panchromatic) as an example. In FIG. 10 the horizontal axis is exposure time and the vertical axis is exposure amount; Q is the saturation exposure; LW is the exposure curve of the panchromatic pixel W, LG that of the green pixel G, LR that of the red pixel R, and LB that of the blue pixel B.
As can be seen from FIG. 10, the exposure curve LW of the panchromatic pixel W has the largest slope: the panchromatic pixel W accumulates the most exposure per unit time and saturates at time t1. The slope of the green curve LG is next largest, the green pixel saturating at t2; then the red curve LR, the red pixel saturating at t3; the blue curve LB has the smallest slope, the blue pixel saturating at t4. FIG. 10 thus shows that the panchromatic pixel W receives more exposure per unit time than the color pixels, that is, the sensitivity of the panchromatic pixel W is higher than that of the color pixels.
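The saturation times t1-t4 follow directly from the slopes in FIG. 10: with saturation exposure Q and a per-color sensitivity (slope), the saturation time is Q divided by the slope. A small worked sketch with made-up numbers:

```python
Q = 1000.0  # saturation exposure, arbitrary units

# Hypothetical relative sensitivities (slopes of LW, LG, LR, LB); the text
# only fixes their ordering W > G > R > B, not these numbers.
slopes = {"W": 4.0, "G": 2.0, "R": 1.5, "B": 1.0}

t_sat = {color: Q / slope for color, slope in slopes.items()}
# {'W': 250.0, 'G': 500.0, 'R': ~666.7, 'B': 1000.0}
# i.e. t1 (W) < t2 (G) < t3 (R) < t4 (B), matching the ordering in FIG. 10
```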
If an image sensor containing only color pixels were used for phase-detection autofocus, then in bright environments the R, G, and B color pixels would receive ample light and output pixel information with a high signal-to-noise ratio, giving accurate phase focusing; but in dim environments the three kinds of color pixels receive little light, the signal-to-noise ratio of the output pixel information is low, and the phase-focusing accuracy drops accordingly.
For this reason, the image sensor 10 of the embodiments of the present application places both panchromatic pixels and color pixels in the two-dimensional pixel array 11. Compared with an ordinary color sensor, this increases the amount of light admitted, yields a better signal-to-noise ratio and better low-light focusing, and raises the sensitivity of the sub-pixels 102. The image sensor 10 can therefore focus accurately across scenes of different ambient brightness, improving its scene adaptability.
It should be noted that the spectral response of each pixel 101 (the color of light it can receive) is determined by the color of the filter 160 covering that pixel 101. Throughout this application, "color pixel" and "panchromatic pixel" refer to pixels 101 that respond to light of the same color as their corresponding filter 160.
FIGS. 11 to 20 show examples of pixel 101 arrangements in various image sensors 10 (shown in FIG. 1). Referring to FIGS. 11 to 20, the pixels 101 of the two-dimensional pixel array 11 may include both a plurality of panchromatic pixels W and a plurality of color pixels (for example, a plurality of first-color pixels A, second-color pixels B, and third-color pixels C). The color and panchromatic pixels are distinguished by the waveband passed by the filter 160 (shown in FIG. 1) covering them: a color pixel has a narrower spectral response than a panchromatic pixel, its response spectrum being, for example, a part of that of the panchromatic pixel W. Each panchromatic pixel and each color pixel contains at least two sub-pixels 102. The two-dimensional pixel array 11 is composed of multiple minimal repeating units (FIGS. 11 to 20 show examples), which are replicated and tiled along rows and columns. Each minimal repeating unit includes several subunits, each containing multiple single-color pixels and multiple panchromatic pixels. For example, each minimal repeating unit includes four subunits: one subunit contains multiple single-color pixels A (first-color pixels A) and multiple panchromatic pixels W, two subunits contain multiple single-color pixels B (second-color pixels B) and multiple panchromatic pixels W, and the remaining subunit contains multiple single-color pixels C (third-color pixels C) and multiple panchromatic pixels W.
For example, the minimal repeating unit has equal numbers of pixels 101 in its rows and columns: for instance, 4 rows by 4 columns, 6 by 6, 8 by 8, or 10 by 10, though not limited to these. Likewise, the subunit has equal numbers of pixels 101 in its rows and columns: for example 2 by 2, 3 by 3, 4 by 4, or 5 by 5, though not limited to these. Such arrangements help balance the image resolution and the color rendition in the row and column directions, improving display quality.
In one example, within the minimal repeating unit the panchromatic pixels W are arranged along a first diagonal direction D1 and the color pixels along a second diagonal direction D2, D1 being different from D2.
For example, FIG. 11 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of one minimal repeating unit according to an embodiment of the present application. The minimal repeating unit is 4 rows by 4 columns with 16 pixels, each subunit 2 rows by 2 columns with 4 pixels, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (4 rows x 4 columns); see FIG. 11.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 11, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-left and lower-right corners in FIG. 11) and the color pixels along the second diagonal direction D2 (for example, the direction joining the lower-left and upper-right corners in FIG. 11), D1 differing from D2. For example, the first and second diagonals are perpendicular.
It should be noted that the first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonals themselves but include directions parallel to them. "Direction" here is not a single pointing; it is the concept of the straight line along which the arrangement runs, covering both of that line's orientations.
As shown in FIG. 11, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
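As one concrete illustration of such a minimal repeating unit, the sketch below builds a 4x4 RGBW-style pattern from 2x2 subunits with W on the subunit diagonal. The placement of the A/B/C subunits (A top-left, B top-right and bottom-left, C bottom-right) is an assumption for illustration only; the authoritative arrangements are those shown in FIGS. 11-20.

```python
def minimal_repeating_unit():
    """One plausible 4x4 minimal repeating unit: each 2x2 subunit has W on
    its diagonal and one color on the anti-diagonal (assumed placement)."""
    def subunit(color):
        return [["W", color],
                [color, "W"]]
    layout = [["A", "B"],   # assumed subunit placement (Bayer-like)
              ["B", "C"]]
    unit = [["" for _ in range(4)] for _ in range(4)]
    for si, srow in enumerate(layout):
        for sj, color in enumerate(srow):
            sub = subunit(color)
            for i in range(2):
                for j in range(2):
                    unit[2 * si + i][2 * sj + j] = sub[i][j]
    return unit

for row in minimal_repeating_unit():
    print(" ".join(row))
# W A W B
# A W B W
# W B W C
# B W C W
```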
For example, FIG. 12 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of another minimal repeating unit. The minimal repeating unit is 4 rows by 4 columns with 16 pixels 101, each subunit 2 rows by 2 columns with 4 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (4 rows x 4 columns); see FIG. 12.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 12, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-right and lower-left corners in FIG. 12) and the color pixels along the second diagonal direction D2 (for example, the direction joining the upper-left and lower-right corners in FIG. 12). D1 differs from D2; for example, the first and second diagonals are perpendicular.
As shown in FIG. 12, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
For example, FIG. 13 and FIG. 14 are schematic diagrams of the pixel 101 arrangement and lens 170 coverage of further minimal repeating units. The embodiments of FIGS. 13 and 14 correspond respectively to the arrangements and coverage of FIGS. 11 and 12, with the first-color pixel A being a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu.
It should be noted that in some embodiments the response band of the panchromatic pixel W is the visible band (for example, 400 nm-760 nm); for instance, an infrared filter is placed over the panchromatic pixel W to remove infrared light. In other embodiments the response band of the panchromatic pixel W covers the visible and near-infrared bands (for example, 400 nm-1000 nm), matching the response band of the photoelectric conversion element (for example, the photodiode PD) in the image sensor 10; in that case the panchromatic pixel W may have no filter, its response band being determined by, and matched to, that of the photodiode. The embodiments of the present application are not limited to these band ranges.
In some embodiments of the minimal repeating units of FIGS. 11 and 12, the first-color pixel A may instead be a red pixel R, the second-color pixel B a yellow pixel Y, and the third-color pixel C a blue pixel Bu.
In some embodiments of the minimal repeating units of FIGS. 11 and 12, the first-color pixel A may be a magenta pixel M, the second-color pixel B a cyan pixel Cy, and the third-color pixel C a yellow pixel Y.
For example, FIG. 15 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 6 rows by 6 columns with 36 pixels 101, each subunit 3 rows by 3 columns with 9 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (6 rows x 6 columns); see FIG. 15.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 15, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-left and lower-right corners in FIG. 15) and the color pixels along the second diagonal direction D2 (for example, the direction joining the lower-left and upper-right corners in FIG. 15), D1 differing from D2. For example, the first and second diagonals are perpendicular.
As shown in FIG. 15, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
For example, FIG. 16 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 6 rows by 6 columns with 36 pixels 101, each subunit 3 rows by 3 columns with 9 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (6 rows x 6 columns); see FIG. 16.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 16, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-right and lower-left corners in FIG. 16) and the color pixels along the second diagonal direction D2 (for example, the direction joining the upper-left and lower-right corners in FIG. 16). D1 differs from D2; for example, the first and second diagonals are perpendicular.
As shown in FIG. 16, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
By way of example, in the minimal repeating units of FIGS. 15 and 16 the first-color pixel A may be a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu; or A may be a red pixel R, B a yellow pixel Y, and C a blue pixel Bu; or A may be a magenta pixel M, B a cyan pixel Cy, and C a yellow pixel Y.
For example, FIG. 17 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 8 rows by 8 columns with 64 pixels 101, each subunit 4 rows by 4 columns with 16 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (8 rows x 8 columns); see FIG. 17.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 17, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-left and lower-right corners in FIG. 17) and the color pixels along the second diagonal direction D2 (for example, the direction joining the lower-left and upper-right corners in FIG. 17), D1 differing from D2. For example, the first and second diagonals are perpendicular.
As shown in FIG. 17, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
For example, FIG. 18 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 8 rows by 8 columns with 64 pixels 101, each subunit 4 rows by 4 columns with 16 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (8 rows x 8 columns); see FIG. 18.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 18, the panchromatic pixels W are arranged along the first diagonal direction D1 (the direction joining the upper-right and lower-left corners in FIG. 18) and the color pixels along the second diagonal direction D2 (for example, the direction joining the upper-left and lower-right corners in FIG. 18). D1 differs from D2; for example, the first and second diagonals are perpendicular.
As shown in FIG. 18, one lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
In the examples of FIGS. 11 to 18, within each subunit the adjacent panchromatic pixels W lie on a diagonal, and the adjacent color pixels likewise lie on a diagonal. In another example, within each subunit the adjacent panchromatic pixels are arranged horizontally and the adjacent color pixels are also arranged horizontally; or the adjacent panchromatic pixels are arranged vertically and the adjacent color pixels are also arranged vertically. The panchromatic pixels of adjacent subunits may be arranged horizontally or vertically, as may the color pixels of adjacent subunits.
For example, FIG. 19 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 4 rows by 4 columns with 16 pixels 101, each subunit 2 rows by 2 columns with 4 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (4 rows x 4 columns); see FIG. 19.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 19, within each subunit the adjacent panchromatic pixels W are arranged vertically, as are the adjacent color pixels. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
For example, FIG. 20 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of yet another minimal repeating unit. The minimal repeating unit is 4 rows by 4 columns with 16 pixels 101, each subunit 2 rows by 2 columns with 4 pixels 101, arranged as follows:
[Pixel-arrangement matrix of the minimal repeating unit (4 rows x 4 columns); see FIG. 20.]
W denotes a panchromatic pixel; A, B, and C denote a first-color, a second-color, and a third-color pixel among the plurality of color pixels, respectively.
As shown in FIG. 20, within each subunit the adjacent panchromatic pixels W are arranged horizontally, as are the adjacent color pixels. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102.
In the minimal repeating units of FIGS. 19 and 20, the first-color pixel A may be a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu; or A may be a red pixel R, B a yellow pixel Y, and C a blue pixel Bu; or A may be a magenta pixel M, B a cyan pixel Cy, and C a yellow pixel Y.
In the minimal repeating units of FIGS. 11 to 20, each panchromatic pixel and each color pixel includes two sub-pixels 102. In other embodiments, each panchromatic pixel may include four sub-pixels 102 and each color pixel four sub-pixels 102; or each panchromatic pixel four sub-pixels 102 and each color pixel two, and so on; no limitation is imposed here.
In the minimal repeating units of FIGS. 11 to 20, the boundary between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11. In other embodiments, the boundary between the two sub-pixels 102 may instead be parallel to the length direction X, or take any of the boundary forms described above or combinations thereof.
In a two-dimensional pixel array 11 with any of the arrangements of FIGS. 11 to 20, the panchromatic pixels and the color pixels may be controlled by different exposure control lines, so that the exposure times of the panchromatic pixels and of the color pixels are controlled independently. For any of the arrangements of FIGS. 11 to 18, the control terminals of the exposure control circuits of at least two panchromatic pixels adjacent along the first diagonal are electrically connected to a first exposure control line, and the control terminals of the exposure control circuits of at least two color pixels adjacent along the second diagonal are electrically connected to a second exposure control line. For the arrangements of FIGS. 19 and 20, the control terminals of the exposure control circuits of the panchromatic pixels in a given row or column are electrically connected to a first exposure control line, and those of the color pixels in a given row or column to a second exposure control line. The first exposure control line can transmit a first exposure signal to control a first exposure time of the panchromatic pixels, and the second exposure control line a second exposure signal to control a second exposure time of the color pixels. When a panchromatic pixel includes two sub-pixels 102, both of its sub-pixels 102 are connected to the same first exposure control line; when a color pixel includes two sub-pixels 102, both of its sub-pixels 102 are connected to the same second exposure control line.
When the exposure times of the panchromatic and color pixels are controlled independently, the first exposure time of the panchromatic pixels may be shorter than the second exposure time of the color pixels. For example, the ratio of the first exposure time to the second exposure time may be 1:2, 1:3, or 1:4. In dim environments, for instance, the color pixels are more prone to under-exposure, and the ratio can be adjusted to 1:2, 1:3, or 1:4 according to the ambient brightness. Exposure ratios that are integer ratios, or close to integer ratios, facilitate the setting and control of timing signals.
In some embodiments, the relation between the first and second exposure times may be determined from the ambient brightness. For example, when the ambient brightness is at or below a brightness threshold, the panchromatic pixels are exposed with a first exposure time equal to the second exposure time; when the ambient brightness exceeds the threshold, the panchromatic pixels are exposed with a first exposure time shorter than the second exposure time. In the latter case, the relation may be set from the difference between the ambient brightness and the threshold: the larger the difference, the smaller the ratio of the first exposure time to the second. For example, when the difference lies in a first range [a, b) the ratio is 1:2; when it lies in a second range [b, c) the ratio is 1:3; and when the difference is at or above c the ratio is 1:4.
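A minimal sketch of this selection rule (a, b, c are calibration choices; the behavior for a difference below a is not pinned down by the text and is treated here, as an assumption, as equal exposure):

```python
def exposure_ratio(ambient, threshold, a, b, c):
    """Pick the panchromatic:color exposure-time ratio (first:second) from
    the ambient brightness, with brightness-difference ranges
    [a, b) -> 1:2, [b, c) -> 1:3, [c, inf) -> 1:4."""
    if ambient <= threshold:
        return (1, 1)
    diff = ambient - threshold
    if diff < a:
        return (1, 1)   # assumed: below the first range, keep equal times
    if diff < b:
        return (1, 2)
    if diff < c:
        return (1, 3)
    return (1, 4)
```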
Referring to FIGS. 1 and 21, the present application further provides a control method. The control method of the embodiments of the present application can be used with the image sensor 10 of any of the above embodiments, and includes:
01: controlling the at least two sub-pixels 102 to be exposed so as to output at least two pieces of sub-pixel information;
02: determining phase information from the at least two pieces of sub-pixel information so as to perform focusing; and
03: in the in-focus state, controlling the two-dimensional pixel array 11 to be exposed so as to acquire a target image.
Referring to FIGS. 1 and 22, the control method of the embodiments of the present application can be carried out by the camera assembly 40 of the embodiments of the present application. The camera assembly 40 includes a lens 30, the image sensor 10 of any of the above embodiments, and a processing chip 20. The image sensor 10 can receive light incident through the lens 30 and generate electrical signals, and is electrically connected to the processing chip 20. The processing chip 20 may be packaged together with the image sensor 10 and the lens 30 inside the housing of the camera assembly 40; alternatively, the image sensor 10 and lens 30 are packaged in the housing with the processing chip 20 outside it. Steps 01, 02, and 03 can all be implemented by the processing chip 20. That is, the processing chip 20 can be configured to: control the at least two sub-pixels 102 to be exposed to output at least two pieces of sub-pixel information; determine phase information from them to perform focusing; and, in the in-focus state, control the two-dimensional pixel array 11 to be exposed to acquire a target image.
In the control method and camera assembly 40 of the embodiments of the present application, the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels. Compared with an ordinary color sensor, this increases the amount of light admitted, yields a better signal-to-noise ratio and better low-light focusing, and raises the sensitivity of the sub-pixels 102. In addition, each pixel 101 includes at least two sub-pixels 102, enabling phase-detection autofocus while increasing the resolution of the image sensor 10.
Moreover, the control method and camera assembly 40 of the embodiments of the present application require no shielding design for the pixels 101 of the image sensor 10: all pixels 101 can be used for imaging and no dead-pixel compensation is needed, which helps improve the quality of the target image acquired by the camera assembly 40.
In addition, every pixel 101 containing at least two sub-pixels 102 can be used for phase focusing, so the phase-focusing accuracy is higher.
Specifically, the phase information may be a phase difference. Determining phase information from the at least two pieces of sub-pixel information for focusing includes: (1) computing the phase difference from the panchromatic sub-pixel information only; (2) computing the phase difference from the color sub-pixel information only; or (3) computing the phase difference from the panchromatic and color sub-pixel information together.
The control method and camera assembly 40 of the embodiments of the present application use an image sensor 10 containing both panchromatic and color pixels for phase focusing. In dim environments (for example, brightness at or below a first preset brightness) the more sensitive panchromatic pixels are used for phase focusing; in bright environments (for example, brightness at or above a second preset brightness) the less sensitive color pixels are used; and at intermediate brightness (greater than the first preset brightness and less than the second) at least one of the two kinds is used. This avoids the inaccurate focusing that would result from using color pixels in dim light, where the color sub-pixel information output by the sub-pixels 102 has too low a signal-to-noise ratio, and likewise avoids the inaccurate focusing that would result from using panchromatic pixels in bright light, where their sub-pixels 102 over-saturate. Phase focusing is therefore accurate across many kinds of application scenes and adapts to them well.
Referring to FIGS. 1 and 11, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels, and the panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixel at a first position of the lens 170 and the panchromatic sub-pixel at a second position of the lens 170. One piece of first panchromatic sub-pixel information and the corresponding piece of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair. Computing the phase difference from the panchromatic sub-pixel information for focusing then includes: forming a first curve from the first panchromatic sub-pixel information of multiple pairs; forming a second curve from the second panchromatic sub-pixel information of those pairs; and computing the phase difference from the first and second curves so as to focus.
Specifically, referring to FIG. 23, in one example the first position P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second position P2 that of the right half. It should be noted that the positions P1 and P2 shown in FIG. 23 follow from the sub-pixel 102 distribution in that figure; for other sub-pixel distributions, P1 and P2 change accordingly. In each panchromatic pixel W of the pixel array 11 of FIG. 23, one sub-pixel 102 (a panchromatic sub-pixel W) lies at position P1 of the lens 170 and the other at position P2. The first panchromatic sub-pixel information is output by the panchromatic sub-pixel W at P1 and the second by the one at P2. For example, panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, and so on lie at P1, while W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, and so on lie at P2. The two panchromatic sub-pixels W of a panchromatic pixel form a panchromatic sub-pixel pair, and correspondingly their information forms a panchromatic sub-pixel information pair: the information of W11,P1 pairs with that of W11,P2, that of W13,P1 with W13,P2, that of W15,P1 with W15,P2, that of W17,P1 with W17,P2, and so on.
After acquiring multiple panchromatic sub-pixel information pairs, the processing chip 20 forms a first curve from the first panchromatic sub-pixel information of the pairs and a second curve from the second panchromatic sub-pixel information, and then computes the phase difference from the two curves. For example, the multiple pieces of first panchromatic sub-pixel information trace one histogram curve (the first curve), and the multiple pieces of second panchromatic sub-pixel information trace another (the second curve); the processing chip 20 can compute the phase difference between the two curves from the positions of their peaks. From the phase difference and pre-calibrated parameters, the processing chip 20 determines the distance the lens 30 must move, and then drives the lens 30 by that distance to bring it into the in-focus state.
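A minimal sketch of this step: gather the first/second sub-pixel values into two one-dimensional profiles (the "curves") and estimate the phase difference from the offset that best aligns them. Peak comparison is replaced here by zero-mean cross-correlation, a standard and more robust substitute; all names are illustrative.

```python
import numpy as np

def phase_difference(first_curve, second_curve, max_shift=16):
    """Signed offset (in samples) between the two curves, found by maximizing
    their zero-mean cross-correlation over candidate shifts."""
    a = np.asarray(first_curve, dtype=float);  a -= a.mean()
    b = np.asarray(second_curve, dtype=float); b -= b.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            score = np.dot(a[s:], b[:len(b) - s])
        else:
            score = np.dot(a[:s], b[-s:])
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# The lens travel then follows from the phase difference and pre-calibrated
# parameters, e.g. travel = gain * phase_difference(...), with gain assumed.
```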
Referring to FIGS. 1 and 11, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels, and the panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixels at the first and second positions of the lens 170. Here, multiple pieces of first panchromatic sub-pixel information and the corresponding multiple pieces of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair. Computing the phase difference from the panchromatic sub-pixel information for focusing then includes: computing third panchromatic sub-pixel information from the multiple pieces of first panchromatic sub-pixel information of each pair; computing fourth panchromatic sub-pixel information from the multiple pieces of second panchromatic sub-pixel information of each pair; forming a first curve from the multiple pieces of third panchromatic sub-pixel information; forming a second curve from the multiple pieces of fourth panchromatic sub-pixel information; and computing the phase difference from the first and second curves so as to focus.
Specifically, referring again to FIG. 23, in one example the first position P1 of each lens 170 corresponds to its left half and the second position P2 to its right half (for other sub-pixel distributions, P1 and P2 change accordingly). In each panchromatic pixel of the pixel array 11 of FIG. 23, one sub-pixel 102 (a panchromatic sub-pixel W) lies at P1 and the other at P2; the first panchromatic sub-pixel information is output by the sub-pixel at P1 and the second by the one at P2. For example, panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, and so on lie at P1, while W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, and so on lie at P2. Multiple panchromatic sub-pixels W at P1 and multiple at P2 form one panchromatic sub-pixel pair, and correspondingly their information forms one panchromatic sub-pixel information pair. For example, the multiple pieces of first panchromatic sub-pixel information of a subunit pair with the multiple pieces of second panchromatic sub-pixel information of that subunit: the information of W11,P1 and W22,P1 pairs with that of W11,P2 and W22,P2; that of W13,P1 and W24,P1 with W13,P2 and W24,P2; that of W15,P1 and W26,P1 with W15,P2 and W26,P2; that of W17,P1 and W28,P1 with W17,P2 and W28,P2; and so on. As another example, the first panchromatic sub-pixel information of a minimal repeating unit pairs with the second panchromatic sub-pixel information of that unit: the information of W11,P1, W13,P1, W22,P1, W24,P1, W31,P1, W33,P1, W42,P1, W44,P1 pairs with that of W11,P2, W13,P2, W22,P2, W24,P2, W31,P2, W33,P2, W42,P2, W44,P2; and so on.
After acquiring multiple panchromatic sub-pixel information pairs, the processing chip 20 computes third panchromatic sub-pixel information from the multiple first pieces of each pair and fourth panchromatic sub-pixel information from the multiple second pieces. For example, for the pair formed by the information of W11,P1 and W22,P1 with that of W11,P2 and W22,P2, the third value may be computed as LT1 = W11,P1 + W22,P1 and the fourth as RB1 = W11,P2 + W22,P2. For the pair formed over a minimal repeating unit, the third value may be LT1 = (W11,P1 + W13,P1 + W22,P1 + W24,P1 + W31,P1 + W33,P1 + W42,P1 + W44,P1)/8 and the fourth RB1 = (W11,P2 + W13,P2 + W22,P2 + W24,P2 + W31,P2 + W33,P2 + W42,P2 + W44,P2)/8. The remaining pairs are computed analogously and are not repeated here. In this way the processing chip 20 obtains multiple third and fourth panchromatic sub-pixel values; the third values trace one histogram curve (the first curve) and the fourth values another (the second curve). The processing chip 20 then computes the phase difference from the two curves, determines from it and pre-calibrated parameters the distance the lens 30 must move, and drives the lens 30 by that distance to reach the in-focus state.
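A sketch of this binned variant: combine the first sub-pixel values within each subunit into the third (LT) values and the second sub-pixel values into the fourth (RB) values, then form the curves from those. Averaging per block follows the example above; the square-block indexing is an assumption for illustration.

```python
import numpy as np

def binned_curves(first_vals, second_vals, bin_size=2):
    """first_vals / second_vals: 2-D arrays of the first / second panchromatic
    sub-pixel information, one entry per panchromatic pixel. Averages each
    bin_size x bin_size block into the third (LT) and fourth (RB) values and
    flattens them into two curves suitable for phase_difference(...)."""
    h, w = first_vals.shape
    h, w = h - h % bin_size, w - w % bin_size
    def bin2d(x):
        x = np.asarray(x, dtype=float)[:h, :w]
        x = x.reshape(h // bin_size, bin_size, w // bin_size, bin_size)
        return x.mean(axis=(1, 3))      # average within each subunit
    lt = bin2d(first_vals)              # third panchromatic sub-pixel values
    rb = bin2d(second_vals)             # fourth panchromatic sub-pixel values
    return lt.ravel(), rb.ravel()
```

Binning trades spatial density of phase samples for a higher signal-to-noise ratio in each sample, which is consistent with the low-light motivation above.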
Referring to FIGS. 1 and 11, in some embodiments the color pixel includes two color sub-pixels, and the color sub-pixel information includes first and second color sub-pixel information, output respectively by the color sub-pixels at the first and second positions of the lens 170. One piece of first color sub-pixel information and the corresponding piece of second color sub-pixel information form one color sub-pixel information pair. Computing the phase difference from the color sub-pixel information for focusing includes: forming a third curve from the first color sub-pixel information of multiple pairs; forming a fourth curve from the second color sub-pixel information of those pairs; and computing the phase difference from the third and fourth curves so as to focus. This proceeds like the panchromatic-only procedure described above and is not repeated here.
Referring to FIGS. 1 and 11, in some embodiments the color pixel includes two color sub-pixels, and the color sub-pixel information includes first and second color sub-pixel information, output respectively by the color sub-pixels at the first and second positions of the lens 170. Here, multiple pieces of first color sub-pixel information and the corresponding multiple pieces of second color sub-pixel information form one color sub-pixel information pair. Computing the phase difference from the color sub-pixel information includes: computing third color sub-pixel information from the multiple first pieces of each pair; computing fourth color sub-pixel information from the multiple second pieces of each pair; forming a third curve from the multiple third pieces; forming a fourth curve from the multiple fourth pieces; and computing the phase difference from the third and fourth curves so as to focus. This proceeds like the panchromatic-only procedure described above and is not repeated here.
Referring to FIGS. 1 and 11, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels and the color pixel includes two color sub-pixels. The panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, and the color sub-pixel information includes first and second color sub-pixel information, output respectively by the panchromatic sub-pixel at the first position of the lens 170, the panchromatic sub-pixel at the second position, the color sub-pixel at the first position, and the color sub-pixel at the second position. One piece of first panchromatic sub-pixel information and the corresponding piece of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and one piece of first color sub-pixel information and the corresponding piece of second color sub-pixel information form one color sub-pixel information pair. Computing the phase difference from the panchromatic and color sub-pixel information together includes: forming a first curve from the first panchromatic sub-pixel information of multiple pairs; forming a second curve from the second panchromatic sub-pixel information; forming a third curve from the first color sub-pixel information of multiple pairs; forming a fourth curve from the second color sub-pixel information; and computing the phase difference from the first, second, third, and fourth curves so as to focus. This proceeds like the separate panchromatic and color procedures described above and is not repeated here.
Referring to FIGS. 1 and 11, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels and the color pixel includes two color sub-pixels, with the first/second panchromatic and color sub-pixel information output as above. Here, multiple pieces of first panchromatic sub-pixel information and the corresponding multiple pieces of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and multiple pieces of first color sub-pixel information and the corresponding multiple pieces of second color sub-pixel information form one color sub-pixel information pair. Computing the phase difference from the panchromatic and color sub-pixel information together includes: computing third panchromatic sub-pixel information from the multiple first panchromatic pieces of each pair; computing fourth panchromatic sub-pixel information from the multiple second panchromatic pieces; computing third color sub-pixel information from the multiple first color pieces of each pair; computing fourth color sub-pixel information from the multiple second color pieces; forming a first curve from the third panchromatic values, a second curve from the fourth panchromatic values, a third curve from the third color values, and a fourth curve from the fourth color values; and computing the phase difference from the first, second, third, and fourth curves so as to focus. This proceeds like the separate panchromatic and color procedures described above and is not repeated here.
Referring to FIGS. 1 and 24, in some embodiments the control method further includes:
04: obtaining the ambient brightness;
and step 03 (in the in-focus state, controlling the two-dimensional pixel array 11 to be exposed so as to acquire the target image) includes:
031: when the ambient brightness is greater than a first predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels 102 of each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information separately, and controlling the at least two sub-pixels 102 of each color pixel to be exposed so as to output color sub-pixel information separately;
032: generating the target image from the panchromatic sub-pixel information and the color sub-pixel information.
Referring to FIGS. 1 and 22, in some embodiments steps 04, 031, and 032 can all be implemented by the processing chip 20. That is, the processing chip 20 can be configured to: obtain the ambient brightness; when it is greater than the first predetermined brightness, control, in the in-focus state, the at least two sub-pixels 102 of each panchromatic pixel to be exposed to output panchromatic sub-pixel information separately and the at least two sub-pixels 102 of each color pixel to be exposed to output color sub-pixel information separately; and generate the target image from the panchromatic sub-pixel information and the color sub-pixel information.
Specifically, referring to FIG. 25 and taking the case where each panchromatic pixel includes two panchromatic sub-pixels and each color pixel includes two color sub-pixels as an example, the panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixels at positions P1 and P2 of the lens 170 (P1 and P2 are not marked in FIG. 25; see the division of lens positions in FIG. 23). In one example, P1 of each lens 170 corresponds to its left half and P2 to its right half. In each panchromatic pixel W, one sub-pixel 102 (a panchromatic sub-pixel W) lies at P1 and the other at P2; after exposure they output the first and second panchromatic sub-pixel information respectively. For example, panchromatic sub-pixels W11,P1 and W11,P2 output the first and second panchromatic sub-pixel information respectively when exposed, as do W22,P1 and W22,P2, and so on.
Likewise, the color sub-pixel information includes first and second color sub-pixel information, output respectively by the color sub-pixels at positions P1 and P2 of the lens 170 (not marked in FIG. 25; see FIG. 23). In one example, P1 corresponds to the left half of each lens 170 and P2 to the right half. In each color pixel, one sub-pixel 102 (color sub-pixel A, B, or C) lies at P1 and the other at P2; after exposure they output the first and second color sub-pixel information respectively. For example, color sub-pixels A12,P1 and A12,P2 output the first and second color sub-pixel information respectively when exposed, as do B14,P1 and B14,P2, and C34,P1 and C34,P2, and so on.
In the embodiments of the present application, when the ambient brightness exceeds the first predetermined brightness (that is, when light is plentiful), every panchromatic pixel and every color pixel is split into multiple sub-pixels that each output sub-pixel information, which raises the resolution of the target image. In FIG. 25, for example, each panchromatic and color pixel splits into two sub-pixels that output separately, so the horizontal resolution of the output target image is doubled. (If each panchromatic and color pixel were split into four sub-pixels in a 2*2 matrix that output separately, both the horizontal and the vertical resolution would double, and phase-focusing performance would be better still.)
Referring to FIGS. 1 and 24, in some embodiments the control method further includes: 04: obtaining the ambient brightness;
and step 03 (in the in-focus state, controlling the two-dimensional pixel array 11 to be exposed so as to acquire the target image) includes:
033: when the ambient brightness is below a second predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information separately, and controlling the at least two sub-pixels of each color pixel to be exposed so as to output merged color pixel information;
034: generating the target image from the panchromatic sub-pixel information and the merged color pixel information.
Referring to FIGS. 1 and 22, in some embodiments steps 04, 033, and 034 can all be implemented by the processing chip 20. That is, the processing chip 20 can be configured to: obtain the ambient brightness; when it is below the second predetermined brightness, control, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to output panchromatic sub-pixel information separately and the at least two sub-pixels of each color pixel to output merged color pixel information; and generate the target image from the two.
Here the second predetermined brightness is less than or equal to the first predetermined brightness.
Specifically, referring to FIG. 26 and again taking two panchromatic sub-pixels per panchromatic pixel and two color sub-pixels per color pixel as an example, the panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixels at positions P1 and P2 of the lens 170 (not marked in FIG. 26; see the division of lens positions in FIG. 23). In one example, P1 of each lens 170 corresponds to its left half and P2 to its right half. In each panchromatic pixel W, one sub-pixel 102 lies at P1 and the other at P2; after exposure they output the first and second panchromatic sub-pixel information respectively. For example, W11,P1 and W11,P2 do so when exposed, as do W22,P1 and W22,P2, and so on.
The merged color pixel information is output jointly by the color sub-pixel at position P1 of the lens 170 and the one at P2 (not marked in FIG. 26; see FIG. 23). In one example, P1 corresponds to the left half of each lens 170 and P2 to the right half. In each color pixel, one sub-pixel 102 (color sub-pixel A, B, or C) lies at P1 and the other at P2; after exposure the two jointly output merged color pixel information. For example, color sub-pixels A12,P1 and A12,P2 jointly output merged color pixel information when exposed, as do B14,P1 and B14,P2, and C34,P1 and C34,P2, and so on.
In dim light the sensitivity of the panchromatic pixels is far higher than that of the color pixels; splitting each color pixel into sub-pixels that output separately would severely degrade the signal-to-noise ratio of the image. Therefore, in the embodiments of the present application, when the ambient brightness is below the second predetermined brightness (that is, in dim light), each panchromatic pixel is split into sub-pixels that output sub-pixel information separately while the sub-pixels of each color pixel output merged pixel information, which still raises the resolution of the target image to a degree. In FIG. 26, for example, each panchromatic pixel splits into two sub-pixels that output separately, so the horizontal resolution of the output target image becomes 1.5 times the original.
Referring to FIGS. 1 and 24, in some embodiments the control method further includes: 04: obtaining the ambient brightness;
and step 03 (in the in-focus state, controlling the two-dimensional pixel array 11 to be exposed so as to acquire the target image) includes:
035: when the ambient brightness is below a third predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to be exposed so as to output merged panchromatic pixel information, and controlling the at least two sub-pixels of each color pixel to be exposed so as to output merged color pixel information;
036: generating the target image from the merged panchromatic pixel information and the merged color pixel information.
Referring to FIGS. 1 and 22, in some embodiments steps 04, 035, and 036 can all be implemented by the processing chip 20. That is, the processing chip 20 can be configured to: obtain the ambient brightness; when it is below the third predetermined brightness, control, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to output merged panchromatic pixel information and the at least two sub-pixels of each color pixel to output merged color pixel information; and generate the target image from the two.
Here the third predetermined brightness is less than the second predetermined brightness.
Specifically, referring to FIG. 27 and again taking two panchromatic sub-pixels per panchromatic pixel and two color sub-pixels per color pixel as an example, the merged panchromatic pixel information is output jointly by the panchromatic sub-pixel at position P1 of the lens 170 and the one at P2 (not marked in FIG. 27; see the division of lens positions in FIG. 23). In one example, P1 of each lens 170 corresponds to its left half and P2 to its right half. In each panchromatic pixel W, one sub-pixel 102 lies at P1 and the other at P2; after exposure the two jointly output merged panchromatic pixel information. For example, W11,P1 and W11,P2 jointly output merged panchromatic pixel information when exposed, as do W22,P1 and W22,P2, and so on.
The merged color pixel information is output jointly by the color sub-pixel at position P1 of the lens 170 and the one at P2 (not marked in FIG. 27; see FIG. 23). In each color pixel, one sub-pixel 102 (color sub-pixel A, B, or C) lies at P1 and the other at P2; after exposure the two jointly output merged color pixel information. For example, A12,P1 and A12,P2 jointly output merged color pixel information when exposed, as do B14,P1 and B14,P2, and C34,P1 and C34,P2, and so on.
In extremely dim light the signals of both the panchromatic and the color pixels are weak, and pursuing image resolution is of little value. Therefore, in the embodiments of the present application, when the ambient brightness is below the third predetermined brightness (that is, in extremely dim light), the sub-pixels of every panchromatic pixel and of every color pixel output merged pixel information, so as to increase the signal level and the signal-to-noise ratio.
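The three brightness-dependent read-out modes (steps 031, 033, and 035) can be summarized in one dispatcher. The threshold names and return encoding are illustrative assumptions:

```python
def choose_readout_mode(ambient, first_thr, second_thr, third_thr):
    """Return (panchromatic_mode, color_mode): 'split' means each sub-pixel
    outputs its own information, 'merge' means a pixel's sub-pixels are
    combined. Assumes third_thr < second_thr <= first_thr."""
    if ambient > first_thr:
        return ("split", "split")   # step 031: plentiful light
    if ambient < third_thr:
        return ("merge", "merge")   # step 035: extremely dim light
    return ("split", "merge")       # step 033: dim light (the band between
                                    # second_thr and first_thr is not pinned
                                    # down by the text; treated as this mode)
```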
After the two-dimensional pixel array 11 is exposed and outputs an original image containing panchromatic sub-pixel information and color sub-pixel information (as shown in FIG. 25), an original image containing panchromatic sub-pixel information and merged color pixel information (as shown in FIG. 26), or an original image containing merged panchromatic pixel information and merged color pixel information (as shown in FIG. 27), the processing chip 20 can further apply a corresponding demosaicing algorithm (for example, bilinear interpolation) to each of these three original images of different resolution, so as to complete the pixel information or sub-pixel information of every channel (for example, the red, green, and blue channels), preserve complete color rendition, and finally obtain a color target image. In one example, in producing the target image from the original image, the processing chip 20 may further apply any one or more of black-level correction, lens-shading correction, dead-pixel compensation, color correction, global tone mapping, and color-space conversion, for a better image result.
Referring to FIGS. 1, 22 and 28, the present application further provides a mobile terminal 90. The mobile terminal 90 of the embodiments of the present application may be a mobile phone, a tablet computer, a laptop computer, a smart wearable device (such as a smart watch, smart band, smart glasses, or smart helmet), a head-mounted display device, a virtual-reality device, and so on, without limitation. The mobile terminal 90 includes the camera assembly 40 of any of the above embodiments, a processor 60, a memory 70, and a housing 80; the camera assembly 40, processor 60, and memory 70 are all mounted on the housing 80. The image sensor 10 of the camera assembly 40 is connected to the processor 60, which can perform the same functions as the processing chip 20 in the camera assembly 40; in other words, the processor 60 can realize whatever functions the processing chip 20 of any of the above embodiments realizes. The memory 70 is connected to the processor 60 and can store data produced by the processor 60, such as the target image. The processor 60 may be mounted on the same substrate as the image sensor 10, in which case the image sensor 10 and processor 60 can be regarded as one camera assembly 40; alternatively, they may be mounted on different substrates.
In the mobile terminal 90 of the embodiments of the present application, the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels. Compared with an ordinary color sensor, this increases the amount of light admitted, yields a better signal-to-noise ratio and better low-light focusing, and raises the sensitivity of the sub-pixels 102. In addition, each pixel 101 includes at least two sub-pixels 102, enabling phase-detection autofocus while increasing the resolution of the image sensor 10.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, such illustrative expressions do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, provided they do not contradict one another, those skilled in the art may join and combine the different embodiments or examples described in this specification, and the features thereof.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application pertain.
Although embodiments of the present application have been shown and described above, it is to be understood that they are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (20)

  1. An image sensor, comprising:
    a two-dimensional pixel array comprising a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels; the two-dimensional pixel array comprising a minimal repeating unit in which the panchromatic pixels are arranged along a first diagonal direction and the color pixels are arranged along a second diagonal direction, the first diagonal direction being different from the second diagonal direction; wherein each pixel comprises at least two sub-pixels; and
    a lens array comprising a plurality of lenses, each of the lenses covering one of the pixels.
  2. The image sensor according to claim 1, wherein each pixel comprises two of the sub-pixels, and the boundary between the two sub-pixels is parallel to the length direction of the two-dimensional pixel array; or
    the boundary between the two sub-pixels is parallel to the width direction of the two-dimensional pixel array.
  3. The image sensor according to claim 1, wherein each pixel comprises two of the sub-pixels, and the boundary between the two sub-pixels is inclined relative to the length direction or the width direction of the two-dimensional pixel array.
  4. The image sensor according to claim 1, wherein each pixel comprises two of the sub-pixels; in some of the pixels, the boundary between the two sub-pixels is parallel to the length direction of the two-dimensional pixel array, and in some of the pixels, the boundary between the two sub-pixels is parallel to the width direction of the two-dimensional pixel array.
  5. The image sensor according to claim 4, wherein the pixels whose sub-pixel boundary is parallel to the length direction of the two-dimensional pixel array and the pixels whose sub-pixel boundary is parallel to the width direction of the two-dimensional pixel array are distributed in alternate rows or alternate columns.
  6. The image sensor according to claim 4, wherein the pixels whose sub-pixel boundary is parallel to the length direction of the two-dimensional pixel array and the pixels whose sub-pixel boundary is parallel to the width direction of the two-dimensional pixel array are distributed in a staggered manner.
  7. The image sensor according to claim 1, wherein each pixel comprises four of the sub-pixels, the four sub-pixels being arranged in a 2*2 matrix.
  8. The image sensor according to claim 1, further comprising a filter array, the filter array comprising a plurality of filters, each of the filters covering one of the pixels.
  9. The image sensor according to claim 1, wherein each pixel comprises two of the sub-pixels, one of which comprises a first photoelectric conversion element and the other a second photoelectric conversion element; each pixel further comprises a first exposure control circuit connected to the first photoelectric conversion element and a second exposure control circuit connected to the second photoelectric conversion element, the first exposure control circuit being configured to transfer the charge generated by the first photoelectric conversion element upon receiving light so as to output pixel information, and the second exposure control circuit being configured to transfer the charge generated by the second photoelectric conversion element upon receiving light so as to output pixel information.
  10. The image sensor according to claim 9, wherein each pixel further comprises a reset circuit connected to both the first exposure control circuit and the second exposure control circuit;
    when the two sub-pixels are exposed to output pixel information separately, or to output pixel information separately together with phase information, the first exposure control circuit first transfers the charge generated by the first photoelectric conversion element upon receiving light so as to output first sub-pixel information, and after the reset circuit resets, the second exposure control circuit transfers the charge generated by the second photoelectric conversion element upon receiving light so as to output second sub-pixel information;
    when the two sub-pixels are exposed to output merged pixel information, the second exposure control circuit is able to transfer the charge generated by the second photoelectric conversion element upon receiving light at the same time as the first exposure control circuit transfers the charge generated by the first photoelectric conversion element upon receiving light, so as to output merged pixel information;
    when the two sub-pixels are exposed to output merged pixel information together with phase information, the first exposure control circuit first transfers the charge generated by the first photoelectric conversion element upon receiving light so as to output first sub-pixel information, and after the reset circuit resets, the second exposure control circuit transfers the charge generated by the second photoelectric conversion element upon receiving light so as to output second sub-pixel information, the first sub-pixel information and the second sub-pixel information being merged into merged pixel information.
  11. The image sensor according to claim 10, wherein, when the two sub-pixels are exposed to output merged pixel information together with phase information, the image sensor further comprises a buffer configured to store the first sub-pixel information and the second sub-pixel information so as to output phase information.
  12. The image sensor according to claim 1, further comprising:
    a first exposure control line configured to transmit a first exposure signal to control a first exposure time of the panchromatic pixels; and
    a second exposure control line configured to transmit a second exposure signal to control a second exposure time of the color pixels; wherein the first exposure time is shorter than the second exposure time, or the first exposure time is equal to the second exposure time.
  13. A control method for an image sensor, wherein the image sensor comprises a two-dimensional pixel array and a lens array; the two-dimensional pixel array comprises a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels; the two-dimensional pixel array comprises a minimal repeating unit in which the panchromatic pixels are arranged along a first diagonal direction and the color pixels along a second diagonal direction, the first diagonal direction being different from the second diagonal direction; each pixel comprises at least two sub-pixels; the lens array comprises a plurality of lenses, each of the lenses covering one of the pixels; the control method comprising:
    controlling the at least two sub-pixels to be exposed so as to output at least two pieces of sub-pixel information;
    determining phase information from the at least two pieces of sub-pixel information so as to perform focusing; and
    in an in-focus state, controlling the two-dimensional pixel array to be exposed so as to acquire a target image.
  14. The control method according to claim 13, further comprising:
    obtaining an ambient brightness;
    wherein controlling the two-dimensional pixel array to be exposed in the in-focus state to acquire the target image comprises:
    when the ambient brightness is greater than a first predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information separately, and controlling the at least two sub-pixels of each color pixel to be exposed so as to output color sub-pixel information separately; and
    generating the target image from the panchromatic sub-pixel information and the color sub-pixel information.
  15. The control method according to claim 13, further comprising:
    obtaining an ambient brightness;
    wherein controlling the two-dimensional pixel array to be exposed in the in-focus state to acquire the target image comprises:
    when the ambient brightness is below a second predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information separately, and controlling the at least two sub-pixels of each color pixel to be exposed so as to output merged color pixel information; and
    generating the target image from the panchromatic sub-pixel information and the merged color pixel information.
  16. The control method according to claim 13, further comprising:
    obtaining an ambient brightness;
    wherein controlling the two-dimensional pixel array to be exposed in the in-focus state to acquire the target image comprises:
    when the ambient brightness is below a third predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels of each panchromatic pixel to be exposed so as to output merged panchromatic pixel information, and controlling the at least two sub-pixels of each color pixel to be exposed so as to output merged color pixel information; and
    generating the target image from the merged panchromatic pixel information and the merged color pixel information.
  17. A camera assembly, comprising:
    a lens; and
    the image sensor according to any one of claims 1-12, the image sensor being capable of receiving light passing through the lens.
  18. The camera assembly according to claim 17, further comprising a processing chip configured to implement the control method according to any one of claims 13-16.
  19. A mobile terminal, comprising:
    a housing; and
    the camera assembly according to claim 17 or 18, the camera assembly being mounted on the housing.
  20. A mobile terminal, comprising:
    a housing; and
    the camera assembly according to claim 17, the camera assembly being mounted on the housing;
    the mobile terminal further comprising a processor configured to implement the control method according to any one of claims 13-16.
PCT/CN2021/088404 2020-05-07 2021-04-20 Image sensor, control method, camera assembly and mobile terminal WO2021223590A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010377218.4A CN111586323A (zh) 2020-05-07 2020-05-07 Image sensor, control method, camera assembly and mobile terminal
CN202010377218.4 2020-05-07

Publications (1)

Publication Number Publication Date
WO2021223590A1 true WO2021223590A1 (zh) 2021-11-11

Family

ID=72126248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/088404 WO2021223590A1 (zh) 2020-05-07 2021-04-20 Image sensor, control method, camera assembly and mobile terminal

Country Status (2)

Country Link
CN (1) CN111586323A (zh)
WO (1) WO2021223590A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI841235B (zh) 2022-02-23 2024-05-01 美商豪威科技股份有限公司 Circuits and methods for reducing image artifacts in high-density, high-pixel-count image sensors with phase-detection autofocus

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586323A (zh) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal
WO2022073364A1 (zh) * 2020-10-09 2022-04-14 Oppo广东移动通信有限公司 Image acquisition method and apparatus, terminal, and computer-readable storage medium
CN114845015A (zh) * 2020-10-15 2022-08-02 Oppo广东移动通信有限公司 Image sensor, control method, imaging device, terminal, and readable storage medium
CN114697585B (zh) * 2020-12-31 2023-12-29 杭州海康威视数字技术股份有限公司 Image sensor, image processing system, and image processing method
WO2022141349A1 (zh) * 2020-12-31 2022-07-07 Oppo广东移动通信有限公司 Image processing pipeline, image processing method, camera assembly, and electronic device
CN113178457B (zh) * 2021-04-12 2022-11-11 维沃移动通信有限公司 Pixel structure and image sensor
CN113676708B (zh) * 2021-07-01 2023-11-14 Oppo广东移动通信有限公司 Image generation method and apparatus, electronic device, and computer-readable storage medium
CN113676675B (zh) * 2021-08-16 2023-08-15 Oppo广东移动通信有限公司 Image generation method and apparatus, electronic device, and computer-readable storage medium
CN113660425B (zh) * 2021-08-19 2023-08-22 维沃移动通信(杭州)有限公司 Image processing method and apparatus, electronic device, and readable storage medium
CN113891006A (zh) * 2021-11-22 2022-01-04 Oppo广东移动通信有限公司 Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
CN114554046A (zh) * 2021-12-01 2022-05-27 Oppo广东移动通信有限公司 Image sensor, camera module, electronic device, and image generation method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101233762A (zh) * 2005-07-28 2008-07-30 伊斯曼柯达公司 Image sensor with improved light sensitivity
CN103460702A (zh) * 2011-03-24 2013-12-18 富士胶片株式会社 Color imaging element, imaging device, and imaging program
CN107040724A (zh) * 2017-04-28 2017-08-11 广东欧珀移动通信有限公司 Dual-core focusing image sensor, focusing control method thereof, and imaging device
CN110996077A (zh) * 2019-11-25 2020-04-10 Oppo广东移动通信有限公司 Image sensor, camera assembly, and mobile terminal
CN111464733A (zh) * 2020-05-22 2020-07-28 Oppo广东移动通信有限公司 Control method, camera assembly, and mobile terminal
CN111586323A (zh) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6149369B2 (ja) * 2012-09-27 2017-06-21 株式会社ニコン Imaging element
CN104978920B (zh) * 2015-07-24 2018-10-16 京东方科技集团股份有限公司 Pixel array, display device, and display method thereof
CN107124536B (zh) * 2017-04-28 2020-05-08 Oppo广东移动通信有限公司 Dual-core focusing image sensor, focusing control method thereof, and imaging device
JP2019080141A (ja) * 2017-10-24 2019-05-23 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device and electronic apparatus
CN110649056B (zh) * 2019-09-30 2022-02-18 Oppo广东移动通信有限公司 Image sensor, camera assembly, and mobile terminal
CN110740272B (zh) * 2019-10-31 2021-05-14 Oppo广东移动通信有限公司 Image capture method, camera assembly, and mobile terminal
CN110913152B (zh) * 2019-11-25 2022-02-15 Oppo广东移动通信有限公司 Image sensor, camera assembly, and mobile terminal
CN111031297B (zh) * 2019-12-02 2021-08-13 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly, and mobile terminal



Also Published As

Publication number Publication date
CN111586323A (zh) 2020-08-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21800101; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21800101; Country of ref document: EP; Kind code of ref document: A1)