WO2021102832A1 - Image sensor, control method, camera assembly, and mobile terminal - Google Patents

Image sensor, control method, camera assembly, and mobile terminal

Info

Publication number
WO2021102832A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
color
pixel information
panchromatic
pixels
Prior art date
Application number
PCT/CN2019/121697
Other languages
English (en)
French (fr)
Inventor
徐锐
唐城
杨鑫
李小涛
王文涛
孙剑波
蓝和
张海裕
张弓
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2019/121697
Priority to CN201980100683.9A (patent CN114424517B)
Publication of WO2021102832A1
Priority to US17/747,907 (patent US11696041B2)


Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N23/60: Control of cameras or camera modules
              • H04N23/67: Focus control based on electronic image sensor signals
                • H04N23/672: Focus control based on the phase difference signals
            • H04N23/80: Camera processing pipelines; Components thereof
              • H04N23/84: Camera processing pipelines for processing colour signals
                • H04N23/843: Demosaicing, e.g. interpolating colour pixel values
          • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
            • H04N25/10: Circuitry for transforming different wavelengths into image signals
              • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
                • H04N25/13: Filter mosaics characterised by the spectral characteristics of the filter elements
                  • H04N25/133: Filter mosaics including elements passing panchromatic light, e.g. filters passing white light
            • H04N25/50: Control of the SSIS exposure
              • H04N25/57: Control of the dynamic range
                • H04N25/58: Control of the dynamic range involving two or more exposures
                  • H04N25/587: Exposures acquired sequentially, e.g. using the combination of odd and even image fields
                    • H04N25/589: Exposures with different integration times, e.g. short and long exposures
            • H04N25/70: SSIS architectures; Circuits associated therewith
              • H04N25/702: SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
              • H04N25/703: SSIS architectures incorporating pixels for producing signals other than image signals
                • H04N25/704: Pixels specially adapted for focusing, e.g. phase difference pixel sets

Definitions

  • This application relates to the field of imaging technology, and in particular to an image sensor, a control method, a camera assembly, and a mobile terminal.
  • In the related art, there are usually two ways to achieve phase focusing: (1) arrange multiple pairs of phase detection pixels in the pixel array to detect the phase difference, where each pair of phase detection pixels includes one pixel whose left half is blocked and one pixel whose right half is blocked; (2) each pixel includes two photodiodes, and the two photodiodes form a phase detection pair to detect the phase difference.
  • the embodiments of the present application provide an image sensor, a control method, a camera assembly, and a mobile terminal.
  • the image sensor of the embodiment of the present application includes a two-dimensional pixel array and a lens array.
  • The two-dimensional pixel array includes a plurality of pixels, at least some of which include two sub-pixels. A rectangular coordinate system is established by taking the center point of each pixel as the origin, the length direction of the two-dimensional pixel array as the X axis, and the width direction as the Y axis. The two sub-pixels are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • the control method of the embodiment of the present application is used for an image sensor.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • The two-dimensional pixel array includes a plurality of pixels, at least some of which include two sub-pixels. A rectangular coordinate system is established by taking the center point of each pixel as the origin, the length direction of the two-dimensional pixel array as the X axis, and the width direction as the Y axis. The two sub-pixels are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • The control method includes: exposing the sub-pixels to output sub-pixel information; calculating a phase difference according to the sub-pixel information to perform focusing; and, in the in-focus state, exposing a plurality of the pixels in the two-dimensional pixel array to obtain a target image.
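  • As a purely illustrative sketch (not part of the application), this control flow could be wired together in Python as follows. The sensor and lens objects and their methods read_subpixel_planes, read_full_pixels, and move are hypothetical stand-ins for hardware drivers, and the correlation search is one simple way to turn the two sub-pixel signals into a phase difference.

```python
import numpy as np

def phase_shift_1d(a, b, max_shift=8):
    """Integer shift of profile b relative to profile a that maximizes correlation."""
    shifts = range(-max_shift, max_shift + 1)
    scores = [float(np.dot(a[max_shift:-max_shift], np.roll(b, s)[max_shift:-max_shift]))
              for s in shifts]
    return list(shifts)[int(np.argmax(scores))]

def autofocus_and_capture(sensor, lens, gain=0.5, tol=1, max_iters=10):
    """Expose sub-pixels, compute a phase difference, drive the lens,
    then capture the target image once the in-focus state is reached."""
    for _ in range(max_iters):
        left, right = sensor.read_subpixel_planes()   # sub-pixel information from one exposure
        pd = phase_shift_1d(left.mean(axis=0), right.mean(axis=0))
        if abs(pd) <= tol:                            # in-focus state
            break
        lens.move(int(-gain * pd))                    # hypothetical phase-to-lens-step mapping
    return sensor.read_full_pixels()                  # expose the pixels for the target image
```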
  • the camera assembly of the embodiment of the present application includes a lens and an image sensor.
  • the image sensor can receive light passing through the lens.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • The two-dimensional pixel array includes a plurality of pixels, at least some of which include two sub-pixels. A rectangular coordinate system is established by taking the center point of each pixel as the origin, the length direction of the two-dimensional pixel array as the X axis, and the width direction as the Y axis. The two sub-pixels are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • the mobile terminal of the embodiment of the present application includes a casing and an image sensor.
  • the image sensor is installed in the casing.
  • the image sensor includes a two-dimensional pixel array and a lens array.
  • The two-dimensional pixel array includes a plurality of pixels, at least some of which include two sub-pixels. A rectangular coordinate system is established by taking the center point of each pixel as the origin, the length direction of the two-dimensional pixel array as the X axis, and the width direction as the Y axis. The two sub-pixels are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array includes a plurality of lenses, and each of the lenses covers one of the pixels.
  • FIG. 1 is a schematic diagram of an image sensor according to some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a pixel circuit according to some embodiments of the present application.
  • FIGS. 3 to 10 are schematic diagrams of the distribution of sub-pixels in some embodiments of the present application.
  • FIG. 11 is a schematic diagram of the exposure saturation time of different color channels.
  • FIGS. 12 to 21 are schematic diagrams of the pixel arrangement and lens coverage of the minimal repeating unit in some embodiments of the present application.
  • FIG. 22 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIG. 23 is a schematic diagram of a camera assembly according to some embodiments of the present application.
  • FIGS. 26 and 27 are schematic diagrams of the principle of the control method of some embodiments of the present application.
  • FIGS. 34 to 37 are schematic diagrams of the principle of the control method of some embodiments of the present application.
  • FIG. 38 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIGS. 39 to 41 are schematic diagrams of the principle of the control method of some embodiments of the present application.
  • FIG. 42 is a schematic diagram of a mobile terminal according to some embodiments of the present application.
  • the image sensor includes a two-dimensional pixel array 11 and a lens array 17.
  • The two-dimensional pixel array 11 includes a plurality of pixels 101, at least some of which include two sub-pixels 102. A rectangular coordinate system is established by taking the center point of each pixel 101 as the origin, the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD as the Y axis. The two sub-pixels 102 are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • the image sensor includes a two-dimensional pixel array 11 and a lens array 17.
  • The two-dimensional pixel array 11 includes a plurality of pixels 101, at least some of which include two sub-pixels 102. A rectangular coordinate system is established by taking the center point of each pixel 101 as the origin, the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD as the Y axis. The two sub-pixels 102 are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • The control method includes: exposing the sub-pixels 102 to output sub-pixel information; calculating the phase difference according to the sub-pixel information to perform focusing; and, in the in-focus state, exposing a plurality of pixels 101 in the two-dimensional pixel array 11 to obtain a target image.
  • the present application also provides a camera assembly 40.
  • the camera assembly 40 includes an image sensor 10 and a lens 30.
  • the image sensor 10 can receive light passing through the lens 30.
  • the image sensor includes a two-dimensional pixel array 11 and a lens array 17.
  • The two-dimensional pixel array 11 includes a plurality of pixels 101, at least some of which include two sub-pixels 102. A rectangular coordinate system is established by taking the center point of each pixel 101 as the origin, the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD as the Y axis. The two sub-pixels 102 are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • The lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • the mobile terminal 90 includes a casing 80 and an image sensor 10.
  • The image sensor 10 is installed in the casing 80.
  • the image sensor includes a two-dimensional pixel array 11 and a lens array 17.
  • The two-dimensional pixel array 11 includes a plurality of pixels 101, at least some of which include two sub-pixels 102. A rectangular coordinate system is established by taking the center point of each pixel 101 as the origin, the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD as the Y axis. The two sub-pixels 102 are distributed on both the positive and negative half axes of the X axis, and on both the positive and negative half axes of the Y axis.
  • The lens array 17 includes a plurality of lenses 170, and each lens 170 covers one pixel 101.
  • dual-core pixels can be used for phase focusing.
  • Each dual-core pixel contains two sub-pixels, and the two sub-pixels form a pair of phase detection pairs, and the phase difference can be calculated based on the signals output by the two sub-pixels after exposure.
  • the two sub-pixels in the dual-core pixel are usually symmetrically distributed on the left and right, or symmetrically distributed up and down.
  • A left-right symmetric phase detection pair mainly obtains phase information in the horizontal direction, and it is difficult for it to obtain phase information in the vertical direction. When such a pair is applied to a scene containing a large number of pure-color horizontal stripes, the two output signals are very close, so the phase difference calculated from the two signals has low accuracy, which in turn lowers the focusing accuracy.
  • Similarly, an up-down symmetric phase detection pair mainly obtains phase information in the vertical direction, and it is difficult for it to obtain phase information in the horizontal direction. When such a pair is applied to a scene containing a large number of pure-color vertical stripes, the two output signals are very close, so the phase difference calculated from the two signals has low accuracy, which in turn lowers the focusing accuracy.
  • the present application provides an image sensor 10 (shown in FIG. 1). At least some of the pixels 101 in the image sensor 10 of the embodiment of the present application include two sub-pixels 102.
  • The two sub-pixels 102 can obtain phase information in both the horizontal direction and the vertical direction, so that the image sensor 10 can be used in scenes containing a large number of pure-color horizontal stripes as well as in scenes containing a large number of pure-color vertical stripes. The image sensor 10 therefore has better scene adaptability and higher phase-focusing accuracy.
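  • To make this concrete, the sketch below (an illustrative assumption, not the application's algorithm) estimates a phase difference along both axes from the two sub-pixel planes: collapsing columns yields a row profile carrying horizontal phase information, and collapsing rows yields a column profile carrying vertical phase information. A left/right-split pair would only deliver the first of the two.

```python
import numpy as np

def phase_difference_xy(sub_a, sub_b, max_shift=8):
    """Estimate (horizontal, vertical) phase differences between the two
    sub-pixel planes of a diagonally split pixel array (2-D numpy arrays)."""
    def shift_1d(a, b):
        shifts = np.arange(-max_shift, max_shift + 1)
        scores = [float(np.dot(a[max_shift:-max_shift], np.roll(b, s)[max_shift:-max_shift]))
                  for s in shifts]
        return int(shifts[int(np.argmax(scores))])

    pd_x = shift_1d(sub_a.mean(axis=0), sub_b.mean(axis=0))  # horizontal phase information
    pd_y = shift_1d(sub_a.mean(axis=1), sub_b.mean(axis=1))  # vertical phase information
    return pd_x, pd_y
```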
  • FIG. 1 is a schematic diagram of an image sensor 10 and a schematic diagram of a pixel 101 according to an embodiment of the present application.
  • The image sensor 10 includes a two-dimensional pixel array 11, a filter array 16, and a lens array 17. Along the light-receiving direction of the image sensor 10, the lens array 17, the filter array 16, and the two-dimensional pixel array 11 are arranged in sequence.
  • the image sensor 10 may use a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge-coupled device (CCD, Charge-coupled Device) photosensitive element.
  • The two-dimensional pixel array 11 includes a plurality of pixels 101 arranged two-dimensionally in an array, at least some of which include two sub-pixels 102. A rectangular coordinate system is established with the center point of each pixel 101 as the origin, the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD of the two-dimensional pixel array 11 as the Y axis.
  • The two sub-pixels 102 are distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • For example, one sub-pixel 102 is simultaneously distributed in the first, second, and third quadrants of the rectangular coordinate system, and the other sub-pixel 102 is simultaneously distributed in the first, fourth, and third quadrants of the rectangular coordinate system.
  • In other embodiments, the length direction LD of the two-dimensional pixel array 11 can also be taken as the Y axis, and the width direction WD of the two-dimensional pixel array 11 as the X axis (not shown). In that case, one sub-pixel 102 is simultaneously distributed in the second, first, and fourth quadrants of the rectangular coordinate system, and the other sub-pixel 102 is simultaneously distributed in the second, third, and fourth quadrants of the rectangular coordinate system.
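  • A small numeric check (illustrative only) of the geometry just described: rasterize one pixel on a grid centered at its origin, split it along the diagonal, and verify that each triangular sub-pixel covers both half-axes of X and of Y, unlike a plain left/right split.

```python
import numpy as np

n = 8
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
sub_one = ys > xs   # one triangular sub-pixel (first, second, and third quadrants)
sub_two = ys < xs   # the other triangular sub-pixel (first, fourth, and third quadrants)

for name, mask in (("sub_one", sub_one), ("sub_two", sub_two)):
    spans_x = bool((xs[mask] > 0).any() and (xs[mask] < 0).any())
    spans_y = bool((ys[mask] > 0).any() and (ys[mask] < 0).any())
    print(name, "covers both X half-axes:", spans_x, "and both Y half-axes:", spans_y)
# Both lines print True/True; for a left/right split (xs < 0 vs xs > 0) the X
# check would fail for each half, which is why such a pair loses one axis.
```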
  • "At least some of the pixels 101 include two sub-pixels 102" covers two cases: (1) only some of the pixels 101 include two sub-pixels 102, and the remaining pixels 101 include only one sub-pixel 102; (2) all pixels 101 include two sub-pixels 102.
  • the filter array 16 includes a plurality of filters 160, and each filter 160 covers a corresponding pixel 101.
  • The spectral response of each pixel 101 (that is, the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101.
  • the lens array 17 includes a plurality of lenses 170, and each lens 170 covers a corresponding pixel 101.
  • FIG. 2 is a schematic diagram of a pixel circuit 110 in an embodiment of the present application.
  • the pixel circuit of each sub-pixel 102 may be the pixel circuit 110 shown in FIG. 2.
  • the pixel circuit of the one sub-pixel 102 may also be the pixel circuit 110 shown in FIG. 2.
  • the working principle of the pixel circuit 110 will be described below in conjunction with FIG. 1 and FIG. 2.
  • The pixel circuit 110 includes a photoelectric conversion element 117 (for example, a photodiode PD), an exposure control circuit 116 (for example, a transfer transistor 112), a reset circuit (for example, a reset transistor 113), an amplifier circuit (for example, an amplifying transistor 114), and a selection circuit (for example, a selection transistor 115).
  • the transfer transistor 112, the reset transistor 113, the amplifying transistor 114, and the selection transistor 115 are, for example, MOS transistors, but are not limited thereto.
  • The gate TG of the transfer transistor 112 is connected to the vertical driving unit (not shown in the figure) of the image sensor 10 through an exposure control line (not shown in the figure); the gate RG of the reset transistor 113 is connected to the vertical driving unit through a reset control line (not shown in the figure); and the gate SEL of the selection transistor 115 is connected to the vertical driving unit through a selection line (not shown in the figure).
  • The exposure control circuit 116 (for example, the transfer transistor 112) in each pixel circuit 110 is electrically connected to the photoelectric conversion element 117 and is used for transferring the charge accumulated by the photoelectric conversion element 117 after illumination.
  • the photoelectric conversion element 117 includes a photodiode PD, and the anode of the photodiode PD is connected to the ground, for example.
  • the photodiode PD converts the received light into electric charge.
  • the cathode of the photodiode PD is connected to the floating diffusion unit FD via the exposure control circuit 116 (for example, the transfer transistor 112).
  • the floating diffusion unit FD is connected to the gate of the amplifying transistor 114 and the source of the reset transistor 113.
  • the exposure control circuit 116 is the transfer transistor 112, and the control terminal TG of the exposure control circuit 116 is the gate of the transfer transistor 112.
  • When a pulse of an active level (for example, the VPIX level) is transmitted to the gate of the transfer transistor 112 through the exposure control line, the transfer transistor 112 is turned on and transfers the charge photoelectrically converted by the photodiode PD to the floating diffusion unit FD.
  • the drain of the reset transistor 113 is connected to the pixel power supply VPIX.
  • the source of the reset transistor 113 is connected to the floating diffusion unit FD.
  • When a pulse of an effective reset level is transmitted to the gate of the reset transistor 113 via the reset line, the reset transistor 113 is turned on.
  • the reset transistor 113 resets the floating diffusion unit FD to the pixel power supply VPIX.
  • the gate of the amplifying transistor 114 is connected to the floating diffusion unit FD.
  • the drain of the amplifying transistor 114 is connected to the pixel power supply VPIX.
  • After the floating diffusion unit FD is reset by the reset transistor 113, the amplifying transistor 114 outputs the reset level through the output terminal OUT via the selection transistor 115.
  • After the charge of the photodiode PD is transferred by the transfer transistor 112, the amplifying transistor 114 outputs a signal level through the output terminal OUT via the selection transistor 115.
  • the drain of the selection transistor 115 is connected to the source of the amplifying transistor 114.
  • the source of the selection transistor 115 is connected to the column processing unit (not shown in the figure) in the image sensor 10 through the output terminal OUT.
  • When a pulse of an active level is transmitted to the gate of the selection transistor 115 through the selection line, the selection transistor 115 is turned on.
  • the signal output by the amplifying transistor 114 is transmitted to the column processing unit through the selection transistor 115.
  • the pixel structure of the pixel circuit 110 in the embodiment of the present application is not limited to the structure shown in FIG. 2.
  • the pixel circuit 110 may have a three-transistor pixel structure, in which the functions of the amplifying transistor 114 and the selecting transistor 115 are performed by one transistor.
  • The exposure control circuit 116 is not limited to a single transfer transistor 112; other electronic devices or structures whose control terminal can control conduction can also be used as the exposure control circuit in the embodiments of the present application. The single-transfer-transistor implementation is simple, low in cost, and easy to control.
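  • The readout sequence described above can be summarized in a toy model (a behavioral sketch under simplifying assumptions, not a circuit simulation). The difference between the reset level and the signal level recovers the accumulated charge, which is the basis of correlated double sampling.

```python
class FourTPixel:
    """Toy model of the pixel circuit 110: PD, transfer (TG), reset (RG),
    and source-follower output via selection (SEL). Units are arbitrary."""
    def __init__(self, vpix=1.0):
        self.vpix = vpix        # pixel power supply VPIX
        self.pd_charge = 0.0    # charge accumulated on photodiode PD
        self.fd = 0.0           # floating diffusion FD level

    def expose(self, light, t):
        self.pd_charge += light * t          # PD converts received light into charge

    def reset(self):
        self.fd = self.vpix                  # RG pulse: reset FD to VPIX

    def transfer(self):
        self.fd -= self.pd_charge            # TG pulse: move PD charge onto FD
        self.pd_charge = 0.0

    def select(self):
        return self.fd                       # SEL pulse: amplifier drives OUT

px = FourTPixel()
px.expose(light=0.3, t=1.0)
px.reset();    reset_level  = px.select()    # output the reset level first
px.transfer(); signal_level = px.select()    # then output the signal level
print("recovered charge:", reset_level - signal_level)  # 0.3
```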
  • each sub-pixel 102 can obtain phase information in the horizontal direction and the vertical direction at the same time, which is beneficial to improve the accuracy of phase focusing.
  • FIG. 3 is a schematic diagram of the distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • One sub-pixel 102 in each pixel 101 is distributed in the first quadrant, the second quadrant, and the third quadrant at the same time, and the other sub-pixel 102 is distributed in the first quadrant, the fourth quadrant, and the third quadrant at the same time.
  • The cross-sectional shape of each sub-pixel 102 is a triangle, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 4 is a schematic diagram of another distribution of sub-pixels 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • In some pixels 101, one sub-pixel 102 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time; in the remaining pixels 101, one sub-pixel 102 is distributed in the second, first, and fourth quadrants at the same time, and the other sub-pixel 102 is distributed in the second, third, and fourth quadrants at the same time.
  • The cross-sectional shape of each sub-pixel 102 is a triangle, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 5 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • One sub-pixel 102 in each pixel 101 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time.
  • The cross-sectional shape of each sub-pixel 102 is a trapezoid. In the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid with a wide top and a narrow bottom, and the cross section of the other sub-pixel 102 is a trapezoid with a narrow top and a wide bottom, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 6 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • In some pixels 101, one sub-pixel 102 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time; in the remaining pixels 101, one sub-pixel 102 is distributed in the second, first, and fourth quadrants at the same time, and the other sub-pixel 102 is distributed in the second, third, and fourth quadrants at the same time.
  • The cross-sectional shape of each sub-pixel 102 is a trapezoid. In the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid with a wide top and a narrow bottom, and the cross section of the other sub-pixel 102 is a trapezoid with a narrow top and a wide bottom, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 7 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • One sub-pixel 102 in each pixel 101 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time.
  • The cross-sectional shape of each sub-pixel 102 is an "L" shape. In the same pixel 101, the cross section of one sub-pixel 102 is an inverted "L" shape and the cross section of the other sub-pixel 102 is a mirrored "L" shape, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 8 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • In some pixels 101, one sub-pixel 102 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time; in the remaining pixels 101, one sub-pixel 102 is distributed in the second, first, and fourth quadrants at the same time, and the other sub-pixel 102 is distributed in the second, third, and fourth quadrants at the same time.
  • The cross-sectional shape of each sub-pixel 102 is an "L" shape. In the same pixel 101, the cross section of one sub-pixel 102 is an inverted "L" shape and the cross section of the other sub-pixel 102 is a mirrored "L" shape, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • FIG. 9 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • In this embodiment, a part of the pixels 101 includes one sub-pixel 102, and the remaining pixels 101 include two sub-pixels 102.
  • the pixels 101 including one sub-pixel 102 and the pixels 101 including two sub-pixels 102 are alternately arranged in rows and columns.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • the sub-pixels 102 in the pixel 101 including one sub-pixel 102 are simultaneously distributed in the first quadrant, the second quadrant, the third quadrant, and the fourth quadrant.
  • In the pixels 101 including two sub-pixels 102, one sub-pixel 102 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time.
  • In some of the pixels 101 that include two sub-pixels 102, the cross-sectional shape of the two sub-pixels 102 is a triangle; in others, the cross-sectional shape of the two sub-pixels 102 is an "L" shape, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in the pixel 101 including the two sub-pixels 102 are all distributed symmetrically about the center of the pixel 101.
  • FIG. 10 is a schematic diagram of the distribution of another sub-pixel 102 according to an embodiment of the present application.
  • Each pixel 101 includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on the positive half axis of the X axis, the negative half axis of the X axis, the positive half axis of the Y axis, and the negative half axis of the Y axis of the corresponding rectangular coordinate system.
  • In some pixels 101, one sub-pixel 102 is distributed in the first, second, and third quadrants at the same time, and the other sub-pixel 102 is distributed in the first, fourth, and third quadrants at the same time; in the remaining pixels 101, one sub-pixel 102 is distributed in the second, first, and fourth quadrants at the same time, and the other sub-pixel 102 is distributed in the second, third, and fourth quadrants at the same time.
  • The cross-sectional shape of the sub-pixels 102 in some pixels 101 is a triangle, and the cross-sectional shape of the sub-pixels 102 in the other pixels 101 is a trapezoid, where the cross section refers to a section taken perpendicular to the light-receiving direction of the image sensor 10.
  • the two sub-pixels 102 in each pixel 101 are distributed symmetrically about the center of the pixel 101.
  • the cross-sectional shape of the sub-pixel 102 may also be other regular or irregular shapes, which are not limited herein.
  • In the same two-dimensional pixel array 11, sub-pixels 102 with trapezoidal cross sections may be combined with sub-pixels 102 with "L"-shaped cross sections, sub-pixels 102 with triangular cross sections may be combined with sub-pixels 102 with "L"-shaped cross sections, sub-pixels 102 with triangular cross sections may be combined with sub-pixels 102 with trapezoidal cross sections, and so on, which is not limited herein.
  • In addition to the alternating arrangement of pixels 101 containing only one sub-pixel 102 and pixels 101 containing two sub-pixels 102 shown in FIG. 9, the arrangement of the pixels 101 can also be as follows: the pixels 101 in some columns of the two-dimensional pixel array 11 contain only one sub-pixel 102 and the pixels 101 in the remaining columns contain two sub-pixels 102; or the pixels 101 in some rows contain only one sub-pixel 102 and the pixels 101 in the remaining rows contain two sub-pixels 102; and so on, which is not limited here.
  • Pixels of different colors receive different exposures per unit time; when some colors are saturated, other colors have not yet reached the ideal exposure. For example, exposing to 60%-90% of the saturation exposure may yield a relatively good signal-to-noise ratio and accuracy, but the embodiments of the present application are not limited thereto.
  • Taking RGBW (red, green, blue, panchromatic) as an example, in FIG. 11 the horizontal axis is the exposure time, the vertical axis is the exposure, Q is the saturation exposure, LW is the exposure curve of the panchromatic pixel W, LG is the exposure curve of the green pixel G, LR is the exposure curve of the red pixel R, and LB is the exposure curve of the blue pixel B.
  • the slope of the exposure curve LW of the panchromatic pixel W is the largest, that is, the panchromatic pixel W can obtain more exposure per unit time and reach saturation at t1.
  • The slope of the exposure curve LG of the green pixel G is the second largest, and the green pixel is saturated at time t2. The slope of the exposure curve LR of the red pixel R is the third largest, and the red pixel is saturated at time t3.
  • the slope of the exposure curve LB of the blue pixel B is the smallest, and the blue pixel is saturated at t4. It can be seen from FIG. 11 that the amount of exposure received by the panchromatic pixel W per unit time is greater than the amount of exposure received by the color pixel per unit time, that is, the sensitivity of the panchromatic pixel W is higher than that of the color pixel.
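  • Numerically, the curves in FIG. 11 amount to exposure growing linearly with time at a channel-dependent rate until it reaches Q. The relative sensitivities below are invented values chosen only to reproduce the stated ordering t1 < t2 < t3 < t4.

```python
Q = 1.0                                                   # saturation exposure
sensitivity = {"W": 1.0, "G": 0.5, "R": 0.35, "B": 0.25}  # slopes: LW > LG > LR > LB (assumed values)

saturation_time = {ch: Q / s for ch, s in sensitivity.items()}
for ch, t in sorted(saturation_time.items(), key=lambda kv: kv[1]):
    print(f"{ch} saturates at t = {t:.2f}")  # W at t1, then G (t2), R (t3), B (t4)
```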
  • If an image sensor that includes only color pixels is used to achieve phase focusing, then in a high-brightness environment the R, G, and B color pixels receive enough light to output pixel information with a high signal-to-noise ratio, so the phase-focusing accuracy is high; but in a low-brightness environment, the R, G, and B pixels receive less light, the signal-to-noise ratio of the output pixel information is low, and the phase-focusing accuracy is also low.
  • To address this, the image sensor 10 of the embodiments of the present application can arrange panchromatic pixels and color pixels in the two-dimensional pixel array 11 at the same time, where at least part of the panchromatic pixels include two sub-pixels 102 and at least part of the color pixels include two sub-pixels 102. In this way, the image sensor 10 can achieve accurate focusing in scenes containing a large number of pure-color horizontal or vertical stripes, and can also achieve accurate focusing under different ambient brightness, further improving the scene adaptability of the image sensor 10.
  • The spectral response of each pixel 101 (that is, the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101. Throughout this application, color pixels and panchromatic pixels refer to pixels 101 that can respond to light whose color is the same as that of the corresponding filter 160.
  • The plurality of pixels 101 in the two-dimensional pixel array 11 may simultaneously include a plurality of panchromatic pixels W and a plurality of color pixels (for example, a plurality of first color pixels A, a plurality of second color pixels B, and a plurality of third color pixels C). The color pixels and the panchromatic pixels are distinguished by the wavelength band of light that can pass through the filter 160 (shown in FIG. 1) covering them: a color pixel has a narrower spectral response than a panchromatic pixel W, and the response spectrum of a color pixel is, for example, a part of the response spectrum of a panchromatic pixel W. At least part (some or all) of the panchromatic pixels include two sub-pixels 102, and at least part (some or all) of the color pixels include two sub-pixels 102.
  • the two-dimensional pixel array 11 is composed of a plurality of minimum repeating units (FIGS. 12 to 21 show examples of the minimum repeating unit in various image sensors 10 ), and the minimum repeating units are duplicated and arranged in rows and columns. Each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color pixels and a plurality of full-color pixels.
  • For example, each minimal repeating unit includes four sub-units, where one sub-unit includes multiple single-color pixels A (that is, first color pixels A) and multiple panchromatic pixels W, two sub-units each include multiple single-color pixels B (that is, second color pixels B) and multiple panchromatic pixels W, and the remaining sub-unit includes multiple single-color pixels C (that is, third color pixels C) and multiple panchromatic pixels W.
  • the number of pixels 101 in the rows and columns of the smallest repeating unit is equal.
  • the minimum repeating unit includes, but is not limited to, a minimum repeating unit of 4 rows and 4 columns, 6 rows and 6 columns, 8 rows and 8 columns, and 10 rows and 10 columns.
  • the number of pixels 101 in the rows and columns of the subunit is equal.
  • the subunits include, but are not limited to, subunits with 2 rows and 2 columns, 3 rows and 3 columns, 4 rows and 4 columns, and 5 rows and 5 columns. This setting helps to balance the resolution and color performance of the image in the row and column directions, and improve the display effect.
  • In some embodiments, the panchromatic pixels W are arranged in a first diagonal direction D1, the color pixels are arranged in a second diagonal direction D2, and the first diagonal direction D1 is different from the second diagonal direction D2.
  • FIG. 12 is a schematic diagram of the arrangement of the pixels 101 of the smallest repeating unit and the coverage of the lens 170 in the embodiment of the present application; the smallest repeating unit is 4 rows, 4 columns and 16 pixels, and the subunits are 2 rows, 2 columns and 4 pixels.
  • the arrangement method is:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 12), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 12); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonal, but also include directions parallel to the diagonal.
  • the "direction” here is not a single direction, but can be understood as the concept of a “straight line” indicating the arrangement, and there can be two-way directions at both ends of the straight line.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
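  • The arrangement matrix itself is carried by the figure and is not reproduced in this text, so the generator below is an inferred example only: a 4x4 unit built from 2x2 sub-units ordered A, B / B, C, with W on the first diagonal of each sub-unit and the color pixel on the second, consistent with the description of FIG. 12.

```python
import numpy as np

def subunit(color):
    """One 2x2 sub-unit with panchromatic W on the main diagonal."""
    return np.array([["W", color], [color, "W"]])

unit = np.block([[subunit("A"), subunit("B")],
                 [subunit("B"), subunit("C")]])
print(unit)
# [['W' 'A' 'W' 'B']
#  ['A' 'W' 'B' 'W']
#  ['W' 'B' 'W' 'C']
#  ['B' 'W' 'C' 'W']]
# Tiling this minimal repeating unit in rows and columns yields the
# two-dimensional pixel array 11; one lens 170 then covers each entry.
```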
  • FIG. 13 is a schematic diagram of another arrangement of pixels 101 of the smallest repeating unit and coverage of lenses 170 in an embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 13), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 13).
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • each lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • FIG. 14 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another arrangement of pixels 101 of the smallest repeating unit and coverage of lenses 170 in an embodiment of the present application.
  • the first color pixel A is the red pixel R
  • the second color pixel B is the green pixel G
  • the third color pixel C is the blue pixel Bu.
  • the response band of the panchromatic pixel W is the visible light band (for example, 400 nm-760 nm).
  • the panchromatic pixel W is provided with an infrared filter to filter out infrared light.
  • the response wavelength band of the panchromatic pixel W is the visible light wavelength band and the near-infrared wavelength band (for example, 400 nm-1000 nm), which matches the response wavelength band of the photoelectric conversion element (for example, the photodiode PD) in the image sensor 10.
  • the panchromatic pixel W may not be provided with a filter, and the response band of the panchromatic pixel W is determined by the response band of the photodiode, that is, the two match.
  • the embodiments of the present application include, but are not limited to, the above-mentioned waveband range.
  • the first color pixel A may also be a red pixel R
  • the second color pixel B may also be a yellow pixel Y
  • the third color pixel C may also be the blue pixel Bu.
  • the first color pixel A may also be a magenta pixel M
  • the second color pixel B may also be a cyan pixel Cy
  • the third color pixel C may also be a yellow pixel Y.
  • FIG. 16 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the minimal repeating unit is 36 pixels 101 in 6 rows and 6 columns, and the sub-unit is 9 pixels 101 in 3 rows and 3 columns. The arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 16), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 16); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • each lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • FIG. 17 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the minimal repeating unit is 36 pixels 101 in 6 rows and 6 columns, and the sub-unit is 9 pixels 101 in 3 rows and 3 columns. The arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 17), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 17).
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • one lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • the first color pixel A in the minimum repeating unit shown in FIG. 16 and FIG. 17 may be a red pixel R
  • the second color pixel B may be a green pixel G
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A in the minimum repeating unit shown in FIG. 16 and FIG. 17 may be a red pixel R
  • the second color pixel B may be a yellow pixel Y
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A in the minimum repeating unit shown in FIGS. 16 and 17 may be a magenta pixel M
  • the second color pixel B may be a cyan pixel Cy
  • the third color pixel C may be a yellow pixel Y.
  • FIG. 18 is a schematic diagram of the arrangement of the pixels 101 of the smallest repeating unit and the coverage of the lens 170 in the embodiment of the present application.
  • the smallest repeating unit is 8 rows, 8 columns and 64 pixels 101, and the sub-unit is 4 rows, 4 columns and 16 pixels 101.
  • the arrangement is:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 18), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the lower left corner and the upper right corner in FIG. 18); the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • each lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • FIG. 19 is a schematic diagram of another minimal repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the smallest repeating unit is 8 rows, 8 columns and 64 pixels 101, and the sub-unit is 4 rows, 4 columns and 16 pixels 101.
  • the arrangement is:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • The panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 19), and the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 19). The first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • each lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • adjacent panchromatic pixels W are arranged diagonally, and adjacent color pixels are also arranged diagonally.
  • In other embodiments, adjacent panchromatic pixels are arranged in the horizontal direction and adjacent color pixels are also arranged in the horizontal direction; or adjacent panchromatic pixels are arranged in the vertical direction and adjacent color pixels are also arranged in the vertical direction.
  • the panchromatic pixels in adjacent subunits can be arranged in a horizontal direction or a vertical direction, and the color pixels in adjacent subunits can also be arranged in a horizontal direction or a vertical direction.
  • FIG. 20 is a schematic diagram of the arrangement of pixels 101 of the smallest repeating unit and the coverage of lenses 170 in the embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • In each sub-unit, adjacent panchromatic pixels W are arranged along the vertical direction, and adjacent color pixels are also arranged along the vertical direction.
  • One lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • FIG. 21 is a schematic diagram of another minimum repeating unit arrangement of pixels 101 and lens 170 coverage in an embodiment of the present application.
  • the minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, and the subunit is 2 rows, 2 columns and 4 pixels 101.
  • the arrangement is as follows:
  • W represents a full-color pixel
  • A represents a first color pixel among multiple color pixels
  • B represents a second color pixel among multiple color pixels
  • C represents a third color pixel among multiple color pixels.
  • In each sub-unit, adjacent panchromatic pixels W are arranged along the horizontal direction, and adjacent color pixels are also arranged along the horizontal direction.
  • One lens 170 covers one pixel 101.
  • Each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • Each sub-pixel 102 is distributed on both the positive and negative semi-axes of the X axis, and on both the positive and negative semi-axes of the Y axis.
  • the first color pixel A may be a red pixel R
  • the second color pixel B may be a green pixel G
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A may be a red pixel R
  • the second color pixel B may be a yellow pixel Y
  • the third color pixel C may be a blue pixel Bu.
  • the first color pixel A may be a magenta pixel M
  • the second color pixel B may be a cyan pixel Cy
  • the third color pixel C may be a yellow pixel Y.
  • each panchromatic pixel and each color pixel includes two sub-pixels 102.
  • all panchromatic pixels include two sub-pixels 102 and some color pixels include two sub-pixels 102; or some panchromatic pixels include two sub-pixels 102 and all color pixels include two sub-pixels 102.
  • In FIGS. 12 to 21, the shape of each sub-pixel 102 is an "L" shape. In other embodiments, the shape of each sub-pixel 102 may also be a trapezoid; or the shape of each sub-pixel 102 may be a triangle; or some sub-pixels 102 may be trapezoidal and some "L"-shaped; or some sub-pixels 102 may be triangular, some trapezoidal, and some "L"-shaped; and so on, which is not limited here.
  • The multiple panchromatic pixels and multiple color pixels in the two-dimensional pixel array 11, in any of the arrangements shown in FIGS. 12 to 21, can be controlled by different exposure control lines, so as to achieve independent control of the exposure time of the panchromatic pixels and the exposure time of the color pixels.
  • the control terminals of the exposure control circuits of at least two panchromatic pixels adjacent in the first diagonal direction are electrically connected to the first exposure control line, and the control terminals of the exposure control circuits of at least two color pixels adjacent in the second diagonal direction are electrically connected to the second exposure control line.
  • alternatively, the control terminals of the exposure control circuits of panchromatic pixels in the same row or column are electrically connected to the first exposure control line, and the control terminals of the exposure control circuits of color pixels in the same row or column are electrically connected to the second exposure control line.
  • the first exposure control line can transmit a first exposure signal to control the first exposure time of the panchromatic pixel
  • the second exposure control line can transmit a second exposure signal to control the second exposure time of the color pixel.
  • the first exposure time of the panchromatic pixel may be less than the second exposure time of the color pixel.
  • the ratio of the first exposure time to the second exposure time may be one of 1:2, 1:3, or 1:4.
  • the ratio of the first exposure time to the second exposure time can be adjusted to 1:2, 1:3, or 1:4 according to the brightness of the environment.
  • the relative relationship between the first exposure time and the second exposure time can be determined according to the ambient brightness. For example, when the ambient brightness is less than or equal to a brightness threshold, the panchromatic pixels are exposed with a first exposure time equal to the second exposure time; when the ambient brightness is greater than the brightness threshold, the panchromatic pixels are exposed with a first exposure time less than the second exposure time.
  • the relative relationship between the first exposure time and the second exposure time can be determined according to the brightness difference between the ambient brightness and the brightness threshold: the greater the brightness difference, the smaller the ratio of the first exposure time to the second exposure time.
  • for example, when the brightness difference is within the first range [a, b), the ratio of the first exposure time to the second exposure time is 1:2; when the brightness difference is within the second range [b, c), the ratio is 1:3; when the brightness difference is greater than or equal to c, the ratio is 1:4.
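  • As an illustration of the ratio selection just described, the following Python sketch maps a brightness difference to an exposure-time ratio (the helper name, the range bounds a, b, c, and the behavior below the first range are assumptions; the application leaves them unspecified):

    def exposure_ratio(ambient: float, threshold: float,
                       a: float, b: float, c: float) -> tuple:
        """Pick the ratio of the first (panchromatic) exposure time to the
        second (color) exposure time from the ambient brightness."""
        if ambient <= threshold:
            return (1, 1)      # dim scene: equal exposure times
        diff = ambient - threshold
        if diff < a:
            return (1, 1)      # assumed: below the first range, keep 1:1
        if diff < b:           # first range [a, b)
            return (1, 2)
        if diff < c:           # second range [b, c)
            return (1, 3)
        return (1, 4)          # brightness difference >= c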
  • the second color pixel B is a green pixel G
  • only the green pixel G may include two sub-pixels 102, and the remaining pixels 101 may include only one sub-pixel 102.
  • the sensitivity of the green pixel G is higher than the sensitivity of the red pixel R and the blue pixel Bu, and is lower than the sensitivity of the white pixel W.
  • the green pixel G can obtain pixel information with high signal-to-noise ratio.
  • in a bright environment, the green pixel G will not be oversaturated, so the scene adaptability of the image sensor 10 can likewise be improved.
  • The control method includes: step 01, exposing a plurality of sub-pixels 102 to output sub-pixel information; step 02, calculating a phase difference according to the sub-pixel information for focusing; and step 03, in the in-focus state, exposing the multiple pixels 101 in the two-dimensional pixel array 11 to obtain a target image.
  • the control method of the embodiment of the present application can be implemented by the camera assembly 40 of the embodiment of the present application.
  • the camera assembly 40 includes a lens 30, the image sensor 10 described in any one of the above embodiments, and a processing chip 20.
  • the image sensor 10 may receive light incident through the lens 30 and generate electrical signals.
  • the image sensor 10 is electrically connected to the processing chip 20.
  • the processing chip 20, the image sensor 10, and the lens 30 may all be packaged in the housing of the camera assembly 40; alternatively, the image sensor 10 and the lens 30 are packaged in the housing while the processing chip 20 is arranged outside the housing.
  • Step 01 can be implemented by the image sensor 10.
  • Step 02 can be implemented by the processing chip 20.
  • Step 03 can be implemented by the image sensor 10 and the processing chip 20 together.
  • a plurality of sub-pixels 102 in the image sensor 10 are exposed to output sub-pixel information.
  • the processing chip 20 calculates the phase difference according to the sub-pixel information for focusing. In the in-focus state, multiple pixels 101 in the two-dimensional pixel array 11 of the image sensor 10 are exposed, and the processing chip 20 obtains a target image according to the exposure results of the multiple pixels 101.
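  • To make the overall flow concrete, here is a minimal Python pseudo-flow (the objects sensor, chip, and lens and their methods are illustrative stand-ins for the image sensor 10, the processing chip 20, and the lens 30; they are not APIs defined by this application):

    def control_method(sensor, chip, lens):
        # Step 01: expose the sub-pixels 102 to obtain sub-pixel information.
        sub_info = sensor.expose_sub_pixels()
        # Step 02: calculate the phase difference and drive the lens to focus.
        pd = chip.phase_difference(sub_info)
        lens.move(chip.displacement_from(pd))
        # Step 03: in the in-focus state, expose the pixels 101 and form the target image.
        raw = sensor.expose_pixels()
        return chip.target_image(raw)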
  • the control method and the camera assembly 40 of the embodiment of the present application adopt an image sensor 10 that can obtain both horizontal and vertical phase information, so that they can be applied to scenes containing a large number of pure-color horizontal stripes as well as scenes containing a large number of pure-color vertical stripes, which improves the scene adaptability and the accuracy of phase focusing.
  • the control method and the camera assembly 40 of the embodiment of the present application do not need to shield any pixels 101 in the image sensor 10; all pixels 101 can be used for imaging, and no dead-pixel compensation is required, which is beneficial to improving the quality of the target image acquired by the camera assembly 40.
  • in the control method and camera assembly 40 of the embodiments of the present application, all pixels 101 that include two sub-pixels 102 can be used for phase focusing, so the accuracy of phase focusing is higher.
  • the plurality of pixels 101 includes a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels. At least part of the panchromatic pixels and at least part of the color pixels each include two sub-pixels 102. The control method also includes:
  • Step 01 Exposing multiple sub-pixels 102 to output sub-pixel information includes:
  • Step 02 Calculating the phase difference according to the sub-pixel information for focusing includes:
  • Step 01 Exposing multiple sub-pixels 102 to output sub-pixel information further includes:
  • Step 02 Calculating the phase difference according to the sub-pixel information for focusing includes:
  • Step 01 Exposing multiple sub-pixels 102 to output sub-pixel information further includes:
  • the sub-pixel 102 in the panchromatic pixel is exposed to output full-color sub-pixel information, and the sub-pixel 102 in the color pixel is exposed to output the color sub-pixel information;
  • Step 02 Calculating the phase difference according to the sub-pixel information for focusing includes:
  • step 04, step 021, step 022, and step 023 can all be implemented by the processing chip 20.
  • Step 011, step 012, and step 013 can all be implemented by the image sensor 10.
  • the processing chip 20 can obtain the ambient brightness.
  • when the ambient brightness is less than the first predetermined brightness, the sub-pixels 102 in the panchromatic pixels of the image sensor 10 are exposed to output panchromatic sub-pixel information, and the processing chip 20 calculates the phase difference according to the panchromatic sub-pixel information for focusing.
  • when the ambient brightness is greater than the second predetermined brightness, the processing chip 20 calculates a phase difference according to the color sub-pixel information to perform focusing.
  • when the ambient brightness is between the first predetermined brightness and the second predetermined brightness, the processing chip 20 calculates a phase difference for focusing according to at least one of the full-color sub-pixel information and the color sub-pixel information.
  • the first predetermined brightness is less than the second predetermined brightness.
  • Calculating the phase difference for focusing based on at least one of the panchromatic sub-pixel information and the color sub-pixel information includes: (1) calculating the phase difference based on the panchromatic sub-pixel information only; (2) calculating the phase difference based on the color sub-pixel information only; (3) calculating the phase difference based on both the panchromatic sub-pixel information and the color sub-pixel information.
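  • A minimal sketch of this brightness-based selection, assuming b1 and b2 stand for the first and second predetermined brightness and that the boundary cases fall through to the middle branch:

    def choose_focus_source(brightness: float, b1: float, b2: float) -> str:
        """Select which sub-pixel information drives phase focusing."""
        if brightness < b1:
            return "panchromatic"             # higher sensitivity helps in low light
        if brightness > b2:
            return "color"                    # panchromatic pixels risk saturation
        return "panchromatic and/or color"    # moderate light: either or both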
  • the control method and the camera assembly 40 of the embodiment of the present application use the image sensor 10 including panchromatic pixels and color pixels to achieve phase focusing, so that panchromatic pixels with higher sensitivity are used for phase focusing in a low-brightness environment (for example, when the brightness is less than or equal to the first predetermined brightness), color pixels with lower sensitivity are used for phase focusing in a high-brightness environment (for example, when the brightness is greater than or equal to the second predetermined brightness), and at least one of the panchromatic pixels and the color pixels is used for phase focusing when the brightness is moderate (for example, greater than the first predetermined brightness and less than the second predetermined brightness).
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are respectively output by the panchromatic sub-pixels located in the first orientation of the lens 170 and the panchromatic sub-pixels located in the second orientation of the lens 170.
  • One first panchromatic sub-pixel information and a corresponding second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information.
  • the steps of calculating the phase difference according to the panchromatic sub-pixel information for focusing include:
  • step 0511, step 0512, and step 0513 can all be implemented by the processing chip 20. That is to say, the processing chip 20 can be used to form a first curve according to the first panchromatic sub-pixel information in multiple pairs of panchromatic sub-pixel information, to form a second curve according to the second panchromatic sub-pixel information in the multiple pairs, and to calculate the phase difference according to the first curve and the second curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 26 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 26; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (also a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, etc. are located at the first orientation P1, and panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located at the second orientation P2.
  • the two panchromatic sub-pixels W in the same panchromatic pixel form a panchromatic sub-pixel pair; correspondingly, the panchromatic sub-pixel information of the two panchromatic sub-pixels in the same panchromatic pixel forms a pair of panchromatic sub-pixel information.
  • for example, the panchromatic sub-pixel information of W11,P1 and that of W11,P2 form a pair of panchromatic sub-pixel information; the information of W13,P1 and that of W13,P2 form a pair; the information of W15,P1 and that of W15,P2 form a pair; the information of W17,P1 and that of W17,P2 form a pair; and so on.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 27 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 27; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (also a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W21,P1, W23,P1, W25,P1, W27,P1, etc. are located at the first orientation P1, and panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W21,P2, W23,P2, W25,P2, W27,P2 are located at the second orientation P2.
  • the two panchromatic sub-pixels in the same panchromatic pixel W form a panchromatic sub-pixel pair; correspondingly, the panchromatic sub-pixel information of those two sub-pixels forms a pair of panchromatic sub-pixel information.
  • for example, the panchromatic sub-pixel information of W11,P1 and that of W11,P2 form a pair of panchromatic sub-pixel information; the information of W13,P1 and that of W13,P2 form a pair; the information of W15,P1 and that of W15,P2 form a pair; and so on.
  • after acquiring multiple pairs of panchromatic sub-pixel information, the processing chip 20 forms a first curve according to the first panchromatic sub-pixel information in the multiple pairs and a second curve according to the second panchromatic sub-pixel information in the multiple pairs, and then calculates the phase difference according to the first curve and the second curve.
  • a plurality of first panchromatic sub-pixel information can depict one histogram curve (ie, a first curve)
  • a plurality of second panchromatic sub-pixel information can depict another histogram curve (ie, a second curve).
  • the processing chip 20 can calculate the phase difference between the two histogram curves according to the positions of their peaks. Subsequently, the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters, and control the lens 30 to move that distance so that the lens 30 is in focus.
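  • The peak-based computation can be sketched as follows; this is a simplified model in which each curve is a 1-D array sampled along one direction of the pixel array, the phase difference is the offset between the curve peaks, and a linear mapping stands in for the pre-calibrated parameters:

    import numpy as np

    def phase_difference(first_curve: np.ndarray, second_curve: np.ndarray) -> int:
        # Offset between the peak positions of the two histogram curves.
        return int(np.argmax(second_curve) - np.argmax(first_curve))

    def lens_displacement(pd: float, slope: float, offset: float = 0.0) -> float:
        # Map the phase difference to a lens travel distance; a linear
        # calibration model (slope, offset) is assumed here.
        return slope * pd + offset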
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information.
  • the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output by the panchromatic sub-pixel in the first orientation of the lens 170 and the panchromatic sub-pixel in the second orientation of the lens 170, respectively.
  • the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information form a pair of panchromatic sub-pixel information. Calculating the phase difference based on the panchromatic sub-pixel information for focusing includes:
  • step 0521, step 0522, step 0523, step 0524, and step 0525 can all be implemented by the processing chip 20.
  • the processing chip 20 may be used to calculate the third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair of panchromatic sub-pixel information, and to calculate the fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair.
  • the processing chip 20 may also be used to form a first curve according to the multiple third panchromatic sub-pixel information, form a second curve according to the multiple fourth panchromatic sub-pixel information, and calculate the phase difference according to the first curve and the second curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 26 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 26; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (also a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located at the second orientation P2.
  • a plurality of panchromatic sub-pixels W located at the first orientation P1 and a plurality of panchromatic sub-pixels W located at the second orientation P2 form a panchromatic sub-pixel pair; correspondingly, the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information.
  • for example, the multiple first panchromatic sub-pixel information in the same sub-unit and the multiple second panchromatic sub-pixel information in that sub-unit form a pair: the panchromatic sub-pixel information of W11,P1 and W22,P1 together with that of W11,P2 and W22,P2 forms a pair of panchromatic sub-pixel information; the information of W13,P1 and W24,P1 together with that of W13,P2 and W24,P2 forms a pair; the information of W15,P1 and W26,P1 together with that of W15,P2 and W26,P2 forms a pair; the information of W17,P1 and W28,P1 together with that of W17,P2 and W28,P2 forms a pair; and so on.
  • alternatively, the multiple first panchromatic sub-pixel information in the same minimum repeating unit and the multiple second panchromatic sub-pixel information in that minimum repeating unit may be used as a pair of panchromatic sub-pixel information.
  • the first orientation P1 of each lens 170 corresponds to the position of the upper left corner of the lens 170, and the second orientation P2 corresponds to the position of the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 27 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 27; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each panchromatic pixel, one sub-pixel 102 (that is, a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (also a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170.
  • the first panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the first orientation P1 of the lens 170
  • the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W in the second orientation P2 of the lens 170.
  • panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W21,P2, W23,P2, W25,P2, W27,P2 are located at the second orientation P2.
  • a plurality of panchromatic sub-pixels W located at the first orientation P1 and a plurality of panchromatic sub-pixels W located at the second orientation P2 form a panchromatic sub-pixel pair; correspondingly, the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information.
  • for example, the multiple first panchromatic sub-pixel information in the same sub-unit and the multiple second panchromatic sub-pixel information in that sub-unit form a pair: the panchromatic sub-pixel information of W11,P1 and W21,P1 together with that of W11,P2 and W21,P2 forms a pair of panchromatic sub-pixel information; the information of W13,P1 and W23,P1 together with that of W13,P2 and W23,P2 forms a pair; the information of W15,P1 and W25,P1 together with that of W15,P2 and W25,P2 forms a pair; the information of W17,P1 and W27,P1 together with that of W17,P2 and W27,P2 forms a pair; and so on.
  • alternatively, the multiple first panchromatic sub-pixel information in the same minimum repeating unit and the multiple second panchromatic sub-pixel information in that minimum repeating unit may be used as a pair of panchromatic sub-pixel information.
  • after acquiring multiple pairs of panchromatic sub-pixel information, the processing chip 20 calculates the third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair, and calculates the fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair.
  • for example, for the pair of panchromatic sub-pixel information composed of the panchromatic sub-pixel information of panchromatic sub-pixels W11,P1 and W22,P1 and the panchromatic sub-pixel information of panchromatic sub-pixels W11,P2 and W22,P2, the third panchromatic sub-pixel information is calculated from the former and the fourth panchromatic sub-pixel information from the latter; likewise for a pair formed at the minimum-repeating-unit level, whose second half is composed of the panchromatic sub-pixel information of panchromatic sub-pixels W11,P2, W13,P2, W22,P2, W24,P2, W31,P2, W33,P2, W42,P2, W44,P2; and so on.
  • the processing chip 20 can obtain a plurality of third panchromatic sub-pixel information and a plurality of fourth panchromatic sub-pixel information.
  • the plurality of third panchromatic sub-pixel information can depict one histogram curve (ie, the first curve), and the plurality of fourth panchromatic sub-pixel information can depict another histogram curve (ie, the second curve). Subsequently, the processing chip 20 can calculate the phase difference according to the two histogram curves.
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters, and then control the lens 30 to move that distance so that the lens 30 is in focus.
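  • A sketch of this grouped variant; averaging within each pair is an assumption, since the text only says that the third and fourth information are calculated from the first and second information in the pair:

    import numpy as np

    def binned_curves(pairs):
        """pairs: iterable of (first_infos, second_infos), the lists of first
        and second panchromatic sub-pixel information in one pair (e.g. one
        sub-unit or one minimum repeating unit)."""
        third = np.array([np.mean(first) for first, _ in pairs])
        fourth = np.array([np.mean(second) for _, second in pairs])
        return third, fourth  # one point per pair on the first/second curve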
  • the color pixel includes two color sub-pixels.
  • the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information.
  • the first color sub-pixel information and the second color sub-pixel information are respectively output by the color sub-pixel located in the first orientation of the lens 170 and the color sub-pixel located in the second orientation of the lens 170.
  • One first color sub-pixel information and a corresponding second color sub-pixel information serve as a pair of color sub-pixel information.
  • the steps of calculating the phase difference according to the color sub-pixel information for focusing include:
  • step 0531, step 0532, and step 0533 can all be implemented by the processing chip 20.
  • the processing chip 20 may form a third curve according to the first color sub-pixel information in the multiple pairs of color sub-pixel information, form a fourth curve according to the second color sub-pixel information in the multiple pairs, and calculate the phase difference according to the third curve and the fourth curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 26 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 26; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each color pixel, one sub-pixel 102 (that is, a color sub-pixel A, a color sub-pixel B, or a color sub-pixel C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (likewise a color sub-pixel A, B, or C) is located at the second orientation P2 of the lens 170.
  • the first color sub-pixel information is output by the color sub-pixel in the first orientation P1 of the lens 170
  • the second color sub-pixel information is output by the color sub-pixel in the second orientation P2 of the lens 170.
  • color sub-pixels A12,P1, B14,P1, A16,P1, B18,P1, A21,P1, B23,P1, A25,P1, B27,P1, etc. are located at the first orientation P1, and color sub-pixels A12,P2, B14,P2, A16,P2, B18,P2, A21,P2, B23,P2, A25,P2, B27,P2, etc. are located at the second orientation P2.
  • the two color sub-pixels in the same color pixel form a color sub-pixel pair; correspondingly, the color sub-pixel information of the two color sub-pixels in the same color pixel forms a pair of color sub-pixel information.
  • for example, the color sub-pixel information of A12,P1 and that of A12,P2 form a pair of color sub-pixel information; the information of B14,P1 and that of B14,P2 form a pair; the information of A16,P1 and that of A16,P2 form a pair; the information of B18,P1 and that of B18,P2 form a pair; and so on.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 27 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 27; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each color pixel, one sub-pixel 102 (that is, a color sub-pixel A, a color sub-pixel B, or a color sub-pixel C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (likewise a color sub-pixel A, B, or C) is located at the second orientation P2 of the lens 170.
  • the first color sub-pixel information is output by the color sub-pixel in the first orientation P1 of the lens 170
  • the second color sub-pixel information is output by the color sub-pixel in the second orientation P2 of the lens 170.
  • color sub-pixels A12,P1, B14,P1, A16,P1, B18,P1, A22,P1, B24,P1, A26,P1, B28,P1, etc. are located at the first orientation P1, and color sub-pixels A12,P2, B14,P2, A16,P2, B18,P2, A22,P2, B24,P2, A26,P2, B28,P2, etc. are located at the second orientation P2.
  • the two color sub-pixels in the same color pixel form a color sub-pixel pair; correspondingly, the color sub-pixel information of the two color sub-pixels in the same color pixel forms a pair of color sub-pixel information.
  • for example, the color sub-pixel information of A12,P1 and that of A12,P2 form a pair of color sub-pixel information; the information of B14,P1 and that of B14,P2 form a pair; the information of A16,P1 and that of A16,P2 form a pair; the information of B18,P1 and that of B18,P2 form a pair; and so on.
  • after acquiring multiple pairs of color sub-pixel information, the processing chip 20 forms a third curve according to the first color sub-pixel information in the multiple pairs and a fourth curve according to the second color sub-pixel information in the multiple pairs, and then calculates the phase difference based on the third curve and the fourth curve.
  • a plurality of first color sub-pixel information can depict one histogram curve (ie, a third curve)
  • a plurality of second color sub-pixel information can depict another histogram curve (ie, a fourth curve).
  • the processing chip 20 can calculate the phase difference between the two histogram curves according to the positions of the peaks of the two histogram curves.
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters, and then control the lens 30 to move that distance so that the lens 30 is in focus.
  • the color pixel includes two color sub-pixels.
  • the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information.
  • the first color sub-pixel information and the second color sub-pixel information are respectively output by the color sub-pixels in the first orientation of the lens 170 and the color sub-pixels in the second orientation of the lens 170.
  • the plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information serve as a pair of color sub-pixel information. Calculating the phase difference based on the color sub-pixel information for focusing includes:
  • step 0541, step 0542, step 0543, step 0544, and step 0545 can all be implemented by the processing chip 20.
  • the processing chip 20 may be used to calculate the third color sub-pixel information according to the multiple first color sub-pixel information in each pair of color sub-pixel information, and to calculate the fourth color sub-pixel information according to the multiple second color sub-pixel information in each pair.
  • the processing chip 20 can also be used to form a third curve according to the multiple third color sub-pixel information, form a fourth curve according to the multiple fourth color sub-pixel information, and calculate a phase difference according to the third curve and the fourth curve for focusing.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170, and the second orientation P2 is the position corresponding to the lower right corner of the lens 170. It should be noted that the first orientation P1 and the second orientation P2 shown in FIG. 26 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 26; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • corresponding to each color pixel of the pixel array 11 in FIG. 26, one sub-pixel 102 (that is, a color sub-pixel A, a color sub-pixel B, or a color sub-pixel C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (likewise a color sub-pixel A, B, or C) is located at the second orientation P2 of the lens 170.
  • the first color sub-pixel information is output by the color sub-pixel in the first orientation P1 of the lens 170
  • the second color sub-pixel information is output by the color sub-pixel in the second orientation P2 of the lens 170.
  • color sub-pixels A12,P1, B14,P1, A16,P1, B18,P1, A21,P1, B23,P1, A25,P1, B27,P1, etc. are located at the first orientation P1, and color sub-pixels A12,P2, B14,P2, A16,P2, B18,P2, A21,P2, B23,P2, A25,P2, B27,P2, etc. are located at the second orientation P2.
  • a plurality of color sub-pixels located at the first orientation P1 and a plurality of color sub-pixels located at the second orientation P2 form a color sub-pixel pair; correspondingly, a plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information serve as a pair of color sub-pixel information.
  • for example, a plurality of first color sub-pixel information in the same sub-unit and the plurality of second color sub-pixel information in that sub-unit form a pair of color sub-pixel information: the color sub-pixel information of A12,P1 and A21,P1 together with that of A12,P2 and A21,P2 forms a pair; the information of B14,P1 and B23,P1 together with that of B14,P2 and B23,P2 forms a pair; the information of A16,P1 and A25,P1 together with that of A16,P2 and A25,P2 forms a pair; the information of B18,P1 and B27,P1 together with that of B18,P2 and B27,P2 forms a pair; and so on.
  • alternatively, the multiple first color sub-pixel information in the same minimum repeating unit and the multiple second color sub-pixel information in that minimum repeating unit may be used as a pair of color sub-pixel information; that is, the color sub-pixel information of color sub-pixels A12,P1, B14,P1, A21,P1, B23,P1, B32,P1, C34,P1, B41,P1, C43,P1 and the color sub-pixel information of color sub-pixels A12,P2, B14,P2, A21,P2, B23,P2, B32,P2, C34,P2, B41,P2, C43,P2 form a pair of color sub-pixel information, and so on.
  • the first orientation P1 of each lens 170 is the position corresponding to the upper left corner of the lens 170
  • the second orientation P2 is the position corresponding to the lower right corner of the lens 170.
  • the first orientation P1 and the second orientation P2 shown in FIG. 27 are determined according to the distribution example of the sub-pixels 102 shown in FIG. 27; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 will change correspondingly.
  • in each color pixel, one sub-pixel 102 (that is, a color sub-pixel A, a color sub-pixel B, or a color sub-pixel C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (likewise a color sub-pixel A, B, or C) is located at the second orientation P2 of the lens 170.
  • the first color sub-pixel information is output by the color sub-pixel in the first orientation P1 of the lens 170
  • the second color sub-pixel information is output by the color sub-pixel in the second orientation P2 of the lens 170.
  • color sub-pixels A12,P1, B14,P1, A16,P1, B18,P1, A22,P1, B24,P1, A26,P1, B28,P1, etc. are located at the first orientation P1, and color sub-pixels A12,P2, B14,P2, A16,P2, B18,P2, A22,P2, B24,P2, A26,P2, B28,P2, etc. are located at the second orientation P2.
  • a plurality of color sub-pixels located at the first orientation P1 and a plurality of color sub-pixels located at the second orientation P2 form a color sub-pixel pair; correspondingly, a plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information serve as a pair of color sub-pixel information.
  • for example, a plurality of first color sub-pixel information in the same sub-unit and the plurality of second color sub-pixel information in that sub-unit form a pair of color sub-pixel information: the color sub-pixel information of A12,P1 and A22,P1 together with that of A12,P2 and A22,P2 forms a pair; the information of B14,P1 and B24,P1 together with that of B14,P2 and B24,P2 forms a pair; the information of A16,P1 and A26,P1 together with that of A16,P2 and A26,P2 forms a pair; the information of B18,P1 and B28,P1 together with that of B18,P2 and B28,P2 forms a pair; and so on.
  • alternatively, the multiple first color sub-pixel information in the same minimum repeating unit and the multiple second color sub-pixel information in that minimum repeating unit may be used as a pair of color sub-pixel information; that is, the color sub-pixel information of color sub-pixels A12,P1, B14,P1, A22,P1, B24,P1, B32,P1, C34,P1, B42,P1, C44,P1 and the color sub-pixel information of color sub-pixels A12,P2, B14,P2, A22,P2, B24,P2, B32,P2, C34,P2, B42,P2, C44,P2 form a pair of color sub-pixel information, and so on.
  • after acquiring multiple pairs of color sub-pixel information, the processing chip 20 calculates the third color sub-pixel information according to the multiple first color sub-pixel information in each pair, and calculates the fourth color sub-pixel information according to the multiple second color sub-pixel information in each pair.
  • for example, the color sub-pixel information of color sub-pixels A12,P1 and A21,P1 and the color sub-pixel information of color sub-pixels A12,P2 and A21,P2 form a pair of color sub-pixel information.
  • the processing chip 20 can obtain a plurality of third color sub-pixel information and a plurality of fourth color sub-pixel information.
  • the plurality of third color sub-pixel information can depict one histogram curve (ie, the third curve), and the plurality of fourth color sub-pixel information can depict another histogram curve (ie, the fourth curve).
  • the processing chip 20 can calculate the phase difference according to the two histogram curves.
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the phase difference and the pre-calibrated parameters.
  • the processing chip 20 can then control the lens 30 to move that distance so that the lens 30 is in focus.
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the color pixel includes two color sub-pixels.
  • the panchromatic subpixel information includes first panchromatic subpixel information and second panchromatic subpixel information, and the color subpixel information includes first color subpixel information and second color subpixel information.
  • the first full-color sub-pixel information, the second full-color sub-pixel information, the first color sub-pixel information, and the second color sub-pixel information are respectively output by the full-color sub-pixel located in the first orientation of the lens 170, the full-color sub-pixel located in the second orientation of the lens 170, the color sub-pixel located in the first orientation of the lens 170, and the color sub-pixel located in the second orientation of the lens 170.
  • one first full-color sub-pixel information and a corresponding second full-color sub-pixel information are used as a pair of full-color sub-pixel information, and one first color sub-pixel information and a corresponding second color sub-pixel information are used as a pair of color sub-pixel information.
  • Calculating the phase difference for focusing based on the panchromatic sub-pixel information and the color sub-pixel information includes:
  • step 0551, step 0552, step 0553, step 0554, and step 0555 can all be implemented by the processing chip 20.
  • the processing chip 20 can be used to form a first curve according to the first panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information, and to form a second curve according to the second panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information.
  • the processing chip 20 may also be used to form a third curve according to the first color sub-pixel information of the multiple pairs of color sub-pixel information, and form a fourth curve according to the second color sub-pixel information of the multiple pairs of color sub-pixel information.
  • the processing chip 20 can also be used to calculate the phase difference according to the first curve, the second curve, the third curve, and the fourth curve for focusing.
  • the definitions of the first orientation and the second orientation are the same as those of the first orientation P1 and the second orientation P2 in the control method of the embodiment shown in FIG. 25 and FIG. 29, and will not be repeated here.
  • the pair of panchromatic sub-pixel information and the pair of color sub-pixel information have the same meaning as in the control method of the embodiment shown in FIG. 25 and FIG. 29, and will not be repeated here.
  • after acquiring multiple pairs of panchromatic sub-pixel information and multiple pairs of color sub-pixel information, the processing chip 20 may form a first curve according to the first panchromatic sub-pixel information in the multiple pairs of panchromatic sub-pixel information, form a second curve according to the second panchromatic sub-pixel information in those pairs, form a third curve according to the first color sub-pixel information in the multiple pairs of color sub-pixel information, and form a fourth curve according to the second color sub-pixel information in those pairs.
  • the processing chip 20 calculates a first phase difference according to the first curve and the second curve, calculates a second phase difference according to the third curve and the fourth curve, and then calculates the final phase difference according to the first phase difference and the second phase difference.
  • in one example, the processing chip 20 may calculate the average of the first phase difference and the second phase difference and use the average as the final phase difference; in another example, the processing chip 20 may assign a first weight to the first phase difference and a second weight to the second phase difference, where the first weight and the second weight are not equal, and then calculate the final phase difference from the first phase difference, the first weight, the second phase difference, and the second weight.
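  • A minimal sketch of this fusion step (the default weight values are hypothetical; the text only requires either a plain average or two unequal weights):

    def fuse_phase_differences(pd_pan: float, pd_color: float,
                               w_pan: float = 0.7, w_color: float = 0.3) -> float:
        # Weighted combination of the panchromatic and color phase differences;
        # equal weights reduce to the plain average.
        return (w_pan * pd_pan + w_color * pd_color) / (w_pan + w_color)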
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the final phase difference and the pre-calibrated parameters, and then control the lens 30 to move that distance so that the lens 30 is in focus.
  • the panchromatic pixel includes two panchromatic sub-pixels.
  • the color pixel includes two color sub-pixels.
  • the panchromatic subpixel information includes first panchromatic subpixel information and second panchromatic subpixel information, and the color subpixel information includes first color subpixel information and second color subpixel information.
  • the first full-color sub-pixel information, the second full-color sub-pixel information, the first color sub-pixel information, and the second color sub-pixel information are respectively output by the full-color sub-pixel located in the first orientation of the lens 170, the full-color sub-pixel located in the second orientation of the lens 170, the color sub-pixel located in the first orientation of the lens 170, and the color sub-pixel located in the second orientation of the lens 170.
  • the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information serve as a pair of panchromatic sub-pixel information, and the plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information As a pair of color sub-pixel information.
  • Calculating the phase difference for focusing based on the panchromatic sub-pixel information and the color sub-pixel information includes:
  • a fourth curve is formed according to the information of a plurality of fourth color sub-pixels.
  • step 0561, step 0562, step 0563, step 0564, step 0565, step 0566, step 0567, step 0568, and step 0569 can all be implemented by the processing chip 20.
  • the processing chip 20 may be used to calculate the third panchromatic sub-pixel information according to the multiple first panchromatic sub-pixel information in each pair of panchromatic sub-pixel information, to calculate the fourth panchromatic sub-pixel information according to the multiple second panchromatic sub-pixel information in each pair, and likewise to calculate the third and fourth color sub-pixel information from each pair of color sub-pixel information.
  • the processing chip 20 may also be used to form a first curve according to the multiple third panchromatic sub-pixel information, form a second curve according to the multiple fourth panchromatic sub-pixel information, form a third curve according to the multiple third color sub-pixel information, and form a fourth curve according to the multiple fourth color sub-pixel information.
  • the processing chip 20 can also be used to calculate the phase difference according to the first curve, the second curve, the third curve, and the fourth curve for focusing.
  • the definitions of the first orientation and the second orientation are the same as those of the first orientation P1 and the second orientation P2 in the control method of the embodiment shown in FIG. 28 and FIG. 30, and will not be repeated here.
  • the pair of panchromatic sub-pixel information and the pair of color sub-pixel information have the same meaning as the pair of panchromatic sub-pixel information and the pair of color sub-pixel information in the control method of the embodiment shown in FIG. 28 and FIG. 30, and will not be repeated here.
  • the calculation method of the third panchromatic subpixel information and the fourth panchromatic subpixel information is the same as the calculation method of the third panchromatic subpixel information and the fourth panchromatic subpixel information in the control method of the embodiment shown in FIG. 28. This will not be repeated here.
  • the calculation methods of the third color sub-pixel information and the fourth color sub-pixel information are the same as those in the control method of the embodiment shown in FIG. 30, and will not be repeated here.
  • after obtaining these, the processing chip 20 can form a first curve based on the multiple third panchromatic sub-pixel information, a second curve based on the multiple fourth panchromatic sub-pixel information, a third curve based on the multiple third color sub-pixel information, and a fourth curve based on the multiple fourth color sub-pixel information.
  • the processing chip 20 calculates a first phase difference according to the first curve and the second curve, calculates a second phase difference according to the third curve and the fourth curve, and then calculates the final phase difference according to the first phase difference and the second phase difference.
  • in one example, the processing chip 20 may calculate the average of the first phase difference and the second phase difference and use the average as the final phase difference; in another example, the processing chip 20 may assign a first weight to the first phase difference and a second weight to the second phase difference, where the first weight and the second weight are not equal, and then calculate the final phase difference from the first phase difference, the first weight, the second phase difference, and the second weight.
  • the processing chip 20 can determine the distance that the lens 30 needs to move according to the final phase difference and the pre-calibrated parameters, and then control the lens 30 to move that distance so that the lens 30 is in focus.
  • the plurality of pixels 101 includes a plurality of panchromatic pixels and a plurality of color pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array 11 includes a minimum repeating unit, each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color pixels and a plurality of full-color pixels.
  • Step 03: Exposing the multiple pixels 101 in the two-dimensional pixel array 11 to obtain a target image includes:
  • step 031, exposing the panchromatic pixels and the color pixels to output a panchromatic original image and a color original image; step 032, interpolating the panchromatic original image to obtain the pixel information of all pixels 101 in each sub-unit and thereby a full-color intermediate image; step 033, interpolating the color original image to obtain a color intermediate image whose corresponding sub-units are arranged in a Bayer array; and step 034, fusing the full-color intermediate image and the color intermediate image to obtain the target image.
  • step 031 may be implemented by the image sensor 10.
  • Step 032, step 033, and step 034 can all be implemented by the processing chip 20.
  • the processing chip 20 can be used to interpolate the full-color original image, and obtain the pixel information of all pixels 101 in each sub-unit to obtain the full-color intermediate image.
  • the processing chip 20 can also be used to interpolate and process the color original image to obtain a color intermediate image, and the corresponding subunits in the color intermediate image are arranged in a Bayer array.
  • the processing chip 20 can also be used to fuse a full-color intermediate image and a color intermediate image to obtain a target image.
  • the pixel information of the pixel 101 refers to: (1) when there is only one sub-pixel 102 in the pixel 101, the sub-pixel information of that one sub-pixel 102 is regarded as the pixel information of the pixel 101; (2) when there are two sub-pixels 102 in the pixel 101, the sum of the sub-pixel information of the two sub-pixels 102 is regarded as the pixel information of the pixel 101.
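  • Expressed as a short sketch (the function name is illustrative):

    def pixel_information(sub_infos: list) -> float:
        # One sub-pixel 102: its sub-pixel information is the pixel information.
        if len(sub_infos) == 1:
            return sub_infos[0]
        # Two sub-pixels 102: the pixel information is the sum of the two.
        if len(sub_infos) == 2:
            return sub_infos[0] + sub_infos[1]
        raise ValueError("a pixel 101 has one or two sub-pixels 102")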
  • a frame of full-color original image is output after multiple panchromatic pixels are exposed, and a frame of color original image is output after multiple color pixels are exposed.
  • the panchromatic original image includes a plurality of panchromatic pixels W and a plurality of null pixels N (NULL).
  • the empty pixel is neither a panchromatic pixel nor a color pixel.
  • the position of the empty pixel N in the panchromatic original image can be regarded as having no pixel, or the pixel information of the empty pixel can be regarded as zero. Comparing the two-dimensional pixel array 11 with the full-color original image, it can be seen that each sub-unit in the two-dimensional pixel array 11 includes two full-color pixels W and two color pixels (color pixel A, color pixel B, or color pixel C).
  • the full-color original image also has a sub-unit corresponding to each sub-unit in the two-dimensional pixel array 11.
  • the sub-unit of the full-color original image includes two full-color pixels W and two empty pixels N, and the positions of the two empty pixels N correspond to the positions of the two color pixels in the corresponding sub-unit of the two-dimensional pixel array 11.
  • the color original image includes a plurality of color pixels and a plurality of empty pixels N.
  • the empty pixel is neither a full-color pixel nor a color pixel.
  • the position of the empty pixel N in the color original image can be regarded as no pixel at that position, or the pixel information of the empty pixel can be regarded as zero.
  • the sub-unit includes two panchromatic pixels W and two color pixels.
  • the color original image also has a subunit corresponding to each subunit in the two-dimensional pixel array 11.
  • the subunit of the color original image includes two color pixels and two empty pixels N, and the positions of the two empty pixels N correspond to the positions of the two panchromatic pixels W in the corresponding subunit of the two-dimensional pixel array 11.
  • after the processing chip 20 receives the full-color original image and the color original image output by the image sensor 10, it can further process the full-color original image to obtain a full-color intermediate image, and further process the color original image to obtain a color intermediate image.
  • a full-color original image can be transformed into a full-color intermediate image in the manner shown in FIG. 35.
  • the full-color original image includes multiple sub-units, each including two empty pixels N and two panchromatic pixels W; the processing chip 20 needs to replace each empty pixel N in each sub-unit with a panchromatic pixel W and calculate the pixel information of each replaced panchromatic pixel W at the position of that empty pixel N.
  • the processing chip 20 replaces the empty pixel N with a panchromatic pixel W, and determines the replaced panchromatic pixel W according to the pixel information of the remaining panchromatic pixels W adjacent to the replaced panchromatic pixel W Pixel information of the panchromatic pixel W.
  • for example, the panchromatic pixels adjacent to the replacement panchromatic pixel W 2,3 are the panchromatic pixels W 1,3, W 2,2, W 2,4, and W 3,3 of the panchromatic original image; the processing chip 20 takes the average of the pixel information of W 1,3, W 2,2, W 2,4, and W 3,3 as the pixel information of the replacement panchromatic pixel W 2,3, as in the neighbour-averaging sketch below.
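A minimal sketch of that neighbour-averaging rule, under the assumption that a replacement pixel simply averages whichever of its four direct panchromatic neighbours exist; this reproduces both the interior case W 2,3 above and border cases with fewer neighbours. The function name and arguments are ours, not the source's.

    def fill_null_with_neighbor_mean(img, w_mask):
        """Replace each null pixel with the mean of its existing 4-neighbour
        panchromatic pixels in the panchromatic original image."""
        h, w = img.shape
        out = img.copy()
        for y in range(h):
            for x in range(w):
                if w_mask[y, x]:
                    continue  # a real panchromatic pixel; keep it
                vals = [img[ny, nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and w_mask[ny, nx]]
                if vals:  # average the panchromatic neighbours that exist
                    out[y, x] = sum(vals) / len(vals)
        return out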
  • the color original image can be transformed into a color intermediate image in the manner shown in FIG. 36.
  • the color original image includes a plurality of sub-units, each including two single-color color pixels (i.e., single-color pixels A, single-color pixels B, or single-color pixels C): some sub-units include two empty pixels N and two single-color pixels A, some include two empty pixels N and two single-color pixels B, and some include two empty pixels N and two single-color pixels C.
  • the processing chip 20 first determines the specific arrangement of the Bayer array in each subunit, for example the ABBC arrangement shown in FIG. 36 (arrangements such as CBBA, BABC, and BCBA are also possible). Taking the top-left subunit as an example, the processing chip 20 replaces the empty pixel N 1,1 in the color original image with color pixel A 1,1, replaces the color pixel A 1,2 with color pixel B 1,2, replaces the color pixel A 2,1 with color pixel B 2,1, and replaces the empty pixel N 2,2 with color pixel C 2,2.
  • the processing chip 20 then calculates the pixel information of the color pixel A 1,1, the color pixel B 1,2, the color pixel B 2,1, and the color pixel C 2,2. In this way, the processing chip 20 can obtain a frame of color intermediate image.
  • after the processing chip 20 obtains the full-color intermediate image and the color intermediate image, it can fuse the two to obtain the target image.
  • the full-color intermediate image and the color intermediate image can be merged in the manner shown in FIG. 37 to obtain the target image.
  • the processing chip 20 first separates the color and brightness of the color intermediate image to obtain a color-brightness separated image, in which L represents brightness and CLR represents color.
  • assuming that single-color pixel A is a red pixel R, single-color pixel B is a green pixel G, and single-color pixel C is a blue pixel Bu: (1) the processing chip 20 can convert the color intermediate image in RGB space into a color-brightness separated image in YCrCb space, in which case Y in YCrCb is the brightness L in the color-brightness separated image and Cr and Cb in YCrCb are the color CLR; (2) the processing chip 20 can also convert the RGB color intermediate image into a color-brightness separated image in Lab space, in which case L in Lab is the brightness L and a and b in Lab are the color CLR.
  • it should be noted that L+CLR in the color-brightness separated image shown in FIG. 37 does not mean that the pixel information of each pixel is formed by adding L and CLR; it only means that the pixel information of each pixel is composed of L and CLR.
  • the processing chip 20 then fuses the brightness of the color-brightness separated image and the brightness of the full-color intermediate image.
  • the pixel information of each panchromatic pixel W in the panchromatic intermediate image is the brightness information of that panchromatic pixel, so the processing chip 20 may add the L of each pixel in the color-brightness separated image to the W of the panchromatic pixel at the corresponding position in the panchromatic intermediate image to obtain the brightness-corrected pixel information.
  • the processing chip 20 forms a brightness-corrected color-brightness separated image according to a plurality of brightness-corrected pixel information, and then uses color space conversion to convert the brightness-corrected color-brightness separated image into a brightness-corrected color image.
  • the processing chip 20 performs interpolation processing on the brightness-corrected color image to obtain a target image, in which the pixel information of each pixel includes the information of the three components A, B, and C. It should be noted that A+B+C in the target image of FIG. 37 indicates that the pixel information of each pixel is composed of the three color components A, B, and C; a compact sketch of this fusion path follows.
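The sketch below walks through the fusion, assuming A/B/C are R/G/B and choosing BT.601 YCbCr as the color-brightness separation. The source allows either YCrCb or Lab; the rounded BT.601 constants below are our choice, not the source's.

    import numpy as np

    def fuse_luminance(color_rgb, panchro_w):
        """color_rgb: HxWx3 color intermediate image; panchro_w: HxW
        panchromatic intermediate image used as extra luminance."""
        r, g, b = color_rgb[..., 0], color_rgb[..., 1], color_rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness L
        cb = 0.564 * (b - y)                    # color CLR (chroma)
        cr = 0.713 * (r - y)
        y = y + panchro_w                       # brightness correction: L + W
        r2 = y + 1.403 * cr                     # convert back to RGB
        b2 = y + 1.773 * cb
        g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
        return np.stack([r2, g2, b2], axis=-1)  # brightness-corrected color image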
  • the control method and the camera assembly 40 of the embodiments of the present application obtain a high-definition panchromatic original image and color original image while the lens 30 is in focus, and use the panchromatic original image to correct the brightness of the color original image, so that the final target image has both high definition and sufficient brightness; the quality of the target image is better.
  • the plurality of pixels 101 includes a plurality of panchromatic pixels and a plurality of color pixels, and the color pixels have a narrower spectral response than the panchromatic pixels.
  • the two-dimensional pixel array 11 includes a minimum repeating unit, each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color pixels and a plurality of full-color pixels.
  • Step 03, exposing the multiple pixels 101 in the two-dimensional pixel array 11 to obtain a target image, includes:
  • 035: the plurality of pixels 101 in the two-dimensional pixel array 11 are exposed and output a full-color original image and a color original image;
  • 036: process the full-color original image, treating all pixels 101 of each subunit as a full-color large pixel, and output the pixel information of the full-color large pixels to obtain a full-color intermediate image;
  • 037: process the color original image, treating all pixels 101 of each subunit as the single-color large pixel corresponding to the single color in that subunit, and output the pixel information of the single-color large pixels to obtain a color intermediate image; and
  • 038: fuse the full-color intermediate image and the color intermediate image to obtain the target image.
  • step 035 may be implemented by the image sensor 10.
  • step 036, step 037, and step 038 can all be implemented by the processing chip 20.
  • a plurality of pixels 101 in the two-dimensional pixel array 11 of the image sensor 10 are exposed and output a full-color original image and a color original image.
  • the processing chip 20 may be used to process a full-color original image, treat all pixels 101 of each subunit as a full-color large pixel, and output pixel information of the full-color large pixel to obtain a full-color intermediate image.
  • the processing chip 20 can also be used to process the color original image, treating all the pixels 101 of each sub-unit as the single-color large pixel corresponding to the single color in that sub-unit, and to output the pixel information of the single-color large pixels to obtain the color intermediate image.
  • the processing chip 20 can also be used to fuse a full-color intermediate image and a color intermediate image to obtain a target image.
  • a frame of full-color original image is output after multiple panchromatic pixels are exposed, and a frame of color original image is output after multiple color pixels are exposed.
  • after the processing chip 20 receives the full-color original image and the color original image output by the image sensor 10, it can further process the full-color original image to obtain a full-color intermediate image, and further process the color original image to obtain a color intermediate image.
  • a full-color original image can be transformed into a full-color intermediate image in the manner shown in FIG. 39.
  • the full-color original image includes a plurality of sub-units, and each sub-unit includes two empty pixels N and two panchromatic pixels W.
  • the processing chip 20 may regard all the pixels 101 in each sub-unit including the empty pixel N and the full-color pixel W as the full-color large pixel W corresponding to the sub-unit.
  • the processing chip 20 can form a full-color intermediate image based on the plurality of full-color large pixels W.
  • the processing chip 20 may treat all the pixels 101 of each sub-unit in the full-color original image as the full-color large pixel W corresponding to that sub-unit in the following manner: the processing chip 20 first merges the pixel information of all the pixels 101 in each sub-unit to obtain the pixel information of the full-color large pixel W, and then forms the full-color intermediate image from the pixel information of the multiple full-color large pixels W. Specifically, for each full-color large pixel, the processing chip 20 may add all the pixel information in the sub-unit containing the empty pixels N and the full-color pixels W, and take the sum as the pixel information of the full-color large pixel W corresponding to that sub-unit, the pixel information of an empty pixel N being regarded as zero. In this way, the processing chip 20 can obtain the pixel information of the multiple full-color large pixels W; a binning sketch follows.
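A minimal binning sketch of that rule. Because null pixels are stored as zero, summing each 2x2 subunit implements "add all the pixel information in the sub-unit" directly; the function name and the subunit size parameter k are ours (the subunits here are 2x2).

    import numpy as np

    def bin_subunits(raw, k=2):
        """Sum each k x k subunit of an original image into one large pixel;
        null pixels are zero, so a plain sum matches the rule in the text."""
        h, w = raw.shape
        return raw.reshape(h // k, k, w // k, k).sum(axis=(1, 3))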
  • the color original image can be converted into a color intermediate image in the manner shown in FIG. 40.
  • the color original image includes a plurality of sub-units, and each sub-unit includes a plurality of empty pixels N and a plurality of single-color color pixels (also called single-color pixels).
  • some sub-units include two empty pixels N and two single-color pixels A, some include two empty pixels N and two single-color pixels B, and some include two empty pixels N and two single-color pixels C.
  • the processing chip 20 may regard all the pixels in a sub-unit containing empty pixels N and single-color pixels A as the single-color large pixel A corresponding to the single color A of that sub-unit, regard all the pixels in a sub-unit containing empty pixels N and single-color pixels B as the single-color large pixel B corresponding to the single color B of that sub-unit, and regard all the pixels in a sub-unit containing empty pixels N and single-color pixels C as the single-color large pixel C corresponding to the single color C of that sub-unit.
  • the processing chip 20 can form a color intermediate image based on the plurality of monochromatic large pixels A, the plurality of monochromatic large pixels B, and the plurality of monochromatic large pixels C.
  • the processing chip 20 may combine the pixel information of all pixels in each sub-unit to obtain the pixel information of the monochromatic large pixels, thereby forming a color intermediate image according to the pixel information of a plurality of monochromatic large pixels.
  • specifically, the processing chip 20 may add the pixel information of all pixels in a sub-unit containing empty pixels N and single-color pixels A and take the sum as the pixel information of the single-color large pixel A corresponding to that sub-unit, the pixel information of an empty pixel N again being regarded as zero (likewise below); it may add the pixel information of all pixels in a sub-unit containing empty pixels N and single-color pixels B and take the sum as the pixel information of the corresponding single-color large pixel B; and it may add the pixel information of all pixels in a sub-unit containing empty pixels N and single-color pixels C and take the sum as the pixel information of the corresponding single-color large pixel C.
  • in this way, the processing chip 20 can obtain the pixel information of the multiple single-color large pixels A, B, and C, and then forms a color intermediate image from this pixel information, as in the short usage sketch below.
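Reusing the hypothetical arrays and the binning sketch above, both intermediate images fall out of the same operation; the color result is a quarter-resolution mosaic of single-color large pixels (A, B, B, C under the layout assumed earlier).

    panchromatic_intermediate = bin_subunits(panchromatic_raw)  # large pixels W
    color_intermediate = bin_subunits(color_raw)  # single-color large pixels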
  • after the processing chip 20 obtains the full-color intermediate image and the color intermediate image, it can fuse the two to obtain the target image.
  • the full-color intermediate image and the color intermediate image can be merged in the manner shown in FIG. 41 to obtain the target image.
  • the processing chip 20 first separates the color and brightness of the color intermediate image to obtain a color-brightness separated image, in which L represents brightness and CLR represents color.
  • assuming that single-color large pixel A is a red pixel R, single-color large pixel B is a green pixel G, and single-color large pixel C is a blue pixel Bu: (1) the processing chip 20 can convert the color intermediate image in RGB space into a color-brightness separated image in YCrCb space, in which case Y in YCrCb is the brightness L in the color-brightness separated image and Cr and Cb in YCrCb are the color CLR; (2) the processing chip 20 can also convert the RGB color intermediate image into a color-brightness separated image in Lab space, in which case L in Lab is the brightness L and a and b in Lab are the color CLR.
  • it should be noted that L+CLR in the color-brightness separated image shown in FIG. 41 does not mean that the pixel information of each pixel is formed by adding L and CLR; it only means that the pixel information of each pixel is composed of L and CLR.
  • the processing chip 20 then fuses the brightness of the color-brightness separated image and the brightness of the full-color intermediate image.
  • the pixel information of each panchromatic large pixel W in the panchromatic intermediate image is the brightness information of that large pixel, so the processing chip 20 may add the L of each single-color large pixel in the color-brightness separated image to the W of the panchromatic large pixel at the corresponding position in the panchromatic intermediate image to obtain the brightness-corrected pixel information.
  • the processing chip 20 forms a brightness-corrected color-brightness separated image according to a plurality of brightness-corrected pixel information, and then uses color space conversion to convert the brightness-corrected color-brightness separated image into a brightness-corrected color image.
  • the processing chip 20 performs interpolation processing on the brightness-corrected color image to obtain a target image, in which the pixel information of each large pixel includes the information of the three components A, B, and C. It should be noted that A+B+C in the target image of FIG. 41 indicates that the pixel information of each large pixel is composed of the three color components A, B, and C.
  • the control method and the camera assembly 40 of the embodiments of the present application obtain a high-definition panchromatic original image and color original image while the lens 30 is in focus, and use the panchromatic original image to correct the brightness of the color original image, so that the final target image has both high definition and sufficient brightness; the quality of the target image is better.
  • the resolution of the target image acquired by the control method of the embodiment shown in FIG. 33 is higher than the resolution of the target image acquired by the control method of the embodiment shown in FIG. 38.
  • the processing chip 20 may determine, according to the ambient brightness, which embodiment's control method is used to compute the target image. For example, when the ambient brightness is high (for example, greater than or equal to the first predetermined brightness), the control method of the embodiment shown in FIG. 33 is used, and a target image with higher resolution and better brightness can be obtained; when the ambient brightness is low, the control method of the embodiment shown in FIG. 38 is used, so that the target image has the most sufficient brightness.
  • while the multiple pixels 101 in the two-dimensional pixel array 11 are exposed to output the full-color original image and the color original image, the first exposure time of the full-color pixels can be controlled by the first exposure control line and the second exposure time of the color pixels by the second exposure control line, so that when the ambient brightness is high (for example, greater than or equal to the first predetermined brightness) the first exposure time can be set shorter than the second exposure time.
  • this avoids over-saturating the panchromatic pixels, which would make it impossible to use the full-color original image to correct the brightness of the color original image. A small selection sketch follows.
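One way to express that selection logic, as a sketch only: the threshold comparison follows the text, while the return labels and the 1:2 ratio chosen for the bright case are illustrative assumptions.

    def choose_pipeline(ambient_brightness, first_predetermined_brightness):
        if ambient_brightness >= first_predetermined_brightness:
            # Bright scene: FIG. 33 interpolation path gives higher resolution;
            # expose panchromatic pixels for less time than color pixels.
            return "fig33_interpolation", {"w_to_color_exposure_ratio": 1 / 2}
        # Dim scene: FIG. 38 binning path gives the most sufficient brightness.
        return "fig38_binning", {"w_to_color_exposure_ratio": 1.0}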
  • the mobile terminal 90 of the embodiments of the present application may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (such as a smart watch, a smart bracelet, smart glasses, or a smart helmet), a head-mounted display device, a virtual reality device, and so on, without limitation here.
  • the mobile terminal 90 of the embodiment of the present application includes an image sensor 10, a processor 60, a memory 70, and a casing 80.
  • the image sensor 10, the processor 60, and the memory 70 are all installed in the casing 80, and the image sensor 10 is connected to the processor 60.
  • the processor 60 can perform the same functions as the processing chip 20 in the camera assembly 40 (shown in FIG. 23).
  • the processor 60 can implement the functions that can be implemented by the processing chip 20 described in any of the foregoing embodiments.
  • the memory 70 is connected to the processor 60, and the memory 70 can store data obtained after processing by the processor 60, such as a target image.
  • the processor 60 and the image sensor 10 may be mounted on the same substrate, in which case the image sensor 10 and the processor 60 can be regarded as one camera assembly 40; of course, the processor 60 and the image sensor 10 may also be mounted on different substrates.
  • the mobile terminal 90 of the embodiments of the present application adopts an image sensor 10 that can obtain both horizontal and vertical phase information, so the mobile terminal 90 can be applied to scenes with many pure-color horizontal stripes as well as scenes with many pure-color vertical stripes, which improves the scene adaptability and phase-focusing accuracy of the mobile terminal 90.


Abstract

An image sensor (10), a control method, a camera assembly (40), and a mobile terminal (90). In the two-dimensional pixel array (11) of the image sensor (10), at least some of the pixels (101) include two sub-pixels (102). A rectangular coordinate system is established with the center point of each pixel (101) as the origin, the direction parallel to the length of the two-dimensional pixel array (11) as the X axis, and the width direction as the Y axis; the two sub-pixels (102) are distributed on both the positive and negative half-axes of the X axis and of the Y axis. Each lens (170) in the lens array (17) of the image sensor (10) covers one pixel (101).

Description

Image Sensor, Control Method, Camera Assembly, and Mobile Terminal

Technical Field

This application relates to the field of imaging technology, and in particular to an image sensor, a control method, a camera assembly, and a mobile terminal.

Background

In the related art, phase detection autofocus is usually implemented in one of two ways: (1) multiple pairs of phase-detection pixels are arranged in the pixel array to detect a phase difference, each pair including one pixel whose left half is shielded and one pixel whose right half is shielded; (2) each pixel includes two photodiodes, and the two photodiodes form a phase-detection pixel for detecting the phase difference.

Summary

Embodiments of the present application provide an image sensor, a control method, a camera assembly, and a mobile terminal.
The image sensor of the embodiments of the present application includes a two-dimensional pixel array and a lens array. The two-dimensional pixel array includes a plurality of pixels, at least some of which include two sub-pixels. A rectangular coordinate system is established for each pixel, with its center point as the origin, the direction parallel to the length of the two-dimensional pixel array as the X axis, and the width direction as the Y axis; the two sub-pixels are distributed on both the positive and negative half-axes of the X axis, and also on both the positive and negative half-axes of the Y axis. The lens array includes a plurality of lenses, each lens covering one pixel.

The control method of the embodiments of the present application is used for an image sensor with the two-dimensional pixel array and lens array just described. The control method includes: exposing the sub-pixels to output sub-pixel information; calculating a phase difference from the sub-pixel information to perform focusing; and, in the in-focus state, exposing the plurality of pixels in the two-dimensional pixel array to obtain a target image.

The camera assembly of the embodiments of the present application includes a lens and an image sensor capable of receiving light passing through the lens, the image sensor having the two-dimensional pixel array and lens array described above.

The mobile terminal of the embodiments of the present application includes a casing and an image sensor installed in the casing, the image sensor having the two-dimensional pixel array and lens array described above.
Additional aspects and advantages of the embodiments of the present application will be given in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of an image sensor according to some embodiments of the present application;

FIG. 2 is a schematic diagram of a pixel circuit according to some embodiments of the present application;

FIGS. 3 to 10 are schematic diagrams of sub-pixel distributions according to some embodiments of the present application;

FIG. 11 is a schematic diagram of the exposure saturation times of different color channels;

FIGS. 12 to 21 are schematic diagrams of the pixel arrangement and lens coverage of minimal repeating units according to some embodiments of the present application;

FIG. 22 is a flowchart of a control method according to some embodiments of the present application;

FIG. 23 is a schematic diagram of a camera assembly according to some embodiments of the present application;

FIGS. 24 and 25 are flowcharts of control methods according to some embodiments of the present application;

FIGS. 26 and 27 are schematic diagrams illustrating the principles of control methods according to some embodiments of the present application;

FIGS. 28 to 33 are flowcharts of control methods according to some embodiments of the present application;

FIGS. 34 to 37 are schematic diagrams illustrating the principles of control methods according to some embodiments of the present application;

FIG. 38 is a flowchart of a control method according to some embodiments of the present application;

FIGS. 39 to 41 are schematic diagrams illustrating the principles of control methods according to some embodiments of the present application;

FIG. 42 is a schematic diagram of a mobile terminal according to some embodiments of the present application.
Detailed Description

Embodiments of the present application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and are not to be construed as limiting it.
Referring to FIG. 1, the present application provides an image sensor 10. The image sensor includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of pixels 101, at least some of which include two sub-pixels 102. A rectangular coordinate system is established for each pixel 101, with its center point as the origin, the direction parallel to the length direction LD of the two-dimensional pixel array 11 as the X axis, and the width direction WD as the Y axis; the two sub-pixels 102 are distributed on both the positive and negative half-axes of the X axis and on both the positive and negative half-axes of the Y axis. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101.

Referring to FIG. 1, the present application also provides a control method for the image sensor 10 described above. The control method includes: exposing the sub-pixels 102 to output sub-pixel information; calculating a phase difference from the sub-pixel information to perform focusing; and, in the in-focus state, exposing the plurality of pixels 101 in the two-dimensional pixel array 11 to obtain a target image.

Referring to FIGS. 1 and 23, the present application also provides a camera assembly 40. The camera assembly 40 includes a lens 30 and the image sensor 10 described above; the image sensor 10 can receive light passing through the lens 30.

Referring to FIGS. 1 and 42, the present application also provides a mobile terminal 90. The mobile terminal 90 includes a casing 80 and the image sensor 10 described above, installed in the casing.
In the related art, dual-core pixels can be used for phase focusing. Each dual-core pixel contains two sub-pixels, which form one phase-detection pair; the phase difference can be calculated from the signals the two sub-pixels output after exposure. At present, the two sub-pixels of a dual-core pixel are usually distributed symmetrically left-and-right or up-and-down. A left-right pair, however, mainly acquires phase information in the horizontal direction and can hardly acquire phase information in the vertical direction. When such a pair is used in a scene containing many pure-color horizontal stripes, the two signals it outputs are close to each other, the phase difference calculated from them is less accurate, and the focusing accuracy in turn suffers. Similarly, an up-down pair mainly acquires vertical phase information and can hardly acquire horizontal phase information; in a scene containing many pure-color vertical stripes, its two output signals are close, the computed phase difference is less accurate, and focusing accuracy again suffers.

For these reasons, the present application provides an image sensor 10 (shown in FIG. 1) in which at least some of the pixels 101 include two sub-pixels 102 that can acquire phase information in both the horizontal and the vertical direction. The image sensor 10 can therefore be used both in scenes containing many pure-color horizontal stripes and in scenes containing many pure-color vertical stripes; its scene adaptability is good and its phase-focusing accuracy is high.
The basic structure of the image sensor 10 is introduced first. Referring to FIG. 1, the image sensor 10 includes a two-dimensional pixel array 11, a filter array 16, and a lens array 17, arranged in the order lens array 17, filter array 16, two-dimensional pixel array 11 along the light-receiving direction of the image sensor 10.

The image sensor 10 may use a complementary metal oxide semiconductor (CMOS) photosensitive element or a charge-coupled device (CCD) photosensitive element.

The two-dimensional pixel array 11 includes a plurality of pixels 101 arranged two-dimensionally in an array, at least some of which include two sub-pixels 102. With the center point of each pixel 101 as the origin, the direction parallel to the length direction LD of the two-dimensional pixel array 11 as the X axis, and the direction parallel to the width direction WD as the Y axis, a rectangular coordinate system is established. The two sub-pixels 102 are distributed on both the positive and negative half-axes of the X axis and on both the positive and negative half-axes of the Y axis. In the example of FIG. 1, one sub-pixel 102 lies in the first, second, and third quadrants simultaneously, and the other lies in the first, fourth, and third quadrants simultaneously. In other embodiments, the Y axis may instead be parallel to the length direction LD and the X axis parallel to the width direction WD (not shown), in which case one sub-pixel 102 lies in the second, first, and fourth quadrants and the other in the second, third, and fourth quadrants. "At least some of the pixels 101 include two sub-pixels 102" may mean: (1) only some pixels 101 include two sub-pixels 102 and the remaining pixels 101 include only one sub-pixel 102; or (2) all pixels 101 include two sub-pixels 102.

The filter array 16 includes a plurality of filters 160, each covering a corresponding pixel 101. The spectral response of each pixel 101 (i.e., the color of light the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101.

The lens array 17 includes a plurality of lenses 170, each covering a corresponding pixel 101.
FIG. 2 is a schematic diagram of a pixel circuit 110 in an embodiment of the present application. When a pixel 101 contains two sub-pixels 102, the pixel circuit of each sub-pixel 102 may be the pixel circuit 110 shown in FIG. 2; when a pixel 101 contains only one sub-pixel 102, the pixel circuit of that sub-pixel 102 may likewise be the pixel circuit 110 of FIG. 2. The working principle of the pixel circuit 110 is described below with reference to FIGS. 1 and 2.

As shown in FIGS. 1 and 2, the pixel circuit 110 includes a photoelectric conversion element 117 (for example, a photodiode PD), an exposure control circuit 116 (for example, a transfer transistor 112), a reset circuit (for example, a reset transistor 113), an amplification circuit (for example, an amplification transistor 114), and a selection circuit (for example, a selection transistor 115). In the embodiments of the present application, the transfer transistor 112, the reset transistor 113, the amplification transistor 114, and the selection transistor 115 are, for example, MOS transistors, but are not limited thereto.

For example, referring to FIGS. 1 and 2, the gate TG of the transfer transistor 112 is connected to a vertical driving unit (not shown) of the image sensor 10 through an exposure control line (not shown); the gate RG of the reset transistor 113 is connected to the vertical driving unit through a reset control line (not shown); and the gate SEL of the selection transistor 115 is connected to the vertical driving unit through a selection line (not shown). The exposure control circuit 116 (for example, the transfer transistor 112) in each pixel circuit 110 is electrically connected to the photoelectric conversion element 117 and transfers the potential accumulated by the photoelectric conversion element 117 after illumination. The photoelectric conversion element 117 includes, for example, a photodiode PD whose anode is connected, for example, to ground; the photodiode PD converts received light into charge. The cathode of the photodiode PD is connected to a floating diffusion unit FD via the exposure control circuit 116 (for example, the transfer transistor 112). The floating diffusion unit FD is connected to the gate of the amplification transistor 114 and the source of the reset transistor 113.

For example, the exposure control circuit 116 is the transfer transistor 112, and the control terminal TG of the exposure control circuit 116 is the gate of the transfer transistor 112. When a pulse of an active level (for example, the VPIX level) is transmitted to the gate of the transfer transistor 112 through the exposure control line, the transfer transistor 112 turns on and transfers the charge photoelectrically converted by the photodiode PD to the floating diffusion unit FD.

For example, the drain of the reset transistor 113 is connected to the pixel power supply VPIX and its source to the floating diffusion unit FD. Before the charge is transferred from the photodiode PD to the floating diffusion unit FD, a pulse of an active reset level is transmitted to the gate of the reset transistor 113 via the reset line, the reset transistor 113 turns on, and the floating diffusion unit FD is reset to the pixel power supply VPIX.

For example, the gate of the amplification transistor 114 is connected to the floating diffusion unit FD and its drain to the pixel power supply VPIX. After the floating diffusion unit FD is reset by the reset transistor 113, the amplification transistor 114 outputs the reset level through the output terminal OUT via the selection transistor 115; after the charge of the photodiode PD is transferred by the transfer transistor 112, the amplification transistor 114 outputs the signal level through the output terminal OUT via the selection transistor 115.

For example, the drain of the selection transistor 115 is connected to the source of the amplification transistor 114, and the source of the selection transistor 115 is connected through the output terminal OUT to a column processing unit (not shown) of the image sensor 10. When a pulse of an active level is transmitted to the gate of the selection transistor 115 through the selection line, the selection transistor 115 turns on, and the signal output by the amplification transistor 114 is transmitted to the column processing unit through the selection transistor 115.

It should be noted that the pixel structure of the pixel circuit 110 in the embodiments of the present application is not limited to that shown in FIG. 2. For example, the pixel circuit 110 may have a three-transistor structure in which the functions of the amplification transistor 114 and the selection transistor 115 are performed by one transistor. The exposure control circuit 116 is likewise not limited to a single transfer transistor 112: other electronic devices or structures whose control terminal controls conduction can serve as the exposure control circuit in the embodiments of the present application, the single transfer transistor 112 simply being simple, low-cost, and easy to control.
FIGS. 3 to 10 show various distributions of the sub-pixels 102 in the image sensor 10. In all of them, every sub-pixel 102 can obtain phase information in both the horizontal and the vertical direction at the same time, which helps improve phase-focusing accuracy.

For example, FIG. 3 shows one distribution of the sub-pixels 102. Every pixel 101 includes two sub-pixels 102, and every sub-pixel 102 is distributed on the positive half-axis and the negative half-axis of the X axis and on the positive half-axis and the negative half-axis of the Y axis of its coordinate system. In each pixel 101, one sub-pixel 102 lies in the first, second, and third quadrants simultaneously and the other in the first, fourth, and third quadrants simultaneously. The cross-section of each sub-pixel 102 is triangular, the cross-section being taken perpendicular to the light-receiving direction of the image sensor 10. The two sub-pixels 102 of each pixel 101 are centrally symmetric about the center point of that pixel 101.

For example, FIG. 4 shows another distribution. It differs from FIG. 3 in the quadrant placement: in some pixels 101, one sub-pixel lies in the first, second, and third quadrants and the other in the first, fourth, and third quadrants, while in other pixels 101 one sub-pixel lies in the second, first, and fourth quadrants and the other in the second, third, and fourth quadrants. The cross-sections are again triangular, and the two sub-pixels 102 of each pixel 101 are centrally symmetric about its center point.

For example, FIG. 5 shows a further distribution, like FIG. 3 except that the cross-section of each sub-pixel 102 is trapezoidal: within one pixel 101, the cross-section of one sub-pixel is a trapezoid wide at the top and narrow at the bottom, and that of the other is narrow at the top and wide at the bottom.

For example, FIG. 6 combines the quadrant variants of FIG. 4 with the trapezoidal cross-sections of FIG. 5.

For example, FIG. 7 is like FIG. 3 except that the cross-section of each sub-pixel 102 is "L"-shaped: within one pixel 101, the cross-section of one sub-pixel is an inverted "L" and that of the other a mirrored "L".

For example, FIG. 8 combines the quadrant variants of FIG. 4 with the "L"-shaped cross-sections of FIG. 7.

For example, FIG. 9 shows a distribution in which some pixels 101 include one sub-pixel 102 and some include two sub-pixels 102, the two kinds alternating in both rows and columns. The sub-pixel 102 of a one-sub-pixel pixel lies in the first, second, third, and fourth quadrants simultaneously. In a two-sub-pixel pixel, one sub-pixel lies in the first, second, and third quadrants and the other in the first, fourth, and third quadrants; in some of these pixels the sub-pixel cross-sections are triangular, and in others "L"-shaped. The two sub-pixels 102 of each two-sub-pixel pixel 101 are centrally symmetric about its center point.

For example, FIG. 10 shows a distribution in which every pixel 101 includes two sub-pixels 102, with the quadrant variants of FIG. 4; in some pixels 101 the sub-pixel cross-sections are triangular and in others trapezoidal, and the two sub-pixels of each pixel are centrally symmetric about its center point.

It should be noted that, besides the cross-sectional shapes shown in FIGS. 3 to 10, the cross-section of a sub-pixel 102 may take other regular or irregular shapes, without limitation here.

In addition, besides the combinations of differently shaped sub-pixels 102 shown in FIGS. 9 and 10, other examples may combine trapezoidal sub-pixels 102 with "L"-shaped sub-pixels 102, or triangular sub-pixels 102 with "L"-shaped and trapezoidal sub-pixels 102, and so on, without limitation here.

Furthermore, besides the arrangement of FIG. 9, pixels 101 containing only one sub-pixel 102 and pixels 101 containing two sub-pixels 102 may also be arranged so that the pixels 101 of some columns of the two-dimensional pixel array 11 contain only one sub-pixel 102 while those of the remaining columns contain two, or the pixels 101 of some rows contain only one sub-pixel 102 while those of the remaining rows contain two, and so on, without limitation here.
In an image sensor containing pixels of multiple colors, pixels of different colors receive different exposure amounts per unit time. After some colors saturate, others have not yet been exposed to the ideal state. Exposure to 60%-90% of the saturated exposure amount, for example, can give a relatively good signal-to-noise ratio and accuracy, but the embodiments of the present application are not limited to this.

FIG. 11 illustrates this with RGBW (red, green, blue, panchromatic). Referring to FIG. 11, the horizontal axis is exposure time and the vertical axis exposure amount; Q is the saturated exposure amount, LW the exposure curve of the panchromatic pixel W, LG that of the green pixel G, LR that of the red pixel R, and LB that of the blue pixel B.

As can be seen from FIG. 11, the slope of the exposure curve LW of the panchromatic pixel W is the largest: the panchromatic pixel W gains the most exposure per unit time and saturates at time t1. The slope of the green curve LG is the next largest, the green pixel saturating at t2; then the red curve LR, the red pixel saturating at t3; the blue curve LB has the smallest slope, the blue pixel saturating at t4. The panchromatic pixel W thus receives more exposure per unit time than a color pixel; that is, the sensitivity of the panchromatic pixel W is higher than that of the color pixels.

If phase focusing were implemented with an image sensor containing only color pixels, then in a bright environment the R, G, and B color pixels would receive plenty of light and output pixel information with a high signal-to-noise ratio, and phase focusing would be accurate; but in a dim environment the R, G, and B pixels receive little light, the signal-to-noise ratio of the output pixel information is low, and the phase-focusing accuracy is also low.

For these reasons, the image sensor 10 of the embodiments of the present application may arrange panchromatic pixels and color pixels together in the two-dimensional pixel array 11, with at least some panchromatic pixels and at least some color pixels each including two sub-pixels 102. The image sensor 10 can then focus accurately both in scenes containing many pure-color horizontal or vertical stripes and in scenes of different ambient brightness, further improving its scene adaptability.

It should be noted that the spectral response of each pixel 101 (i.e., the color of light the pixel 101 can receive) is determined by the color of the filter 160 corresponding to that pixel 101. Throughout this application, "color pixel" and "panchromatic pixel" refer to a pixel 101 that can respond to light whose color matches that of the corresponding filter 160.
FIGS. 12 to 21 show examples of the pixel 101 arrangement in the image sensor 10 (shown in FIG. 1). Referring to FIGS. 12 to 21, the plurality of pixels 101 in the two-dimensional pixel array 11 may include both multiple panchromatic pixels W and multiple color pixels (for example, multiple first-color pixels A, multiple second-color pixels B, and multiple third-color pixels C). Color pixels and panchromatic pixels are distinguished by the wavelength band that the covering filter 160 (shown in FIG. 1) can pass: a color pixel has a narrower spectral response than a panchromatic pixel, its response spectrum being, for example, part of that of the panchromatic pixel W. At least some (some or all) of the panchromatic pixels contain two sub-pixels 102, and at least some (some or all) of the color pixels contain two sub-pixels 102. The two-dimensional pixel array 11 is composed of multiple minimal repeating units (FIGS. 12 to 21 show examples of minimal repeating units in various image sensors 10), which are replicated and arranged in rows and columns. Each minimal repeating unit includes multiple sub-units, and each sub-unit includes multiple single-color pixels and multiple panchromatic pixels. For example, each minimal repeating unit includes four sub-units: one sub-unit includes multiple single-color pixels A (i.e., first-color pixels A) and multiple panchromatic pixels W, two sub-units include multiple single-color pixels B (i.e., second-color pixels B) and multiple panchromatic pixels W, and the remaining sub-unit includes multiple single-color pixels C (i.e., third-color pixels C) and multiple panchromatic pixels W.

For example, the minimal repeating unit has equal numbers of pixels 101 in its rows and columns, for example 4 rows and 4 columns, 6 rows and 6 columns, 8 rows and 8 columns, or 10 rows and 10 columns, without limitation. Likewise, the sub-unit has equal numbers of pixels 101 in rows and columns, for example 2 rows and 2 columns, 3 rows and 3 columns, 4 rows and 4 columns, or 5 rows and 5 columns. Such an arrangement helps balance the image resolution and the color performance in the row and column directions and improves the display effect.

In one example, within the minimal repeating unit the panchromatic pixels W are arranged along a first diagonal direction D1 and the color pixels along a second diagonal direction D2, the first diagonal direction D1 differing from the second diagonal direction D2.
For example, FIG. 12 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of one minimal repeating unit in an embodiment of the present application. The minimal repeating unit is 4 rows x 4 columns of 16 pixels, and the sub-unit is 2 rows x 2 columns of 4 pixels. [The array diagram of FIG. 12 is an image in the original publication and is omitted here.] W denotes a panchromatic pixel; A denotes the first-color pixel of the multiple color pixels; B the second-color pixel; and C the third-color pixel.

As shown in FIG. 12, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., the direction joining the upper-left and lower-right corners in FIG. 12), and the color pixels are arranged along the second diagonal direction D2 (for example, the direction joining the lower-left and upper-right corners in FIG. 12); the first diagonal direction D1 differs from the second diagonal direction D2. For example, the first diagonal and the second diagonal are perpendicular.

It should be noted that the first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonals themselves; they also include directions parallel to the diagonals. "Direction" here is not a single pointing: it should be understood as the concept of the "line" indicating the arrangement, which may point both ways along the line.
As shown in FIG. 12, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two sub-pixels 102, and each sub-pixel 102 is distributed on both the positive and negative half-axes of the X axis and on both the positive and negative half-axes of the Y axis. One way to write down this minimal repeating unit is sketched below.
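A programmatic sketch of the FIG. 12-style unit: the W-on-the-D1-diagonal placement follows the description above, while the A/B/B/C layout of the four subunits is the ABBC example arrangement the description uses elsewhere and is assumed here, not stated for FIG. 12 itself.

    SUBUNIT_COLORS = [["A", "B"],   # assumed ABBC layout of the four subunits
                      ["B", "C"]]

    def minimal_repeating_unit():
        """Build a 4x4 FIG. 12-style unit: each 2x2 subunit has two W on
        its D1 diagonal and two color pixels on the other diagonal."""
        unit = [["" for _ in range(4)] for _ in range(4)]
        for sy in range(2):          # subunit row
            for sx in range(2):      # subunit column
                c = SUBUNIT_COLORS[sy][sx]
                unit[2 * sy][2 * sx] = "W"
                unit[2 * sy][2 * sx + 1] = c
                unit[2 * sy + 1][2 * sx] = c
                unit[2 * sy + 1][2 * sx + 1] = "W"
        return unit  # rows: W A W B / A W B W / W B W C / B W C W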
For example, FIG. 13 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of another minimal repeating unit: 4 rows x 4 columns of 16 pixels 101, with sub-units of 2 rows x 2 columns of 4 pixels 101. [The array diagram of FIG. 13 is an image in the original publication and is omitted here.] W denotes a panchromatic pixel; A, B, and C denote the first-, second-, and third-color pixels of the multiple color pixels.

As shown in FIG. 13, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., the direction joining the upper-right and lower-left corners in FIG. 13), and the color pixels along the second diagonal direction D2 (for example, the direction joining the upper-left and lower-right corners in FIG. 13). The first diagonal direction D1 differs from the second diagonal direction D2; for example, the first and second diagonals are perpendicular. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

For example, FIGS. 14 and 15 are schematic diagrams of further minimal repeating units, corresponding respectively to the arrangements and coverage of FIGS. 12 and 13, in which the first-color pixel A is a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu.

It should be noted that in some embodiments the response band of the panchromatic pixel W is the visible band (for example, 400 nm-760 nm); for example, an infrared filter is arranged on the panchromatic pixel W to filter out infrared light. In some other embodiments the response band of the panchromatic pixel W is the visible plus near-infrared band (for example, 400 nm-1000 nm), matching the response band of the photoelectric conversion element (for example, the photodiode PD) in the image sensor 10; in that case the panchromatic pixel W may have no filter, its response band being determined by, and matching, that of the photodiode. The embodiments of the present application include but are not limited to these band ranges.

In some embodiments of the minimal repeating units of FIGS. 12 and 13, the first-color pixel A may instead be a red pixel R, the second-color pixel B a yellow pixel Y, and the third-color pixel C a blue pixel Bu. In other embodiments, the first-color pixel A may be a magenta pixel M, the second-color pixel B a cyan pixel Cy, and the third-color pixel C a yellow pixel Y.
For example, FIG. 16 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of another minimal repeating unit: 6 rows x 6 columns of 36 pixels 101, with sub-units of 3 rows x 3 columns of 9 pixels 101. [The array diagram of FIG. 16 is an image in the original publication and is omitted here.] W denotes a panchromatic pixel; A, B, and C denote the first-, second-, and third-color pixels.

As shown in FIG. 16, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., joining the upper-left and lower-right corners in FIG. 16) and the color pixels along the second diagonal direction D2 (for example, joining the lower-left and upper-right corners); the two directions differ, and the first and second diagonals are, for example, perpendicular. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

For example, FIG. 17 is a schematic diagram of yet another minimal repeating unit: 6 rows x 6 columns of 36 pixels 101, with sub-units of 3 rows x 3 columns of 9 pixels 101. [The array diagram of FIG. 17 is an image in the original publication and is omitted here.] W, A, B, and C are as above.

As shown in FIG. 17, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., joining the upper-right and lower-left corners in FIG. 17) and the color pixels along the second diagonal direction D2 (for example, joining the upper-left and lower-right corners); the two directions differ, and the diagonals are, for example, perpendicular. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

By way of example, in the minimal repeating units of FIGS. 16 and 17, the first-color pixel A may be a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu; or A may be a red pixel R, B a yellow pixel Y, and C a blue pixel Bu; or A may be a magenta pixel M, B a cyan pixel Cy, and C a yellow pixel Y.
For example, FIG. 18 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of another minimal repeating unit: 8 rows x 8 columns of 64 pixels 101, with sub-units of 4 rows x 4 columns of 16 pixels 101. [The array diagram of FIG. 18 is an image in the original publication and is omitted here.] W denotes a panchromatic pixel; A, B, and C denote the first-, second-, and third-color pixels. As shown in FIG. 18, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., joining the upper-left and lower-right corners in FIG. 18) and the color pixels along the second diagonal direction D2 (for example, joining the lower-left and upper-right corners); the two directions differ, and the diagonals are, for example, perpendicular. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

For example, FIG. 19 is a schematic diagram of another minimal repeating unit: 8 rows x 8 columns of 64 pixels 101, with sub-units of 4 rows x 4 columns of 16 pixels 101. [The array diagram of FIG. 19 is an image in the original publication and is omitted here.] W, A, B, and C are as above. As shown in FIG. 19, the panchromatic pixels W are arranged along the first diagonal direction D1 (i.e., joining the upper-right and lower-left corners in FIG. 19) and the color pixels along the second diagonal direction D2 (for example, joining the upper-left and lower-right corners); the two directions differ, and the diagonals are, for example, perpendicular. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.
In the examples of FIGS. 12 to 19, within each sub-unit the adjacent panchromatic pixels W are arranged diagonally, and the adjacent color pixels are also arranged diagonally. In another example, within each sub-unit the adjacent panchromatic pixels are arranged horizontally and the adjacent color pixels also horizontally; or the adjacent panchromatic pixels are arranged vertically and the adjacent color pixels also vertically. The panchromatic pixels of adjacent sub-units may be arranged horizontally or vertically, and likewise the color pixels of adjacent sub-units.

For example, FIG. 20 is a schematic diagram of the pixel 101 arrangement and lens 170 coverage of another minimal repeating unit: 4 rows x 4 columns of 16 pixels 101, with sub-units of 2 rows x 2 columns of 4 pixels 101. [The array diagram of FIG. 20 is an image in the original publication and is omitted here.] W denotes a panchromatic pixel; A, B, and C denote the first-, second-, and third-color pixels. As shown in FIG. 20, within each sub-unit the adjacent panchromatic pixels W are arranged vertically, as are the adjacent color pixels. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

For example, FIG. 21 is a schematic diagram of another minimal repeating unit: 4 rows x 4 columns of 16 pixels 101, with sub-units of 2 rows x 2 columns of 4 pixels 101. [The array diagram of FIG. 21 is an image in the original publication and is omitted here.] W, A, B, and C are as above. As shown in FIG. 21, within each sub-unit the adjacent panchromatic pixels W are arranged horizontally, as are the adjacent color pixels. One lens 170 covers one pixel 101, and each panchromatic pixel and each color pixel includes two sub-pixels 102, each distributed on both the positive and negative half-axes of the X axis and of the Y axis.

In the minimal repeating units of FIGS. 20 and 21, the first-color pixel A may be a red pixel R, the second-color pixel B a green pixel G, and the third-color pixel C a blue pixel Bu; or A may be a red pixel R, B a yellow pixel Y, and C a blue pixel Bu; or A may be a magenta pixel M, B a cyan pixel Cy, and C a yellow pixel Y.
In the minimal repeating units of FIGS. 12 to 21, every panchromatic pixel and every color pixel includes two sub-pixels 102. In other embodiments, all panchromatic pixels may include two sub-pixels 102 while only some color pixels do; or only some panchromatic pixels may include two sub-pixels 102 while all color pixels do.

In the minimal repeating units of FIGS. 12 to 21, every sub-pixel 102 is "L"-shaped. In other embodiments, every sub-pixel 102 may be trapezoidal, or triangular; or some sub-pixels 102 may be trapezoidal and some "L"-shaped; or some triangular, some trapezoidal, and some "L"-shaped; and so on.

In a two-dimensional pixel array 11 (shown in FIG. 1) with any of the arrangements of FIGS. 12 to 21, the multiple panchromatic pixels and the multiple color pixels may be controlled by separate exposure control lines, allowing independent control of the exposure time of the panchromatic pixels and that of the color pixels. For any of the arrangements of FIGS. 12 to 19, the control terminals of the exposure control circuits of at least two panchromatic pixels adjacent in the first diagonal direction are electrically connected to a first exposure control line, and the control terminals of the exposure control circuits of at least two color pixels adjacent in the second diagonal direction are electrically connected to a second exposure control line. For the arrangements of FIGS. 20 and 21, the control terminals of the exposure control circuits of panchromatic pixels in the same row or column are electrically connected to a first exposure control line, and those of color pixels in the same row or column to a second exposure control line. The first exposure control line can transmit a first exposure signal to control the first exposure time of the panchromatic pixels, and the second exposure control line a second exposure signal to control the second exposure time of the color pixels. When a panchromatic pixel includes two sub-pixels 102, both sub-pixels 102 are connected to the same first exposure control line; when a color pixel includes two sub-pixels 102, both sub-pixels 102 are connected to the same second exposure control line.

When the exposure time of the panchromatic pixels and that of the color pixels are controlled independently, the first exposure time of the panchromatic pixels may be shorter than the second exposure time of the color pixels. For example, the ratio of the first exposure time to the second exposure time may be 1:2, 1:3, or 1:4. In a dim environment, for example, color pixels are more likely to be under-exposed, and the ratio of the first to the second exposure time may be adjusted to 1:2, 1:3, or 1:4 according to the ambient brightness. An exposure ratio that is an integer ratio, or close to one, facilitates the setting and control of the timing signals.
In some embodiments, the relative relationship between the first exposure time and the second exposure time may be determined from the ambient brightness. For example, when the ambient brightness is less than or equal to a brightness threshold, the panchromatic pixels are exposed with a first exposure time equal to the second exposure time; when the ambient brightness exceeds the threshold, the panchromatic pixels are exposed with a first exposure time shorter than the second exposure time. In the latter case, the relative relationship may be determined from the difference between the ambient brightness and the threshold: the larger the difference, the smaller the ratio of the first exposure time to the second. For example, when the brightness difference lies in a first range [a, b), the ratio of the first exposure time to the second is 1:2; when it lies in a second range [b, c), the ratio is 1:3; and when the difference is greater than or equal to c, the ratio is 1:4, as in the sketch below.
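A sketch of that banding. The threshold and the band boundaries b and c are unspecified calibration constants in the source, and the behaviour below the start a of the first band is not defined there; it is folded into the 1:2 case here.

    def exposure_ratio(ambient_brightness, threshold, b, c):
        """Ratio of the first (panchromatic) to the second (color) exposure time."""
        if ambient_brightness <= threshold:
            return 1.0                       # equal exposure times
        diff = ambient_brightness - threshold
        if diff >= c:
            return 1 / 4
        if diff >= b:                        # diff in [b, c)
            return 1 / 3
        return 1 / 2                         # diff in the first range [a, b)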
In some embodiments, when the second-color pixel B is a green pixel G, among the multiple pixels 101 of the two-dimensional pixel array 11 only the green pixels G may contain two sub-pixels 102, the remaining pixels 101 containing only one sub-pixel 102. It will be appreciated that the sensitivity of the green pixel G is higher than that of the red pixel R and the blue pixel Bu, and lower than that of the white (panchromatic) pixel W. Using the green pixels G for phase focusing, the green pixels G can obtain pixel information with a high signal-to-noise ratio when the ambient brightness is low, and do not over-saturate when the ambient brightness is high, which likewise improves the scene adaptability of the image sensor 10.
Referring to FIGS. 1 and 22, the present application also provides a control method, applicable to the image sensor 10 of any of the above embodiments. The control method includes:

01: exposing the multiple sub-pixels 102 to output sub-pixel information;

02: calculating a phase difference from the sub-pixel information to perform focusing; and

03: in the in-focus state, exposing the multiple pixels 101 in the two-dimensional pixel array 11 to obtain a target image.

Referring to FIGS. 1 and 23, the control method of the embodiments of the present application may be implemented by the camera assembly 40 of the embodiments of the present application. The camera assembly 40 includes a lens 30, the image sensor 10 of any of the above embodiments, and a processing chip 20. The image sensor 10 can receive light incident through the lens 30 and generate electrical signals, and is electrically connected to the processing chip 20. The processing chip 20 may be packaged together with the image sensor 10 and the lens 30 in the housing of the camera assembly 40; alternatively, the image sensor 10 and the lens 30 are packaged in the housing and the processing chip 20 is arranged outside it. Step 01 may be implemented by the image sensor 10, step 02 by the processing chip 20, and step 03 jointly by the image sensor 10 and the processing chip 20. That is, the multiple sub-pixels 102 in the image sensor 10 are exposed to output sub-pixel information; the processing chip 20 calculates a phase difference from the sub-pixel information to perform focusing; and, in the in-focus state, the multiple pixels 101 in the two-dimensional pixel array 11 of the image sensor 10 are exposed and the processing chip 20 obtains the target image from the exposure results of the multiple pixels 101.

The control method and camera assembly 40 of the embodiments of the present application use an image sensor 10 that can obtain phase information in both the horizontal and the vertical direction, so the control method and camera assembly 40 can be applied both to scenes containing many pure-color horizontal stripes and to scenes containing many pure-color vertical stripes, which improves their scene adaptability and phase-focusing accuracy.

In addition, the control method and camera assembly 40 of the embodiments of the present application need no shielding design for the pixels 101 of the image sensor 10: all pixels 101 can be used for imaging and no dead-pixel compensation is needed, which helps improve the quality of the target image obtained by the camera assembly 40.

Moreover, in the control method and camera assembly 40 of the embodiments of the present application, every pixel 101 that contains two sub-pixels 102 can be used for phase focusing, so the phase-focusing accuracy is higher.
Referring to FIGS. 1 and 24, in some embodiments, as in the examples of FIGS. 12 to 21, the multiple pixels 101 include multiple panchromatic pixels and multiple color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and at least some panchromatic pixels and at least some color pixels each include two sub-pixels 102. The control method further includes:

04: obtaining the ambient brightness.

Step 01, exposing the multiple sub-pixels 102 to output sub-pixel information, then includes:

011: when the ambient brightness is less than a first predetermined brightness, exposing the sub-pixels 102 of the panchromatic pixels to output panchromatic sub-pixel information; in which case step 02, calculating the phase difference from the sub-pixel information, includes 021: calculating the phase difference from the panchromatic sub-pixel information to perform focusing;

012: when the ambient brightness is greater than a second predetermined brightness, exposing the sub-pixels 102 of the color pixels to output color sub-pixel information; in which case step 02 includes 022: calculating the phase difference from the color sub-pixel information to perform focusing; and

013: when the ambient brightness is greater than the first predetermined brightness and less than the second predetermined brightness, exposing the sub-pixels 102 of the panchromatic pixels to output panchromatic sub-pixel information and the sub-pixels 102 of the color pixels to output color sub-pixel information; in which case step 02 includes 023: calculating the phase difference from at least one of the panchromatic sub-pixel information and the color sub-pixel information to perform focusing.

Referring to FIGS. 1 and 23, in some embodiments steps 04, 021, 022, and 023 may all be implemented by the processing chip 20, and steps 011, 012, and 013 by the image sensor 10. That is, the processing chip 20 obtains the ambient brightness; depending on how it compares with the first and second predetermined brightness values, the corresponding sub-pixels 102 of the image sensor 10 are exposed and the processing chip 20 calculates the phase difference from the corresponding sub-pixel information to perform focusing.

Here the first predetermined brightness is less than the second predetermined brightness. Calculating the phase difference from at least one of the panchromatic and color sub-pixel information means: (1) calculating it from the panchromatic sub-pixel information only; (2) calculating it from the color sub-pixel information only; or (3) calculating it from both the panchromatic and the color sub-pixel information.
The control method and camera assembly 40 of the embodiments of the present application implement phase focusing with an image sensor 10 that includes both panchromatic and color pixels: in a dim environment (for example, brightness less than or equal to the first predetermined brightness) the more sensitive panchromatic pixels perform phase focusing; in a bright environment (for example, brightness greater than or equal to the second predetermined brightness) the less sensitive color pixels perform phase focusing; and at moderate brightness (greater than the first and less than the second predetermined brightness) at least one of the panchromatic and color pixels is used. This avoids the inaccurate focusing that would result from using color pixels in dim scenes, where the color sub-pixel information output by the sub-pixels 102 has too low a signal-to-noise ratio, and the inaccurate focusing that would result from using panchromatic pixels in bright scenes, where the sub-pixels 102 of the panchromatic pixels over-saturate. Phase focusing is therefore accurate across many types of application scene, and its scene adaptability is good. A small selector sketch follows.
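The selection itself reduces to two comparisons; this sketch returns which sub-pixel information should drive phase focusing. The names are ours, and in the moderate-brightness case either or both sources may be used, as the text notes.

    def phase_info_source(ambient_brightness, first_predetermined, second_predetermined):
        if ambient_brightness < first_predetermined:
            return ("panchromatic",)          # high sensitivity wins in the dark
        if ambient_brightness > second_predetermined:
            return ("color",)                 # avoids panchromatic over-saturation
        return ("panchromatic", "color")      # moderate brightness: at least one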
Referring to FIGS. 1, 12, and 25, a panchromatic pixel includes two panchromatic sub-pixels. The panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixel at a first orientation P1 of the lens 170 and the panchromatic sub-pixel at a second orientation P2 of the lens 170. One item of first panchromatic sub-pixel information and the corresponding item of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair. The step of calculating the phase difference from the panchromatic sub-pixel information includes:

0511: forming a first curve from the first panchromatic sub-pixel information of the multiple pairs;

0512: forming a second curve from the second panchromatic sub-pixel information of the multiple pairs; and

0513: calculating the phase difference from the first curve and the second curve to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0511, 0512, and 0513 may all be implemented by the processing chip 20.

Specifically, with reference to FIG. 26, in one example the first orientation P1 of each lens 170 is the position corresponding to the upper-left part of the lens 170 and the second orientation P2 the position corresponding to the lower-right part. (The orientations P1 and P2 shown in FIG. 26 are determined by the sub-pixel 102 distribution of FIG. 26; for other sub-pixel distributions they change correspondingly.) In each panchromatic pixel W of the pixel array 11 of FIG. 26, one sub-pixel 102 (a panchromatic sub-pixel W) lies at orientation P1 of the lens 170 and the other at orientation P2; the first panchromatic sub-pixel information is output by the sub-pixel at P1 and the second by the sub-pixel at P2. For example, the panchromatic sub-pixels W1,1(P1), W1,3(P1), W1,5(P1), W1,7(P1), W2,2(P1), W2,4(P1), W2,6(P1), W2,8(P1), and so on, lie at P1, and W1,1(P2), W1,3(P2), W1,5(P2), W1,7(P2), W2,2(P2), W2,4(P2), W2,6(P2), W2,8(P2), and so on, at P2. The two panchromatic sub-pixels of one panchromatic pixel form one panchromatic sub-pixel pair, and correspondingly their sub-pixel information forms one panchromatic sub-pixel information pair: the information of W1,1(P1) pairs with that of W1,1(P2), the information of W1,3(P1) with that of W1,3(P2), the information of W1,5(P1) with that of W1,5(P2), the information of W1,7(P1) with that of W1,7(P2), and so on.

With reference to FIG. 27, in another example P1 and P2 are again the positions corresponding to the upper-left and lower-right parts of each lens 170 (again specific to the sub-pixel distribution of FIG. 27). In each panchromatic pixel W of the pixel array 11 of FIG. 27, one panchromatic sub-pixel lies at P1 and the other at P2: for example, W1,1(P1), W1,3(P1), W1,5(P1), W1,7(P1), W2,1(P1), W2,3(P1), W2,5(P1), W2,7(P1), and so on, lie at P1, and the corresponding sub-pixels at P2. The two sub-pixels of one panchromatic pixel form a pair, and their information forms one panchromatic sub-pixel information pair, as above.
After obtaining the multiple panchromatic sub-pixel information pairs, the processing chip 20 forms a first curve from the first panchromatic sub-pixel information of the pairs and a second curve from the second panchromatic sub-pixel information of the pairs, and then calculates the phase difference from the two curves. For example, the multiple items of first panchromatic sub-pixel information can trace one histogram curve (the first curve) and the multiple items of second panchromatic sub-pixel information another (the second curve); the processing chip 20 can then calculate the phase difference between the two histogram curves from the positions of their peaks. From the phase difference and pre-calibrated parameters, the processing chip 20 determines the distance the lens 30 needs to move, and then controls the lens 30 to move that distance so that the lens 30 reaches the in-focus state. A sketch of this peak-based computation follows.
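A minimal sketch of that computation: it treats the two sequences of sub-pixel information as the two histogram curves and takes the shift between their peak positions. Real systems may instead cross-correlate the curves; the calibration gain stands in for the pre-calibrated parameter the text mentions and its name is invented.

    import numpy as np

    def phase_difference(first_info, second_info):
        """Shift between the peaks of the first and second curves."""
        c1 = np.asarray(first_info, dtype=np.float64)
        c2 = np.asarray(second_info, dtype=np.float64)
        return int(np.argmax(c2)) - int(np.argmax(c1))

    def lens_shift(pd, calibration_gain):
        """Distance the lens 30 must move, from the phase difference and a
        pre-calibrated parameter (a linear model is assumed here)."""
        return calibration_gain * pd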
Referring to FIGS. 1, 12, and 28, in some embodiments a panchromatic pixel includes two panchromatic sub-pixels, and the panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, output respectively by the panchromatic sub-pixel at the first orientation P1 of the lens 170 and the panchromatic sub-pixel at the second orientation P2. Here, multiple items of first panchromatic sub-pixel information together with the corresponding multiple items of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair. Calculating the phase difference from the panchromatic sub-pixel information then includes:

0521: calculating third panchromatic sub-pixel information from the multiple items of first panchromatic sub-pixel information of each pair;

0522: calculating fourth panchromatic sub-pixel information from the multiple items of second panchromatic sub-pixel information of each pair;

0523: forming a first curve from the multiple items of third panchromatic sub-pixel information;

0524: forming a second curve from the multiple items of fourth panchromatic sub-pixel information; and

0525: calculating the phase difference from the first curve and the second curve to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0521 to 0525 may all be implemented by the processing chip 20.

Specifically, with reference to FIG. 26 (and analogously FIG. 27 for its sub-pixel distribution), the orientations P1 and P2 are as described above, and multiple panchromatic sub-pixels W at P1 together with multiple at P2 form one panchromatic sub-pixel pair. For example, the multiple items of first panchromatic sub-pixel information of one sub-unit may pair with the multiple items of second panchromatic sub-pixel information of the same sub-unit: in FIG. 26, the information of W1,1(P1) and W2,2(P1) pairs with that of W1,1(P2) and W2,2(P2), the information of W1,3(P1) and W2,4(P1) with that of W1,3(P2) and W2,4(P2), the information of W1,5(P1) and W2,6(P1) with that of W1,5(P2) and W2,6(P2), and so on. Alternatively, the multiple items of first panchromatic sub-pixel information of one minimal repeating unit may pair with the multiple items of second panchromatic sub-pixel information of the same minimal repeating unit: in FIG. 26, the information of W1,1(P1), W1,3(P1), W2,2(P1), W2,4(P1), W3,1(P1), W3,3(P1), W4,2(P1), W4,4(P1) pairs with that of the corresponding sub-pixels at P2, and so on. The same pairings apply, with the FIG. 27 pixel positions, to the distribution of FIG. 27.

After obtaining the pairs, the processing chip 20 calculates the third panchromatic sub-pixel information from the multiple items of first panchromatic sub-pixel information of each pair, and the fourth from the multiple items of second panchromatic sub-pixel information. For example, for the sub-unit pair formed by W1,1 and W2,2, the third panchromatic sub-pixel information may be calculated as LT1 = W1,1(P1) + W2,2(P1) and the fourth as RB1 = W1,1(P2) + W2,2(P2). For the minimal-repeating-unit pair above, the third may be LT1 = (W1,1(P1) + W1,3(P1) + W2,2(P1) + W2,4(P1) + W3,1(P1) + W3,3(P1) + W4,2(P1) + W4,4(P1)) / 8 and the fourth RB1 = (W1,1(P2) + W1,3(P2) + W2,2(P2) + W2,4(P2) + W3,1(P2) + W3,3(P2) + W4,2(P2) + W4,4(P2)) / 8. The remaining pairs are computed similarly and are not repeated here. The processing chip 20 thus obtains multiple items of third and fourth panchromatic sub-pixel information, which trace the first and second histogram curves respectively; it calculates the phase difference from the two curves, determines from it and pre-calibrated parameters the distance the lens 30 must move, and controls the lens 30 to move that distance into the in-focus state.
Referring to FIGS. 1, 12, and 29, in some embodiments a color pixel includes two color sub-pixels, and the color sub-pixel information includes first and second color sub-pixel information, output respectively by the color sub-pixel at the first orientation P1 of the lens 170 and the color sub-pixel at the second orientation P2. One item of first color sub-pixel information and the corresponding item of second color sub-pixel information form one color sub-pixel information pair. The step of calculating the phase difference from the color sub-pixel information includes:

0531: forming a third curve from the first color sub-pixel information of the multiple pairs;

0532: forming a fourth curve from the second color sub-pixel information of the multiple pairs; and

0533: calculating the phase difference from the third curve and the fourth curve to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0531, 0532, and 0533 may all be implemented by the processing chip 20.

Specifically, with reference to FIG. 26, in each color pixel of the pixel array 11 one sub-pixel 102 (color sub-pixel A, B, or C) lies at orientation P1 of the lens 170 and the other at orientation P2; the first color sub-pixel information is output by the sub-pixel at P1 and the second by the sub-pixel at P2. For example, the color sub-pixels A1,2(P1), B1,4(P1), A1,6(P1), B1,8(P1), A2,1(P1), B2,3(P1), A2,5(P1), B2,7(P1), and so on, lie at P1, and the corresponding sub-pixels at P2. The two color sub-pixels of one color pixel form a pair, and their information forms one color sub-pixel information pair: the information of A1,2(P1) pairs with that of A1,2(P2), the information of B1,4(P1) with that of B1,4(P2), and so on. The same holds, with its own positions (for example A1,2, B1,4, A1,6, B1,8, A2,2, B2,4, A2,6, B2,8 in FIG. 27), for the distribution of FIG. 27.

After obtaining the multiple color sub-pixel information pairs, the processing chip 20 forms the third curve from the first color sub-pixel information of the pairs and the fourth curve from the second color sub-pixel information, and then calculates the phase difference from the two curves. For example, the multiple items of first color sub-pixel information trace one histogram curve (the third curve) and the multiple items of second color sub-pixel information another (the fourth curve); the processing chip 20 calculates the phase difference between the two curves from the positions of their peaks, determines from it and pre-calibrated parameters the distance the lens 30 must move, and controls the lens 30 to move that distance into the in-focus state.
Referring to FIGS. 1, 12, and 30, in some embodiments a color pixel includes two color sub-pixels, and the color sub-pixel information includes first and second color sub-pixel information, output respectively at the first orientation P1 and the second orientation P2 of the lens 170. Here, multiple items of first color sub-pixel information together with the corresponding multiple items of second color sub-pixel information form one color sub-pixel information pair. Calculating the phase difference from the color sub-pixel information then includes:

0541: calculating third color sub-pixel information from the multiple items of first color sub-pixel information of each pair;

0542: calculating fourth color sub-pixel information from the multiple items of second color sub-pixel information of each pair;

0543: forming a third curve from the multiple items of third color sub-pixel information;

0544: forming a fourth curve from the multiple items of fourth color sub-pixel information; and

0545: calculating the phase difference from the third curve and the fourth curve to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0541 to 0545 may all be implemented by the processing chip 20.

Specifically, with reference to FIGS. 26 and 27, the orientations P1 and P2 are as described above, and multiple color sub-pixels at P1 together with multiple at P2 form one color sub-pixel pair. For example, the multiple items of first color sub-pixel information of one sub-unit pair with the multiple items of second color sub-pixel information of the same sub-unit: in FIG. 26, the information of A1,2(P1) and A2,1(P1) pairs with that of A1,2(P2) and A2,1(P2), the information of B1,4(P1) and B2,3(P1) with that of B1,4(P2) and B2,3(P2), and so on. Alternatively, the multiple items of first color sub-pixel information of one minimal repeating unit pair with the multiple items of second color sub-pixel information of the same minimal repeating unit: in FIG. 26, the information of A1,2(P1), B1,4(P1), A2,1(P1), B2,3(P1), B3,2(P1), C3,4(P1), B4,1(P1), C4,3(P1) pairs with that of the corresponding sub-pixels at P2, and so on. The FIG. 27 distribution pairs analogously with its own positions.

After obtaining the pairs, the processing chip 20 calculates the third color sub-pixel information from the multiple items of first color sub-pixel information of each pair, and the fourth from the multiple items of second color sub-pixel information. For example, for the sub-unit pair formed by A1,2 and A2,1, the third color sub-pixel information may be calculated as LT2 = A1,2(P1) + A2,1(P1) and the fourth as LB2 = A1,2(P2) + A2,1(P2). For the minimal-repeating-unit pair above, the third may be LT2 = a*(A1,2(P1) + A2,1(P1)) + b*(B1,4(P1) + B2,3(P1) + B3,2(P1) + B4,1(P1)) + c*(C3,4(P1) + C4,3(P1)) and the fourth LB2 = a*(A1,2(P2) + A2,1(P2)) + b*(B1,4(P2) + B2,3(P2) + B3,2(P2) + B4,1(P2)) + c*(C3,4(P2) + C4,3(P2)), where a, b, and c are coefficients. The remaining pairs are computed similarly and are not repeated here. The processing chip 20 thus obtains multiple items of third and fourth color sub-pixel information, which trace the third and fourth histogram curves respectively; it calculates the phase difference from the two curves, determines from it and pre-calibrated parameters the distance the lens 30 must move, and controls the lens 30 to move that distance into the in-focus state.
Referring to FIGS. 1 and 31, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels and the color pixel includes two color sub-pixels. The panchromatic sub-pixel information includes first and second panchromatic sub-pixel information, and the color sub-pixel information includes first and second color sub-pixel information, output respectively by the panchromatic sub-pixel at the first orientation P1 of the lens 170, the panchromatic sub-pixel at the second orientation P2, the color sub-pixel at P1, and the color sub-pixel at P2. One item of first panchromatic sub-pixel information and the corresponding item of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and one item of first color sub-pixel information and the corresponding item of second color sub-pixel information form one color sub-pixel information pair. Calculating the phase difference from the panchromatic and color sub-pixel information includes:

0551: forming a first curve from the first panchromatic sub-pixel information of the multiple panchromatic pairs;

0552: forming a second curve from the second panchromatic sub-pixel information of the multiple panchromatic pairs;

0553: forming a third curve from the first color sub-pixel information of the multiple color pairs;

0554: forming a fourth curve from the second color sub-pixel information of the multiple color pairs; and

0555: calculating the phase difference from the first, second, third, and fourth curves to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0551 to 0555 may all be implemented by the processing chip 20.

The first orientation P1 and the second orientation P2 have the same meanings as in the control methods of the embodiments of FIGS. 25 and 29, and the panchromatic and color sub-pixel information pairs have the same meanings as there, so they are not repeated here.
After obtaining the multiple panchromatic and color sub-pixel information pairs, the processing chip 20 can form the first curve from the first panchromatic sub-pixel information of the pairs, the second curve from the second panchromatic sub-pixel information, the third curve from the first color sub-pixel information, and the fourth curve from the second color sub-pixel information. It then calculates a first phase difference from the first and second curves and a second phase difference from the third and fourth curves, and computes the final phase difference from these two. In one example, the processing chip 20 takes the mean of the first and second phase differences as the final phase difference; in another example, it assigns the first phase difference a first weight and the second phase difference a second weight, the two weights being unequal, and computes the final phase difference from the first phase difference, the first weight, the second phase difference, and the second weight. From the final phase difference and pre-calibrated parameters, the processing chip 20 determines the distance the lens 30 must move and controls the lens 30 to move that distance into the in-focus state. A small weighted-fusion sketch follows.
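Both variants described above fit one expression: equal weights give the plain mean, unequal weights the weighted combination. A normalized weighting is one natural reading of the text and is assumed here.

    def fuse_phase_differences(pd_panchromatic, pd_color, w1=0.5, w2=0.5):
        """Final phase difference from the first (panchromatic) and second
        (color) phase differences; w1 == w2 reduces to the mean."""
        return (w1 * pd_panchromatic + w2 * pd_color) / (w1 + w2)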
Referring to FIGS. 1 and 32, in some embodiments the panchromatic pixel includes two panchromatic sub-pixels and the color pixel includes two color sub-pixels, with the first and second panchromatic and color sub-pixel information defined as above, except that multiple items of first panchromatic sub-pixel information together with the corresponding multiple items of second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and multiple items of first color sub-pixel information together with the corresponding multiple items of second color sub-pixel information form one color sub-pixel information pair. Calculating the phase difference from the panchromatic and color sub-pixel information then includes:

0561: calculating third panchromatic sub-pixel information from the multiple items of first panchromatic sub-pixel information of each panchromatic pair;

0562: calculating fourth panchromatic sub-pixel information from the multiple items of second panchromatic sub-pixel information of each panchromatic pair;

0563: calculating third color sub-pixel information from the multiple items of first color sub-pixel information of each color pair;

0564: calculating fourth color sub-pixel information from the multiple items of second color sub-pixel information of each color pair;

0565: forming a first curve from the multiple items of third panchromatic sub-pixel information;

0566: forming a second curve from the multiple items of fourth panchromatic sub-pixel information;

0567: forming a third curve from the multiple items of third color sub-pixel information;

0568: forming a fourth curve from the multiple items of fourth color sub-pixel information; and

0569: calculating the phase difference from the first, second, third, and fourth curves to perform focusing.

Referring again to FIG. 23, in some embodiments steps 0561 to 0569 may all be implemented by the processing chip 20.

The orientations P1 and P2 and the panchromatic and color sub-pixel information pairs have the same meanings as in the control methods of the embodiments of FIGS. 28 and 30; the third and fourth panchromatic sub-pixel information are calculated as in the embodiment of FIG. 28, and the third and fourth color sub-pixel information as in the embodiment of FIG. 30, so they are not repeated here.

After obtaining the multiple items of third and fourth panchromatic and color sub-pixel information, the processing chip 20 can form the first curve from the third panchromatic sub-pixel information, the second curve from the fourth panchromatic sub-pixel information, the third curve from the third color sub-pixel information, and the fourth curve from the fourth color sub-pixel information. It then calculates a first phase difference from the first and second curves and a second phase difference from the third and fourth curves, and computes the final phase difference either as their mean or as the weighted combination with unequal first and second weights, exactly as above. From the final phase difference and pre-calibrated parameters, the processing chip 20 determines the distance the lens 30 must move and controls the lens 30 to move that distance into the in-focus state.
Referring to FIGS. 1, 3, and 33, in some embodiments the multiple pixels 101 include multiple panchromatic pixels and multiple color pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes minimal repeating units, each containing multiple sub-units, and each sub-unit includes multiple single-color pixels and multiple panchromatic pixels. Step 03, exposing the multiple pixels 101 in the two-dimensional pixel array 11 to obtain a target image, includes:

031: exposing the multiple pixels 101 in the two-dimensional pixel array 11 and outputting a panchromatic original image and a color original image;

032: interpolating the panchromatic original image to obtain the pixel information of all pixels 101 in each sub-unit, yielding a panchromatic intermediate image;

033: interpolating the color original image to obtain a color intermediate image, the corresponding sub-units of which are arranged in a Bayer array; and

034: fusing the panchromatic intermediate image and the color intermediate image to obtain the target image.

Referring to FIGS. 1 and 23, in some embodiments step 031 may be implemented by the image sensor 10 and steps 032, 033, and 034 by the processing chip 20. That is, the multiple pixels 101 in the two-dimensional pixel array 11 of the image sensor 10 are exposed and output a panchromatic original image and a color original image; the processing chip 20 interpolates the panchromatic original image to obtain the pixel information of all pixels 101 in each sub-unit and thus a panchromatic intermediate image, interpolates the color original image to obtain a color intermediate image whose corresponding sub-units are arranged in a Bayer array, and fuses the two intermediate images to obtain the target image.
Here, the pixel information of a pixel 101 (panchromatic or color) means: (1) when the pixel 101 has only one sub-pixel 102, the sub-pixel information of that sub-pixel 102 is regarded as the pixel information of the pixel 101; (2) when the pixel 101 has two sub-pixels 102, the sum of the sub-pixel information of the two sub-pixels 102 is regarded as the pixel information of the pixel 101.

Specifically, with reference to FIG. 34, one frame of panchromatic original image is output after the multiple panchromatic pixels are exposed, and one frame of color original image after the multiple color pixels are exposed.

The panchromatic original image includes multiple panchromatic pixels W and multiple null pixels N (NULL). A null pixel is neither a panchromatic pixel nor a color pixel; the position of a null pixel N in the panchromatic original image can be regarded as having no pixel, or its pixel information can be regarded as zero. Comparing the two-dimensional pixel array 11 with the panchromatic original image: each sub-unit of the array 11 includes two panchromatic pixels W and two color pixels (color pixel A, B, or C), and the panchromatic original image has a corresponding sub-unit consisting of two panchromatic pixels W and two null pixels N, the two null pixels N occupying the positions of the two color pixels of the corresponding array sub-unit.

Likewise, the color original image includes multiple color pixels and multiple null pixels N, the null pixels' positions again regarded as empty or their pixel information as zero. For each sub-unit of the two-dimensional pixel array 11, which includes two panchromatic pixels W and two color pixels, the color original image has a corresponding sub-unit consisting of two color pixels and two null pixels N, the two null pixels N occupying the positions of the two panchromatic pixels W of the corresponding array sub-unit.
After the processing chip 20 receives the panchromatic original image and the color original image output by the image sensor 10, it can further process the panchromatic original image into a panchromatic intermediate image and the color original image into a color intermediate image.

For example, the panchromatic original image can be transformed into a panchromatic intermediate image in the manner shown in FIG. 35. Specifically, the panchromatic original image includes multiple sub-units, each with two null pixels N and two panchromatic pixels W; the processing chip 20 needs to replace each null pixel N in each sub-unit with a panchromatic pixel W and to compute the pixel information of each replacement panchromatic pixel W at the position of the former null pixel N. For each null pixel N, the processing chip 20 replaces it with a panchromatic pixel W and determines the pixel information of the replacement pixel from the pixel information of the remaining panchromatic pixels W adjacent to it. Taking the null pixel N1,8 of the panchromatic original image in FIG. 35 as an example ("N1,8" being the null pixel in row 1, column 8 counted from the top left, and similarly below): N1,8 is replaced by panchromatic pixel W1,8, whose adjacent pixels in the panchromatic original image are W1,7 and W2,8; as an example, the mean of the pixel information of W1,7 and W2,8 may be taken as the pixel information of W1,8. Taking the null pixel N2,3 as another example: N2,3 is replaced by W2,3, whose adjacent panchromatic pixels are W1,3, W2,2, W2,4, and W3,3; as an example, the processing chip 20 takes the mean of their pixel information as the pixel information of the replacement pixel W2,3 (the neighbour-averaging sketch given earlier applies here as well).

For example, the color original image can be transformed into a color intermediate image in the manner shown in FIG. 36. Specifically, the color original image includes multiple sub-units, each containing two single-color color pixels (i.e., single-color pixels A, B, or C): some sub-units contain two null pixels N and two single-color pixels A, some two null pixels N and two single-color pixels B, and some two null pixels N and two single-color pixels C. The processing chip 20 first determines the specific Bayer arrangement of each sub-unit, for example the ABBC arrangement shown in FIG. 36 (arrangements such as CBBA, BABC, and BCBA are also possible). Taking the top-left sub-unit as an example, the processing chip 20 replaces the null pixel N1,1 of the color original image with color pixel A1,1, the color pixel A1,2 with color pixel B1,2, the color pixel A2,1 with color pixel B2,1, and the null pixel N2,2 with color pixel C2,2, and then computes the pixel information of A1,1, B1,2, B2,1, and C2,2. In this way, the processing chip 20 obtains one frame of color intermediate image.
After obtaining the panchromatic intermediate image and the color intermediate image, the processing chip 20 can fuse them to obtain the target image.

For example, the panchromatic and color intermediate images can be fused in the manner shown in FIG. 37. Specifically, the processing chip 20 first separates the color and brightness of the color intermediate image to obtain a color-brightness separated image, in which L denotes brightness and CLR color. Assuming that single-color pixel A is a red pixel R, B a green pixel G, and C a blue pixel Bu: (1) the processing chip 20 can convert the color intermediate image in RGB space into a color-brightness separated image in YCrCb space, Y in YCrCb being the brightness L and Cr and Cb the color CLR; or (2) it can convert the RGB color intermediate image into a color-brightness separated image in Lab space, L in Lab being the brightness L and a and b the color CLR. It should be noted that L+CLR in the color-brightness separated image of FIG. 37 does not mean that the pixel information of each pixel is formed by adding L and CLR; it only means that it is composed of L and CLR.

The processing chip 20 then fuses the brightness of the color-brightness separated image with that of the panchromatic intermediate image. For example, the pixel information of each panchromatic pixel W of the panchromatic intermediate image is that pixel's brightness information, so the processing chip 20 can add the L of each pixel in the color-brightness separated image to the W of the panchromatic pixel at the corresponding position to obtain the brightness-corrected pixel information. The processing chip 20 forms a brightness-corrected color-brightness separated image from the multiple items of brightness-corrected pixel information and then converts it, by color-space conversion, into a brightness-corrected color image.

The processing chip 20 then interpolates the brightness-corrected color image to obtain the target image, in which the pixel information of every pixel includes the information of the three components A, B, and C (A+B+C in the target image of FIG. 37 meaning that each pixel's information is composed of the three color components; the fusion sketch given earlier applies here as well).

The control method and camera assembly 40 of the embodiments of the present application acquire a high-definition panchromatic original image and color original image with the lens 30 in focus and use the panchromatic original image to correct the brightness of the color original image, so the final target image has both high definition and sufficient brightness; the quality of the target image is better.
Referring to Figures 1, 3 and 38, in some embodiments the plurality of pixels 101 includes a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes minimal repeating units, each minimal repeating unit containing a plurality of sub-units, and each sub-unit including a plurality of single-color pixels and a plurality of panchromatic pixels. Step 03, exposing the plurality of pixels 101 in the two-dimensional pixel array 11 to obtain a target image, includes:
035: exposing the plurality of pixels 101 in the two-dimensional pixel array 11 and outputting a panchromatic raw image and a color raw image;
036: processing the panchromatic raw image, treating all pixels 101 of each sub-unit as a panchromatic large pixel, and outputting the pixel information of the panchromatic large pixels to obtain a panchromatic intermediate image;
037: processing the color raw image, treating all pixels 101 of each sub-unit as a single-color large pixel corresponding to the single color of that sub-unit, and outputting the pixel information of the single-color large pixels to obtain a color intermediate image; and
038: fusing the panchromatic intermediate image and the color intermediate image to obtain the target image.
Referring to Figures 1 and 23, in some embodiments step 035 can be implemented by the image sensor 10, while steps 036, 037, and 038 can all be implemented by the processing chip 20. That is, the plurality of pixels 101 in the two-dimensional pixel array 11 of the image sensor 10 are exposed and output a panchromatic raw image and a color raw image. The processing chip 20 can be used to process the panchromatic raw image, treating all pixels 101 of each sub-unit as a panchromatic large pixel and outputting the pixel information of the panchromatic large pixels to obtain a panchromatic intermediate image. The processing chip 20 can also be used to process the color raw image, treating all pixels 101 of each sub-unit as a single-color large pixel corresponding to the single color of that sub-unit and outputting the pixel information of the single-color large pixels to obtain a color intermediate image, and to fuse the panchromatic intermediate image and the color intermediate image to obtain the target image.
Specifically, referring to Figure 34, the plurality of panchromatic pixels are exposed and output one frame of panchromatic raw image, and the plurality of color pixels are exposed and output one frame of color raw image.
After receiving the panchromatic raw image and the color raw image output by the image sensor 10, the processing chip 20 can further process the panchromatic raw image to obtain the panchromatic intermediate image, and further process the color raw image to obtain the color intermediate image.
For example, the panchromatic raw image can be transformed into the panchromatic intermediate image as shown in Figure 39. As shown in Figure 39, the panchromatic raw image includes a plurality of sub-units, each containing two null pixels N and two panchromatic pixels W. The processing chip 20 can treat all pixels 101 of each sub-unit containing null pixels N and panchromatic pixels W as a panchromatic large pixel W corresponding to that sub-unit, and can thereby form a panchromatic intermediate image from the plurality of panchromatic large pixels W. As an example, the processing chip 20 can do this as follows: it first merges the pixel information of all pixels 101 in each sub-unit to obtain the pixel information of the panchromatic large pixel W, then forms the panchromatic intermediate image from the pixel information of the plurality of panchromatic large pixels W. Specifically, for each panchromatic large pixel, the processing chip 20 can add up all the pixel information in the sub-unit containing null pixels N and panchromatic pixels W and take the sum as the pixel information of the corresponding panchromatic large pixel W, where the pixel information of a null pixel N is regarded as zero. In this way the processing chip 20 obtains the pixel information of the plurality of panchromatic large pixels W.
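Since a null pixel N contributes zero, this merging is just a block sum over each sub-unit. A minimal NumPy sketch (the 2x2 sub-unit size matches Figures 39 and 40; the array layout is an assumption):

```python
import numpy as np

def bin_subunits(raw, sub=2):
    """Sum each sub x sub sub-unit of `raw` (nulls stored as zero) into one
    large-pixel value, as for the panchromatic large pixels W of Figure 39.
    The same routine, applied per color plane, yields the single-color
    large pixels A, B, and C of Figure 40."""
    h, w = raw.shape
    assert h % sub == 0 and w % sub == 0
    return raw.reshape(h // sub, sub, w // sub, sub).sum(axis=(1, 3))
```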
For example, the color raw image can be transformed into the color intermediate image as shown in Figure 40. As shown in Figure 40, the color raw image includes a plurality of sub-units, each containing a plurality of null pixels N and a plurality of single-color pixels: some sub-units contain two null pixels N and two single-color pixels A, some contain two null pixels N and two single-color pixels B, and some contain two null pixels N and two single-color pixels C. The processing chip 20 can treat all pixels of a sub-unit containing null pixels N and single-color pixels A as a single-color large pixel A corresponding to the single color A of that sub-unit, all pixels of a sub-unit containing null pixels N and single-color pixels B as a single-color large pixel B corresponding to the single color B of that sub-unit, and all pixels of a sub-unit containing null pixels N and single-color pixels C as a single-color large pixel C corresponding to the single color C of that sub-unit. The processing chip 20 can thus form a color intermediate image from the plurality of single-color large pixels A, B, and C. As an example, the processing chip 20 can merge the pixel information of all pixels in each sub-unit to obtain the pixel information of the single-color large pixel, and form the color intermediate image from the pixel information of the plurality of single-color large pixels. Specifically, for a single-color large pixel A, the processing chip 20 can add up the pixel information of all pixels in the sub-unit containing null pixels N and single-color pixels A and take the sum as the pixel information of the corresponding single-color large pixel A, where the pixel information of a null pixel N is regarded as zero (likewise below); the processing chip 20 can likewise take the sum over a sub-unit containing null pixels N and single-color pixels B as the pixel information of the corresponding single-color large pixel B, and the sum over a sub-unit containing null pixels N and single-color pixels C as the pixel information of the corresponding single-color large pixel C. In this way the processing chip 20 obtains the pixel information of the plurality of single-color large pixels A, B, and C, from which it forms one frame of color intermediate image.
After the processing chip 20 obtains the panchromatic intermediate image and the color intermediate image, it can fuse the two to obtain the target image.
For example, the panchromatic intermediate image and the color intermediate image can be fused as shown in Figure 41 to obtain the target image. Specifically, the processing chip 20 first separates the color and luminance of the color intermediate image to obtain a luminance-chrominance separated image; in the luminance-chrominance separated image of Figure 41, L denotes luminance and CLR denotes chrominance. Specifically, assuming single-color large pixel A is a red pixel R, single-color large pixel B is a green pixel G, and single-color large pixel C is a blue pixel Bu, then: (1) the processing chip 20 can convert the color intermediate image from RGB space into a luminance-chrominance separated image in YCrCb space, in which case Y of YCrCb is the luminance L and Cr and Cb of YCrCb are the chrominance CLR; (2) the processing chip 20 can also convert the RGB color intermediate image into a luminance-chrominance separated image in Lab space, in which case L of Lab is the luminance L and a and b of Lab are the chrominance CLR. Note that L+CLR in the luminance-chrominance separated image of Figure 41 does not mean that the pixel information of each pixel is the sum of L and CLR; it only indicates that the pixel information consists of L and CLR.
The processing chip 20 then fuses the luminance of the luminance-chrominance separated image with the luminance of the panchromatic intermediate image. For example, the pixel information of each panchromatic large pixel W in the panchromatic intermediate image is that large pixel's luminance information, and the processing chip 20 can add the L of each single-color large pixel in the luminance-chrominance separated image to the W of the panchromatic large pixel at the corresponding position in the panchromatic intermediate image to obtain luminance-corrected pixel information. From the plurality of luminance-corrected pixel information, the processing chip 20 forms a luminance-corrected luminance-chrominance separated image, and then converts it by color-space conversion into a luminance-corrected color image.
The processing chip 20 then interpolates the luminance-corrected color image to obtain the target image, in which the pixel information of every large pixel contains all three components A, B, and C. Note that A+B+C in the target image of Figure 41 indicates that the pixel information of each large pixel consists of the three color components A, B, and C.
Likewise, the control method and camera assembly 40 of the embodiments of the present application capture a panchromatic raw image and a color raw image of relatively high sharpness with the lens 30 in focus, and use the panchromatic raw image to correct the luminance of the color raw image, so that the final target image has both high sharpness and sufficient luminance, i.e., good quality.
The target image obtained by the control method of the embodiment shown in Figure 33 has a higher resolution than the target image obtained by the control method of the embodiment shown in Figure 38. In some embodiments, the processing chip 20 can decide, according to the ambient brightness, which embodiment's control method to use to compute the target image. For example, when the ambient brightness is relatively high (e.g., greater than or equal to a first predetermined brightness), the control method of the embodiment of Figure 33 is used, which yields a target image with higher resolution and good luminance; when the ambient brightness is relatively low, the control method of the embodiment of Figure 38 is used, so that the target image has sufficient luminance.
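A sketch of this brightness-based switch; the threshold variable and the pipeline names are hypothetical stand-ins for the two embodiments:

```python
def choose_pipeline(ambient_brightness, first_predetermined_brightness):
    """Pick the Figure 33 pipeline (interpolation, full resolution) in
    bright scenes and the Figure 38 pipeline (binning, large pixels,
    more light per output pixel) in dim scenes."""
    if ambient_brightness >= first_predetermined_brightness:
        return "interpolation_pipeline"   # Figure 33
    return "binning_pipeline"             # Figure 38
```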
While the plurality of pixels 101 in the two-dimensional pixel array 11 are exposed to output the panchromatic raw image and the color raw image, the first exposure time of the panchromatic pixels can be controlled by a first exposure control line and the second exposure time of the color pixels by a second exposure control line, so that when the ambient brightness is relatively high (e.g., greater than or equal to the first predetermined brightness), the first exposure time can be set shorter than the second exposure time. This prevents the panchromatic pixels from over-saturating, which would otherwise make it impossible to use the panchromatic raw image to correct the luminance of the color raw image.
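A sketch of this two-line exposure control; the 1:2 default ratio is one of the ratios the claims mention (1:2, 1:3, 1:4), and the variable names are assumptions:

```python
def set_exposure_times(base_exposure, bright_scene, ratio=0.5):
    """Return (panchromatic_exposure, color_exposure). In bright scenes the
    first (panchromatic) exposure is shortened relative to the second
    (color) exposure to keep the panchromatic pixels from saturating."""
    color_exposure = base_exposure
    pan_exposure = base_exposure * ratio if bright_scene else base_exposure
    return pan_exposure, color_exposure
```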
Referring to Figure 42, the present application further provides a mobile terminal 90. The mobile terminal 90 of the embodiments of the present application may be a mobile phone, a tablet computer, a laptop computer, a smart wearable device (such as a smart watch, smart band, smart glasses, or smart helmet), a head-mounted display device, a virtual reality device, and so on, without limitation. The mobile terminal 90 of the embodiments of the present application includes the image sensor 10, a processor 60, a memory 70, and a housing 80. The image sensor 10, the processor 60, and the memory 70 are all mounted in the housing 80, with the image sensor 10 connected to the processor 60. The processor 60 can perform the same functions as the processing chip 20 in the camera assembly 40 (shown in Figure 23); in other words, the processor 60 can implement the functions of the processing chip 20 described in any of the above embodiments. The memory 70 is connected to the processor 60 and can store data obtained after processing by the processor 60, such as the target image. The processor 60 may be mounted on the same substrate as the image sensor 10, in which case the image sensor 10 and the processor 60 can be regarded as a camera assembly 40; alternatively, the processor 60 may be mounted on a different substrate from the image sensor 10.
The mobile terminal 90 of the embodiments of the present application uses an image sensor 10 that can obtain phase information in both the horizontal and the vertical direction, so that the mobile terminal 90 can be used both in scenes containing many solid-color horizontal stripes and in scenes containing many solid-color vertical stripes, which improves the scene adaptability of the mobile terminal 90 and the accuracy of its phase-detection autofocus.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (54)

  1. An image sensor, comprising:
    a two-dimensional pixel array comprising a plurality of pixels, at least some of the pixels each comprising two sub-pixels, wherein, taking the center point of each pixel as an origin, the direction parallel to a length direction of the two-dimensional pixel array as an X axis, and a width direction as a Y axis to establish a rectangular coordinate system, the two sub-pixels are distributed on both the positive and negative half-axes of the X axis, and the two sub-pixels are distributed on both the positive and negative half-axes of the Y axis; and
    a lens array comprising a plurality of lenses, each lens covering one of the pixels.
  2. The image sensor according to claim 1, wherein the cross-section of one of the sub-pixels is a trapezoid that is wider at the top and narrower at the bottom, and the cross-section of the other sub-pixel is a trapezoid that is narrower at the top and wider at the bottom.
  3. The image sensor according to claim 1, wherein the cross-sections of both sub-pixels are triangular.
  4. The image sensor according to claim 1, wherein the cross-section of one of the sub-pixels is an inverted "L" shape, and the cross-section of the other sub-pixel is a mirrored "L" shape.
  5. The image sensor according to any one of claims 1-4, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels.
  6. The image sensor according to claim 5, wherein the two-dimensional pixel array comprises minimal repeating units in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction, the first diagonal direction being different from the second diagonal direction.
  7. The image sensor according to claim 6, wherein a first exposure time of at least two of the panchromatic pixels adjacent in the first diagonal direction is controlled by a first exposure signal, and a second exposure time of at least two of the color pixels adjacent in the second diagonal direction is controlled by a second exposure signal.
  8. The image sensor according to claim 7, wherein the first exposure time is shorter than the second exposure time.
  9. The image sensor according to claim 8, wherein the ratio of the first exposure time to the second exposure time is one of 1:2, 1:3, or 1:4.
  10. The image sensor according to claim 6, wherein the minimal repeating unit comprises 16 pixels in 4 rows and 4 columns, arranged as:
    Figure PCTCN2019121697-appb-100001
    where W denotes one of the panchromatic pixels;
    A denotes a first color pixel of the plurality of color pixels;
    B denotes a second color pixel of the plurality of color pixels;
    C denotes a third color pixel of the plurality of color pixels.
  11. The image sensor according to claim 6, wherein the minimal repeating unit comprises 16 pixels in 4 rows and 4 columns, arranged as:
    Figure PCTCN2019121697-appb-100002
    where W denotes one of the panchromatic pixels;
    A denotes a first color pixel of the plurality of color pixels;
    B denotes a second color pixel of the plurality of color pixels;
    C denotes a third color pixel of the plurality of color pixels.
  12. The image sensor according to claim 10 or 11, wherein:
    the first color pixel A is a red pixel R;
    the second color pixel B is a green pixel G; and
    the third color pixel C is a blue pixel Bu.
  13. The image sensor according to claim 10 or 11, wherein:
    the first color pixel A is a red pixel R;
    the second color pixel B is a yellow pixel Y; and
    the third color pixel C is a blue pixel Bu.
  14. The image sensor according to claim 10 or 11, wherein:
    the first color pixel A is a magenta pixel M;
    the second color pixel B is a cyan pixel Cy; and
    the third color pixel C is a yellow pixel Y.
  15. The image sensor according to any one of claims 6, 10, and 11, wherein the response band of the panchromatic pixels is the visible band.
  16. The image sensor according to any one of claims 6, 10, and 11, wherein the response band of the panchromatic pixels covers the visible band and the near-infrared band, matching the response band of the photoelectric conversion elements in the image sensor.
  17. A control method for an image sensor, wherein the image sensor comprises a two-dimensional pixel array and a lens array, the two-dimensional pixel array comprising a plurality of pixels, at least some of the pixels each comprising two sub-pixels; taking the center point of each pixel as an origin, the direction parallel to a length direction of the two-dimensional pixel array as an X axis, and a width direction as a Y axis to establish a rectangular coordinate system, the two sub-pixels are distributed on both the positive and negative half-axes of the X axis, and the two sub-pixels are distributed on both the positive and negative half-axes of the Y axis; the lens array comprises a plurality of lenses, each lens covering one of the pixels; the control method comprising:
    exposing the sub-pixels to output sub-pixel information;
    calculating a phase difference from the sub-pixel information for focusing; and
    in an in-focus state, exposing the plurality of pixels in the two-dimensional pixel array to obtain a target image.
  18. The control method according to claim 17, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the control method further comprising:
    obtaining an ambient brightness;
    wherein exposing the sub-pixels to output sub-pixel information comprises:
    when the ambient brightness is less than a first predetermined brightness, exposing the sub-pixels of the panchromatic pixels to output panchromatic sub-pixel information;
    and calculating a phase difference from the sub-pixel information for focusing comprises:
    calculating the phase difference from the panchromatic sub-pixel information for focusing.
  19. The control method according to claim 17, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the control method further comprising:
    obtaining an ambient brightness;
    wherein exposing the sub-pixels to output sub-pixel information comprises:
    when the ambient brightness is greater than a second predetermined brightness, exposing the sub-pixels of the color pixels to output color sub-pixel information;
    and calculating a phase difference from the sub-pixel information for focusing comprises:
    calculating the phase difference from the color sub-pixel information for focusing.
  20. The control method according to claim 17, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the control method further comprising:
    obtaining an ambient brightness;
    wherein exposing the sub-pixels to output sub-pixel information comprises:
    when the ambient brightness is greater than a first predetermined brightness and less than a second predetermined brightness, exposing the sub-pixels of the panchromatic pixels to output panchromatic sub-pixel information and exposing the sub-pixels of the color pixels to output color sub-pixel information;
    and calculating a phase difference from the sub-pixel information for focusing comprises:
    calculating the phase difference from at least one of the panchromatic sub-pixel information and the color sub-pixel information for focusing.
  21. The control method according to claim 18 or 20, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; calculating the phase difference from the panchromatic sub-pixel information for focusing comprises:
    forming a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    forming a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs; and
    calculating the phase difference from the first curve and the second curve for focusing.
  22. The control method according to claim 18 or 20, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; calculating the phase difference from the panchromatic sub-pixel information for focusing comprises:
    calculating third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculating fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    forming a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    forming a second curve from the plurality of pieces of fourth panchromatic sub-pixel information; and
    calculating the phase difference from the first curve and the second curve for focusing.
  23. The control method according to claim 19 or 20, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; calculating the phase difference from the color sub-pixel information for focusing comprises:
    forming a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    forming a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculating the phase difference from the third curve and the fourth curve for focusing.
  24. The control method according to claim 19 or 20, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; calculating the phase difference from the color sub-pixel information for focusing comprises:
    calculating third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculating fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    forming a third curve from the plurality of pieces of third color sub-pixel information;
    forming a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculating the phase difference from the third curve and the fourth curve for focusing.
  25. The control method according to claim 20, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; calculating the phase difference from the panchromatic sub-pixel information and the color sub-pixel information for focusing comprises:
    forming a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    forming a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs;
    forming a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    forming a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculating the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  26. The control method according to claim 20, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; calculating the phase difference from the panchromatic sub-pixel information and the color sub-pixel information for focusing comprises:
    calculating third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculating fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculating third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculating fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    forming a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    forming a second curve from the plurality of pieces of fourth panchromatic sub-pixel information;
    forming a third curve from the plurality of pieces of third color sub-pixel information;
    forming a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculating the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  27. The control method according to claim 17, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; exposing the plurality of pixels in the two-dimensional pixel array to obtain a target image comprises:
    exposing the plurality of pixels in the two-dimensional pixel array and outputting a panchromatic raw image and a color raw image;
    interpolating the panchromatic raw image, obtaining the pixel information of all pixels in each sub-unit to obtain a panchromatic intermediate image;
    interpolating the color raw image to obtain a color intermediate image in which the corresponding sub-units are arranged in a Bayer pattern; and
    fusing the panchromatic intermediate image and the color intermediate image to obtain the target image.
  28. The control method according to claim 17, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; exposing the plurality of pixels in the two-dimensional pixel array to obtain a target image comprises:
    exposing the plurality of pixels in the two-dimensional pixel array and outputting a panchromatic raw image and a color raw image;
    processing the panchromatic raw image, treating all the pixels of each sub-unit as a panchromatic large pixel, and outputting the pixel information of the panchromatic large pixels to obtain a panchromatic intermediate image;
    processing the color raw image, treating all the pixels of each sub-unit as a single-color large pixel corresponding to the single color of that sub-unit, and outputting the pixel information of the single-color large pixels to obtain a color intermediate image; and
    fusing the panchromatic intermediate image and the color intermediate image to obtain the target image.
  29. A camera assembly, comprising:
    a lens; and
    the image sensor according to any one of claims 1-16, the image sensor being capable of receiving light passing through the lens.
  30. The camera assembly according to claim 29, wherein the sub-pixels are exposed to output sub-pixel information;
    the camera assembly further comprises a processing chip configured to calculate a phase difference from the sub-pixel information for focusing; and
    in an in-focus state, the plurality of pixels in the two-dimensional pixel array are exposed to obtain a target image.
  31. The camera assembly according to claim 30, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processing chip is further configured to obtain an ambient brightness;
    when the ambient brightness is less than a first predetermined brightness, the sub-pixels of the panchromatic pixels are exposed to output panchromatic sub-pixel information; and
    the processing chip is further configured to calculate the phase difference from the panchromatic sub-pixel information for focusing.
  32. The camera assembly according to claim 30, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processing chip is further configured to obtain an ambient brightness;
    when the ambient brightness is greater than a second predetermined brightness, the sub-pixels of the color pixels are exposed to output color sub-pixel information; and
    the processing chip is further configured to calculate the phase difference from the color sub-pixel information for focusing.
  33. The camera assembly according to claim 30, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processing chip is further configured to obtain an ambient brightness;
    when the ambient brightness is greater than a first predetermined brightness and less than a second predetermined brightness, the sub-pixels of the panchromatic pixels are exposed to output panchromatic sub-pixel information and the sub-pixels of the color pixels are exposed to output color sub-pixel information; and
    the processing chip is further configured to calculate the phase difference from at least one of the panchromatic sub-pixel information and the color sub-pixel information for focusing.
  34. The camera assembly according to claim 31 or 33, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; the processing chip is further configured to:
    form a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    form a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs; and
    calculate the phase difference from the first curve and the second curve for focusing.
  35. The camera assembly according to claim 31 or 33, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; the processing chip is further configured to:
    calculate third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    form a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    form a second curve from the plurality of pieces of fourth panchromatic sub-pixel information; and
    calculate the phase difference from the first curve and the second curve for focusing.
  36. The camera assembly according to claim 32 or 33, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; the processing chip is further configured to:
    form a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    form a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculate the phase difference from the third curve and the fourth curve for focusing.
  37. The camera assembly according to claim 32 or 33, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; the processing chip is further configured to:
    calculate third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculate fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    form a third curve from the plurality of pieces of third color sub-pixel information;
    form a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculate the phase difference from the third curve and the fourth curve for focusing.
  38. The camera assembly according to claim 33, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; the processing chip is further configured to:
    form a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    form a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs;
    form a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    form a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculate the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  39. The camera assembly according to claim 33, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; the processing chip is further configured to:
    calculate third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculate fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    form a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    form a second curve from the plurality of pieces of fourth panchromatic sub-pixel information;
    form a third curve from the plurality of pieces of third color sub-pixel information;
    form a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculate the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  40. The camera assembly according to claim 30, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; the plurality of pixels in the two-dimensional pixel array are exposed and output a panchromatic raw image and a color raw image;
    the processing chip is further configured to:
    interpolate the panchromatic raw image, obtaining the pixel information of all pixels in each sub-unit to obtain a panchromatic intermediate image;
    interpolate the color raw image to obtain a color intermediate image in which the corresponding sub-units are arranged in a Bayer pattern; and
    fuse the panchromatic intermediate image and the color intermediate image to obtain the target image.
  41. The camera assembly according to claim 30, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; the plurality of pixels in the two-dimensional pixel array are exposed and output a panchromatic raw image and a color raw image;
    the processing chip is further configured to:
    process the panchromatic raw image, treating all the pixels of each sub-unit as a panchromatic large pixel, and output the pixel information of the panchromatic large pixels to obtain a panchromatic intermediate image;
    process the color raw image, treating all the pixels of each sub-unit as a single-color large pixel corresponding to the single color of that sub-unit, and output the pixel information of the single-color large pixels to obtain a color intermediate image; and
    fuse the panchromatic intermediate image and the color intermediate image to obtain the target image.
  42. A mobile terminal, comprising:
    a housing; and
    the image sensor according to any one of claims 1-16, the image sensor being mounted in the housing.
  43. The mobile terminal according to claim 42, wherein the sub-pixels are exposed to output sub-pixel information;
    the mobile terminal further comprises a processor configured to calculate a phase difference from the sub-pixel information for focusing; and
    in an in-focus state, the plurality of pixels in the two-dimensional pixel array are exposed to obtain a target image.
  44. The mobile terminal according to claim 43, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processor is further configured to obtain an ambient brightness;
    when the ambient brightness is less than a first predetermined brightness, the sub-pixels of the panchromatic pixels are exposed to output panchromatic sub-pixel information; and
    the processor is further configured to calculate the phase difference from the panchromatic sub-pixel information for focusing.
  45. The mobile terminal according to claim 43, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processor is further configured to obtain an ambient brightness;
    when the ambient brightness is greater than a second predetermined brightness, the sub-pixels of the color pixels are exposed to output color sub-pixel information; and
    the processor is further configured to calculate the phase difference from the color sub-pixel information for focusing.
  46. The mobile terminal according to claim 43, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and each of the panchromatic pixels and the color pixels comprises two of the sub-pixels; the processor is further configured to obtain an ambient brightness;
    when the ambient brightness is greater than a first predetermined brightness and less than a second predetermined brightness, the sub-pixels of the panchromatic pixels are exposed to output panchromatic sub-pixel information and the sub-pixels of the color pixels are exposed to output color sub-pixel information; and
    the processor is further configured to calculate the phase difference from at least one of the panchromatic sub-pixel information and the color sub-pixel information for focusing.
  47. The mobile terminal according to claim 44 or 46, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; the processor is further configured to:
    form a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    form a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs; and
    calculate the phase difference from the first curve and the second curve for focusing.
  48. The mobile terminal according to claim 44 or 46, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair; the processor is further configured to:
    calculate third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    form a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    form a second curve from the plurality of pieces of fourth panchromatic sub-pixel information; and
    calculate the phase difference from the first curve and the second curve for focusing.
  49. The mobile terminal according to claim 45 or 46, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; the processor is further configured to:
    form a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    form a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculate the phase difference from the third curve and the fourth curve for focusing.
  50. The mobile terminal according to claim 45 or 46, wherein each of the color pixels comprises two color sub-pixels, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at a first orientation of the lens and the color sub-pixel located at a second orientation of the lens, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; the processor is further configured to:
    calculate third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculate fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    form a third curve from the plurality of pieces of third color sub-pixel information;
    form a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculate the phase difference from the third curve and the fourth curve for focusing.
  51. The mobile terminal according to claim 46, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, one piece of the first panchromatic sub-pixel information and the corresponding piece of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and one piece of the first color sub-pixel information and the corresponding piece of the second color sub-pixel information form one color sub-pixel information pair; the processor is further configured to:
    form a first curve from the first panchromatic sub-pixel information of a plurality of the panchromatic sub-pixel information pairs;
    form a second curve from the second panchromatic sub-pixel information of the plurality of panchromatic sub-pixel information pairs;
    form a third curve from the first color sub-pixel information of a plurality of the color sub-pixel information pairs;
    form a fourth curve from the second color sub-pixel information of the plurality of color sub-pixel information pairs; and
    calculate the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  52. The mobile terminal according to claim 46, wherein each of the panchromatic pixels comprises two panchromatic sub-pixels, each of the color pixels comprises two color sub-pixels, the panchromatic sub-pixel information comprises first panchromatic sub-pixel information and second panchromatic sub-pixel information, the color sub-pixel information comprises first color sub-pixel information and second color sub-pixel information, the first panchromatic sub-pixel information and the second panchromatic sub-pixel information are output respectively by the panchromatic sub-pixel located at a first orientation of the lens and the panchromatic sub-pixel located at a second orientation of the lens, the first color sub-pixel information and the second color sub-pixel information are output respectively by the color sub-pixel located at the first orientation of the lens and the color sub-pixel located at the second orientation of the lens, a plurality of pieces of the first panchromatic sub-pixel information and the corresponding plurality of pieces of the second panchromatic sub-pixel information form one panchromatic sub-pixel information pair, and a plurality of pieces of the first color sub-pixel information and the corresponding plurality of pieces of the second color sub-pixel information form one color sub-pixel information pair; the processor is further configured to:
    calculate third panchromatic sub-pixel information from the plurality of pieces of first panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate fourth panchromatic sub-pixel information from the plurality of pieces of second panchromatic sub-pixel information in each panchromatic sub-pixel information pair;
    calculate third color sub-pixel information from the plurality of pieces of first color sub-pixel information in each color sub-pixel information pair;
    calculate fourth color sub-pixel information from the plurality of pieces of second color sub-pixel information in each color sub-pixel information pair;
    form a first curve from the plurality of pieces of third panchromatic sub-pixel information;
    form a second curve from the plurality of pieces of fourth panchromatic sub-pixel information;
    form a third curve from the plurality of pieces of third color sub-pixel information;
    form a fourth curve from the plurality of pieces of fourth color sub-pixel information; and
    calculate the phase difference from the first curve, the second curve, the third curve, and the fourth curve for focusing.
  53. The mobile terminal according to claim 43, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; the plurality of pixels in the two-dimensional pixel array are exposed and output a panchromatic raw image and a color raw image;
    the processor is further configured to:
    interpolate the panchromatic raw image, obtaining the pixel information of all pixels in each sub-unit to obtain a panchromatic intermediate image;
    interpolate the color raw image to obtain a color intermediate image in which the corresponding sub-units are arranged in a Bayer pattern; and
    fuse the panchromatic intermediate image and the color intermediate image to obtain the target image.
  54. The mobile terminal according to claim 43, wherein the plurality of pixels comprises a plurality of panchromatic pixels and a plurality of color pixels, the color pixels having a narrower spectral response than the panchromatic pixels, the two-dimensional pixel array comprises minimal repeating units, each minimal repeating unit contains a plurality of sub-units, and each sub-unit comprises a plurality of single-color pixels and a plurality of the panchromatic pixels; the plurality of pixels in the two-dimensional pixel array are exposed and output a panchromatic raw image and a color raw image;
    the processor is further configured to:
    process the panchromatic raw image, treating all the pixels of each sub-unit as a panchromatic large pixel, and output the pixel information of the panchromatic large pixels to obtain a panchromatic intermediate image;
    process the color raw image, treating all the pixels of each sub-unit as a single-color large pixel corresponding to the single color of that sub-unit, and output the pixel information of the single-color large pixels to obtain a color intermediate image; and
    fuse the panchromatic intermediate image and the color intermediate image to obtain the target image.
PCT/CN2019/121697 2019-11-28 2019-11-28 Image sensor, control method, camera assembly, and mobile terminal WO2021102832A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/121697 WO2021102832A1 (zh) 2019-11-28 2019-11-28 Image sensor, control method, camera assembly, and mobile terminal
CN201980100683.9A CN114424517B (zh) 2019-11-28 2019-11-28 Image sensor, control method, camera assembly, and mobile terminal
US17/747,907 US11696041B2 (en) 2019-11-28 2022-05-18 Image sensor, control method, camera component and mobile terminal with raised event adaptability and phase detection auto focus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121697 WO2021102832A1 (zh) 2019-11-28 2019-11-28 Image sensor, control method, camera assembly, and mobile terminal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/747,907 Continuation US11696041B2 (en) 2019-11-28 2022-05-18 Image sensor, control method, camera component and mobile terminal with raised event adaptability and phase detection auto focus

Publications (1)

Publication Number Publication Date
WO2021102832A1 true WO2021102832A1 (zh) 2021-06-03

Family

ID=76129048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121697 WO2021102832A1 (zh) 2019-11-28 2019-11-28 图像传感器、控制方法、摄像头组件及移动终端

Country Status (3)

Country Link
US (1) US11696041B2 (zh)
CN (1) CN114424517B (zh)
WO (1) WO2021102832A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120293706A1 (en) * 2011-05-16 2012-11-22 Samsung Electronics Co., Ltd. Image pickup device, digital photographing apparatus using the image pickup device, auto-focusing method, and computer-readable medium for performing the auto-focusing method
CN107146797A (zh) * 2017-04-28 2017-09-08 广东欧珀移动通信有限公司 Dual-core focusing image sensor, focusing control method therefor, and imaging device
CN208062054U (zh) * 2018-05-09 2018-11-06 上海和辉光电有限公司 Pixel array and display device
CN109922270A (zh) * 2019-04-17 2019-06-21 德淮半导体有限公司 Phase-focusing image sensor chip

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6149369B2 (ja) * 2012-09-27 2017-06-21 株式会社ニコン Imaging element
US9888198B2 (en) * 2014-06-03 2018-02-06 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with sub-pixel resolution capabilities
US9794468B2 (en) * 2014-12-02 2017-10-17 Canon Kabushiki Kaisha Image sensor, image capturing apparatus, focus detection apparatus, image processing apparatus, and control method of image capturing apparatus using pupil division in different directions
US9749556B2 (en) * 2015-03-24 2017-08-29 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with phase detection capabilities
KR102348760B1 (ko) * 2015-07-24 2022-01-07 삼성전자주식회사 Image sensor and signal processing method therefor
CN107395990A (zh) 2017-08-31 2017-11-24 珠海市魅族科技有限公司 Phase focusing method and apparatus, terminal, computer device, and readable storage medium
US10313579B2 (en) 2017-08-31 2019-06-04 Qualcomm Incorporated Dual phase detection auto focus camera sensor data processing
JP2019080141A (ja) 2017-10-24 2019-05-23 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device and electronic apparatus
US10419664B2 (en) 2017-12-28 2019-09-17 Semiconductor Components Industries, Llc Image sensors with phase detection pixels and a variable aperture
CN110248095B (zh) * 2019-06-26 2021-01-15 Oppo广东移动通信有限公司 Focusing device, focusing method, and storage medium
US11527569B2 (en) * 2020-05-18 2022-12-13 Omnivision Technologies, Inc. High dynamic range split pixel CMOS image sensor with low color crosstalk

Also Published As

Publication number Publication date
US20220279150A1 (en) 2022-09-01
CN114424517A (zh) 2022-04-29
CN114424517B (zh) 2024-03-01
US11696041B2 (en) 2023-07-04

Similar Documents

Publication Publication Date Title
WO2021223590A1 (zh) Image sensor, control method, camera assembly, and mobile terminal
CN110649056B (zh) Image sensor, camera assembly, and mobile terminal
WO2021196553A1 (zh) High-dynamic-range image processing system and method, electronic device, and readable storage medium
WO2021179806A1 (zh) Image acquisition method, imaging device, electronic device, and readable storage medium
US11812164B2 (en) Pixel-interpolation based image acquisition method, camera assembly, and mobile terminal
CN110740272B (zh) Image capture method, camera assembly, and mobile terminal
CN110649057B (zh) Image sensor, camera assembly, and mobile terminal
WO2021233040A1 (zh) Control method, camera assembly, and mobile terminal
WO2021179805A1 (zh) Image sensor, camera assembly, mobile terminal, and image acquisition method
WO2021233039A1 (zh) Control method, camera assembly, and mobile terminal
WO2021103818A1 (zh) Image sensor, control method, camera assembly, and mobile terminal
CN110784634B (zh) Image sensor, control method, camera assembly, and mobile terminal
US20220336508A1 (en) Image sensor, camera assembly and mobile terminal
WO2022007215A1 (zh) Image acquisition method, camera assembly, and mobile terminal
CN112738493B (zh) Image processing method, image processing apparatus, electronic device, and readable storage medium
WO2021062661A1 (zh) Image sensor, camera assembly, and mobile terminal
US20220150450A1 (en) Image capturing method, camera assembly, and mobile terminal
US20220139974A1 (en) Image sensor, camera assembly, and mobile terminal
CN112822475B (zh) Image processing method, image processing apparatus, terminal, and readable storage medium
WO2021102832A1 (zh) Image sensor, control method, camera assembly, and mobile terminal
US20220279108A1 (en) Image sensor and mobile terminal
WO2022141743A1 (zh) Image processing method, image processing system, electronic device, and readable storage medium
CN112235485A (zh) Image sensor, image processing method, imaging device, terminal, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953942

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953942

Country of ref document: EP

Kind code of ref document: A1