CN111586323A - Image sensor, control method, camera assembly and mobile terminal - Google Patents

Image sensor, control method, camera assembly and mobile terminal

Info

Publication number
CN111586323A
CN111586323A
Authority
CN
China
Prior art keywords
pixel
pixels
sub
color
panchromatic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010377218.4A
Other languages
Chinese (zh)
Inventor
杨鑫 (Yang Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority: CN202010377218.4A
Publication: CN111586323A
Related PCT application: PCT/CN2021/088404 (published as WO2021223590A1)
Legal status: Pending

Classifications

    • H04N 25/133: Arrangement of colour filter arrays [CFA]; filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
    • H04N 25/71: SSIS architectures; charge-coupled device [CCD] sensors; charge-transfer registers specially adapted for CCD sensors
    • H01L 27/14603: Imager structures; special geometry or disposition of pixel-elements, address-lines or gate-electrodes
    • H01L 27/14625: Imager structures; optical elements or arrangements associated with the device
    • H04N 23/67: Control of cameras or camera modules; focus control based on electronic image sensor signals
    • H04N 25/44: Extracting pixel data from image sensors by controlling scanning circuits; partially reading an SSIS array
    • H04N 25/46: Extracting pixel data from image sensors by combining or binning pixels
    • H04N 25/704: SSIS architectures incorporating pixels for producing signals other than image signals; pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H04N 25/76: Addressed sensors, e.g. MOS or CMOS sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Electromagnetism (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Solid State Image Pick-Up Elements (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application discloses an image sensor, a control method, a camera assembly and a mobile terminal. The image sensor includes a two-dimensional pixel array and a lens array. The two-dimensional pixel array includes a plurality of color pixels and a plurality of panchromatic pixels. Color pixels have a narrower spectral response than panchromatic pixels. The two-dimensional pixel array includes a minimum repeating unit. In the minimum repeating unit, the panchromatic pixels are arranged in a first diagonal direction, and the color pixels are arranged in a second diagonal direction. Each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each lens covering one pixel. In the embodiments of the application, because the two-dimensional pixel array includes both color pixels and panchromatic pixels, the light throughput is increased compared with an ordinary color sensor, the signal-to-noise ratio is better, the focusing performance in dim light is better, and the sensitivity of the sub-pixels is higher. Each pixel comprises at least two sub-pixels, so the resolution of the image sensor can be improved while phase focusing is achieved.

Description

Image sensor, control method, camera assembly and mobile terminal
Technical Field
The present application relates to the field of imaging technologies, and in particular, to an image sensor, a control method, a camera assembly, and a mobile terminal.
Background
With the development of electronic technology, terminals with a camera function have become pervasive in daily life. Currently, the focusing methods adopted for mobile phone photography mainly include contrast focusing and phase detection auto focus (PDAF). Contrast focusing is accurate but slow. Phase focusing is fast, but the phase-focusing products on the market are implemented on color sensors (Bayer sensors), and their focusing performance in dark environments is not good enough.
Disclosure of Invention
The embodiment of the application provides an image sensor, a control method, a camera assembly and a mobile terminal.
The image sensor of the embodiment of the application comprises a two-dimensional pixel array and a lens array. The two-dimensional array of pixels includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each of the lenses covering one of the pixels.
The control method of the embodiment of the application is used for the image sensor. The image sensor includes a two-dimensional array of pixels and an array of lenses. The two-dimensional array of pixels includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each of the lenses covering one of the pixels. The control method comprises the following steps: controlling the at least two sub-pixels to be exposed so as to output at least two pieces of sub-pixel information; determining phase information according to the at least two pieces of sub-pixel information to carry out focusing; and, in the in-focus state, controlling the two-dimensional pixel array to be exposed so as to acquire a target image.
The camera assembly of the embodiment of the application comprises a lens and an image sensor. The image sensor can receive light rays passing through the lens. The image sensor includes a two-dimensional array of pixels and an array of lenses. The two-dimensional array of pixels includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each of the lenses covering one of the pixels.
The mobile terminal of the embodiment of the application comprises a housing and a camera assembly. The camera assembly is mounted on the housing. The camera assembly comprises a lens and an image sensor. The image sensor can receive light rays passing through the lens. The image sensor includes a two-dimensional array of pixels and an array of lenses. The two-dimensional array of pixels includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each of the lenses covering one of the pixels.
The mobile terminal of the embodiment of the application comprises a housing and a camera assembly. The camera assembly is mounted on the housing. The camera assembly comprises a lens and an image sensor. The image sensor can receive light rays passing through the lens. The image sensor includes a two-dimensional array of pixels and an array of lenses. The two-dimensional array of pixels includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are arranged in a first diagonal direction and the color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel comprises at least two sub-pixels. The lens array includes a plurality of lenses, each of the lenses covering one of the pixels. The mobile terminal further comprises a processor, and the processor is used for realizing the control method: controlling the at least two sub-pixels to be exposed so as to output at least two pieces of sub-pixel information; determining phase information according to the at least two pieces of sub-pixel information to carry out focusing; and, in the in-focus state, controlling the two-dimensional pixel array to be exposed so as to acquire a target image.
In the image sensor, the control method, the camera assembly and the mobile terminal of the embodiment of the application, the two-dimensional pixel array comprises a plurality of color pixels and a plurality of panchromatic pixels, compared with a common color sensor, the light transmission quantity is increased, the signal to noise ratio is better, the focusing performance is better under dark light, and the sensitivity of the sub-pixels is higher. In addition, each pixel comprises at least two sub-pixels, and the resolution of the image sensor can be improved while phase focusing is realized.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of an image sensor of certain embodiments of the present application;
FIGS. 2-8 are schematic illustrations of the distribution of sub-pixels according to certain embodiments of the present disclosure;
FIG. 9 is a schematic diagram of a pixel circuit according to some embodiments of the present application;
FIG. 10 is a schematic illustration of different color channel exposure saturation times;
FIGS. 11-20 are schematic diagrams of pixel arrangements and lens coverage of the minimal repeating unit according to some embodiments of the present disclosure;
FIG. 21 is a schematic flow chart diagram of a control method according to certain embodiments of the present application;
FIG. 22 is a schematic view of a camera head assembly according to certain embodiments of the present application;
FIG. 23 is a schematic illustration of the principle of a control method according to certain embodiments of the present application;
FIG. 24 is a schematic flow chart diagram of a control method according to certain embodiments of the present application;
FIGS. 25-27 are schematic diagrams of control methods according to certain embodiments of the present application;
fig. 28 is a schematic diagram of a mobile terminal according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application.
Referring to fig. 1, an image sensor 10 is provided. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes a minimum repeating unit in which panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel 101 comprises at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101.
Referring to fig. 1 and 21, the present application further provides a control method. The control method is used for the image sensor 10. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes a minimum repeating unit in which panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel 101 comprises at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101. The control method comprises the following steps:
01: controlling the at least two sub-pixels 102 to be exposed so as to output at least two pieces of sub-pixel information;
02: determining phase information according to the at least two pieces of sub-pixel information to carry out focusing;
03: in the in-focus state, the two-dimensional pixel array 11 is controlled to be exposed to acquire a target image.
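For a concrete sense of steps 01 to 03, the following is a minimal, self-contained sketch in Python. Everything in it is an illustrative assumption rather than the disclosed implementation: the FakeSensor stand-in, the sum-of-absolute-differences search in estimate_phase, and the modelling of defocus as a column shift between the two sub-pixel planes.

```python
import numpy as np

def estimate_phase(a, b, max_shift=8):
    # Sum-of-absolute-differences search over column shifts; the shift
    # that best aligns the two sub-pixel planes is the phase difference.
    sad = [np.abs(a - np.roll(b, s, axis=1)).mean()
           for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(sad)) - max_shift

class FakeSensor:
    # Stand-in for image sensor 10: returns two sub-pixel planes whose
    # mutual shift shrinks as the simulated lens approaches focus.
    def __init__(self):
        self.defocus = 5                       # simulated defocus, in columns
        self.scene = np.random.default_rng(0).random((64, 64))

    def read_subpixel_planes(self):            # step 01
        return self.scene, np.roll(self.scene, self.defocus, axis=1)

    def read_full_array(self):                 # step 03
        return self.scene

sensor = FakeSensor()
while True:
    a, b = sensor.read_subpixel_planes()       # 01: expose the sub-pixels
    phase = estimate_phase(a, b)               # 02: determine phase information
    if phase == 0:                             # in-focus state reached
        break
    sensor.defocus += phase                    # drive the "lens" toward focus
target_image = sensor.read_full_array()        # 03: expose the full pixel array
```

In this toy loop the phase estimate converges in one step because the shift model is exact; on a real sensor the phase would be mapped to a lens displacement and re-measured.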
Referring to fig. 1 and 22, the present application further provides a camera assembly 40. The camera assembly 40 includes a lens 30 and an image sensor 10. The image sensor 10 can receive light passing through the lens 30. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes a minimum repeating unit in which panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel 101 comprises at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101.
Referring to fig. 1, 22 and 28, the present application further provides a mobile terminal 90. The mobile terminal 90 includes a housing 80 and a camera assembly 40. The camera assembly 40 is mounted on the chassis 80. The camera assembly 40 includes a lens 30 and an image sensor 10. The image sensor 10 can receive light passing through the lens 30. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes a minimum repeating unit in which panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel 101 comprises at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101.
Referring to fig. 1, 21, 22 and 28, the present application further provides a mobile terminal 90. The mobile terminal 90 includes a housing 80 and a camera assembly 40. The camera assembly 40 is mounted on the chassis 80. The camera assembly 40 includes a lens 30 and an image sensor 10. The image sensor 10 can receive light passing through the lens 30. The image sensor 10 includes a two-dimensional pixel array 11 and a lens array 17. The two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels. The two-dimensional pixel array 11 includes a minimum repeating unit in which panchromatic pixels are arranged in a first diagonal direction and color pixels are arranged in a second diagonal direction. The first diagonal direction is different from the second diagonal direction. Wherein each pixel 101 comprises at least two sub-pixels 102. The lens array 17 includes a plurality of lenses 170, each lens 170 covering one pixel 101. The mobile terminal 90 further comprises a processor 60, and the processor 60 is configured to implement the control method:
01: controlling the at least two sub-pixels 102 to be exposed so as to output at least two pieces of sub-pixel information;
02: determining phase information according to the at least two pieces of sub-pixel information to carry out focusing;
03: in the in-focus state, the two-dimensional pixel array 11 is controlled to be exposed to acquire a target image.
With the development of electronic technology, terminals with a camera function have become pervasive in daily life. Currently, the focusing methods adopted for mobile phone photography mainly include contrast focusing and phase detection auto focus (PDAF). Contrast focusing is accurate but slow. Phase focusing is fast, but the phase-focusing products on the market are implemented on color sensors (Bayer sensors), and their focusing performance in dark environments is not good enough.
For the reasons described above, the present application provides an image sensor 10 (shown in fig. 1). In the image sensor 10 of the embodiments of the present application, the two-dimensional pixel array 11 includes a plurality of color pixels and a plurality of panchromatic pixels, which increases the amount of light admitted compared with an ordinary color sensor, gives a better signal-to-noise ratio and better focusing performance in dim light, and gives the sub-pixels 102 higher sensitivity. In addition, each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while achieving phase focusing.
Next, the basic structure of the image sensor 10 will be described. Referring to fig. 1, fig. 1 is a schematic diagram of an image sensor 10 and a pixel 101 according to an embodiment of the present application. The image sensor 10 includes a two-dimensional pixel array 11, a filter array 16, and a lens array 17. The lens array 17, the filter array 16, and the two-dimensional pixel array 11 are arranged in this order along the light receiving direction of the image sensor 10.
The image sensor 10 may employ a Complementary Metal Oxide Semiconductor (CMOS) photosensitive element or a Charge-coupled Device (CCD) photosensitive element.
The two-dimensional pixel array 11 includes a plurality of pixels 101 two-dimensionally arranged in an array form. Each pixel 101 comprises at least two sub-pixels 102. For example, each pixel 101 includes two subpixels 102, three subpixels 102, four subpixels 102, or more subpixels 102.
The filter array 16 includes a plurality of filters 160, and each filter 160 covers a corresponding one of the pixels 101. The spectral response of each pixel 101 (i.e., the color of light that the pixel 101 is capable of receiving) is determined by the color of the filter 160 corresponding to that pixel 101.
The lens array 17 includes a plurality of lenses 170, and each lens 170 covers a corresponding one of the pixels 101.
Fig. 2-8 show schematic distribution diagrams of sub-pixels 102 in various image sensors 10. In the distribution of the sub-pixels 102 shown in fig. 2 to 8, each pixel 101 includes at least two sub-pixels 102, so that the resolution of the image sensor 10 can be improved while phase focusing is realized.
For example, fig. 2 is a distribution diagram of sub-pixels 102 according to an embodiment of the present application. Each pixel 101 comprises two sub-pixels 102. The boundary between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11. The cross section of each sub-pixel 102 is rectangular, and the cross sections of the two sub-pixels 102 in each pixel 101 are equal in area. Here, the cross section refers to a section taken perpendicular to the light receiving direction of the image sensor 10 (the same applies hereinafter). The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. Phase information can be determined from the two pieces of sub-pixel information output when the two sub-pixels 102 in each pixel 101 are exposed, thereby achieving phase focusing; at the same time, because each pixel 101 includes two sub-pixels 102 whose boundary is parallel to the length direction X of the two-dimensional pixel array 11, the longitudinal resolution of the image sensor 10 can be improved.
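As an illustration of both uses of the two sub-pixel outputs, the following Python sketch (with invented values) assumes that the merged pixel value is the sum of the two sub-pixel outputs, consistent with the charge-combining readout described later, and that the two sub-pixel rows can be treated as separate samples on a finer vertical grid:

```python
import numpy as np

# Two sub-pixel planes for a 4x4 patch of pixels in the FIG. 2 layout:
# the boundary is parallel to the length direction X, so each pixel has an
# "upper" and a "lower" sub-pixel. The values are arbitrary illustrations.
upper = np.arange(16, dtype=float).reshape(4, 4)
lower = upper + 0.5

# Summing the two sub-pixel outputs reproduces the ordinary pixel value.
pixel_plane = upper + lower                    # 4x4 merged pixel information

# Interleaving the sub-pixel rows doubles the vertical sampling density,
# which is the longitudinal-resolution gain attributed to this layout.
fine = np.empty((8, 4))
fine[0::2], fine[1::2] = upper, lower          # 8x4 grid of sub-pixel rows
```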
For example, fig. 3 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 comprises two sub-pixels 102. The boundary between the two sub-pixels 102 is parallel to the width direction Y of the two-dimensional pixel array 11. The cross section of each sub-pixel 102 is rectangular, and the cross sections of the two sub-pixels 102 in each pixel 101 are equal in area. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. Phase information can be determined from the two pieces of sub-pixel information output when the two sub-pixels 102 in each pixel 101 are exposed, thereby achieving phase focusing; at the same time, because each pixel 101 includes two sub-pixels 102 whose boundary is parallel to the width direction Y of the two-dimensional pixel array 11, the lateral resolution of the image sensor 10 can be improved.
For example, fig. 4 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 comprises two sub-pixels 102. The boundary between the two sub-pixels 102 is inclined with respect to the length direction X of the two-dimensional pixel array 11. The cross section of each sub-pixel 102 is trapezoidal: within the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid that is narrow at the top and wide at the bottom, and the cross section of the other sub-pixel 102 is a trapezoid that is wide at the top and narrow at the bottom. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. If a rectangular coordinate system is established with the center point of each pixel 101 as the origin, the horizontal axis parallel to the length direction of the two-dimensional pixel array 11, and the vertical axis parallel to the width direction, each of the two sub-pixels 102 extends over both the positive and negative halves of the horizontal axis and over both the positive and negative halves of the vertical axis: one sub-pixel 102 of each pixel 101 lies in the first, second, and fourth quadrants simultaneously, and the other sub-pixel 102 lies in the second, third, and fourth quadrants simultaneously. The two sub-pixels 102 can thus acquire phase information in both the horizontal and vertical directions, so the image sensor 10 can be applied to scenes containing many solid-color horizontal stripes as well as scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. Meanwhile, because each pixel 101 includes two sub-pixels 102 whose boundary is inclined with respect to the length direction X of the two-dimensional pixel array 11, the lateral or longitudinal resolution of the image sensor 10 can be improved.
For example, fig. 5 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 comprises two sub-pixels 102. The boundary between the two sub-pixels 102 is inclined with respect to the width direction Y of the two-dimensional pixel array 11. The cross section of each sub-pixel 102 is trapezoidal: within the same pixel 101, the cross section of one sub-pixel 102 is a trapezoid that is wide at the top and narrow at the bottom, and the cross section of the other sub-pixel 102 is a trapezoid that is narrow at the top and wide at the bottom. The two sub-pixels 102 in each pixel 101 are centrally symmetric about the center point of the pixel 101. In the same rectangular coordinate system as above, each of the two sub-pixels 102 again extends over both halves of each axis: one sub-pixel 102 of each pixel 101 lies in the first, second, and third quadrants simultaneously, and the other sub-pixel 102 lies in the first, third, and fourth quadrants simultaneously. The two sub-pixels 102 can thus acquire phase information in both the horizontal and vertical directions, so the image sensor 10 can be applied to scenes containing many solid-color horizontal stripes as well as scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. Meanwhile, because each pixel 101 includes two sub-pixels 102 whose boundary is inclined with respect to the width direction Y of the two-dimensional pixel array 11, the lateral or longitudinal resolution of the image sensor 10 can be improved.
For example, fig. 6 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102. In some of the pixels 101, the boundary between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11; in the other pixels 101, the boundary is parallel to the width direction Y. Further, the pixels 101 whose boundary is parallel to the length direction X and the pixels 101 whose boundary is parallel to the width direction Y are distributed in alternate rows or alternate columns. In the former pixels 101, the two sub-pixels 102 can acquire phase information in the vertical direction; in the latter pixels 101, the two sub-pixels 102 can acquire phase information in the horizontal direction. The image sensor 10 can therefore be applied to scenes containing many solid-color horizontal stripes as well as scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. Meanwhile, because some pixels 101 have boundaries parallel to the length direction X and others have boundaries parallel to the width direction Y, both the lateral and longitudinal resolution of the image sensor 10 can be improved to some extent. In addition, distributing the two kinds of pixels 101 in alternate rows or alternate columns does not add too much difficulty to the routing of the two-dimensional pixel array 11.
For example, fig. 7 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 includes two sub-pixels 102. In some of the pixels 101, the boundary between the two sub-pixels 102 is parallel to the length direction X of the two-dimensional pixel array 11; in the other pixels 101, the boundary is parallel to the width direction Y. Further, the two kinds of pixels 101 are distributed in a staggered, checkerboard-like manner. In the former pixels 101, the two sub-pixels 102 can acquire phase information in the vertical direction; in the latter pixels 101, the two sub-pixels 102 can acquire phase information in the horizontal direction. The image sensor 10 can therefore be applied to scenes containing many solid-color horizontal stripes as well as scenes containing many solid-color vertical stripes; the scene adaptability of the image sensor 10 is good and the accuracy of phase focusing is high. Meanwhile, because some pixels 101 have boundaries parallel to the length direction X and others have boundaries parallel to the width direction Y, both the lateral and longitudinal resolution of the image sensor 10 can be improved to some extent. In addition, the staggered distribution of the two kinds of pixels 101 improves the phase focusing effect to the greatest extent.
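The two distributions of fig. 6 and fig. 7 can be summarized as orientation maps. The following Python fragment is only a sketch of that bookkeeping; the labels 'X' and 'Y' are invented shorthand for the two boundary orientations, not from the disclosure:

```python
import numpy as np

# Orientation of the sub-pixel boundary over an 8x8 patch of pixels:
# "X" marks a boundary parallel to the length direction X, "Y" a boundary
# parallel to the width direction Y.
rows, cols = np.indices((8, 8))
fig6_layout = np.where(rows % 2 == 0, "X", "Y")           # alternate rows
fig7_layout = np.where((rows + cols) % 2 == 0, "X", "Y")  # staggered grid
```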
For example, fig. 8 is a distribution diagram of another sub-pixel 102 arrangement according to an embodiment of the present application. Each pixel 101 comprises four sub-pixels 102 arranged in a 2 × 2 matrix. Phase information can be determined accurately from the at least two pieces of sub-pixel information output when any at least two sub-pixels 102 in each pixel 101 are exposed, thereby achieving phase focusing; at the same time, because each pixel 101 comprises four sub-pixels 102 in a 2 × 2 matrix, the horizontal and vertical resolution of the image sensor 10 can be improved simultaneously. In other examples, when each pixel 101 includes four sub-pixels 102, the four sub-pixels 102 may instead be distributed along the width direction Y of the two-dimensional pixel array 11, with the boundary between adjacent sub-pixels 102 parallel to the length direction X; or distributed along the length direction X, with the boundary between adjacent sub-pixels 102 parallel to the width direction Y; or the boundary between adjacent sub-pixels 102 may be inclined with respect to the length direction X or the width direction Y, and so on. The present application is not limited in this respect.
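For the 2 × 2 layout of fig. 8, the following is a hedged Python sketch of how the four sub-pixel planes might be paired to obtain both phase directions; the plane names and the synthetic data are assumptions, not from the disclosure:

```python
import numpy as np

# Four sub-pixel planes of the FIG. 8 layout (a 2x2 matrix per pixel):
# top-left, top-right, bottom-left, bottom-right. Synthetic data only.
rng = np.random.default_rng(1)
tl, tr, bl, br = (rng.random((32, 32)) for _ in range(4))

# Pairing the halves yields both phase directions from the same pixel:
left, right = tl + bl, tr + br     # horizontal phase pair (vertical detail)
top, bottom = tl + tr, bl + br     # vertical phase pair (horizontal detail)
# Either pair can be fed to the same shift search used for two sub-pixels.
```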
It should be noted that, besides the examples of the cross-sectional shapes of the sub-pixels 102 shown in fig. 2 to 8, the cross-sectional shapes of the sub-pixels 102 may also be other regular or irregular shapes, and are not limited herein.
Further, in addition to the cases where every pixel 101 includes two sub-pixels 102 or every pixel 101 includes four sub-pixels 102, mixed arrangements are possible: some pixels 101 may include two sub-pixels 102 while others include four, or any other number of at least two sub-pixels 102, which is not limited here.
Fig. 9 is a schematic diagram of a pixel circuit 110 according to an embodiment of the present disclosure. The embodiment of the present application is described by taking an example that each pixel 101 includes two sub-pixels 102, and a case that each pixel 101 includes more than two sub-pixels 102 will be described later. The operation of the pixel circuit 110 will be described with reference to fig. 1 and 9.
As shown in fig. 1 and 9, the pixel circuit 110 includes a first photoelectric conversion element 1171 (e.g., a photodiode PD), a second photoelectric conversion element 1172 (e.g., a photodiode PD), a first exposure control circuit 1161 (e.g., a transfer transistor 112), a second exposure control circuit 1162 (e.g., a transfer transistor 112), a reset circuit (e.g., a reset transistor 113), an amplification circuit (e.g., an amplification transistor 114), and a selection circuit (e.g., a selection transistor 115). At this time, one of the sub-pixels 102 includes the first photoelectric conversion element 1171, and the other sub-pixel 102 includes the second photoelectric conversion element 1172. In the embodiment of the present application, the transfer transistor 112, the reset transistor 113, the amplifying transistor 114, and the selection transistor 115 are, for example, MOS transistors, but are not limited thereto.
For example, referring to fig. 1 and 9, the gate TG of the transfer transistor 112 is connected to a vertical driving unit (not shown in the figure) of the image sensor 10 through an exposure control line (not shown in the figure); a gate RG of the reset transistor 113 is connected to the vertical driving unit through a reset control line (not shown in the figure); the gate SEL of the selection transistor 115 is connected to the vertical driving unit through a selection line (not shown in the figure). The first exposure control circuit 1161 in each pixel circuit 110 is electrically connected to the first photoelectric conversion element 1171, and is configured to transfer an electric potential accumulated by the first photoelectric conversion element 1171 after being irradiated with light; the second exposure control circuit 1162 in each pixel circuit 110 is electrically connected to the second photoelectric conversion element 1172, and is used to transfer the potential accumulated by the second photoelectric conversion element 1172 after being irradiated with light. For example, the first photoelectric conversion element 1171 and the second photoelectric conversion element 1172 each include a photodiode PD whose anode is connected to, for example, ground. The photodiode PD converts the received light into electric charges. The cathode of the photodiode PD is connected to the floating diffusion unit FD via the first exposure control circuit 1161 or the second exposure control circuit 1162 (e.g., the transfer transistor 112). The floating diffusion FD is connected to the gate of the amplification transistor 114 and the source of the reset transistor 113.
For example, the first exposure control circuit 1161 and the second exposure control circuit 1162 may be the transfer transistor 112, and the control terminal TG of the first exposure control circuit 1161 and the second exposure control circuit 1162 is the gate of the transfer transistor 112. The transfer transistor 112 is turned on when a pulse of an effective level (for example, VPIX level) is transmitted to the gate of the transfer transistor 112 through the exposure control line. The transfer transistor 112 transfers the charge photoelectrically converted by the photodiode PD to the floating diffusion unit FD.
For example, the reset circuit is connected to both the first exposure control circuit 1161 and the second exposure control circuit 1162. The reset circuit may be a reset transistor 113. The drain of the reset transistor 113 is connected to the pixel power supply VPIX. A source of the reset transistor 113 is connected to the floating diffusion FD. Before the electric charges are transferred from the photodiode PD to the floating diffusion unit FD, a pulse of an active reset level is transmitted to the gate of the reset transistor 113 via the reset line, and the reset transistor 113 is turned on. The reset transistor 113 resets the floating diffusion unit FD to the pixel power supply VPIX.
For example, the gate of the amplification transistor 114 is connected to the floating diffusion FD. The drain of the amplifying transistor 114 is connected to the pixel power supply VPIX. After the floating diffusion FD is reset by the reset transistor 113, the amplification transistor 114 outputs a reset level through the output terminal OUT via the selection transistor 115. After the charge of the photodiode PD is transferred by the transfer transistor 112, the amplification transistor 114 outputs a signal level through the output terminal OUT via the selection transistor 115.
For example, the drain of the selection transistor 115 is connected to the source of the amplification transistor 114. The source of the selection transistor 115 is connected to a column processing unit (not shown in the figure) in the image sensor 10 through an output terminal OUT. When a pulse of an effective level is transmitted to the gate of the selection transistor 115 through the selection line, the selection transistor 115 is turned on. The signal output from the amplifying transistor 114 is transmitted to the column processing unit through the selection transistor 115.
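The reset-then-transfer sequence described above can be summarized in a toy numerical model. The following Python sketch assumes idealized levels and an invented conversion gain; it only mirrors the order of operations (reset level read out first, signal level after charge transfer), not any real device values:

```python
# Toy model of one sub-pixel readout in the FIG. 9 circuit. All numbers
# (supply voltage, conversion gain, charge) are assumed for illustration.
VPIX = 2.8               # pixel supply voltage, volts (assumed)
CONVERSION_GAIN = 1e-6   # volts per electron at the floating diffusion (assumed)

def read_subpixel(photo_electrons: int) -> float:
    fd = VPIX                                # reset transistor 113 ties FD to VPIX
    reset_level = fd                         # first output via amplifier 114 / select 115
    fd -= photo_electrons * CONVERSION_GAIN  # transfer transistor 112 moves PD charge to FD
    signal_level = fd                        # second output: signal level
    return reset_level - signal_level        # correlated double sampling

print(read_subpixel(12_000))                 # -> 0.012 V for 12k electrons
```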
When the two sub-pixels 102 are exposed so as to output their sub-pixel information separately (with or without additionally outputting phase information), the first exposure control circuit 1161 transfers the charge generated by the first photoelectric conversion element 1171 upon receiving light, so that first sub-pixel information is output; after the reset circuit resets the floating diffusion, the second exposure control circuit 1162 transfers the charge generated by the second photoelectric conversion element 1172 upon receiving light, so that second sub-pixel information is output.
When the two sub-pixels 102 are exposed and their outputs are merged into combined pixel information (without phase information), the first exposure control circuit 1161 transfers the charge generated by the first photoelectric conversion element 1171 upon receiving light and the second exposure control circuit 1162 transfers the charge generated by the second photoelectric conversion element 1172 upon receiving light, so that the combined pixel information is output. That is, the second exposure control circuit 1162 may transfer its charge at the same time as the first exposure control circuit 1161, or the first exposure control circuit 1161 may transfer its charge first and the second exposure control circuit 1162 afterwards.
When the two sub-pixels 102 are exposed so as to output both combined pixel information and phase information, the first exposure control circuit 1161 transfers the charge generated by the first photoelectric conversion element 1171 upon receiving light to output first sub-pixel information; after the reset circuit resets the floating diffusion, the second exposure control circuit 1162 transfers the charge generated by the second photoelectric conversion element 1172 upon receiving light to output second sub-pixel information; the first sub-pixel information and the second sub-pixel information are then combined into the combined pixel information. In this case, the image sensor 10 may further include a buffer (not shown) for storing the first sub-pixel information and the second sub-pixel information so that the phase information can be output.
In terms of signal transmission, the pixel information and the phase information may be output separately through a Mobile Industry Processor Interface (MIPI). Since the amount of phase information required is small, one row of phase information may be output, for example, for every four rows of pixel information output by the pixel circuits 110. Of course, in practical applications the pixel circuits 110 may also output one row of phase information for every row of pixel information; this is not limited here.
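A sketch of that interleaving follows, assuming a simple list-based framing of the output stream; the function and row names are illustrative, not a MIPI implementation:

```python
# Assumed ordering: one row of phase information after every `ratio` rows
# of pixel information, matching the 4:1 example above.
def interleave(pixel_rows, phase_rows, ratio=4):
    stream, p = [], 0
    for i, row in enumerate(pixel_rows, start=1):
        stream.append(("pixel", row))
        if i % ratio == 0 and p < len(phase_rows):
            stream.append(("phase", phase_rows[p]))
            p += 1
    return stream

print(interleave([f"P{i}" for i in range(8)], ["PH0", "PH1"]))
# -> pixel P0..P3, phase PH0, pixel P4..P7, phase PH1
```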
It should be noted that the pixel structure of the pixel circuit 110 in the embodiments of the present application is not limited to the structure shown in fig. 9. For example, the pixel circuit 110 may have a three-transistor pixel structure in which the functions of the amplifying transistor 114 and the selection transistor 115 are performed by one transistor. Likewise, the first exposure control circuit 1161 and the second exposure control circuit 1162 are not limited to the single transfer transistor 112; any other electronic device or structure with a control terminal that controls conduction can serve as the exposure control circuit in the embodiments of the present application. The single transfer transistor 112, however, is simple to implement, low in cost, and easy to control.
Further, when each pixel 101 includes more than two sub-pixels 102 (i.e., each pixel 101 includes more than two photoelectric conversion elements), it is only necessary to correspondingly increase the number of exposure control circuits, each of which is connected to a corresponding photoelectric conversion element, and all of which are connected to a reset circuit.
In the embodiment of the present application, the plurality of sub-pixels 102 in each pixel 101 share the reset circuit, the amplifying circuit, and the selecting circuit, which is beneficial to reducing the occupied space of the pixel circuit 110 and has lower cost. In other embodiments, each sub-pixel 102 in each pixel 101 may have a corresponding reset circuit, amplifying circuit, selection circuit, and the like, and is not limited herein.
In an image sensor including pixels of a plurality of colors, pixels of different colors receive different exposure amounts per unit time, so some colors saturate while others have not yet reached the ideal exposure state. For example, exposure to 60%-90% of the saturation exposure amount may give a relatively good signal-to-noise ratio and accuracy; embodiments of the present application are not limited thereto.
RGBW (red, green, blue, panchromatic/white) is taken as an example in fig. 10. Referring to fig. 10, the horizontal axis is the exposure time, the vertical axis is the exposure amount, Q is the saturation exposure amount, LW is the exposure curve of the panchromatic pixel W, LG is the exposure curve of the green pixel G, LR is the exposure curve of the red pixel R, and LB is the exposure curve of the blue pixel B.
As can be seen from fig. 10, the slope of the exposure curve LW of the panchromatic pixel W is the largest; that is, the panchromatic pixel W obtains the most exposure per unit time and saturates first, at time t1. The slope of the exposure curve LG of the green pixel G is the second largest; the green pixel saturates at time t2. The slope of the exposure curve LR of the red pixel R is the third largest; the red pixel saturates at time t3. The slope of the exposure curve LB of the blue pixel B is the smallest; the blue pixel saturates last, at time t4. Thus the exposure received per unit time by the panchromatic pixel W is larger than that received by any color pixel; in other words, the sensitivity of the panchromatic pixel W is higher than that of the color pixels.
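Treating the curves of fig. 10 as straight lines through the origin, the saturation times follow from t = Q / k for slope k. The slopes in the following Python sketch are invented numbers that only respect the ordering stated above:

```python
# FIG. 10 read as exposure(t) = k * t, so a channel saturates at t = Q / k.
# The slopes are made-up values that only respect k_W > k_G > k_R > k_B.
Q = 1.0                                    # saturation exposure, normalised
slope = {"W": 4.0, "G": 2.0, "R": 1.5, "B": 1.0}
t_sat = {c: Q / k for c, k in slope.items()}
print(t_sat)   # W saturates first (0.25) and B last (1.0): t1 < t2 < t3 < t4
```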
If an image sensor including only color pixels is used to realize phase focusing, then in a high-brightness environment the R, G, and B color pixels receive ample light and can output pixel information with a high signal-to-noise ratio, so the accuracy of phase focusing is high. In a low-brightness environment, however, the R, G, and B pixels receive little light, the signal-to-noise ratio of the output pixel information is low, and the accuracy of phase focusing is correspondingly low.
For the above reasons, the image sensor 10 of the embodiments of the present application arranges both panchromatic pixels and color pixels in the two-dimensional pixel array 11. Compared with an ordinary color sensor, this increases the amount of light admitted, gives a better signal-to-noise ratio and better focusing performance in dim light, and gives the sub-pixels 102 higher sensitivity. In this way, the image sensor 10 can focus accurately in scenes of different ambient brightness, and its scene adaptability is improved.
The spectral response of each pixel 101 (i.e., the color of light that the pixel 101 can receive) is determined by the color of the filter 160 corresponding to the pixel 101. Color pixels and panchromatic pixels throughout this application refer to pixels 101 that are capable of responding to light of the same color as the corresponding filter 160.
Fig. 11 to 20 show examples of the arrangement of the pixels 101 in various image sensors 10 (shown in fig. 1). Referring to fig. 11 to 20, the plurality of pixels 101 in the two-dimensional pixel array 11 may include a plurality of panchromatic pixels W and a plurality of color pixels (e.g., a plurality of first color pixels A, a plurality of second color pixels B, and a plurality of third color pixels C). The color pixels and the panchromatic pixels are distinguished by the wavelength band of light that can pass through the filter 160 (shown in fig. 1) covering them: the color pixels have a narrower spectral response than the panchromatic pixels, and the response spectrum of a color pixel is, for example, a portion of the response spectrum of a panchromatic pixel W. Each panchromatic pixel includes at least two sub-pixels 102, and each color pixel likewise includes at least two sub-pixels 102. The two-dimensional pixel array 11 is composed of a plurality of minimal repeating units (fig. 11 to 20 show examples of minimal repeating units in various image sensors 10), which are replicated and arranged in rows and columns. Each minimal repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color pixels and a plurality of panchromatic pixels. For example, each minimal repeating unit includes four sub-units: one sub-unit includes a plurality of single-color pixels A (i.e., first color pixels A) and a plurality of panchromatic pixels W, two sub-units each include a plurality of single-color pixels B (i.e., second color pixels B) and a plurality of panchromatic pixels W, and the remaining sub-unit includes a plurality of single-color pixels C (i.e., third color pixels C) and a plurality of panchromatic pixels W.
For example, the minimal repeating unit has the same number of pixels 101 in its rows as in its columns; it includes, but is not limited to, a unit of 4 rows and 4 columns, 6 rows and 6 columns, 8 rows and 8 columns, or 10 rows and 10 columns. Likewise, each sub-unit has the same number of pixels 101 in its rows as in its columns, including but not limited to a sub-unit of 2 rows and 2 columns, 3 rows and 3 columns, 4 rows and 4 columns, or 5 rows and 5 columns. This arrangement helps balance the resolution and the color rendition of the image in the row and column directions, improving the display effect.
In one example, in the minimum repeating unit, the panchromatic pixels W are disposed in a first diagonal direction D1, the color pixels are disposed in a second diagonal direction D2, and the first diagonal direction D1 is different from the second diagonal direction D2.
For example, fig. 11 is a schematic diagram of a pixel 101 arrangement and a lens 170 covering manner of a minimum repeating unit in the embodiment of the present application; the minimum repeating unit is 4 rows, 4 columns and 16 pixels, the subunit is 2 rows, 2 columns and 4 pixels, and the arrangement mode is as follows:
W A W B
A W B W
W B W C
B W C W
W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 11, the panchromatic pixels W are arranged in a first diagonal direction D1 (i.e., the direction in which the upper left corner and the lower right corner in fig. 11 are connected), the color pixels are arranged in a second diagonal direction D2 (e.g., the direction in which the lower left corner and the upper right corner in fig. 11 are connected), and the first diagonal direction D1 is different from the second diagonal direction D2. For example, the first diagonal and the second diagonal are perpendicular.
It should be noted that the first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonals themselves; they include directions parallel to the diagonals. Moreover, "direction" here is not a single pointing direction; it should be understood as the line along which the pixels are arranged, covering both ways along that line.
As shown in fig. 11, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
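Assuming the 4-row, 4-column unit above was recovered correctly, the following Python sketch tiles it into a patch of the two-dimensional pixel array 11 and checks that the panchromatic pixels fall on the cells where row + column is even, i.e. on lines parallel to the first diagonal direction D1:

```python
import numpy as np

# The FIG. 11 minimal repeating unit: W on lines parallel to the first
# diagonal direction D1, colour pixels A/B/C on the second.
unit = np.array([["W", "A", "W", "B"],
                 ["A", "W", "B", "W"],
                 ["W", "B", "W", "C"],
                 ["B", "W", "C", "W"]])

# The two-dimensional pixel array 11 is this unit replicated in rows and
# columns; an 8x8 patch is enough to see the repetition.
cfa_patch = np.tile(unit, (2, 2))

# W occupies exactly the cells where (row + column) is even, i.e. the
# panchromatic pixels form the D1 checkerboard described in the text.
r, c = np.indices(unit.shape)
assert ((unit == "W") == ((r + c) % 2 == 0)).all()
```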
For example, fig. 12 is a schematic diagram illustrating an arrangement of pixels 101 and a covering manner of a lens 170 in another minimum repeating unit in the embodiment of the present application. The minimum repeating unit is 4 rows, 4 columns and 16 pixels 101, the subunit is 2 rows, 2 columns and 4 pixels 101, and the arrangement mode is as follows:
A W B W
W A W B
B W C W
W B W C
w denotes a panchromatic pixel; a denotes a first color pixel of the plurality of color pixels; b denotes a second color pixel of the plurality of color pixels; c denotes a third color pixel of the plurality of color pixels.
As shown in fig. 12, the panchromatic pixels W are arranged in the first diagonal direction D1 (i.e., the direction connecting the upper-right and lower-left corners of fig. 12), and the color pixels are arranged in the second diagonal direction D2 (e.g., the direction connecting the upper-left and lower-right corners of fig. 12). The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal and the second diagonal are perpendicular.
As shown in fig. 12, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
For example, fig. 13 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application, and fig. 14 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of a minimal repeating unit according to yet another embodiment of the present disclosure. The embodiments of fig. 13 and 14 correspond to the arrangements and covering manners of fig. 11 and 12, respectively, with the first color pixel A being a red pixel R, the second color pixel B being a green pixel G, and the third color pixel C being a blue pixel Bu.
It is noted that, in some embodiments, the response band of the panchromatic pixel W is the visible band (e.g., 400 nm-760 nm); for example, an infrared filter is disposed over the panchromatic pixel W to filter out infrared light. In some embodiments, the response band of the panchromatic pixel W covers the visible and near-infrared bands (e.g., 400 nm-1000 nm), matching the response band of the photoelectric conversion element (e.g., photodiode PD) in the image sensor 10; for example, the panchromatic pixel W may be provided without a filter, in which case the response band of the panchromatic pixel W is determined by, and thus matches, the response band of the photodiode. Embodiments of the present application include, but are not limited to, the above band ranges.
In some embodiments, in the minimal repeating units shown in fig. 11 and 12, the first color pixel A may instead be a red pixel R, the second color pixel B a yellow pixel Y, and the third color pixel C a blue pixel Bu. In some other embodiments, the first color pixel A may instead be a magenta pixel M, the second color pixel B a cyan pixel Cy, and the third color pixel C a yellow pixel Y.
For example, fig. 15 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 6 rows and 6 columns with 36 pixels 101, each sub-unit is 3 rows and 3 columns with 9 pixels 101, and the pixels are arranged as follows:
W A W B W B
A W A W B W
W A W B W B
B W B W C W
W B W C W C
B W B W C W

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 15, the panchromatic pixels W are arranged in the first diagonal direction D1 (i.e., the direction connecting the upper-left and lower-right corners of fig. 15), the color pixels are arranged in the second diagonal direction D2 (e.g., the direction connecting the lower-left and upper-right corners of fig. 15), and the first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal and the second diagonal are perpendicular.
As shown in fig. 15, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
For example, fig. 16 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 6 rows and 6 columns with 36 pixels 101, each sub-unit is 3 rows and 3 columns with 9 pixels 101, and the pixels are arranged as follows:
A W A W B W
W A W B W B
A W A W B W
W B W C W C
B W B W C W
W B W C W C

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 16, the panchromatic pixels W are arranged in the first diagonal direction D1 (i.e., the direction connecting the upper-right and lower-left corners of fig. 16), and the color pixels are arranged in the second diagonal direction D2 (e.g., the direction connecting the upper-left and lower-right corners of fig. 16). The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal and the second diagonal are perpendicular.
As shown in fig. 16, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
For example, the first color pixel A, the second color pixel B, and the third color pixel C in the minimal repeating units shown in fig. 15 and 16 may be a red pixel R, a green pixel G, and a blue pixel Bu, respectively. Alternatively, the first color pixel A may be a red pixel R, the second color pixel B a yellow pixel Y, and the third color pixel C a blue pixel Bu. Alternatively, the first color pixel A may be a magenta pixel M, the second color pixel B a cyan pixel Cy, and the third color pixel C a yellow pixel Y.
For example, fig. 17 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 8 rows and 8 columns with 64 pixels 101, each sub-unit is 4 rows and 4 columns with 16 pixels 101, and the pixels are arranged as follows:
W A W A W B W B
A W A W B W B W
W A W A W B W B
A W A W B W B W
W B W B W C W C
B W B W C W C W
W B W B W C W C
B W B W C W C W

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 17, the panchromatic pixels W are arranged in the first diagonal direction D1 (i.e., the direction connecting the upper-left and lower-right corners of fig. 17), the color pixels are arranged in the second diagonal direction D2 (e.g., the direction connecting the lower-left and upper-right corners of fig. 17), and the first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal and the second diagonal are perpendicular.
As shown in fig. 17, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
For example, fig. 18 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 8 rows and 8 columns with 64 pixels 101, each sub-unit is 4 rows and 4 columns with 16 pixels 101, and the pixels are arranged as follows:
A W A W B W B W
W A W A W B W B
A W A W B W B W
W A W A W B W B
B W B W C W C W
W B W B W C W C
B W B W C W C W
W B W B W C W C

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 18, the panchromatic pixels W are arranged in the first diagonal direction D1 (i.e., the direction connecting the upper-right and lower-left corners of fig. 18), and the color pixels are arranged in the second diagonal direction D2 (e.g., the direction connecting the upper-left and lower-right corners of fig. 18). The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal and the second diagonal are perpendicular.
As shown in fig. 18, one lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
In the examples shown in fig. 11 to 18, within each sub-unit, adjacent panchromatic pixels W are arranged diagonally and adjacent color pixels are also arranged diagonally. In another example, within each sub-unit, adjacent panchromatic pixels are arranged in the horizontal direction and adjacent color pixels are also arranged in the horizontal direction; alternatively, adjacent panchromatic pixels are arranged in the vertical direction and adjacent color pixels are also arranged in the vertical direction. Across adjacent sub-units, the panchromatic pixels may be arranged in the horizontal or the vertical direction, and so may the color pixels.
For example, fig. 19 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 4 rows and 4 columns with 16 pixels 101, each sub-unit is 2 rows and 2 columns with 4 pixels 101, and the pixels are arranged as follows:
[The arrangement matrix is shown as an image in the original document. Within each 2-row, 2-column sub-unit, the two panchromatic pixels W occupy one column and the two color pixels occupy the other column.]

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 19, in each sub-unit, adjacent panchromatic pixels W are arranged in the vertical direction, and adjacent color pixels are also arranged in the vertical direction. One lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
For example, fig. 20 is a schematic diagram of the pixel 101 arrangement and the lens 170 covering manner of another minimal repeating unit in the embodiment of the present application. The minimal repeating unit is 4 rows and 4 columns with 16 pixels 101, each sub-unit is 2 rows and 2 columns with 4 pixels 101, and the pixels are arranged as follows:
[The arrangement matrix is shown as an image in the original document. Within each 2-row, 2-column sub-unit, the two panchromatic pixels W occupy one row and the two color pixels occupy the other row.]

W denotes a panchromatic pixel; A denotes a first color pixel of the plurality of color pixels; B denotes a second color pixel of the plurality of color pixels; C denotes a third color pixel of the plurality of color pixels.
As shown in fig. 20, in each sub-unit, adjacent panchromatic pixels W are arranged in the horizontal direction, and adjacent color pixels are also arranged in the horizontal direction. One lens 170 covers one pixel 101. Each panchromatic pixel and each color pixel includes two subpixels 102.
In the minimal repeating units shown in fig. 19 and 20, the first color pixel A may be a red pixel R, the second color pixel B a green pixel G, and the third color pixel C a blue pixel Bu. Alternatively, the first color pixel A may be a red pixel R, the second color pixel B a yellow pixel Y, and the third color pixel C a blue pixel Bu. Alternatively, the first color pixel A may be a magenta pixel M, the second color pixel B a cyan pixel Cy, and the third color pixel C a yellow pixel Y.
In the minimal repeating units shown in fig. 11 to 20, each panchromatic pixel and each color pixel includes two sub-pixels 102. In other embodiments, each panchromatic pixel may include four sub-pixels 102 and each color pixel four sub-pixels 102; or each panchromatic pixel may include four sub-pixels 102 while each color pixel includes two sub-pixels 102; and so on, without limitation.
In the minimal repeating units shown in fig. 11 to 20, the boundary between the two sub-pixels 102 of a pixel is parallel to the width direction Y of the two-dimensional pixel array 11. In other embodiments, the boundary between two sub-pixels 102 may be parallel to the length direction X of the two-dimensional pixel array 11, or the two boundary orientations may be combined within one array.
A plurality of panchromatic pixels and a plurality of color pixels in the two-dimensional pixel array 11, in any of the arrangements shown in fig. 11 to 20, may be controlled by different exposure control lines, thereby achieving independent control of the exposure time of the panchromatic pixels and the exposure time of the color pixels. In the two-dimensional pixel arrays 11 shown in fig. 11 to 18, the control terminals of the exposure control circuits of at least two panchromatic pixels adjacent in the first diagonal direction are electrically connected to a first exposure control line, and the control terminals of the exposure control circuits of at least two color pixels adjacent in the second diagonal direction are electrically connected to a second exposure control line. In the two-dimensional pixel arrays 11 shown in fig. 19 and 20, the control terminals of the exposure control circuits of the panchromatic pixels in the same row or column are electrically connected to a first exposure control line, and the control terminals of the exposure control circuits of the color pixels in the same row or column are electrically connected to a second exposure control line. The first exposure control line can transmit a first exposure signal to control the first exposure time of the panchromatic pixels, and the second exposure control line can transmit a second exposure signal to control the second exposure time of the color pixels. When a panchromatic pixel includes two sub-pixels 102, both sub-pixels 102 in that panchromatic pixel are electrically connected to the same first exposure control line; when a color pixel includes two sub-pixels 102, both sub-pixels 102 in that color pixel are electrically connected to the same second exposure control line.
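As a rough illustration of this wiring (a sketch under assumed conventions, not the patent's actual circuit), the following Python snippet groups every pixel, and therefore both of its sub-pixels, onto one of two exposure control lines by pixel type; the line names are hypothetical:

# Illustrative sketch: map each pixel (and both of its sub-pixels) to an
# exposure control line by pixel type. Line names are hypothetical.

FIRST_EXPOSURE_LINE = "TX1"   # drives the panchromatic pixels W
SECOND_EXPOSURE_LINE = "TX2"  # drives the color pixels A/B/C

def exposure_line_for(pixel_type):
    return FIRST_EXPOSURE_LINE if pixel_type == "W" else SECOND_EXPOSURE_LINE

def build_exposure_groups(pixel_array):
    """Return a dict mapping each exposure control line to the (row, col)
    coordinates of the pixels it drives."""
    groups = {FIRST_EXPOSURE_LINE: [], SECOND_EXPOSURE_LINE: []}
    for r, row in enumerate(pixel_array):
        for c, pixel_type in enumerate(row):
            groups[exposure_line_for(pixel_type)].append((r, c))
    return groups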
Where the exposure time of the panchromatic pixels is controlled independently of the exposure time of the color pixels, the first exposure time of the panchromatic pixels can be less than the second exposure time of the color pixels. For example, the ratio of the first exposure time to the second exposure time may be 1:2, 1:3, or 1:4. For example, in a dark environment, the color pixels are more likely to be underexposed, and the ratio of the first exposure time to the second exposure time can be adjusted to 1:2, 1:3, or 1:4. Keeping the exposure ratio an integer ratio, or close to an integer ratio, is beneficial to setting and controlling the timing signals.
In some embodiments, the relative relationship between the first exposure time and the second exposure time may be determined according to the ambient brightness. For example, when the ambient brightness is less than or equal to a brightness threshold, the panchromatic pixels are exposed with a first exposure time equal to the second exposure time; when the ambient brightness is greater than the brightness threshold, the panchromatic pixels are exposed with a first exposure time less than the second exposure time. When the ambient brightness is greater than the brightness threshold, the relative relationship between the two exposure times may be determined according to the brightness difference between the ambient brightness and the brightness threshold: the larger the difference, the smaller the ratio of the first exposure time to the second exposure time. Illustratively, when the brightness difference is within a first range [a, b), the ratio of the first exposure time to the second exposure time is 1:2; when the brightness difference is within a second range [b, c), the ratio is 1:3; and when the brightness difference is greater than or equal to c, the ratio is 1:4.
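This threshold logic can be summarized in a short sketch; the brightness threshold and the range bounds a, b, c are placeholders carried over from the text, not calibrated values, and the behavior for a difference below a is not specified by the text, so it is treated as 1:1 here by assumption:

# Illustrative sketch of selecting the exposure-time ratio from ambient
# brightness. All threshold parameters are hypothetical placeholders.

def exposure_ratio(ambient_brightness, brightness_threshold, a, b, c):
    """Return (first_exposure, second_exposure) as an integer ratio."""
    if ambient_brightness <= brightness_threshold:
        return (1, 1)            # equal exposure times
    diff = ambient_brightness - brightness_threshold
    if diff < a:
        return (1, 1)            # below the first range: assumed equal
    if diff < b:
        return (1, 2)            # first range [a, b)
    if diff < c:
        return (1, 3)            # second range [b, c)
    return (1, 4)                # diff >= c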
Referring to fig. 1 and 21, the present application further provides a control method. The control method according to the present embodiment can be applied to the image sensor 10 according to any one of the above embodiments. The control method comprises the following steps:
01: controlling the exposure of the at least two sub-pixels 102 to output at least two sub-pixel information;
02: determining phase information according to the at least two pieces of sub-pixel information to carry out focusing;
03: in the in-focus state, the two-dimensional pixel array 11 is controlled to be exposed to acquire a target image.
Referring to fig. 1 and 22, a control method according to an embodiment of the present disclosure may be implemented by a camera assembly 40 according to an embodiment of the present disclosure. The camera assembly 40 includes a lens 30, an image sensor 10 according to any of the above embodiments, and a processing chip 20. The image sensor 10 may receive light incident through the lens 30 and generate an electrical signal. The image sensor 10 is electrically connected to the processing chip 20. The processing chip 20 can be packaged with the image sensor 10 and the lens 30 in the housing of the camera assembly 40; alternatively, the image sensor 10 and the lens 30 are enclosed in a housing of the camera assembly 40, and the processing chip 20 is disposed outside the housing. Step 01, step 02 and step 03 can all be implemented by the processing chip 20. That is, the processing chip 20 may be configured to: controlling the exposure of the at least two sub-pixels 102 to output at least two sub-pixel information; determining phase information according to the at least two pieces of sub-pixel information to carry out focusing; in the in-focus state, the two-dimensional pixel array 11 is controlled to be exposed to acquire a target image.
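Steps 01 to 03 map naturally onto a small control loop. The sketch below is schematic only: the sensor, lens, and processing-chip methods are invented names standing in for the operations described above, not an actual driver API.

# Schematic of control steps 01-03 with hypothetical helper methods.

def autofocus_and_capture(sensor, lens, chip):
    sub_pixel_info = sensor.expose_sub_pixels()     # step 01: sub-pixel exposure
    phase = chip.phase_difference(sub_pixel_info)   # step 02: phase information
    lens.move_by(chip.defocus_distance(phase))      # drive lens to in-focus state
    return sensor.expose_array()                    # step 03: capture target image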
In the control method and the camera assembly 40 according to the embodiments of the present application, the two-dimensional pixel array 11 includes both color pixels and panchromatic pixels. Compared with an ordinary color sensor, this increases the amount of light received and yields a better signal-to-noise ratio, and because the sub-pixels 102 have higher sensitivity, the focusing performance in dark light is also better. In addition, each pixel 101 includes at least two sub-pixels 102, which enables phase focusing while improving the resolution of the image sensor 10.
In addition, the control method and the camera assembly 40 of the embodiments of the present application do not need to shield any pixels 101 in the image sensor 10: all pixels 101 can be used for imaging, and no dead-pixel compensation is required, which helps improve the quality of the target image acquired by the camera assembly 40.
In addition, in the control method and the camera assembly 40 of the embodiments of the present application, all pixels 101, each including at least two sub-pixels 102, can be used for phase focusing, so the accuracy of phase focusing is higher.
Specifically, the phase information may be a phase difference. Determining the phase information from the at least two pieces of sub-pixel information to perform focusing may include: (1) calculating the phase difference from the panchromatic sub-pixel information to perform focusing; (2) calculating the phase difference from the color sub-pixel information to perform focusing; or (3) calculating the phase difference from both the panchromatic sub-pixel information and the color sub-pixel information to perform focusing.
The control method and the camera assembly 40 of the embodiments of the present application implement phase focusing with an image sensor 10 that includes both panchromatic pixels and color pixels. In a low-luminance environment (for example, luminance less than or equal to a first preset luminance), phase focusing can be performed with the more sensitive panchromatic pixels; in a high-luminance environment (for example, luminance greater than or equal to a second preset luminance), phase focusing can be performed with the less sensitive color pixels; and in an environment of intermediate luminance (for example, greater than the first preset luminance and less than the second preset luminance), phase focusing can be performed with at least one of the panchromatic pixels and the color pixels. This avoids inaccurate focusing caused by too low a signal-to-noise ratio of the color sub-pixel information output by the sub-pixels 102 in the color pixels when the ambient brightness is low, and also avoids inaccurate focusing caused by oversaturation of the sub-pixels 102 in the panchromatic pixels when the ambient brightness is high. The accuracy of phase focusing is therefore high in a variety of application scenes, and the method adapts well to different focusing scenarios.
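A minimal sketch of this selection policy, assuming the two preset luminances are available as parameters (all names are illustrative):

# Illustrative sketch: choose the sub-pixel information used for phase
# focusing according to ambient luminance, per the strategy above.

def phase_detection_source(luminance, first_preset, second_preset):
    if luminance <= first_preset:
        return "panchromatic"          # dark scene: use the more sensitive W pixels
    if luminance >= second_preset:
        return "color"                 # bright scene: W pixels risk saturation
    return "panchromatic and color"    # intermediate: either or both may be used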
Referring to fig. 1 and 11, in some embodiments, a panchromatic pixel includes two panchromatic sub-pixels. The panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information, output by the panchromatic sub-pixel at the first orientation of the lens 170 and the panchromatic sub-pixel at the second orientation of the lens 170, respectively. One piece of first panchromatic sub-pixel information and a corresponding piece of second panchromatic sub-pixel information form a pair of panchromatic sub-pixel information. Calculating the phase difference from the panchromatic sub-pixel information to perform focusing includes: forming a first curve from the first panchromatic sub-pixel information in the pairs of panchromatic sub-pixel information; forming a second curve from the second panchromatic sub-pixel information in the pairs of panchromatic sub-pixel information; and calculating the phase difference from the first curve and the second curve to perform focusing.
Specifically, referring to fig. 23, in one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. It should be noted that the first orientation P1 and the second orientation P2 shown in fig. 23 are determined by the distribution of the sub-pixels 102 shown in fig. 23; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 change accordingly. In each panchromatic pixel W of the pixel array 11 of fig. 23, one sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170. The first panchromatic sub-pixel information is output by the panchromatic sub-pixel W at the first orientation P1 of the lens 170, and the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W at the second orientation P2 of the lens 170. For example, the panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, etc. are located at the first orientation P1, and the panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located at the second orientation P2. The two panchromatic sub-pixels W in the same panchromatic pixel form a panchromatic sub-pixel pair; correspondingly, the panchromatic sub-pixel information of the two panchromatic sub-pixels in the same panchromatic pixel W forms a pair of panchromatic sub-pixel information. For example, the panchromatic sub-pixel information of W11,P1 and that of W11,P2 form a pair, the information of W13,P1 and that of W13,P2 form a pair, the information of W15,P1 and that of W15,P2 form a pair, the information of W17,P1 and that of W17,P2 form a pair, and so on.
After acquiring the multiple pairs of panchromatic sub-pixel information, the processing chip 20 forms a first curve from the first panchromatic sub-pixel information in the pairs, forms a second curve from the second panchromatic sub-pixel information in the pairs, and calculates the phase difference from the two curves to perform focusing. For example, the plurality of first panchromatic sub-pixel information can trace one histogram curve (i.e., the first curve), and the plurality of second panchromatic sub-pixel information can trace another histogram curve (i.e., the second curve). The processing chip 20 can then calculate the phase difference between the two histogram curves from the positions of their peaks, determine the distance the lens 30 needs to move from the phase difference and pre-calibrated parameters, and control the lens 30 to move by that distance so that the lens 30 reaches the in-focus state.
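One simple way to realize "calculate the phase difference between the two curves" is to slide one curve over the other and keep the offset that matches best. The patent does not prescribe a particular method; the following is a minimal sketch under that assumption:

# Illustrative sketch: estimate the phase difference as the integer shift
# that minimizes the mean squared error between the first and second curves.
# Assumes max_shift is smaller than the curve length.

def phase_difference(first_curve, second_curve, max_shift=16):
    best_shift, best_err = 0, float("inf")
    n = len(first_curve)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(first_curve[i], second_curve[i + s])
                 for i in range(n) if 0 <= i + s < n]
        if not pairs:
            continue
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift  # converted to a lens displacement via calibrated parameters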
Referring to fig. 1 and 11, in some embodiments, a panchromatic pixel includes two panchromatic subpixels. The full color sub-pixel information includes first full color sub-pixel information and second full color sub-pixel information. The first panchromatic subpixel information and the second panchromatic subpixel information are output by the panchromatic subpixel positioned in the first orientation of the lens 170 and the panchromatic subpixel positioned in the second orientation of the lens 170, respectively. The plurality of first panchromatic subpixel information and the corresponding plurality of second panchromatic subpixel information are paired as a pair of panchromatic subpixel information. Calculating a phase difference according to full-color sub-pixel information to perform focusing, comprising: calculating third panchromatic subpixel information from the plurality of first panchromatic subpixel information in each pair of panchromatic subpixel information pairs; calculating fourth panchromatic subpixel information from the plurality of second panchromatic subpixel information in each pair of panchromatic subpixel information pairs; forming a first curve from the plurality of third panchromatic subpixel information; forming a second curve from the plurality of fourth panchromatic subpixel information; and calculating the phase difference according to the first curve and the second curve to focus.
Specifically, referring to fig. 23, in one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. It should be noted that the first orientation P1 and the second orientation P2 shown in fig. 23 are determined by the distribution of the sub-pixels 102 shown in fig. 23; for other distributions of the sub-pixels 102, the first orientation P1 and the second orientation P2 change accordingly. In each panchromatic pixel of the pixel array 11 of fig. 23, one sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the second orientation P2 of the lens 170. The first panchromatic sub-pixel information is output by the panchromatic sub-pixel W at the first orientation P1 of the lens 170, and the second panchromatic sub-pixel information is output by the panchromatic sub-pixel W at the second orientation P2 of the lens 170. For example, the panchromatic sub-pixels W11,P1, W13,P1, W15,P1, W17,P1, W22,P1, W24,P1, W26,P1, W28,P1, etc. are located at the first orientation P1, and the panchromatic sub-pixels W11,P2, W13,P2, W15,P2, W17,P2, W22,P2, W24,P2, W26,P2, W28,P2, etc. are located at the second orientation P2. A plurality of panchromatic sub-pixels W at the first orientation P1 and a plurality of panchromatic sub-pixels W at the second orientation P2 form a panchromatic sub-pixel pair; correspondingly, the plurality of first panchromatic sub-pixel information and the corresponding plurality of second panchromatic sub-pixel information form a pair of panchromatic sub-pixel information. For example, the plurality of first panchromatic sub-pixel information in one sub-unit and the plurality of second panchromatic sub-pixel information in that sub-unit form a pair: the panchromatic sub-pixel information of W11,P1 and W22,P1 together with that of W11,P2 and W22,P2 forms one pair; the information of W13,P1 and W24,P1 together with that of W13,P2 and W24,P2 forms another pair; the information of W15,P1 and W26,P1 together with that of W15,P2 and W26,P2 forms another pair; the information of W17,P1 and W28,P1 together with that of W17,P2 and W28,P2 forms another pair; and so on. For another example, the plurality of first panchromatic sub-pixel information in one minimal repeating unit and the plurality of second panchromatic sub-pixel information in that unit form a pair: the panchromatic sub-pixel information of W11,P1, W13,P1, W22,P1, W24,P1, W31,P1, W33,P1, W42,P1, W44,P1 together with that of W11,P2, W13,P2, W22,P2, W24,P2, W31,P2, W33,P2, W42,P2, W44,P2 forms one pair of panchromatic sub-pixel information; and so on.
After obtaining the pairs of panchromatic sub-pixel information, the processing chip 20 calculates third panchromatic sub-pixel information from the plurality of first panchromatic sub-pixel information in each pair, and calculates fourth panchromatic sub-pixel information from the plurality of second panchromatic sub-pixel information in each pair. Illustratively, for the pair formed by the panchromatic sub-pixel information of W11,P1 and W22,P1 and that of W11,P2 and W22,P2, the third panchromatic sub-pixel information can be calculated as LT1 = W11,P1 + W22,P1, and the fourth panchromatic sub-pixel information as RB1 = W11,P2 + W22,P2. For the pair formed by the panchromatic sub-pixel information of W11,P1, W13,P1, W22,P1, W24,P1, W31,P1, W33,P1, W42,P1, W44,P1 and that of W11,P2, W13,P2, W22,P2, W24,P2, W31,P2, W33,P2, W42,P2, W44,P2, the third panchromatic sub-pixel information can be calculated as LT1 = (W11,P1 + W13,P1 + W22,P1 + W24,P1 + W31,P1 + W33,P1 + W42,P1 + W44,P1)/8, and the fourth panchromatic sub-pixel information as RB1 = (W11,P2 + W13,P2 + W22,P2 + W24,P2 + W31,P2 + W33,P2 + W42,P2 + W44,P2)/8. The third and fourth panchromatic sub-pixel information of the remaining pairs are calculated similarly and are not described again here. In this manner, the processing chip 20 obtains a plurality of third panchromatic sub-pixel information and a plurality of fourth panchromatic sub-pixel information. The plurality of third panchromatic sub-pixel information can trace one histogram curve (i.e., the first curve), and the plurality of fourth panchromatic sub-pixel information can trace another histogram curve (i.e., the second curve). The processing chip 20 can then calculate the phase difference from the two histogram curves, determine the distance the lens 30 needs to move from the phase difference and pre-calibrated parameters, and control the lens 30 to move by that distance so that the lens 30 reaches the in-focus state.
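In other words, each pairing group is reduced to one left-orientation value and one right-orientation value. The text sums the two-element pair and averages the eight-element group; the sketch below averages in both cases so that the scales stay comparable, which is an assumption, and all names are illustrative:

# Illustrative sketch: reduce one pair of panchromatic sub-pixel
# information to third (LT) and fourth (RB) values by averaging.

def aggregate_pair(first_infos, second_infos):
    """first_infos / second_infos: sub-pixel values from orientations P1 / P2."""
    lt = sum(first_infos) / len(first_infos)    # third panchromatic sub-pixel info
    rb = sum(second_infos) / len(second_infos)  # fourth panchromatic sub-pixel info
    return lt, rb

# e.g. for a sub-unit pairing (hypothetical variable names):
# lt1, rb1 = aggregate_pair([w11_p1, w22_p1], [w11_p2, w22_p2])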
Referring to fig. 1 and 11, in some embodiments, a color pixel includes two color sub-pixels. The color sub-pixel information includes first color sub-pixel information and second color sub-pixel information, output by the color sub-pixel at the first orientation of the lens 170 and the color sub-pixel at the second orientation of the lens 170, respectively. One piece of first color sub-pixel information and a corresponding piece of second color sub-pixel information form a pair of color sub-pixel information. Calculating the phase difference from the color sub-pixel information to perform focusing includes: forming a third curve from the first color sub-pixel information in the pairs of color sub-pixel information; forming a fourth curve from the second color sub-pixel information in the pairs of color sub-pixel information; and calculating the phase difference from the third curve and the fourth curve to perform focusing. This process is similar to the process described above of calculating the phase difference from the panchromatic sub-pixel information alone, and is not described again.
Referring to fig. 1 and 11, in some embodiments, a color pixel includes two color sub-pixels. The color sub-pixel information includes first color sub-pixel information and second color sub-pixel information. The first color sub-pixel information and the second color sub-pixel information are output by the color sub-pixel located in the first orientation of the lens 170 and the color sub-pixel located in the second orientation of the lens 170, respectively. The plurality of first color sub-pixel information and the corresponding plurality of second color sub-pixel information are used as a pair of color sub-pixel information. Calculating a phase difference according to the color sub-pixel information to perform focusing, comprising: calculating third color sub-pixel information according to the plurality of first color sub-pixel information in each pair of color sub-pixel information; calculating fourth color sub-pixel information according to the plurality of second color sub-pixel information in each pair of color sub-pixel information; forming a third curve according to the information of the plurality of third color sub-pixels; forming a fourth curve according to the information of the plurality of fourth color sub-pixels; and calculating the phase difference according to the third curve and the fourth curve to focus. This process is similar to the process described above in which the phase difference is calculated from the full-color sub-pixel information only to perform focusing, and will not be described again.
Referring to fig. 1 and 11, in some embodiments, a panchromatic pixel includes two panchromatic subpixels. The color pixel includes two color sub-pixels. The full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information, and the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information. The first panchromatic subpixel information, the second panchromatic subpixel information, the first color subpixel information, and the second color subpixel information are output by the panchromatic subpixel located in the first orientation of the lens 170, the panchromatic subpixel located in the second orientation of the lens 170, the color subpixel located in the first orientation of the lens 170, and the color subpixel located in the second orientation of the lens 170, respectively. One first panchromatic subpixel information and a corresponding one second panchromatic subpixel information as a pair of panchromatic subpixel information, and one first color subpixel information and a corresponding one second color subpixel information as a pair of color subpixel information. Calculating a phase difference according to the full-color sub-pixel information and the color sub-pixel information to perform focusing, comprising: forming a first curve from first panchromatic subpixel information in the plurality of pairs of panchromatic subpixel information; forming a second curve from second panchromatic subpixel information in the plurality of pairs of panchromatic subpixel information; forming a third curve according to the first color sub-pixel information in the plurality of pairs of color sub-pixel information; forming a fourth curve according to the second color sub-pixel information in the plurality of pairs of color sub-pixel information; and calculating the phase difference according to the first curve, the second curve, the third curve and the fourth curve to focus. This process is similar to the above-described process of calculating the phase difference from the panchromatic subpixel information to perform focusing and calculating the phase difference from the color subpixel information to perform focusing, respectively, and a description thereof will not be provided again.
Referring to fig. 1 and 11, in some embodiments, a panchromatic pixel includes two panchromatic subpixels. The color pixel includes two color sub-pixels. The full-color sub-pixel information includes first full-color sub-pixel information and second full-color sub-pixel information, and the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information. The first panchromatic subpixel information, the second panchromatic subpixel information, the first color subpixel information, and the second color subpixel information are output by the panchromatic subpixel located in the first orientation of the lens 170, the panchromatic subpixel located in the second orientation of the lens 170, the color subpixel located in the first orientation of the lens 170, and the color subpixel located in the second orientation of the lens 170, respectively. The plurality of first panchromatic subpixel information and the corresponding plurality of second panchromatic subpixel information form a pair of panchromatic subpixel information, and the plurality of first color subpixel information and the corresponding plurality of second color subpixel information form a pair of color subpixel information. Calculating a phase difference according to the full-color sub-pixel information and the color sub-pixel information to perform focusing, comprising: calculating third panchromatic subpixel information from the plurality of first panchromatic subpixel information in each pair of panchromatic subpixel information pairs; calculating fourth panchromatic subpixel information from the plurality of second panchromatic subpixel information in each pair of panchromatic subpixel information pairs; calculating third color sub-pixel information according to the plurality of first color sub-pixel information in each pair of color sub-pixel information; calculating fourth color sub-pixel information according to the plurality of second color sub-pixel information in each pair of color sub-pixel information; forming a first curve from the plurality of third panchromatic subpixel information; forming a second curve from the plurality of fourth panchromatic subpixel information; forming a third curve according to the information of the plurality of third color sub-pixels; forming a fourth curve according to the information of the plurality of fourth color sub-pixels; and calculating the phase difference according to the first curve, the second curve, the third curve and the fourth curve to focus. This process is similar to the above-described process of calculating the phase difference from the panchromatic subpixel information to perform focusing and calculating the phase difference from the color subpixel information to perform focusing, respectively, and a description thereof will not be provided again.
Referring to fig. 1 and 24, in some embodiments, the control method further includes:
04: obtaining the ambient brightness;
step 03, in a focus state, controlling the two-dimensional pixel array 11 to expose to acquire a target image, including:
031: when the ambient brightness is greater than a first preset brightness, in an in-focus state, controlling at least two sub-pixels 102 in each panchromatic pixel to be exposed so as to respectively output panchromatic sub-pixel information, and controlling at least two sub-pixels 102 in each color pixel to be exposed so as to respectively output color sub-pixel information;
032: generating the target image from the panchromatic sub-pixel information and the color sub-pixel information.
Referring to fig. 1 and 22, in some embodiments, step 04, step 031, and step 032 can be implemented by the processing chip 20. That is, the processing chip 20 may be configured to: obtain the ambient brightness; when the ambient brightness is greater than the first preset brightness, in the in-focus state, control the at least two sub-pixels 102 in each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information respectively, and control the at least two sub-pixels 102 in each color pixel to be exposed so as to output color sub-pixel information respectively; and generate the target image from the panchromatic sub-pixel information and the color sub-pixel information.
Specifically, referring to fig. 25, taking as an example that each panchromatic pixel includes two panchromatic sub-pixels and each color pixel includes two color sub-pixels, the panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information, output by the panchromatic sub-pixel at the first orientation P1 of the lens 170 and the panchromatic sub-pixel at the second orientation P2 of the lens 170, respectively (the first orientation P1 and the second orientation P2 of the lens 170 are not labeled in fig. 25 and may be identified with reference to the orientations of the lens 170 in fig. 23). In one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. In each panchromatic pixel W, one sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each panchromatic pixel W, the panchromatic sub-pixel W at the first orientation P1 and the panchromatic sub-pixel W at the second orientation P2 are exposed and output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively. For example, the panchromatic sub-pixels W11,P1 and W11,P2 are exposed and output first and second panchromatic sub-pixel information, respectively; the panchromatic sub-pixels W22,P1 and W22,P2 are exposed and output first and second panchromatic sub-pixel information, respectively; and so on.
Similarly, the color sub-pixel information includes first color sub-pixel information and second color sub-pixel information, output by the color sub-pixel at the first orientation P1 of the lens 170 and the color sub-pixel at the second orientation P2 of the lens 170, respectively (the first orientation P1 and the second orientation P2 of the lens 170 are not labeled in fig. 25 and may be identified with reference to the orientations of the lens 170 in fig. 23). In one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. In each color pixel, one sub-pixel 102 (i.e., a color sub-pixel A, B, or C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each color pixel, the color sub-pixel at the first orientation P1 and the color sub-pixel at the second orientation P2 are exposed and output the first color sub-pixel information and the second color sub-pixel information, respectively. For example, the color sub-pixels A12,P1 and A12,P2 are exposed and output first and second color sub-pixel information; the color sub-pixels B14,P1 and B14,P2 are exposed and output first and second color sub-pixel information; the color sub-pixels C34,P1 and C34,P2 are exposed and output first and second color sub-pixel information; and so on.
In the embodiments of the present application, when the ambient brightness is greater than the first predetermined brightness (i.e., under sufficient light), each panchromatic pixel and each color pixel are split into multiple sub-pixels that output sub-pixel information separately, which improves the resolution of the target image. For example, in fig. 25, each panchromatic pixel and each color pixel are split into two sub-pixels, and the lateral resolution of the output target image is doubled. (When each panchromatic pixel and each color pixel are split into four sub-pixels distributed in a 2 × 2 matrix, both the lateral and the longitudinal resolution of the output target image are doubled, and the phase focusing performance is better.)
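A minimal sketch of this full-resolution readout, assuming each pixel's two sub-pixel values arrive as a (P1, P2) pair (the data layout is an assumption, not specified by the text):

# Illustrative sketch: read both sub-pixels of every pixel separately,
# doubling the lateral resolution. sub_pixel_values[r][c] holds the
# (P1, P2) value pair of pixel (r, c).

def split_readout(sub_pixel_values):
    return [[value for pair in row for value in pair]
            for row in sub_pixel_values]

# A 4 x 4 pixel array yields a 4 x 8 image: each (P1, P2) pair
# contributes two adjacent columns.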
Referring to fig. 1 and 24, in some embodiments, the control method further includes: 04: obtaining the ambient brightness;
step 03, in a focus state, controlling the two-dimensional pixel array 11 to expose to acquire a target image, including:
033: when the ambient brightness is less than a second preset brightness, in the in-focus state, controlling the at least two sub-pixels in each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information respectively, and controlling the at least two sub-pixels in each color pixel to be exposed so as to jointly output color merged pixel information;
034: generating the target image from the panchromatic sub-pixel information and the color merged pixel information.
Referring to fig. 1 and 22, in some embodiments, step 04, step 033, and step 034 can be implemented by the processing chip 20. That is, the processing chip 20 may be configured to: obtain the ambient brightness; when the ambient brightness is less than the second preset brightness, in the in-focus state, control the at least two sub-pixels in each panchromatic pixel to be exposed so as to output panchromatic sub-pixel information respectively, and control the at least two sub-pixels in each color pixel to be exposed so as to jointly output color merged pixel information; and generate the target image from the panchromatic sub-pixel information and the color merged pixel information.
Wherein the second predetermined brightness is less than or equal to the first predetermined brightness.
Specifically, referring to fig. 26, taking as an example that each panchromatic pixel includes two panchromatic sub-pixels and each color pixel includes two color sub-pixels, the panchromatic sub-pixel information includes first panchromatic sub-pixel information and second panchromatic sub-pixel information, output by the panchromatic sub-pixel at the first orientation P1 of the lens 170 and the panchromatic sub-pixel at the second orientation P2 of the lens 170, respectively (the first orientation P1 and the second orientation P2 of the lens 170 are not labeled in fig. 26 and may be identified with reference to the orientations of the lens 170 in fig. 23). In one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. In each panchromatic pixel W, one sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each panchromatic pixel W, the panchromatic sub-pixel W at the first orientation P1 and the panchromatic sub-pixel W at the second orientation P2 are exposed and output the first panchromatic sub-pixel information and the second panchromatic sub-pixel information, respectively. For example, the panchromatic sub-pixels W11,P1 and W11,P2 are exposed and output first and second panchromatic sub-pixel information, respectively; the panchromatic sub-pixels W22,P1 and W22,P2 are exposed and output first and second panchromatic sub-pixel information, respectively; and so on.
The color merged pixel information is output jointly by the color sub-pixel at the first orientation P1 of the lens 170 and the color sub-pixel at the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not labeled in fig. 26 and may be identified with reference to the orientations of the lens 170 in fig. 23). In one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. In each color pixel, one sub-pixel 102 (i.e., a color sub-pixel A, B, or C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each color pixel, the color sub-pixel at the first orientation P1 and the color sub-pixel at the second orientation P2 are exposed and their outputs are merged into the color merged pixel information. For example, the color sub-pixels A12,P1 and A12,P2 are exposed and merged to output color merged pixel information; the color sub-pixels B14,P1 and B14,P2 are exposed and merged to output color merged pixel information; the color sub-pixels C34,P1 and C34,P2 are exposed and merged to output color merged pixel information; and so on.
Under dark light, the sensitivity of a panchromatic pixel is far higher than that of a color pixel; if each color pixel were split into multiple sub-pixels that output sub-pixel information separately, the signal-to-noise ratio of the image would be severely reduced. Therefore, in the embodiments of the present application, when the ambient brightness is less than the second predetermined brightness (i.e., in dark light), each panchromatic pixel is split into multiple sub-pixels that output sub-pixel information separately, while the sub-pixels in each color pixel are merged to output merged pixel information, which still improves the resolution of the target image to some extent. For example, in fig. 26, each panchromatic pixel is split into two sub-pixels, and the lateral resolution of the output target image can be 1.5 times that of the original.
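A minimal sketch of this mixed readout, under the same assumed (P1, P2) data layout as above: panchromatic sub-pixels are kept separate while each color pixel's sub-pixels are summed into one merged value.

# Illustrative sketch of the dim-light readout: split panchromatic
# pixels, merge color pixels. pixel_types[r][c] is "W", "A", "B" or "C";
# sub_pixel_values[r][c] is the (P1, P2) value pair of pixel (r, c).

def mixed_readout(pixel_types, sub_pixel_values):
    out = []
    for type_row, value_row in zip(pixel_types, sub_pixel_values):
        out_row = []
        for pixel_type, (p1, p2) in zip(type_row, value_row):
            if pixel_type == "W":
                out_row.extend([p1, p2])   # keep sub-pixel resolution
            else:
                out_row.append(p1 + p2)    # merge for signal-to-noise ratio
        out.append(out_row)
    return out

# With two W pixels and two color pixels per pixel row of fig. 11, each
# output row has 1.5 x as many samples as the pixel row, matching the
# 1.5 x lateral-resolution figure above.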
Referring to fig. 1 and 24, in some embodiments, the control method further includes: 04: obtaining the ambient brightness;
step 03, in a focus state, controlling the two-dimensional pixel array 11 to expose to acquire a target image, including:
035: when the ambient brightness is less than a third preset brightness, in the in-focus state, controlling the at least two sub-pixels in each panchromatic pixel to be exposed so as to jointly output panchromatic merged pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed so as to jointly output color merged pixel information;
036: generating the target image from the panchromatic merged pixel information and the color merged pixel information.
Referring to fig. 1 and 22, in some embodiments, step 04, step 035, and step 036 can be implemented by the processing chip 20. That is, the processing chip 20 may be configured to: obtain the ambient brightness; when the ambient brightness is less than the third preset brightness, in the in-focus state, control the at least two sub-pixels in each panchromatic pixel to be exposed so as to jointly output panchromatic merged pixel information, and control the at least two sub-pixels in each color pixel to be exposed so as to jointly output color merged pixel information; and generate the target image from the panchromatic merged pixel information and the color merged pixel information.
Wherein the third predetermined brightness is less than the second predetermined brightness.
Specifically, referring to fig. 27, taking as an example that each panchromatic pixel includes two panchromatic sub-pixels and each color pixel includes two color sub-pixels, the panchromatic merged pixel information is output jointly by the panchromatic sub-pixel at the first orientation P1 of the lens 170 and the panchromatic sub-pixel at the second orientation P2 of the lens 170 (the first orientation P1 and the second orientation P2 of the lens 170 are not labeled in fig. 27 and may be identified with reference to the orientations of the lens 170 in fig. 23). In one example, the first orientation P1 of each lens 170 is the position corresponding to the left half of the lens 170, and the second orientation P2 is the position corresponding to the right half of the lens 170. In each panchromatic pixel W, one sub-pixel 102 (i.e., a panchromatic sub-pixel W) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each panchromatic pixel W, the panchromatic sub-pixel W at the first orientation P1 and the panchromatic sub-pixel W at the second orientation P2 are exposed and their outputs are merged into the panchromatic merged pixel information. For example, the panchromatic sub-pixels W11,P1 and W11,P2 are exposed and merged to output panchromatic merged pixel information; the panchromatic sub-pixels W22,P1 and W22,P2 are exposed and merged to output panchromatic merged pixel information; and so on.
The color merged pixel information is obtained by merging the outputs of the color sub-pixel located at the first orientation P1 of the lens 170 and the color sub-pixel located at the second orientation P2 of the lens 170 (again, the first orientation P1 and the second orientation P2 are not marked in fig. 27 and can be understood with reference to fig. 23). In each color pixel, one sub-pixel 102 (i.e., color sub-pixel A, color sub-pixel B, or color sub-pixel C) is located at the first orientation P1 of the lens 170, and the other sub-pixel 102 is located at the second orientation P2 of the lens 170. In each color pixel, the color sub-pixel located at the first orientation P1 of the lens 170 and the color sub-pixel located at the second orientation P2 of the lens 170 are exposed and merged to output color merged pixel information. For example, the color sub-pixels at A12, P1 and A12, P2 are exposed and merged to output one piece of color merged pixel information, the color sub-pixels at B14, P1 and B14, P2 are exposed and merged to output another piece, the color sub-pixels at C34, P1 and C34, P2 are exposed and merged to output yet another piece, and so on.
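In the sensor itself this merging happens at the charge/readout level; purely as a software analogue (the function and array names are hypothetical), the per-pixel merge of the P1 and P2 samples could be sketched as:

    import numpy as np

    def merge_subpixels(samples_p1, samples_p2):
        # samples_p1 / samples_p2: 2-D arrays holding, for every pixel, the
        # digitized value of its sub-pixel at the first orientation P1 and at
        # the second orientation P2 of the lens 170, respectively.
        assert samples_p1.shape == samples_p2.shape
        # Summing the two samples stands in for combining the charges of the
        # two sub-pixels into a single merged pixel value (fig. 27 readout).
        return samples_p1.astype(np.uint32) + samples_p2.astype(np.uint32)

For two horizontally split sub-pixels read out side by side in one raw array, merge_subpixels(raw[:, 0::2], raw[:, 1::2]) would halve the lateral sample count, which is exactly the trade of resolution for signal described below.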
Under extremely dark conditions, the signals of both the panchromatic pixels and the color pixels are low, and there is little benefit in pursuing image resolution. Therefore, in the embodiment of the present application, when the ambient brightness is less than the third predetermined brightness (i.e., under extremely dark light), the plurality of sub-pixels in each panchromatic pixel and the plurality of sub-pixels in each color pixel are merged to output merged pixel information, so as to increase the signal amount and the signal-to-noise ratio.
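A back-of-the-envelope justification of that gain (standard shot-noise reasoning; the symbols below are ours, not the application's): with a mean signal of $S$ photo-electrons per sub-pixel and a read noise of $\sigma_r$ per readout,

$$\mathrm{SNR}_{\mathrm{sub}}=\frac{S}{\sqrt{S+\sigma_r^{2}}},\qquad \mathrm{SNR}_{\mathrm{merged}}=\frac{2S}{\sqrt{2S+\sigma_r^{2}}},$$

so merging the two sub-pixels improves the signal-to-noise ratio by about $\sqrt{2}$ in the shot-noise limit, and by up to a factor of 2 at very low signal where read noise dominates, since the charges are combined before a single readout.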
After the two-dimensional pixel array 11 is exposed and outputs an original image containing the panchromatic sub-pixel information and the color sub-pixel information (as shown in fig. 25), an original image containing the panchromatic sub-pixel information and the color merged pixel information (as shown in fig. 26), or an original image containing the panchromatic merged pixel information and the color merged pixel information (as shown in fig. 27), the processing chip 20 may further apply a corresponding demosaicing algorithm (e.g., a bilinear interpolation algorithm) to the original image at each of these three resolutions, so as to fill in the pixel information or sub-pixel information missing from each channel (e.g., the red channel, the green channel, and the blue channel), maintain a complete representation of the image colors, and finally obtain a target image with colors. In one example, in the process of obtaining the target image from the original image, the processing chip 20 may further perform any one or more of black level correction, lens shading correction, dead pixel compensation, color correction, global tone mapping, and color conversion on the original image to obtain a better image effect.
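Purely as one plausible reading of that demosaicing step (a minimal sketch assuming per-channel normalized bilinear interpolation; the application names only "a bilinear interpolation algorithm", so the kernel and helper below are our assumptions):

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_fill(channel, mask):
        # channel: 2-D array holding one color channel of the original image,
        #          valid only at the positions where mask is True.
        # mask:    2-D bool array marking which positions were actually sampled.
        # Classic 3x3 bilinear kernel: direct neighbours weighted 0.5,
        # diagonal neighbours weighted 0.25.
        k = np.array([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5],
                      [0.25, 0.5, 0.25]])
        num = convolve(channel.astype(float) * mask, k, mode="mirror")
        den = convolve(mask.astype(float), k, mode="mirror")
        out = channel.astype(float)
        holes = ~mask
        # Each missing sample becomes the distance-weighted mean of its
        # sampled neighbours (normalized convolution).
        out[holes] = num[holes] / np.maximum(den[holes], 1e-12)
        return out

Running bilinear_fill once per channel, with the mask derived from the panchromatic/color layout of the chosen readout mode, would reconstruct a full-resolution plane for each channel before the later correction steps.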
Referring to fig. 1, fig. 22 and fig. 28, the present application further provides a mobile terminal 90. The mobile terminal 90 according to the embodiment of the present application may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (such as a smart watch, a smart bracelet, smart glasses, or a smart helmet), a head-mounted display device, a virtual reality device, and the like, which is not limited herein. The mobile terminal 90 of the embodiment of the present application includes the camera assembly 40 of any of the above embodiments, a processor 60, a memory 70, and a housing 80. The camera assembly 40, the processor 60, and the memory 70 are all mounted in the housing 80. The image sensor 10 in the camera assembly 40 is connected to the processor 60. The processor 60 may perform the same functions as the processing chip 20 in the camera assembly 40; in other words, the processor 60 may implement the functions that the processing chip 20 of any of the above embodiments can implement. The memory 70 is connected to the processor 60 and can store data processed by the processor 60, such as the target image. The processor 60 may be mounted on the same substrate as the image sensor 10, in which case the image sensor 10 and the processor 60 may be regarded as the camera assembly 40. Of course, the processor 60 may also be mounted on a different substrate from the image sensor 10.
In the mobile terminal 90 according to the embodiment of the present application, the two-dimensional pixel array 11 includes both a plurality of color pixels and a plurality of panchromatic pixels. Compared with an ordinary color sensor, this increases the amount of light received and yields a better signal-to-noise ratio, and because the sub-pixels 102 have higher sensitivity, the focusing performance under dark light is also better. In addition, each pixel 101 includes at least two sub-pixels 102, which can improve the resolution of the image sensor 10 while enabling phase focusing.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided that they do not contradict each other.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application pertain.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (19)

1. An image sensor, comprising:
a two-dimensional array of pixels comprising a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels; the two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are disposed in a first diagonal direction and the color pixels are disposed in a second diagonal direction, the first diagonal direction being different from the second diagonal direction; wherein each pixel comprises at least two sub-pixels; and
a lens array comprising a plurality of lenses, each lens covering one of the pixels.
2. The image sensor of claim 1, wherein each of the pixels comprises two of the sub-pixels, and a boundary between the two sub-pixels is parallel to a length direction of the two-dimensional pixel array; or
a boundary between the two sub-pixels is parallel to a width direction of the two-dimensional pixel array.
3. The image sensor according to claim 1, wherein each of the pixels includes two of the sub-pixels, and a boundary between the two sub-pixels is inclined with respect to a length direction or a width direction of the two-dimensional pixel array.
4. The image sensor of claim 1, wherein each of the pixels comprises two of the sub-pixels; in some of the pixels, a boundary between the two sub-pixels is parallel to a length direction of the two-dimensional pixel array, and in others of the pixels, a boundary between the two sub-pixels is parallel to a width direction of the two-dimensional pixel array.
5. The image sensor according to claim 4, wherein the pixels in which the boundary between the two sub-pixels is parallel to the length direction of the two-dimensional pixel array and the pixels in which the boundary between the two sub-pixels is parallel to the width direction of the two-dimensional pixel array are alternately distributed in rows or in columns.
6. The image sensor according to claim 4, wherein the pixels in which the boundary between the two sub-pixels is parallel to the length direction of the two-dimensional pixel array and the pixels in which the boundary between the two sub-pixels is parallel to the width direction of the two-dimensional pixel array are distributed alternately.
7. The image sensor of claim 1, wherein each of the pixels comprises four of the sub-pixels, and the four sub-pixels are arranged in a 2 x 2 matrix.
8. The image sensor of claim 1, further comprising a filter array comprising a plurality of filters, each of the filters covering one of the pixels.
9. The image sensor according to claim 1, wherein each of the pixels includes two of the sub-pixels, one of the sub-pixels includes a first photoelectric conversion element, and the other of the sub-pixels includes a second photoelectric conversion element; each of the pixels further includes a first exposure control circuit and a second exposure control circuit, the first exposure control circuit being connected to the first photoelectric conversion element and configured to transfer the electric charge generated by the first photoelectric conversion element upon receiving light so as to output pixel information, and the second exposure control circuit being connected to the second photoelectric conversion element and configured to transfer the electric charge generated by the second photoelectric conversion element upon receiving light so as to output pixel information.
10. The image sensor of claim 9, wherein each of the pixels further comprises a reset circuit, the reset circuit being simultaneously connected to the first exposure control circuit and the second exposure control circuit;
when the two sub-pixels are exposed so as to respectively output pixel information, or so as to respectively output pixel information and output phase information, the first exposure control circuit transfers the electric charge generated after the first photoelectric conversion element receives light to output first sub-pixel information, and after the reset circuit performs a reset, the second exposure control circuit transfers the electric charge generated after the second photoelectric conversion element receives light to output second sub-pixel information;
when the two sub-pixels are exposed so as to output merged pixel information, the first exposure control circuit transfers the electric charge generated after the first photoelectric conversion element receives light, and the second exposure control circuit transfers the electric charge generated after the second photoelectric conversion element receives light, so as to output merged pixel information;
when the two sub-pixels are exposed so as to output merged pixel information and output phase information, the first exposure control circuit transfers the electric charge generated after the first photoelectric conversion element receives light to output first sub-pixel information; after the reset circuit performs a reset, the second exposure control circuit transfers the electric charge generated after the second photoelectric conversion element receives light to output second sub-pixel information; and the first sub-pixel information and the second sub-pixel information are used to be merged into the merged pixel information.
11. The image sensor of claim 10, wherein, when the two sub-pixels are exposed so as to output merged pixel information and output phase information, the image sensor further comprises a buffer configured to store the first sub-pixel information and the second sub-pixel information so as to output the phase information.
12. A control method for an image sensor comprising a two-dimensional array of pixels comprising a plurality of color pixels and a plurality of panchromatic pixels, the color pixels having a narrower spectral response than the panchromatic pixels, and a lens array; the two-dimensional pixel array includes a minimal repeating unit in which the panchromatic pixels are disposed in a first diagonal direction and the color pixels are disposed in a second diagonal direction, the first diagonal direction being different from the second diagonal direction; wherein each pixel comprises at least two sub-pixels; the lens array comprises a plurality of lenses, each lens covering one of the pixels; the control method comprises the following steps:
controlling the at least two sub-pixels to be exposed so as to output at least two pieces of sub-pixel information;
determining phase information according to the at least two pieces of sub-pixel information so as to perform focusing; and
in the in-focus state, controlling the two-dimensional pixel array to be exposed so as to acquire a target image.
13. The control method according to claim 12, characterized by further comprising:
obtaining the ambient brightness;
wherein controlling, in the in-focus state, the two-dimensional pixel array to be exposed so as to acquire the target image includes:
when the ambient brightness is greater than a first predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels in each panchromatic pixel to be exposed so as to respectively output panchromatic sub-pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed so as to respectively output color sub-pixel information; and
generating the target image according to the panchromatic sub-pixel information and the color sub-pixel information.
14. The control method according to claim 12, characterized by further comprising:
obtaining the ambient brightness;
wherein controlling, in the in-focus state, the two-dimensional pixel array to be exposed so as to acquire the target image includes:
when the ambient brightness is less than a second predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels in each panchromatic pixel to be exposed so as to respectively output panchromatic sub-pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed and merged so as to output color merged pixel information; and
generating the target image according to the panchromatic sub-pixel information and the color merged pixel information.
15. The control method according to claim 12, characterized by further comprising:
obtaining the ambient brightness;
wherein controlling, in the in-focus state, the two-dimensional pixel array to be exposed so as to acquire the target image includes:
when the ambient brightness is less than a third predetermined brightness, controlling, in the in-focus state, the at least two sub-pixels in each panchromatic pixel to be exposed and merged so as to output panchromatic merged pixel information, and controlling the at least two sub-pixels in each color pixel to be exposed and merged so as to output color merged pixel information; and
generating the target image according to the panchromatic merged pixel information and the color merged pixel information.
16. A camera assembly, comprising:
a lens; and
the image sensor of any one of claims 1-11, the image sensor being capable of receiving light passing through the lens.
17. The camera assembly according to claim 16, further comprising a processing chip configured to implement the control method of any one of claims 12-15.
18. A mobile terminal, comprising:
a housing; and
the camera assembly according to claim 16 or 17, the camera assembly being mounted on the housing.
19. A mobile terminal, comprising:
a housing; and
the camera assembly of claim 16, the camera assembly being mounted on the housing;
the mobile terminal further comprises a processor for implementing the control method of any one of claims 12-15.
CN202010377218.4A 2020-05-07 2020-05-07 Image sensor, control method, camera assembly and mobile terminal Pending CN111586323A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010377218.4A CN111586323A (en) 2020-05-07 2020-05-07 Image sensor, control method, camera assembly and mobile terminal
PCT/CN2021/088404 WO2021223590A1 (en) 2020-05-07 2021-04-20 Image sensor, control method, camera assembly, and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010377218.4A CN111586323A (en) 2020-05-07 2020-05-07 Image sensor, control method, camera assembly and mobile terminal

Publications (1)

Publication Number Publication Date
CN111586323A true CN111586323A (en) 2020-08-25

Family

ID=72126248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010377218.4A Pending CN111586323A (en) 2020-05-07 2020-05-07 Image sensor, control method, camera assembly and mobile terminal

Country Status (2)

Country Link
CN (1) CN111586323A (en)
WO (1) WO2021223590A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11683604B1 * 2022-02-23 2023-06-20 Omnivision Technologies, Inc. Circuit and method for image artifact reduction in high-density, high pixel-count, image sensor with phase detection autofocus

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104718745A (en) * 2012-09-27 2015-06-17 株式会社尼康 Image pick-up element and image pick-up device
CN104978920A (en) * 2015-07-24 2015-10-14 京东方科技集团股份有限公司 Pixel array, display device and display method thereof
CN107124536A (en) * 2017-04-28 2017-09-01 广东欧珀移动通信有限公司 Double-core focus image sensor and its focusing control method and imaging device
WO2019082568A1 (en) * 2017-10-24 2019-05-02 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device and electronic apparatus
CN110649056A (en) * 2019-09-30 2020-01-03 Oppo广东移动通信有限公司 Image sensor, camera assembly and mobile terminal
CN110740272A (en) * 2019-10-31 2020-01-31 Oppo广东移动通信有限公司 Image acquisition method, camera assembly and mobile terminal
CN110913152A (en) * 2019-11-25 2020-03-24 Oppo广东移动通信有限公司 Image sensor, camera assembly and mobile terminal
CN110996077A (en) * 2019-11-25 2020-04-10 Oppo广东移动通信有限公司 Image sensor, camera assembly and mobile terminal
CN111031297A (en) * 2019-12-02 2020-04-17 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8139130B2 (en) * 2005-07-28 2012-03-20 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
CN103460702B (en) * 2011-03-24 2015-01-07 富士胶片株式会社 Color image capturing element and image capturing device
CN107040724B (en) * 2017-04-28 2020-05-15 Oppo广东移动通信有限公司 Dual-core focusing image sensor, focusing control method thereof and imaging device
CN111586323A (en) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal
CN111464733B (en) * 2020-05-22 2021-10-01 Oppo广东移动通信有限公司 Control method, camera assembly and mobile terminal

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021223590A1 (en) * 2020-05-07 2021-11-11 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly, and mobile terminal
WO2022073364A1 (en) * 2020-10-09 2022-04-14 Oppo广东移动通信有限公司 Image obtaining method and apparatus, terminal, and computer readable storage medium
EP4216534A4 (en) * 2020-10-09 2024-01-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image obtaining method and apparatus, terminal, and computer readable storage medium
CN112235494A (en) * 2020-10-15 2021-01-15 Oppo广东移动通信有限公司 Image sensor, control method, imaging apparatus, terminal, and readable storage medium
CN114697585A (en) * 2020-12-31 2022-07-01 杭州海康威视数字技术股份有限公司 Image sensor, image processing system and image processing method
WO2022141349A1 (en) * 2020-12-31 2022-07-07 Oppo广东移动通信有限公司 Image processing pipeline, image processing method, camera assembly, and electronic device
CN114697585B (en) * 2020-12-31 2023-12-29 杭州海康威视数字技术股份有限公司 Image sensor, image processing system and image processing method
CN113178457A (en) * 2021-04-12 2021-07-27 维沃移动通信有限公司 Pixel structure and image sensor
CN113676708B (en) * 2021-07-01 2023-11-14 Oppo广东移动通信有限公司 Image generation method, device, electronic equipment and computer readable storage medium
CN113676708A (en) * 2021-07-01 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and computer-readable storage medium
CN113676675A (en) * 2021-08-16 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and computer-readable storage medium
CN113676675B (en) * 2021-08-16 2023-08-15 Oppo广东移动通信有限公司 Image generation method, device, electronic equipment and computer readable storage medium
WO2023020527A1 (en) * 2021-08-19 2023-02-23 维沃移动通信(杭州)有限公司 Image processing method and apparatus, electronic device, and readable storage medium
WO2023087908A1 (en) * 2021-11-22 2023-05-25 Oppo广东移动通信有限公司 Focusing control method and apparatus, image sensor, electronic device, and computer readable storage medium
WO2023098230A1 (en) * 2021-12-01 2023-06-08 Oppo广东移动通信有限公司 Image sensor, camera module, electronic device, and image generation method and apparatus

Also Published As

Publication number Publication date
WO2021223590A1 (en) 2021-11-11

Similar Documents

Publication Publication Date Title
WO2021223590A1 (en) Image sensor, control method, camera assembly, and mobile terminal
CN110649056B (en) Image sensor, camera assembly and mobile terminal
US9736447B2 (en) Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
CN111405204B (en) Image acquisition method, imaging device, electronic device, and readable storage medium
CN110649057B (en) Image sensor, camera assembly and mobile terminal
CN111447380B (en) Control method, camera assembly and mobile terminal
WO2021196553A1 (en) High-dynamic-range image processing system and method, electronic device and readable storage medium
CN111385543B (en) Image sensor, camera assembly, mobile terminal and image acquisition method
US20230086743A1 (en) Control method, camera assembly, and mobile terminal
CN112235494B (en) Image sensor, control method, imaging device, terminal, and readable storage medium
CN110784634B (en) Image sensor, control method, camera assembly and mobile terminal
US20220150450A1 (en) Image capturing method, camera assembly, and mobile terminal
CN111263129A (en) Image sensor, camera assembly and mobile terminal
CN114008781A (en) Image sensor, camera assembly and mobile terminal
US20220279108A1 (en) Image sensor and mobile terminal
CN114424517B (en) Image sensor, control method, camera component and mobile terminal
CN114041208A (en) Image sensor, camera assembly and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200825)