WO2021174425A1 - Image sensor and image light-sensing method - Google Patents

Image sensor and image light-sensing method

Info

Publication number
WO2021174425A1
WO2021174425A1 (application PCT/CN2020/077652)
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
pixel
light
filter layer
infrared light
Prior art date
Application number
PCT/CN2020/077652
Other languages
English (en)
French (fr)
Inventor
陈健沛
蓝晶
许聪
郭家诚
西泽真人
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20922912.9A (EP4109894A4)
Priority to JP2022552945A (JP2023516410A)
Priority to PCT/CN2020/077652 (WO2021174425A1)
Priority to CN202080097816.4A (CN115462066A)
Publication of WO2021174425A1
Priority to US17/901,965 (US20230005240A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/131Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing infrared wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/135Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/58Control of the dynamic range involving two or more exposures
    • H04N25/581Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N25/583Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/702SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10144Varying exposure

Definitions

  • This application relates to image processing technology, in particular to an image sensor and an image light-sensing method.
  • The color filter array (Color Filter Array, CFA) of a traditional Bayer red-green-blue sensor includes three types of pixels: R, G, and B. Each pixel is sensitive to one of the R, G, and B colors, and the pixels are arranged to form a mosaic color image.
  • Figure 1 shows a schematic diagram of a typical Bayer RGB CFA. The array is arranged in 2 × 2 units: in the smallest repeating unit, the first row includes an R pixel and a G pixel, and the second row includes a G pixel and a B pixel.
  • A red-green-blue-infrared (RGBIR) sensor replaces some of the R, G, or B pixels of an RGB sensor with IR pixels, which are likewise arranged to form a mosaic color image.
  • Figure 2 shows a schematic diagram of a typical RGBIR CFA. As shown in Figure 2, the array is arranged in 2 × 2 units: the first row of the smallest repeating unit includes an R pixel and a G pixel, and the second row includes an IR pixel and a B pixel.
  • Figure 3 shows a schematic diagram of another typical RGBIR CFA. As shown in Figure 3, the array adopts a 4 × 4 arrangement: the first row of the smallest repeating unit includes an R pixel, a G pixel, a B pixel, and a G pixel; the second row includes a G pixel, an IR pixel, a G pixel, and an IR pixel; the third row includes a B pixel, a G pixel, an R pixel, and a G pixel; and the fourth row includes a G pixel, an IR pixel, a G pixel, and an IR pixel.
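  • The CFA layouts described above can be sketched in code. This is an illustrative reconstruction of the minimal repeating units from Figures 1-3 (the names `BAYER_2X2`, `RGBIR_2X2`, `RGBIR_4X4`, and `tile_cfa` are ours, not from the patent); tiling one unit across the sensor yields the full mosaic:

```python
# Minimal repeating units of the CFAs described above (Figures 1-3).
BAYER_2X2 = [
    ["R", "G"],
    ["G", "B"],
]

RGBIR_2X2 = [
    ["R", "G"],
    ["IR", "B"],
]

RGBIR_4X4 = [
    ["R", "G", "B", "G"],
    ["G", "IR", "G", "IR"],
    ["B", "G", "R", "G"],
    ["G", "IR", "G", "IR"],
]

def tile_cfa(unit, rows, cols):
    """Repeat a minimal CFA unit to cover a rows x cols pixel array."""
    h, w = len(unit), len(unit[0])
    return [[unit[r % h][c % w] for c in range(cols)] for r in range(rows)]

mosaic = tile_cfa(BAYER_2X2, 4, 4)
# mosaic[0] == ["R", "G", "R", "G"]
```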
  • Even if the infrared light reaching a visible light pixel is filtered out, infrared light from adjacent IR pixels will still crosstalk into the visible light pixel, degrading the visible light pixel's signal-to-noise ratio.
  • To address this, the present application provides an image sensor and an image light-sensing method that reduce the light crosstalk of small pixels into large pixels and improve the signal-to-noise ratio of the large pixels while preserving sufficient color information.
  • The present application provides an image sensor including red pixels, green pixels, blue pixels, and non-visible light pixels. The red, green, and blue pixels are large pixels, the non-visible light pixels are small pixels, and the photosensitive area of a large pixel is larger than that of a small pixel. The red, green, and blue pixels are arranged in the Bayer format.
  • The image sensor provided in this application uses a pixel array composed of large pixels and small pixels. Because the large pixels use a Bayer RGB CFA, they are fully consistent with a traditional RGB sensor, and because the photosensitive area of a large pixel exceeds that of a small pixel, the color information obtained from the large pixels is not reduced and the resolution of the visible light image is not reduced. Existing demosaicing algorithms can be reused directly and embedded seamlessly in an existing image signal processor (Image Signal Processor, ISP). In addition, since a large pixel's photosensitive area is larger than a small pixel's, its photosensitivity is correspondingly greater.
  • In a possible implementation, the non-visible light pixels include infrared light pixels or white pixels; a white pixel is used to sense white light, where white light includes red light, green light, blue light, and infrared light.
  • In a possible implementation, every four large pixels surround one small pixel, and every four small pixels surround one large pixel.
  • In a possible implementation, the areas of the large pixels and the small pixels are set according to the crosstalk accuracy requirement of the image sensor.
  • The distance between the center points of adjacent pixels in the pixel array is fixed; this pitch depends on the size and manufacturing process of the image sensor.
  • The areas of the large and small pixels can be set according to the crosstalk accuracy requirement of the image sensor: to suppress crosstalk, the ratio of the large-pixel area to the small-pixel area should be as large as possible.
  • On the other hand, the larger the area of the large pixels, the more accurate the color information of the resulting image; and the larger the area of the small pixels, the more detailed the image information obtained. So although a large area ratio is desired, the absolute areas of both the large and the small pixels are still expected to be as large as possible.
  • Balancing the foregoing factors, the areas of the large and small pixels can be set in advance.
  • Setting the areas of the large and small pixels can equivalently be expressed as setting the spacing between their adjacent sides: the larger the areas, the smaller the spacing. Since the chosen areas (or, equivalently, the spacing) take all of the above factors into account, a pixel array configured this way can achieve higher resolution.
  • In this way, the color information can be supplemented with sufficient detail information, which helps improve the quality of the image finally produced by the image sensor.
  • In a possible implementation, the red, green, and blue pixels each correspond to an infrared cut filter layer, which is used to cut off optical signals with wavelengths greater than a first preset wavelength; such signals include infrared light.
  • In the image sensor provided by this application, an infrared cut filter layer is coated on the micro lenses corresponding to the R, G, and B pixels. It cuts off the infrared light reaching these pixels and removes the IR signal from the visible light pixels' light-sensing results, so the resulting colors are more accurate and the sensor's light-sensing performance is improved.
  • Because the infrared cut filter layer can be applied to the micro lens using coating technology, no complicated mechanical structure needs to be added, and the structure of the pixel itself beneath the micro lens is unchanged. A relatively simple and stable internal pixel structure makes it easier to control the chief ray angle (Chief Ray Angle, CRA) and other factors that affect imaging, improving the sensor's light-sensing performance while keeping the pixel structure stable.
  • the first preset wavelength is 650nm.
  • In this way, the infrared cut filter layer cuts off all light with wavelengths beyond the visible range, ensuring that infrared light of any wavelength cannot enter the red, green, or blue pixels.
  • In a possible implementation, the sensor further includes a filter layer comprising a red filter layer, a green filter layer, and a blue filter layer. Each red pixel corresponds to a red filter layer, which passes red light and infrared light in a first wavelength range; each green pixel corresponds to a green filter layer, which passes green light and infrared light in a second wavelength range; each blue pixel corresponds to a blue filter layer, which passes blue light and infrared light in a third wavelength range. The wavelengths of the infrared light in the first, second, and third wavelength ranges are all greater than the first preset wavelength. When the non-visible light pixels are infrared light pixels, the filter layer further includes an infrared light filter layer: each infrared light pixel corresponds to an infrared light filter layer, which passes infrared light in a specific wavelength range. When the non-visible light pixels are white pixels, the filter layer further includes an all-pass filter layer or a dual-pass filter layer: each white pixel corresponds to one of them, where the all-pass filter layer passes light across the full wavelength range and the dual-pass filter layer passes the red light, the green light, the blue light, and the infrared light in the specific wavelength range.
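  • The per-pixel filter stacks described above can be summarized as a small lookup table. This is our illustrative encoding of the description (the dictionary and function names are assumptions; "650 nm" is the first preset wavelength given elsewhere in the text as an example):

```python
# Which filter layers each pixel type carries, per the description above.
# The IR-cut layer blocks wavelengths above the first preset wavelength
# (e.g. 650 nm); each color layer passes its color plus some IR that the
# IR-cut layer then removes.

FILTER_STACK = {
    "R":  ["red filter", "IR-cut filter"],
    "G":  ["green filter", "IR-cut filter"],
    "B":  ["blue filter", "IR-cut filter"],
    "IR": ["IR band-pass filter"],           # passes IR in a specific range
    "W":  ["all-pass or dual-pass filter"],  # passes R, G, B and IR
}

def passes_infrared(pixel_type):
    """A pixel senses IR only if no IR-cut layer is in its stack."""
    return "IR-cut filter" not in FILTER_STACK[pixel_type]

# Only the IR and W pixels sense infrared; R, G and B do not.
```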
  • In a possible implementation, the infrared cut filter layer and/or the filter layer are coated on the micro lens corresponding to each pixel.
  • For example, a red filter layer and an infrared cut filter layer are coated over each R pixel to filter the IR component out of the R pixel's light-sensing result, so that the R pixel receives only red light.
  • Similarly, each G pixel is coated with a green filter layer and an infrared cut filter layer, and each B pixel is coated with a blue filter layer and an infrared cut filter layer, filtering the IR component out of the G and B pixels' light-sensing results so that the G pixel receives only green light and the B pixel receives only blue light.
  • Each IR pixel is coated with an infrared filter layer so that it receives only infrared light; alternatively, each W pixel is coated with an all-pass filter layer or a dual-pass filter layer so that it receives white light.
  • In a possible implementation, the red filter layer is above or below the infrared cut filter layer, and likewise for the green and blue filter layers. This application does not limit the order in which the infrared cut filter layer and the red, green, and blue filter layers are coated on the micro lens.
  • For example, the micro lens of each red pixel is coated with a red filter layer and an infrared cut filter layer; the micro lens of each green pixel with a green filter layer and an infrared cut filter layer; the micro lens of each blue pixel with a blue filter layer and an infrared cut filter layer; and the micro lens of each infrared light pixel with an infrared light filter layer. The positional relationship between the infrared cut filter layer and the red, green, and blue filter layers on the micro lens is not limited: the color filter layers may each be coated on top of the infrared cut filter layer, or the infrared cut filter layer may be coated on top of each color filter layer, as long as the light passes through both the infrared cut filter layer and the corresponding visible-light filter layer before reaching the micro lens.
  • Alternatively, the infrared cut filter layer is coated on the micro lens, while the red, green, and blue filter layers are coated on the inner side of the micro lens or fabricated on the red, green, and blue pixels respectively.
  • In a possible implementation, the sensor further includes a logic control circuit for separately controlling the exposure times of the large pixels and the small pixels.
  • The exposure times of the large and small pixels are controlled independently. For example, when the infrared light is too strong and the visible light is too weak, the exposure time of the large pixels can be increased while that of the small pixels is reduced, balancing the imaging brightness of visible light against that of non-visible light (infrared or white light). This widens the dynamic range of the image sensor and helps meet requirements on clarity, signal-to-noise ratio, and other image-quality metrics.
  • In a possible implementation, the logic control circuit includes a first control line and a second control line; the large pixels are coupled to the first control line and the small pixels to the second control line. The logic control circuit controls the exposure start time of the large pixels via the first control line, and the exposure start time of the small pixels via the second control line.
  • In a possible implementation, the logic control circuit is further configured to make the exposure times of the large and small pixels meet a preset ratio, based on the first and second control lines.
  • The first control line outputs a first control signal, and the second control line outputs a second control signal. When the first effective transition edge of the first control signal arrives, the large pixels start exposing; when the second effective transition edge of the second control signal arrives, the small pixels start exposing. By setting the arrival times of the first and second effective transition edges, the exposure times of the large and small pixels are made to meet the preset ratio.
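  • The ratio-setting scheme above can be sketched with simple arithmetic. This is our illustration, assuming (as described elsewhere in the text) that all pixels stop exposing at a common end time, so the two effective transition edges (start times) alone fix the large:small exposure-time ratio:

```python
# Exposure-ratio control sketch: with a shared exposure end time, choosing
# the two transition-edge (start) times sets the large:small exposure ratio.

def start_times(end_time, large_exposure, ratio):
    """Given a common exposure end time, a large-pixel exposure time, and a
    desired large:small exposure ratio, return the two start times."""
    small_exposure = large_exposure / ratio
    return end_time - large_exposure, end_time - small_exposure

t_large, t_small = start_times(end_time=100.0, large_exposure=40.0, ratio=4.0)
# Large pixels start at t=60.0 and small pixels at t=90.0, giving
# exposures of 40 and 10 time units: a 4:1 ratio.
```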
  • In a possible implementation, the sensor further includes a filter for blocking ultraviolet light and infrared light with wavelengths greater than a second preset wavelength, where the second preset wavelength is greater than the first preset wavelength and greater than any wavelength within the specific wavelength range.
  • The filter removes the longer-wavelength far-infrared light and the shorter-wavelength ultraviolet light from natural light, preventing them from affecting the photosensitive characteristics of the photosensitive devices.
  • In a possible implementation, the exposure times of the R, G, B, and IR pixels can be preset to meet a preset ratio, achieving fine control of the image sensor's light-sensing behavior.
  • In a possible implementation, the sensor further includes a row coordinate control line, a column coordinate control line, and an exposure start control line. Each pixel in the pixel array is coupled to its own row and column coordinate control lines, and the exposure start control line includes multiple branches, one per pixel. When the control signals output by a target pixel's row and column coordinate control lines are both at effective levels, the branch of the exposure start control line corresponding to that pixel outputs a control signal to set the pixel's exposure start time; the target pixel may be any pixel in the pixel array.
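  • The per-pixel addressing just described is effectively a logical AND of the row and column lines. The sketch below is our illustration of that behavior (function names and the set-based signal representation are assumptions, not from the patent):

```python
# Per-pixel exposure-start addressing sketch: a pixel's exposure-start
# branch fires only when both its row line and its column line are at an
# effective level (a logical AND).

def exposure_start_enabled(row_active, col_active):
    """The branch outputs a start signal only when both lines are active."""
    return row_active and col_active

def pixels_starting(active_rows, active_cols, n_rows, n_cols):
    """All (row, col) targets whose exposure starts this cycle."""
    return [(r, c) for r in range(n_rows) for c in range(n_cols)
            if exposure_start_enabled(r in active_rows, c in active_cols)]

# Activating row 1 and columns {0, 2} starts exposure for exactly the
# pixels at (1, 0) and (1, 2).
```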
  • In another aspect, the present application provides an image light-sensing method applied to an image sensor that includes red pixels, green pixels, blue pixels, and non-visible light pixels, where the red, green, and blue pixels are large pixels, the non-visible light pixels are small pixels, the photosensitive area of a large pixel is larger than that of a small pixel, and the red, green, and blue pixels are arranged in the Bayer format. The method includes: sensing red light with the red pixels; sensing green light with the green pixels; sensing blue light with the blue pixels; and sensing infrared or white light with the small pixels.
  • The device may be a sensor including a control unit; for example, the sensor is an RGBIR sensor.
  • The at least two types of pixels include visible light pixels and IR pixels, where the visible light pixels include R, G, and B pixels; that is, the pixel array includes R, G, B, and IR pixels.
  • In a possible implementation, the at least two control units include a first control unit, a second control unit, a third control unit, and a fourth control unit, which control the exposure start times of the R pixels, the G pixels, the B pixels, and the IR pixels respectively.
  • Alternatively, the at least two control units include a first control unit, a second control unit, a third control unit, and a fourth control unit, which control the exposure start times of the R pixels, the G pixels, the B pixels, and the W pixels respectively.
  • the device further includes: an exposure end control unit, configured to uniformly control the exposure end time of all pixels in the pixel array.
  • The exposure times are controlled to meet the preset ratio in one of several ways: the first and second control units control the exposure times of the visible light pixels and the W pixels; or the first through fourth control units control the exposure times of the R, G, B, and W pixels; or the first and second control units control the exposure times of the visible light pixels and the C pixels; or the first, second, and third control units control the exposure times of the R, B, and C pixels.
  • In a possible implementation, the at least two types of pixels further include a third type and a fourth type of pixels, and the method further includes: controlling the exposure start time of the third type of pixels with a third control unit, and controlling the exposure start time of the fourth type of pixels with a fourth control unit.
  • In another aspect, the present application provides a computer program product containing instructions which, when run on a computer or processor, cause the computer or processor to execute the method in the above-mentioned fourth aspect or any one of its possible implementations.
  • FIG. 2 shows a schematic diagram of a typical RGBIR CFA
  • Fig. 9a is a schematic structural diagram of an exemplary image sensor provided by this application.
  • FIG. 12 is a schematic structural diagram of an exemplary longitudinal section of the image sensor provided by this application.
  • Fig. 16 shows an exemplary control connection diagram for the large and small pixel array arrangement
  • FIG. 19 shows an exemplary timing diagram of the control signal
  • FIG. 20b shows a photosensitive characteristic curve diagram of the photosensitive device in the image sensor provided by the present application
  • FIG. 22 is a flowchart of an embodiment of an image photosensitive method provided by this application.
  • FIG. 23 is a flowchart of an embodiment of an image photosensitive method provided by this application.
  • FIG. 24 is a flowchart of an embodiment of a method for independently controlling exposure time provided by this application.
  • At least one (item) refers to one or more, and “multiple” refers to two or more.
  • "And/or" describes an association between objects and indicates three possible relationships; for example, "A and/or B" can mean: only A, only B, or both A and B, where A and B can each be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
  • "At least one of the following items" or similar expressions refers to any combination of these items, including a single item or any combination of multiple items.
  • "At least one of a, b, or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can each be singular or plural.
  • Fig. 4 shows the photosensitive characteristic curve of the photosensitive device.
  • As shown in Figure 4, the R pixel has a photosensitive intensity peak in the red wavelength range (near 650 nm), the G pixel in the green range (near 550 nm), the B pixel in the blue range (near 450 nm), and the IR pixel in the infrared range (near 850 nm or 910 nm). However, the R, G, and B pixels also each have a photosensitive intensity peak in the infrared wavelength range (near 850 nm or 910 nm).
  • The red, green, and blue filter layers each pass the optical signals at both of the photosensitive intensity peaks corresponding to their color. As a result, the light-sensing results of the R, G, and B pixels all contain a certain amount of IR signal, and the color information of the image obtained by the image sensor is made inaccurate by this IR contamination.
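  • A toy numerical illustration of this contamination (not the patent's solution, which removes the IR optically with an IR-cut filter layer): if each color channel's reading picks up a fraction of the IR signal, the colors drift; a software correction would subtract a per-channel IR estimate. The leakage fractions below are invented for illustration:

```python
# Illustration only: model each color channel picking up a fraction of the
# IR signal, and undo it using the IR pixel's reading. The patent instead
# blocks the IR with a filter layer before it reaches the pixel.

LEAK = (0.25, 0.125, 0.0625)  # assumed per-channel IR leakage fractions

def contaminated(true_rgb, ir, leak=LEAK):
    """Model each channel's reading as its true value plus leaked IR."""
    return tuple(v + k * ir for v, k in zip(true_rgb, leak))

def correct(measured_rgb, ir, leak=LEAK):
    """Subtract the modelled leakage using the IR pixel's reading."""
    return tuple(v - k * ir for v, k in zip(measured_rgb, leak))

rgb = (100.0, 80.0, 60.0)
meas = contaminated(rgb, ir=50.0)  # readings drift to (112.5, 86.25, 63.125)
# correct(meas, ir=50.0) recovers the original (100.0, 80.0, 60.0)
```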
  • This application provides an image sensor that senses visible light and infrared light independently while obtaining sufficient color information, ensuring the resolution of the visible light image, reducing the crosstalk of infrared light into visible light, and improving the signal-to-noise ratio of the visible light pixels.
  • The above image sensor can be applied to mobile phones, surveillance cameras, security access control systems, and other equipment, as well as to photography, video, surveillance, and other fields that process both color images and IR or W black-and-white images.
  • Typical applications can include scenes such as live detection based on visible light and infrared light, night video surveillance, and color-black and white dynamic fusion.
  • The terminal equipment to which the sensor in this application is applicable may also be called user equipment (UE), which can be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on water (e.g., ships), or in the air (e.g., airplanes, balloons, satellites).
  • For example, the terminal device may be a mobile phone (terminal device 100), a tablet computer (pad), a computer with wireless transceiver capability, a virtual reality (VR) device, an augmented reality (AR) device, a monitoring device, a smart screen, a smart TV, a wireless device in remote medical, or a wireless device in a smart home; this application imposes no limitation.
  • The ISP module 703 adjusts the visible light exposure parameters and the intensity of the infrared fill light according to the original image of the object to be photographed, until the convergence condition of the automatic exposure algorithm is met; it also separates the visible light image and the infrared image from that original image.
  • the image fusion module 704 is used to fuse the separated visible light image and infrared image to obtain a target image.
  • The infrared lamp driving control module 705 controls the infrared fill light 706 according to the light intensity configured by the ISP module 703.
  • the infrared fill light 706 is used to provide infrared light.
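  • The pipeline around modules 703 and 704 can be sketched as: separate the visible and infrared planes from the raw capture, then fuse them. The code below is our toy illustration; the flat-list representation and the 50/50 luminance blend are assumptions, not the patent's separation or fusion algorithms:

```python
# Toy pipeline sketch: the ISP separates visible-light and infrared samples
# from the raw capture, and the fusion module blends the two planes.

def separate(raw_pixels):
    """Split raw (value, channel) samples into visible and IR planes."""
    visible = [v for v, ch in raw_pixels if ch in ("R", "G", "B")]
    infrared = [v for v, ch in raw_pixels if ch == "IR"]
    return visible, infrared

def fuse(visible, infrared, w=0.5):
    """Blend per-pixel luminance from the two planes (illustrative only)."""
    return [w * a + (1 - w) * b for a, b in zip(visible, infrared)]

raw = [(10, "R"), (200, "IR"), (30, "G"), (100, "IR")]
vis, ir = separate(raw)
fused = fuse(vis, ir)  # equal-weight blend of the two planes
```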
  • FIG. 8 is a schematic diagram of an exemplary structure of the image sensor provided by this application.
  • the image sensor includes: red (R) pixels, green (G) pixels, blue (B) pixels, and non-visible light pixels. Since red light, green light, and blue light are collectively referred to as visible light, the corresponding pixels, namely R pixels, G pixels, and B pixels, can be collectively referred to as visible light pixels.
  • this application refers to pixels other than R pixels, G pixels, and B pixels as non-visible light pixels.
  • the non-visible light pixels include infrared light (IR) pixels and white (W) pixels.
  • white light refers to light in the full range of wavelengths, including red light, green light, blue light and infrared light.
  • Fig. 9a is a schematic diagram of an exemplary structure of the image sensor provided by this application.
  • R pixels, G pixels, and B pixels are large pixels, and IR pixels are small pixels.
  • the photosensitive area of the large pixel is greater than the photosensitive area of the small pixel, and correspondingly, the photosensitive sensitivity of the large pixel is greater than the photosensitive sensitivity of the small pixel.
  • the arrangement of R pixels, G pixels and B pixels adopts Bayer RGB CFA.
  • the array adopts a 2 ⁇ 2 array arrangement.
  • The first row in the smallest repeating unit includes R pixels and G pixels, and the second row includes G pixels and B pixels.
  • In the pixel array composed of large and small pixels, the large pixels (R, G, and B pixels) use a Bayer RGB CFA that is fully consistent with a traditional RGB sensor. The color information obtained from the R, G, and B pixels is therefore not reduced, the resolution of the resulting visible light image is not reduced, and the existing demosaicing algorithm can be reused directly and embedded seamlessly in an existing ISP.
  • the photosensitive area of a large pixel is larger than that of a small pixel.
  • the photosensitive sensitivity of a large pixel is greater than that of a small pixel. Therefore, a large pixel is more conducive to obtaining accurate color information.
  • the area of the small pixels (IR pixels) is small, and the contact range between the IR pixels and the visible light pixels is also greatly reduced, which reduces the light crosstalk between the IR pixels and the visible light pixels, reduces the interference of the IR pixels with the color information obtained by the visible light pixels, and further improves the signal-to-noise ratio of the color information of the visible light image.
  • in infrared night vision scenes or weak light scenes, infrared supplementary light is provided, so the infrared light is strong; designing the IR pixels as small pixels therefore does not affect the imaging result of the infrared light, that is, it does not affect the acquisition of detailed information.
  • large pixels obtain more accurate color information, and the detailed information obtained by small pixels is not affected, which greatly improves the details and color performance of the fused image.
  • Fig. 9b is a schematic diagram of an exemplary structure of the image sensor provided by this application.
  • R pixels, G pixels, and B pixels are large pixels, and W pixels are small pixels.
  • the photosensitive area of the large pixel is greater than the photosensitive area of the small pixel, and correspondingly, the photosensitive sensitivity of the large pixel is greater than the photosensitive sensitivity of the small pixel.
  • the arrangement of R pixels, G pixels, and B pixels also adopts Bayer RGB CFA, as shown in Figure 1.
  • in the pixel array composed of large pixels and small pixels, the large pixels (R pixels, G pixels, and B pixels) use the Bayer RGB CFA, which is completely consistent with a traditional RGB sensor. Therefore, the color information obtained from the R pixels, G pixels, and B pixels is not reduced, the resolution of the resulting visible light image is not reduced, the existing demosaic algorithm can be reused directly, and the sensor can be seamlessly embedded into an existing ISP.
  • the photosensitive area of a large pixel is larger than that of a small pixel.
  • the photosensitive sensitivity of a large pixel is greater than that of a small pixel. Therefore, a large pixel is more conducive to obtaining accurate color information.
  • This application obtains the color information of the image based on visible light pixels, and obtains the detailed information of the image based on the non-visible light pixels, and merges the visible light image obtained by the visible light pixel and the non-visible light image obtained by the non-visible light pixel to obtain the final image.
  • the visible light pixels adopt Bayer RGB CFA, which is completely consistent with the traditional RGB sensor, the visible light image is a full-resolution image.
  • the non-visible light pixels (IR pixels or W pixels) are arranged between the pixel rows and pixel columns of the visible light pixels, forming a pixel array in which four visible light pixels surround one non-visible light pixel and four non-visible light pixels surround one visible light pixel. Therefore, the obtained non-visible light image is also a full-resolution image.
  • the design of large pixels and small pixels affects neither the color information of the visible light image nor the detailed information of the non-visible light image.
  • the distance between the center points of adjacent pixels in the pixel array is fixed, and the distance depends on the size and process of the image sensor.
  • the areas of large pixels and small pixels can be set according to the crosstalk accuracy of the image sensor.
  • on the premise of meeting the crosstalk accuracy, the areas of the large pixels and the small pixels should be as large as possible.
  • the larger the area of the large pixel, the more accurate the color information of the obtained image.
  • the larger the area of the small pixel, the more detailed information of the image can be obtained. Therefore, although enlarging one kind of pixel leaves less room for the other, it is still expected that the areas of the large pixels and the small pixels be as large as possible.
  • the areas of large pixels and small pixels can be set in advance while balancing the foregoing factors.
  • setting the area of large pixels and small pixels can be converted into setting the space between adjacent sides of large pixels and small pixels. If the areas of large pixels and small pixels are large, the space between adjacent sides is small.
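The conversion between pixel areas and the space between adjacent sides can be sketched with a simplified, hypothetical one-dimensional model: the center-to-center pitch is fixed by the process, so enlarging either pixel shrinks the gap between their adjacent sides, and the gap is what the crosstalk specification constrains. All numeric values below are illustrative assumptions.

```python
# Simplified 1-D model: for square pixels on a fixed pitch, the gap
# between the adjacent sides of a large and a small pixel is the pitch
# minus half of each pixel's side length.

def edge_gap(pitch_um, large_side_um, small_side_um):
    """Gap between adjacent sides of a large and a small square pixel."""
    return pitch_um - (large_side_um + small_side_um) / 2.0

# With a 2.0 um pitch and a 1.0 um small pixel, growing the large pixel
# from 2.4 um to 2.8 um shrinks the gap from 0.3 um to 0.1 um.
assert abs(edge_gap(2.0, 2.4, 1.0) - 0.3) < 1e-9
assert abs(edge_gap(2.0, 2.8, 1.0) - 0.1) < 1e-9
```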
  • the present application can set the shapes of large pixels and small pixels to be regular polygons or circles.
  • the shape of the large pixel and the small pixel may be the same or different.
  • the shape of the large pixel is a regular octagon
  • the shape of the small pixel is a square.
  • the shapes of the large pixels and small pixels may also be regular hexagons, squares, and the like. This application does not specifically limit this.
  • FIG. 10 is an exemplary structural diagram of the longitudinal section of the image sensor provided by this application.
  • the longitudinal section of this embodiment is cut along the dashed line in FIG. 8.
  • seven pixels along the dashed line, namely an R pixel, an IR pixel, a B pixel, an IR pixel, an R pixel, an IR pixel, and a B pixel, are exemplarily shown.
  • each pixel includes a micro lens 1001, a filter layer, and a charge readout module 1003; an infrared light cut filter layer 1004 is also provided in the R pixels, the G pixels, and the B pixels.
  • the filter layer in the R pixel is the red filter layer 1002R
  • the filter layer in the G pixel is the green filter layer 1002G
  • the filter layer in the B pixel is the blue filter layer 1002B
  • the filter layer in the IR pixel is the infrared light filter layer 1002IR.
  • the infrared light cut filter layer 1004 may also be referred to as IR-Cut, and is used to cut off optical signals with a wavelength greater than a first preset wavelength, and the optical signals with a wavelength greater than the first preset wavelength include infrared optical signals.
  • the first preset wavelength is 650 nm
  • the infrared light cut filter layer 1004 is used to cut off optical signals with a wavelength greater than 650 nm
  • the optical signals with a wavelength greater than 650 nm include infrared light.
  • the typical wavelength of visible light is about 430 nm to 650 nm
  • the typical wavelength of infrared light sensitive to the IR pixel is about 850 nm to 920 nm.
  • IR-Cut can cut off light signals with a wavelength greater than 650nm, so that infrared light in the wavelength range of about 850nm to 920nm cannot enter R pixels, G pixels, and B pixels.
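The stacking of the IR-Cut layer and a color filter layer can be sketched as two cascaded passband checks. The band edges below (red band and the 850-920 nm infrared band) are illustrative assumptions, not the patent's measured curves.

```python
# Hedged sketch: the IR-cut layer blocks everything above 650 nm, so
# even though the red filter layer alone would also pass infrared light,
# the combination lets only red light reach the R pixel.

def ir_cut(wavelength_nm):
    return wavelength_nm <= 650  # first preset wavelength

def red_filter(wavelength_nm):
    # passes red light plus infrared in an assumed first wavelength range
    return 620 <= wavelength_nm <= 680 or 850 <= wavelength_nm <= 920

def reaches_r_pixel(wavelength_nm):
    return ir_cut(wavelength_nm) and red_filter(wavelength_nm)

assert reaches_r_pixel(640)        # red light passes both layers
assert not reaches_r_pixel(880)    # IR passes the red filter but is cut
assert not reaches_r_pixel(550)    # green blocked by the red filter
```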
  • the photosensitive characteristic of the light passing through the red filter layer 1002R in the R pixel is shown by the thin black solid line R in the figure.
  • the photosensitive characteristic of the light passing through the green filter layer 1002G in the G pixel is shown by the short dashed line G in the figure.
  • the photosensitive characteristic of the light passing through the blue filter layer 1002B in the B pixel is shown by the dotted line B in the figure.
  • the photosensitive characteristic of the light passing through the infrared light filter layer 1002IR in the IR pixel is shown by the long dashed line IR in the figure.
  • the red filter layer 1002R can transmit red light and infrared light in the first wavelength range at the same time
  • the green filter layer 1002G can transmit green light and infrared light in the second wavelength range at the same time
  • the blue filter layer 1002B can transmit blue light and infrared light in the third wavelength range at the same time.
  • the first wavelength range, the second wavelength range, and the third wavelength range may be the same or different, and the wavelengths of the infrared light in the first, second, and third wavelength ranges are all greater than the first preset wavelength.
  • the micro lens 1001 is a tiny convex lens device arranged on each pixel of the image sensor, and is used to concentrate the input light into each pixel.
  • the micro lenses corresponding to the R pixels, G pixels, and B pixels are respectively coated with an infrared light cut filter layer 1004, so light exceeding 650 nm cannot enter the R pixels, G pixels, and B pixels.
  • the micro lens corresponding to the R pixel is also coated with a red filter layer 1002R, so only red light near 650 nm enters the R pixel, and the R pixel can only receive red light.
  • the micro lens corresponding to the G pixel is also coated with a green filter layer 1002G, so only the green light near 550 nm enters the G pixel, and the G pixel can only receive green light.
  • the micro lens corresponding to the B pixel is also coated with a blue filter layer 1002B, so only blue light near 450 nm enters the B pixel, and the B pixel can only receive blue light.
  • the micro lens corresponding to the IR pixel is coated with an infrared filter layer 1002IR, so only near-infrared light near 850nm or 910nm enters the IR pixel, and the IR pixel can only receive infrared light.
  • the pixels under the micro lens contain only photosensitive devices, such as photodiodes, and the relatively simple and stable internal structure of the pixels is conducive to controlling the chief ray angle (Chief Ray Angle, CRA) and other issues that affect imaging.
  • the filter layer is coated on the micro lens, so the light-sensing effect of the sensor is improved on the premise of keeping the structure of the pixel itself stable.
  • the internal structure of the pixel is not a smooth inner wall; there are some protrusions on the inner wall of the pixel. If the incident angle of the light is offset from the main optical path of the micro lens, part of the light will be blocked by the protrusions on the inner wall of the pixel, and the sensitivity will decrease.
  • the CRA of the pixel located in the optical center of the sensor is 0 degrees, and the CRA angle of the pixel farther from the optical center is larger.
  • if the offset distance of a pixel from the optical center is taken as the abscissa and the pixel's CRA is taken as the ordinate, the function between the offset distance and the CRA of the pixel is a linear function. This rule is called CRA consistency.
  • the position of the micro lens of each pixel needs to be fine-tuned according to the position of the pixel. For example, the micro lens of the pixel located at the optical center is directly above the pixel, while the micro lens of a pixel deviating from the optical center is not directly above the pixel, and the deviation of the micro lens is larger for pixels farther from the optical center. If the internal structure of the pixel under the micro lens is relatively complicated, it is easy to cause inconsistent CRA performance, and the method of fine-tuning the position of the micro lens on the pixel surface may no longer be applicable.
  • the filter layers added by the sensor provided by this application are coated on the micro lens and do not change the internal structure of the pixel. The internal structure of the pixel remains simple and stable, and the sensitivity of the sensor is improved without affecting the CRA performance of the sensor.
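The linear CRA rule and the accompanying micro lens fine-tuning can be sketched as follows; the slope and shift factor are hypothetical illustration values, not process parameters from this application.

```python
# Sketch of CRA consistency: CRA is modeled as a linear function of a
# pixel's offset from the optical center, and the micro lens of an
# off-center pixel is shifted by an amount that grows with that offset.

def cra_degrees(offset_mm, slope_deg_per_mm=5.0):
    """Chief Ray Angle as a linear function of radial offset."""
    return slope_deg_per_mm * offset_mm

def microlens_shift_um(offset_mm, shift_um_per_deg=0.05):
    """Micro lens offset used to keep the chief ray centered."""
    return cra_degrees(offset_mm) * shift_um_per_deg

assert cra_degrees(0.0) == 0.0   # pixel at the optical center: CRA is 0
assert cra_degrees(2.0) == 10.0  # farther pixels have larger CRA
assert microlens_shift_um(2.0) > microlens_shift_um(1.0)
```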
  • Each pixel in the image sensor includes a photosensitive device, for example, the photosensitive device may be a photodiode, which is used to convert a light signal into an electric signal or convert a light signal into an electric charge.
  • the charge readout module 1003 is used to read out the charge accumulated by the photosensitive device and output it to the subsequent image processing circuit or image processor.
  • the charge readout module is similar to a buffer area, the charge accumulated by the photosensitive device is transferred and temporarily buffered in the charge readout module, and the charge signal of the corresponding pixel is output under the control of the readout signal.
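The photodiode / charge-readout split described above can be modeled as a toy two-stage buffer. The class and method names are illustrative, not taken from the patent.

```python
# Toy model: the photodiode accumulates charge during exposure, the
# charge is then transferred into the readout module (the "buffer"),
# and the readout signal drains it to the downstream image processor.

class Pixel:
    def __init__(self):
        self.photodiode = 0  # charge accumulated by the photosensitive device
        self.readout = 0     # charge held by the charge readout module

    def expose(self, photons):
        self.photodiode += photons

    def transfer(self):
        # move accumulated charge into the readout buffer
        self.readout, self.photodiode = self.photodiode, 0

    def read(self):
        # output the buffered charge under control of the readout signal
        value, self.readout = self.readout, 0
        return value

p = Pixel()
p.expose(120)
p.transfer()
assert p.photodiode == 0   # photodiode is emptied by the transfer
assert p.read() == 120     # buffered charge is read out once
assert p.read() == 0       # buffer is empty after readout
```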
  • the red filter layer 1002R, the green filter layer 1002G, the blue filter layer 1002B, and the infrared light filter layer 1002IR are coated on the micro lenses 1001, and the infrared light cut filter layer 1004 is formed in the R pixels, G pixels, and B pixels.
  • a glass sheet is added to the inside of the pixel, and the infrared light cut filter layer 1004 is coated on the added glass sheet.
  • FIG. 11 is a schematic structural diagram of an exemplary longitudinal section of the image sensor provided by this application.
  • the infrared light cut filter layer 1104 is coated on the red filter layer 1102R, the green filter layer 1102G, and the blue filter layer 1102B, respectively.
  • the other parts of the image sensor shown in FIG. 11 are the same as the sensor shown in FIG. 10, and will not be repeated here.
  • alternatively, only the infrared light cut filter layer 1104 can be coated on the micro lens 1101, with the red filter layer 1102R made in the R pixel, the green filter layer 1102G made in the G pixel, the blue filter layer 1102B made in the B pixel, and the infrared light filter layer 1102IR made in the IR pixel.
  • a glass sheet is added inside the pixel, and the red filter layer 1102R, the green filter layer 1102G, the blue filter layer 1102B and/or the infrared filter layer 1102IR are coated on the added glass sheet.
  • the image sensor provided in this application does not limit whether the infrared light cut filter layer, the red filter layer, the green filter layer, and the blue filter layer are located on the outside or the inside of the micro lens, nor their positional relationship.
  • FIG. 12 is an exemplary structural diagram of the longitudinal section of the image sensor provided by this application.
  • the image sensor further includes: a filter 1205 for filtering out ultraviolet light and infrared light with a wavelength greater than a second preset wavelength, where the second preset wavelength is greater than the first preset wavelength and greater than any wavelength within the specific wavelength range passed by the infrared light filter layer.
  • the visible light and part of the infrared light pass through the filter 1205.
  • the infrared light with a wavelength greater than the second preset wavelength may be referred to as far-infrared light, and the wavelength of the far-infrared light is greater than the wavelength of the infrared light allowed to pass through the infrared light filter layer.
  • the wavelength of visible light is about 430 nm to 650 nm
  • the typical wavelength range of infrared light sensitized by the IR pixel is about 850 nm to 920 nm.
  • the second preset wavelength may be, for example, 900 nm, or 920 nm, or may also be any wavelength between 850 nm and 950 nm.
  • the filter 1205 may be an all-pass filter or a dual-pass filter.
  • An exemplary all-pass filter is used to filter out ultraviolet light with a wavelength less than 400 nm and infrared light with a wavelength greater than 900 nm.
  • An exemplary double-pass filter is used to pass only visible light and infrared light in the range of 800 nm to 900 nm. At this time, the double-pass filter is equivalent to filtering out infrared light greater than 900 nm.
  • An exemplary double-pass filter is used to pass only visible light and infrared light in the range of 900 nm to 950 nm. At this time, the double-pass filter is equivalent to filtering out infrared light greater than 950 nm.
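The two exemplary filter options above can be compared with a small passband sketch. The band edges follow the examples in the text (all-pass: 400-900 nm; dual-pass: visible plus an 800-900 nm infrared band); the visible range of 430-650 nm is taken from the earlier description.

```python
# Sketch of filter 1205: the all-pass variant removes ultraviolet
# (< 400 nm) and far-infrared (> 900 nm); the dual-pass variant passes
# only visible light plus one infrared band.

def all_pass(wavelength_nm):
    return 400 <= wavelength_nm <= 900

def dual_pass(wavelength_nm):
    visible = 430 <= wavelength_nm <= 650
    infrared = 800 <= wavelength_nm <= 900
    return visible or infrared

assert all_pass(550) and dual_pass(550)          # visible light passes both
assert all_pass(700) and not dual_pass(700)      # 700 nm: only the all-pass
assert not all_pass(950) and not dual_pass(950)  # far-infrared is filtered
```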
  • the filter 1205 can prevent the long-wavelength far-infrared light and ultraviolet light from affecting the photosensitive characteristics of the photosensitive device.
  • for the micro lens 1201, the filter layer, the infrared light cut filter layer 1204, and the charge readout module 1203, please refer to the description of the embodiment shown in FIG. 10, which will not be repeated here.
  • FIG. 13 is an exemplary structural diagram of the longitudinal section of the image sensor provided by this application.
  • the longitudinal section of this embodiment is formed by cutting along the dashed line in FIG. 9.
  • seven pixels along the dotted line are exemplarily shown, namely R pixel, W pixel, B pixel, W pixel, R pixel, W pixel and B pixel.
  • each pixel includes a micro lens 1301, a filter layer, a charge readout module 1303, and a filter 1305; an infrared light cut filter layer 1304 is also provided in the R pixels, G pixels, and B pixels.
  • the filter layer in the R pixel is the red filter layer 1302R
  • the filter layer in the G pixel is the green filter layer 1302G
  • the filter layer in the B pixel is the blue filter layer 1302B
  • the filter layer in the W pixel is an all-pass filter layer or a double-pass filter layer 1302W.
  • the all-pass filter layer is used to pass light in the full wavelength range, including red light, green light, blue light and infrared light.
  • the double-pass filter layer is used to pass red light, green light, blue light and infrared light in a specific wavelength range.
  • the pixel array 1410 is a pixel array in an image sensor as shown in any of the embodiments in FIGS. 8-13.
  • the logic control circuit 1420 is used to independently control the exposure time of the large pixels and the small pixels.
  • the large pixels include R pixels, G pixels, and B pixels, and the small pixels are IR pixels or W pixels.
  • the IR pixel is taken as an example in FIG. 14.
  • the logic control circuit 1420 includes a first control line and a second control line, or in other words, includes two independent control circuits: a first control circuit and a second control circuit. Large pixels in the pixel array 1410 are coupled to the first control line, and small pixels in the pixel array 1410 are coupled to the second control line.
  • the control lines with the same name in FIG. 14 are the same line or connected to each other.
  • the first control line on the pixel array side and the first control line of the logic control circuit are the same line or connected.
  • the second control line on the pixel array side and the second control line of the logic control circuit are the same line or connected.
  • FIG. 15 shows an exemplary timing diagram of the control signal.
  • the effective transition edges of the first control signal and the second control signal are both falling edges.
  • the first control signal and the second control signal can be obtained according to the system reset signal of the logic control circuit.
  • the first control signal and the second control signal are both high-level active signals.
  • the exposure end control line is used to uniformly control the exposure stop time of all pixels in the pixel array.
  • the logic control circuit 1420 further includes:
  • if the variable x and the variable y meet the coordinate condition of an R pixel, the reset signal is connected to the output terminal of the first control line; if the variable x and the variable y meet the coordinate condition of a G pixel, the reset signal is connected to the output terminal of the second control line; if the variable x and the variable y meet the coordinate condition of a B pixel, the reset signal is connected to the output terminal of the third control line; and if the variable x and the variable y meet the coordinate condition of an IR pixel, the reset signal is connected to the output terminal of the fourth control line. It should be understood that when the pixel array is arranged differently, the respective coordinate conditions of the pixels change accordingly, so the logic operation circuit inside the logic control circuit needs to be adjusted correspondingly according to the arrangement of the pixel array.
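The coordinate-condition routing described above can be sketched as a small demultiplexer. The conditions below are one possible convention (large pixels at even/even positions of a doubled grid, IR pixels at odd/odd), chosen purely for illustration; an actual sensor's coordinate conditions depend on its array arrangement.

```python
# Hypothetical routing: decide which control line the reset signal is
# connected to, from a pixel's (x, y) coordinates on a doubled grid
# where the Bayer unit (R G / G B) covers the large-pixel positions.

def route_reset(x, y):
    """Return the pixel type (= control line) selected at (x, y)."""
    if x % 2 == 1 and y % 2 == 1:
        return "IR"              # fourth control line
    if x % 2 == 0 and y % 2 == 0:
        r, c = (x // 2) % 2, (y // 2) % 2
        return [["R", "G"], ["G", "B"]][r][c]
    return None                  # no pixel at mixed-parity coordinates

assert route_reset(0, 0) == "R"   # first control line
assert route_reset(0, 2) == "G"   # second control line
assert route_reset(2, 2) == "B"   # third control line
assert route_reset(1, 1) == "IR"  # fourth control line
```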
  • the exposure end control line is used to uniformly control the exposure stop time of all pixels in the pixel array.
  • the exposure end control line outputs the exposure end signal.
  • the exposure end signal can be a high-level active signal or a low-level active signal.
  • the exposure end time point can be the falling edge of a high level or the rising edge of a low level.
  • the exposure end control signal is a high-level active signal.
  • the exposure time of the R pixel is the time difference between the falling edge of the first control signal and the falling edge of the exposure end control signal, namely the first exposure time; similarly, the exposure times of the G pixel, the B pixel, and the IR pixel are the second exposure time, the third exposure time, and the fourth exposure time, respectively. Therefore, the exposure times of the R pixels, G pixels, B pixels, and IR pixels are independently controlled.
  • optionally, the exposure times of the R pixels, G pixels, B pixels, and IR pixels satisfy a preset ratio.
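The timing relationship above can be sketched numerically: each pixel type's exposure time is the interval from the falling edge of its own reset signal to the falling edge of the shared exposure-end signal, so a preset ratio can be achieved by staggering the reset edges. The microsecond values below are illustrative.

```python
# Sketch: exposure time = (exposure-end falling edge) - (reset falling
# edge), computed per pixel type from staggered reset edges.

def exposure_times(reset_edges_us, end_edge_us):
    return {name: end_edge_us - t for name, t in reset_edges_us.items()}

edges = {"R": 0.0, "G": 500.0, "B": 500.0, "IR": 750.0}
times = exposure_times(edges, end_edge_us=1000.0)

assert times["R"] == 1000.0
assert times["G"] == times["B"] == 500.0
assert times["R"] / times["IR"] == 4.0  # preset ratio R : IR = 4 : 1
```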
  • the logic control circuit 1620 further includes: a charge transfer control line, which is used to control when the charge accumulated by the photosensitive device of the pixel array is transferred to the charge readout module.
  • the charge transfer control line outputs a charge transfer control signal, and the charge transfer control signal can be a high-level effective signal or a low-level effective signal.
  • the charge transfer control signal shown in FIG. 17 is the same as that shown in FIG. 15.
  • when the signals in the column coordinate control line and the row coordinate control line are both valid signals, the switch is turned on, and the reset signal can be output to the target pixel through the exposure start control line to control the exposure of the target pixel. Exemplarily, when the signals in the column coordinate control line and the row coordinate control line are both valid signals and the valid transition edge of the reset signal arrives, the target pixel is controlled to start exposure. If the signal in either the column coordinate control line or the row coordinate control line does not meet the requirement, the switch is turned off, and the exposure start control line outputs no control signal.
  • each pixel in the pixel array has its own corresponding row coordinate line and column coordinate line
  • the exposure time of each pixel can be independently controlled; for example, the signals in the row coordinate line and the column coordinate line of a pixel that needs to be exposed are preferentially set as valid signals, thereby prolonging the exposure time of the key exposure pixels.
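The per-pixel gating described above amounts to an AND of the row line, the column line, and the reset edge; a minimal sketch (signal names are illustrative):

```python
# Toy model: a pixel's exposure starts only when its row coordinate
# line AND its column coordinate line both carry valid signals at the
# moment the reset signal's active transition edge arrives.

def exposure_starts(row_valid, col_valid, reset_edge):
    """AND-gate: the reset edge reaches the pixel only if both
    coordinate lines select it."""
    return row_valid and col_valid and reset_edge

assert exposure_starts(True, True, True)        # selected pixel starts
assert not exposure_starts(True, False, True)   # column not selected
assert not exposure_starts(True, True, False)   # no reset edge yet
```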
  • the logic control circuit 1818 further includes: an exposure end control line, which is used to uniformly control the exposure end time of each pixel in the pixel array.
  • FIG. 19 shows an exemplary timing diagram of the control signal. As shown in FIG. 19, two types of pixels are taken as examples to illustrate the control of the exposure start control signal on the pixel exposure start time.
  • the signals in the timing diagram are all high-level effective, and it should be understood that each control signal may also be low-level effective.
  • FIG. 20a shows the photosensitive characteristic curve diagram of the photosensitive device in the image sensor provided by the present application.
  • the abscissa is the wavelength of the light, in nm
  • the ordinate is the sensitivity of light.
  • the thin solid line is the photosensitive characteristic curve of the R pixel
  • the short dashed line is the photosensitive characteristic curve of the G pixel
  • the dotted line is the photosensitive characteristic curve of the B pixel
  • the long dashed line is the photosensitive characteristic curve of the IR pixel.
  • the R pixel only has a photosensitive intensity peak near 650nm of red light
  • the G pixel only has a photosensitive intensity peak near 550nm of green light
  • the B pixel only has a photosensitive intensity peak near 450 nm of blue light.
  • the photosensitive range of the W pixel covers the full band.
  • the image sensor provided by the present application removes the IR signal from the photosensitive results of the R pixels, G pixels, and B pixels, so that the R pixels can only sense red light, the G pixels can only sense green light, and the B pixels can only sense blue light, which improves the color accuracy of the image sensor's light-sensing results.
  • the CPU may be a single-CPU processor or a multi-core (multi-CPU) processor; optionally, the CPU may be a processor group composed of multiple processors that are coupled to each other through one or more buses.
  • the exposure control can be partly completed by software codes running on a general-purpose CPU or MCU, and partly completed by hardware logic circuits; or it can also be completely completed by software codes running on a general-purpose CPU or MCU.
  • the memory may be a non-volatile memory, such as an Embedded MultiMedia Card (EMMC), Universal Flash Storage (UFS), or Read-Only Memory (ROM), or other types of static storage devices that can store static information and instructions; or it may be a volatile memory, such as Random Access Memory (RAM), or another type of dynamic storage device that can store information and instructions.
  • EMMC Embedded MultiMedia Card
  • UFS Universal Flash Storage
  • ROM Read-Only Memory
  • RAM Random Access Memory
  • the memory may also be an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, digital versatile discs, Blu-ray discs, etc.), disk storage media or other magnetic storage devices, or any other computer-readable storage medium that can be used to carry or store program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the receiving interface may be a data input interface of the processor
  • visible light pixels are classified as one type of pixels, that is, R pixels, G pixels, and B pixels are classified as one type of pixels (large pixels), and IR pixels or W pixels are considered to be another type of pixels (small pixels).
  • the sensor is an RGBIR sensor.
  • the RGBIR sensor can realize the independent exposure of visible light pixels and IR pixels, and can also realize independent exposure of the four components of R, G, B, and IR.
  • the sensor is an RGBW sensor.
  • the RGBW sensor can realize the independent exposure of visible light pixels and W pixels, and can also realize independent exposure of the four components of R, G, B, and W.
  • the at least two control units include: a first control unit and a second control unit; the first control unit is used to control the exposure start time of the visible light pixels, and the second control unit is used to control the exposure start time of the W pixels.
  • the independent exposure device may also control the exposure time of at least two types of pixels to meet a preset ratio based on at least two control units.
  • for example, the first control unit and the second control unit control the exposure times of the visible light pixels and the IR pixels to meet the preset ratio; or the first control unit, the second control unit, the third control unit, and the fourth control unit control the exposure times of the R, G, B, and IR pixels to meet the preset ratio.
  • the independent exposure device further includes: an exposure end control unit for uniformly controlling the exposure end time of all pixels in the pixel array.
  • FIG. 22 is a flowchart of an embodiment of an image light-sensing method provided by this application. As shown in FIG. 22, the method is applied to the image sensor provided by this application, and the image sensor includes: red pixels, green pixels, blue pixels, and non-visible light pixels; among them, the red pixels, green pixels, and blue pixels are large pixels, the non-visible light pixels are small pixels, and the photosensitive area of a large pixel is larger than that of a small pixel; the arrangement of the red pixels, green pixels, and blue pixels is in Bayer format.
  • the image photosensitive method includes:
  • Step 2201 Sense red light based on the red pixels.
  • Step 2202 Sense green light based on the green pixels.
  • Step 2203 Sense blue light based on the blue pixels.
  • Step 2204 Sense infrared light or white light based on the small pixels.
  • step 2204 may further include sensing infrared light based on infrared light pixels, or sensing white light based on white pixels.
  • four large pixels surround one small pixel, and four small pixels surround one large pixel.
  • the areas of large pixels and small pixels are set according to the crosstalk accuracy of the image sensor.
  • the shapes of large pixels and small pixels are regular polygons or circles.
  • the numbering of steps 2201-2204 does not limit the execution order of the method. Steps 2201-2204 can usually be executed synchronously, or not strictly synchronously with some time differences between them. This application does not limit this.
  • FIG. 23 is a flowchart of an embodiment of an image light-sensing method provided by this application.
  • the image sensor may further include: a micro lens, an infrared light cut filter layer, and a filter layer. Each pixel corresponds to a micro lens; each large pixel corresponds to an infrared light cut filter layer, and the infrared light cut filter layer is used to cut off light signals with a wavelength greater than the first preset wavelength, where the light signals with a wavelength greater than the first preset wavelength include infrared light.
  • the filter layer includes a red filter layer, a green filter layer, and a blue filter layer. Each red pixel corresponds to a red filter layer, and the red filter layer is used to pass red light and infrared light in the first wavelength range; each green pixel corresponds to a green filter layer, and the green filter layer is used to pass green light and infrared light in the second wavelength range; each blue pixel corresponds to a blue filter layer, and the blue filter layer is used to pass blue light and infrared light in the third wavelength range.
  • the filter layer also includes an infrared light filter layer; each infrared light pixel corresponds to an infrared light filter layer, and the infrared light filter layer is used to pass infrared light in a specific wavelength range.
  • the filter layer also includes an all-pass filter layer or a double-pass filter layer; each white pixel corresponds to an all-pass filter layer or a double-pass filter layer.
  • the all-pass filter layer is used to pass light in the full wavelength range.
  • the double pass filter layer is used to pass red light, green light, blue light, and infrared light in a specific wavelength range.
  • the image light-sensing method may further include:
  • Step 2301: The original light from the natural scene passes through the filter to obtain the first light.
  • The filter is used to filter out ultraviolet light and far-infrared light, i.e., infrared light with a longer wavelength. The infrared light with a wavelength greater than the second preset wavelength mentioned in the foregoing embodiments can be referred to as far-infrared light; its wavelength is greater than that of the infrared light in the specific wavelength range passed by the subsequent infrared light filter layer.
  • For the filter, please refer to the description of the filter on the device side, which is not repeated here.
  • Step 2302: The first light passes through the infrared cut filter layer, the red filter layer, and the micro lens to reach the red pixels.
  • Step 2303: The first light passes through the infrared cut filter layer, the green filter layer, and the micro lens to reach the green pixels.
  • Step 2304: The first light passes through the infrared cut filter layer, the blue filter layer, and the micro lens to reach the blue pixels.
  • Step 2305: The first light passes through the infrared light filter layer to reach the infrared light pixels, or passes through the all-pass filter layer or the double-pass filter layer to reach the white pixels.
  • the numbering of steps 2302-2305 does not limit their execution order: they can usually be executed synchronously, or not strictly synchronously with some time differences between them; this application does not limit this.
  • the infrared light filter layer allows only infrared light in a specific wavelength range to pass; the red filter layer is used to pass only red light and infrared light in the first wavelength range, the green filter layer only green light and infrared light in the second wavelength range, and the blue filter layer only blue light and infrared light in the third wavelength range; the infrared light cut off by the infrared cut filter layer includes the infrared light in the first, second, and third wavelength ranges.
  • the infrared cut filter layer thus cuts off the infrared light that would enter the R, G, and B pixels, so that the R, G, and B pixels can receive only red light, green light, and blue light respectively.
  • step 2301 is an optional step: the original light from nature may not pass through the filter but directly enter the filter layer and the micro lens.
  • the infrared cut filter layer may be above the red, green, and blue filter layers, or the red, green, and blue filter layers may be above the infrared cut filter layer; this application does not limit this.
  • the photosensitive device in the pixel converts the light entering the pixel into electric charge.
  • the method further includes: controlling the exposure start time of the large pixels based on the first control line, the large pixels including R pixels, G pixels, and B pixels; and controlling the exposure start time of the small pixels based on the second control line, the small pixels being IR pixels or W pixels.
  • the method further includes: controlling the exposure time of the large pixels and the small pixels to satisfy a preset ratio based on the first control line and the second control line.
  • the method further includes: controlling the exposure start time of the R pixels based on the first control line; of the G pixels based on the second control line; of the B pixels based on the third control line; and of the IR pixels based on the fourth control line.
  • the four types of pixels can be independently exposed, which improves the light-sensing effect of the image sensor.
  • the method further includes: controlling the exposure time of the R pixel, the G pixel, and the B pixel to meet a preset ratio.
  • each pixel in the image sensor is coupled to its own row coordinate control line and column coordinate control line, and each pixel corresponds to a branch of the exposure start control line.
  • the method further includes: when the control signals output by the row coordinate control line and the column coordinate control line of a target pixel are both at the effective level, the branch of the exposure start control line corresponding to the target pixel outputs a control signal, and the exposure start time of the target pixel is controlled based on that control signal; the target pixel is any pixel in the pixel array.
  • the exposure time of each pixel can thus be controlled individually.
  • the method further includes: controlling the exposure end time of all pixels in the pixel array based on the exposure end control line.
  • FIG. 24 is a flowchart of an embodiment of a method for independently controlling exposure time provided by this application. As shown in FIG. 24, the method is applied to a sensor including at least two types of pixels, the at least two types including a first type of pixels and a second type of pixels. The method includes:
  • Step 2401: Control the exposure start time of the first type of pixels based on the first control unit.
  • Step 2402: Control the exposure start time of the second type of pixels based on the second control unit.
  • the sensor may be an RGBIR sensor.
  • the first type of pixels are large pixels corresponding to visible light pixels, the visible light pixels including R pixels, G pixels, and B pixels, and the second type of pixels are small pixels corresponding to IR pixels.
  • the image sensor may alternatively be an RGBW sensor.
  • the first type of pixels are large pixels corresponding to visible light pixels, the visible light pixels including R pixels, G pixels, and B pixels, and the second type of pixels are small pixels corresponding to W pixels.
  • the first control unit and the second control unit are independent of each other, so the exposure start times of the first type of pixels and the second type of pixels are controlled independently. It should be understood that the first and second control units may be implemented by a hardware logic circuit, or by a software module running on a processor.
  • the at least two types of pixels further include: a third type of pixels; the method further includes: controlling the exposure start time of the third type of pixels based on the third control unit.
  • the at least two types of pixels may further include a third type of pixels and a fourth type of pixels, and the method further includes: controlling the exposure start time of the third type of pixels based on the third control unit; and controlling the exposure start time of the fourth type of pixels based on the fourth control unit.
  • the sensor is an RGBIR sensor, the first type of pixels are R pixels, the second type are G pixels, the third type are B pixels, and the fourth type are IR pixels; the method specifically includes: controlling the exposure start time of the R pixels based on the first control unit, of the G pixels based on the second control unit, of the B pixels based on the third control unit, and of the IR pixels based on the fourth control unit. Alternatively, the sensor is an RGBW sensor, the first type of pixels are R pixels, the second type are G pixels, the third type are B pixels, and the fourth type are W pixels; the method specifically includes: controlling the exposure start time of the R pixels based on the first control unit, of the G pixels based on the second control unit, of the B pixels based on the third control unit, and of the W pixels based on the fourth control unit.
  • the method further includes: controlling the exposure end time of all pixels based on the exposure end control unit.
  • the method further includes: controlling the exposure time of each of the at least two types of pixels to meet a preset ratio.
  • For example: based on the first control unit and the second control unit, the exposure times of the large pixels and the small pixels are controlled to meet the preset ratio; or based on the first, second, third, and fourth control units, the exposure times of the R, G, B, and IR pixels are controlled to meet a preset ratio; or based on the first, second, third, and fourth control units, the exposure times of the R, G, B, and W pixels are controlled to meet the preset ratio.
  • the exposure start times of different types of pixels are controlled independently, while the exposure end time is controlled uniformly; it is therefore possible to set the exposure start times of the different pixels so that the exposure times of the pixel types meet a preset ratio.
  • the method further includes: transferring the charge accumulated in the photosensitive device to the charge readout module based on the charge transfer control unit.
  • This application also provides a computer-readable storage medium storing instructions that, when run on a computer or processor, cause the computer or processor to execute part or all of the steps in any independent exposure control method provided in this application.
  • This application also provides a computer program product containing instructions that, when run on a computer or processor, cause the computer or processor to execute part or all of the steps in any independent exposure control method provided in this application.
  • the steps of the foregoing method embodiments may be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the steps of the method disclosed in the present application can be directly embodied as being executed and completed by a hardware encoding processor, or executed and completed by a combination of hardware and software modules in the encoding processor.
  • the software module can be located in a storage medium mature in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Color Television Image Signal Generators (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An image sensor (702) and an image light-sensing method. The image sensor (702) includes: red pixels (R), green pixels (G), blue pixels (B), and non-visible-light pixels; the red pixels (R), green pixels (G), and blue pixels (B) are large pixels, the non-visible-light pixels are small pixels, and the light-sensing area of a large pixel is larger than that of a small pixel; the red pixels (R), green pixels (G), and blue pixels (B) are arranged in the Bayer format. On the premise of sufficient color information, optical crosstalk from the small pixels into the large pixels can be reduced and the signal-to-noise ratio of the large pixels improved.

Description

Image sensor and image light-sensing method

Technical field

This application relates to image processing technology, and in particular to an image sensor and an image light-sensing method.

Background
The color filter array (CFA) of a traditional Bayer red-green-blue sensor (RGB sensor) contains three kinds of pixels, R, G, and B, so that the pixel at each position is sensitive to only one of the three colors and the pixels together form a mosaic color image. FIG. 1 is a schematic diagram of a typical Bayer RGB CFA: the array uses a 2×2 arrangement in which the first row of the minimum repeating unit includes an R pixel and a G pixel, and the second row includes a G pixel and a B pixel. A red-green-blue-infrared sensor (RGB-infrared, RGBIR sensor) replaces some of the R, G, or B pixels of an RGB sensor with IR pixels, likewise forming a mosaic color image. FIG. 2 is a schematic diagram of a typical RGBIR CFA using a 2×2 arrangement: the first row of the minimum repeating unit includes an R pixel and a G pixel, and the second row includes an IR pixel and a B pixel. FIG. 3 is a schematic diagram of a typical RGBIR CFA using a 4×4 arrangement: the first row of the minimum repeating unit includes R, G, B, and G pixels, the second row includes G, IR, G, and IR pixels, the third row includes B, G, R, and G pixels, and the fourth row includes G, IR, G, and IR pixels.
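The minimum repeating units just described can be written down directly as small arrays; the following sketch tiles them over a sensor grid (a hedged illustration of the patterns in FIGS. 1-3, not code from the document):

```python
# Minimum repeating units of the CFAs described above.
BAYER_2x2 = [["R", "G"],
             ["G", "B"]]

RGBIR_2x2 = [["R", "G"],
             ["IR", "B"]]

RGBIR_4x4 = [["R", "G", "B", "G"],
             ["G", "IR", "G", "IR"],
             ["B", "G", "R", "G"],
             ["G", "IR", "G", "IR"]]

def mosaic(pattern, height, width):
    """Tile the minimum repeating unit over a height x width sensor grid."""
    ph, pw = len(pattern), len(pattern[0])
    return [[pattern[r % ph][c % pw] for c in range(width)]
            for r in range(height)]
```

Calling `mosaic(BAYER_2x2, 4, 4)` reproduces the familiar RGGB checker; swapping in `RGBIR_4x4` shows how half of the G positions give way to IR positions.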
In the related art, an image sensor design has been proposed that separates visible light (including red, green, and blue light) from infrared light during light sensing: by adding an IR cut filter layer to the red, green, and blue pixel units, the IR component can be filtered out of the light-sensing results of the visible light pixels (including the R, G, and B pixels).

Although the above method filters out the IR signal in the visible light pixels, infrared light from adjacent IR pixels still crosses over into the visible light pixels, degrading the signal-to-noise ratio of the visible light pixels.
Summary

This application provides an image sensor and an image light-sensing method that, on the premise of sufficient color information, reduce optical crosstalk from the small pixels into the large pixels and improve the signal-to-noise ratio of the large pixels.

In a first aspect, this application provides an image sensor including: red pixels, green pixels, blue pixels, and non-visible-light pixels; the red, green, and blue pixels are large pixels, the non-visible-light pixels are small pixels, and the light-sensing area of a large pixel is larger than that of a small pixel; the red, green, and blue pixels are arranged in the Bayer format.
The image sensor provided by this application uses a pixel array composed of large and small pixels. Because the large pixels use the Bayer RGB CFA, exactly as in a traditional RGB sensor, and their light-sensing area is larger than that of the small pixels, the color information obtained from the large pixels is not reduced and the resolution of the visible light image does not drop; existing demosaic algorithms can be reused directly and embedded seamlessly into an existing image signal processor (ISP). In addition, since the light-sensing area, and hence the sensitivity, of a large pixel exceeds that of a small pixel, designing the visible light pixels as large pixels improves the imaging quality of visible light and yields more accurate color information. Because the small pixels (non-visible-light pixels) adjacent to a large pixel (visible light pixel) are small in area, the contact range between non-visible-light and visible light pixels is greatly reduced, which reduces optical crosstalk from the non-visible-light pixels into the visible light pixels, lowers the interference of the non-visible-light sensing results with the color information obtained by the visible light pixels, and further improves the signal-to-noise ratio of the color information of the visible light image. Designing the infrared or white pixels as small pixels does not harm the acquisition of detail information: if the non-visible-light pixel is an infrared pixel, infrared fill light is present in infrared night-vision scenes, so a small IR pixel does not affect the infrared imaging result; if it is a white pixel, the spectral response curve of white light is wide and its light-sensing capability strong, so a small white pixel does not affect the white-light imaging result. This application obtains the color information of an image from the visible light pixels and the detail information from the non-visible-light pixels, and fuses the two to obtain the final image; the large pixels obtain more accurate color information while the detail information obtained by the small pixels is unaffected, greatly improving the detail and color performance of the fused image. Moreover, since the visible light pixels use the Bayer RGB CFA, the visible light image is a full-resolution image; the non-visible-light pixels (IR or W pixels) are arranged between the pixel rows and columns of the visible light pixels, forming a pixel array in which four visible light pixels surround one non-visible-light pixel and four non-visible-light pixels surround one visible light pixel, so the resulting non-visible-light image is also a full-resolution image. The large-pixel/small-pixel design does not affect the color information of the visible light image or the detail information of the non-visible-light image.
In a possible implementation, the non-visible-light pixels include infrared light pixels or white pixels; a white pixel is used to sense white light, which includes red light, green light, blue light, and infrared light.

In a possible implementation, four large pixels surround one small pixel, and four small pixels surround one large pixel.

In a possible implementation, the areas of the large and small pixels are set according to the crosstalk accuracy of the image sensor.
Usually the distance between the center points of adjacent pixels in a pixel array is fixed, depending on the size and process of the image sensor. On this basis, the areas of the large and small pixels can be set according to the crosstalk accuracy of the image sensor. To reduce crosstalk, the ratio of the large-pixel area to the small-pixel area should be as large as possible. To improve sensitivity, the larger the large pixel, the more accurate the color information obtained, and the larger the small pixel, the more detail information obtained; so despite the maximum-area constraint, both areas should be as large as possible. Taking further into account the accuracy of the image sensor itself, the illuminance requirements, the intensity of the infrared fill light, and other factors, the areas of the large and small pixels can be preset by balancing all of the above. Optionally, setting the areas can be converted into setting the gap between adjacent edges of a large pixel and a small pixel: larger areas mean a smaller gap. Because these factors are all considered when setting the areas or the gap, such a configuration obtains high-resolution color information together with sufficient supplementary detail information, helping to improve the imaging quality of the image finally obtained by the image sensor.
In a possible implementation, the large and small pixels are shaped as regular polygons or circles.

In a possible implementation, each of the red, green, and blue pixels corresponds to an infrared cut filter layer, which is used to cut off optical signals with a wavelength greater than a first preset wavelength; such signals include infrared light.

In the image sensor provided by this application, an infrared cut filter layer is coated on the micro lenses corresponding to the R, G, and B pixels, cutting off the infrared light reaching those pixels and removing the IR signal from the light-sensing results of the visible light pixels, so the colors of the results are more accurate and the light-sensing effect of the sensor improves. Further, this application can coat the infrared cut filter layer onto the micro lens using coating technology: no complex mechanical structure needs to be added, and the structure of the pixel under the micro lens is unchanged. A relatively simple and stable pixel internal structure helps control issues that affect imaging, such as the chief ray angle (CRA), so the light-sensing effect of the image sensor is improved while the pixel structure itself remains stable.

In a possible implementation, the first preset wavelength is 650 nm; in this case the infrared cut filter cuts off all light with wavelengths beyond the visible range, ensuring that infrared light of any wavelength cannot enter the red, green, and blue pixels.
In a possible implementation, the sensor further includes a filter layer comprising a red filter layer, a green filter layer, and a blue filter layer. Each red pixel corresponds to a red filter layer, which passes red light and infrared light in a first wavelength range; each green pixel corresponds to a green filter layer, which passes green light and infrared light in a second wavelength range; each blue pixel corresponds to a blue filter layer, which passes blue light and infrared light in a third wavelength range; the wavelengths of the infrared light in the first, second, and third wavelength ranges are all greater than the first preset wavelength. When the non-visible-light pixel is an infrared light pixel, the filter layer further includes an infrared light filter layer, one per infrared light pixel, which passes infrared light in a specific wavelength range. When the non-visible-light pixel is a white pixel, the filter layer further includes an all-pass filter layer or a double-pass filter layer, one per white pixel; the all-pass filter layer passes light over the full wavelength range, and the double-pass filter layer passes the red light, the green light, the blue light, and the infrared light in the specific wavelength range.

In a possible implementation, the infrared cut filter layer and/or the filter layer are coated on the micro lens of the corresponding pixel.

In the image sensor provided by this application, the R pixel is coated with a red filter layer and an infrared cut filter layer, filtering the IR component out of its light-sensing result so that the R pixel senses only red light; correspondingly, the G pixel is coated with a green filter layer and an infrared cut filter layer, and the B pixel with a blue filter layer and an infrared cut filter layer, filtering the IR component out of their results so that the G pixel senses only green light and the B pixel only blue light. The IR pixel is coated with an infrared light filter layer so that it senses only infrared light, or the W pixel is coated with an all-pass or double-pass filter layer so that it senses only white light. This greatly improves the color accuracy of the light-sensing results obtained by the image sensor.
It should be understood that the red filter layer may be above or below the infrared cut filter layer, and likewise for the green and blue filter layers; this application does not limit the order in which the infrared cut filter layer and the red, green, and blue filter layers are coated on the micro lens.

In the image sensor provided by this application, the micro lens of a red pixel is coated with a red filter layer and an infrared cut filter layer; that of a green pixel with a green filter layer and an infrared cut filter layer; that of a blue pixel with a blue filter layer and an infrared cut filter layer; and that of an infrared light pixel with an infrared light filter layer. The positional relationship of these coatings on the micro lens is not limited: the red, green, and blue filter layers may each be coated on top of the infrared cut filter layer, or the infrared cut filter layer may be coated on top of each of them, as long as the light passes through both the infrared cut filter layer and one visible-component filter layer before reaching the micro lens.

In one possible implementation, the infrared cut filter layer is coated on the micro lens while the red, green, and blue filter layers are coated on the inner side of the micro lens or made inside the red, green, and blue pixels respectively; in another possible implementation, the red, green, and blue filter layers are coated on the micro lens while the infrared cut filter layer is coated on the inner side of the micro lens or made inside the red, green, and blue pixels.
In a possible implementation, the sensor further includes a logic control circuit for separately controlling the exposure times of the large and small pixels.

In the image sensor provided by this application, the exposure times of the large and small pixels are controlled independently. For example, when infrared light is too strong and visible light too weak, the exposure time of the large pixels can be increased while that of the small pixels is reduced, bringing the imaging brightness of visible and non-visible light (infrared or white light) into balance and avoiding the exposure imbalance that easily occurs when infrared or visible light dominates. This increases the dynamic range of the image sensor and meets user requirements for indicators such as sharpness and signal-to-noise ratio.
In a possible implementation, the logic control circuit includes a first control line and a second control line; the large pixels are coupled to the first control line and the small pixels to the second. The logic control circuit is specifically configured to control the exposure start time of the large pixels based on the first control line and that of the small pixels based on the second control line.

In a possible implementation, the logic control circuit is further configured to control, based on the first and second control lines, the exposure times of the large and small pixels to satisfy a preset ratio.

Exemplarily, the first control line outputs a first control signal and the second control line outputs a second control signal; the large pixels start exposing when the first effective transition edge of the first control signal arrives, and the small pixels start exposing when the second effective transition edge of the second control signal arrives. By setting the arrival moments of the first and second effective transition edges, the exposure times of the large and small pixels satisfy the preset ratio.

In the image sensor provided by this application, the preset exposure-time ratio of the large and small pixels can be met by setting the arrival moments of the effective transition edges of their respective control signals. For example, when the ratio of large-pixel to small-pixel exposure time is 2:1, the exposure result is sharper and the signal-to-noise ratio higher; the control signal of the large pixels then transitions first and that of the small pixels later, with the time difference between the two transition points chosen so that the exposure times satisfy the preset ratio. Precisely setting this ratio therefore allows finer control of the light-sensing effect of the image sensor. Exemplarily, the effective transition edge may be the falling edge of a high-level signal, the rising edge of a low-level signal, the rising edge of a high-level signal, or the falling edge of a low-level signal.
In a possible implementation, the sensor further includes a filter for filtering out ultraviolet light and infrared light with a wavelength greater than a second preset wavelength, the second preset wavelength being greater than the first preset wavelength and than any wavelength in the specific wavelength range.

With this filter, the image sensor provided by this application can remove the longer-wavelength far-infrared light and the shorter-wavelength ultraviolet light from natural light, preventing them from affecting the light-sensing characteristics of the photosensitive device.

In a possible implementation, the image sensor further includes a charge readout module, and each pixel includes a photosensitive device; the photosensitive device converts light into electric charge, and the charge readout module outputs the charge accumulated by the photosensitive device to obtain the light-sensing result.
In a possible implementation, the sensor further includes a logic control circuit for independently controlling the exposure times of the red, green, blue, and infrared light pixels.

In the image sensor provided by this application, the exposure times of the R, G, B, and IR pixels are controlled independently. When a scene demands stronger light-sensing results from the R and G pixels while the B and IR results should be reduced, the exposure times of the four kinds of pixels can be controlled flexibly to strengthen the R and G components and weaken the B and IR components, making the final light-sensing result better fit the scene. This further increases the dynamic range of the sensor and provides light-sensing results whose sharpness or signal-to-noise ratio better match customer requirements.

In a possible implementation, the logic control circuit includes a first, second, third, and fourth control line, with the red, green, blue, and infrared light pixels in the pixel array coupled to them respectively; the logic control circuit is specifically configured to control the exposure start time of the red pixels based on the first control line, of the green pixels based on the second control line, of the blue pixels based on the third control line, and of the infrared light pixels based on the fourth control line.

In a possible implementation, the logic control circuit is further configured to control, based on the four control lines, the exposure times of the red, green, blue, and infrared light pixels to satisfy a preset ratio.

In the image sensor provided by this application, the exposure times of the R, G, B, and IR pixels can be preset to satisfy a preset ratio, achieving fine control of the light-sensing effect of the image sensor.

Exemplarily, the first through fourth control lines output first through fourth control signals; the red pixels start exposing when the first effective transition edge of the first control signal arrives, the green pixels when the second effective transition edge of the second control signal arrives, the blue pixels when the third effective transition edge of the third control signal arrives, and the infrared light pixels when the fourth effective transition edge of the fourth control signal arrives. By setting the arrival moments of the four effective transition edges, the exposure times of the R, G, B, and IR pixels satisfy the preset ratio.
In a possible implementation, the sensor further includes row coordinate control lines, column coordinate control lines, and an exposure start control line; each pixel in the pixel array is coupled to its own row and column coordinate control lines, and the exposure start control line includes multiple branches, one per pixel. When the control signals output by the row and column coordinate control lines of a target pixel are both at the effective level, the branch of the exposure start control line corresponding to the target pixel outputs a control signal that controls the exposure start time of the target pixel; the target pixel is any pixel in the pixel array.
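The per-pixel gating above amounts to a logical AND of the row and column lines; the sketch below illustrates this (the function names are illustrative, not from the patent):

```python
def branch_output(row_level_active, col_level_active):
    """The exposure-start branch of a pixel fires only when both its row
    coordinate line AND its column coordinate line are at the active level."""
    return row_level_active and col_level_active

def pixels_started(active_rows, active_cols, height, width):
    """All (row, col) positions whose exposure-start branch fires."""
    return [(r, c) for r in range(height) for c in range(width)
            if branch_output(r in active_rows, c in active_cols)]
```

Driving one row line and one column line active starts exposure for exactly one target pixel, which is what makes per-pixel exposure control possible.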
In the image sensor provided by this application, the light-sensing time of each pixel can be controlled independently. In scenes where the pixels of a target region need enhancement, the exposure time of only the pixels in that region can be increased, further improving the flexibility of the sensor's light sensing and better meeting user requirements for the light-sensing result.

In a possible implementation, the sensor further includes an exposure end control signal for uniformly controlling the exposure end time of all pixels in the pixel array.

In a possible implementation, the logic control circuit includes a first control variable x and a second control variable y; when x and y satisfy the coordinate condition of the visible light pixels, the reset signal of the logic control circuit is output to the first control line as the first control signal, and when x and y satisfy the coordinate condition of the IR pixels, the reset signal is output to the second control line as the second control signal.
In a possible implementation, the logic control circuit includes a first control variable x and a second control variable y; when x and y satisfy the coordinate condition of the R pixels, the reset signal of the logic control circuit is output to the first control line as the first control signal; when x and y satisfy the coordinate condition of the G pixels, it is output to the second control line as the second control signal; when x and y satisfy the coordinate condition of the B pixels, to the third control line as the third control signal; and when x and y satisfy the coordinate condition of the IR pixels, to the fourth control line as the fourth control signal.
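The coordinate test can be sketched as a simple routing function. The checkerboard layout assumed here (visible pixels where x + y is even, IR pixels where it is odd) is an illustration only; the real coordinate conditions depend on the sensor's actual array arrangement:

```python
def route_reset(x, y):
    """Select which control line receives the reset signal for pixel (x, y),
    assuming a checkerboard interleave of visible and IR positions."""
    if (x + y) % 2 == 0:
        return "first control line"   # visible light (large) pixel
    return "second control line"      # IR (small) pixel
```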
In a second aspect, this application provides an image light-sensing method applied to an image sensor including red pixels, green pixels, blue pixels, and non-visible-light pixels, wherein the red, green, and blue pixels are large pixels, the non-visible-light pixels are small pixels, the light-sensing area of a large pixel is larger than that of a small pixel, and the red, green, and blue pixels are arranged in the Bayer format. The method includes: sensing red light based on the red pixels; sensing green light based on the green pixels; sensing blue light based on the blue pixels; and sensing infrared light or white light based on the small pixels.

In a possible implementation, the non-visible-light pixels include infrared light pixels or white pixels, a white pixel sensing white light, which includes red, green, blue, and infrared light; the method specifically includes: sensing the infrared light based on the infrared light pixels, or sensing the white light based on the white pixels.

In a possible implementation, four large pixels surround one small pixel, and four small pixels surround one large pixel.

In a possible implementation, the areas of the large and small pixels are set according to the crosstalk accuracy of the image sensor.

In a possible implementation, the large and small pixels are shaped as regular polygons or circles.
In a possible implementation, the image sensor further includes an infrared cut filter layer; each large pixel corresponds to an infrared cut filter layer, which is used to cut off optical signals with a wavelength greater than the first preset wavelength, such signals including infrared light; the method further includes: the light passes through the infrared cut filter layer to reach the large pixels.

In a possible implementation, the image sensor further includes a filter layer comprising a red filter layer, a green filter layer, and a blue filter layer. Each red pixel corresponds to a red filter layer, which passes red light and infrared light in the first wavelength range; each green pixel corresponds to a green filter layer, which passes green light and infrared light in the second wavelength range; each blue pixel corresponds to a blue filter layer, which passes blue light and infrared light in the third wavelength range; the wavelengths of the infrared light in the first, second, and third wavelength ranges are all greater than the first preset wavelength. When the non-visible-light pixel is an infrared light pixel, the filter layer further includes an infrared light filter layer, one per infrared light pixel, which passes infrared light in a specific wavelength range; when the non-visible-light pixel is a white pixel, the filter layer further includes an all-pass or double-pass filter layer, one per white pixel, the all-pass filter layer passing light over the full wavelength range and the double-pass filter layer passing the red, green, and blue light and the infrared light in the specific wavelength range. The method further includes: the light passes through the infrared cut filter layer and the red filter layer to reach the red pixels; through the infrared cut filter layer and the green filter layer to reach the green pixels; through the infrared cut filter layer and the blue filter layer to reach the blue pixels; and through the infrared light filter layer to reach the infrared light pixels, or through the all-pass or double-pass filter layer to reach the white pixels.

In a possible implementation, the image sensor further includes a logic control circuit including a first control line and a second control line, the large pixels being coupled to the first control line and the small pixels to the second; the method further includes: controlling the exposure start time of the large pixels based on the first control line, and controlling the exposure start time of the small pixels based on the second control line.
In a third aspect, this application provides an apparatus for independent exposure, the apparatus including at least two control units, each of which controls the exposure start time of one type of pixel in the pixel array of a sensor, the pixel array including at least two types of pixels.

In existing sensors containing multiple types of pixels, the exposure times of the different types are controlled uniformly, so exposure imbalance easily occurs under unfavorable lighting, exposure control is inflexible, and the dynamic range of the sensor's exposure is poor. The apparatus provided by this application can control the exposure times of the different pixel types in the sensor independently, improving the dynamic range and signal-to-noise ratio of the sensor's light sensing. Exemplarily, the apparatus is a control unit or logic control circuit independent of the sensor, and the corresponding product form may be a processor or a chip product containing a processor.

In a possible implementation, the apparatus further includes the pixel array.

The apparatus may be a sensor containing the control units.
In a possible implementation, the sensor is an RGBIR sensor; the at least two types of pixels include visible light pixels (R, G, and B pixels) and IR pixels, or the at least two types of pixels include R, G, B, and IR pixels and the at least two control units include a first, second, third, and fourth control unit, which are used to control the exposure start times of the R, G, B, and IR pixels respectively.

In a possible implementation, the sensor is an RGBW sensor; the at least two types of pixels include visible light pixels (R, G, and B pixels) and W pixels, and the at least two control units include a first control unit for controlling the exposure start time of the visible light pixels and a second control unit for that of the W pixels; or the at least two types of pixels include R, G, B, and W pixels, and the at least two control units include first through fourth control units controlling the exposure start times of the R, G, B, and W pixels respectively.

In a possible implementation, the sensor is an RCCB sensor; the at least two types of pixels include visible light pixels (R and B pixels) and C pixels, and the at least two control units include a first control unit for controlling the exposure start time of the visible light pixels and a second control unit for that of the C pixels; or the at least two types of pixels include R, B, and C pixels, and the at least two control units include first, second, and third control units controlling the exposure start times of the R, B, and C pixels respectively.

In a possible implementation, the exposure times of the at least two types of pixels are controlled, based on the at least two control units, to satisfy a preset ratio.

In a possible implementation, the apparatus further includes an exposure end control unit for uniformly controlling the exposure end time of all pixels in the pixel array.
In a fourth aspect, this application provides a method for independent exposure, applied to a sensor including at least two types of pixels, the at least two types including a first type of pixels and a second type of pixels; the method includes: controlling the exposure start time of the first type of pixels based on a first control unit, and controlling the exposure start time of the second type of pixels based on a second control unit.

In a possible implementation, the method further includes: controlling the exposure time of each of the at least two types of pixels to satisfy a preset ratio.

Exemplarily: the exposure times of the visible light pixels and the IR pixels are controlled, based on the first and second control units, to satisfy a preset ratio; or those of the R, G, B, and IR pixels based on the first through fourth control units; or those of the visible light pixels and the W pixels based on the first and second control units; or those of the R, G, B, and W pixels based on the first through fourth control units; or those of the visible light pixels and the C pixels based on the first and second control units; or those of the R, B, and C pixels based on the first, second, and third control units.

In a possible implementation, the sensor is an RGBIR sensor, the first type of pixels are visible light pixels (R, G, and B pixels) and the second type are IR pixels; or the sensor is an RGBW sensor, the first type are visible light pixels (R, G, and B pixels) and the second type are W pixels; or the sensor is an RCCB sensor, the first type are visible light pixels (R and B pixels) and the second type are C pixels.
In a possible implementation, the at least two types of pixels further include a third type of pixels; the method further includes: controlling the exposure start time of the third type of pixels based on a third control unit.

In a possible implementation, the sensor is an RCCB sensor, the first type of pixels are R pixels, the second type are B pixels, and the third type are C pixels; the method specifically includes: controlling the exposure start times of the R, B, and C pixels based on the first, second, and third control units respectively.

In a possible implementation, the at least two types of pixels further include a third type of pixels and a fourth type of pixels; the method further includes: controlling the exposure start time of the third type of pixels based on a third control unit, and controlling the exposure start time of the fourth type of pixels based on a fourth control unit.

In a possible implementation, the sensor is an RGBIR sensor, the first through fourth types of pixels being R, G, B, and IR pixels; the method specifically includes: controlling the exposure start times of the R, G, B, and IR pixels based on the first through fourth control units respectively. Alternatively, the sensor is an RGBW sensor, the first through fourth types of pixels being R, G, B, and W pixels; the method specifically includes: controlling the exposure start times of the R, G, B, and W pixels based on the first through fourth control units respectively.

In a possible implementation, the method further includes: uniformly controlling, based on an exposure end control unit, the exposure end time of all pixels in the pixel array.
In a fifth aspect, this application provides a computer-readable storage medium storing instructions that, when run on a computer or processor, cause the computer or processor to execute the method of the fourth aspect or any possible implementation thereof.

In a sixth aspect, this application provides a computer program product containing instructions that, when run on a computer or processor, cause the computer or processor to execute the method of the fourth aspect or any possible implementation thereof.
Brief description of the drawings
FIG. 1 is a schematic diagram of a typical Bayer RGB CFA;

FIG. 2 is a schematic diagram of a typical RGBIR CFA;

FIG. 3 is a schematic diagram of a typical RGBIR CFA;

FIG. 4 shows light-sensing characteristic curves of a photosensitive device;

FIG. 5 is an exemplary structural diagram of the pixel array of a large/small-pixel image sensor;

FIG. 6 is an exemplary structural diagram of the pixel array of a large/small-pixel image sensor;

FIG. 7 is an exemplary structural diagram of an image acquisition apparatus;

FIG. 8 is an exemplary structural diagram of an image sensor provided by this application;

FIG. 9a is an exemplary structural diagram of an image sensor provided by this application;

FIG. 9b is an exemplary structural diagram of an image sensor provided by this application;

FIG. 10 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application;

FIG. 11 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application;

FIG. 12 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application;

FIG. 13 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application;

FIG. 14 is an exemplary control connection diagram of a large/small pixel array arrangement;

FIG. 15 is an exemplary timing diagram of control signals;

FIG. 16 is an exemplary control connection diagram of a large/small pixel array arrangement;

FIG. 17 is an exemplary timing diagram of control signals;

FIG. 18 is an exemplary control connection diagram of a large/small pixel array arrangement;

FIG. 19 is an exemplary timing diagram of control signals;

FIG. 20a shows a light-sensing characteristic curve of the photosensitive device in the image sensor provided by this application;

FIG. 20b shows a light-sensing characteristic curve of the photosensitive device in the image sensor provided by this application;

FIG. 21 is an exemplary structural diagram of an apparatus for independent exposure;

FIG. 22 is a flowchart of an embodiment of an image light-sensing method provided by this application;

FIG. 23 is a flowchart of an embodiment of an image light-sensing method provided by this application;

FIG. 24 is a flowchart of an embodiment of a method for independently controlling exposure time provided by this application.
Detailed description of embodiments

To make the purpose, technical solutions, and advantages of this application clearer, the technical solutions in this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

The terms "first", "second", etc. in the specification, claims, and drawings of this application are only used to distinguish descriptions and cannot be understood as indicating or implying relative importance or order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion, for example, containing a series of steps or units. Methods, systems, products, or devices are not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to these processes, methods, products, or devices.

It should be understood that in this application, "at least one (item)" means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of the following (items)" or similar expressions refers to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or multiple.
FIG. 4 shows the light-sensing characteristic curves of a photosensitive device. As shown in FIG. 4, the R pixel has a peak of light-sensing intensity in the wavelength range of red light (around 650 nm), the G pixel in that of green light (around 550 nm), the B pixel in that of blue light (around 450 nm), and the IR pixel in that of infrared light (around 850 nm or 910 nm); in addition, the R, G, and B pixels each have another intensity peak in the infrared wavelength range (around 850 nm or 910 nm). Therefore, the red, green, and blue filter layers each pass the optical signals at both of the peaks corresponding to their colors. As a result, even with filter layers, the light-sensing results of the R, G, and B pixels all carry a certain amount of IR signal, and under the influence of the IR signal the color information of the image obtained by the image sensor is inaccurate.
FIG. 5 is an exemplary structural diagram of the pixel array of a large/small-pixel image sensor. As shown in FIG. 5, a group of small pixels (including R, G, and B pixels) is embedded in the gaps between the large pixels (also R, G, and B pixels); the large pixels have a larger light-sensing area and higher sensitivity, and the small pixels a smaller area and lower sensitivity. The large-pixel image restores dark regions well, and the small-pixel image suppresses overexposure in bright regions well, so fusing the two images yields high dynamic range imaging (HDR). However, this image sensor does not support infrared night-vision scenes.

To better capture details in dark regions, FIG. 6 shows another exemplary pixel array of a large/small-pixel image sensor. As shown in FIG. 6, the large pixels sense the full wavelength band, i.e., they are white pixels sensing white light (including red, green, blue, and infrared light), while the small pixels sense visible light (including red, green, and blue light), further improving the light-sensing capability of the image sensor. However, this sensor's response to visible light is weak, making accurate color information hard to obtain, and for the small pixels the interfering light crossing over from the large pixels is more severe, greatly degrading the small pixels' signal-to-noise ratio.

This application provides an image sensor that, on the basis of sensing visible and infrared light independently, can obtain sufficient color information and ensure the resolution of the visible light image while reducing the crosstalk of infrared light into visible light, improving the signal-to-noise ratio of the visible light pixels.

The above image sensor can be applied to devices such as mobile phones, surveillance cameras, and security access control systems, and to fields such as photography, video capture, and monitoring that need both color images and IR or W monochrome images for image processing. Typical applications include living-body detection based on visible and infrared light, nighttime video surveillance, and color/monochrome dynamic fusion. A terminal device to which the sensor of this application applies may also be called user equipment (UE) and can be deployed on land (indoors or outdoors, handheld or vehicle-mounted), on water (e.g., on ships), or in the air (e.g., on aircraft, balloons, and satellites). The terminal device may be a mobile phone, a tablet (pad), a computer with wireless transceiver functions, a virtual reality (VR) device, an augmented reality (AR) device, a monitoring device, a smart large screen, a smart TV, a wireless device in remote medical, or a wireless device in a smart home; this application does not limit this.
FIG. 7 is an exemplary structural diagram of an image acquisition apparatus. As shown in FIG. 7, the apparatus consists of a lens 701, an image sensor 702, an ISP module 703, an image fusion module 704, an infrared lamp drive control module 705, and an infrared fill light 706. The lens 701 captures still images or video, collecting the light signal reflected by the object to be photographed and passing it to the image sensor. The image sensor 702 may be the image sensor provided by this application, which generates raw image data of the object (visible light image data and infrared image data) from the light signal. The ISP module 703 adjusts the visible light exposure parameters and the intensity of the infrared fill light according to the raw image until the convergence condition of the auto-exposure algorithm is satisfied, and also separates the visible light image and the infrared image from the raw image. The image fusion module 704 fuses the separated visible light and infrared images into a target image. The infrared lamp drive control module 705 controls the infrared fill light 706 according to the intensity configured by the ISP module 703. The infrared fill light 706 provides infrared illumination.
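The dataflow through these modules can be sketched as a tiny pipeline (a toy illustration; `capture`, `split`, and `fuse` are illustrative stand-ins for modules 702-704, not an API from the document):

```python
def capture(raw_mosaic, split, fuse):
    """Toy pipeline: the ISP separates the raw data into a visible light
    image and an infrared image, and the fusion module combines them."""
    visible, infrared = split(raw_mosaic)   # role of ISP module 703
    return fuse(visible, infrared)          # role of fusion module 704

# Tagged samples stand in for the raw mosaic from the sensor (702).
result = capture(
    [("vis", 10), ("ir", 3)],
    split=lambda m: ([v for k, v in m if k == "vis"],
                     [v for k, v in m if k == "ir"]),
    fuse=lambda vis, ir: vis + ir)
```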
Optionally, the image acquisition apparatus may use a single lens with a single image sensor, dual lenses with dual image sensors, or a single lens with a beam splitter and dual image sensors; a single-lens structure saves cost, while a single-image-sensor structure simplifies the camera. This application does not specifically limit this.

FIG. 8 is an exemplary structural diagram of an image sensor provided by this application. As shown in FIG. 8, the image sensor includes red (R) pixels, green (G) pixels, blue (B) pixels, and non-visible-light pixels. Since red, green, and blue light are collectively called visible light, the corresponding pixels, i.e., the R, G, and B pixels, may collectively be called visible light pixels. In contrast, this application calls pixels other than R, G, and B pixels non-visible-light pixels. Exemplarily, the non-visible-light pixels include infrared light (IR) pixels and white (W) pixels. White light usually refers to light over the full wavelength band, including red, green, blue, and infrared light.

FIG. 9a is an exemplary structural diagram of an image sensor provided by this application. As shown in FIG. 9a, the R, G, and B pixels are large pixels and the IR pixels are small pixels. The light-sensing area of a large pixel is larger than that of a small pixel, and correspondingly its sensitivity is higher. The R, G, and B pixels are arranged in the Bayer RGB CFA: for example, as shown in FIG. 1, the array uses a 2×2 arrangement in which the first row of the minimum repeating unit includes an R pixel and a G pixel and the second row includes a G pixel and a B pixel.
In this pixel array of large and small pixels, the large pixels (R, G, and B pixels) use the Bayer RGB CFA, exactly as in a traditional RGB sensor, so the color information obtained from them is not reduced, the resolution of the resulting visible light image does not drop, and existing demosaic algorithms can be reused directly and embedded seamlessly into an existing ISP. The larger light-sensing area of the large pixels gives them higher sensitivity and thus more accurate color information. Because the small pixels (IR pixels) adjacent to a large pixel (visible light pixel) are small in area, the contact range between IR and visible light pixels is greatly reduced, reducing optical crosstalk from the IR pixels into the visible light pixels, lowering the interference of the IR light-sensing results with the color information, and further improving the signal-to-noise ratio of the visible light image's color information. In infrared night-vision or low-light scenes, infrared fill light is provided, i.e., the infrared light is strong, so designing the IR pixels as small pixels affects neither the infrared imaging result nor the acquisition of detail information. With the image sensor provided by this application, the large pixels obtain more accurate color information while the detail information from the small pixels is unaffected, greatly improving the detail and color performance of the fused image.
FIG. 9b is an exemplary structural diagram of an image sensor provided by this application. As shown in FIG. 9b, the R, G, and B pixels are large pixels and the W pixels are small pixels. The light-sensing area of a large pixel is larger than that of a small pixel, and correspondingly its sensitivity is higher. The R, G, and B pixels are likewise arranged in the Bayer RGB CFA, for example as shown in FIG. 1.

As before, the large pixels (R, G, and B pixels) use the Bayer RGB CFA, exactly as in a traditional RGB sensor, so the color information is not reduced, the visible light image keeps full resolution, and existing demosaic algorithms can be reused within an existing ISP; the higher sensitivity of the large pixels yields more accurate color information, and the small contact range between the W pixels and the adjacent visible light pixels reduces crosstalk from the W pixels, lowering the interference of the W light-sensing results with the color information and further improving its signal-to-noise ratio. Because the spectral response curve of white light is wide and its light-sensing capability strong, designing the white pixels as small pixels affects neither the white-light imaging result nor the acquisition of detail information. The large pixels obtain more accurate color information while the detail information from the small pixels is unaffected, greatly improving the detail and color performance of the fused image.

In the above pixel array, four large pixels may surround one small pixel and, conversely, four small pixels may surround one large pixel. At the edge of the array, however, if the outermost pixel is a large pixel, four small pixels cannot surround it at that position, and if the outermost pixel is a small pixel, four large pixels cannot surround it. This application does not specifically limit the arrangement in which four pixels of one kind surround a pixel of the other kind.

This application obtains the color information of an image from the visible light pixels and the detail information from the non-visible-light pixels, and fuses the visible light image obtained by the former with the non-visible-light image obtained by the latter into the final image. Since the visible light pixels use the Bayer RGB CFA, exactly as in a traditional RGB sensor, the visible light image is a full-resolution image; the non-visible-light pixels (IR or W pixels) are placed between the pixel rows and columns of the visible light pixels, forming a pixel array in which four visible light pixels surround one non-visible-light pixel and four non-visible-light pixels surround one visible light pixel, so the resulting non-visible-light image is also a full-resolution image. The large/small pixel design does not affect the color information of the visible light image or the detail information of the non-visible-light image.
Usually the distance between the center points of adjacent pixels in a pixel array is fixed, depending on the size and process of the image sensor. On this basis, the areas of the large and small pixels can be set according to the crosstalk accuracy of the image sensor. To reduce crosstalk, the ratio of large-pixel to small-pixel area should be as large as possible; to improve sensitivity, both the large-pixel area (more accurate color information) and the small-pixel area (more detail information) should be as large as possible within the maximum-area constraint. Taking further into account the accuracy of the image sensor itself, the illuminance requirements, and the intensity of the infrared fill light, the areas of the large and small pixels can be preset by balancing these factors. Optionally, setting the areas can be converted into setting the gap between adjacent edges of a large pixel and a small pixel: larger areas mean a smaller gap.

Based on the above factors, this application may shape the large and small pixels as regular polygons or circles. The shapes of the large and small pixels may be the same or different; for example, as shown in FIGS. 8 and 9, the large pixels are regular octagons and the small pixels are squares. Optionally, the shapes may also be regular hexagons, squares, etc. This application does not specifically limit this.
FIG. 10 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application; the section of this embodiment is cut along the dashed line in FIG. 8. As shown in FIG. 10, seven pixels along the dashed line are shown by way of example, namely an R pixel, an IR pixel, a B pixel, an IR pixel, an R pixel, an IR pixel, and a B pixel. Each pixel includes a micro lens 1001, a filter layer, and a charge readout module 1003; the R, G, and B pixels additionally contain an infrared cut filter layer 1004. The filter layer in an R pixel is a red filter layer 1002R, in a G pixel a green filter layer 1002G, in a B pixel a blue filter layer 1002B, and in an IR pixel an infrared light filter layer 1002IR.

The infrared cut filter layer 1004, which may also be called IR-Cut, cuts off optical signals with a wavelength greater than the first preset wavelength; such signals include infrared light signals. Exemplarily, the first preset wavelength is 650 nm, and the infrared cut filter layer 1004 cuts off optical signals above 650 nm, which include infrared light. Typically, visible light spans roughly 430 nm to 650 nm, and the infrared light sensed by the IR pixels spans roughly 850 nm to 920 nm. The IR-Cut can cut off optical signals above 650 nm so that infrared light in the roughly 850-920 nm range cannot enter the R, G, and B pixels.

The light-sensing characteristic of light passing through the red filter layer 1002R in the R pixel is shown by the thin black solid line R in FIG. 4: the R pixel shows two intensity peaks, near 650 nm (red light) and near 850 nm (infrared light). Through the green filter layer 1002G, the G pixel (short dashed line G in FIG. 4) peaks near 550 nm and near 850 nm; through the blue filter layer 1002B, the B pixel (dash-dotted line B) peaks near 450 nm and near 850 nm; through the infrared light filter layer 1002IR, the IR pixel (long dashed line IR) peaks only near 850 nm (910 nm).

It follows that the red filter layer 1002R passes both red light and infrared light in the first wavelength range, the green filter layer 1002G passes both green light and infrared light in the second wavelength range, and the blue filter layer 1002B passes both blue light and infrared light in the third wavelength range. It should be understood that the first, second, and third wavelength ranges may or may not be the same, and the wavelengths of the infrared light in all three ranges are greater than the first preset wavelength. The infrared light filter layer 1002IR passes only infrared light in a specific wavelength range, which may be 850-920 nm or any particular wavelength within or near that range. Exemplarily, the IR pixel may mainly sense infrared light at 850 nm or at 910 nm, or at any particular wavelength within or near the 850-920 nm range; this application does not limit this.
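The passbands just described can be summarized numerically. In the hedged sketch below, the band edges are illustrative round numbers built from the typical wavelengths in the text (650/550/450 nm color peaks, 850-920 nm IR band); real filters have gradual roll-offs rather than hard edges:

```python
PASSBANDS_NM = {
    "R":  [(600, 650), (850, 920)],   # red light plus the IR leak band
    "G":  [(500, 580), (850, 920)],   # green light plus the IR leak band
    "B":  [(430, 470), (850, 920)],   # blue light plus the IR leak band
    "IR": [(850, 920)],               # specific IR range only
}

def reaches_photodiode(pixel, wavelength_nm, ir_cut=False):
    """True if light of this wavelength passes the pixel's filter stack.
    ir_cut models the IR cut filter layer blocking everything above 650 nm."""
    if ir_cut and wavelength_nm > 650:
        return False
    return any(lo <= wavelength_nm <= hi
               for lo, hi in PASSBANDS_NM[pixel])
```

With `ir_cut=True`, the 850-920 nm leak band of the R, G, and B entries is suppressed, which is exactly the role the IR-Cut layer plays for the visible light pixels.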
The micro lens 1001 is a tiny convex lens device arranged on each pixel of the image sensor to concentrate incoming light into the pixel.

The micro lenses corresponding to the R, G, and B pixels are each coated with an infrared cut filter layer 1004, so light beyond 650 nm cannot enter the R, G, and B pixels.

The micro lens of the R pixel is additionally coated with a red filter layer 1002R, so only red light near 650 nm enters the R pixel, which can thus sense only red light.

The micro lens of the G pixel is additionally coated with a green filter layer 1002G, so only green light near 550 nm enters the G pixel, which can thus sense only green light.

The micro lens of the B pixel is additionally coated with a blue filter layer 1002B, so only blue light near 450 nm enters the B pixel, which can thus sense only blue light.

The micro lens of the IR pixel is coated with an infrared light filter layer 1002IR, so only near-infrared light near 850 nm or 910 nm enters the IR pixel, which can thus sense only infrared light.

This application coats the infrared cut filter layer on the micro lenses corresponding to the R, G, and B pixels, cutting off the infrared light reaching those pixels and removing the IR signal from the light-sensing results of the visible light pixels, so the colors of the results are more accurate and the light-sensing effect of the sensor improves. Further, since this application applies the coating to the micro lens, no complex mechanical structure needs to be added and the structure of the pixel under the micro lens is unchanged; under the micro lens the pixel contains only a photosensitive device, such as a photodiode, and a relatively simple and stable internal structure helps control issues affecting imaging such as the chief ray angle (CRA). Coating the filter layer on the micro lens improves the light-sensing effect of the sensor while keeping the pixel structure itself stable.
The interior of a pixel is not a smooth inner wall; there are protrusions on it, and if the incident angle of the light deviates from the main optical path of the micro lens, part of the light is blocked by these protrusions and the light-sensing effect of the sensor itself degrades. The CRA of the pixel at the optical center of the sensor is 0 degrees, and the farther a pixel is from the optical center, the larger its CRA. Usually, if the offset distance of a pixel from the center of the frame is taken as the abscissa and the pixel's CRA as the ordinate, the function between the two is linear; this regularity is called consistent CRA behavior. To make the sensor conform to it, the position of each pixel's micro lens must be fine-tuned according to the pixel's location: the micro lens of a pixel at the optical center is directly above the pixel, that of a pixel off-center is not, and the offset grows for pixels farther from the optical center. If the structure inside the pixel under the micro lens is complex, CRA behavior easily becomes inconsistent, and fine-tuning the micro lens position may no longer work. The filter layers added by the sensor provided in this application are coated on the micro lens and do not change the pixel's internal structure, which remains simple and stable, improving the sensor's light-sensing effect without affecting its consistent CRA behavior.
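The "consistent CRA" rule above is just a linear function of the pixel's offset from the optical center; the sketch below illustrates it (the maximum CRA and the frame half-diagonal are assumed example values, not figures from the document):

```python
def cra_deg(offset_from_center, half_diagonal, max_cra_deg=25.0):
    """Linear CRA model: 0 degrees at the optical center, rising linearly
    to max_cra_deg for the pixel farthest from the center."""
    return max_cra_deg * offset_from_center / half_diagonal
```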
Each pixel in the image sensor includes a photosensitive device, for example a photodiode, which converts the optical signal into an electrical signal, i.e., converts the optical signal into charge.

The charge readout module 1003 reads out the charge accumulated by the photosensitive device and outputs it to a subsequent image processing circuit or image processor. The charge readout module is similar to a buffer area: the charge accumulated by the photosensitive device is transferred to and temporarily cached in the charge readout module, and the charge signal of the corresponding pixel is output under the control of a readout signal.

It should be understood that in the image sensor shown in FIG. 10, only the red filter layer 1002R, the green filter layer 1002G, the blue filter layer 1002B, and the infrared light filter layer 1002IR are coated on the micro lens 1001, while the infrared cut filter layer 1004 is made inside the R, G, and B pixels, for example by adding a glass sheet inside the pixel and coating the infrared cut filter layer 1004 on it.
In a possible implementation, FIG. 11 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application. As shown in FIG. 11, the infrared cut filter layer 1104 is coated on top of the red filter layer 1102R, the green filter layer 1102G, and the blue filter layer 1102B respectively. The other parts of the image sensor shown in FIG. 11 are the same as the sensor shown in FIG. 10 and are not repeated here.

In a possible implementation, only the infrared cut filter layer 1104 may be coated on the micro lens 1101, while the red filter layer 1102R is made inside the R pixel, the green filter layer 1102G inside the G pixel, the blue filter layer 1102B inside the B pixel, and the infrared light filter layer 1102IR inside the IR pixel, for example by adding a glass sheet inside the pixel and coating the red filter layer 1102R, green filter layer 1102G, blue filter layer 1102B, and/or infrared light filter layer 1102IR on it.

In a possible implementation, the red filter layer 1102R, the green filter layer 1102G, and the blue filter layer 1102B may each be coated on top of the infrared cut filter layer 1104.

The image sensor provided by this application does not limit the coating positions of the infrared cut filter layer and the red, green, and blue filter layers on the outside or inside of the micro lens.
On the basis of the image sensor shown in FIG. 10, FIG. 12 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application. As shown in FIG. 12, the image sensor further includes a filter 1205 for filtering out ultraviolet light and infrared light with a wavelength greater than a second preset wavelength, the second preset wavelength being greater than the first preset wavelength and than any wavelength in the specific range passed by the infrared light filter layer, so that visible light and part of the infrared light pass through the filter 1205.

Infrared light with a wavelength greater than the second preset wavelength may be called far-infrared light, whose wavelength exceeds that of the infrared light the infrared light filter layer allows through. Exemplarily, visible light spans roughly 430-650 nm and the typical wavelength range of the infrared light sensed by the IR pixels is roughly 850-920 nm; the second preset wavelength may be, for example, 900 nm or 920 nm, or any wavelength between 850 nm and 950 nm. Exemplarily, the filter 1205 may be an all-pass filter or a dual-pass filter. One exemplary all-pass filter filters out ultraviolet light with a wavelength below 400 nm and infrared light with a wavelength above 900 nm. One exemplary dual-pass filter passes only visible light and infrared light in the 800-900 nm range, which amounts to filtering out infrared light above 900 nm; another exemplary dual-pass filter passes only visible light and infrared light in the 900-950 nm range, which amounts to filtering out infrared light above 950 nm. It should be understood that the wavelengths filtered by the all-pass filter and passed by the dual-pass filter can both be designed as needed; this application does not limit them. The filter 1205 prevents long-wavelength far-infrared light and ultraviolet light from affecting the light-sensing characteristics of the photosensitive device.

For the micro lens 1201, the filter layer, the infrared cut filter layer 1204, and the charge readout module 1203, please refer to the description of the embodiment shown in FIG. 10, which is not repeated here.
FIG. 13 is an exemplary structural diagram of a longitudinal section of an image sensor provided by this application; the section of this embodiment is cut along the dashed line in FIG. 9. As shown in FIG. 13, seven pixels along the dashed line are shown by way of example, namely an R pixel, a W pixel, a B pixel, a W pixel, an R pixel, a W pixel, and a B pixel. Each pixel includes a micro lens 1301, a filter layer, a charge readout module 1303, and a filter 1305; the R, G, and B pixels additionally contain an infrared cut filter layer 1304. The filter layer in an R pixel is a red filter layer 1302R, in a G pixel a green filter layer 1302G, in a B pixel a blue filter layer 1302B, and in a W pixel an all-pass or double-pass filter layer 1302W.

The all-pass filter layer passes light over the full wavelength band, including red, green, blue, and infrared light. The double-pass filter layer passes red light, green light, blue light, and infrared light in a specific wavelength range.

For the micro lens 1301, the red filter layer 1302R, the green filter layer 1302G, the blue filter layer 1302B, the infrared cut filter layer 1304, the charge readout module 1303, and the filter 1305, please refer to the description of the embodiments shown in FIGS. 10-12, which is not repeated here.
本申请还提供一种能够独立控制大像素和小像素的曝光时间的传感器,如图14所示,图14示出了大小像素阵列排序的一个示例性的控制连接示意图。
该图像传感器包括像素阵列1410和逻辑控制电路1420。
该像素阵列1410为如图8-13任一个实施例中所示的图像传感器中的像素阵列。
逻辑控制电路1420,用于分别独立控制大像素和小像素的曝光时间,大像素包括R像素、G像素和B像素,小像素为IR像素或者W像素,图14中以IR像素为例。具体的,该逻辑控制电路1420包括第一控制线和第二控制线,或者也可以说包括两个独立的控制电路:第一控制电路和第二控制电路。像素阵列1410中的大像素耦合至第一控制线,像素阵列1410中的小像素耦合至第二控制线。应当理解,图14中名称相同的控制线是同一根线或者说是彼此相连的,例如,像素阵列侧的第一控制线与逻辑控制电路的第一控制线是同一根线或者是相连的,像素阵列侧的第二控制线和逻辑控制电路的第二控制线是同一根线或者说是相连的。
位于大像素位置的像素耦合至第一控制线,位于小像素位置的像素耦合至第二控制线。应当理解,像素阵列的阵列排序不同时,像素各自的坐标条件会相应变动,因此,逻辑控制电路与像素阵列的连接方式需要根据传感器的排列方式对应设计。
第一控制线输出第一控制信号,第二控制线输出第二控制信号,第一控制信号用于控制大像素的曝光起始时间,第二控制信号用于控制小像素的曝光起始时间。第一控制信号和第二控制信号是彼此独立的,因此大像素和小像素的曝光起始时间可以不同。示例性的,当第一控制信号的第一有效跳变沿到来时,大像素开始曝光,当第二控制信号的第二有效跳变沿到来时,小像素开始曝光。第一控制信号和第二控制信号的有效跳变沿可以都是下降沿也可以都是上升沿,或者可以一个是下降沿另一个是上升沿,本申请对控制信号的有效跳变沿不做限定。图15示出了控制信号的一个示例性的时序图,如图15所示,第一控制信号和第二控制信号的有效跳变沿均为下降沿,在一种可能的实现方式中,第一控制信号和第二控制信号可以根据逻辑控制电路的系统复位信号得到。如图15所示,第一控制信号和第二控制信号均为高电平有效信号,当第一控制信号的下降沿到来时,大像素开始曝光,当第二控制信号的下降沿到来时,小像素开始曝光。
可选的，逻辑控制电路1420还包括：复位信号，该复位信号可以为系统时钟信号，第一控制信号和第二控制信号可以均由复位信号得到。示例性的，逻辑控制电路1420内部包括逻辑运算电路，该逻辑运算电路可以包括例如与、或、非、异或等逻辑运算，该逻辑运算电路包括三个输入：变量x、变量y以及复位信号，该逻辑运算电路包括两个输出端：第一控制线和第二控制线，当变量x和变量y满足大像素的坐标条件时，将复位信号与第一控制线的输出端相连；当变量x和变量y满足小像素的坐标条件时，将复位信号与第二控制线的输出端相连。
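上述逻辑运算电路把复位信号按坐标条件路由到两根控制线，其行为可以用如下Python片段示意。这里假设行列坐标之和为偶数的位置是大像素，该坐标条件仅为示意，实际坐标条件取决于像素阵列的排列方式：

```python
def route_reset(x, y):
    """按坐标条件把复位信号路由到第一或第二控制线（示意逻辑）。

    假设 (x + y) 为偶数的位置是大像素，该坐标条件仅为示意。
    """
    if (x + y) % 2 == 0:
        return "第一控制线"  # 大像素
    return "第二控制线"      # 小像素
```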
可选的,逻辑控制电路1420还包括:
曝光结束控制线,用于统一控制像素阵列中的所有像素的曝光停止时间。
曝光结束控制线输出曝光结束信号,曝光结束信号可以是高电平有效信号,也可以是 低电平有效信号,曝光结束时间点可以为高电平的下降沿,也可以为低电平的上升沿。在图15所示的控制信号时序图中,曝光结束控制信号为高电平有效信号,当曝光结束控制信号的下降沿到来时,像素阵列中的所有像素停止曝光。也即,像素阵列中大像素和小像素开始曝光的时间分别由第一控制线和第二控制线独立控制,结束曝光的时间由曝光结束控制线统一控制,如图15所示,大像素的曝光时间为第一控制信号的下降沿和曝光结束控制信号的下降沿之间的时间差:第一曝光时间,小像素的曝光时间为第二控制信号的下降沿和曝光结束控制信号的下降沿之间的时间差:第二曝光时间。因此,大像素和小像素的曝光时间是独立控制的。
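图15所示的时序关系可以概括为：曝光时间等于曝光结束控制信号的下降沿时刻与各控制信号的下降沿时刻之差。下面的片段是对该关系的一个示意计算（时刻数值为假设值）：

```python
def exposure_times(t_ctrl1_fall, t_ctrl2_fall, t_end_fall):
    """第一曝光时间与第二曝光时间：结束沿时刻减去各自的开始沿时刻。"""
    t_large = t_end_fall - t_ctrl1_fall  # 大像素：第一曝光时间
    t_small = t_end_fall - t_ctrl2_fall  # 小像素：第二曝光时间
    return t_large, t_small
```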
在一种示例性的实施方式中，可以通过控制第一控制信号的第一有效跳变沿和第二控制信号的第二有效跳变沿到来的时刻，使得大像素和小像素的曝光时间满足预设比例。例如，当大像素和小像素的曝光时间之比为2:1时曝光结果清晰度更好、信噪比更高，则可以使大像素的控制信号先跳变、小像素的控制信号后跳变，并确保两个跳变时间点之间的时间差使得大像素的曝光时间和小像素的曝光时间满足预设比例。
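给定预设比例与曝光结束时刻，可以反推两个控制信号应当跳变的时刻。下面的片段演示了比例为2:1时的一种计算方式，数值均为假设的示例：

```python
def start_edges(t_end, t_small_exposure, ratio=2.0):
    """使大像素曝光时间 : 小像素曝光时间 = ratio : 1（ratio 为示例值）。

    返回 (大像素控制信号跳变时刻, 小像素控制信号跳变时刻)。
    """
    t_large_exposure = ratio * t_small_exposure
    return t_end - t_large_exposure, t_end - t_small_exposure
```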
本申请提供的图像传感器，大像素和小像素的曝光时间是独立控制的，例如可以在红外光或白光太强而可见光太弱的情况下，增加大像素的曝光时间而减少小像素的曝光时间，使得大像素和小像素的成像亮度趋于平衡，避免在红外光或白光占主导成分或者可见光占主导成分时容易出现的曝光失衡问题，提升了传感器感光的动态范围，满足用户对清晰度和信噪比等指标的要求。进一步的，通过精准设置大像素与小像素的曝光时间比例可以更精细地控制图像传感器的感光效果。
可选的,逻辑控制电路1420还包括:
电荷转移控制线,用于控制像素阵列的感光器件积累的电荷转移到电荷读出模块的时间点。电荷转移控制线中输出电荷转移控制信号,电荷转移控制信号可以为高电平有效信号,也可以为低电平有效信号。在图15所示的控制信号时序图中,电荷转移控制信号为高电平有效信号,当电荷转移控制信号的下降沿到来时,累积的电荷从感光器件中转移到电荷读出模块中。在一种可能的实现方式中,电荷转移控制信号复位之后,曝光结束控制信号再复位。
应当理解,逻辑控制电路的功能还可以由运行在处理器上的软件代码来实现,或者逻辑控制电路的功能可以部分由硬件电路实现,部分由软件模块实现。示例性的,传感器可以包括像素阵列和控制单元,该控制单元为运行在处理器上的软件模块,该控制单元包括第一控制单元和第二控制单元,用于分别独立控制大像素和小像素的曝光起始时间;该控制单元还包括曝光结束控制单元,用于统一控制像素阵列中的每个像素的曝光结束时间。该控制单元还包括电荷转移控制单元和复位单元,复位单元用于提供上述复位信号,电荷转移控制单元的功能类似于电荷转移控制线,此处不再赘述。
本申请还提供一种能够独立控制各个颜色像素的曝光时间的传感器,如图16所示,图16示出了大小像素阵列排序的一个示例性的控制连接示意图。
该传感器包括像素阵列1610和逻辑控制电路1620。
该像素阵列1610为如图8-13任一个实施例中所示的图像传感器中的像素阵列。
逻辑控制电路1620,用于分别独立控制R像素、G像素、B像素和小像素(IR像素或者W像素,图16中以IR像素为例)的曝光时间。具体的,该逻辑控制电路1620包括 第一控制线、第二控制线、第三控制线和第四控制线,或者也可以说包括四个独立的控制电路:第一控制电路、第二控制电路、第三控制电路和第四控制电路。像素阵列中的R像素耦合至第一控制线,G像素耦合至第二控制线,B像素耦合至第三控制线,IR像素耦合至第四控制线。应当理解,图16中名称相同的控制线是同一根控制线,是彼此相连的,例如像素阵列侧的第一控制线与逻辑控制电路的第一控制线是同一根线,像素阵列侧的第四控制线与逻辑控制电路的第四控制线是同一根线,以此类推。
耦合至第一控制线的像素的坐标满足R像素的坐标条件,耦合至第二控制线的像素的坐标满足G像素的坐标条件,耦合至第三控制线的像素的坐标满足B像素的坐标条件,耦合至第四控制线的像素的坐标满足IR像素的坐标条件。应当理解,像素阵列的阵列排序不同时,像素各自的坐标条件会相应变动,因此,逻辑控制电路与像素阵列的连接方式需要根据传感器的排列方式对应设计。
第一控制线输出第一控制信号,第二控制线输出第二控制信号,第三控制线输出第三控制信号,第四控制线输出第四控制信号,第一控制信号用于控制R像素的曝光起始时间,第二控制信号用于控制G像素的曝光起始时间,第三控制信号用于控制B像素的曝光起始时间,第四控制信号用于控制IR像素的曝光起始时间。第一控制信号至第四控制信号是彼此独立的,因此R像素、G像素、B像素和IR像素的曝光起始时间可以不同。示例性的,当第一控制信号的第一有效跳变沿到来时,R像素开始曝光,当第二控制信号的第二有效跳变沿到来时,G像素开始曝光,当第三控制信号的第三有效跳变沿到来时,B像素开始曝光,当第四控制信号的第四有效跳变沿到来时,IR像素开始曝光。第一控制信号至第四控制信号可以均为高电平有效信号,第一控制信号至第四控制信号的有效跳变沿可以都是下降沿也可以都是上升沿,或者可以部分是下降沿剩余部分是上升沿,本申请对控制信号的有效跳变沿不做限定。图17示出了控制信号的一个示例性的时序图,如图17所示,第一控制信号至第四控制信号均为高电平有效信号,第一控制信号至第四控制信号的有效跳变沿均为下降沿,在一种可能的实现方式中,第一控制信号至第四控制信号可以根据逻辑控制电路的系统复位信号得到。如图17所示,当第一控制信号的下降沿到来时,R像素开始曝光,当第二控制信号的下降沿到来时,G像素开始曝光,当第三控制信号的下降沿到来时,B像素开始曝光,当第四控制信号的下降沿到来时,IR像素开始曝光。
可选的，逻辑控制电路1620还包括：复位信号，该复位信号可以为系统时钟信号，第一控制信号至第四控制信号可以均由复位信号得到。示例性的，逻辑控制电路1620内部包括逻辑运算电路，该逻辑运算电路可以包括例如与、或、非、异或等逻辑运算，该逻辑运算电路包括三个输入：变量x、变量y以及复位信号，该逻辑运算电路包括四个输出端：第一控制线至第四控制线。当变量x和变量y满足R像素的坐标条件时，将复位信号与第一控制线的输出端相连；当变量x和变量y满足G像素的坐标条件时，将复位信号与第二控制线的输出端相连；当变量x和变量y满足B像素的坐标条件时，将复位信号与第三控制线的输出端相连；当变量x和变量y满足IR像素的坐标条件时，将复位信号与第四控制线的输出端相连。应当理解，像素阵列的阵列排序不同时，像素各自的坐标条件会相应变动，因此，逻辑控制电路内部的逻辑运算电路需要根据像素阵列的排列方式对应调整。
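四分量独立曝光时，逻辑运算电路按四组坐标条件把复位信号路由到四根控制线。下面用Python给出一个示意，其中假设了一种2×2周期的RGBIR排布，实际坐标条件取决于像素阵列的排列方式：

```python
def route_reset_rgbir(x, y):
    """按坐标条件把复位信号路由到第一至第四控制线之一（示意逻辑）。

    假设像素按 2x2 周期排布，该排布仅为示意。
    """
    pattern = {(0, 0): "第一控制线(R)", (0, 1): "第二控制线(G)",
               (1, 0): "第三控制线(B)", (1, 1): "第四控制线(IR)"}
    return pattern[(x % 2, y % 2)]
```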
可选的,逻辑控制电路1620还包括:
曝光结束控制线,用于统一控制像素阵列中的所有像素的曝光停止时间。
曝光结束控制线输出曝光结束信号,曝光结束信号可以是高电平有效信号,也可以是低电平有效信号,曝光结束时间点可以为高电平的下降沿,也可以为低电平的上升沿。在图17所示的控制信号时序图中,曝光结束控制信号为高电平有效信号,当曝光结束控制信号的下降沿到来时,像素阵列中的所有像素停止曝光。也即,像素阵列中R像素、G像素、B像素和IR像素开始曝光的时间分别由第一控制线至第四控制线独立控制,结束曝光的时间由曝光结束控制线统一控制,示例性的,如图17所示的控制信号时序图中,R像素的曝光时间为第一控制信号的下降沿与曝光结束控制信号的下降沿之间的时间差:第一曝光时间,G像素、B像素和IR像素的曝光时间分别为第二曝光时间、第三曝光时间和第四曝光时间。因此,R像素、G像素、B像素和IR像素的曝光时间是独立控制的。
在一种示例性的实施方式中,可以通过控制第一控制信号的第一有效跳变沿至第四控制信号的第四有效跳变沿到来的时刻,使得R像素、G像素、B像素和IR像素的曝光时间满足预设比例。
可选的,逻辑控制电路1620还包括:电荷转移控制线,用于控制何时将像素阵列的感光器件积累的电荷转移到电荷读出模块。电荷转移控制线中输出电荷转移控制信号,电荷转移控制信号可以为高电平有效信号,也可以为低电平有效信号。图17所示的电荷转移控制信号同图15。
应当理解,逻辑控制电路的功能还可以由运行在处理器上的软件代码来实现,或者逻辑控制电路的功能可以部分由硬件电路实现,部分由软件模块实现。示例性的,传感器可以包括像素阵列和控制单元,该控制单元为运行在处理器上的软件模块,该控制单元包括第一控制单元、第二控制单元、第三控制单元、第四控制单元,用于分别独立控制R像素、G像素、B像素和IR像素的曝光起始时间;该控制单元还包括曝光结束控制单元,用于统一控制像素的四个分量的曝光结束时间。该控制单元还包括电荷转移控制单元和复位单元,复位单元用于提供复位信号,电荷转移控制单元的功能类似于电荷转移控制线,此处不再赘述。
本申请提供的图像传感器,R像素、G像素、B像素和IR像素的曝光时间分别独立控制,进一步提升了传感器感光的动态范围。当某些场景对R像素和G像素的感光结果要求较高而希望降低B像素和IR像素的感光结果,则可以通过灵活控制R像素、G像素、B像素和IR像素的曝光时间,加强R像素和G像素的感光效果,减弱B像素和IR像素的感光效果,使得最终的感光结果更符合场景需求或者更符合客户需求的清晰度或信噪比。进一步的,可以预先设置R像素、G像素、B像素和IR像素的曝光时间满足预设比例,以实现对传感器感光效果的精细控制。
本申请还提供一种各个像素的曝光时间均可以独立控制的传感器,如图18所示,图18示出了大小像素阵列排序的一个示例性的控制连接示意图。
该传感器包括像素阵列1810和逻辑控制电路1818。
该像素阵列1810为如图8-13任一个实施例中所示的传感器中的像素阵列。
逻辑控制电路1818包括行坐标控制电路和列坐标控制电路,或者说包括行坐标控制线和列坐标控制线,像素阵列中的每个像素耦合至各自的行坐标控制线和列坐标控制线。
逻辑控制电路1818还包括:复位信号和曝光开始控制线,当目标像素的行坐标线输 出的行坐标控制信号和列坐标线中输出的列坐标控制信号均为有效信号时,曝光开始控制线输出复位信号到该目标像素,并基于复位信号控制该目标像素的曝光起始时间。示例性的,曝光开始控制线具有多个分支,每个像素耦合至一个分支,当目标像素的行坐标控制信号和列坐标控制信号均满足要求时,该目标像素对应的分支输出有效的控制信号。列坐标控制线和行坐标控制线相当于开关,复位信号为输入,曝光开始控制线为输出,当列坐标控制线和行坐标控制线中的信号均为有效信号时,开关打开,复位信号才可以通过曝光开始控制线输出到目标像素,并控制目标像素的曝光。示例性的,当列坐标控制线和行坐标控制线中的信号均为有效信号时,且复位信号的有效跳变沿到来时,控制目标像素开始曝光。如果列坐标控制线和行坐标控制线中有一个信号不满足要求时,开关关闭,曝光开始控制线无控制信号输出。由于像素阵列中的每个像素均有各自对应的行坐标线和列坐标线,因此每个像素的曝光时间均可以独立控制,例如,可以将需要重点曝光的像素点的行坐标线和列坐标线中的信号优先设置为有效信号,从而延长重点曝光像素的曝光时间。
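行坐标控制线与列坐标控制线相当于复位信号的开关，上述与门式的开关关系可以用如下片段示意：

```python
def exposure_starts(row_valid, col_valid, reset_edge):
    """仅当行、列坐标控制信号均有效且复位信号的有效跳变沿到来时，
    目标像素才开始曝光（示意逻辑）。"""
    return bool(row_valid and col_valid and reset_edge)
```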
可选的，逻辑控制电路1818还包括：曝光结束控制线，用于统一控制像素阵列中各像素的曝光结束时间，具体可参考逻辑控制电路1420和逻辑控制电路1620的曝光结束控制线，此处不再赘述。
可选的，逻辑控制电路1818还包括：电荷转移控制线，用于控制何时将感光器件中累积的电荷转移到电荷读出模块，具体可参考逻辑控制电路1420和逻辑控制电路1620的电荷转移控制线，此处不再赘述。
应当理解，逻辑控制电路的功能还可以由运行在处理器上的软件代码来实现，或者逻辑控制电路的功能可以部分由硬件电路实现，部分由软件模块实现。示例性的，传感器可以包括像素阵列和控制单元，该控制单元为运行在处理器上的软件模块，该控制单元包括行控制单元、列控制单元和曝光开始控制单元，行控制单元和列控制单元用于分别指示像素的行坐标和列坐标，曝光开始控制单元用于在目标像素的行控制单元和列控制单元均满足要求时，输出有效控制信号控制该目标像素的曝光起始时间。
本申请提供的传感器,可以根据每个像素的行坐标控制线和列坐标控制线中的控制信号的情况控制像素的曝光起始时间,而曝光结束时间由曝光结束控制线统一控制,因此每个像素的曝光时间可以是不同的。进一步的,可以通过设置像素对应的行坐标控制信号和列坐标控制信号均变为有效信号的时刻,使得每个像素的曝光时间满足预设比例。在某些需要增强目标区域像素的场景中,可以仅增加目标区域中的像素的曝光时间,进一步提升了传感器感光的灵活性,也进一步满足用户对感光结果的需求。
图19示出了控制信号的一个示例性的时序图,如图19所示,以两种像素为例说明曝光开始控制信号对像素曝光起始时间的控制。该时序图中的信号均为高电平有效,应当理解,各控制信号也可以为低电平有效。
第一像素耦合至第一行坐标控制线和第一列坐标控制线,第一行坐标控制线中的信号为行坐标控制信号1,第一列坐标控制线中的信号为列坐标控制信号1,第二像素耦合至第二行坐标控制线和第二列坐标控制线,第二行坐标控制线中的信号为行坐标控制信号2,第二列坐标控制线中的信号为列坐标控制信号2。当第一像素的行坐标控制信号1和列坐标控制信号1均为高电平时,第一像素的曝光开始控制信号有效,具体的,将复位信号作为曝光开始控制信号,并在复位信号的下降沿到来时,控制第一像素开始曝光;当第二像 素的行坐标控制信号2和列坐标控制信号2均为高电平时,第二像素的曝光开始控制信号有效,具体的,将复位信号作为曝光开始控制信号,并在复位信号的下降沿到来时,控制第二像素开始曝光。在曝光结束控制信号的下降沿到来时,第一像素和第二像素均停止曝光。至此,第一像素的曝光时间为第一曝光时间,第二像素的曝光时间为第二曝光时间。应当理解,第一像素的曝光开始控制信号和第二像素的曝光开始控制信号可以为同一个信号的两个不同的分支,第一像素的坐标控制信号满足要求时,第一像素对应的分支输出有效的控制信号;第二像素的坐标控制信号满足要求时,第二像素对应的分支输出有效的控制信号。
图20a示出了本申请提供的图像传感器中的感光器件的感光特性曲线图。如图20a所示,横坐标为光线的波长,单位为nm,纵坐标为感光强度。细实线为R像素的感光特性曲线,短虚线为G像素的感光特性曲线,点画线为B像素的感光特性曲线,长虚线为IR像素的感光特性曲线。由图20a可得,R像素仅在红光650nm附近存在感光强度波峰,G像素仅在绿光550nm附近存在感光强度波峰,B像素仅在蓝光450nm附近存在感光强度波峰,IR像素仅在红外光850nm(在某些情况下可以是910nm)附近存在感光强度波峰。与图4所示的感光特性曲线图相比,本申请提供的图像传感器去除了R像素、G像素和B像素的感光结果中的IR信号,使得R像素可以仅感光红色光,G像素可以仅感光绿色光以及B像素可以仅感光蓝色光,提升了图像传感器感光结果的色彩准确度。
图20b示出了本申请提供的图像传感器中的感光器件的感光特性曲线图,如图20b所示,横坐标为光线的波长,单位为nm,纵坐标为感光强度。细实线为R像素的感光特性曲线,短虚线为G像素的感光特性曲线,点画线为B像素的感光特性曲线,长虚线为W像素的感光特性曲线。由图20b可得,R像素仅在红光650nm附近存在感光强度波峰,G像素仅在绿光550nm附近存在感光强度波峰,B像素仅在蓝光450nm附近存在感光强度波峰,W像素的感光范围覆盖全波段。与图4所示的感光特性曲线图相比,本申请提供的图像传感器去除了R像素、G像素和B像素的感光结果中的IR信号,使得R像素可以仅感光红色光,G像素可以仅感光绿色光以及B像素可以仅感光蓝色光,提升了图像传感器感光结果的色彩准确度。
本申请提供一种独立曝光的装置,该装置用于控制图像传感器的像素阵列的曝光时间,该装置包括至少两个控制单元,该至少两个控制单元中的每个控制单元用于分别对应控制传感器的像素阵列中的一种类型的像素的曝光起始时间,该装置控制的传感器的像素阵列中包括至少两种类型的像素,例如大像素和小像素。
应当理解,该独立曝光的装置可以认为是独立于图像传感器之外的控制装置,例如可以为通用处理器或专用处理器,或者可以认为是独立固化的硬件逻辑或硬件电路。例如,该独立曝光的装置可以认为是图14、16和18中的逻辑控制电路。
图21示出了独立曝光的装置的一个示例性的结构示意图。应当理解,前述图14、16和18中的逻辑控制电路均可以由运行在如图21所示的独立曝光的装置上的软件模块完成。
该曝光控制装置包括:至少一个中央处理单元(Central Processing Unit,CPU)、至少一个存储器、微控制器(Microcontroller Unit,MCU)、接收接口和发送接口等。可选的,该曝光控制装置还包括:专用的视频或图形处理器,以及GPU等。
可选的,CPU可以是一个单核(single-CPU)处理器或多核(multi-CPU)处理器;可选的,CPU可以是多个处理器构成的处理器组,多个处理器之间通过一个或多个总线彼此耦合。在一种可能的实现方式中,曝光控制可以一部分由跑在通用CPU或MCU上的软件代码完成,一部分由硬件逻辑电路完成;或者也可以全部由跑在通用CPU或MCU上的软件代码完成。可选的,存储器可以是非掉电易失性存储器,例如是嵌入式多媒体卡(Embedded Multi Media Card,EMMC)、通用闪存存储(Universal Flash Storage,UFS)或只读存储器(Read-Only Memory,ROM),或者是可存储静态信息和指令的其他类型的静态存储设备,还可以是掉电易失性存储器(volatile memory),例如随机存取存储器(Random Access Memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的程序代码并能够由计算机存取的任何其他计算机可读存储介质,但不限于此。该接收接口可以为处理器芯片的数据输入的接口。
在一种可能的实现方式中,该独立曝光的装置还包括:像素阵列。在这种情况中,该独立曝光的装置包括至少两种类型的像素,例如大像素和小像素,也即该独立曝光的装置可以为包含控制单元或者逻辑控制电路在内的图像传感器,或者说,该独立曝光的装置为可以独立控制曝光的图像传感器。示例性的,该独立曝光的装置可以为独立控制曝光的RGBIR传感器、RGBW传感器等。
应当理解,在一种可能的实现方式中,可见光像素被归为一种类型的像素,也即R像素、G像素和B像素被归为一种类型的像素(大像素),而IR像素或W像素被认为是另外一种类型的像素(小像素)。
在另外一种可选的情况中,每种像素分量被认为是一种类型的像素,例如,RGBIR传感器包括:R、G、B和IR四种类型的像素,RGBW传感器包括:R、G、B和W四种类型的像素。
在一种可能的实现方式中,传感器为RGBIR传感器,RGBIR传感器可以实现可见光像素和IR像素分别独立曝光,也可以实现R、G、B、IR四分量分别独立曝光。
对于可见光像素和IR像素分别独立曝光的RGBIR传感器,该至少两个控制单元包括:第一控制单元和第二控制单元;第一控制单元用于控制可见光像素的曝光起始时间;第二控制单元用于控制IR像素的曝光起始时间。
对于R、G、B、IR四分量分别独立曝光的RGBIR传感器,至少两个控制单元包括:第一控制单元、第二控制单元、第三控制单元和第四控制单元;第一控制单元用于控制R像素的曝光起始时间;第二控制单元用于控制G像素的曝光起始时间;第三控制单元用于控制B像素的曝光起始时间;第四控制单元用于控制IR像素的曝光起始时间。
在一种可能的实现方式中,传感器为RGBW传感器,RGBW传感器可以实现可见光像素和W像素分别独立曝光,也可以实现R、G、B、W四分量分别独立曝光。
对于可见光像素和W像素分别独立曝光的RGBW传感器,该至少两个控制单元包括:第一控制单元和第二控制单元;第一控制单元用于控制可见光像素的曝光起始时间;第二 控制单元用于控制W像素的曝光起始时间。
对于R、G、B、W四分量分别独立曝光的RGBW传感器,至少两个控制单元包括:第一控制单元、第二控制单元、第三控制单元和第四控制单元;第一控制单元用于控制R像素的曝光起始时间;第二控制单元用于控制G像素的曝光起始时间;第三控制单元用于控制B像素的曝光起始时间;第四控制单元用于控制W像素的曝光起始时间。
在一种可能的实现方式中,该独立曝光的装置还可以基于至少两个控制单元控制至少两种类型的像素的曝光时间满足预设比例。示例性的,基于第一控制单元和第二控制单元控制可见光像素和IR像素的曝光时间满足预设比例;或者基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R、G、B和IR像素的曝光时间满足预设比例。或者,基于第一控制单元和第二控制单元控制可见光像素和W像素的曝光时间满足预设比例;或者,基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R、G、B和W像素的曝光时间满足预设比例。
在一种可能的实现方式中,该独立曝光的装置还包括:曝光结束控制单元,用于统一控制像素阵列中的所有像素的曝光结束时间。
本申请还提供了一种图像感光方法。图22为本申请提供的图像感光方法实施例的流程图,如图22所示,该方法应用于本申请提供的图像传感器,该图像传感器包括:红色像素、绿色像素、蓝色像素和非可见光像素;其中,红色像素、绿色像素和蓝色像素为大像素,非可见光像素为小像素,大像素的感光面积大于小像素的感光面积;红色像素、绿色像素和蓝色像素的排列呈Bayer格式。该图像感光方法包括:
步骤2201、基于红色像素感光红色光。
步骤2202、基于绿色像素感光绿色光。
步骤2203、基于蓝色像素感光蓝色光。
步骤2204、基于小像素感光红外光或白色光。
上述作为小像素的非可见光像素包括红外光像素或白色像素,其中白色像素用于感光白色光,白色光包括红色光、绿色光、蓝色光和红外光。因此步骤2204可以进一步包括基于红外光像素感光红外光,或者,基于白色像素感光白色光。
在一种可能的实现方式中,四个大像素包围一个小像素,且四个小像素包围一个大像素。大像素和小像素的面积根据图像传感器的串扰精度设置。大像素和小像素的形状为正多边形或圆形。
图像传感器中的像素结构和原理可参考装置侧的说明,此处不再赘述。
应当理解,步骤2201-2204的标号并不限定方法的执行顺序,步骤2201-2204通常可以是同步执行的,或者步骤与步骤之间也可以不是严格同步执行,而是彼此之间存在一些时间差,本申请对此不作限定。
在图22所示实施例的基础上,图23为本申请提供的图像感光方法实施例的流程图,如图23所示,图像传感器还可以包括:微镜头、红外截止滤光层和滤光层;每个像素对应一个微镜头;每个大像素对应一个红外截止滤光层,红外光截止滤光层用于截止波长大于第一预设波长的光信号,波长大于第一预设波长的光信号包括红外光;滤光层包括红色滤光层、绿色滤光层和蓝色滤光层;每个红色像素对应一个红色滤光层,红色滤光层用于通过红色光和第一波长范围内的红外光;每个绿色像素对应一个绿色滤光层,绿色滤光层 用于通过绿色光和第二波长范围内的红外光;每个蓝色像素对应一个蓝色滤光层,蓝色滤光层用于通过蓝色光和第三波长范围内的红外光;第一波长范围内的红外光、第二波长范围内的红外光以及第三波长范围内的红外光的波长均大于第一预设波长;当非可见光像素为红外光像素时,滤光层还包括红外光滤光层;每个红外光像素对应一个红外光滤光层,红外光滤光层用于通过特定波长范围内的红外光;当非可见光像素为白色像素时,滤光层还包括全通滤光层或者双通滤光层;每个白色像素对应一个全通滤光层或者双通滤光层,全通滤光层用于通过全波段范围内的光,双通滤光层用于通过红色光、绿色光、蓝色光和特定波长范围内的红外光。该图像感光方法还可以包括:
步骤2301、自然界的原始光线通过滤光片得到第一光线。
滤光片用于滤除紫外光和远红外光，远红外光为波长较长的红外光，例如前述实施例中提到的波长大于第二预设波长的红外光可以称为远红外光。远红外光的波长大于后续红外光滤光层允许通过的特定波长范围内的红外光的波长。该滤光片可参考装置侧关于滤光片的说明，此处不再赘述。
步骤2302、第一光线通过红外光截止滤光层、红色滤光层和微镜头到达红色像素。
步骤2303、第一光线通过红外光截止滤光层、绿色滤光层和微镜头到达绿色像素。
步骤2304、第一光线通过红外光截止滤光层、蓝色滤光层和微镜头到达蓝色像素。
步骤2305、第一光线通过红外光滤光层到达红外光像素，或者，第一光线通过全通滤光层或者双通滤光层到达白色像素。
应当理解,步骤2302-2305的标号并不限定方法的执行顺序,步骤2302-2305通常可以是同步执行的,或者步骤与步骤之间也可以不是严格同步执行,而是彼此之间存在一些时间差,本申请对此不作限定。
红外光滤光层仅允许特定波长范围内的红外光通过,红色滤光层用于仅通过红色光和第一波长范围内的红外光,绿色滤光层用于仅通过绿色光和第二波长范围内的红外光,蓝色滤光层用于仅通过蓝色光和第三波长范围内的红外光,红外光截止滤光层截止的红外光包括:该第一波长范围内的红外光、该第二波长范围内的红外光以及该第三波长范围内的红外光。
由于红色滤光层、绿色滤光层和蓝色滤光层通过的红外光均在红外光截止滤光层所截止的红外光的波长范围内,因此红外光截止滤光层截止了进入R像素、G像素和B像素的红外光,使得R像素、G像素和B像素可以分别仅感光红色光、绿色光和蓝色光。
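上述滤光层组合对各像素最终可感光波段的影响，可以用如下示意模型概括，其中各波段端点均为假设的示例数值：

```python
def reaches_photodiode(pixel, wl_nm):
    """示意模型：R/G/B像素叠加红外光截止滤光层后仅感光对应色光，
    IR像素仅感光特定波段的红外光。波段端点为示例值。"""
    color_band = {"R": (600, 700), "G": (500, 600), "B": (430, 500)}
    if pixel in color_band:
        lo, hi = color_band[pixel]
        return lo <= wl_nm <= hi  # 红外截止滤光层已滤除彩色滤光层漏过的红外光
    if pixel == "IR":
        return 850 <= wl_nm <= 920  # 红外光滤光层的特定波长范围
    raise ValueError(pixel)
```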
应当理解,步骤2301为可选步骤,自然界的原始光线也可以不通过滤光片,而直接进入滤光层和微镜头。红外光截止滤光层可以在红色滤光层、绿色滤光层以及蓝色滤光层的上面;红色滤光层、绿色滤光层以及蓝色滤光层也可以在红外光截止滤光层的上面,本申请对此不作限定。
步骤2306、像素中的感光器件将进入像素的光线转换为电荷。
步骤2307、通过电荷读出模块将累积的电荷输出，得到感光结果。
在一种可能的实现方式中,该方法还包括:基于第一控制线控制大像素的曝光起始时间,大像素包括R像素、G像素和B像素;基于第二控制线控制小像素的曝光起始时间,小像素为IR像素或W像素。
大像素和小像素可以独立曝光,提升了图像传感器的感光效果。
在一种可能的实现方式中,该方法还包括:基于第一控制线和第二控制线控制大像素和小像素的曝光时间满足预设比例。
在一种可能的实现方式中,该方法还包括:基于第一控制线控制R像素的曝光起始时间;基于第二控制线控制G像素的曝光起始时间;基于第三控制线控制B像素的曝光起始时间;基于第四控制线控制IR像素的曝光起始时间。
在该方法中,四种像素可以分别独立曝光,提升了图像传感器的感光效果。
在一种可能的实现方式中,该方法还包括:控制R像素、G像素和B像素的曝光时间满足预设比例。
在一种可能的实现方式中,该图像传感器中的每个像素耦合至各自的行坐标控制线和列坐标控制线,且该每个像素对应曝光开始控制线的一个支路,该方法还包括:当目标像素的该行坐标控制线和该列坐标控制线输出的控制信号均为有效电平时,该目标像素对应的该曝光开始控制线的支路输出控制信号,并基于该控制信号控制该目标像素的曝光起始时间,该目标像素为该像素阵列中的任一个像素。
在该方法中,每个像素可以单独控制曝光时间。
在一种可能的实现方式中,该方法还包括:基于曝光结束控制线控制像素阵列中的所有像素的曝光结束时间。
图24为本申请提供的独立控制曝光时间的方法实施例的流程图,如图24所示,该方法应用于包括至少两种类型的像素的传感器,至少两种类型的像素包括第一种类型的像素和第二种类型的像素。该方法包括:
步骤2401、基于第一控制单元控制第一种类型的像素的曝光起始时间。
步骤2402、基于第二控制单元控制第二种类型的像素的曝光起始时间。
示例性的,该传感器可以为RGBIR传感器,对应的,第一种类型的像素为可见光像素对应的大像素,可见光像素包括R像素、G像素和B像素,第二种类型的像素为IR像素对应的小像素。该图像传感器可以为RGBW传感器,对应的,第一种类型的像素为可见光像素对应的大像素,可见光像素包括R像素、G像素和B像素,第二种类型的像素为W像素对应的小像素。该第一控制单元和第二控制单元彼此独立,因此第一类型像素和第二类型像素的曝光起始时间是独立控制的。应当理解,第一控制单元和第二控制单元可以由硬件逻辑电路实现,也可以由运行在处理器上的软件模块实现。
在一种可能的实现方式中,至少两种类型的像素还包括:第三种类型的像素;该方法还包括:基于第三控制单元控制第三种类型的像素的曝光起始时间。
示例性的,传感器为RCCB传感器,第一种类型的像素为R像素,第二种类型的像素为B像素,第三种类型的像素为C像素;该方法具体包括:基于第一控制单元控制R像素的曝光起始时间;基于第二控制单元控制B像素的曝光起始时间;基于第三控制单元控制C像素的曝光起始时间。
在一种可能的实现方式中，该至少两种类型的像素还包括：第三种类型的像素和第四种类型的像素，该方法还包括：
基于第三控制单元控制该第三种类型的像素的曝光起始时间;
基于第四控制单元控制该第四种类型的像素的曝光起始时间。
示例性的,该传感器为RGBIR传感器,第一种类型的像素为R像素,第二种类型的 像素为G像素,第三种类型的像素为B像素,第四种类型的像素为IR像素;该方法具体包括:基于该第一控制单元控制该R像素的曝光起始时间;基于该第二控制单元控制该G像素的曝光起始时间;基于该第三控制单元控制该B像素的曝光起始时间;基于该第四控制单元控制该IR像素的曝光起始时间;或者,该传感器为RGBW传感器,该第一种类型的像素为R像素,该第二种类型的像素为G像素,该第三种类型的像素为B像素,该第四种类型的像素为W像素;该方法具体包括:基于该第一控制单元控制该R像素的曝光起始时间;基于该第二控制单元控制该G像素的曝光起始时间;基于该第三控制单元控制该B像素的曝光起始时间;基于该第四控制单元控制该W像素的曝光起始时间。
可选的,该方法还包括:基于曝光结束控制单元控制所有像素的曝光结束时间。
可选的,该方法还包括:控制该至少两种类型的像素中的每一种类型的像素的曝光时间满足预设比例。
示例性的,基于第一控制单元和第二控制单元控制大像素和小像素的曝光时间满足预设比例;或者基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R像素、G像素、B像素和IR像素的曝光时间满足预设比例。或者,基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R像素、G像素、B像素和W像素的曝光时间满足预设比例。
不同类型的像素的曝光起始时间分别独立控制,曝光结束时间统一控制,因此可以通过设置不同像素曝光开始的时间使得各像素之间的曝光时间满足预设比例。
可选的,该方法还包括:基于电荷转移控制单元将感光器件中累积的电荷转移到电荷读出模块中。
本申请还提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得计算机或处理器执行本申请提供的任一个独立曝光控制的方法中的部分或全部步骤。
本申请还提供一种包含指令的计算机程序产品,当其在计算机或处理器上运行时,使得计算机或处理器执行本申请提供的任一个独立曝光控制的方法中的部分或全部步骤。
在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。本申请公开的方法的步骤可以直接体现为硬件编码处理器执行完成,或者用编码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通 过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种图像传感器,其特征在于,包括:红色像素、绿色像素、蓝色像素和非可见光像素;其中,
    所述红色像素、所述绿色像素和所述蓝色像素为大像素,所述非可见光像素为小像素,所述大像素的感光面积大于所述小像素的感光面积;
    所述红色像素、所述绿色像素和所述蓝色像素的排列呈贝叶尔Bayer格式。
  2. 根据权利要求1所述的传感器,其特征在于,所述非可见光像素包括红外光像素或白色像素,所述白色像素用于感光白色光,所述白色光包括红色光、绿色光、蓝色光和红外光。
  3. 根据权利要求1或2所述的传感器,其特征在于,四个所述大像素包围一个所述小像素,且四个所述小像素包围一个所述大像素。
  4. 根据权利要求1-3中任一项所述的传感器,其特征在于,所述大像素和所述小像素的面积根据所述图像传感器的串扰精度设置。
  5. 根据权利要求1-4中任一项所述的传感器,其特征在于,所述大像素和所述小像素的形状为正多边形或圆形。
  6. 根据权利要求1-5中任一项所述的传感器，其特征在于，所述红色像素、所述绿色像素和所述蓝色像素均对应一个红外光截止滤光层，所述红外光截止滤光层用于截止波长大于第一预设波长的光信号，所述波长大于第一预设波长的光信号包括红外光。
  7. 根据权利要求1-6中任一项所述的传感器,其特征在于,还包括:滤光层,所述滤光层包括红色滤光层、绿色滤光层和蓝色滤光层;
    每个所述红色像素对应一个所述红色滤光层,所述红色滤光层用于通过红色光和第一波长范围内的红外光;
    每个所述绿色像素对应一个所述绿色滤光层,所述绿色滤光层用于通过绿色光和第二波长范围内的红外光;
    每个所述蓝色像素对应一个所述蓝色滤光层,所述蓝色滤光层用于通过蓝色光和第三波长范围内的红外光;所述第一波长范围内的红外光、所述第二波长范围内的红外光以及所述第三波长范围内的红外光的波长均大于第一预设波长;
    当所述非可见光像素为红外光像素时，所述滤光层还包括红外光滤光层；每个所述红外光像素对应一个所述红外光滤光层，所述红外光滤光层用于通过特定波长范围内的红外光；
    当所述非可见光像素为白色像素时,所述滤光层还包括全通滤光层或者双通滤光层;每个所述白色像素对应一个所述全通滤光层或者所述双通滤光层,所述全通滤光层用于通过全波段范围内的光,所述双通滤光层用于通过所述红色光、所述绿色光、所述蓝色光和所述特定波长范围内的红外光。
  8. 根据权利要求6或7所述的传感器,其特征在于,所述红外光截止滤光层和/或所述滤光层涂覆于对应像素的微镜头上。
  9. 根据权利要求1-8中任一项所述的传感器,其特征在于,还包括:
    逻辑控制电路,用于分别控制所述大像素和所述小像素的曝光时间。
  10. 根据权利要求9所述的传感器,其特征在于,所述逻辑控制电路包括:第一控制线和第二控制线;所述大像素耦合至所述第一控制线,所述小像素耦合至所述第二控制线;
    所述逻辑控制电路,具体用于基于所述第一控制线控制所述大像素的曝光起始时间;基于所述第二控制线控制所述小像素的曝光起始时间。
  11. 根据权利要求1-10中任一项所述的传感器,其特征在于,还包括:
    滤光片,用于滤除紫外光和波长大于第二预设波长的红外光,所述第二预设波长大于所述第一预设波长和特定波长范围内的任一个波长。
  12. 一种图像感光方法,其特征在于,所述方法应用于图像传感器,所述图像传感器包括:红色像素、绿色像素、蓝色像素和非可见光像素;其中,所述红色像素、所述绿色像素和所述蓝色像素为大像素,所述非可见光像素为小像素,所述大像素的感光面积大于所述小像素的感光面积;所述红色像素、所述绿色像素和所述蓝色像素的排列呈贝叶尔Bayer格式;
    所述方法包括:
    基于所述红色像素感光所述红色光;
    基于所述绿色像素感光所述绿色光;
    基于所述蓝色像素感光所述蓝色光;
    基于所述小像素感光红外光或白色光。
  13. 根据权利要求12所述的方法,其特征在于,所述非可见光像素包括红外光像素或白色像素,所述白色像素用于感光白色光,所述白色光包括红色光、绿色光、蓝色光和红外光;
    所述方法具体包括:
    基于所述红外光像素感光所述红外光,或者,基于所述白色像素感光所述白色光。
  14. 根据权利要求12或13所述的方法,其特征在于,四个所述大像素包围一个所述小像素,且四个所述小像素包围一个所述大像素。
  15. 根据权利要求12-14中任一项所述的方法,其特征在于,所述大像素和所述小像素的面积根据所述图像传感器的串扰精度设置。
  16. 根据权利要求12-15中任一项所述的方法,其特征在于,所述大像素和所述小像素的形状为正多边形或圆形。
  17. 根据权利要求12-16中任一项所述的方法,其特征在于,所述图像传感器还包括:红外截止滤光层;每个所述大像素对应一个所述红外截止滤光层,所述红外光截止滤光层用于截止波长大于第一预设波长的光信号,所述波长大于第一预设波长的光信号包括红外光;
    所述方法还包括:
    光线通过所述红外光截止滤光层到达所述大像素。
  18. 根据权利要求17所述的方法,其特征在于,所述图像传感器还包括:滤光层;所述滤光层包括红色滤光层、绿色滤光层和蓝色滤光层;每个所述红色像素对应一个所述红色滤光层,所述红色滤光层用于通过红色光和第一波长范围内的红外光;每个所述绿色像素对应一个所述绿色滤光层,所述绿色滤光层用于通过绿色光和第二波长范围内的红外光;每个所述蓝色像素对应一个所述蓝色滤光层,所述蓝色滤光层用于通过蓝色光和第三 波长范围内的红外光;所述第一波长范围内的红外光、所述第二波长范围内的红外光以及所述第三波长范围内的红外光的波长均大于第一预设波长;当所述非可见光像素为红外光像素时,所述滤光层还包括红外光滤光层;每个所述红外光像素对应一个所述红外光滤光层,所述红外光滤光层用于通过特定波长范围内的红外光;当所述非可见光像素为白色像素时,所述滤光层还包括全通滤光层或者双通滤光层;每个所述白色像素对应一个所述全通滤光层或者所述双通滤光层,所述全通滤光层用于通过全波段范围内的光,所述双通滤光层用于通过所述红色光、所述绿色光、所述蓝色光和所述特定波长范围内的红外光;
    所述方法还包括:
    所述光线通过所述红外光截止滤光层和所述红色滤光层到达所述红色像素;
    所述光线通过所述红外光截止滤光层和所述绿色滤光层到达所述绿色像素;
    所述光线通过所述红外光截止滤光层和所述蓝色滤光层到达所述蓝色像素;
    所述光线通过所述红外光滤光层到达所述红外光像素,或者所述光线通过所述全通滤光层或者所述双通滤光层到达所述白色像素。
  19. 根据权利要求12-18中任一项所述的方法,其特征在于,所述图像传感器还包括:逻辑控制电路;所述逻辑控制电路包括第一控制线和第二控制线;所述大像素耦合至所述第一控制线,所述小像素耦合至所述第二控制线;
    所述方法还包括:
    基于所述第一控制线控制所述大像素的曝光起始时间;
    基于所述第二控制线控制所述小像素的曝光起始时间。