WO2021179806A1 - Image acquisition method, imaging device, electronic device, and readable storage medium - Google Patents

Image acquisition method, imaging device, electronic device, and readable storage medium

Info

Publication number
WO2021179806A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
panchromatic
intermediate image
pixel
Prior art date
Application number
PCT/CN2021/073292
Other languages
English (en)
French (fr)
Inventor
杨鑫 (Yang Xin)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP21767801.0A (published as EP4113977A4)
Publication of WO2021179806A1
Priority to US17/940,780 (published as US20230017746A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/53 Control of the integration time
    • H04N25/533 Control of the integration time by using differing integration times for different sensor regions
    • H04N25/534 Control of the integration time by using differing integration times for different sensor regions depending on the spectral component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/53 Control of the integration time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H04N25/58 Control of the dynamic range involving two or more exposures
    • H04N25/587 Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
    • H04N25/589 Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields with different integration times, e.g. short and long exposures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/44 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array

Definitions

  • This application relates to the field of imaging technology, and in particular to an image acquisition method, imaging device, electronic equipment, and non-volatile computer-readable storage medium.
  • High dynamic range imaging is typically achieved by controlling the image sensor in the imaging device to perform long and short exposures and then fusing the images obtained from those exposures.
  • The fused image can better show the details of both the dark areas and the bright areas.
  • The embodiments of the present application provide an image acquisition method, an imaging device, an electronic device, and a non-volatile computer-readable storage medium.
  • The image acquisition method of the embodiment of the present application is used in an image sensor.
  • The image sensor includes a pixel array including a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels, and the color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels.
  • The pixel array includes a minimum repeating unit, each of the minimum repeating units includes a plurality of sub-units, and each of the sub-units includes a plurality of single-color photosensitive pixels and a plurality of panchromatic photosensitive pixels.
  • The image acquisition method includes: controlling the exposure of the pixel array, wherein, for a plurality of photosensitive pixels in the same subunit, at least one of the single-color photosensitive pixels is exposed at a first exposure time, at least one of the single-color photosensitive pixels is exposed at a second exposure time that is less than the first exposure time, and at least one of the panchromatic photosensitive pixels is exposed at a third exposure time that is less than the first exposure time; and interpolating a first color original image and a second color original image according to a first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
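The exposure scheme above can be sketched numerically. The following is a minimal simulation, assuming a 4x4 minimal repeating unit, illustrative exposure times, and a diagonal placement of the long- and short-exposed single-color pixels; the actual layouts and timings are defined by the patent's figures, so everything here is only an illustration:

```python
import numpy as np

# Illustrative exposure times: the second and third are both shorter
# than the first, as the method requires. The values are assumptions.
T1, T2, T3 = 1 / 30, 1 / 240, 1 / 120

def expose(scene, times):
    """Integrate scene radiance over a per-pixel exposure time,
    clipping at the sensor's full-well capacity (normalized to 1.0)."""
    return np.clip(scene * times, 0.0, 1.0)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 60.0, size=(4, 4))  # radiance, arbitrary units

# Per-pixel exposure-time map for one minimal repeating unit: the
# panchromatic pixels (everywhere else) use T3, one single-color pixel
# per 2x2 subunit uses T1, and one uses T2.
times = np.full((4, 4), T3)
times[0::2, 0::2] = T1  # long-exposed single-color pixels
times[1::2, 1::2] = T2  # short-exposed single-color pixels

raw = expose(scene, times)

# Reading out each exposure group separately yields the three
# "original images" that the method later interpolates and fuses.
first_color = raw[0::2, 0::2]          # first color original image
second_color = raw[1::2, 1::2]         # second color original image
first_panchromatic = raw[times == T3]  # first panchromatic original image
```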
  • The imaging device of the embodiment of the present application includes an image sensor and a processor.
  • The image sensor includes a pixel array.
  • The pixel array includes a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels, and the color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels.
  • The pixel array includes a minimum repeating unit, each of the minimum repeating units includes a plurality of sub-units, and each of the sub-units includes a plurality of single-color photosensitive pixels and a plurality of panchromatic photosensitive pixels.
  • The pixel array in the image sensor is exposed, wherein, for a plurality of photosensitive pixels in the same subunit, at least one of the single-color photosensitive pixels is exposed at a first exposure time, at least one of the single-color photosensitive pixels is exposed at a second exposure time that is less than the first exposure time, and at least one of the panchromatic photosensitive pixels is exposed at a third exposure time that is less than the first exposure time.
  • The processor is configured to interpolate the first color original image and the second color original image according to the first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • The electronic device of the embodiment of the present application includes a housing and an imaging device.
  • The imaging device is combined with the housing.
  • The imaging device includes an image sensor and a processor.
  • The image sensor includes a pixel array.
  • The pixel array includes a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels, and the color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels.
  • The pixel array includes a minimum repeating unit, each of the minimum repeating units includes a plurality of sub-units, and each of the sub-units includes a plurality of single-color photosensitive pixels and a plurality of panchromatic photosensitive pixels.
  • The pixel array in the image sensor is exposed, wherein, for a plurality of photosensitive pixels in the same subunit, at least one of the single-color photosensitive pixels is exposed at a first exposure time, at least one of the single-color photosensitive pixels is exposed at a second exposure time that is less than the first exposure time, and at least one of the panchromatic photosensitive pixels is exposed at a third exposure time that is less than the first exposure time.
  • The processor is configured to interpolate the first color original image and the second color original image according to the first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • The non-volatile computer-readable storage medium contains a computer program that, when executed by a processor, causes the processor to execute an image acquisition method including the following steps: controlling the exposure of the pixel array, wherein, for a plurality of photosensitive pixels in the same subunit, at least one of the single-color photosensitive pixels is exposed at a first exposure time, at least one of the single-color photosensitive pixels is exposed at a second exposure time that is less than the first exposure time, and at least one of the panchromatic photosensitive pixels is exposed at a third exposure time that is less than the first exposure time; and interpolating the first color original image and the second color original image according to the first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • FIG. 1 is a schematic diagram of an imaging device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a pixel array according to an embodiment of the present application.
  • FIG. 3 is a schematic cross-sectional view of a photosensitive pixel according to an embodiment of the present application.
  • FIG. 4 is a pixel circuit diagram of a photosensitive pixel according to an embodiment of the present application.
  • FIGS. 5 to 10 are schematic diagrams of arrangements of the minimum repeating unit in the pixel array according to embodiments of the present application.
  • FIGS. 11 to 16 are schematic diagrams of the principle by which the processor in the imaging device processes the original images obtained by the image sensor according to some embodiments of the present application.
  • FIGS. 17 to 19 are schematic diagrams of partial circuit connections of pixel arrays in some embodiments of the present application.
  • FIG. 20 is a schematic diagram of the structure of an electronic device according to an embodiment of the present application.
  • FIG. 21 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
  • FIG. 22 is a schematic diagram of interaction between a non-volatile computer-readable storage medium and a processor in some embodiments of the present application.
  • The image acquisition method of the embodiment of the present application is used in an image sensor.
  • The image sensor includes a pixel array.
  • The pixel array includes a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels.
  • The color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels.
  • The pixel array includes a minimum repeating unit, each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color photosensitive pixels and a plurality of panchromatic photosensitive pixels.
  • The image acquisition method includes: controlling the exposure of the pixel array, wherein, for a plurality of photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed at a first exposure time, at least one single-color photosensitive pixel is exposed at a second exposure time that is less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed at a third exposure time that is less than the first exposure time; and interpolating the first color original image and the second color original image according to the first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, where the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • In some embodiments, all panchromatic photosensitive pixels in the same subunit are exposed at the third exposure time. Interpolating the first color original image and the second color original image according to the first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, includes: performing interpolation processing on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation processing on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; performing interpolation processing on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array; performing brightness alignment processing on the first color intermediate image and the second color intermediate image to obtain a brightness-aligned first color intermediate image; fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; and interpolating the color initial merged image according to the first panchromatic intermediate image to obtain the target image.
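A compressed numeric sketch of this interpolate-align-merge flow follows, using nearest-neighbour upsampling via `np.kron`, a global exposure-ratio brightness alignment, and a clipping-based merge rule. All three choices are simplifying assumptions; the patent's own alignment and fusion steps are described in the following paragraphs:

```python
import numpy as np

T1, T2 = 1 / 30, 1 / 240  # long and short exposure times (illustrative)

def upsample(img, factor=2):
    # Nearest-neighbour interpolation up to the pixel-array resolution;
    # a real pipeline would use edge-aware interpolation instead.
    return np.kron(img, np.ones((factor, factor)))

# Half-resolution color intermediate images and a full-resolution
# panchromatic intermediate image (toy values standing in for sensor data).
first_color = np.array([[0.9, 1.0], [0.2, 0.4]])        # long exposure, 1.0 = clipped
second_color = np.array([[0.12, 0.15], [0.025, 0.05]])  # short exposure
first_panchromatic = np.full((4, 4), 0.5)

# Brightness alignment: scale the short exposure by the exposure ratio
# so both images live on the long exposure's radiometric scale.
second_aligned = second_color * (T1 / T2)

# Merge: keep the long exposure where it is not clipped; use the
# aligned short exposure where it is (a minimal HDR merge rule).
clipped = first_color >= 1.0
color_merged = np.where(clipped, second_aligned, first_color)

# Bring the merged color image to full resolution and modulate it by
# the panchromatic image, which carries the fine luminance detail.
target = upsample(color_merged) * first_panchromatic
```

The clipped top-right pixel (1.0) is replaced by the aligned short-exposure value 0.15 × 8 = 1.2, recovering highlight information the long exposure lost.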
  • In some embodiments, part of the panchromatic photosensitive pixels in the same subunit are exposed at a fourth exposure time and the remaining panchromatic photosensitive pixels are exposed at the third exposure time, the fourth exposure time being less than or equal to the first exposure time and greater than the third exposure time. Interpolating the first color original image and the second color original image according to the first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, includes: performing interpolation processing on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation processing on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; and performing interpolation processing on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array, and performing interpolation processing on the second panchromatic original image to obtain a second panchromatic intermediate image with a resolution equal to that of the pixel array, wherein the second panchromatic original image is obtained from the second panchromatic information generated by the panchromatic photosensitive pixels exposed at the fourth exposure time.
  • When all the panchromatic photosensitive pixels in the same subunit are exposed at the third exposure time, the exposure of all the single-color photosensitive pixels exposed at the second exposure time takes place within the exposure period of all the single-color photosensitive pixels exposed at the first exposure time, and the exposure of all the panchromatic photosensitive pixels exposed at the third exposure time also takes place within that period. When part of the panchromatic photosensitive pixels in the same subunit are exposed at the fourth exposure time, the exposure of all the single-color photosensitive pixels exposed at the second exposure time takes place within the exposure period of all the single-color photosensitive pixels exposed at the first exposure time, the exposure of all the panchromatic photosensitive pixels exposed at the third exposure time takes place within that period, and the exposure of all the panchromatic photosensitive pixels exposed at the fourth exposure time also takes place within that period.
  • Performing brightness alignment processing on the first color intermediate image and the second color intermediate image to obtain the brightness-aligned first color intermediate image includes: identifying overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that overexposed image pixel; finding, within the predetermined area, intermediate image pixels whose pixel values are less than the first preset threshold; correcting the pixel value of the overexposed image pixel using the intermediate image pixels and the second color intermediate image; and updating the first color intermediate image with the corrected pixel values of the overexposed image pixels to obtain the brightness-aligned first color intermediate image.
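The per-pixel correction above can be sketched as follows. The ratio-based correction rule (scaling the second image's value by the local brightness ratio of well-exposed neighbours) is an assumption on my part; the bullet names the inputs but not the exact formula:

```python
import numpy as np

def brightness_align(first, second, threshold=0.95, radius=1):
    """Brightness-align `first` (long exposure) against `second`
    (short exposure): re-estimate each overexposed pixel of `first`
    from nearby well-exposed pixels and the corresponding values in
    `second`, following the steps described above."""
    aligned = first.astype(float).copy()
    h, w = first.shape
    for y, x in zip(*np.where(first > threshold)):  # overexposed pixels
        # Expand a predetermined area centered on the overexposed pixel.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch1 = first[y0:y1, x0:x1]
        patch2 = second[y0:y1, x0:x1]
        # Find intermediate pixels below the threshold in that area.
        ok = (patch1 <= threshold) & (patch2 > 0)
        if ok.any():
            # Correct the overexposed pixel using those pixels and the
            # second image: scale by the local long/short brightness ratio.
            ratio = patch1[ok].mean() / patch2[ok].mean()
            aligned[y, x] = second[y, x] * ratio
    return aligned
```

With `first = [[1.0, 0.4], [0.4, 0.4]]` and `second = [[0.2, 0.1], [0.1, 0.1]]`, the clipped top-left pixel is re-estimated as 0.2 × (0.4 / 0.1) = 0.8 while the well-exposed pixels are left untouched.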
  • Performing brightness alignment processing on the first panchromatic intermediate image and the second panchromatic intermediate image to obtain the brightness-aligned second panchromatic intermediate image includes: identifying overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that overexposed image pixel; finding, within the predetermined area, intermediate image pixels whose pixel values are less than the second preset threshold; correcting the pixel value of the overexposed image pixel using the intermediate image pixels and the first panchromatic intermediate image; and updating the second panchromatic intermediate image with the corrected pixel values of the overexposed image pixels to obtain the brightness-aligned second panchromatic intermediate image.
  • Fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image includes: performing motion detection on the brightness-aligned first color intermediate image; when there is no motion blur area in the brightness-aligned first color intermediate image, fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image; and when there is a motion blur area in the brightness-aligned first color intermediate image, fusing the area of the brightness-aligned first color intermediate image outside the motion blur area with the second color intermediate image to obtain the color initial merged image.
  • Fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image includes: performing motion detection on the brightness-aligned second panchromatic intermediate image; when there is no motion blur area in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when there is a motion blur area in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image with the area of the brightness-aligned second panchromatic intermediate image outside the motion blur area to obtain the panchromatic merged image.
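Both fusion rules (color and panchromatic) share the same shape and can be sketched together. The absolute-difference motion detector below is an illustrative stand-in, since the embodiments do not fix a particular detector, and the static-region fusion is a plain average:

```python
import numpy as np

def fuse_with_motion_mask(aligned_img, other_img, diff_thresh=0.1):
    """Fuse two brightness-aligned exposures: where they disagree by
    more than `diff_thresh` the scene likely moved between exposures,
    so that (motion-blurred) region of `aligned_img` is excluded and
    only `other_img` contributes there."""
    motion = np.abs(aligned_img - other_img) > diff_thresh  # motion detection
    fused = 0.5 * (aligned_img + other_img)  # simple fusion in static areas
    fused[motion] = other_img[motion]        # exclude the motion blur area
    return fused, motion
```

For `aligned_img = [[0.5, 0.9]]` and `other_img = [[0.5, 0.2]]`, the second pixel is flagged as motion and takes the other image's value 0.2, while the static first pixel is averaged normally.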
  • The imaging device of the embodiment of the present application includes an image sensor and a processor.
  • The image sensor includes a pixel array.
  • The pixel array includes a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels, and the color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels.
  • The pixel array includes a minimum repeating unit, each minimum repeating unit includes a plurality of sub-units, and each sub-unit includes a plurality of single-color photosensitive pixels and a plurality of panchromatic photosensitive pixels.
  • The pixel array in the image sensor is exposed, wherein, for multiple photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed at a first exposure time, at least one single-color photosensitive pixel is exposed at a second exposure time that is less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed at a third exposure time that is less than the first exposure time.
  • The processor is used to interpolate the first color original image and the second color original image according to the first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with the same resolution as the pixel array, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • In some embodiments, all panchromatic photosensitive pixels in the same subunit are exposed at the third exposure time, and the processor is further configured to perform the interpolation, brightness alignment, and fusion processing described above to obtain the target image.
  • In some embodiments, part of the panchromatic photosensitive pixels in the same subunit are exposed at a fourth exposure time and the remaining panchromatic photosensitive pixels are exposed at the third exposure time, the fourth exposure time being less than or equal to the first exposure time and greater than the third exposure time. The processor is also used to: perform interpolation processing on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and perform interpolation processing on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; perform interpolation processing on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array, and perform interpolation processing on the second panchromatic original image to obtain a second panchromatic intermediate image with a resolution equal to that of the pixel array, wherein the second panchromatic original image is obtained from the second panchromatic information generated by the panchromatic photosensitive pixels exposed at the fourth exposure time; and perform brightness alignment processing on the first color intermediate image and the second color intermediate image.
  • When all the panchromatic photosensitive pixels in the same subunit are exposed at the third exposure time, the exposure of all the single-color photosensitive pixels exposed at the second exposure time takes place within the exposure period of all the single-color photosensitive pixels exposed at the first exposure time, and the exposure of all the panchromatic photosensitive pixels exposed at the third exposure time also takes place within that period. When part of the panchromatic photosensitive pixels in the same subunit are exposed at the fourth exposure time, the exposure of all the single-color photosensitive pixels exposed at the second exposure time takes place within the exposure period of all the single-color photosensitive pixels exposed at the first exposure time, the exposure of all the panchromatic photosensitive pixels exposed at the third exposure time takes place within that period, and the exposure of all the panchromatic photosensitive pixels exposed at the fourth exposure time also takes place within that period.
  • The processor is further configured to: identify overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold; for each overexposed image pixel, expand a predetermined area centered on that overexposed image pixel; find, within the predetermined area, intermediate image pixels whose pixel values are less than the first preset threshold; correct the pixel values of the overexposed image pixels using the intermediate image pixels and the second color intermediate image; and update the first color intermediate image with the corrected pixel values of the overexposed image pixels to obtain the brightness-aligned first color intermediate image.
  • The processor is further configured to: identify overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expand a predetermined area centered on that overexposed image pixel; find, within the predetermined area, intermediate image pixels whose pixel values are less than the second preset threshold; correct the pixel values of the overexposed image pixels using the intermediate image pixels and the first panchromatic intermediate image; and update the second panchromatic intermediate image with the corrected pixel values of the overexposed image pixels to obtain the brightness-aligned second panchromatic intermediate image.
  • The processor is further used to: perform motion detection on the brightness-aligned first color intermediate image; when there is no motion blur area in the brightness-aligned first color intermediate image, fuse the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image; and when there is a motion blur area in the brightness-aligned first color intermediate image, fuse the area of the brightness-aligned first color intermediate image outside the motion blur area with the second color intermediate image to obtain the color initial merged image.
  • The processor is further configured to: perform motion detection on the brightness-aligned second panchromatic intermediate image; when there is no motion blur area in the brightness-aligned second panchromatic intermediate image, fuse the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when there is a motion blur area in the brightness-aligned second panchromatic intermediate image, fuse the first panchromatic intermediate image with the area of the brightness-aligned second panchromatic intermediate image outside the motion blur area to obtain the panchromatic merged image.
  • In some embodiments, the photosensitive pixels of the pixel array are arranged in a two-dimensional matrix, and for any two adjacent rows of photosensitive pixels, at least one row satisfies one of the following: (1) the control terminals of the exposure control circuits of the multiple single-color photosensitive pixels located in the same row are connected to a first exposure control line, the control terminals of the exposure control circuits of the multiple panchromatic photosensitive pixels are connected to a second exposure control line, and the control terminals of the reset circuits of the multiple single-color photosensitive pixels and the multiple panchromatic photosensitive pixels are connected to one reset line; or (2) the control terminals of the reset circuits of the multiple single-color photosensitive pixels located in the same row are connected to a first reset line, the control terminals of the reset circuits of the multiple panchromatic photosensitive pixels are connected to a second reset line, and the control terminals of the exposure control circuits of the multiple single-color photosensitive pixels and the multiple panchromatic photosensitive pixels are connected to one exposure control line; or (3) the control terminals of the exposure control circuits of the multiple single-color photosensitive pixels are connected to a first exposure control line, the control terminals of the exposure control circuits of the multiple panchromatic photosensitive pixels are connected to a second exposure control line, the control terminals of the reset circuits of the multiple single-color photosensitive pixels are connected to a first reset line, and the control terminals of the reset circuits of the multiple panchromatic photosensitive pixels are connected to a second reset line.
  • the arrangement of the smallest repeating unit is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
  • the electronic device of the embodiment of the present application includes a housing and the imaging device of any of the above embodiments.
  • the imaging device is combined with the housing.
• the non-volatile computer-readable storage medium of an embodiment of the present application contains a computer program that, when executed by a processor, causes the processor to execute the image acquisition method of any of the above-mentioned embodiments.
  • control unit in the image sensor can control multiple photoelectric conversion elements covered by the same filter to perform exposures of different durations, respectively, so as to obtain multiple frames of color original images with different exposure times.
  • the processor fuses multiple color original images to obtain a high dynamic range image.
• however, obtaining a high dynamic range image in this way reduces its resolution and affects the imaging clarity of the image sensor.
  • the imaging device 100 includes an image sensor 10 and a processor 20.
  • the image sensor 10 includes a pixel array 11.
  • the pixel array 11 includes a plurality of full-color photosensitive pixels W and a plurality of color photosensitive pixels.
  • the pixel array 11 includes a minimum repeating unit, and each minimum repeating unit includes a plurality of sub-units. Each subunit includes a plurality of single-color photosensitive pixels and a plurality of full-color photosensitive pixels W.
  • the pixel array 11 in the image sensor 10 is exposed to light.
  • At least one single-color photosensitive pixel is exposed with a first exposure time
  • at least one single-color photosensitive pixel is exposed with a second exposure time less than the first exposure time
• at least one full-color photosensitive pixel W is exposed at a third exposure time that is less than the first exposure time.
  • the processor 20 is electrically connected to the image sensor 10.
• the processor 20 is configured to interpolate the first color original image and the second color original image according to the first panchromatic original image, and fuse the interpolated image with the first panchromatic original image to obtain a target image with the same resolution as that of the pixel array.
  • the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed at the first exposure time
  • the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed at the second exposure time.
  • the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels exposed at the third exposure time.
  • the imaging device 100 of the embodiment of the present application controls the multiple photosensitive pixels 110 in each subunit of the pixel array 11 to be exposed with different exposure times to obtain a target image with a high dynamic range.
• interpolation processing is performed on the first color original image and the second color original image, so that the finally obtained target image can have the same resolution as the pixel array 11.
• since the interpolation of the first color original image and the second color original image is performed based on the information in the first panchromatic original image, the interpolation result is more accurate, and the color reproduction effect is better.
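As a rough illustration of panchromatic-guided interpolation, the sketch below upscales a low-resolution color plane and re-modulates it with the full-resolution panchromatic (luminance) plane. This is an assumed, simplified stand-in for the processing described here, not the patent's actual algorithm.

```python
import numpy as np

def guided_upsample(color_lowres, panchromatic, eps=1e-6):
    """Nearest-neighbor upscale of a color plane, then per-pixel scaling by
    the ratio of the panchromatic value to its local block average, so the
    high-frequency detail of the panchromatic image is carried over."""
    h, w = panchromatic.shape
    fh = h // color_lowres.shape[0]
    fw = w // color_lowres.shape[1]
    up = np.kron(color_lowres.astype(np.float64), np.ones((fh, fw)))
    pan = panchromatic.astype(np.float64)
    block_mean = pan.reshape(h // fh, fh, w // fw, fw).mean(axis=(1, 3))
    local_mean = np.kron(block_mean, np.ones((fh, fw)))
    return up * pan / (local_mean + eps)
```

Because the scaling ratio averages to one over each block, the upscaled color plane keeps its overall level while inheriting the panchromatic image's edges, which is why the guided result preserves resolution better than plain interpolation.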
  • FIG. 2 is a schematic diagram of the image sensor 10 in the embodiment of the present application.
  • the image sensor 10 includes a pixel array 11, a vertical driving unit 12, a control unit 13, a column processing unit 14 and a horizontal driving unit 15.
  • the image sensor 10 may adopt a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge-coupled device (CCD, Charge-coupled Device) photosensitive element.
  • the pixel array 11 includes a plurality of photosensitive pixels 110 (shown in FIG. 3) arranged two-dimensionally in an array (ie, arranged in a two-dimensional matrix), and each photosensitive pixel 110 includes a photoelectric conversion element 1111 (shown in FIG. 4) .
  • Each photosensitive pixel 110 converts light into electric charge according to the intensity of light incident thereon.
  • the vertical driving unit 12 includes a shift register and an address decoder.
  • the vertical drive unit 12 includes readout scanning and reset scanning functions.
  • the readout scan refers to sequentially scanning the unit photosensitive pixels 110 line by line, and reading signals from these unit photosensitive pixels 110 line by line.
  • the signal output by each photosensitive pixel 110 in the selected and scanned photosensitive pixel row is transmitted to the column processing unit 14.
  • the reset scan is used to reset the charge, and the photocharge of the photoelectric conversion element is discarded, so that the accumulation of new photocharge can be started.
  • the signal processing performed by the column processing unit 14 is correlated double sampling (CDS) processing.
  • the reset level and the signal level output from each photosensitive pixel 110 in the selected photosensitive pixel row are taken out, and the level difference is calculated.
  • the signals of the photosensitive pixels 110 in a row are obtained.
  • the column processing unit 14 may have an analog-to-digital (A/D) conversion function for converting analog pixel signals into a digital format.
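The correlated double sampling and A/D conversion steps above can be illustrated numerically. The voltage values and the 10-bit full scale below are assumptions chosen for the example, not sensor specifications.

```python
def cds_and_quantize(reset_level, signal_level, full_scale=1.0, bits=10):
    """CDS: take the difference between the reset level and the signal level
    (cancelling the pixel's fixed offset), then convert it to a digital code."""
    diff = reset_level - signal_level                  # CDS level difference
    diff = max(0.0, min(diff, full_scale))             # clamp to the input range
    return round(diff / full_scale * (2 ** bits - 1))  # A/D conversion
```

For instance, a pixel whose output swings from a 1.0 reset level down to a 0.5 signal level yields a mid-scale digital code.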
  • the horizontal driving unit 15 includes a shift register and an address decoder.
  • the horizontal driving unit 15 sequentially scans the pixel array 11 column by column. Through the selection scanning operation performed by the horizontal driving unit 15, each photosensitive pixel column is sequentially processed by the column processing unit 14, and is sequentially output.
  • control unit 13 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 12, the column processing unit 14 and the horizontal driving unit 15 to work together.
  • FIG. 3 is a schematic diagram of a photosensitive pixel 110 in an embodiment of the present application.
  • the photosensitive pixel 110 includes a pixel circuit 111, a filter 112, and a micro lens 113. Along the light-receiving direction of the photosensitive pixel 110, the microlens 113, the filter 112, and the pixel circuit 111 are arranged in sequence.
  • the microlens 113 is used for condensing light
  • the filter 112 is used for passing light of a certain wavelength band and filtering out the light of other wavelength bands.
  • the pixel circuit 111 is used to convert the received light into electrical signals, and provide the generated electrical signals to the column processing unit 14 shown in FIG. 2.
  • FIG. 4 is a schematic diagram of a pixel circuit 111 of a photosensitive pixel 110 in an embodiment of the present application.
• the pixel circuit 111 in FIG. 4 can be applied to each photosensitive pixel 110 (shown in FIG. 3) in the pixel array 11 shown in FIG. 2.
  • the working principle of the pixel circuit 111 will be described below with reference to FIGS. 2 to 4.
• the pixel circuit 111 includes a photoelectric conversion element 1111 (for example, a photodiode), an exposure control circuit (for example, a transfer transistor 1112), a reset circuit (for example, a reset transistor 1113), an amplification circuit (for example, an amplification transistor 1114), and a selection circuit (for example, a selection transistor 1115).
  • the transfer transistor 1112, the reset transistor 1113, the amplifying transistor 1114, and the selection transistor 1115 are, for example, MOS transistors, but are not limited thereto.
  • the photoelectric conversion element 1111 includes a photodiode, and the anode of the photodiode is connected to the ground, for example.
  • the photodiode converts the received light into electric charge.
  • the cathode of the photodiode is connected to the floating diffusion unit FD via an exposure control circuit (for example, a transfer transistor 1112).
  • the floating diffusion unit FD is connected to the gate of the amplification transistor 1114 and the source of the reset transistor 1113.
  • the exposure control circuit is a transfer transistor 1112, and the control terminal TG of the exposure control circuit is the gate of the transfer transistor 1112.
• when a pulse of an active level (for example, a VPIX level) is transmitted to the control terminal TG via an exposure control line, the transfer transistor 1112 is turned on.
  • the transfer transistor 1112 transfers the charge photoelectrically converted by the photodiode to the floating diffusion unit FD.
  • the drain of the reset transistor 1113 is connected to the pixel power supply VPIX.
• the source of the reset transistor 1113 is connected to the floating diffusion unit FD.
• when a pulse of an effective reset level is transmitted to the gate of the reset transistor 1113 via a reset line (for example, RX shown in FIG. 17), the reset transistor 1113 is turned on.
• the reset transistor 1113 resets the floating diffusion unit FD to the pixel power supply VPIX.
  • the gate of the amplifying transistor 1114 is connected to the floating diffusion unit FD.
  • the drain of the amplifying transistor 1114 is connected to the pixel power supply VPIX.
• in the amplification circuit, after the floating diffusion unit FD is reset by the reset transistor 1113, the amplifying transistor 1114 outputs the reset level through the output terminal OUT via the selection transistor 1115; after the charge of the photodiode has been transferred by the transfer transistor 1112, the amplifying transistor 1114 outputs a signal level through the output terminal OUT via the selection transistor 1115.
  • the drain of the selection transistor 1115 is connected to the source of the amplification transistor 1114.
  • the source of the selection transistor 1115 is connected to the column processing unit 14 in FIG. 2 through the output terminal OUT.
• when a pulse of an effective level is transmitted to the gate of the selection transistor 1115 via a selection line, the selection transistor 1115 is turned on.
  • the signal output by the amplifying transistor 1114 is transmitted to the column processing unit 14 through the selection transistor 1115.
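Putting the pieces together, the readout sequence of this pixel circuit can be summarized in a small behavioral sketch. The numeric levels are arbitrary illustration values; this is not a circuit-accurate model.

```python
class FourTransistorPixel:
    """Behavioral sketch of the pixel circuit 111: reset the floating
    diffusion FD, read the reset level, transfer the photodiode charge,
    then read the signal level (CDS takes the difference)."""
    VPIX = 1.0  # assumed pixel supply level, arbitrary units

    def __init__(self):
        self.pd_charge = 0.0  # charge accumulated on the photodiode
        self.fd = 0.0         # floating diffusion level

    def expose(self, light):
        self.pd_charge += light      # photodiode converts light to charge

    def reset(self):
        self.fd = self.VPIX          # reset transistor: FD -> VPIX

    def transfer(self):
        self.fd -= self.pd_charge    # transfer transistor: PD charge lowers FD
        self.pd_charge = 0.0

    def read(self):
        return self.fd               # amplifier + selection: level on OUT

pixel = FourTransistorPixel()
pixel.expose(0.3)
pixel.reset()
reset_level = pixel.read()
pixel.transfer()
signal_level = pixel.read()
cds_value = reset_level - signal_level  # proportional to the incident light
```

The two reads bracket the charge transfer, which is exactly the ordering the column processing unit 14 relies on for correlated double sampling.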
  • the pixel structure of the pixel circuit 111 in the embodiment of the present application is not limited to the structure shown in FIG. 4.
  • the pixel circuit 111 may also have a three-transistor pixel structure, in which the functions of the amplifying transistor 1114 and the selecting transistor 1115 are performed by one transistor.
  • the exposure control circuit is not limited to the way of a single transfer transistor 1112, and other electronic devices or structures with the function of controlling the conduction of the control terminal can be used as the exposure control circuit in the embodiment of the present application.
• however, implementing the exposure control circuit with a single transfer transistor 1112 is simple, low in cost, and easy to control.
• FIGS. 5 to 10 are schematic diagrams of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the pixel array 11 (shown in FIG. 2) according to some embodiments of the present application.
  • the photosensitive pixels 110 include two types, one is a full-color photosensitive pixel W, and the other is a color photosensitive pixel.
• FIGS. 5 to 10 only show the arrangement of a plurality of photosensitive pixels 110 in a minimum repeating unit. The smallest repeating unit shown in FIGS. 5 to 10 is copied multiple times in rows and columns to form the pixel array 11. Each minimum repeating unit is composed of multiple full-color photosensitive pixels W and multiple color photosensitive pixels. Each minimum repeating unit includes multiple subunits.
  • Each subunit includes a plurality of single-color photosensitive pixels and a plurality of full-color photosensitive pixels W.
• a color photosensitive pixel refers to a photosensitive pixel that can receive light of a particular color channel.
• the multiple color photosensitive pixels include multiple types of single-color photosensitive pixels, and different types of single-color photosensitive pixels receive light of different color channels. It should be noted that a single-color photosensitive pixel may receive light of only a single color channel, or may receive light of two or more color channels, which is not limited here.
  • the full-color photosensitive pixel W and the color photosensitive pixel in each sub-unit are alternately arranged.
• multiple photosensitive pixels 110 in the same row are photosensitive pixels 110 of the same category; or, multiple photosensitive pixels 110 in the same column are photosensitive pixels 110 of the same category.
  • FIG. 5 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the smallest repeating unit of an embodiment of the application.
• the smallest repeating unit is 16 photosensitive pixels 110 in 4 rows and 4 columns, and the subunits are 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
  • full-color photosensitive pixels W and single-color photosensitive pixels are alternately arranged.
  • the categories of subunits include three categories.
  • the first type subunit UA includes a plurality of full-color photosensitive pixels W and a plurality of first color photosensitive pixels A
  • the second type of subunit UB includes a plurality of panchromatic photosensitive pixels W and a plurality of second-color photosensitive pixels B
  • the third type of subunit UC includes a plurality of full-color photosensitive pixels W and a plurality of third-color photosensitive pixels C.
  • Each minimum repeating unit includes four subunits, which are one subunit of the first type UA, two subunits of the second type UB, and one subunit of the third type UC.
  • a first type subunit UA and a third type subunit UC are arranged in the first diagonal direction D1 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 5), and two second type subunits UB are arranged In the second diagonal direction D2 (for example, the direction where the upper right corner and the lower left corner are connected in FIG. 5).
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • first diagonal direction D1 may also be a direction connecting the upper right corner and the lower left corner
  • second diagonal direction D2 may also be a direction connecting the upper left corner and the lower right corner
• the "direction" here is not a single pointing direction, but can be understood as a "straight line" indicating the arrangement, with bidirectional pointing at both ends of the line.
  • the explanation of the first diagonal direction D1 and the second diagonal direction D2 in FIGS. 6 to 10 is the same as here.
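Since the concrete W/A/B/C matrices are not reproduced in this text, the sketch below constructs one plausible minimal repeating unit consistent with the description of FIG. 5 (W alternating with a single color inside each 2x2 subunit; UA and UC on the first diagonal D1, the two UB subunits on the second diagonal D2). The exact alternation order is an assumption.

```python
import numpy as np

def subunit(color):
    """A 2x2 subunit: full-color pixels W alternating with one single color."""
    return np.array([['W', color], [color, 'W']])

def minimal_repeating_unit():
    """4x4 minimal repeating unit: UA top-left and UC bottom-right on the
    first diagonal D1, the two UB subunits on the second diagonal D2."""
    top = np.hstack([subunit('A'), subunit('B')])
    bottom = np.hstack([subunit('B'), subunit('C')])
    return np.vstack([top, bottom])

def tile_pixel_array(unit_rows, unit_cols):
    """Copy the minimal repeating unit in rows and columns to form the array."""
    return np.tile(minimal_repeating_unit(), (unit_rows, unit_cols))
```

Tiling this unit reproduces the half-W, quarter-B, eighth-A, eighth-C pixel proportions that the described layout implies.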
  • FIG. 6 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimum repeating unit according to another embodiment of the application.
  • the smallest repeating unit is 36 photosensitive pixels 110 in 6 rows and 6 columns, and the sub-units are 9 photosensitive pixels 110 in 3 rows and 3 columns.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
  • full-color photosensitive pixels W and single-color photosensitive pixels are alternately arranged.
  • the categories of subunits include three categories.
  • the first type subunit UA includes a plurality of full-color photosensitive pixels W and a plurality of first color photosensitive pixels A
  • the second type of subunit UB includes a plurality of panchromatic photosensitive pixels W and a plurality of second-color photosensitive pixels B
  • the third type of subunit UC includes a plurality of full-color photosensitive pixels W and a plurality of third-color photosensitive pixels C.
  • Each minimum repeating unit includes four subunits, which are one subunit of the first type UA, two subunits of the second type UB, and one subunit of the third type UC.
  • one first type subunit UA and one third type subunit UC are arranged in the first diagonal direction D1
  • two second type subunits UB are arranged in the second diagonal direction D2.
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • FIG. 7 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the smallest repeating unit according to another embodiment of the application.
• the minimum repeating unit is 64 photosensitive pixels 110 in 8 rows and 8 columns, and the subunits are 16 photosensitive pixels 110 in 4 rows and 4 columns.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
  • full-color photosensitive pixels W and single-color photosensitive pixels are alternately arranged.
  • the categories of subunits include three categories.
  • the first type subunit UA includes a plurality of full-color photosensitive pixels W and a plurality of first color photosensitive pixels A
  • the second type of subunit UB includes a plurality of panchromatic photosensitive pixels W and a plurality of second-color photosensitive pixels B
  • the third type of subunit UC includes a plurality of full-color photosensitive pixels W and a plurality of third-color photosensitive pixels C.
  • Each minimum repeating unit includes four subunits, which are one subunit of the first type UA, two subunits of the second type UB, and one subunit of the third type UC.
  • one first type subunit UA and one third type subunit UC are arranged in the first diagonal direction D1
  • two second type subunits UB are arranged in the second diagonal direction D2.
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • FIG. 8 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the smallest repeating unit according to another embodiment of the application.
  • the smallest repeating unit is 4 rows, 4 columns and 16 photosensitive pixels 110, and the sub-units are 2 rows, 2 columns and 4 photosensitive pixels 110.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
• the arrangement of the photosensitive pixels 110 in the smallest repeating unit shown in FIG. 8 is roughly the same as that shown in FIG. 5, except that the alternating order of full-color photosensitive pixels W and single-color photosensitive pixels in the second type subunit UB in the lower left corner of FIG. 8 differs from that in the second type subunit UB in the lower left corner of FIG. 5, and the alternating order of full-color photosensitive pixels W and single-color photosensitive pixels in the third type subunit UC in the lower right corner of FIG. 8 differs from that in the third type subunit UC in the lower right corner of FIG. 5.
• specifically, in the second type subunit UB in the lower left corner of FIG. 5, the photosensitive pixels 110 in the first row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., second color photosensitive pixel B), and the photosensitive pixels 110 in the second row alternate as single-color photosensitive pixel (i.e., second color photosensitive pixel B), full-color photosensitive pixel W; whereas in the second type subunit UB in the lower left corner of FIG. 8, the photosensitive pixels 110 in the first row alternate as single-color photosensitive pixel (i.e., second color photosensitive pixel B), full-color photosensitive pixel W, and the photosensitive pixels 110 in the second row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., second color photosensitive pixel B).
• similarly, in the third type subunit UC in the lower right corner of FIG. 5, the photosensitive pixels 110 in the first row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., third color photosensitive pixel C), and the photosensitive pixels 110 in the second row alternate as single-color photosensitive pixel (i.e., third color photosensitive pixel C), full-color photosensitive pixel W; whereas in the third type subunit UC in the lower right corner of FIG. 8, the photosensitive pixels 110 in the first row alternate as single-color photosensitive pixel (i.e., third color photosensitive pixel C), full-color photosensitive pixel W, and the photosensitive pixels 110 in the second row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., third color photosensitive pixel C).
• in addition, the alternating order in different subunits of the same minimum repeating unit need not be consistent. For example, in the first type subunit UA shown in FIG. 8, the photosensitive pixels 110 in the first row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., first color photosensitive pixel A), and the photosensitive pixels 110 in the second row alternate as single-color photosensitive pixel (i.e., first color photosensitive pixel A), full-color photosensitive pixel W; whereas in the third type subunit UC shown in FIG. 8, the photosensitive pixels 110 in the first row alternate as single-color photosensitive pixel (i.e., third color photosensitive pixel C), full-color photosensitive pixel W, and the photosensitive pixels 110 in the second row alternate as full-color photosensitive pixel W, single-color photosensitive pixel (i.e., third color photosensitive pixel C).
• that is to say, in the same minimum repeating unit, the alternating order of full-color photosensitive pixels W and color photosensitive pixels in different subunits may be consistent (as shown in FIG. 5) or inconsistent (as shown in FIG. 8).
  • FIG. 9 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the smallest repeating unit according to another embodiment of the application.
  • the smallest repeating unit is 4 rows, 4 columns and 16 photosensitive pixels 110, and the sub-units are 2 rows, 2 columns and 4 photosensitive pixels 110.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
• in the smallest repeating unit shown in FIG. 9, multiple photosensitive pixels 110 in the same row are photosensitive pixels 110 of the same category.
  • the photosensitive pixels 110 of the same category include: (1) all full-color photosensitive pixels W; (2) all first-color photosensitive pixels A; (3) all second-color photosensitive pixels B; (4) all The third color photosensitive pixel C.
  • the categories of subunits include three categories.
  • the first type subunit UA includes a plurality of full-color photosensitive pixels W and a plurality of first color photosensitive pixels A
  • the second type of subunit UB includes a plurality of panchromatic photosensitive pixels W and a plurality of second-color photosensitive pixels B
  • the third type of subunit UC includes a plurality of full-color photosensitive pixels W and a plurality of third-color photosensitive pixels C.
  • Each minimum repeating unit includes four subunits, which are one subunit of the first type UA, two subunits of the second type UB, and one subunit of the third type UC.
  • one first type subunit UA and one third type subunit UC are arranged in the first diagonal direction D1
  • two second type subunits UB are arranged in the second diagonal direction D2.
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
  • FIG. 10 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 3) in the smallest repeating unit according to another embodiment of the application.
  • the smallest repeating unit is 4 rows, 4 columns and 16 photosensitive pixels 110, and the sub-units are 2 rows, 2 columns and 4 photosensitive pixels 110.
  • the arrangement method is:
  • W represents the full-color photosensitive pixel
  • A represents the first color photosensitive pixel among the multiple color photosensitive pixels
  • B represents the second color photosensitive pixel among the multiple color photosensitive pixels
• C represents the third color photosensitive pixel among the multiple color photosensitive pixels.
  • a plurality of photosensitive pixels 110 in the same column are photosensitive pixels 110 of the same category.
  • the photosensitive pixels 110 of the same category include: (1) all full-color photosensitive pixels W; (2) all first-color photosensitive pixels A; (3) all second-color photosensitive pixels B; (4) all The third color photosensitive pixel C.
  • the categories of subunits include three categories.
  • the first type subunit UA includes a plurality of full-color photosensitive pixels W and a plurality of first color photosensitive pixels A
  • the second type of subunit UB includes a plurality of panchromatic photosensitive pixels W and a plurality of second-color photosensitive pixels B
  • the third type of subunit UC includes a plurality of full-color photosensitive pixels W and a plurality of third-color photosensitive pixels C.
  • Each minimum repeating unit includes four subunits, which are one subunit of the first type UA, two subunits of the second type UB, and one subunit of the third type UC.
  • one first type subunit UA and one third type subunit UC are arranged in the first diagonal direction D1
  • two second type subunits UB are arranged in the second diagonal direction D2.
  • the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal line and the second diagonal line are perpendicular.
• in other embodiments, multiple photosensitive pixels 110 in the same row in some subunits may be photosensitive pixels 110 of the same category, while multiple photosensitive pixels 110 in the same column in the remaining subunits are photosensitive pixels 110 of the same category.
• the first color photosensitive pixel A may be a red photosensitive pixel R, the second color photosensitive pixel B may be a green photosensitive pixel G, and the third color photosensitive pixel C may be a blue photosensitive pixel Bu.
• alternatively, the first color photosensitive pixel A may be a red photosensitive pixel R, the second color photosensitive pixel B may be a yellow photosensitive pixel Y, and the third color photosensitive pixel C may be a blue photosensitive pixel Bu.
• alternatively, the first color photosensitive pixel A may be a magenta photosensitive pixel M, the second color photosensitive pixel B may be a cyan photosensitive pixel Cy, and the third color photosensitive pixel C may be a yellow photosensitive pixel Y.
  • the response band of the full-color photosensitive pixel W may be the visible light band (for example, 400 nm-760 nm).
  • the full-color photosensitive pixel W is provided with an infrared filter to filter out infrared light.
• in other embodiments, the response bands of the full-color photosensitive pixel W are the visible and near-infrared bands (for example, 400 nm-1000 nm), matching the response band of the photoelectric conversion element 1111 (shown in FIG. 4) in the image sensor 10 (shown in FIG. 1).
• in this case, the full-color photosensitive pixel W may be provided with no filter, or with a filter that passes light of all wavelength bands.
• the response band of the full-color photosensitive pixel W is then determined by the response band of the photoelectric conversion element 1111; that is, the two match.
  • the embodiments of the present application include, but are not limited to, the above-mentioned waveband range.
  • the control unit 13 controls the pixel array 11 to expose.
  • at least one single-color photosensitive pixel is exposed with a first exposure time
  • at least one single-color photosensitive pixel is exposed with a second exposure time less than the first exposure time
  • at least one full-color photosensitive pixel W is exposed at a third exposure time that is less than the first exposure time.
  • the multiple single-color photosensitive pixels exposed at the first exposure time in the pixel array 11 can generate multiple first color information
  • the multiple single-color photosensitive pixels exposed at the second exposure time can generate multiple second color information.
  • a plurality of panchromatic photosensitive pixels W (shown in FIG. 5) exposed at the third exposure time can generate a plurality of first panchromatic information.
  • a plurality of first color information may form a first color original image.
  • a plurality of second color information may form a second color original image, and a plurality of first panchromatic information may form a first panchromatic original image.
  • the processor 20 in the imaging device 100 may interpolate the first color original image and the second color original image according to the first panchromatic original image, and fuse the interpolated images with the first panchromatic original image to obtain a target image.
  • the target image has the same resolution as the pixel array 11.
  • all the full-color photosensitive pixels W in the pixel array 11 are exposed at the third exposure time.
  • in each subunit, one single-color photosensitive pixel is exposed for the first exposure time (for example, the long exposure time L shown in FIG. 11), one single-color photosensitive pixel is exposed for the second exposure time (for example, the short exposure time S shown in FIG. 11), and the two full-color photosensitive pixels W are both exposed for the third exposure time (for example, the short exposure time S shown in FIG. 11).
  • the exposure process of the pixel array 11 may be: (1) the photosensitive pixels 110 exposed at the first exposure time, the photosensitive pixels 110 exposed at the second exposure time, and the photosensitive pixels 110 exposed at the third exposure time are sequentially exposed (the exposure order of the three is not limited), and the exposure times of the three do not overlap; (2) the photosensitive pixels 110 exposed at the first exposure time, the photosensitive pixels 110 exposed at the second exposure time, and the photosensitive pixels 110 exposed at the third exposure time are sequentially exposed (the exposure order of the three is not limited), and the exposure times of the three partially overlap; (3) the exposure times of all the photosensitive pixels 110 exposed with shorter exposure times fall within the exposure time of the photosensitive pixels 110 exposed with the longest exposure time.
  • for example, the exposure times of all the single-color photosensitive pixels exposed at the second exposure time and the exposure times of all the full-color photosensitive pixels W exposed at the third exposure time are all within the exposure time of all the single-color photosensitive pixels exposed at the first exposure time.
  • the imaging device 100 adopts exposure method (3), which can shorten the overall exposure time required by the pixel array 11 and is beneficial to increasing the frame rate of the image.
  • the image sensor 10 can output three original images, which are: (1) the first color original image, composed of the first color information generated by the multiple single-color photosensitive pixels exposed with the long exposure time L; (2) the second color original image, composed of the second color information generated by the multiple single-color photosensitive pixels exposed with the short exposure time S; (3) the first panchromatic original image, composed of the first panchromatic information generated by the multiple panchromatic photosensitive pixels W exposed with the short exposure time S.
  • after the image sensor 10 obtains the first color original image, the second color original image, and the first panchromatic original image, it transmits these three original images to the processor 20, so that the processor 20 performs subsequent processing on them. Specifically, the processor 20 may perform interpolation processing on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array 11, and perform interpolation processing on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array 11.
  • the processor 20 performing interpolation processing on the first color original image refers to complementing the value of the color channel lacking in each image pixel in the first color original image, so that each image pixel in the interpolated first color intermediate image has the values of all color channels.
  • take the image pixel in the upper left corner of the first color original image shown in FIG. 12 as an example: this image pixel has the value of the first color channel (i.e., A), but lacks the value of the second color channel (i.e., B) and the value of the third color channel (i.e., C).
  • the processor 20 can calculate the value of the second color channel and the value of the third color channel of the image pixel through interpolation processing, and combine the value of the first color channel, the value of the second color channel, and the value of the third color channel.
  • the fusion is performed to obtain the value of the image pixel in the upper left corner of the first color intermediate image.
  • the value of the image pixel is composed of the values of the three color channels, that is, A+B+C.
  • the processor 20 performing interpolation processing on the second color original image refers to complementing the value of the color channel lacking in each image pixel in the second color original image, so that each image pixel in the interpolated second color intermediate image has the values of all color channels. It should be noted that A+B+C shown in FIG. 12 only means that the value of each image pixel is composed of the values of three color channels, and does not mean that the values of the three color channels are directly added.
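  • the channel-completion step described above can be sketched as follows. This is a minimal illustration only: the patent excerpt does not fix a specific interpolation kernel, so an 8-neighborhood average is assumed here, and the function name `complete_channel` is our own.

```python
def complete_channel(raw, mask):
    """raw: 2-D list of samples for one color channel; mask[y][x] is True where
    the photosensitive pixel actually sampled this channel. Each missing entry
    is filled with the mean of the 8-neighborhood samples that exist."""
    h, w = len(raw), len(raw[0])
    out = [row[:] for row in raw]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue  # this image pixel already has the channel value
            samples = [raw[yy][xx]
                       for yy in range(max(0, y - 1), min(h, y + 2))
                       for xx in range(max(0, x - 1), min(w, x + 2))
                       if mask[yy][xx]]
            out[y][x] = sum(samples) / len(samples) if samples else 0.0
    return out
```

  • running this once per missing channel (B and C for the upper-left pixel in the example above) gives every image pixel the values of all three color channels, i.e., the A+B+C form of FIG. 12.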
  • the processor 20 may also perform interpolation processing on the first full-color original image to obtain a first full-color intermediate image with a resolution equal to that of the pixel array 11.
  • the first full-color original image includes image pixels with pixel values (the image pixels marked with S in the first full-color original image) and image pixels without pixel values (the image pixels marked with N, i.e., NULL, in the first full-color original image).
  • Each subunit of the first full-color original image includes two image pixels marked with S and two image pixels marked with N.
  • the positions of the two image pixels marked with S correspond to the positions of the two full-color photosensitive pixels W in the corresponding subunit of the pixel array 11, and the positions of the two image pixels marked with N correspond to the positions of the two single-color photosensitive pixels in the corresponding subunit of the pixel array 11.
  • the processor 20 performing interpolation processing on the first panchromatic original image refers to calculating the pixel value of each image pixel marked with N in the first panchromatic original image, so that each image pixel in the interpolated first panchromatic intermediate image has a value of the W color channel.
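  • filling the N (NULL) pixels of the first panchromatic original image can be sketched as below. The 4-neighbour average is an illustrative assumption, not the patent's exact kernel, and `fill_panchromatic` is our own name.

```python
def fill_panchromatic(pan, valid):
    """pan: 2-D list of W-channel samples; valid[y][x] is True at S pixels and
    False at N (NULL) pixels. Each N pixel receives the average of the pixel
    values of its valid 4-neighbours."""
    h, w = len(pan), len(pan[0])
    out = [row[:] for row in pan]
    for y in range(h):
        for x in range(w):
            if valid[y][x]:
                continue  # an S pixel: a real W sample already exists
            samples = [pan[yy][xx]
                       for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                       if 0 <= yy < h and 0 <= xx < w and valid[yy][xx]]
            out[y][x] = sum(samples) / len(samples) if samples else 0.0
    return out
```

  • with the two-W-per-subunit layout described above, every N pixel has at least one S neighbour, so every image pixel of the resulting first panchromatic intermediate image carries a W-channel value.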
  • the processor 20 may perform brightness alignment processing on the first color intermediate image and the second color intermediate image to obtain the first color intermediate image after brightness alignment.
  • Brightness alignment mainly includes the following implementation process.
  • the processor 20 first identifies overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold. Subsequently, for each overexposed image pixel, the processor 20 expands a predetermined area centered on the overexposed image pixel. Subsequently, the processor 20 searches the predetermined area for intermediate image pixels whose pixel values are less than the first preset threshold, and uses the intermediate image pixels and the second color intermediate image to correct the pixel values of the overexposed image pixels.
  • the processor 20 uses the corrected pixel values of the pixels of the overexposed image to update the first color intermediate image to obtain the first color intermediate image after brightness alignment.
  • assume that the pixel value V1 of the image pixel P12 (the image pixel marked with a dashed circle in the first color intermediate image in FIG. 14) is greater than the first preset threshold V0; that is, the image pixel P12 is an overexposed image pixel.
  • the processor 20 expands a predetermined area centered on the overexposed image pixel P12, for example, the 3*3 area shown in FIG. 14. Of course, in other embodiments, it may also be a 4*4 area.
  • the processor 20 searches the predetermined 3*3 area for an intermediate image pixel with a pixel value less than the first preset threshold V0. For example, if the pixel value V2 of the image pixel P21 (the image pixel marked with a dotted circle in the first color intermediate image in FIG. 14) is less than the first preset threshold V0, then the image pixel P21 is the intermediate image pixel P21. Subsequently, the processor 20 finds the image pixels corresponding to the overexposed image pixel P12 and the intermediate image pixel P21 in the second color intermediate image, namely the image pixel P1'2' (the image pixel marked with a dashed circle in the second color intermediate image in FIG. 14) and the image pixel P2'1' (the image pixel marked with a dotted circle in the second color intermediate image in FIG. 14), where the image pixel P1'2' corresponds to the overexposed image pixel P12, the image pixel P2'1' corresponds to the intermediate image pixel P21, the pixel value of the image pixel P1'2' is V3, and the pixel value of the image pixel P2'1' is V4.
  • the processor 20 performs this brightness alignment process on each overexposed image pixel in the first color intermediate image, and then the first color intermediate image after brightness alignment is obtained. Since the pixel value of the overexposed image pixel in the first color intermediate image after brightness alignment is corrected, the pixel value of each image pixel in the first color intermediate image after brightness alignment is relatively accurate. It should be noted that there may be multiple image pixels with pixel values smaller than the first preset threshold in the predetermined area expanded with the overexposed image pixels as the center.
  • generally, the average of the ratios of long to short pixel values of multiple image pixels in one area is a constant, where the ratio of long to short pixel values of an image pixel refers to the ratio between the pixel value of the image pixel corresponding to the first exposure time (i.e., the long-exposure pixel value) and the pixel value of the image pixel corresponding to the second exposure time (i.e., the short-exposure pixel value). Therefore, the processor 20 can arbitrarily select an image pixel from the plurality of image pixels as the intermediate image pixel, and calculate the actual pixel value of the overexposed image pixel based on the intermediate image pixel and the second color intermediate image.
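  • one brightness-alignment step can be sketched as follows. The excerpt does not quote the correction formula itself; under the constant long/short-ratio property stated above (V2/V4 ≈ V1_true/V3), the correction V1' = V3 × (V2/V4) follows, and is our reconstruction, with an illustrative 8-bit threshold.

```python
FIRST_PRESET_THRESHOLD = 240  # V0; an illustrative 8-bit value, not from the patent

def correct_overexposed(v1, v2, v3, v4, v0=FIRST_PRESET_THRESHOLD):
    """v1: long-exposure value of the overexposed pixel P12 (clipped),
    v2: long-exposure value of the intermediate pixel P21 (v2 < v0),
    v3: short-exposure value at P1'2', v4: short-exposure value at P2'1'.
    Returns the corrected long-exposure value of P12."""
    if v1 <= v0:
        return v1            # not overexposed: keep the measured value
    ratio = v2 / v4          # local long/short ratio, assumed locally constant
    return v3 * ratio        # reconstructed V1' = V3 * (V2 / V4)
```

  • applying this to every overexposed image pixel and writing the results back yields the first color intermediate image after brightness alignment.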
  • the processor 20 may fuse the first color intermediate image and the second color intermediate image after the brightness alignment to obtain the color initial merged image. Specifically, the processor 20 first performs motion detection on the first color intermediate image after brightness alignment to identify whether there is a motion blur area in the first color intermediate image after brightness alignment. If there is no motion blur area in the first color intermediate image after the brightness alignment, the first color intermediate image and the second color intermediate image after the brightness alignment are directly merged to obtain the color initial merged image.
  • the resolution of the color initial combined image is smaller than the resolution of the pixel array 11.
  • in this case, the fusion of the two intermediate images follows the following principles: (1) in the first color intermediate image after brightness alignment, the pixel value of an image pixel in an overexposed area is directly replaced with the pixel value of the image pixel in the second color intermediate image corresponding to that overexposed area; (2) in the first color intermediate image after brightness alignment, the pixel value of an image pixel in an underexposed area is taken as the long-exposure pixel value divided by the long/short pixel-value ratio; (3) in the first color intermediate image after brightness alignment, the pixel value of an image pixel in an area that is neither underexposed nor overexposed is taken as the long-exposure pixel value divided by the long/short pixel-value ratio.
  • if there is a motion blur area, the fusion of the two intermediate images must follow the above three principles and also principle (4): in the first color intermediate image after brightness alignment, the pixel value of an image pixel in the motion blur area is directly replaced with the pixel value of the image pixel in the second color intermediate image corresponding to the motion blur area.
  • VL represents the long exposure pixel value
  • VS represents the short-exposure pixel value
  • VS' represents the calculated pixel value of the image pixel in the under-exposed area and the neither under-exposed nor over-exposed area.
  • the signal-to-noise ratio of VS’ will be greater than the signal-to-noise ratio of VS.
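  • principles (1)-(4) above can be sketched per image pixel as below, producing the color initial merged image in the short-exposure scale. The overexposure threshold here is illustrative, and detecting over/underexposure by simple thresholding is an assumption for the sketch.

```python
def fuse_pixel(vl, vs, ratio, over_thresh=240, motion_blur=False):
    """vl: long-exposure value from the brightness-aligned first color
    intermediate image; vs: short-exposure value from the second color
    intermediate image; ratio: local long/short pixel-value ratio.
    Returns the fused value VS' for this image pixel."""
    if motion_blur or vl >= over_thresh:
        return vs            # principles (1) and (4): take the short value
    return vl / ratio        # principles (2) and (3): VS' = VL / ratio
```

  • because VL comes from the longer, less noisy exposure, VS' = VL / ratio carries a higher signal-to-noise ratio than the directly measured VS, matching the statement above.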
  • after acquiring the color initial merged image and the first panchromatic intermediate image, the processor 20 interpolates the color initial merged image according to the first panchromatic intermediate image to obtain a color intermediate merged image with a resolution equal to that of the pixel array 11. Specifically, referring to FIGS. 1 and 15, the processor 20 first divides the first panchromatic intermediate image into a plurality of texture regions, each texture region including a plurality of image pixels (in the example of FIG. 15, each texture region includes 3*3 image pixels; in other examples, the number of image pixels in each texture region can also be other numbers, which is not limited here).
  • the processor 20 calculates the target texture direction of each texture region, where the target texture direction may be any one of the horizontal direction, the vertical direction, the diagonal direction, the anti-diagonal direction, or the plane direction. Specifically, for each texture region, the processor 20 first calculates the feature value in the horizontal direction, the feature value in the vertical direction, the feature value in the diagonal direction, and the feature value in the anti-diagonal direction, and then determines the target texture direction of the texture region based on these multiple feature values.
  • the processor 20 calculates the absolute value of the difference between P00 and P01, the absolute value of the difference between P01 and P02, the absolute value of the difference between P10 and P11, the absolute value of the difference between P11 and P12, the absolute value of the difference between P20 and P21, and the absolute value of the difference between P21 and P22, and calculates the average of these six absolute values; this average is the feature value Diff_H in the horizontal direction.
  • the processor 20 calculates the absolute value of the difference between P00 and P10, the absolute value of the difference between P10 and P20, the absolute value of the difference between P01 and P11, the absolute value of the difference between P11 and P21, the absolute value of the difference between P02 and P12, and the absolute value of the difference between P12 and P22, and calculates the average of these six absolute values; this average is the feature value Diff_V in the vertical direction.
  • the processor 20 calculates the absolute value of the difference between P00 and P11, the absolute value of the difference between P01 and P12, the absolute value of the difference between P10 and P21, and the absolute value of the difference between P11 and P22, and calculates the average of these four absolute values; this average is the feature value Diff_D in the diagonal direction.
  • the processor 20 calculates the absolute value of the difference between P01 and P10, the absolute value of the difference between P02 and P11, the absolute value of the difference between P11 and P20, and the absolute value of the difference between P12 and P21, and calculates the average of these four absolute values; this average is the feature value Diff_AD in the anti-diagonal direction.
  • the processor 20 may determine the target texture direction of the texture region according to the four feature values.
  • the processor 20 selects the largest feature value from the four feature values: (1) assuming that the largest feature value is Diff_H and the predetermined threshold is Diff_PV, if Diff_H-Diff_V ≥ Diff_PV, Diff_H-Diff_D ≥ Diff_PV, and Diff_H-Diff_AD ≥ Diff_PV, the processor 20 determines that the target texture direction is the vertical direction; (2) assuming that the largest feature value is Diff_V and the predetermined threshold is Diff_PV, if Diff_V-Diff_H ≥ Diff_PV, Diff_V-Diff_D ≥ Diff_PV, and Diff_V-Diff_AD ≥ Diff_PV, the processor 20 determines that the target texture direction is the horizontal direction; (3) assuming that the largest feature value is Diff_D and the predetermined threshold is Diff_PV, if Diff_D-Diff_H ≥ Diff_PV, Diff_D-Diff_V ≥ Diff_PV, and Diff_D-Diff_AD ≥ Diff_PV, the processor 20 determines that the target texture direction is the anti-diagonal direction; (4) assuming that the largest feature value is Diff_AD and the predetermined threshold is Diff_PV, if Diff_AD-Diff_H ≥ Diff_PV, Diff_AD-Diff_V ≥ Diff_PV, and Diff_AD-Diff_D ≥ Diff_PV, the processor 20 determines that the target texture direction is the diagonal direction.
  • if none of the above conditions is satisfied, the processor 20 determines that the target texture direction is the plane direction.
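  • the feature values and the direction decision for one 3*3 texture region can be sketched as follows. The orthogonal mapping (largest horizontal variation implies a vertical texture) follows cases (1) and (2) above; the mappings for Diff_D and Diff_AD are filled in by analogy, and the threshold value is illustrative.

```python
def feature_values(p):
    """p: 3x3 list of W pixel values P00..P22; returns the four directional
    feature values defined above."""
    a = abs
    diff_h = (a(p[0][0]-p[0][1]) + a(p[0][1]-p[0][2]) +
              a(p[1][0]-p[1][1]) + a(p[1][1]-p[1][2]) +
              a(p[2][0]-p[2][1]) + a(p[2][1]-p[2][2])) / 6
    diff_v = (a(p[0][0]-p[1][0]) + a(p[1][0]-p[2][0]) +
              a(p[0][1]-p[1][1]) + a(p[1][1]-p[2][1]) +
              a(p[0][2]-p[1][2]) + a(p[1][2]-p[2][2])) / 6
    diff_d = (a(p[0][0]-p[1][1]) + a(p[0][1]-p[1][2]) +
              a(p[1][0]-p[2][1]) + a(p[1][1]-p[2][2])) / 4
    diff_ad = (a(p[0][1]-p[1][0]) + a(p[0][2]-p[1][1]) +
               a(p[1][1]-p[2][0]) + a(p[1][2]-p[2][1])) / 4
    return {'Diff_H': diff_h, 'Diff_V': diff_v,
            'Diff_D': diff_d, 'Diff_AD': diff_ad}

# dominant feature value -> target texture direction (orthogonal mapping)
_DOMINANT_TO_TEXTURE = {'Diff_H': 'vertical', 'Diff_V': 'horizontal',
                        'Diff_D': 'anti-diagonal', 'Diff_AD': 'diagonal'}

def target_texture_direction(p, diff_pv=2.0):
    """Returns 'horizontal', 'vertical', 'diagonal', 'anti-diagonal', or
    'plane' for the 3x3 texture region p, with threshold diff_pv (Diff_PV)."""
    feats = feature_values(p)
    best = max(feats, key=feats.get)
    if all(feats[best] - v >= diff_pv for k, v in feats.items() if k != best):
        return _DOMINANT_TO_TEXTURE[best]
    return 'plane'   # no direction dominates by Diff_PV
```

  • a constant region yields all-zero feature values and therefore the plane direction, consistent with the solid-color interpretation below.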
  • the target texture direction of the texture area is a plane direction, which means that the shooting scene corresponding to the texture area may be a solid color scene.
  • the processor 20 can use the target texture direction of each texture region to determine the interpolation direction of the image pixels in the area of the color initial merged image corresponding to that texture region, and interpolate the color initial merged image based on the determined interpolation directions to obtain a color intermediate merged image with a resolution equal to that of the pixel array 11. Specifically, if the target texture direction of the texture region in the first panchromatic intermediate image corresponding to a certain area in the color initial merged image is the horizontal direction, then the interpolation direction of the image pixels in that area is the horizontal direction.
  • if the target texture direction of the corresponding texture region is the vertical direction, then the interpolation direction of the image pixels in the area is the vertical direction. If the target texture direction of the corresponding texture region is the diagonal direction, then the interpolation direction of the image pixels in the area is the diagonal direction. If the target texture direction of the corresponding texture region is the anti-diagonal direction, then the interpolation direction of the image pixels in the area is the anti-diagonal direction.
  • if the target texture direction of the corresponding texture region is the plane direction, then the interpolation direction of the image pixels in the area is the plane direction. In this way, using the target texture direction to determine the interpolation direction of the image pixels can make the interpolation result more accurate, and the final interpolated image has a better color reproduction effect.
  • that is, the texture of the interpolated color intermediate merged image is more consistent with the texture of the actual shooting scene.
  • in some embodiments, the image may not be divided into a plurality of texture regions; instead, the entire color initial merged image is regarded as one texture area.
  • the method of not dividing the area can reduce the amount of data to be processed by the processor 20, which is beneficial to increase the processing speed of the image, and can save the power consumption of the imaging device 100.
  • although the method of dividing regions increases the amount of data to be processed by the processor 20, the color reproduction of the color intermediate merged image calculated in this manner is more accurate.
  • the processor 20 may adaptively select whether to divide the area according to different application scenarios.
  • for example, when the power of the imaging device 100 is low, the interpolation of the color initial merged image can be performed without dividing regions; when the power of the imaging device 100 is high, the interpolation of the color initial merged image can be performed by dividing regions. Likewise, when shooting static images, the interpolation of the color initial merged image can be performed by dividing regions, and when shooting dynamic images (such as video recording), the interpolation of the color initial merged image can be performed without dividing regions.
  • the processor 20 may fuse the color intermediate merged image and the first panchromatic intermediate image to obtain the target image.
  • the target image has a higher dynamic range and a higher resolution, and the image quality is better.
  • in another embodiment, part of the full-color photosensitive pixels W in the same subunit are exposed at the fourth exposure time, and the remaining full-color photosensitive pixels W are exposed at the third exposure time.
  • the fourth exposure time is less than or equal to the first exposure time and greater than the third exposure time.
  • in each subunit, one single-color photosensitive pixel is exposed for the first exposure time (for example, the long exposure time L shown in FIG. 16), one single-color photosensitive pixel is exposed for the second exposure time (for example, the short exposure time S shown in FIG. 16), one full-color photosensitive pixel W is exposed for the third exposure time (for example, the short exposure time S shown in FIG. 16), and one full-color photosensitive pixel W is exposed for the fourth exposure time (for example, the long exposure time L shown in FIG. 16).
  • the exposure process of the pixel array 11 may be: (1) the photosensitive pixels 110 exposed at the first exposure time, the photosensitive pixels 110 exposed at the second exposure time, the photosensitive pixels 110 exposed at the third exposure time, and the photosensitive pixels 110 exposed at the fourth exposure time are sequentially exposed (the exposure order of the four is not limited), and the exposure times of the four do not overlap; (2) the photosensitive pixels 110 exposed at the first exposure time, the photosensitive pixels 110 exposed at the second exposure time, the photosensitive pixels 110 exposed at the third exposure time, and the photosensitive pixels 110 exposed at the fourth exposure time are sequentially exposed (the exposure order of the four is not limited), and the exposure times of the four partially overlap; (3) the exposure times of all the photosensitive pixels 110 exposed with shorter exposure times are within the exposure time of the photosensitive pixels 110 exposed with the longest exposure time; for example, the exposure times of all the single-color photosensitive pixels exposed at the second exposure time, all the full-color photosensitive pixels W exposed at the third exposure time, and all the full-color photosensitive pixels W exposed at the fourth exposure time are within the exposure time of all the single-color photosensitive pixels exposed at the first exposure time.
  • the image sensor 10 can output four original images, which are: (1) the first color original image, composed of the first color information generated by the multiple single-color photosensitive pixels exposed with the long exposure time L; (2) the second color original image, composed of the second color information generated by the multiple single-color photosensitive pixels exposed with the short exposure time S; (3) the first panchromatic original image, composed of the first panchromatic information generated by the multiple panchromatic photosensitive pixels W exposed with the short exposure time S; (4) the second panchromatic original image, composed of the second panchromatic information generated by the multiple panchromatic photosensitive pixels W exposed with the long exposure time L.
  • the four original images are transmitted to the processor 20, so that the processor 20 performs subsequent processing on these four original images.
  • the subsequent processing of the four original images by the processor 20 mainly includes: (1) performing interpolation processing on the first color original image to obtain a first color intermediate image with a resolution less than that of the pixel array 11, and performing interpolation processing on the second color original image to obtain a second color intermediate image with a resolution less than that of the pixel array 11.
  • the process of processing the four original images by the processor 20 and the process of processing the three original images by the processor 20 are substantially the same.
  • the main differences include:
  • the processor 20 also needs to perform interpolation processing on the second panchromatic original image to obtain the second panchromatic intermediate image.
  • the interpolation of the second panchromatic original image is the same as the interpolation of the first panchromatic original image: both calculate the pixel values of the image pixels that do not have pixel values, so that each image pixel in the interpolated second panchromatic intermediate image has a pixel value of the W color channel.
  • the processor 20 also needs to perform brightness alignment processing on the first panchromatic intermediate image and the second panchromatic intermediate image to obtain the second panchromatic intermediate image after brightness alignment, which specifically includes: identifying overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expanding a predetermined area centered on the overexposed image pixel, and searching the predetermined area for intermediate image pixels whose pixel values are less than the second preset threshold; using the intermediate image pixels and the first panchromatic intermediate image to correct the pixel values of the overexposed image pixels; and using the corrected pixel values of the overexposed image pixels to update the second panchromatic intermediate image to obtain the second panchromatic intermediate image after brightness alignment.
  • the brightness alignment process of the first panchromatic intermediate image and the second panchromatic intermediate image is similar to the brightness alignment process of the first color intermediate image and the second color intermediate image, and will not be further described here.
  • the processor 20 needs to fuse the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image, which specifically includes: performing motion detection on the brightness-aligned second panchromatic intermediate image; when there is no motion blur area in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when there is a motion blur area in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image with the parts of the brightness-aligned second panchromatic intermediate image other than the motion blur area to obtain the panchromatic merged image.
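  • the motion-aware selection in this panchromatic fusion can be sketched as below. This is a deliberate simplification: outside the blur mask it simply keeps the brightness-aligned long-exposure value, standing in for the full per-pixel fusion rule, which the excerpt says mirrors the color case.

```python
def fuse_panchromatic(w_short, w_long_aligned, blur_mask):
    """w_short: first panchromatic intermediate image (short exposure);
    w_long_aligned: brightness-aligned second panchromatic intermediate image
    (long exposure); blur_mask[y][x] is True where motion blur was detected.
    Blurred pixels fall back to the short-exposure image; elsewhere the
    aligned long-exposure value is kept."""
    h, w = len(w_short), len(w_short[0])
    return [[w_short[y][x] if blur_mask[y][x] else w_long_aligned[y][x]
             for x in range(w)] for y in range(h)]
```

  • excluding the motion blur area from the long-exposure contribution in this way prevents blurred W samples from propagating into the panchromatic merged image.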
  • the fusion manner of the first panchromatic intermediate image and the second panchromatic intermediate image after the brightness is aligned is similar to the fusion manner of the second color intermediate image and the first color intermediate image after the brightness is aligned, and will not be further described here.
  • the processor 20 interpolates the color initial combined image according to the panchromatic combined image to obtain the color intermediate combined image.
  • the processor 20 also needs to calculate the target texture direction of at least one texture region, determine the interpolation direction for the color initial merged image based on the target texture direction, and then interpolate the color initial merged image based on the determined interpolation direction to obtain a color intermediate merged image with more accurate color reproduction.
  • the imaging device 100 of the embodiment of the present application controls the multiple photosensitive pixels 110 in each subunit of the pixel array 11 to be exposed with different exposure times to obtain a target image with a high dynamic range.
  • the panchromatic image composed of the panchromatic information generated by the panchromatic photosensitive pixels W is used to guide the interpolation of the color image, which can not only improve the resolution of the finally obtained target image, but also improve the color reproduction of the target image, thereby greatly improving the imaging quality of the imaging device 100.
  • in the above embodiments, the third exposure time is equal to the second exposure time, both being the short exposure time. In other embodiments, the third exposure time may also be different from the second exposure time; for example, the third exposure time may be greater than the second exposure time and less than the first exposure time, or the third exposure time may be less than the second exposure time.
  • similarly, the fourth exposure time is equal to the first exposure time, both being the long exposure time. In other embodiments, the fourth exposure time may also be different from the first exposure time.
  • in some embodiments, the processor 20 may also first interpolate the first color intermediate image and the second color intermediate image by using the first panchromatic intermediate image (or the panchromatic merged image) to obtain a first color high-resolution image and a second color high-resolution image, each with the same resolution as the pixel array 11.
  • the processor 20 then performs brightness alignment and fusion processing on the first color high-resolution image and the second color high-resolution image, and merges the processed image with the first panchromatic intermediate image (or the panchromatic merged image) to obtain a target image with a resolution equal to that of the pixel array 11.
  • the circuit connection of the photosensitive pixels 110 may be: for any two adjacent rows of photosensitive pixels 110, at least one row of photosensitive pixels 110 satisfies the following: the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels located in the same row are connected to a first exposure control line TX1, the control terminals TG of the exposure control circuits of the multiple full-color photosensitive pixels W are connected to a second exposure control line TX2, and the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels and the multiple full-color photosensitive pixels W are connected to a reset line RX.
  • the control unit 13 in the image sensor 10 can give different photosensitive pixels 110 in the same subunit different exposure times by controlling the pulse timings of RX, TX1, and TX2. For example, as shown in FIG. 17, the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels in the same row are connected to a first exposure control line TX1, the control terminals TG of the exposure control circuits of the multiple panchromatic photosensitive pixels W in the same row are connected to a second exposure control line TX2, and the control terminals RG of the reset circuits of the multiple photosensitive pixels 110 in the same row are connected to one reset line RX.
  • the connection method shown in FIG. 17 is applicable both to the exposure method of the pixel array 11 shown in FIG. 11 and to the exposure method of the pixel array 11 shown in FIG. 16.
  • in one row, the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels are connected to a first exposure control line TX1, the control terminals TG of the exposure control circuits of the multiple panchromatic photosensitive pixels W are connected to a second exposure control line TX2, and the control terminals RG of the reset circuits of the multiple photosensitive pixels 110 are connected to one reset line RX; in the other row, the control terminals TG of the exposure control circuits of the multiple photosensitive pixels 110 are connected to one exposure control line TX, and the control terminals RG of their reset circuits are connected to one reset line RX.
  • the circuit connection of the photosensitive pixels 110 may also be: for any two adjacent rows of photosensitive pixels 110, at least one row of photosensitive pixels 110 satisfies the following: the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels in the same row are connected to a first reset line RX1, the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W are connected to a second reset line RX2, and the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels W are connected to one exposure control line TX.
  • the control unit 13 in the image sensor 10 can give different photosensitive pixels 110 in the same subunit different exposure times by controlling the pulse timings of TX, RX1, and RX2.
  • the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels in the same row are connected to a first reset line RX1, the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W in that row are connected to a second reset line RX2, and the control terminals TG of the exposure control circuits of the multiple photosensitive pixels 110 in the same row are connected to one exposure control line TX.
  • in one row, the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels are connected to a first reset line RX1, the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W are connected to a second reset line RX2, and the control terminals TG of the exposure control circuits of the multiple photosensitive pixels 110 are connected to one exposure control line TX; in the other row, the control terminals TG of the exposure control circuits of the multiple photosensitive pixels 110 are connected to one exposure control line TX, and the control terminals RG of their reset circuits are connected to one reset line RX.
  • the circuit connection of the photosensitive pixels 110 may also be: for any two adjacent rows of photosensitive pixels 110, at least one row of photosensitive pixels 110 satisfies the following: the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels in the same row are connected to a first exposure control line TX1, the control terminals TG of the exposure control circuits of the multiple panchromatic photosensitive pixels W are connected to a second exposure control line TX2, the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels are connected to a first reset line RX1, and the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W are connected to a second reset line RX2.
  • the control unit 13 in the image sensor 10 can give different photosensitive pixels 110 in the same subunit different exposure times by controlling the pulse timings of TX1, TX2, RX1, and RX2.
  • the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels in the same row are connected to a first reset line RX1, the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W in that row are connected to a second reset line RX2, the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels in the same row are connected to a first exposure control line TX1, and the control terminals TG of the exposure control circuits of the multiple panchromatic photosensitive pixels W in the same row are connected to a second exposure control line TX2.
  • the connection method shown in FIG. 19 is applicable both to the exposure method of the pixel array 11 shown in FIG. 11 and to the exposure method of the pixel array 11 shown in FIG. 16.
  • in one row, the control terminals RG of the reset circuits of the multiple single-color photosensitive pixels are connected to a first reset line RX1, the control terminals RG of the reset circuits of the multiple panchromatic photosensitive pixels W are connected to a second reset line RX2, the control terminals TG of the exposure control circuits of the multiple single-color photosensitive pixels are connected to a first exposure control line TX1, and the control terminals TG of the exposure control circuits of the multiple panchromatic photosensitive pixels W are connected to a second exposure control line TX2; in the other row, the control terminals of the exposure control circuits of the multiple photosensitive pixels 110 are connected to one exposure control line TX, and the control terminals of their reset circuits are connected to one reset line RX.
  • the present application also provides an electronic device 300.
  • the electronic device 300 includes the imaging device 100 and the housing 200 described in any one of the above embodiments.
  • the imaging device 100 is combined with the housing 200.
  • the electronic device 300 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (such as a smart watch, smart bracelet, smart glasses, or smart helmet), a drone, a head-mounted display device, etc., which is not limited here.
  • the electronic device 300 of the embodiment of the present application controls the multiple photosensitive pixels 110 in each subunit of the pixel array 11 to be exposed with different exposure times to obtain a target image with a high dynamic range.
  • the panchromatic image, composed of the panchromatic information generated by the panchromatic photosensitive pixels W, is used to guide the interpolation of the color image, which not only improves the resolution of the final target image but also improves its color reproduction, greatly improving the imaging quality of the electronic device 300.
  • The image acquisition method includes:
  • Step 02 includes: performing interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; performing interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array; performing brightness alignment on the first color intermediate image and the second color intermediate image to obtain a brightness-aligned first color intermediate image; fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; interpolating the color initial merged image according to the first panchromatic intermediate image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fusing the color intermediate merged image and the first panchromatic intermediate image to obtain the target image.
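As a concrete illustration of the brightness-alignment and fusion steps above, the following is a minimal Python sketch. It is a hedged toy example rather than the patent's actual algorithm: images are plain nested lists of single-channel values, the exposure ratio is assumed to be known in advance, and simple average fusion stands in for whatever weighting a real implementation would use.

```python
def brightness_align(short_img, ratio, clip=255):
    """Scale a short-exposure image by the exposure ratio so its brightness
    matches the long-exposure image, clipping values that would saturate."""
    return [[min(p * ratio, clip) for p in row] for row in short_img]

def fuse(img_a, img_b):
    """Average-fuse two brightness-aligned images of equal size."""
    return [[(a + b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

# Toy 1x2 images: the long exposure is 4x the short exposure.
long_img = [[200, 100]]
short_img = [[50, 25]]
aligned = brightness_align(short_img, ratio=4)   # [[200, 100]]
merged = fuse(long_img, aligned)                 # [[200.0, 100.0]]
```

In this toy case the two exposures agree perfectly after alignment, so fusion reproduces the scene values; in practice the short exposure recovers detail where the long exposure is saturated.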
  • some of the panchromatic photosensitive pixels W in the same subunit are exposed for the fourth exposure time and the remaining panchromatic photosensitive pixels W are exposed for the third exposure time, the fourth exposure time being less than or equal to the first exposure time and greater than the third exposure time.
  • Step 02 includes: performing interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; performing interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array, and performing interpolation on the second panchromatic original image to obtain a second panchromatic intermediate image with a resolution equal to that of the pixel array, wherein the second panchromatic original image is obtained from the second panchromatic information generated by the panchromatic photosensitive pixels exposed for the fourth exposure time; performing brightness alignment on the first color intermediate image and the second color intermediate image to obtain a brightness-aligned first color intermediate image; performing brightness alignment on the first panchromatic intermediate image and the second panchromatic intermediate image to obtain a brightness-aligned second panchromatic intermediate image; fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image; interpolating the color initial merged image according to the panchromatic merged image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fusing the color intermediate merged image and the panchromatic merged image to obtain the target image.
  • the step of performing brightness alignment on the first color intermediate image and the second color intermediate image to obtain the brightness-aligned first color intermediate image includes: identifying overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that pixel; finding, within the predetermined area, intermediate image pixels with pixel values less than the first preset threshold; correcting the pixel values of the overexposed image pixels using the intermediate image pixels and the second color intermediate image; and updating the first color intermediate image with the corrected pixel values of the overexposed image pixels to obtain the brightness-aligned first color intermediate image.
  • performing brightness alignment on the first panchromatic intermediate image and the second panchromatic intermediate image to obtain the brightness-aligned second panchromatic intermediate image includes: identifying overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that pixel; finding, within the predetermined area, intermediate image pixels with pixel values less than the second preset threshold; correcting the pixel values of the overexposed image pixels using the intermediate image pixels and the first panchromatic intermediate image; and updating the second panchromatic intermediate image with the corrected pixel values to obtain the brightness-aligned second panchromatic intermediate image.
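The overexposure-correction procedure described in the bullets above — find overexposed pixels, search a neighborhood for well-exposed ones, and correct the overexposed value using the ratio between the two exposures — can be sketched as follows. This is a hypothetical Python illustration: the neighborhood radius, the use of a mean ratio, and the list-of-lists image format are assumptions, not the patent's specification.

```python
def correct_overexposed(long_img, short_img, threshold, radius=1):
    """Replace overexposed pixels of the long-exposure image with values
    reconstructed from the short-exposure image, scaled by the local
    long/short brightness ratio of nearby well-exposed pixels."""
    h, w = len(long_img), len(long_img[0])
    out = [row[:] for row in long_img]
    for y in range(h):
        for x in range(w):
            if long_img[y][x] <= threshold:
                continue  # not overexposed, keep as-is
            ratios = []
            # Expand a (2*radius+1)^2 area centered on the overexposed pixel
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and long_img[ny][nx] < threshold
                            and short_img[ny][nx] > 0):
                        ratios.append(long_img[ny][nx] / short_img[ny][nx])
            if ratios:
                # Scale the short-exposure value by the mean local ratio
                out[y][x] = short_img[y][x] * (sum(ratios) / len(ratios))
    return out
```

For example, with `long_img = [[255, 100], [100, 100]]`, `short_img = [[80, 25], [25, 25]]`, and `threshold = 250`, the saturated top-left pixel is rebuilt as `80 * 4.0 = 320.0`, recovering brightness information the long exposure clipped.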
  • fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image includes: performing motion detection on the brightness-aligned first color intermediate image; when no motion blur area exists in the brightness-aligned first color intermediate image, fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image; and when a motion blur area exists in the brightness-aligned first color intermediate image, fusing the area of the brightness-aligned first color intermediate image other than the motion blur area with the second color intermediate image to obtain the color initial merged image.
  • fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image includes: performing motion detection on the brightness-aligned second panchromatic intermediate image; when no motion blur area exists in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when a motion blur area exists in the brightness-aligned second panchromatic intermediate image, fusing the first panchromatic intermediate image with the area of the brightness-aligned second panchromatic intermediate image other than the motion blur area to obtain the panchromatic merged image.
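The motion-aware fusion described above can be reduced to a per-pixel sketch: pixels where the aligned image disagrees strongly with the reference are treated as motion-blurred and taken from the reference image only, while static pixels are fused normally. The difference threshold and average fusion below are illustrative assumptions, not the patent's motion-detection method.

```python
def fuse_with_motion_mask(ref_img, aligned_img, motion_thresh):
    """Fuse two aligned images, excluding motion-blurred regions.

    A pixel whose aligned value differs from the reference by more than
    motion_thresh is treated as motion-blurred and kept from ref_img only;
    all other pixels are average-fused."""
    out = []
    for ref_row, ali_row in zip(ref_img, aligned_img):
        row = []
        for a, b in zip(ref_row, ali_row):
            if abs(a - b) > motion_thresh:
                row.append(a)            # motion region: keep reference only
            else:
                row.append((a + b) / 2)  # static region: average fusion
        out.append(row)
    return out
```

With `ref_img = [[100, 100]]` and `aligned_img = [[102, 200]]` at a threshold of 10, the first pixel fuses to `101.0` while the second is flagged as motion and stays `100`.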
  • the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program. When the computer program is executed by the processor 20, the processor 20 is caused to execute the image acquisition method described in any one of the foregoing embodiments.
  • referring to FIG. 1, FIG. 3, FIG. 11, and FIG. 22, when the computer program is executed by the processor 20, the processor 20 executes the following steps:
  • controlling the exposure of the pixel array 11, wherein, for the multiple photosensitive pixels 110 in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel W is exposed for a third exposure time less than the first exposure time; and
  • interpolating the first color original image and the second color original image according to the first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array 11, wherein the first color original image is obtained from the first color information generated by the single-color photosensitive pixels exposed for the first exposure time, the second color original image is obtained from the second color information generated by the single-color photosensitive pixels exposed for the second exposure time, and the first panchromatic original image is obtained from the first panchromatic information generated by the panchromatic photosensitive pixels W exposed for the third exposure time.
  • the color intermediate merged image and the first panchromatic intermediate image are fused to obtain the target image.


Abstract

An image acquisition method, an imaging device (100), an electronic device (300), and a non-volatile computer-readable storage medium (400). The method includes: controlling a pixel array (11) to be exposed; and interpolating a first color original image and a second color original image according to a first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array.

Description

Image Acquisition Method, Imaging Device, Electronic Device, and Readable Storage Medium

Priority Information

This application claims priority to and the benefit of Chinese patent application No. 202010166269.2, filed with the China National Intellectual Property Administration on March 11, 2020, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of imaging technology, and in particular to an image acquisition method, an imaging device, an electronic device, and a non-volatile computer-readable storage medium.

Background

High-dynamic-range imaging is usually achieved by controlling the image sensor in an imaging device to perform long and short exposures and then fusing the images obtained from the long and short exposures; the fused image can better show details in both dark and bright regions.

Summary

Embodiments of this application provide an image acquisition method, an imaging device, an electronic device, and a non-volatile computer-readable storage medium.

The image acquisition method of the embodiments of this application is used in an image sensor. The image sensor includes a pixel array including multiple panchromatic photosensitive pixels and multiple color photosensitive pixels, the color photosensitive pixels having a narrower spectral response than the panchromatic photosensitive pixels. The pixel array includes minimal repeating units, each minimal repeating unit contains multiple subunits, and each subunit includes multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels. The image acquisition method includes: controlling the pixel array to be exposed, wherein, for the multiple photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed for a third exposure time less than the first exposure time; and interpolating a first color original image and a second color original image according to a first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array, wherein the first color original image is obtained from first color information generated by the single-color photosensitive pixels exposed for the first exposure time, the second color original image is obtained from second color information generated by the single-color photosensitive pixels exposed for the second exposure time, and the first panchromatic original image is obtained from first panchromatic information generated by the panchromatic photosensitive pixels exposed for the third exposure time.

The imaging device of the embodiments of this application includes an image sensor and a processor. The image sensor includes a pixel array with the structure described above. The pixel array in the image sensor is exposed, wherein, for the multiple photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed for a third exposure time less than the first exposure time. The processor is configured to interpolate the first color original image and the second color original image according to the first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array, wherein the first color, second color, and first panchromatic original images are obtained from the first color information, second color information, and first panchromatic information generated by the correspondingly exposed photosensitive pixels, as defined above.

The electronic device of the embodiments of this application includes a housing and an imaging device combined with the housing. The imaging device includes an image sensor and a processor as described above: the pixel array in the image sensor is exposed with at least one single-color photosensitive pixel in each subunit exposed for the first exposure time, at least one single-color photosensitive pixel exposed for the second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel exposed for the third exposure time less than the first exposure time; and the processor interpolates the first and second color original images according to the first panchromatic original image and fuses the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array.

The non-volatile computer-readable storage medium of the embodiments of this application contains a computer program that, when executed by a processor, causes the processor to execute the image acquisition method described above: controlling the pixel array to be exposed with the first, second, and third exposure times as defined above; and interpolating the first and second color original images according to the first panchromatic original image and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array.

Additional aspects and advantages of the embodiments of this application will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of this application.
Brief Description of the Drawings

The above and/or additional aspects and advantages of this application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic diagram of an imaging device according to an embodiment of this application;

FIG. 2 is a schematic diagram of a pixel array according to an embodiment of this application;

FIG. 3 is a schematic cross-sectional view of a photosensitive pixel according to an embodiment of this application;

FIG. 4 is a pixel circuit diagram of a photosensitive pixel according to an embodiment of this application;

FIGS. 5 to 10 are schematic diagrams of arrangements of minimal repeating units in pixel arrays according to embodiments of this application;

FIGS. 11 to 16 are schematic diagrams of the principle by which the processor in the imaging device processes original images acquired by the image sensor, according to some embodiments of this application;

FIGS. 17 to 19 are schematic diagrams of partial circuit connections of pixel arrays according to some embodiments of this application;

FIG. 20 is a schematic structural diagram of an electronic device according to an embodiment of this application;

FIG. 21 is a schematic flowchart of an image acquisition method according to some embodiments of this application;

FIG. 22 is a schematic diagram of the interaction between a non-volatile computer-readable storage medium and a processor according to some embodiments of this application.
Detailed Description

Embodiments of this application are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the embodiments of this application, and are not to be construed as limiting them.

The image acquisition method of the embodiments of this application is used in an image sensor. The image sensor includes a pixel array including multiple panchromatic photosensitive pixels and multiple color photosensitive pixels, the color photosensitive pixels having a narrower spectral response than the panchromatic photosensitive pixels. The pixel array includes minimal repeating units, each containing multiple subunits, and each subunit includes multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels. The image acquisition method includes: controlling the pixel array to be exposed, wherein, for the multiple photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed for a third exposure time less than the first exposure time; and interpolating a first color original image and a second color original image according to a first panchromatic original image, and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array, wherein the first color original image is obtained from first color information generated by the single-color photosensitive pixels exposed for the first exposure time, the second color original image is obtained from second color information generated by the single-color photosensitive pixels exposed for the second exposure time, and the first panchromatic original image is obtained from first panchromatic information generated by the panchromatic photosensitive pixels exposed for the third exposure time.

In some embodiments, all panchromatic photosensitive pixels are exposed for the third exposure time. Interpolating the first and second color original images according to the first panchromatic original image and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array includes: performing interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; performing interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array; performing brightness alignment on the first and second color intermediate images to obtain a brightness-aligned first color intermediate image; fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; interpolating the color initial merged image according to the first panchromatic intermediate image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fusing the color intermediate merged image and the first panchromatic intermediate image to obtain the target image.

In some embodiments, some panchromatic photosensitive pixels in the same subunit are exposed for a fourth exposure time and the remaining panchromatic photosensitive pixels for the third exposure time, the fourth exposure time being less than or equal to the first exposure time and greater than the third exposure time. Interpolating the first and second color original images according to the first panchromatic original image and fusing the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array includes: performing interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and performing interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; performing interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array, and performing interpolation on the second panchromatic original image to obtain a second panchromatic intermediate image with a resolution equal to that of the pixel array, wherein the second panchromatic original image is obtained from second panchromatic information generated by the panchromatic photosensitive pixels exposed for the fourth exposure time; performing brightness alignment on the first and second color intermediate images to obtain a brightness-aligned first color intermediate image; performing brightness alignment on the first and second panchromatic intermediate images to obtain a brightness-aligned second panchromatic intermediate image; fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image; interpolating the color initial merged image according to the panchromatic merged image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fusing the color intermediate merged image and the panchromatic merged image to obtain the target image.

In some embodiments, when all panchromatic photosensitive pixels are exposed for the third exposure time, the exposure windows of all single-color photosensitive pixels exposed for the second exposure time lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time, and the exposure windows of all panchromatic photosensitive pixels exposed for the third exposure time likewise lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time. When some panchromatic photosensitive pixels in the same subunit are exposed for the fourth exposure time and the rest for the third exposure time, the exposure windows of all single-color photosensitive pixels exposed for the second exposure time, of all panchromatic photosensitive pixels exposed for the third exposure time, and of all panchromatic photosensitive pixels exposed for the fourth exposure time all lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time.

In some embodiments, performing brightness alignment on the first and second color intermediate images to obtain the brightness-aligned first color intermediate image includes: identifying overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that pixel; finding, within the predetermined area, intermediate image pixels with pixel values less than the first preset threshold; correcting the pixel values of the overexposed image pixels using the intermediate image pixels and the second color intermediate image; and updating the first color intermediate image with the corrected pixel values to obtain the brightness-aligned first color intermediate image.

In some embodiments, performing brightness alignment on the first and second panchromatic intermediate images to obtain the brightness-aligned second panchromatic intermediate image includes: identifying overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expanding a predetermined area centered on that pixel; finding, within the predetermined area, intermediate image pixels with pixel values less than the second preset threshold; correcting the pixel values of the overexposed image pixels using the intermediate image pixels and the first panchromatic intermediate image; and updating the second panchromatic intermediate image with the corrected pixel values to obtain the brightness-aligned second panchromatic intermediate image.

In some embodiments, fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image includes: performing motion detection on the brightness-aligned first color intermediate image; when no motion blur area exists in it, fusing the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image; and when a motion blur area exists, fusing the area of the brightness-aligned first color intermediate image other than the motion blur area with the second color intermediate image to obtain the color initial merged image.

In some embodiments, fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image includes: performing motion detection on the brightness-aligned second panchromatic intermediate image; when no motion blur area exists in it, fusing the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when a motion blur area exists, fusing the first panchromatic intermediate image with the area of the brightness-aligned second panchromatic intermediate image other than the motion blur area to obtain the panchromatic merged image.
The imaging device of the embodiments of this application includes an image sensor and a processor. The image sensor includes a pixel array. The pixel array includes multiple panchromatic photosensitive pixels and multiple color photosensitive pixels, the color photosensitive pixels having a narrower spectral response than the panchromatic photosensitive pixels. The pixel array includes minimal repeating units, each containing multiple subunits, and each subunit includes multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels. The pixel array in the image sensor is exposed, wherein, for the multiple photosensitive pixels in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed for a third exposure time less than the first exposure time. The processor is configured to interpolate a first color original image and a second color original image according to a first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array, wherein the first color original image is obtained from first color information generated by the single-color photosensitive pixels exposed for the first exposure time, the second color original image is obtained from second color information generated by the single-color photosensitive pixels exposed for the second exposure time, and the first panchromatic original image is obtained from first panchromatic information generated by the panchromatic photosensitive pixels exposed for the third exposure time.

In some embodiments, all panchromatic photosensitive pixels are exposed for the third exposure time, and the processor is further configured to: perform interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and perform interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; perform interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array; perform brightness alignment on the first and second color intermediate images to obtain a brightness-aligned first color intermediate image; fuse the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; interpolate the color initial merged image according to the first panchromatic intermediate image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fuse the color intermediate merged image and the first panchromatic intermediate image to obtain the target image.

In some embodiments, some panchromatic photosensitive pixels in the same subunit are exposed for a fourth exposure time and the remaining panchromatic photosensitive pixels for the third exposure time, the fourth exposure time being less than or equal to the first exposure time and greater than the third exposure time; the processor is further configured to: perform interpolation on the first color original image to obtain a first color intermediate image with a resolution smaller than that of the pixel array, and perform interpolation on the second color original image to obtain a second color intermediate image with a resolution smaller than that of the pixel array; perform interpolation on the first panchromatic original image to obtain a first panchromatic intermediate image with a resolution equal to that of the pixel array, and perform interpolation on the second panchromatic original image to obtain a second panchromatic intermediate image with a resolution equal to that of the pixel array, wherein the second panchromatic original image is obtained from second panchromatic information generated by the panchromatic photosensitive pixels exposed for the fourth exposure time; perform brightness alignment on the first and second color intermediate images to obtain a brightness-aligned first color intermediate image; perform brightness alignment on the first and second panchromatic intermediate images to obtain a brightness-aligned second panchromatic intermediate image; fuse the brightness-aligned first color intermediate image and the second color intermediate image to obtain a color initial merged image; fuse the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain a panchromatic merged image; interpolate the color initial merged image according to the panchromatic merged image to obtain a color intermediate merged image with a resolution equal to that of the pixel array; and fuse the color intermediate merged image and the panchromatic merged image to obtain the target image.

In some embodiments, when all panchromatic photosensitive pixels are exposed for the third exposure time, the exposure windows of all single-color photosensitive pixels exposed for the second exposure time lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time, and the exposure windows of all panchromatic photosensitive pixels exposed for the third exposure time likewise lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time. When some panchromatic photosensitive pixels in the same subunit are exposed for the fourth exposure time and the rest for the third exposure time, the exposure windows of all single-color photosensitive pixels exposed for the second exposure time, of all panchromatic photosensitive pixels exposed for the third exposure time, and of all panchromatic photosensitive pixels exposed for the fourth exposure time all lie within the exposure windows of all single-color photosensitive pixels exposed for the first exposure time.

In some embodiments, the processor is further configured to: identify overexposed image pixels in the first color intermediate image whose pixel values are greater than a first preset threshold; for each overexposed image pixel, expand a predetermined area centered on that pixel; find, within the predetermined area, intermediate image pixels with pixel values less than the first preset threshold; correct the pixel values of the overexposed image pixels using the intermediate image pixels and the second color intermediate image; and update the first color intermediate image with the corrected pixel values to obtain the brightness-aligned first color intermediate image.

In some embodiments, the processor is further configured to: identify overexposed image pixels in the second panchromatic intermediate image whose pixel values are greater than a second preset threshold; for each overexposed image pixel, expand a predetermined area centered on that pixel; find, within the predetermined area, intermediate image pixels with pixel values less than the second preset threshold; correct the pixel values of the overexposed image pixels using the intermediate image pixels and the first panchromatic intermediate image; and update the second panchromatic intermediate image with the corrected pixel values to obtain the brightness-aligned second panchromatic intermediate image.

In some embodiments, the processor is further configured to: perform motion detection on the brightness-aligned first color intermediate image; when no motion blur area exists in it, fuse the brightness-aligned first color intermediate image and the second color intermediate image to obtain the color initial merged image; and when a motion blur area exists, fuse the area of the brightness-aligned first color intermediate image other than the motion blur area with the second color intermediate image to obtain the color initial merged image.

In some embodiments, the processor is further configured to: perform motion detection on the brightness-aligned second panchromatic intermediate image; when no motion blur area exists in it, fuse the first panchromatic intermediate image and the brightness-aligned second panchromatic intermediate image to obtain the panchromatic merged image; and when a motion blur area exists, fuse the first panchromatic intermediate image with the area of the brightness-aligned second panchromatic intermediate image other than the motion blur area to obtain the panchromatic merged image.

In some embodiments, the pixel array is arranged in a two-dimensional matrix, and for any two adjacent rows of photosensitive pixels, at least one row of photosensitive pixels satisfies one of the following: the control terminals of the exposure control circuits of the multiple single-color photosensitive pixels in the same row are connected to a first exposure control line, the control terminals of the exposure control circuits of the multiple panchromatic photosensitive pixels are connected to a second exposure control line, and the control terminals of the reset circuits of the multiple single-color and multiple panchromatic photosensitive pixels are connected to one reset line; or the control terminals of the reset circuits of the multiple single-color photosensitive pixels in the same row are connected to a first reset line, the control terminals of the reset circuits of the multiple panchromatic photosensitive pixels are connected to a second reset line, and the control terminals of the exposure control circuits of the multiple single-color and multiple panchromatic photosensitive pixels are connected to one exposure control line; or the control terminals of the exposure control circuits of the multiple single-color photosensitive pixels in the same row are connected to a first exposure control line, the control terminals of the exposure control circuits of the multiple panchromatic photosensitive pixels are connected to a second exposure control line, the control terminals of the reset circuits of the multiple single-color photosensitive pixels are connected to a first reset line, and the control terminals of the reset circuits of the multiple panchromatic photosensitive pixels are connected to a second reset line.
In some embodiments, the arrangement of the minimal repeating unit is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000001]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.
The electronic device of the embodiments of this application includes a housing and the imaging device of any of the above embodiments. The imaging device is combined with the housing.

The non-volatile computer-readable storage medium of the embodiments of this application contains a computer program that, when executed by a processor, causes the processor to execute the image acquisition method of any of the above embodiments.

In the related art, the control unit in an image sensor can control multiple photoelectric conversion elements covered by the same filter to perform exposures of different durations, obtaining multiple frames of color original images with different exposure times. The processor then fuses the multiple color original images to obtain a high-dynamic-range image. However, this way of achieving a high-dynamic-range image reduces the image's resolution and degrades the sharpness of the image sensor's output.

For the above reasons, referring to FIGS. 1 to 3 and FIG. 5, this application provides an imaging device 100. The imaging device 100 includes an image sensor 10 and a processor 20. The image sensor 10 includes a pixel array 11. The pixel array 11 includes multiple panchromatic photosensitive pixels W and multiple color photosensitive pixels. The pixel array 11 includes minimal repeating units, each containing multiple subunits; each subunit includes multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels W. The pixel array 11 in the image sensor 10 is exposed, wherein, for the multiple photosensitive pixels 110 in the same subunit, at least one single-color photosensitive pixel is exposed for a first exposure time, at least one single-color photosensitive pixel is exposed for a second exposure time less than the first exposure time, and at least one panchromatic photosensitive pixel is exposed for a third exposure time less than the first exposure time. The processor 20 is electrically connected to the image sensor 10 and is configured to interpolate the first color original image and the second color original image according to the first panchromatic original image, and to fuse the interpolated images with the first panchromatic original image to obtain a target image with a resolution equal to that of the pixel array. The first color original image is obtained from first color information generated by the single-color photosensitive pixels exposed for the first exposure time, the second color original image is obtained from second color information generated by the single-color photosensitive pixels exposed for the second exposure time, and the first panchromatic original image is obtained from first panchromatic information generated by the panchromatic photosensitive pixels exposed for the third exposure time.

The imaging device 100 of the embodiments of this application obtains a target image with a high dynamic range by controlling the multiple photosensitive pixels 110 in each subunit of the pixel array 11 to be exposed for different exposure times. Moreover, in obtaining the high-dynamic-range target image, interpolation is performed on the first and second color original images so that the final target image has a resolution equal to that of the pixel array 11. Furthermore, because the interpolation of the first and second color original images is guided by the information in the first panchromatic original image, the interpolation result is more accurate and the color reproduction is better.
The imaging device 100 of the embodiments of this application is described in detail below with reference to the drawings.

FIG. 2 is a schematic diagram of the image sensor 10 in an embodiment of this application. The image sensor 10 includes a pixel array 11, a vertical driving unit 12, a control unit 13, a column processing unit 14, and a horizontal driving unit 15.

For example, the image sensor 10 may use a complementary metal-oxide-semiconductor (CMOS) photosensitive element or a charge-coupled device (CCD) photosensitive element.

For example, the pixel array 11 includes multiple photosensitive pixels 110 (shown in FIG. 3) arranged two-dimensionally in an array (i.e., in a two-dimensional matrix), each photosensitive pixel 110 including a photoelectric conversion element 1111 (shown in FIG. 4). Each photosensitive pixel 110 converts light into charge according to the intensity of the light incident on it.

For example, the vertical driving unit 12 includes a shift register and an address decoder, and provides readout-scan and reset-scan functions. Readout scanning refers to sequentially scanning the unit photosensitive pixels 110 row by row and reading signals from them row by row. For example, the signal output by each photosensitive pixel 110 in the selected and scanned row is transmitted to the column processing unit 14. Reset scanning is used to reset the charge: the photocharge of the photoelectric conversion element is discarded so that accumulation of new photocharge can begin.

For example, the signal processing performed by the column processing unit 14 is correlated double sampling (CDS). In CDS, the reset level and the signal level output from each photosensitive pixel 110 in the selected row are taken, and the difference between the two levels is calculated, yielding the signals of the photosensitive pixels 110 in one row. The column processing unit 14 may have an analog-to-digital (A/D) conversion function for converting analog pixel signals into digital format.

For example, the horizontal driving unit 15 includes a shift register and an address decoder, and sequentially scans the pixel array 11 column by column. Through the selection-scan operation performed by the horizontal driving unit 15, each photosensitive pixel column is sequentially processed by the column processing unit 14 and sequentially output.

For example, the control unit 13 configures timing signals according to the operating mode and uses various timing signals to control the vertical driving unit 12, the column processing unit 14, and the horizontal driving unit 15 to work cooperatively.
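As a small numerical illustration of the correlated double sampling described above, the sketch below subtracts the per-pixel reset level from the signal level, so that the offset common to both samples (such as reset noise) cancels. It is a toy model of the analog operation, not the column processing unit's actual circuitry.

```python
def correlated_double_sample(reset_level, signal_level):
    """CDS: the difference between signal and reset levels removes the
    offset shared by both samples of the same pixel."""
    return signal_level - reset_level

def cds_row(reset_levels, signal_levels):
    """Apply CDS to one row of photosensitive pixels."""
    return [s - r for r, s in zip(reset_levels, signal_levels)]
```

For instance, a pixel read out with a reset level of 30 and a signal level of 130 yields a CDS value of 100, regardless of any common offset added to both samples.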
FIG. 3 is a schematic diagram of a photosensitive pixel 110 in an embodiment of this application. The photosensitive pixel 110 includes a pixel circuit 111, a filter 112, and a microlens 113. Along the light-receiving direction of the photosensitive pixel 110, the microlens 113, the filter 112, and the pixel circuit 111 are arranged in sequence. The microlens 113 converges light; the filter 112 passes light of a certain band and filters out light of the remaining bands. The pixel circuit 111 converts the received light into an electrical signal and provides the generated electrical signal to the column processing unit 14 shown in FIG. 2.

FIG. 4 is a schematic diagram of a pixel circuit 111 of a photosensitive pixel 110 in an embodiment of this application. The pixel circuit 111 in FIG. 4 can be applied to each photosensitive pixel 110 (shown in FIG. 3) in the pixel array 11 shown in FIG. 2. The working principle of the pixel circuit 111 is described below with reference to FIGS. 2 to 4.

As shown in FIG. 4, the pixel circuit 111 includes a photoelectric conversion element 1111 (e.g., a photodiode), an exposure control circuit (e.g., a transfer transistor 1112), a reset circuit (e.g., a reset transistor 1113), an amplifying circuit (e.g., an amplifying transistor 1114), and a selection circuit (e.g., a selection transistor 1115). In the embodiments of this application, the transfer transistor 1112, reset transistor 1113, amplifying transistor 1114, and selection transistor 1115 are, for example, MOS transistors, but are not limited thereto.

For example, the photoelectric conversion element 1111 includes a photodiode whose anode is connected, for example, to ground. The photodiode converts received light into charge. The cathode of the photodiode is connected to a floating diffusion unit FD via the exposure control circuit (e.g., the transfer transistor 1112). The floating diffusion unit FD is connected to the gate of the amplifying transistor 1114 and the source of the reset transistor 1113.

For example, the exposure control circuit is the transfer transistor 1112, and the control terminal TG of the exposure control circuit is the gate of the transfer transistor 1112. When a pulse of an active level (e.g., the VPIX level) is transmitted to the gate of the transfer transistor 1112 through an exposure control line (e.g., TX shown in FIG. 17), the transfer transistor 1112 turns on and transfers the charge photoelectrically converted by the photodiode to the floating diffusion unit FD.

For example, the drain of the reset transistor 1113 is connected to the pixel power supply VPIX, and its source is connected to the floating diffusion unit FD. Before the charge is transferred from the photodiode to the floating diffusion unit FD, a pulse of an active reset level is transmitted to the gate of the reset transistor 1113 via a reset line (e.g., RX shown in FIG. 17), and the reset transistor 1113 turns on, resetting the floating diffusion unit FD to the pixel power supply VPIX.

For example, the gate of the amplifying transistor 1114 is connected to the floating diffusion unit FD, and its drain is connected to the pixel power supply VPIX. After the floating diffusion unit FD is reset by the reset transistor 1113, the amplifying transistor 1114 outputs the reset level through the output terminal OUT via the selection transistor 1115. After the charge of the photodiode is transferred by the transfer transistor 1112, the amplifying transistor 1114 outputs the signal level through the output terminal OUT via the selection transistor 1115.

For example, the drain of the selection transistor 1115 is connected to the source of the amplifying transistor 1114, and its source is connected to the column processing unit 14 in FIG. 2 through the output terminal OUT. When a pulse of an active level is transmitted to the gate of the selection transistor 1115 through a selection line, the selection transistor 1115 turns on, and the signal output by the amplifying transistor 1114 is transmitted to the column processing unit 14 through the selection transistor 1115.

It should be noted that the pixel structure of the pixel circuit 111 in the embodiments of this application is not limited to that shown in FIG. 4. For example, the pixel circuit 111 may have a three-transistor pixel structure in which the functions of the amplifying transistor 1114 and the selection transistor 1115 are performed by one transistor. The exposure control circuit is likewise not limited to a single transfer transistor 1112; other electronic devices or structures with a control-terminal-controlled conduction function can serve as the exposure control circuit in the embodiments of this application. The single transfer transistor 1112 of the embodiments of this application is simple to implement, low-cost, and easy to control.

FIGS. 5 to 10 are schematic diagrams of arrangements of photosensitive pixels 110 (shown in FIG. 3) in the pixel array 11 (shown in FIG. 2) according to some embodiments of this application. The photosensitive pixels 110 are of two types: panchromatic photosensitive pixels W and color photosensitive pixels. FIGS. 5 to 10 show only the arrangement of the multiple photosensitive pixels 110 in one minimal repeating unit; the pixel array 11 is formed by replicating the minimal repeating unit shown in FIGS. 5 to 10 multiple times over rows and columns. Each minimal repeating unit is composed of multiple panchromatic photosensitive pixels W and multiple color photosensitive pixels, and includes multiple subunits, each containing multiple single-color photosensitive pixels and multiple panchromatic photosensitive pixels W. Here, a color photosensitive pixel is a photosensitive pixel that can receive light of a color channel; the multiple color photosensitive pixels include multiple classes of single-color photosensitive pixels, and different classes of single-color photosensitive pixels receive light of different color channels. It should be noted that a single-color photosensitive pixel may receive light of only a single color channel, or of two or even more color channels, which is not limited here. In the minimal repeating units shown in FIGS. 5 to 8, the panchromatic photosensitive pixels W and color photosensitive pixels in each subunit are arranged alternately. In the minimal repeating units shown in FIGS. 9 and 10, within each subunit, the multiple photosensitive pixels 110 in the same row are photosensitive pixels 110 of the same class, or the multiple photosensitive pixels 110 in the same column are photosensitive pixels 110 of the same class.
Specifically, for example, FIG. 5 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to an embodiment of this application, in which the minimal repeating unit is 4 rows by 4 columns of 16 photosensitive pixels 110 and each subunit is 2 rows by 2 columns of 4 photosensitive pixels 110. The arrangement is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000002]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.

For example, as shown in FIG. 5, in each subunit the panchromatic photosensitive pixels W and single-color photosensitive pixels are arranged alternately.

For example, as shown in FIG. 5, the subunits fall into three classes. The first-class subunit UA includes multiple panchromatic photosensitive pixels W and multiple first-color photosensitive pixels A; the second-class subunit UB includes multiple panchromatic photosensitive pixels W and multiple second-color photosensitive pixels B; the third-class subunit UC includes multiple panchromatic photosensitive pixels W and multiple third-color photosensitive pixels C. Each minimal repeating unit includes four subunits: one first-class subunit UA, two second-class subunits UB, and one third-class subunit UC. One first-class subunit UA and one third-class subunit UC are arranged along a first diagonal direction D1 (e.g., the direction connecting the upper-left and lower-right corners in FIG. 5), and the two second-class subunits UB are arranged along a second diagonal direction D2 (e.g., the direction connecting the upper-right and lower-left corners in FIG. 5). The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal is perpendicular to the second diagonal.

It should be noted that in other embodiments the first diagonal direction D1 may also be the direction connecting the upper-right and lower-left corners, and the second diagonal direction D2 the direction connecting the upper-left and lower-right corners. In addition, "direction" here is not a single pointing; it can be understood as the concept of a "line" indicating the arrangement, which may point both ways along the line. The explanations of the first diagonal direction D1 and second diagonal direction D2 in FIGS. 6 to 10 below are the same as here.

As another example, FIG. 6 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to another embodiment of this application, in which the minimal repeating unit is 6 rows by 6 columns of 36 photosensitive pixels 110 and each subunit is 3 rows by 3 columns of 9 photosensitive pixels 110. The arrangement is:
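The FIG. 5-style layout described above (2×2 subunits of alternating W and single-color pixels, with UA and UC on one diagonal and two UB on the other) can be generated programmatically. The sketch below is illustrative only: it assumes each subunit's first row starts with W, which is consistent with the FIG. 5 ordering described later in the text, but the exact in-subunit ordering should be treated as an assumption.

```python
def subunit(color):
    """2x2 subunit with panchromatic (W) and single-color pixels alternating,
    assuming W occupies the top-left position."""
    return [["W", color], [color, "W"]]

def minimal_repeating_unit():
    """4x4 minimal repeating unit: UA top-left, UC bottom-right (first
    diagonal D1), and two UB subunits on the second diagonal D2."""
    top = [left + right for left, right in zip(subunit("A"), subunit("B"))]
    bottom = [left + right for left, right in zip(subunit("B"), subunit("C"))]
    return top + bottom
```

Calling `minimal_repeating_unit()` yields a 4×4 grid whose first row is `W A W B`, and tiling this unit over rows and columns reproduces the pixel array 11 structure; exactly half of the 16 pixels are panchromatic.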
[Arrangement matrix reproduced as images in the original document: Figure PCTCN2021073292-appb-000003, Figure PCTCN2021073292-appb-000004]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.

For example, as shown in FIG. 6, in each subunit the panchromatic photosensitive pixels W and single-color photosensitive pixels are arranged alternately.

For example, as shown in FIG. 6, the subunits fall into three classes. The first-class subunit UA includes multiple panchromatic photosensitive pixels W and multiple first-color photosensitive pixels A; the second-class subunit UB includes multiple panchromatic photosensitive pixels W and multiple second-color photosensitive pixels B; the third-class subunit UC includes multiple panchromatic photosensitive pixels W and multiple third-color photosensitive pixels C. Each minimal repeating unit includes four subunits: one first-class subunit UA, two second-class subunits UB, and one third-class subunit UC. One first-class subunit UA and one third-class subunit UC are arranged along the first diagonal direction D1, and the two second-class subunits UB are arranged along the second diagonal direction D2. The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal is perpendicular to the second diagonal.

As another example, FIG. 7 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to yet another embodiment of this application, in which the minimal repeating unit is 8 rows by 8 columns of 64 photosensitive pixels 110 and each subunit is 4 rows by 4 columns of 16 photosensitive pixels 110. The arrangement is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000005]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.

For example, as shown in FIG. 7, in each subunit the panchromatic photosensitive pixels W and single-color photosensitive pixels are arranged alternately.

For example, as shown in FIG. 7, the subunits fall into three classes. The first-class subunit UA includes multiple panchromatic photosensitive pixels W and multiple first-color photosensitive pixels A; the second-class subunit UB includes multiple panchromatic photosensitive pixels W and multiple second-color photosensitive pixels B; the third-class subunit UC includes multiple panchromatic photosensitive pixels W and multiple third-color photosensitive pixels C. Each minimal repeating unit includes four subunits: one first-class subunit UA, two second-class subunits UB, and one third-class subunit UC. One first-class subunit UA and one third-class subunit UC are arranged along the first diagonal direction D1, and the two second-class subunits UB are arranged along the second diagonal direction D2. The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal is perpendicular to the second diagonal.

Specifically, for example, FIG. 8 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to a further embodiment of this application, in which the minimal repeating unit is 4 rows by 4 columns of 16 photosensitive pixels 110 and each subunit is 2 rows by 2 columns of 4 photosensitive pixels 110. The arrangement is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000006]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.

The arrangement of photosensitive pixels 110 in the minimal repeating unit shown in FIG. 8 is roughly the same as in FIG. 5, except that the alternation order of panchromatic photosensitive pixels W and single-color photosensitive pixels in the lower-left second-class subunit UB of FIG. 8 differs from that in the lower-left second-class subunit UB of FIG. 5, and the alternation order in the third-class subunit UC of FIG. 8 also differs from that in the lower-right third-class subunit UC of FIG. 5. Specifically, in the lower-left second-class subunit UB of FIG. 5, the first row of photosensitive pixels 110 alternates as panchromatic pixel W, then single-color pixel (second-color pixel B), and the second row alternates as single-color pixel (B), then panchromatic pixel W; whereas in the lower-left second-class subunit UB of FIG. 8, the first row alternates as single-color pixel (B), then W, and the second row as W, then B. In the lower-right third-class subunit UC of FIG. 5, the first row alternates as W, then single-color pixel (third-color pixel C), and the second row as C, then W; whereas in the lower-right third-class subunit UC of FIG. 8, the first row alternates as C, then W, and the second row as W, then C.

As shown in FIG. 8, the alternation order of panchromatic photosensitive pixels W and single-color photosensitive pixels in the first-class subunit UA of FIG. 8 differs from that in the third-class subunit UC. Specifically, in the first-class subunit UA shown in FIG. 8, the first row of photosensitive pixels 110 alternates as W, then single-color pixel (first-color pixel A), and the second row as A, then W; whereas in the third-class subunit UC shown in FIG. 8, the first row alternates as C, then W, and the second row as W, then C. That is, within the same minimal repeating unit, the alternation order of panchromatic photosensitive pixels W and color photosensitive pixels in different subunits may be consistent (as in FIG. 5) or inconsistent (as in FIG. 8).

As another example, FIG. 9 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to still another embodiment of this application, in which the minimal repeating unit is 4 rows by 4 columns of 16 photosensitive pixels 110 and each subunit is 2 rows by 2 columns of 4 photosensitive pixels 110. The arrangement is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000007]
W denotes a panchromatic photosensitive pixel; A denotes the first-color photosensitive pixel among the multiple color photosensitive pixels; B denotes the second-color photosensitive pixel among the multiple color photosensitive pixels; C denotes the third-color photosensitive pixel among the multiple color photosensitive pixels.

For example, as shown in FIG. 9, in each subunit the multiple photosensitive pixels 110 in the same row are photosensitive pixels 110 of the same class. Here, photosensitive pixels 110 of the same class include: (1) all panchromatic photosensitive pixels W; (2) all first-color photosensitive pixels A; (3) all second-color photosensitive pixels B; or (4) all third-color photosensitive pixels C.

For example, as shown in FIG. 9, the subunits fall into three classes. The first-class subunit UA includes multiple panchromatic photosensitive pixels W and multiple first-color photosensitive pixels A; the second-class subunit UB includes multiple panchromatic photosensitive pixels W and multiple second-color photosensitive pixels B; the third-class subunit UC includes multiple panchromatic photosensitive pixels W and multiple third-color photosensitive pixels C. Each minimal repeating unit includes four subunits: one first-class subunit UA, two second-class subunits UB, and one third-class subunit UC. One first-class subunit UA and one third-class subunit UC are arranged along the first diagonal direction D1, and the two second-class subunits UB are arranged along the second diagonal direction D2. The first diagonal direction D1 is different from the second diagonal direction D2; for example, the first diagonal is perpendicular to the second diagonal.

As another example, FIG. 10 is a schematic arrangement of photosensitive pixels 110 (shown in FIG. 3) in a minimal repeating unit according to still another embodiment of this application, in which the minimal repeating unit is 4 rows by 4 columns of 16 photosensitive pixels 110 and each subunit is 2 rows by 2 columns of 4 photosensitive pixels 110. The arrangement is:
[Arrangement matrix reproduced as an image in the original document: Figure PCTCN2021073292-appb-000008]
W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图10所示,对于每个子单元,同一列的多个感光像素110为同一类别的感光像素110。其中,同一类别的感光像素110包括:(1)均为全色感光像素W;(2)均为第一颜色感光像素A;(3)均为第二颜色感光像素B;(4)均为第三颜色感光像素C。
例如,如图10所示,子单元的类别包括三类。其中,第一类子单元UA包括多个全色感光像素W和多个第一颜色感光像素A;第二类子单元UB包括多个全色感光像素W和多个第二颜色感光像素B;第三类子单元UC包括多个全色感光像素W和多个第三颜色感光像素C。每个最小重复单元包括四个子单元,分别为一个第一类子单元UA、两个第二类子单元UB及一个第三类子单元UC。其中,一个第一类子单元UA与一个第三类子单元UC设置在第一对角线方向D1,两个第二类子单元UB设置在第二对角线方向D2。第一对角线方向D1与第二对角线方向D2不同。例如,第一对角线和第二对角线垂直。
例如,在其他实施方式中,同一最小重复单元中,也可以是部分子单元内的同一行的多个感光像素110为同一类别的感光像素110,其余部分子单元内的同一列的多个感光像素110为同一类别的感光像素110。
例如,如图5至图10所示的最小重复单元中,第一颜色感光像素A可以为红色感光像素R;第二颜色感光像素B可以为绿色感光像素G;第三颜色感光像素C可以为蓝色感光像素Bu。
例如,如图5至图10所示的最小重复单元中,第一颜色感光像素A可以为红色感光像素R;第二颜色感光像素B可以为黄色感光像素Y;第三颜色感光像素C可以为蓝色感光像素Bu。
例如,如图5至图10所示的最小重复单元中,第一颜色感光像素A可以为品红色感光像素M;第二颜色感光像素B可以为青色感光像素Cy;第三颜色感光像素C可以为黄色感光像素Y。
需要说明的是,在一些实施例中,全色感光像素W的响应波段可为可见光波段(例如,400nm-760nm)。例如,全色感光像素W上设置有红外滤光片,以实现红外光的滤除。在另一些实施例中,全色感光像素W的响应波段为可见光波段和近红外波段(例如,400nm-1000nm),与图像传感器10(图1所示)中的光电转换元件1111(图4所示)的响应波段相匹配。例如,全色感光像素W可以不设置滤光片或者设置可供所有波段的光线通过的滤光片,全色感光像素W的响应波段由光电转换元件1111的响应波段确定,即两者相匹配。本申请的实施例包括但不局限于上述波段范围。
请结合图1至图3、及图5，在某些实施方式中，控制单元13控制像素阵列11曝光。其中，对于同一子单元中的多个感光像素110，至少一个单颜色感光像素以第一曝光时间曝光，至少一个单颜色感光像素以小于第一曝光时间的第二曝光时间曝光，至少一个全色感光像素W以小于第一曝光时间的第三曝光时间曝光。像素阵列11中以第一曝光时间曝光的多个单颜色感光像素可以生成多个第一彩色信息，以第二曝光时间曝光的多个单颜色感光像素可以生成多个第二彩色信息，以第三曝光时间曝光的多个全色感光像素W（图5所示）可以生成多个第一全色信息。多个第一彩色信息可以形成第一彩色原始图像，多个第二彩色信息可以形成第二彩色原始图像，多个第一全色信息可以形成第一全色原始图像。成像装置100中的处理器20可以根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值，并将插值处理后的图像与第一全色原始图像融合以得到具有与像素阵列11的分辨率相同的分辨率的目标图像。
下面结合两个实施例来对成像装置100获得具有高分辨率及高动态范围的目标图像的过程进行解释说明。
在一个例子中，如图1至图3、及图11所示，像素阵列11中的所有全色感光像素W均以第三曝光时间曝光。具体地，对于每个子单元中的多个（图11所示为4个）感光像素110，一个单颜色感光像素以第一曝光时间（例如图11所示的长曝光时间L）曝光，一个单颜色感光像素以第二曝光时间（例如图11所示的短曝光时间S）曝光，两个全色感光像素W均以第三曝光时间（例如图11所示的短曝光时间S）曝光。
需要说明的是,在某些实施例中,像素阵列11的曝光过程可以是:(1)以第一曝光时间曝光的感光像素110、以第二曝光时间曝光的感光像素110及以第三曝光时间曝光的感光像素110依次序曝光(其中三者的曝光顺序不作限制),且三者的曝光进行时间均不重叠;(2)以第一曝光时间曝光的感光像素110、以第二曝光时间曝光的感光像素110及以第三曝光时间曝光的感光像素110依次序曝光(其中三者的曝光顺序不作限制),且三者的曝光进行时间存在部分重叠;(3)所有以较短的曝光时间曝光的感光像素110的曝光进行时间均位于以最长的曝光时间曝光的感光像素110的曝光进行时间内,例如,以第二曝光时间曝光的所有单颜色感光像素的曝光进行时间均位于以第一曝光时间曝光的所有单颜色感光像素的曝光进行时间内,以第三曝光时间曝光的所有全色感光像素W的曝光进行时间均位于以第一曝光时间曝光的所有单颜色感光像素的曝光进行时间内。在本申请的具体实施例中,成像装置100采用第(3)种曝光方式,使用该种曝光方式可以缩短像素阵列11所需要的整体曝光时间,有利于提升图像的帧率。
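其中，第(3)种曝光方式下，所有较短曝光的进行时间均嵌套于最长曝光的进行时间内，整体曝光时长仅由最长的曝光时间决定；而依次序且互不重叠的第(1)种方式下，整体曝光时长为各曝光时间之和。下面用一段示意性的Python代码比较两种方式的整体时长（曝光窗口以（起始, 结束）时间区间表示，数值均为假设的示例）：

```python
def total_exposure_span(windows):
    """windows: [(start, end), ...]，各类感光像素的曝光窗口。
    返回整体曝光时长（所有窗口的最早起始时刻到最晚结束时刻）。"""
    starts = [s for s, _ in windows]
    ends = [e for _, e in windows]
    return max(ends) - min(starts)

# 第(3)种方式：两个短曝光窗口嵌套在长曝光窗口 [0, 30] 内，整体时长 = 30
nested = [(0, 30), (10, 20), (12, 18)]

# 第(1)种方式：三种曝光依次进行且互不重叠，整体时长 = 30 + 10 + 6 = 46
sequential = [(0, 30), (30, 40), (40, 46)]
```

可见嵌套式曝光缩短了像素阵列所需的整体曝光时间，从而有利于提升图像的帧率。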
像素阵列11曝光结束后,图像传感器10可以输出三张原始图像,分别为:(1)第一彩色原始图像,由以长曝光时间L曝光的多个单颜色感光像素生成的第一彩色信息组成;(2)第二彩色原始图像,由以短曝光时间S曝光的多个单颜色感光像素生成的第二彩色信息组成;(3)第一全色原始图像,由以短曝光时间S曝光的多个全色感光像素W生成的第一全色信息组成。
如图1至图3、及图12所示,图像传感器10获得第一彩色原始图像、第二彩色原始图像、及第一全色原始图像之后,会将这三张原始图像传输给处理器20,以由处理器20对这三张原始图像执行后续处理。具体地,处理器20可以对第一彩色原始图像执行插值处理以得到分辨率小于像素阵列11的分辨率的第一彩色中间图像,并对第二彩色原始图像执行插值处理以得到分辨率小于像素阵列11的分辨率的第二彩色中间图像。其中,处理器20对第一彩色原始图像执行插值处理指的是补齐第一彩色原始图像中每一个图像像素缺乏的颜色通道的值,从而使得插值后得到的第一彩色中间图像中的每一个图像像素均具有所有颜色通道的值。以图12所示的第一彩色原始图像中左上角的图像像素为例,该图像像素具有第一颜色通道(即A)的值,而缺乏第二颜色通道(即B)的值及第三颜色通道(即C)的值。处理器20可以通过插值处理来计算出该图像像素的第二颜色通道的值及第三颜色通道的值,并将第一颜色通道的值、第二颜色通道的值及第三颜色通道的值进行融合以得到第一彩色中间图像中左上角的图像像素的值,该图像像素的值由三个颜色通道的值组成,即A+B+C。同样地,处理器20对第二彩色原始图像执行插值处理指的是补齐第二彩色原始图像中每一个图像像素缺乏的颜色通道的值,从而使得插值后得到的第二彩色中间图像中的每一个图像像素均具有所有颜色通道的值。需要说明的是,图12中示出的A+B+C仅表示每一个图像像素的值由三个颜色通道的值组成,并不表示三个颜色通道的值直接相加。
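上述“补齐每个图像像素缺乏的颜色通道的值”的插值过程可以用如下Python草图示意。该草图为假设性实现：将每个颜色通道视为一个稀疏平面，缺失值记为NaN，并取3*3邻域内同通道已有值的均值；实际实现中可以采用更复杂的去马赛克算法。

```python
import numpy as np

def fill_missing_channel(plane):
    """plane：单个颜色通道（A/B/C 之一）的稀疏平面，已有值处为像素值，缺失处为 np.nan。
    对每个缺失位置，取其 3*3 邻域内该通道已有值的均值，使每个图像像素都具有该通道的值。"""
    h, w = plane.shape
    out = plane.copy()
    for i in range(h):
        for j in range(w):
            if np.isnan(plane[i, j]):
                window = plane[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
                known = window[~np.isnan(window)]
                if known.size:
                    out[i, j] = known.mean()
    return out
```

对三个颜色通道分别执行上述插值后，每个图像像素即同时具有三个颜色通道的值（即图12中的A+B+C）。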
处理器20还可以对第一全色原始图像执行插值处理以得到分辨率与像素阵列11的分辨率相等的第一全色中间图像。如图13所示，第一全色原始图像包括具有像素值的图像像素（第一全色原始图像中标记有S的图像像素）及不具有像素值的图像像素（第一全色原始图像中标记有N，即NULL的图像像素）。第一全色原始图像的每个子单元中均包括两个标记有S的图像像素及两个标记有N的图像像素。两个标记有S的图像像素所在位置对应于像素阵列11中的对应的子单元内的两个全色感光像素W所在位置，两个标记有N的图像像素所在位置对应于像素阵列11中的对应的子单元内的两个单颜色感光像素所在位置。处理器20对第一全色原始图像执行插值处理指的是计算出第一全色原始图像中的每个标记有N的图像像素的像素值，从而使得插值后得到的第一全色中间图像中的每个图像像素均能够具有W颜色通道的值。
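对第一全色原始图像的插值同理可以示意如下。该草图为假设性实现：以布尔掩膜标记S/N位置，每个标记有N的图像像素取其上下左右已有W值的均值；实际实现可结合方向性插值等更精细的方法。

```python
import numpy as np

def interpolate_panchromatic(raw, has_value):
    """raw：第一全色原始图像的像素值矩阵；has_value：True 表示该位置标记有S（具有像素值），
    False 表示该位置标记有N（NULL）。对每个N位置取四邻域内已有W值的均值。"""
    h, w = raw.shape
    out = raw.astype(float).copy()
    for i in range(h):
        for j in range(w):
            if not has_value[i, j]:
                vals = [raw[i + di, j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w and has_value[i + di, j + dj]]
                out[i, j] = sum(vals) / len(vals)
    return out
```

插值后得到的第一全色中间图像中，每个图像像素均具有W颜色通道的值，其分辨率与像素阵列的分辨率相等。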
在处理器20获得第一彩色中间图像及第二彩色中间图像后，处理器20可以对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像。亮度对齐主要包括以下实施过程。处理器20首先识别第一彩色中间图像中像素值大于第一预设阈值的过曝图像像素。随后，对于每一个过曝图像像素，处理器20以该过曝图像像素为中心扩展预定区域。随后，处理器20在预定区域内寻找像素值小于第一预设阈值的中间图像像素，并利用中间图像像素及第二彩色中间图像对过曝图像像素的像素值进行修正。最后，处理器20利用过曝图像像素的修正后的像素值更新第一彩色中间图像以得到亮度对齐后的第一彩色中间图像。具体地，请结合图1及图14，假设图像像素P12（图14中第一彩色中间图像内标记有虚线圆圈的图像像素）的像素值V1大于第一预设阈值V0，即图像像素P12为过曝图像像素P12，则处理器20以过曝图像像素P12为中心扩展一个预定区域，例如，图14所示的3*3区域，当然，在其他实施例中，也可以是4*4区域、5*5区域、10*10区域等，在此不作限制。随后，处理器20在3*3的预定区域内寻找像素值小于第一预设阈值V0的中间图像像素，例如图14中的图像像素P21（图14中第一彩色中间图像内标记有点画线圆圈的图像像素）的像素值V2小于第一预设阈值V0，则图像像素P21即为中间图像像素P21。随后，处理器20在第二彩色中间图像中寻找与过曝图像像素P12及中间图像像素P21分别对应的图像像素，即图像像素P1'2'（图14中第二彩色中间图像内标记有虚线圆圈的图像像素）和图像像素P2'1'（图14中第二彩色中间图像内标记有点画线圆圈的图像像素），其中，图像像素P1'2'与过曝图像像素P12对应，图像像素P2'1'与中间图像像素P21对应，图像像素P1'2'的像素值为V3，图像像素P2'1'的像素值为V4。随后，处理器20根据V1'/V3=V2/V4来计算出V1'，并利用V1'的值来替换掉V1的值。由此，即可计算出过曝图像像素P12的实际像素值。处理器20对第一彩色中间图像中的每一个过曝图像像素均执行这一亮度对齐的处理过程，即可得到亮度对齐后的第一彩色中间图像。由于亮度对齐后的第一彩色中间图像中的过曝图像像素的像素值经过了修正，亮度对齐后的第一彩色中间图像中的每个图像像素的像素值均较为准确。需要说明的是，以过曝图像像素为中心扩展的预定区域内可能存在多个像素值小于第一预设阈值的图像像素，一般地，一个区域内的多个图像像素的长短像素值比例的平均值为一个常量，其中，一个图像像素的长短像素值比例指的是该图像像素的对应于第一曝光时间的像素值（即长曝光像素值）与该图像像素的对应于第二曝光时间的像素值（即短曝光像素值）之间的比值，因此，处理器20可以在这多个图像像素中任意挑选一个图像像素作为中间图像像素，并基于中间图像像素及第二彩色中间图像计算出过曝图像像素的实际像素值。
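上述按V1'/V3=V2/V4修正过曝像素的亮度对齐过程可以用如下Python草图示意。该草图为假设性实现：阈值v0与区域半径radius均为示例参数，且在预定区域内取遇到的第一个未过曝像素作为中间图像像素。

```python
import numpy as np

def brightness_align(long_img, short_img, v0, radius=1):
    """long_img：第一彩色中间图像（长曝光）；short_img：第二彩色中间图像（短曝光）。
    对每个像素值大于第一预设阈值 v0 的过曝图像像素，在以其为中心的
    (2*radius+1)^2 预定区域内寻找像素值小于 v0 的中间图像像素，
    并按 V1' = V3 * V2 / V4 修正过曝像素的像素值。"""
    out = long_img.astype(float).copy()
    h, w = long_img.shape
    for i in range(h):
        for j in range(w):
            if long_img[i, j] <= v0:
                continue  # 非过曝图像像素，无需修正
            for ni in range(max(0, i - radius), min(h, i + radius + 1)):
                for nj in range(max(0, j - radius), min(w, j + radius + 1)):
                    if long_img[ni, nj] < v0:   # 找到中间图像像素
                        v2 = long_img[ni, nj]   # 中间图像像素的长曝光像素值
                        v3 = short_img[i, j]    # 过曝像素对应的短曝光像素值
                        v4 = short_img[ni, nj]  # 中间图像像素对应的短曝光像素值
                        out[i, j] = v3 * v2 / v4  # 由 V1'/V3 = V2/V4 解出 V1'
                        break
                else:
                    continue
                break
    return out
```

例如，过曝像素的短曝光值为200、邻近中间图像像素的长短曝光值分别为400与50时，修正值为200*400/50=1600，即恢复出了被削顶的实际亮度。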
在获取到亮度对齐后的第一彩色中间图像及第二彩色中间图像后，处理器20可以对亮度对齐后的第一彩色中间图像及第二彩色中间图像进行融合以得到彩色初始合并图像。具体地，处理器20首先对亮度对齐后的第一彩色中间图像进行运动检测，以识别亮度对齐后的第一彩色中间图像中是否存在运动模糊区域。若亮度对齐后的第一彩色中间图像中不存在运动模糊区域，则直接融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像。若亮度对齐后的第一彩色中间图像中存在运动模糊区域，则将第一彩色中间图像中的运动模糊区域剔除，只融合第二彩色中间图像的所有区域以及亮度对齐后的第一彩色中间图像中除运动模糊区域以外的区域以得到彩色初始合并图像。其中，彩色初始合并图像的分辨率小于像素阵列11的分辨率。具体地，在融合亮度对齐后的第一彩色中间图像及第二彩色中间图像时，若亮度对齐后的第一彩色中间图像中不存在运动模糊区域，则此时两张中间图像的融合遵循以下原则：(1)亮度对齐后的第一彩色中间图像中，过曝区域的图像像素的像素值直接替换为第二彩色中间图像中对应于该过曝区域的图像像素的像素值；(2)亮度对齐后的第一彩色中间图像中，欠曝区域的图像像素的像素值为：长曝光像素值除以长短像素值比例；(3)亮度对齐后的第一彩色中间图像中，未欠曝也未过曝区域的图像像素的像素值为：长曝光像素值除以长短像素值比例。若亮度对齐后的第一彩色中间图像中存在运动模糊区域，则此时两张中间图像的融合除了遵循上述三个原则外，还需要遵循第(4)个原则：亮度对齐后的第一彩色中间图像中，运动模糊区域的图像像素的像素值直接替换为第二彩色中间图像中对应于该运动模糊区域的图像像素的像素值。需要说明的是，对于欠曝区域以及未欠曝也未过曝区域而言，这些区域内的图像像素的像素值为长曝光像素值除以长短像素值比例，即VL/(VL/VS)=VS'，其中，VL表示长曝光像素值，VS表示短曝光像素值，VS'表示计算出来的欠曝区域以及未欠曝也未过曝区域中图像像素的像素值。VS'的信噪比会大于VS的信噪比。
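上述融合原则(1)至(4)可以用如下Python草图示意。该草图为假设性实现：长短像素值比例取区域平均值并视为常量，过曝掩膜与运动模糊掩膜分别由阈值判断和运动检测得到，此处直接作为输入。

```python
import numpy as np

def fuse_color(long_aligned, short_img, over_mask, motion_mask=None):
    """融合亮度对齐后的第一彩色中间图像与第二彩色中间图像得到彩色初始合并图像。
    over_mask：过曝区域掩膜；motion_mask：运动模糊区域掩膜（不存在运动模糊时为 None）。"""
    vl = long_aligned.astype(float)
    vs = short_img.astype(float)
    ratio = float(np.mean(vl / vs))  # 长短像素值比例（区域平均值，视为常量）
    out = vl / ratio                 # 原则(2)(3)：VS' = VL / (VL/VS)，信噪比优于直接取 VS
    out[over_mask] = vs[over_mask]   # 原则(1)：过曝区域直接取短曝光像素值
    if motion_mask is not None:
        out[motion_mask] = vs[motion_mask]  # 原则(4)：运动模糊区域直接取短曝光像素值
    return out
```

当长短比例为常量时，VS'在数值上与VS同量级，但由长曝光值缩放而来，噪声被同比例压缩，因此信噪比更高。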
在获取到彩色初始合并图像及第一全色中间图像之后,处理器20根据第一全色中间图像对彩色初始合并图像进行插值以得到分辨率与像素阵列的分辨率相等的彩色中间合并图像。具体地,请结合图1及图15,处理器20首先对第一全色中间图像划分出多个纹理区域,每个纹理区域包括多个图像像素(图15的例子中,每个纹理区域包括3*3个图像像素,在其他例子中,每个纹理区域中的图像像素的个数也可以为其他数量,在此不作限制)。随后,处理器20计算出每一个纹理区域的目标纹理方向,其中,目标纹理方向可能是水平方向、垂直方向、对角方向、反对角方向或平面方向中的任一种。具体地,对于每个纹理区域,处理器20首先计算出水平方向的特征值、垂直方向的特征值、对角方向的特征值、及反对角方向的特征值,再根据多个特征值来确定出纹理区域的目标纹理方向。假设纹理区域中的3*3个图像像素分别为P00、P01、P02、P10、P11、P12、P20、P21、P22,那么:(1)对于水平方向的特征值Diff_H,处理器20计算P00与P01的差值的绝对值、P01与P02的差值的绝对值、P10与P11的差值的绝对值、P11与P12的差值的绝对值、P20与P21的差值的绝对值、P21与P22的差值的绝对值,并计算这六个绝对值的均值,该均值即为水平方向的特征值Diff_H。(2)对于垂直方向的特征值Diff_V,处理器20计算P00与P10的差值的绝对值、P10与P20的差值的绝对值、P01与P11的差值的绝对值、P11与P21的差值的绝对值、P02与P12的差值的绝对值、P12与P22的差值的绝对值,并计算这六个绝对值的均值,该均值即为垂直方向的特征值Diff_V。(3)对于对角方向的特征值Diff_D,处理器20计算P00与P11的差值的绝对值、P01与P12的差值的绝对值、P10与P21的差值的绝对值、P11与P22的差值的绝对值,并计算这四个绝对值的均值,该均值即为对角方向的特征值Diff_D。(4)对于反对角方向的特征值Diff_AD,处理器20计算P01与P10的差值的绝对值、P02与P11的差值的绝对值、P11与P20的差值的绝对值、P12与P21的差值的绝对值,并计算这四个绝对值的均值,该均值即为反对角方向的特征值Diff_AD。在获得四个纹理方向的特征值后,处理器20可以根据四个特征值来确定出该纹理区域的目标纹理方向。示例地,处理器20从四个特征值中选取最大的特征值:(1)假设最大的特征值为Diff_H,预定阈值为Diff_PV,若Diff_H-Diff_V≥Diff_PV、Diff_H-Diff_D≥Diff_PV、且Diff_H-Diff_AD≥Diff_PV,则处理器20确定目标纹理方向为垂直方向;(2)假设最大的特征值为Diff_V,预定阈值为Diff_PV,若Diff_V-Diff_H≥Diff_PV、Diff_V-Diff_D≥Diff_PV、且Diff_V-Diff_AD≥Diff_PV,则处理器20确定目标纹理方向为水平方向;(3)假设最大的特征值为Diff_D,预定阈值为Diff_PV,若Diff_D-Diff_H≥Diff_PV、Diff_D-Diff_V≥Diff_PV、且Diff_D-Diff_AD≥Diff_PV,则处理器20确定目标纹理方向为反对角方向;(4)假设最大的特征值为Diff_AD,预定阈值为Diff_PV,若Diff_AD-Diff_H≥Diff_PV、Diff_AD-Diff_V≥Diff_PV、且Diff_AD-Diff_D≥Diff_PV,则处理器20确定目标纹理方向为对角方向。无论最大的特征值是哪一个待选纹理方向的特征值,只要该最大的特征值与其余的所有特征值的差值中,存在一个差值小于预定阈值,处理器20就确定目标纹理方向为平面方向。纹理区域的目标纹理方向为平面方向表示纹理区域对应的拍摄场景可能为纯色场景。
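上述四个特征值的计算与目标纹理方向的判定可以用如下Python草图示意。该草图为假设性实现：纹理区域固定为3*3个图像像素，方向以字符串表示；注意最大特征值对应的是像素值变化最剧烈的待选方向，目标纹理方向取其垂直方向。

```python
import numpy as np

def texture_direction(block, diff_pv):
    """计算一个 3*3 纹理区域的目标纹理方向。先按正文方法计算水平、垂直、对角、
    反对角四个待选方向的特征值，再取最大特征值：若它与其余每个特征值的差值
    均不小于预定阈值 diff_pv，则目标纹理方向为该待选方向的垂直方向；
    否则目标纹理方向为平面方向（对应拍摄场景可能为纯色场景）。"""
    p = np.asarray(block, dtype=float)
    diff_h = np.mean([abs(p[r, c] - p[r, c + 1]) for r in range(3) for c in range(2)])
    diff_v = np.mean([abs(p[r, c] - p[r + 1, c]) for r in range(2) for c in range(3)])
    diff_d = np.mean([abs(p[0, 0] - p[1, 1]), abs(p[0, 1] - p[1, 2]),
                      abs(p[1, 0] - p[2, 1]), abs(p[1, 1] - p[2, 2])])
    diff_ad = np.mean([abs(p[0, 1] - p[1, 0]), abs(p[0, 2] - p[1, 1]),
                       abs(p[1, 1] - p[2, 0]), abs(p[1, 2] - p[2, 1])])
    feats = {'H': diff_h, 'V': diff_v, 'D': diff_d, 'AD': diff_ad}
    # 最大特征值对应的待选方向 -> 目标纹理方向（取其垂直方向）
    target = {'H': 'vertical', 'V': 'horizontal', 'D': 'anti-diagonal', 'AD': 'diagonal'}
    best = max(feats, key=feats.get)
    if all(feats[best] - v >= diff_pv for k, v in feats.items() if k != best):
        return target[best]
    return 'plane'
```

得到每个纹理区域的目标纹理方向后，彩色初始合并图像中与该纹理区域相对应的区域即按该方向进行插值。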
在确定出每个纹理区域的目标纹理方向后,处理器20即可借助每个纹理区域的目标纹理方向来确定出彩色初始合并图像中与对应的纹理区域相对应的区域的图像像素的插值方向,并基于确定出来的插值方向对彩色初始合并图像进行插值以得到分辨率与像素阵列11的分辨率相等的彩色中间合并图像。具体地,如果彩色初始合并图像中的某一区域对应在第一全色中间图像中的纹理区域的目标纹理方向为水平方向,则该区域的图像像素的插值方向为水平方向。如果彩色初始合并图像中的某一区域对应在第一全色中间图像中的纹理区域的目标纹理方向为垂直方向,则该区域的图像像素的插值方向为垂直方向。如果彩色初始合并图像中的某一区域对应在第一全色中间图像中的纹理区域的目标纹理方向为对角方向,则该区域的图像像素的插值方向为对角方向。如果彩色初始合并图像中的某一区域对应在第一全色中间图像中的纹理区域的目标纹理方向为反对角方向,则该区域的图像像素的插值方向为反对角方向。如果彩色初始合并图像中的某一区域对应在第一全色中间图像中的纹理区域的目标纹理方向为平面方向,则该区域的图像像素的插值方向为平面方向。如此,借助目标纹理方向来确定图像像素的插值方向,可以使得插值的结果更为准确,最终插值得到的图像的色彩还原效果更好,插值后得到的彩色中间合并图像的纹理与实际拍摄场景的纹理之间的一致性更高。
需要说明的是，在利用第一全色中间图像对彩色初始合并图像进行插值时，也可以不对第一全色中间图像进行纹理区域的划分。此时，整张彩色初始合并图像即视为一个纹理区域。与划分区域的方式相比，不划分区域的方式可以减小处理器20所需处理的数据量，有利于提升图像的处理速度，且能够节约成像装置100的功耗。而划分区域的方式虽然增大了处理器20所需处理的数据量，但是采用此种方式计算得到的彩色中间合并图像的色彩还原度更准确。在实际使用过程中，处理器20可以根据应用场景的不同来自适应地选择是否划分区域。例如，在成像装置100的电量较低时，可以采用不划分区域的方式来实现彩色初始合并图像的插值；在成像装置100的电量较高时，可以采用划分区域的方式来实现彩色初始合并图像的插值。在拍摄静态图像时，可以采用划分区域的方式来实现彩色初始合并图像的插值，在拍摄动态图像（如录像、视频等）时，可以采用不划分区域的方式来实现彩色初始合并图像的插值。
在获得彩色中间合并图像及第一全色中间图像之后，处理器20即可融合彩色中间合并图像和第一全色中间图像以得到目标图像。该目标图像具有较高的动态范围及较高的分辨率，图像的质量较好。
如图1至图3、及图16所示，在另一个例子中，同一所述子单元中的部分全色感光像素W以第四曝光时间曝光，其余全色感光像素W以第三曝光时间曝光。其中，第四曝光时间小于或等于第一曝光时间，且大于第三曝光时间。具体地，对于每个子单元中的多个（图16所示为4个）感光像素110，一个单颜色感光像素以第一曝光时间（例如图16所示的长曝光时间L）曝光，一个单颜色感光像素以第二曝光时间（例如图16所示的短曝光时间S）曝光，一个全色感光像素W以第三曝光时间（例如图16所示的短曝光时间S）曝光，一个全色感光像素W以第四曝光时间（例如图16所示的长曝光时间L）曝光。
需要说明的是,在某些实施例中,像素阵列11的曝光过程可以是:(1)以第一曝光时间曝光的感光像素110、以第二曝光时间曝光的感光像素110、以第三曝光时间曝光的感光像素110及以第四曝光时间曝光的感光像素110依次序曝光(其中四者的曝光顺序不作限制),且四者的曝光进行时间均不重叠;(2)以第一曝光时间曝光的感光像素110、以第二曝光时间曝光的感光像素110、以第三曝光时间曝光的感光像素110及以第四曝光时间曝光的感光像素110依次序曝光(其中四者的曝光顺序不作限制),且四者的曝光进行时间存在部分重叠;(3)所有以较短的曝光时间曝光的感光像素110的曝光进行时间均位于以最长的曝光时间曝光的感光像素110的曝光进行时间内,例如,以第二曝光时间曝光的所有单颜色感光像素的曝光进行时间均位于以第一曝光时间曝光的所有单颜色感光像素的曝光进行时间内,以第三曝光时间曝光的所有全色感光像素W的曝光进行时间均位于以第一曝光时间曝光的所有单颜色感光像素的曝光进行时间内,以第四曝光时间曝光的所有全色感光像素W的曝光进行时间均位于以第一曝光时间曝光的所有单颜色感光像素的曝光进行时间内。在本申请的具体实施例中,成像装置100采用第(3)种曝光方式,使用该种曝光方式可以缩短像素阵列11所需要的整体曝光时间,有利于提升图像的帧率。
像素阵列11曝光结束后，图像传感器10可以输出四张原始图像，分别为：(1)第一彩色原始图像，由以长曝光时间L曝光的多个单颜色感光像素生成的第一彩色信息组成；(2)第二彩色原始图像，由以短曝光时间S曝光的多个单颜色感光像素生成的第二彩色信息组成；(3)第一全色原始图像，由以短曝光时间S曝光的多个全色感光像素W生成的第一全色信息组成；(4)第二全色原始图像，由以长曝光时间L曝光的多个全色感光像素W生成的第二全色信息组成。
图像传感器10获得第一彩色原始图像、第二彩色原始图像、第一全色原始图像、及第二全色原始图像之后,会将这四张原始图像传输给处理器20,以由处理器20对这四张原始图像执行后续处理。处理器20对四张原始图像的后续处理主要包括:(1)对第一彩色原始图像执行插值处理以得到分辨率小于像素阵列11的分辨率的第一彩色中间图像,对第二彩色原始图像执行插值处理以得到分辨率小于像素阵列11的分辨率的第二彩色中间图像;(2)对第一全色原始图像执行插值处理以得到分辨率与像素阵列11的分辨率相等的第一全色中间图像,对第二全色原始图像执行插值处理以得到分辨率与像素阵列11的分辨率相等的第二全色中间图像;(3)对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像;(4)对第一全色中间图像及第二全色中间图像执行亮度对齐处理以得到亮度对齐后的第二全色中间图像;(5)融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像;(6)融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像;(7)根据全色合并图像对彩色初始合并图像进行插值以得到分辨率与像素阵列的分辨率相等的彩色中间合并图像;(8)融合彩色中间合并图像和全色合并图像以得到目标图像。
处理器20处理四张原始图像的过程及处理器20处理三张原始图像的过程大致相同。不同之处主要包括:
(a)处理器20在处理四张原始图像的过程中,还需要对第二全色原始图像执行插值处理以得到第二全色中间图像。第二全色原始图像的插值与第一全色原始图像的插值相同,均为对不具有像素值的图像像素进行像素值的计算,使得插值后的第二全色中间图像中的每一个图像像素均具有W颜色通道的像素值。
(b)处理器20还需要对第一全色中间图像与第二全色中间图像执行亮度对齐处理以得到亮度对齐后的第二全色中间图像,其具体包括:识别第二全色中间图像中像素值大于第二预设阈值的过曝图像像素;对于每一个过曝图像像素,以该过曝图像像素为中心扩展预定区域,在预定区域内寻找像素值小于第二预设阈值的中间图像像素;利用中间图像像素及第一全色中间图像对过曝图像像素的像素值进行修正;利用过曝图像像素的修正后的像素值更新第二全色中间图像以得到亮度对齐后的第二全色中间图像。第一全色中间图像与第二全色中间图像的亮度对齐过程与第一彩色中间图像与第二彩色中间图像的亮度对齐过程类似,在此不再展开说明。
(c)处理器20需要融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像,其具体包括:对亮度对齐后的第二全色中间图像进行运动检测;在亮度对齐后的第二全色中间图像中不存在运动模糊区域时,融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像;在亮度对齐后的第二全色中间图像中存在运动模糊区域时,融合第一全色中间图像及亮度对齐后的第二全色中间图像中除运动模糊区域以外的区域以得到全色合并图像。第一全色中间图像与亮度对齐后的第二全色中间图像的融合方式与第二彩色中间图像与亮度对齐后的第一彩色中间图像的融合方式类似,在此不再展开说明。
(d)处理器20是根据全色合并图像对彩色初始合并图像进行插值以得到彩色中间合并图像。处理器20同样需要计算出至少一个纹理区域的目标纹理方向,再基于目标纹理方向确定出彩色初始合并图像的插值方向,使得彩色初始合并图像基于确定出来的插值方向进行插值,以此得到色彩还原度较为准确的彩色中间合并图像。
综上，本申请实施方式的成像装置100通过控制像素阵列11中的每个子单元内的多个感光像素110以不同的曝光时间来曝光，以获取具有高动态范围的目标图像。并且，在获取具有高动态范围的目标图像的过程中，利用由全色感光像素W生成的全色信息组成的全色图像来指示彩色图像的插值，不仅可以提升最终获得的目标图像的分辨率，还能够提升该目标图像的色彩还原度，极大地改善了成像装置100的成像质量。
图11及图16所示实施方式中第三曝光时间等于第二曝光时间等于短曝光时间。在其他实施方式中,第三曝光时间也可以与第二曝光时间不等,例如第三曝光时间大于第二曝光时间且小于第一曝光时间,或者第三曝光时间小于第二曝光时间等。图16所示实施方式中第四曝光时间等于第一曝光时间等于长曝光时间。在其他实施方式中,第四曝光时间也可以与第一曝光时间不等。
在某些实施方式中，处理器20也可以先利用第一全色中间图像（或者全色合并图像）对第一彩色中间图像及第二彩色中间图像进行插值以分别得到分辨率与像素阵列11的分辨率相等的第一彩色高分辨率图像及第二彩色高分辨率图像。随后，处理器20再对第一彩色高分辨率图像及第二彩色高分辨率图像执行亮度对齐及融合处理，并将处理后的图像与第一全色中间图像（或者全色合并图像）融合以得到分辨率与像素阵列11的分辨率相等的目标图像。
在某些实施方式中,为实现同一子单元内的不同感光像素110的不同曝光时间的控制,感光像素110的电路连接方式可以是:对于任意两行相邻的感光像素110,至少存在一行感光像素110满足位于同一行中的多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接,且多个单颜色感光像素及多个全色感光像素W的复位电路的控制端RG与一条复位线RX连接。此时,图像传感器10中的控制单元13可以通过控制RX、TX1、TX2的脉冲时序来实现同一子单元内的不同感光像素110的不同曝光时间的控制。示例地,如图17所示,对于第N行及第N+1行的感光像素110,同一行的多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,同一行的多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接,同一行的多个感光像素110的复位电路的控制端RG与一条复位线RX连接。图17所示的连接方式既可以适用于图11所示的像素阵列11的曝光方式,也可以适用于图16所示像素阵列11的曝光方式。当然,在其他实施例中,还可以是其中一行的多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接,且多个感光像素110的复位电路的控制端RG与一条复位线RX连接,另一行的多个感光像素110的曝光控制电路的控制端TG与一条曝光控制线TX连接,复位电路的控制端RG与一条复位线RX连接。此种连接方式仅适用于图11所示的像素阵列11的曝光方式。
在某些实施方式中,为实现同一子单元内的不同感光像素110的不同曝光时间的控制,感光像素110的电路连接方式还可以是:对于任意两行相邻的感光像素110,至少存在一行感光像素110满足位于同一行中的多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,多个全色感光像素的复位电路的控制端RG与一条第二复位线RX2连接,且多个单颜色感光像素及多个全色感光像素W的曝光控制电路的控制端TG与一条曝光控制线TX连接。此时,图像传感器10中的控制单元13可以通过控制TX、RX1、RX2的脉冲时序来实现同一子单元内的不同感光像素110的不同曝光时间的控制。示例地,如图18所示,对于第N行及第N+1行的感光像素110,同一行的多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,同一行的多个全色感光像素W的复位电路的控制端RG与一条第二复位线RX2连接,同一行的多个感光像素110的曝光控制电路的控制端TG与一条曝光控制线TX连接。图18所示的连接方式既可以适用于图11所示的像素阵列11的曝光方式,也可以适用于图16所示像素阵列11的曝光方式。当然,在其他实施例中,还可以是其中一行的多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,多个全色感光像素W的复位电路的控制端RG与一条第二复位线RX2连接,且多个感光像素110的曝光控制电路的控制端TG与一条曝光控制线TX连接,另一行的多个感光像素110的曝光控制电路的控制端TG与一条曝光控制线TX连接,复位电路的控制端RG与一条复位线RX连接。此种连接方式仅适用于图11所示的像素阵列11的曝光方式。
在某些实施方式中,为实现同一子单元内的不同感光像素110的不同曝光时间的控制,感光像素110的电路连接方式还可以是:对于任意两行相邻的感光像素110,至少存在一行感光像素110满足位于同一行的多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接,且多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,多个全色感光像素W的复位电路的控制端RG与一条第二复位线RX2连接。此时,图像传感器10中的控制单元13可以通过控制TX1、TX2、RX1、RX2的脉冲时序来实现同一子单元内的不同感光像素110的不同曝光时间的控制。示例地,如图19所示,对于第N行及第N+1行的感光像素110,同一行的多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,同一行的多个全色感光像素W的复位电路的控制端RG与一条第二复位线RX2连接,同一行的多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,同一行的多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接。图19所示的连接方式既可以适用于图11所示的像素阵列11的曝光方式,也可以适用于图16所示像素阵列11的曝光方式。当然,在其他实施方式中,还可以是,其中一行的多个单颜色感光像素的复位电路的控制端RG与一条第一复位线RX1连接,多个全色感光像素W的复位电路的控制端RG与一条第二复位线RX2连接,且多个单颜色感光像素的曝光控制电路的控制端TG与一条第一曝光控制线TX1连接,多个全色感光像素W的曝光控制电路的控制端TG与一条第二曝光控制线TX2连接,另一行的多个感光像素110的曝光控制电路的控制端与一条曝光控制线TX连接,复位电路的控制端与一条复位线RX连接。此种连接方式仅适用于图11所示的像素阵列11的曝光方式。
请参阅图20,本申请还提供一种电子设备300。电子设备300包括上述任意一项实施方式所述的成像装置100及壳体200。成像装置100与壳体200结合。
电子设备300可以是手机、平板电脑、笔记本电脑、智能穿戴设备(例如智能手表、智能手环、智能眼镜、智能头盔)、无人机、头显设备等,在此不作限制。
本申请实施方式的电子设备300通过控制像素阵列11中的每个子单元内的多个感光像素110以不同的曝光时间来曝光，以获取具有高动态范围的目标图像。并且，在获取具有高动态范围的目标图像的过程中，利用由全色感光像素W生成的全色信息组成的全色图像来指示彩色图像的插值，不仅可以提升最终获得的目标图像的分辨率，还能够提升该目标图像的色彩还原度，极大地改善了电子设备300的成像质量。
请参阅图2、图3、图11及图21,本申请还提供一种可以用于上述任意一项实施方式所述的图像传感器10的图像获取方法。图像获取方法包括:
01:控制像素阵列11曝光,其中,对于同一子单元中的多个感光像素110,至少一个单颜色感光像素以第一曝光时间曝光,至少一个单颜色感光像素以小于第一曝光时间的第二曝光时间曝光,至少一个全色感光像素W以小于第一曝光时间的第三曝光时间曝光;及
02:根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值,并将插值处理后的图像与第一全色原始图像融合以得到具有与像素阵列的分辨率相同的分辨率的目标图像,其中,第一彩色原始图像由以第一曝光时间曝光的单颜色感光像素生成的第一彩色信息得到,第二彩色原始图像由以第二曝光时间曝光的单颜色感光像素生成的第二彩色信息得到,第一全色原始图像由以第三曝光时间曝光的全色感光像素W生成的第一全色信息得到。
在某些实施方式中,所有全色感光像素W以第三曝光时间曝光。步骤02包括:对第一彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第一彩色中间图像,对第二彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第二彩色中间图像;对第一全色原始图像执行插值处理以得到分辨率与像素阵列的分辨率相等的第一全色中间图像;对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像;融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像;根据第一全色中间图像对彩色初始合并图像进行插值以得到分辨率与像素阵列的分辨率相等的彩色中间合并图像;融合彩色中间合并图像和第一全色中间图像以得到目标图像。
在某些实施方式中,请参阅图16,同一子单元中的部分全色感光像素W以第四曝光时间曝光,其余全色感光像素W以第三曝光时间曝光,第四曝光时间小于或等于第一曝光时间,且大于第三曝光时间。步骤02包括:对第一彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第一彩色中间图像,对第二彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第二彩色中间图像;对第一全色原始图像执行插值处理以得到分辨率与像素阵列的分辨率相等的第一全色中间图像,对第二全色原始图像执行插值处理以得到分辨率与像素阵列的分辨率相等的第二全色中间图像,其中,第二全色原始图像由以第四曝光时间曝光的全色感光像素生成的第二全色信息得到;对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像;对第一全色中间图像及第二全色中间图像执行亮度对齐处理以得到亮度对齐后的第二全色中间图像;融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像;融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像;根据全色合并图像对彩色初始合并图像进行插值以得到分辨率与像素阵列的分辨率相等的彩色中间合并图像;融合彩色中间合并图像和全色合并图像以得到目标图像。
在某些实施方式中,对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像的步骤,包括:识别第一彩色中间图像中像素值大于第一预设阈值的过曝图像像素;对于每一个过曝图像像素,以该过曝图像像素为中心扩展预定区域;
在预定区域内寻找像素值小于第一预设阈值的中间图像像素;利用中间图像像素及第二彩色中间图像对过曝图像像素的像素值进行修正;利用过曝图像像素的修正后的像素值更新所述第一彩色中间图像以得到亮度对齐后的所述第一彩色中间图像。
在某些实施方式中,对第一全色中间图像及第二全色中间图像执行亮度对齐处理以得到亮度对齐后的第二全色中间图像,包括:识别第二全色中间图像中像素值大于第二预设阈值的过曝图像像素;对于每一个过曝图像像素,以该过曝图像像素为中心扩展预定区域;在预定区域内寻找像素值小于第二预设阈值的中间图像像素;利用中间图像像素及第一全色中间图像对过曝图像像素的像素值进行修正;利用过曝图像像素的修正后的像素值更新第二全色中间图像以得到亮度对齐后的第二全色中间图像。
在某些实施方式中,融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像,包括:对亮度对齐后的第一彩色中间图像进行运动检测;在亮度对齐后的第一彩色中间图像中不存在运动模糊区域时,融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像;在亮度对齐后的第一彩色中间图像中存在运动模糊区域时,融合亮度对齐后的第一彩色中间图像中除运动模糊区域以外的区域及第二彩色中间图像以得到彩色初始合并图像。
在某些实施方式中,融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像,包括:对亮度对齐后的第二全色中间图像进行运动检测;在亮度对齐后的第二全色中间图像中不存在运动模糊区域时,融合第一全色中间图像及亮度对齐后的第二全色中间图像以得到全色合并图像;在亮度对齐后的第二全色中间图像中存在运动模糊区域时,融合第一全色中间图像及亮度对齐后的第二全色中间图像中除运动模糊区域以外的区域以得到全色合并图像。
上述任意一项实施方式所述的图像获取方法的具体实施过程与前述描写成像装置100(图1所示)获得高分辨率及高动态范围的目标图像的具体实施过程相同,在此不再展开说明。
请参阅图22,本申请还提供一种包含计算机程序的非易失性计算机可读存储介质400。计算机程序被处理器20执行时,使得处理器20执行上述任意一项实施方式所述的图像获取方法。
例如，请参阅图1、图3、图11及图22，计算机程序被处理器20执行时，使得处理器20执行以下步骤：
控制像素阵列11曝光,其中,对于同一子单元中的多个感光像素110,至少一个单颜色感光像素以第一曝光时间曝光,至少一个单颜色感光像素以小于第一曝光时间的第二曝光时间曝光,至少一个全色感光像素W以小于第一曝光时间的第三曝光时间曝光;及
根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值，并将插值处理后的图像与第一全色原始图像融合以得到具有与像素阵列的分辨率相同的分辨率的目标图像，其中，第一彩色原始图像由以第一曝光时间曝光的单颜色感光像素生成的第一彩色信息得到，第二彩色原始图像由以第二曝光时间曝光的单颜色感光像素生成的第二彩色信息得到，第一全色原始图像由以第三曝光时间曝光的全色感光像素W生成的第一全色信息得到。
再例如,请参阅图22,计算机程序被处理器20执行时,使得处理器20执行以下步骤:
对第一彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第一彩色中间图像,对第二彩色原始图像执行插值处理以得到分辨率小于像素阵列的分辨率的第二彩色中间图像;
对第一全色原始图像执行插值处理以得到分辨率与像素阵列的分辨率相等的第一全色中间图像;
对第一彩色中间图像及第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的第一彩色中间图像;
融合亮度对齐后的第一彩色中间图像及第二彩色中间图像以得到彩色初始合并图像;
根据第一全色中间图像对彩色初始合并图像进行插值以得到分辨率与像素阵列的分辨率相等的彩色中间合并图像;及
融合彩色中间合并图像和第一全色中间图像以得到目标图像。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (20)

  1. 一种图像获取方法,用于图像传感器,其特征在于,所述图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素和多个彩色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,所述像素阵列包括最小重复单元,每个所述最小重复单元包含多个子单元,每个所述子单元包括多个单颜色感光像素及多个全色感光像素;所述图像获取方法包括:
    控制所述像素阵列曝光,其中,对于同一所述子单元中的多个感光像素,至少一个所述单颜色感光像素以第一曝光时间曝光,至少一个所述单颜色感光像素以小于所述第一曝光时间的第二曝光时间曝光,至少一个所述全色感光像素以小于所述第一曝光时间的第三曝光时间曝光;及
    根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值,并将插值处理后的图像与所述第一全色原始图像融合以得到具有与所述像素阵列的分辨率相同的分辨率的目标图像,其中,所述第一彩色原始图像由以所述第一曝光时间曝光的所述单颜色感光像素生成的第一彩色信息得到,所述第二彩色原始图像由以所述第二曝光时间曝光的所述单颜色感光像素生成的第二彩色信息得到,所述第一全色原始图像由以所述第三曝光时间曝光的所述全色感光像素生成的第一全色信息得到。
  2. 根据权利要求1所述的图像获取方法,其特征在于,所有所述全色感光像素以所述第三曝光时间曝光;所述根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值,并将插值处理后的图像与所述第一全色原始图像融合以得到具有与所述像素阵列的分辨率相同的分辨率的目标图像,包括:
    对所述第一彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第一彩色中间图像,对所述第二彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第二彩色中间图像;
    对所述第一全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第一全色中间图像;
    对所述第一彩色中间图像及所述第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的所述第一彩色中间图像;
    融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到彩色初始合并图像;
    根据所述第一全色中间图像对所述彩色初始合并图像进行插值以得到分辨率与所述像素阵列的分辨率相等的彩色中间合并图像;及
    融合所述彩色中间合并图像和所述第一全色中间图像以得到所述目标图像。
  3. 根据权利要求1所述的图像获取方法,其特征在于,同一所述子单元中的部分所述全色感光像素以第四曝光时间曝光,其余所述全色感光像素以所述第三曝光时间曝光,所述第四曝光时间小于或等于所述第一曝光时间,且大于所述第三曝光时间;所述根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值,并将插值处理后的图像与所述第一全色原始图像融合以得到具有与所述像素阵列的分辨率相同的分辨率的目标图像,包括:
    对所述第一彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第一彩色中间图像,对所述第二彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第二彩色中间图像;
    对所述第一全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第一全色中间图像,对第二全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第二全色中间图像,其中,所述第二全色原始图像由以所述第四曝光时间曝光的所述全色感光像素生成的第二全色信息得到;
    对所述第一彩色中间图像及所述第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的所述第一彩色中间图像;
    对所述第一全色中间图像及所述第二全色中间图像执行亮度对齐处理以得到亮度对齐后的所述第二全色中间图像;
    融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到彩色初始合并图像;
    融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像以得到全色合并图像;
    根据所述全色合并图像对所述彩色初始合并图像进行插值以得到分辨率与所述像素阵列的分辨率相等的彩色中间合并图像;及
    融合所述彩色中间合并图像和所述全色合并图像以得到所述目标图像。
  4. 根据权利要求2或3所述的图像获取方法,其特征在于,当所有所述全色感光像素以所述第三曝光时间曝光时,以所述第二曝光时间曝光的所有所述单颜色感光像素的曝光进行时间均位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内,以所述第三曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内;
    当同一所述子单元中的部分所述全色感光像素以第四曝光时间曝光，其余所述全色感光像素以所述第三曝光时间曝光时，以所述第二曝光时间曝光的所有所述单颜色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内，以所述第三曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内，以所述第四曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内。
  5. 根据权利要求2或3所述的图像获取方法,其特征在于,所述对所述第一彩色中间图像及所述第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的所述第一彩色中间图像,包括:
    识别所述第一彩色中间图像中像素值大于第一预设阈值的过曝图像像素;
    对于每一个所述过曝图像像素,以该所述过曝图像像素为中心扩展预定区域;
    在所述预定区域内寻找像素值小于所述第一预设阈值的中间图像像素;
    利用所述中间图像像素及所述第二彩色中间图像对所述过曝图像像素的像素值进行修正;及
    利用所述过曝图像像素的修正后的像素值更新所述第一彩色中间图像以得到亮度对齐后的所述第一彩色中间图像。
  6. 根据权利要求3所述的图像获取方法,其特征在于,所述对所述第一全色中间图像及所述第二全色中间图像执行亮度对齐处理以得到亮度对齐后的所述第二全色中间图像,包括:
    识别所述第二全色中间图像中像素值大于第二预设阈值的过曝图像像素;
    对于每一个所述过曝图像像素,以该所述过曝图像像素为中心扩展预定区域;
    在所述预定区域内寻找像素值小于所述第二预设阈值的中间图像像素;
    利用所述中间图像像素及所述第一全色中间图像对所述过曝图像像素的像素值进行修正;及
    利用所述过曝图像像素的修正后的像素值更新所述第二全色中间图像以得到亮度对齐后的所述第二全色中间图像。
  7. 根据权利要求2或3所述的图像获取方法,其特征在于,所述融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到彩色初始合并图像,包括:
    对亮度对齐后的所述第一彩色中间图像进行运动检测;
    在亮度对齐后的所述第一彩色中间图像中不存在运动模糊区域时,融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到所述彩色初始合并图像;
    在亮度对齐后的所述第一彩色中间图像中存在所述运动模糊区域时,融合亮度对齐后的所述第一彩色中间图像中除所述运动模糊区域以外的区域及所述第二彩色中间图像以得到所述彩色初始合并图像。
  8. 根据权利要求3所述的图像获取方法,其特征在于,所述融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像以得到全色合并图像,包括:
    对亮度对齐后的所述第二全色中间图像进行运动检测;
    在亮度对齐后的所述第二全色中间图像中不存在运动模糊区域时,融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像以得到所述全色合并图像;
    在亮度对齐后的所述第二全色中间图像中存在所述运动模糊区域时,融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像中除所述运动模糊区域以外的区域以得到所述全色合并图像。
  9. 一种成像装置,其特征在于,包括:
    图像传感器,所述图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素和多个彩色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,所述像素阵列包括最小重复单元,每个所述最小重复单元包含多个子单元,每个所述子单元包括多个单颜色感光像素及多个全色感光像素,所述图像传感器中的像素阵列曝光,其中,对于同一所述子单元中的多个感光像素,至少一个所述单颜色感光像素以第一曝光时间曝光,至少一个所述单颜色感光像素以小于所述第一曝光时间的第二曝光时间曝光,至少一个所述全色感光像素以小于所述第一曝光时间的第三曝光时间曝光;及
    处理器,所述处理器用于根据第一全色原始图像对第一彩色原始图像及第二彩色原始图像进行插值,并将插值处理后的图像与所述第一全色原始图像融合以得到具有与所述像素阵列的分辨率相同的分辨率的目标图像,其中,所述第一彩色原始图像由以所述第一曝光时间曝光的所述单颜色感光像素生成的第一彩色信息得到,所述第二彩色原始图像由以所述第二曝光时间曝光的所述单颜色感光像素生成的第二彩色信息得到,所述第一全色原始图像由以所述第三曝光时间曝光的所述全色感光像素生成的第一全色信息得到。
  10. 根据权利要求9所述的成像装置,其特征在于,所有所述全色感光像素以所述第三曝光时间曝光;所述处理器还用于:
    对所述第一彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第一彩色中间图像,对所述第二彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第二彩色中间图像;
    对所述第一全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第一全色中间图像;
    对所述第一彩色中间图像及所述第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的所述第一彩色中间图像;
    融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到彩色初始合并图像;
    根据所述第一全色中间图像对所述彩色初始合并图像进行插值以得到分辨率与所述像素阵列的分辨率相等的彩色中间合并图像;及
    融合所述彩色中间合并图像和所述第一全色中间图像以得到所述目标图像。
  11. 根据权利要求9所述的成像装置,其特征在于,同一所述子单元中的部分所述全色感光像素以第四曝光时间曝光,其余所述全色感光像素以所述第三曝光时间曝光,所述第四曝光时间小于或等于所述第一曝光时间,且大于所述第三曝光时间;所述处理器还用于:
    对所述第一彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第一彩色中间图像,对所述第二彩色原始图像执行插值处理以得到分辨率小于所述像素阵列的分辨率的第二彩色中间图像;
    对所述第一全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第一全色中间图像,对第二全色原始图像执行插值处理以得到分辨率与所述像素阵列的分辨率相等的第二全色中间图像,其中,所述第二全色原始图像由以所述第四曝光时间曝光的所述全色感光像素生成的第二全色信息得到;
    对所述第一彩色中间图像及所述第二彩色中间图像执行亮度对齐处理以得到亮度对齐后的所述第一彩色中间图像;
    对所述第一全色中间图像及所述第二全色中间图像执行亮度对齐处理以得到亮度对齐后的所述第二全色中间图像;
    融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到彩色初始合并图像;
    融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像以得到全色合并图像;
    根据所述全色合并图像对所述彩色初始合并图像进行插值以得到分辨率与所述像素阵列的分辨率相等的彩色中间合并图像;及
    融合所述彩色中间合并图像和所述全色合并图像以得到所述目标图像。
  12. 根据权利要求10或11所述的成像装置,其特征在于,当所有所述全色感光像素以所述第三曝光时间曝光时,以所述第二曝光时间曝光的所有所述单颜色感光像素的曝光进行时间均位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内,以所述第三曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内;
    当同一所述子单元中的部分所述全色感光像素以第四曝光时间曝光，其余所述全色感光像素以所述第三曝光时间曝光时，以所述第二曝光时间曝光的所有所述单颜色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内，以所述第三曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内，以所述第四曝光时间曝光的所有所述全色感光像素的曝光进行时间位于以所述第一曝光时间曝光的所有所述单颜色感光像素的曝光进行时间内。
  13. 根据权利要求10或11所述的成像装置,其特征在于,所述处理器还用于:
    识别所述第一彩色中间图像中像素值大于第一预设阈值的过曝图像像素;
    对于每一个所述过曝图像像素,以该所述过曝图像像素为中心扩展预定区域;
    在所述预定区域内寻找像素值小于所述第一预设阈值的中间图像像素；
    利用所述中间图像像素及所述第二彩色中间图像对所述过曝图像像素的像素值进行修正;及
    利用所述过曝图像像素的修正后的像素值更新所述第一彩色中间图像以得到亮度对齐后的所述第一彩色中间图像。
  14. 根据权利要求11所述的成像装置,其特征在于,所述处理器还用于:
    识别所述第二全色中间图像中像素值大于第二预设阈值的过曝图像像素;
    对于每一个所述过曝图像像素,以该所述过曝图像像素为中心扩展预定区域;
    在所述预定区域内寻找像素值小于所述第二预设阈值的中间图像像素；
    利用所述中间图像像素及所述第一全色中间图像对所述过曝图像像素的像素值进行修正;及
    利用所述过曝图像像素的修正后的像素值更新所述第二全色中间图像以得到亮度对齐后的所述第二全色中间图像。
  15. 根据权利要求10或11所述的成像装置,其特征在于,所述处理器还用于:
    对亮度对齐后的所述第一彩色中间图像进行运动检测;
    在亮度对齐后的所述第一彩色中间图像中不存在运动模糊区域时,融合亮度对齐后的所述第一彩色中间图像及所述第二彩色中间图像以得到所述彩色初始合并图像;
    在亮度对齐后的所述第一彩色中间图像中存在所述运动模糊区域时,融合亮度对齐后的所述第一彩色中间图像中除所述运动模糊区域以外的区域及所述第二彩色中间图像以得到所述彩色初始合并图像。
  16. 根据权利要求11所述的成像装置,其特征在于,所述处理器还用于:
    对亮度对齐后的所述第二全色中间图像进行运动检测;
    在亮度对齐后的所述第二全色中间图像中不存在运动模糊区域时,融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像以得到所述全色合并图像;
    在亮度对齐后的所述第二全色中间图像中存在所述运动模糊区域时,融合所述第一全色中间图像及亮度对齐后的所述第二全色中间图像中除所述运动模糊区域以外的区域以得到所述全色合并图像。
  17. 根据权利要求9所述的成像装置,其特征在于,所述像素阵列以二维矩阵形式排布;
    对于任意两行相邻的所述感光像素,至少存在一行所述感光像素满足位于同一行的多个所述单颜色感光像素的曝光控制电路的控制端与一条第一曝光控制线连接,多个所述全色感光像素的曝光控制电路的控制端与一条第二曝光控制线连接,且多个所述单颜色感光像素及多个所述全色感光像素的复位电路的控制端与一条复位线连接;或
    对于任意两行相邻的所述感光像素,至少存在一行所述感光像素满足位于同一行中的多个所述单颜色感光像素的复位电路的控制端与一条第一复位线连接,多个所述全色感光像素的复位电路的控制端与一条第二复位线连接,且多个所述单颜色感光像素及多个所述全色感光像素的曝光控制电路的控制端与一条曝光控制线连接;或
    对于任意两行相邻的所述感光像素,至少存在一行所述感光像素满足位于同一行的多个所述单颜色感光像素的曝光控制电路的控制端与一条第一曝光控制线连接,多个所述全色感光像素的曝光控制电路的控制端与一条第二曝光控制线连接,且多个所述单颜色感光像素的复位电路的控制端与一条第一复位线连接,多个所述全色感光像素的复位电路的控制端与一条第二复位线连接。
  18. 根据权利要求9所述的成像装置,其特征在于,所述最小重复单元的排布方式为:
    Figure PCTCN2021073292-appb-100001
    W表示所述全色感光像素;A表示多个所述彩色感光像素中的第一颜色感光像素;B表示多个所述彩色感光像素中的第二颜色感光像素;C表示多个所述彩色感光像素中的第三颜色感光像素。
  19. 一种电子设备,其特征在于,包括:
    壳体;及
    权利要求9-18任意一项所述的成像装置,所述成像装置与所述壳体结合。
  20. 一种包含计算机程序的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时,使得处理器执行权利要求1-8任意一项所述的图像获取方法。
PCT/CN2021/073292 2020-03-11 2021-01-22 图像获取方法、成像装置、电子设备及可读存储介质 WO2021179806A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21767801.0A EP4113977A4 (en) 2020-03-11 2021-01-22 IMAGE CAPTURE METHOD, IMAGING DEVICE, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
US17/940,780 US20230017746A1 (en) 2020-03-11 2022-09-08 Image acquisition method, electronic device, and non-transitory computerreadable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010166269.2A CN111405204B (zh) 2020-03-11 2020-03-11 图像获取方法、成像装置、电子设备及可读存储介质
CN202010166269.2 2020-03-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/940,780 Continuation US20230017746A1 (en) 2020-03-11 2022-09-08 Image acquisition method, electronic device, and non-transitory computerreadable storage medium

Publications (1)

Publication Number Publication Date
WO2021179806A1 true WO2021179806A1 (zh) 2021-09-16

Family

ID=71430632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/073292 WO2021179806A1 (zh) 2020-03-11 2021-01-22 图像获取方法、成像装置、电子设备及可读存储介质

Country Status (4)

Country Link
US (1) US20230017746A1 (zh)
EP (1) EP4113977A4 (zh)
CN (1) CN111405204B (zh)
WO (1) WO2021179806A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023098230A1 (zh) * 2021-12-01 2023-06-08 Oppo广东移动通信有限公司 图像传感器、摄像模组、电子设备、图像生成方法和装置

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405204B (zh) * 2020-03-11 2022-07-26 Oppo广东移动通信有限公司 图像获取方法、成像装置、电子设备及可读存储介质
CN111432099B (zh) * 2020-03-30 2021-04-30 Oppo广东移动通信有限公司 图像传感器、处理系统及方法、电子设备和存储介质
CN111835971B (zh) * 2020-07-20 2021-09-24 Oppo广东移动通信有限公司 图像处理方法、图像处理系统、电子设备及可读存储介质
CN111885320A (zh) * 2020-08-04 2020-11-03 深圳市汇顶科技股份有限公司 图像传感器及其自动曝光方法、电子设备
CN111899178B (zh) * 2020-08-18 2021-04-16 Oppo广东移动通信有限公司 图像处理方法、图像处理系统、电子设备及可读存储介质
CN114114317B (zh) * 2020-08-28 2023-11-17 上海禾赛科技有限公司 激光雷达、数据处理方法及数据处理模块、介质
CN112235485B (zh) * 2020-10-09 2023-04-07 Oppo广东移动通信有限公司 图像传感器、图像处理方法、成像装置、终端及可读存储介质
CN112822475B (zh) * 2020-12-28 2023-03-14 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、终端及可读存储介质
CN112738493B (zh) * 2020-12-28 2023-03-14 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、电子设备及可读存储介质
CN114466170B (zh) * 2021-08-27 2023-10-31 锐芯微电子股份有限公司 图像处理方法及系统
CN113840067B (zh) * 2021-09-10 2023-08-18 Oppo广东移动通信有限公司 图像传感器、图像生成方法、装置和电子设备
CN114693580B (zh) * 2022-05-31 2022-10-18 荣耀终端有限公司 图像处理方法及其相关设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101233763A (zh) * 2005-07-28 2008-07-30 伊斯曼柯达公司 处理彩色和全色像素
US20090251575A1 (en) * 2008-04-01 2009-10-08 Fujifilm Corporation Imaging apparatus and method for driving the imaging apparatus
CN102369721A (zh) * 2009-03-10 2012-03-07 美商豪威科技股份有限公司 具有合成全色图像的彩色滤光器阵列(cfa)图像
CN105409205A (zh) * 2013-07-23 2016-03-16 索尼公司 摄像装置、摄像方法及程序
CN105578065A (zh) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 高动态范围图像的生成方法、拍照装置和终端
CN110740272A (zh) * 2019-10-31 2020-01-31 Oppo广东移动通信有限公司 图像采集方法、摄像头组件及移动终端
CN111405204A (zh) * 2020-03-11 2020-07-10 Oppo广东移动通信有限公司 图像获取方法、成像装置、电子设备及可读存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8164651B2 (en) * 2008-04-29 2012-04-24 Omnivision Technologies, Inc. Concentric exposure sequence for image sensor
EP2833635B1 (en) * 2012-03-27 2018-11-07 Sony Corporation Image processing device, image-capturing element, image processing method, and program
US20140063300A1 (en) * 2012-09-06 2014-03-06 Aptina Imaging Corporation High dynamic range imaging systems having clear filter pixel arrays

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4113977A4 *

Also Published As

Publication number Publication date
CN111405204B (zh) 2022-07-26
EP4113977A4 (en) 2023-06-07
CN111405204A (zh) 2020-07-10
US20230017746A1 (en) 2023-01-19
EP4113977A1 (en) 2023-01-04

