WO2021138869A1 - Image sensor and device comprising an image sensor - Google Patents

Image sensor and device comprising an image sensor

Info

Publication number
WO2021138869A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
channel
image sensor
image data
exposure time
Prior art date
Application number
PCT/CN2020/071160
Other languages
French (fr)
Inventor
Stephen Busch
Jianhua Zheng
Bing Qu
Milos KOMARCEVIC
Yamin SUN
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/071160 (WO2021138869A1)
Priority to EP20911692.0A (EP4078943A4)
Publication of WO2021138869A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/76: Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10: Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50: Control of the SSIS exposure
    • H04N25/53: Control of the integration time
    • H04N25/533: Control of the integration time by using differing integration times for different sensor regions
    • H04N25/534: Control of the integration time by using differing integration times for different sensor regions depending on the spectral component
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70: SSIS architectures; Circuits associated therewith
    • H04N25/71: Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75: Circuitry for providing, modifying or processing image signals from the pixel array

Definitions

  • the present disclosure relates generally to the field of imaging, and particularly to an image sensor and a device comprising an image sensor.
  • the image sensor has at least two channels (for example, two color channels) .
  • the image sensor (and/or the device comprising the image sensor) may control an exposure time period and/or an analog gain of pixels of one of the channels independently from pixels of the other one of the channels.
  • CCD Charge-Coupled Device
  • CMOS Complementary Metal–Oxide–Semiconductor
  • CFA Bayer Color Filter Array
  • FIG. 11 schematically illustrates a conventional Bayer RGB color filter array 1100.
  • in an image sensor that is using the conventional Bayer CFA (for example, the RGB color filters may be arranged on a two-dimensional (2D) array of pixel photodiodes), the incident photons are filtered by wavelength (e.g., Blue/Green/Red centered) before hitting the light-sensitive pixel photodiodes in the image sensor.
  • a full resolution color reconstruction is then achievable as the signal intensity is relatively well balanced between the R, G and B channels.
  • the noisy measurement of conventional image sensors and absorbing a substantial amount of light by the Bayer CFA 1100 may lead to, for example, a low Signal-to-Noise Ratio (SNR) , a loss of spatial resolution compared to a non-filtered sensor, etc., and it may further be difficult to (cleanly) reconstruct some of the colors.
  • SNR Signal-to-Noise Ratio
  • This problem creates a negative impact on the image quality, especially when capturing images in challenging low light conditions.
  • FIG. 12A schematically illustrates a conventional CFA 1200A comprising white (clear) /yellow color filters
  • FIG. 12B schematically illustrates a conventional CFA 1200B comprising white (clear) filters.
  • the CFA comprising the white (clear) (W) and/or yellow (Y) color filters enables more light to reach the image sensor (i.e., the photodiodes of image sensors).
  • the conventional devices have several disadvantages, due to trade-offs imposed by different channel sensitivity.
  • the embodiments of the invention aim to improve the conventional devices and methods.
  • An objective is to provide an image sensor and device (e.g., digital camera) that achieves better quality images and video (for example, more details, less noise, richer colors, etc. ) in challenging low light use cases.
  • the image sensor may benefit from a CFA using a combination of the wideband and narrowband filters.
  • Another goal is that the image sensor and/or the device is able to prevent signal clipping in combined wide and narrow spectral band sensors.
  • a first aspect of the invention provides an image sensor comprising a plurality of pixels arranged in a two dimensional (2D) array, the plurality of pixels comprising a first set of pixels associated with a first channel and a second set of pixels associated with a second channel, and a plurality of optical filters, wherein each optical filter is associated with one of the plurality of pixels, the plurality of optical filters comprising a first set of optical filters and a second set of optical filters, wherein the first set of optical filters is associated with the pixels of the first channel, each optical filter of the first set being configured to pass light in a first wavelength range, wherein the second set of the optical filters is associated with the pixels of the second channel, each optical filter of the second set being configured to pass light in a second wavelength range, and wherein the image sensor is configured to control an exposure time period and/or an analog gain of the pixels of the first channel independently from the pixels of the second channel.
  • 2D two dimensional
  • the image sensor may be any image sensor, for example, it may be a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, a Charge-Coupled Device (CCD) sensor, etc. without limiting the present disclosure in that regard.
  • CMOS Complementary Metal-Oxide-Semiconductor
  • CCD Charge-Coupled Device
  • the image sensor comprises a plurality of pixels.
  • the pixels may include for example photo detectors for generating photoelectrons based on light impinging on the photo detector.
  • the pixels may further be configured to generate an analog signal corresponding to an intensity of light impinged on the photo detector.
  • the first channel and/or the second channel may be any kind of channels, for example, they may be color channels such as an R channel, a B channel, a G channel, etc., and the present disclosure is not limited to a specific type of a channel, a specific type of pixel, or optical filter.
  • the first wavelength range and/or the second wavelength range may be, for example, any wavelength, any wavelength range, or a combination of two or more wavelength ranges, etc.
  • the wavelength ranges may be a specific wavelength, a range of continuous wavelengths (i.e., from a first wavelength to a second wavelength) , a combination of at least two or more wavelength ranges such as a first wavelength range and a second wavelength range which may be (or may not be) continuous, etc.
  • the image sensor of the first aspect may provide native support for combined wideband and narrowband image sensors.
  • the image sensor may control the analog gain and/or the exposure time period per color channel.
  • dedicated ADCs may be provided so each channel may receive a well utilized signal dynamic range.
  • controlling the exposure time and/or the analog gain of different (color) channels independently may provide an advantage over conventional devices (as discussed, the conventional image sensors use the same analog gain/exposure time for all channels of the color filter and usually a shared ADC, as area and power critical component, and thus they impose a trade-off which decreases the image quality). Consequently, the image sensor of the first aspect can achieve better quality images and video (for example, more details, less noise, richer colors, etc.) in challenging low light use cases, in high dynamic range use cases, etc.
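  • As a minimal illustration of this per-channel control (a sketch only; the ChannelConfig structure, field names and numeric values below are assumptions for illustration, not the claimed sensor interface), the independent settings could be represented as follows:

```python
from dataclasses import dataclass

@dataclass
class ChannelConfig:
    """Hypothetical per-channel acquisition settings (names are illustrative)."""
    name: str            # e.g. "W" (wideband) or "R"/"B" (narrowband)
    exposure_us: float   # exposure (integration) time period in microseconds
    analog_gain: float   # analog gain applied before A/D conversion

def program_sensor(channels):
    """Sketch: every channel is configured on its own, unlike a conventional
    sensor where a single exposure/gain setting is shared by all channels."""
    for ch in channels:
        # A real sensor would write channel-specific registers here
        # (reset timing, gain settings); this sketch only prints them.
        print(f"channel {ch.name}: exposure = {ch.exposure_us} us, "
              f"analog gain = {ch.analog_gain}x")

# The sensitive wideband channel gets a short exposure to avoid saturation,
# the narrowband channels a longer exposure and/or a higher analog gain.
program_sensor([
    ChannelConfig("W", exposure_us=2000.0, analog_gain=1.0),
    ChannelConfig("R", exposure_us=8000.0, analog_gain=2.0),
    ChannelConfig("B", exposure_us=8000.0, analog_gain=2.0),
])
```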
  • the wide band pixels may be saturated before the narrow band pixels.
  • the exposure time may be set to accommodate the brightest part of the scene.
  • the narrow band pixels in dark regions may therefore only receive a very low signal and may further have poor Peak SNR (PSNR) performance.
  • PSNR Peak SNR
  • an image of a regular scene captured in an ambient light condition e.g., a day light condition
  • a high intensity light condition may also be improved using the disclosed devices and methods of the invention.
  • a third set of the optical filters is associated with pixels of a third channel, each optical filter being configured to pass light in a third wavelength range, and the image sensor is further configured to control an exposure time period and/or an analog gain of the pixels of the third channel independently from the pixels of the first channel and/or the pixels of the second channel.
  • the present disclosure is not limited to a specific number of channels or a specific type (e.g., color type) of channels.
  • a specific type e.g., color type
  • a fourth set of the optical filters is associated with pixels of a fourth channel, each optical filter being configured to pass light in a fourth wavelength range, and the image sensor is further configured to control an exposure time period and/or an analog gain of the pixels of the fourth channel independently from the pixels of the first channel and/or the pixels of the second channel and/or the pixels of the third channel.
  • each of the plurality of pixels is configured to detect light passed through its associated optical filter.
  • the first set of the optical filters and/or the second set of the optical filters is based on a narrow band optical filter comprising at least one of: a Blue color optical filter, a Green color optical filter, a Red color optical filter.
  • the third set of the optical filters is based on a wide band optical filter comprising at least one of: a White color optical filter (in particular a clear or a panchromatic optical filter), a Cyan optical filter, a Magenta optical filter, a Yellow color optical filter.
  • the 2D array comprises a plurality of rows and a plurality of columns, wherein a reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row, and/or a reset wire for odd pixels of a determined column is disconnected from a reset wire for even pixels of that determined column.
  • the present disclosure is not limited to a specific configuration (e.g., arrangement, connecting or disconnecting) of the reset wires.
  • the reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row (or a determined column) .
  • any other configuration of the reset wires may be used, which may be based on a repeating arrangement, a random arrangement (randomly connecting or disconnecting the reset wires), a specific arrangement for a specific application or channel combination, etc., as is generally known to the skilled person.
  • the 2D array comprises a plurality of quad-pixels comprising a first set of quad-pixels associated with a first channel and a second set of quad-pixels associated with a second channel, wherein a reset wire for the first set of quad-pixels is disconnected from a reset wire for the second set of quad-pixels.
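  • A minimal sketch of how pixels could be mapped to independent reset groups under the odd/even and quad-pixel arrangements described above (the function names, and the checkerboard channel layout assumed for the quad-pixel case, are illustrative assumptions only):

```python
def reset_group_odd_even(row: int, col: int) -> int:
    """Odd and even pixels of a row sit on separate reset wires, so the
    group is determined by the column parity within that row."""
    return col % 2

def reset_group_quad(row: int, col: int) -> int:
    """Quad-pixel arrangement: each 2x2 block of same-channel pixels shares
    one reset wire, and blocks of different channels are kept separate.
    The channel of a quad is assumed here to alternate in a checkerboard
    of 2x2 blocks (an assumption made only for this illustration)."""
    quad_row, quad_col = row // 2, col // 2
    return (quad_row + quad_col) % 2

# Example: pixels (0,0) and (0,1) of the same row end up in different groups,
# so their integration times can be controlled independently.
print(reset_group_odd_even(0, 0), reset_group_odd_even(0, 1))  # 0 1
print(reset_group_quad(0, 0), reset_group_quad(0, 2))          # 0 1
```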
  • an improved image sensor with improved image quality in low light situations, in the shadow regions of a daylight scene, etc. can be provided.
  • the image sensor further comprises an Analog-to-Digital Converter (ADC) configured to provide a first analog gain to the first channel and a second analog gain to the second channel, and/or a first ADC associated with the first channel and a second ADC associated with the second channel.
  • ADC Analog-to-Digital Converter
  • each pixel may be associated with (e.g., may include) an ADC, while in other embodiments, a plurality of pixels may be connected to one or a plurality of ADCs.
  • the image sensor further comprises at least one multiplexer configured to obtain a plurality of input signals from one ADC and/or from a plurality of ADCs, and output digital number pixel values based on multiplexing the obtained input signals.
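  • The ADC and multiplexer arrangement described above can be sketched as follows (assumed behaviour only: per-channel conversion gains and a multiplexer that interleaves the converted values back into a readout stream; the 12-bit range and the interleaving order are assumptions):

```python
def adc_convert(analog_value: float, gain: float, bits: int = 12) -> int:
    """Convert an analog pixel value (0.0..1.0 of full scale) to a digital
    number, applying a channel-specific analog gain and clipping at the
    full ADC range."""
    dn_max = (1 << bits) - 1
    return min(int(analog_value * gain * dn_max), dn_max)

def multiplex_readout(channel_samples: dict) -> list:
    """Sketch of the multiplexer: interleave per-channel ADC outputs into a
    single readout stream. Alternating channels per pixel stands in here
    for the actual sensor readout order."""
    streams = list(channel_samples.values())
    out = []
    for values in zip(*streams):
        out.extend(values)
    return out

# Two channels digitized with different analog gains, then multiplexed.
ch1 = [adc_convert(v, gain=1.0) for v in (0.10, 0.20, 0.30)]
ch2 = [adc_convert(v, gain=4.0) for v in (0.02, 0.05, 0.08)]
print(multiplex_readout({"ch1": ch1, "ch2": ch2}))
```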
  • an improved image sensor with improved image quality in low light situations, in the shadow regions of a daylight scene, etc. can be provided.
  • a second aspect of the invention provides a method of operating an image sensor, wherein the image sensor comprises a plurality of pixels arranged in a 2D array, the plurality of pixels comprising a first set of pixels associated with a first channel and a second set of pixels associated with a second channel, and a plurality of optical filters, wherein each optical filter is associated with one of the plurality of pixels, the plurality of optical filters comprising a first set of optical filters and a second set of optical filters, wherein the first set of optical filters is associated with the pixels of the first channel, each optical filter of the first set being configured to pass light in a first wavelength range, wherein the second set of the optical filters is associated with the pixels of the second channel, each optical filter of the second set being configured to pass light in a second wavelength range, and wherein the method comprises controlling an exposure time period and/or an analog gain of the pixels of the first channel independently from the pixels of the second channel.
  • a third set of the optical filters is associated with pixels of a third channel, each optical filter being configured to pass light in a third wavelength range, and the method comprises controlling an exposure time period and/or an analog gain of the pixels of the third channel independently from the pixels of the first channel and/or the pixels of the second channel.
  • a fourth set of the optical filters is associated with pixels of a fourth channel, each optical filter being configured to pass light in a fourth wavelength range, and the method comprises controlling an exposure time period and/or an analog gain of the pixels of the fourth channel independently from the pixels of the first channel and/or the pixels of the second channel and/or the pixels of the third channel.
  • each of the plurality of pixels is configured to detect light passed through its associated optical filter.
  • the first set of the optical filters and/or the second set of the optical filters is based on a narrow band optical filter comprising at least one of: a Blue color optical filter, a Green color optical filter, a Red color optical filter.
  • the third set of the optical filters is based on a wide band optical filter comprising at least one of: a White color optical filter (in particular a clear or a panchromatic optical filter), a Cyan optical filter, a Magenta optical filter, a Yellow color optical filter.
  • the 2D array comprises a plurality of rows and a plurality of columns, wherein a reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row, and/or a reset wire for odd pixels of a determined column is disconnected from a reset wire for even pixels of that determined column.
  • the 2D array comprises a plurality of quad-pixels comprising a first set of quad-pixels associated with a first channel and a second set of quad-pixels associated with a second channel, wherein a reset wire for the first set of quad-pixels is disconnected from a reset wire for the second set of quad-pixels.
  • the method further comprises providing, by an ADC, a first analog gain to the first channel and a second analog gain to the second channel, and/or associating, a first ADC with the first channel and a second ADC with the second channel.
  • the method further comprises obtaining, by at least one multiplexer, a plurality of input signals from one ADC and/or from a plurality of ADCs, and outputting digital number pixel values based on multiplexing the obtained input signals.
  • the method of the second aspect and its implementation forms provide the same advantages and effects as the image sensor of the first aspect and its respective implementation forms.
  • a third aspect of the invention provides a device comprising an image sensor according to the first aspect (and/or one of the implementation forms of the first aspect), wherein the device is configured to obtain image data of the image sensor, the image data comprising a first set of image data associated with a first channel and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data associated with a second channel and obtained based on a second exposure time period and/or a second analog gain, and perform a dynamic range normalization procedure on the first set of image data and the second set of image data considering their respective exposure time periods and/or their respective analog gains.
  • the device may be, or may be incorporated in, a digital camera, a digital video recorder, a mobile phone, a smartphone, an augmented reality device, a virtual reality device, a laptop, a tablet, etc.
  • the device may comprise a circuitry.
  • the circuitry may comprise hardware and software.
  • the hardware may comprise analog or digital circuitry, or both analog and digital circuitry.
  • the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors.
  • the non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
  • the device may comprise an ISP unit which may take this per-channel information into account when processing the obtained image data from the image sensor.
  • the device is further configured to determine a longer exposure time period and a shorter exposure time period from the first and the second exposure time periods, determine an amplification gain based on a division of the longer exposure time period by the shorter exposure time period, and amplify the shorter exposure time period based on the determined amplification gain.
  • the amplification gain and the exposure time may be determined based on how much light a pixel receives.
  • the narrow band pixels may typically receive less light than the wide band pixels. Therefore, the narrow band pixels may use a longer exposure time (e.g., within the limits imposed by the quantity of motion in the image, the required frame rate, etc. ) and/or analog gain in order to fully exploit the dynamic range of the pixels (and therefore the best possible PSNR may be achieved) .
  • the device is further configured to correct a black level offset of the first set of image data independently from a black level offset of the second set of image data.
  • the device is further configured to perform a noise reduction procedure on the first set of image data based on a first noise model, and perform a noise reduction procedure on the second set of image data based on a second noise model.
  • the device is further configured to construct a multi-channel image based on performing a demosaicing procedure on the noise-reduced first set of image data and the noise-reduced second set of image data, and convert the constructed multi-channel image to a Red/Green/Blue color space.
  • the constructed multi-channel image may be converted to the Red/Green/Blue color space.
  • the constructed multi-channel image may be converted to any other color space which may be required for a desired application.
  • the device of the third aspect and its implementation forms enjoy the advantages and effects achieved by the image sensor of the first aspect and its implementation forms.
  • a fourth aspect of the invention provides a method for a device comprising an image sensor, wherein the method comprises obtaining image data of the image sensor, the image data comprising a first set of image data associated with a first channel and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data associated with a second channel and obtained based on a second exposure time period and/or a second analog gain, and performing a dynamic range normalization procedure on the first set of image data and the second set of image data considering their respective exposure time periods and/or their respective analog gains.
  • the method further comprises determining a longer exposure time period and a shorter exposure time period from the first and the second exposure time periods, determining an amplification gain based on a division of the longer exposure time period by the shorter exposure time period, and amplifying the shorter exposure time period based on the determined amplification gain.
  • the method further comprises correcting a black level offset of the first set of image data independently from a black level offset of the second set of image data.
  • the method further comprises performing a noise reduction procedure on the first set of image data based on a first noise model, and performing a noise reduction procedure on the second set of image data based on a second noise model.
  • the method further comprises constructing a multi-channel image based on performing a demosaicing procedure on the noise-reduced first set of image data and the noise-reduced second set of image data, and converting the constructed multi-channel image to a Red/Green/Blue color space.
  • the method of the fourth aspect and its implementation forms provide the same advantages and effects as the device of the third aspect and its respective implementation forms.
  • a fifth aspect of the invention provides a computer program which, when executed by a computer, causes the method of the second aspect and/or the method of the fourth aspect and/or one of their implementation forms, to be performed.
  • the computer program can be provided on a non-transitory computer-readable recording medium.
  • FIG. 1 is a schematic view of an image sensor, according to an embodiment of the present invention.
  • FIG. 2 is a schematic view of a device comprising an image sensor, according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating the obtained signals of the first channel and the second channel in digital number pixel values.
  • FIG. 4 is a diagram schematically illustrating different FWCs (Full-Well Capacities) in a pixel.
  • FIG. 5 is a schematic view of a diagram illustrating controlling an analog gain of the pixels of the first channel independently from the pixels of the second channel.
  • FIG. 6 is a flowchart of an image signal processing pipeline for processing image data per channel.
  • FIG. 7 is another schematic view of the image sensor, according to an embodiment of the present invention.
  • FIGS. 8A-C are a schematic view of an ISP module comprising a DRN unit (FIG. 8A), a diagram showing an exemplary input of the DRN unit (FIG. 8B), and a diagram showing an exemplary output of the DRN unit (FIG. 8C).
  • FIG. 9 is a flowchart of a method of operating an image sensor, according to an embodiment of the invention.
  • FIG. 10 is a flowchart of a method for a device comprising an image sensor, according to an embodiment of the invention.
  • FIG. 11 shows a schematic view of a conventional Bayer CFA.
  • FIGS. 12A-B schematically illustrate a conventional CFA comprising W/Y filters (FIG. 12A), and a conventional CFA comprising W filters (FIG. 12B).
  • FIG. 1 is a schematic view of an image sensor 100, according to an embodiment of the present invention.
  • the image sensor 100 comprises a plurality of pixels 110 arranged in a two dimensional array, the plurality of pixels 110 comprising a first set of pixels 111 associated with a first channel 131 and a second set of pixels 112 associated with a second channel 132.
  • the image sensor 100 further comprises a plurality of optical filters 120, wherein each optical filter is associated with one of the plurality of pixels 110, the plurality of optical filters 120 comprising a first set of optical filters 121 and a second set of optical filters 122.
  • the first set of optical filters 121 is associated with the pixels 111 of the first channel 131, each optical filter of the first set 121 being configured to pass light in a first wavelength range.
  • the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132, each optical filter of the second set 122 being configured to pass light in a second wavelength range.
  • the wavelength range may be, for example, a specific wavelength, a wavelength range of continuous wavelengths (i.e., from a first wavelength to a second wavelength), a combination of at least two or more wavelength ranges such as a first wavelength range and a second wavelength range which may be (or may not be) continuous, etc.
  • the image sensor 100 is configured to control an exposure time period and/or an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
  • the first channel and/or the second channel may be the color channels, etc., without limiting the present disclosure.
  • the image sensor 100 may be configured such that the analog gain/exposure time in the image sensor is controlled per (color) channel, and dedicated ADCs for different channels may be allocated such that each receives a well utilized signal dynamic range.
  • the image sensor may comprise a circuitry (not shown in FIG. 1) .
  • The circuitry may comprise hardware and software.
  • the hardware may comprise analog or digital circuitry, or both analog and digital circuitry.
  • the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors.
  • the non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
  • FIG. 2 is a schematic view of a device 200 comprising an image sensor 100, according to an embodiment of the present invention.
  • the device 200 is configured to obtain image data 210 of the image sensor 100, the image data 210 comprising a first set of image data 211 associated with a first channel 131 and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data 212 associated with a second channel 132 and obtained based on a second exposure time period and/or a second analog gain.
  • the device 200 is further configured to perform a dynamic range normalization procedure on the first set of image data 211 and the second set of image data 212 considering their respective exposure time periods and/or their respective analog gains.
  • the device 200 may be, for example, an imaging device such as a digital camera comprising the image sensor 100.
  • the device 200 may comprise, for example, a specific ISP that takes this per-channel information into account when processing obtained image data from the image sensor 100.
  • the device 200 may comprise a circuitry (not shown in FIG. 2) .
  • the circuitry may comprise hardware and software.
  • the hardware may comprise analog or digital circuitry, or both analog and digital circuitry.
  • the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors.
  • the non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
  • FIG. 3 is a diagram 300 illustrating obtained signals of the first channel 131 and the second channel 132 in digital number pixel values.
  • the signals may be obtained by the image sensor 100 and/or the device 200 (comprising the image sensor 100) without limiting the present disclosure to a specific device or a specific configuration.
  • Diagram 300 of FIG. 3 illustrates the digital number pixel values for an R signal 301, a B signal 302 and a W signal 304.
  • both the wideband channel (e.g., a channel corresponding to the W signal 304) and the narrowband channels (e.g., channels corresponding to the R signal 301 and the B signal 302) are well exposed to achieve a good SNR at all times (i.e., exposure times), by using the full dynamic range of the pixel's Full-Well Capacity (FWC) and a subsequent ADC (e.g., in the image sensor 100 and/or the device 200).
  • FWC Full-Well Capacity
  • the image sensor 100 and/or the device 200 control the exposure time and/or the analog gain of different (color) channels, independently, and hence, they may provide an advantage over the conventional devices (as discussed, the conventional image sensors use the same analog gain/exposure time for all channels of the color filter and usually a shared ADC, as area and power critical component, and thus they impose the discussed trade-off).
  • FIG. 4 is a schematic view of a diagram 400 illustrating different FWCs in a pixel.
  • in diagram 400 it is assumed that the first channel 131 and the second channel 132 have the same FWC, but different fill levels due to different exposure times.
  • different FWCs may imply different pixel size.
  • the color filter (narrow band vs wide band) may affect at what rate the pixels are filled with the electrons.
  • the exposure time determines for how long the pixels are allowed to be filled, before reading their content.
  • the exposure time and the analog gain may determine how the number of electrons is converted into a digital value.
  • a pixel of given FWC may be filled up at different speeds, and may further be read out at different times (moments) .
  • different gains may be used in order to produce the best possible digital signal.
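  • The relationship just described can be summarized in a small numerical sketch (all constants are invented for illustration; only the structure, a fill rate multiplied by the exposure time, capped by the FWC and then converted by a gain, reflects the text above):

```python
def collected_electrons(fill_rate_e_per_ms: float,
                        exposure_ms: float,
                        fwc_electrons: float) -> float:
    """Electrons collected by a pixel: the filter (narrow vs. wide band)
    sets the fill rate, the exposure time sets for how long the pixel
    fills, and the full-well capacity (FWC) caps the result."""
    return min(fill_rate_e_per_ms * exposure_ms, fwc_electrons)

def to_digital_number(electrons: float, gain_dn_per_e: float,
                      bits: int = 12) -> int:
    """Conversion of the collected charge to a digital value with a
    channel-specific gain, clipped to the ADC range."""
    dn_max = (1 << bits) - 1
    return min(int(electrons * gain_dn_per_e), dn_max)

FWC = 10000.0  # electrons; illustrative value only
# A wideband pixel fills quickly and uses a short exposure; a narrowband
# pixel fills slowly and uses a longer exposure.
w = collected_electrons(fill_rate_e_per_ms=2000.0, exposure_ms=4.0, fwc_electrons=FWC)
r = collected_electrons(fill_rate_e_per_ms=250.0,  exposure_ms=16.0, fwc_electrons=FWC)
# Different gains can then map each channel onto the available DN range.
print(to_digital_number(w, gain_dn_per_e=0.4), to_digital_number(r, gain_dn_per_e=1.0))
```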
  • the image sensor 100 and/or the device 200 may control the exposure time period of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132, without limiting the present disclosure to a specific device or a specific configuration.
  • the first channel 131 is a channel that is utilized well.
  • the second channel 132 is a channel that is under-utilized.
  • the image sensor 100 and/or the device 200 may enable the exposure (integration) time of each channel 131, 132 to be independently controlled, and may further ensure that the electron FWC in each pixel is utilized well, while also ensuring that the channels do not overflow.
  • FIG. 5 is a schematic view of a diagram 500 illustrating controlling an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
  • the image sensor 100 and/or the device 200 may control the analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132, without limiting the present disclosure to a specific device or a specific configuration.
  • the image sensor 100 further comprises the ADC 501.
  • the ADC 501 is configured to control, independently, the analog (conversion) gain of each channel (e.g., the analog gain of the first channel 131 is controlled independently from the analog gain of the second channel 132). This ensures that, for example, the signal supplied to the ADC 501 will utilize its full digital number (DN) range.
  • DN digital number
  • the present disclosure is not limited to a specific number of ADCs or a specific configuration, although in FIG. 5 one ADC 501 is used. In some embodiments, it might be (necessary) to introduce a dedicated ADC per channel, or introduce one ADC that supports multiple conversion gains, to read out and convert pixels fast enough, or the like.
  • controlling the analog gains independently as shown in diagram 500 of FIG. 5 may be used in combination with controlling the exposure times independently as shown in diagram 400 of FIG. 4. For example, it may be possible to minimize exposure time to reduce motion blur and increase the gain accordingly. Moreover, it may be possible to (always) ensure the dynamic range of the corresponding ADC is fully utilized (e.g., 0-4095 DN for 12-bit).
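  • A sketch of this combination (the conversion factors and electron counts below are illustrative assumptions): choose a short exposure, e.g. to limit motion blur, and then pick the analog gain so that the expected peak signal of the channel still spans the full range of its ADC:

```python
def gain_to_fill_adc(expected_peak_electrons: float,
                     dn_per_electron_at_unity_gain: float,
                     bits: int = 12) -> float:
    """Analog gain that maps the expected peak signal of a channel onto
    the full DN range of its ADC (e.g., 0-4095 DN for 12 bit)."""
    dn_max = (1 << bits) - 1
    return dn_max / (expected_peak_electrons * dn_per_electron_at_unity_gain)

# Halving the exposure roughly halves the collected charge, so the gain is
# roughly doubled to keep the ADC range fully utilized.
peak_e_long, peak_e_short = 8000.0, 4000.0
print(gain_to_fill_adc(peak_e_long, 0.25), gain_to_fill_adc(peak_e_short, 0.25))
```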
  • FIG. 6 is a schematic view of a flowchart 600 of an image signal processing pipeline for processing image data per channel.
  • the flowchart 600 of the ISP is provided as an example.
  • the present disclosure is not limited to a specific ISP, and in some other embodiments, the device 200 may include a similar or other types of ISP.
  • the ISP may be provided in the device 200 which may comprise the image sensor 100.
  • the device 200 further comprises the firmware 3A (for example, it may provide a low-level control for the image sensor 100 which may be related to an Auto-Exposure (AE) part of the image sensor control).
  • AE Auto-Exposure
  • the ISP module of the device 200 is specifically adapted for the image sensor 100.
  • the quantized color channel signals in DNs may be provided such that different channels have independent (or possibly different) dynamic ranges.
  • the device 200 performs raw data correction on the obtained image data 210 of the image sensor 100.
  • the image data 210 comprising the first set of image data 211 associated with the first channel 131 and obtained based on the first exposure time period and/or the first analog gain, and the second set of image data 212 associated with the second channel 132 and obtained based on the second exposure time period and/or the second analog gain.
  • the raw data correction comprises correcting the black level offset individually per channel (for example, the correction may substantially reduce the black level).
  • the device 200 may correct a black level offset of the first set of image data 211 independently from a black level offset of the second set of image data 212.
  • the device 200 performs a white balancing procedure (e.g., independently on the corrected first set of image data 211 and the corrected second set of image data 212) .
  • the device 200 performs a lens correction procedure.
  • the device 200 performs a noise reduction procedure.
  • the device 200 may estimate the correct per channel noise model in the denoising algorithm. For instance, the device 200 may perform a noise reduction procedure on the first set of image data 211 based on a first noise model, and perform a noise reduction procedure on the second set of image data 212 based on a second noise model.
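  • As an illustration of why a per-channel noise model matters, the sketch below uses a simple shot-plus-read-noise model; this particular model and its parameters are assumptions for illustration and are not taken from the present disclosure:

```python
def noise_sigma(signal_dn: float, gain_dn_per_e: float, read_noise_dn: float) -> float:
    """One common per-channel noise model (an assumption here): shot noise
    grows with the signal and scales with the channel's conversion gain,
    on top of a constant read-noise floor."""
    shot_variance = gain_dn_per_e * max(signal_dn, 0.0)   # photon shot noise in DN^2
    return (shot_variance + read_noise_dn ** 2) ** 0.5

# The same DN value implies different noise levels on channels that were
# captured with different gains, which is why a per-channel model is used.
print(noise_sigma(1000.0, gain_dn_per_e=0.5, read_noise_dn=3.0))
print(noise_sigma(1000.0, gain_dn_per_e=2.0, read_noise_dn=3.0))
```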
  • the device 200 performs a dynamic range correction procedure.
  • the device 200 may re-adjust gains to equalize all channels to a common dynamic range at an appropriate stage (for example, it may be performed at Step S605 or S602). Furthermore, due to more advanced processing, it may be more efficient to expand/compress the bit depth at specific stages in the ISP to support the required total dynamic range across the channels, rather than in the sensor from the start.
  • the device 200 performs a demosaicing procedure (i.e., color interpolation).
  • the device 200 may reconstruct a correct full resolution multi-channel image during the demosaicing procedure.
  • the device 200 creates a 3×3 color matrix.
  • Creating the 3×3 color matrix may comprise a conversion to standard RGB color space.
  • the device 200 obtains Gamma LUT.
  • the device 200 performs a sharpening procedure.
  • the device 200 performs a digital image stabilization procedure.
  • the device 200 converts from the RGB color space to YUV (444/422) .
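  • The per-channel processing order described above can be summarized as a pipeline skeleton (a sketch only; every function below is an empty placeholder standing in for the corresponding step, not an implementation of it):

```python
# Placeholder stages: each returns its input unchanged and only marks where
# the corresponding per-channel processing step from the text would happen.
def raw_data_correction(x, p):       return x  # black level offset per channel
def white_balance(x, p):             return x  # per-channel white balancing
def lens_correction(x):              return x
def noise_reduction(x, p):           return x  # per-channel noise model
def dynamic_range_correction(x, p):  return x  # equalize channels to a common range
def demosaic(x):                     return x  # full-resolution multi-channel image
def apply_color_matrix(x):           return x  # 3x3 matrix, conversion to standard RGB
def apply_gamma_lut(x):              return x
def sharpen(x):                      return x
def stabilize(x):                    return x
def rgb_to_yuv(x):                   return x  # YUV 444/422 output

def isp_pipeline(raw, channel_params):
    """Order of the per-channel ISP flow sketched from the steps above."""
    x = raw_data_correction(raw, channel_params)
    x = white_balance(x, channel_params)
    x = lens_correction(x)
    x = noise_reduction(x, channel_params)
    x = dynamic_range_correction(x, channel_params)
    x = apply_color_matrix(demosaic(x))
    x = apply_gamma_lut(x)
    x = sharpen(x)
    x = stabilize(x)
    return rgb_to_yuv(x)

print(isp_pipeline("mosaiced-raw-data", {"W": {}, "R": {}, "B": {}}))
```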
  • FIG. 7 is another schematic view of the image sensor 100, according to an embodiment of the present invention.
  • the image sensor 100 is, by way of example, based on a CMOS active pixel sensor array comprising a 2×2 CFA pattern.
  • the first set of pixels 111 is associated with the first channel 131, and the first set of optical filters 121 is associated with the pixels 111 of the first channel 131.
  • the second set of pixels 112 is associated with the second channel 132, and the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132.
  • the image sensor 100 further comprises, by way of example, even/odd RST reset wires for alternate pixels in a row so that the integration time for pixels belonging to different color channels can be individually controlled.
  • the image sensor 100 further comprises separate even/odd COL column select wires so alternate pixel signals in a column can be routed to a dedicated ADC.
  • the image sensor 100 further comprises four dedicated ADCs, one for each pixel in the 2×2 pattern, so each color channel's analog gain can be individually controlled.
  • the ADC 801 (ADC 11) is dedicated to a pixel of the first channel 131.
  • the ADC 802 (ADC 21) is dedicated to a pixel of the second channel 132.
  • the image sensor 100 further comprises a multiplexer 803 configured to combine the dedicated ADCs' outputs of the pixel DN values in a sensor readout order.
  • the image sensor 100 may comprise a first set of quad-pixels which may be associated with the first channel and a second set of quad-pixels which may be associated with the second channel. Moreover, a reset wire for the first set of quad-pixels may be disconnected from a reset wire for the second set of quad-pixels.
  • FIG. 8A, FIG. 8B and FIG. 8C are a schematic view of an ISP module comprising a dynamic range normalization unit (FIG. 8A) , a diagram 820b showing exemplary input of the DRN unit (FIG. 8B) , and a diagram 820c showing exemplary output of the DRN unit (FIG. 8C) .
  • the ISP module illustrated in FIG. 8A may be provided in the device 200.
  • the ISP of the device 200 may consider the difference in exposure versus quantization that is received by the image sensor 100.
  • the ISP module of the device 200 comprises the white balance unit 801, the Dynamic Range Normalization (DRN) module 802 and the demosaic unit 803.
  • DRN Dynamic Range Normalization
  • the DRN module 802 transforms the differently obtained image data (e.g., such that it appears as if it were coming from a CMOS sensor with a single ADC), and it may further provide a higher bit depth/dynamic range.
  • the DRN input comprises DN values of different channels, comprising the R channel (indicated with 821b) having exposure time E2, the B channel (indicated with 822b) having exposure time E3, and the W channel (indicated with 824b) having exposure time E1.
  • FIGS. 8B and 8C are exemplarily illustrated for the R/B/W channels; however, the present disclosure is not limited to the R/B/W channels and in some other embodiments any channel may be used.
  • the least sensitive channel (822c) that is saturating at the longest exposure E3 is not amplified.
  • the channel (821c) with higher sensitivity that saturates at the shorter exposure (E2) is amplified by a gain E3/E2.
  • the channel (824c) with higher sensitivity that saturates at the shortest exposure (E1) is amplified by a gain E3/E1.
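  • A minimal sketch of this normalization (only the exposure-ratio gains E3/E2 and E3/E1 stated above are taken from the text; the exposure values, DN values and bit depths are illustrative assumptions):

```python
def dynamic_range_normalization(channel_dn, exposure_ms, out_bits=16):
    """Bring channels captured with different exposure times onto a common
    scale: the channel with the longest exposure is left untouched, the
    others are amplified by (longest exposure / own exposure). The output
    uses a higher bit depth so the amplified values are not clipped."""
    e_max = max(exposure_ms.values())            # e.g. E3 for the B channel
    out_max = (1 << out_bits) - 1
    normalized = {}
    for name, dn_values in channel_dn.items():
        gain = e_max / exposure_ms[name]         # E3/E2 for R, E3/E1 for W, 1.0 for B
        normalized[name] = [min(int(v * gain), out_max) for v in dn_values]
    return normalized

# Illustrative exposures E1 < E2 < E3 and 12-bit input DN values.
exposures = {"W": 2.0, "R": 8.0, "B": 16.0}      # E1, E2, E3 in ms (assumed numbers)
dn = {"W": [4000, 1000], "R": [3000, 800], "B": [2500, 600]}
print(dynamic_range_normalization(dn, exposures))
```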
  • FIG. 9 shows a method 900 of operating an image sensor.
  • the method 900 may be a method of operating the image sensor 100, as described above, and/or of operating an image sensor when incorporated in another device, such as the device 200, as described above.
  • the steps of method 900 are exemplarily discussed as a method of operating the image sensor 100, wherein the image sensor 100 comprises a plurality of pixels 110 arranged in a 2D array, the plurality of pixels 110 comprising a first set of pixels 111 associated with a first channel 131 and a second set of pixels 112 associated with a second channel 132, and a plurality of optical filters 120, wherein each optical filter is associated with one of the plurality of pixels 110, the plurality of optical filters 120 comprising a first set of optical filters 121 and a second set of optical filters 122, wherein the first set of optical filters 121 is associated with the pixels 111 of the first channel 131, each optical filter of the first set 121 being configured to pass light in a first wavelength range, wherein the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132, each optical filter of the second set 122 being configured to pass light in a second wavelength range.
  • the method 900 comprises a step 901 of controlling an exposure time period and/or an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
  • FIG. 10 shows a method 1000 according to an embodiment of the invention for a device 200 comprising an image sensor 100.
  • the method 1000 may be carried out by the device 200, as it is described above.
  • the method 1000 comprises a step 1001 of obtaining image data 210 of the image sensor 100, the image data 210 comprising a first set of image data 211 associated with a first channel 131 and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data 212 associated with a second channel 132 and obtained based on a second exposure time period and/or a second analog gain.
  • the method 1000 further comprises a step 1002 of performing a dynamic range normalization procedure on the first set of image data 211 and the second set of image data 212 considering their respective exposure time periods and/or their respective analog gains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An image sensor comprises a plurality of pixels arranged in a two dimensional (2D) array, and a plurality of optical filters. A first set of pixels is associated with a first channel and a second set of pixels is associated with a second channel. Each optical filter is associated with one of the plurality of pixels. A first set of optical filters is associated with the pixels of the first channel, each optical filter of the first set passing light in a first wavelength range. A second set of optical filters is associated with the pixels of the second channel, each optical filter of the second set passing light in a second wavelength range. The image sensor controls an exposure time period and/or an analog gain of the pixels of the first channel independently from the pixels of the second channel.

Description

IMAGE SENSOR AND DEVICE COMPRISING AN IMAGE SENSOR
TECHNICAL FIELD
The present disclosure relates generally to the field of imaging, and particularly to an image sensor and a device comprising an image sensor. The image sensor has at least two channels (for example, two color channels) . Moreover, the image sensor (and/or the device comprising the image sensor) may control an exposure time period and/or an analog gain of pixels of one of the channels independently from pixels of the other one of the channels.
BACKGROUND
Conventional silicon based image sensors (for example, Charge-Coupled Device (CCD) sensors, Complementary Metal–Oxide–Semiconductor (CMOS) sensors, etc. ) convert photons to electrons irrespective of their wavelength, and are, in that sense, monochrome detectors. Moreover, in order to distinguish different colors (e.g., Blue (B) /Green (G) /Red (R) ) , the Bayer Color Filter Array (CFA) has been used in conventional camera’s image sensors.
FIG. 11 schematically illustrates a conventional Bayer RGB color filter array 1100. In an image sensor that is using the conventional Bayer CFA (for example, the RGB color filters may be arranged on a two-dimensional (2D) array of pixel photodiodes), the incident photons are filtered by wavelength (e.g., Blue/Green/Red centered) before hitting the light sensitive pixel photodiodes in the image sensor. Moreover, a full resolution color reconstruction is then achievable as the signal intensity is relatively well balanced between the R, G and B channels.
However, the noisy measurement of conventional image sensors and absorbing a substantial amount of light by the Bayer CFA 1100 may lead to, for example, a low Signal-to-Noise Ratio (SNR) , a loss of spatial resolution compared to a non-filtered sensor, etc., and it may further be difficult to (cleanly) reconstruct some of the colors. This problem creates a negative impact on the image quality, especially when capturing images in challenging low light conditions.
Furthermore, conventional devices are known that combine the possible light sensitivity of wideband (even monochrome) filters and the color selectivity of narrowband filters, in order to work around this limitation in the form of new CFA patterns.
FIG. 12A schematically illustrates a conventional CFA 1200A comprising white (clear)/yellow color filters, and FIG. 12B schematically illustrates a conventional CFA 1200B comprising white (clear) filters. The CFA comprising the white (clear) (W) and/or yellow (Y) color filters enables more light to reach the image sensor (i.e., the photodiodes of image sensors). However, the conventional devices have several disadvantages, due to trade-offs imposed by different channel sensitivity.
Moreover, other disadvantages of the conventional devices include signal saturation (or overflow, or clipping) in the sensitive channels, and limited operational conditions (e.g., working at lower exposure time and/or lower ISO gain), which lead to an SNR decrease in the less sensitive channels and, as a consequence, degraded image quality.
Furthermore, conventional methods are also known that are based on Image Signal Processing (ISP) that computationally reconstructs the information in the saturated highlights; this is ineffective, as lost details cannot (always) be correctly recovered and additional artefacts are often introduced.
SUMMARY
In view of the above-mentioned problems and disadvantages, the embodiments of the invention aim to improve the conventional devices and methods. An objective is to provide an image sensor and device (e.g., digital camera) that achieves better quality images and video (for example, more details, less noise, richer colors, etc.) in challenging low light use cases. Thereby, the image sensor may benefit from a CFA using a combination of the wideband and narrowband filters. Another goal is that the image sensor and/or the device is able to prevent signal clipping in combined wide and narrow spectral band sensors.
The objective of the present invention is achieved by the embodiments provided in the enclosed independent claims. Advantageous implementations of the embodiments are further defined in the dependent claims.
A first aspect of the invention provides an image sensor comprising a plurality of pixels arranged in a two dimensional (2D) array, the plurality of pixels comprising a first set of pixels associated with a first channel and a second set of pixels associated with a second channel, and a plurality of optical filters, wherein each optical filter is associated with one  of the plurality of pixels, the plurality of optical filters comprising a first set of optical filters and a second set of optical filters, wherein the first set of optical filters is associated with the pixels of the first channel, each optical filter of the first set being configured to pass light in a first wavelength range, wherein the second set of the optical filters is associated with the pixels of the second channel, each optical filter of the second set being configured to pass light in a second wavelength range, and wherein the image sensor is configured to control an exposure time period and/or an analog gain of the pixels of the first channel independently from the pixels of the second channel.
The image sensor may be any image sensor, for example, it may be a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, a Charge-Coupled Device (CCD) sensor, etc. without limiting the present disclosure in that regard.
The image sensor comprises a plurality of pixels. The pixels may include for example photo detectors for generating photoelectrons based on light impinging on the photo detector. The pixels may further be configured to generate an analog signal corresponding to an intensity of light impinged on the photo detector.
The first channel and/or the second channel may be any kind of channels, for example, they may be color channels such as an R channel, a B channel, a G channel, etc., and the present disclosure is not limited to a specific type of a channel, a specific type of pixel, or optical filter.
The first wavelength range and/or the second wavelength range may be, for example, any wavelength, any wavelength range, or a combination of two or more wavelength ranges, etc. For example, the wavelength ranges may be a specific wavelength, a range of  continuous wavelengths (i.e., from a first wavelength to a second wavelength) , a combination of at least two or more wavelength ranges such as a first wavelength range and a second wavelength range which may be (or may not be) continuous, etc.
The image sensor of the first aspect may provide native support for combined wideband and narrowband image sensors. For example, the image sensor may control the analog gain and/or the exposure time period per color channel. In some embodiments, dedicated ADCs may be provided so each channel may receive a well utilized signal dynamic range.
In particular, controlling the exposure time and/or the analog gain of different (color) channels, independently, may provide an advantage over conventional devices (as discussed, the conventional image sensors use the same analog gain/exposure time for all channels of the color filter and usually a shared ADC, as area and power critical component, and thus they impose a trade-off which decreases the image quality). Consequently, the image sensor of the first aspect can achieve better quality images and video (for example, more details, less noise, richer colors, etc.) in challenging low light use cases, in high dynamic range use cases, etc.
For example, in some embodiments, the wide band pixels may be saturated before the narrow band pixels. Moreover, the exposure time may be set to accommodate the brightest part of the scene. Furthermore, the narrow band pixels in dark regions may therefore only receive a very low signal and may further have poor Peak SNR (PSNR) performance.
In some embodiments, an image of a regular scene captured in an ambient light condition (e.g., a day light condition) , or a high intensity light condition may also be improved using the disclosed devices and methods of the invention.
In an implementation form of the first aspect, a third set of the optical filters is associated with pixels of a third channel, each optical filter being configured to pass light in a third wavelength range, and the image sensor is further configured to control an exposure time period and/or an analog gain of the pixels of the third channel independently from the pixels of the first channel and/or the pixels of the second channel.
The present disclosure is not limited to a specific number of channels or a specific type (e.g., color type) of channels. For example, there may be any number or combination of the narrowband and/or the wideband channels.
In an implementation form of the first aspect, a fourth set of the optical filters is associated with pixels of a fourth channel, each optical filter being configured to pass light in a fourth wavelength range, and the image sensor is further configured to control an exposure time period and/or an analog gain of the pixels of the fourth channel independently from the pixels of the first channel and/or the pixels of the second channel and/or the pixels of the third channel.
In a further implementation form of the first aspect, each of the plurality of pixels is configured to detect light passed through its associated optical filter.
In a further implementation form of the first aspect, the first set of the optical filters and/or the second set of the optical filters is based on a narrow band optical filter comprising at least one of:
- a Blue color optical filter,
- a Green color optical filter,
- a Red color optical filter.
In a further implementation form of the first aspect, the third set of the optical filters is based on a wide band optical filter comprising at least one of:
- a White color optical filter, in particular a clear or a panchromatic optical filter,
- a Cyan optical filter,
- a Magenta optical filter,
- a Yellow color optical filter.
In a further implementation form of the first aspect, the 2D array comprises a plurality of rows and a plurality of columns, wherein a reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row, and/or a reset wire for odd pixels of a determined column is disconnected from a reset wire for even pixels of that determined column.
The present disclosure is not limited to a specific configuration (e.g., arrangement, connecting or disconnecting) of the reset wires. For example, although, in some embodiments, the reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row (or a determined column) . In some other embodiments, any other configuration of the reset wires may be used which may be based on a repeating arrangement, a random arrangement (randomly connecting or  disconnecting the reset wires, a specific arrangement for a specific application or channel combination, etc. ) may be used, as it is generally known to the skilled person.
In a further implementation form of the first aspect, the 2D array comprises a plurality of quad-pixels comprising a first set of quad-pixels associated with a first channel and a second set of quad-pixels associated with a second channel, wherein a reset wire for the first set of quad-pixels is disconnected from a reset wire for the second set of quad-pixels.
With the above implementation forms, an improved image sensor with improved image quality in low light situations, in the shadow regions of a daylight scene, etc., can be provided.
In a further implementation form of the first aspect, the image sensor further comprises an Analog-to-Digital Converter (ADC) configured to provide a first analog gain to the first channel and a second analog gain to the second channel, and/or a first ADC associated with the first channel and a second ADC associated with the second channel.
In some embodiments, each pixel may be associated with (e.g., may include) an ADC, while in other embodiments, a plurality of pixels may be connected to one or a plurality of ADCs.
In a further implementation form of the first aspect, the image sensor further comprises at least one multiplexer configured to obtain a plurality of input signals from one ADC and/or from a plurality of ADCs, and output digital number pixel values based on multiplexing the obtained input signals.
With the above implementation forms, an improved image sensor with improved image quality in low light situations, in the shadow regions of a daylight scene, etc., can be provided.
A second aspect of the invention provides a method of operating an image sensor, wherein the image sensor comprises a plurality of pixels arranged in a 2D array, the plurality of pixels comprising a first set of pixels associated with a first channel and a second set of pixels associated with a second channel, and a plurality of optical filters, wherein each optical filter is associated with one of the plurality of pixels, the plurality of optical filters comprising a first set of optical filters and a second set of optical filters, wherein the first set of optical filters is associated with the pixels of the first channel, each optical filter of the first set being configured to pass light in a first wavelength range, wherein the second set of the optical filters is associated with the pixels of the second channel, each optical filter of the second set being configured to pass light in a second wavelength range, and wherein the method comprises controlling an exposure time period and/or an analog gain of the pixels of the first channel independently from the pixels of the second channel.
In an implementation form of the second aspect, a third set of the optical filters is associated with pixels of a third channel, each optical filter being configured to pass light in a third wavelength range, and the method comprises controlling an exposure time period and/or an analog gain of the pixels of the third channel independently from the pixels of the first channel and/or the pixels of the second channel.
In an implementation form of the second aspect, a fourth set of the optical filters is associated with pixels of a fourth channel, each optical filter being configured to pass light in a fourth wavelength range, and the method comprises controlling an exposure time period and/or an analog gain of the pixels of the fourth channel independently from the pixels of the first channel and/or the pixels of the second channel and/or the pixels of the third channel.
In a further implementation form of the second aspect, each of the plurality of pixels is configured to detect light passed through its associated optical filter.
In a further implementation form of the second aspect, the first set of the optical filters and/or the second set of the optical filters is based on a narrow band optical filter comprising at least one of:
- a Blue color optical filter,
- a Green color optical filter,
- a Red color optical filter.
In a further implementation form of the second aspect, the third set of the optical filters is based on a wide band optical filter comprising at least one of:
- a White color optical filter, in particular a clear or a panchromatic optical filter,
- a Cyan optical filter,
- a Magenta optical filter,
- a Yellow color optical filter.
In a further implementation form of the second aspect, the 2D array comprises a plurality of rows and a plurality of columns, wherein a reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row, and/or a reset wire for odd pixels of a determined column is disconnected from a reset wire for even pixels of that determined column.
In a further implementation form of the second aspect, the 2D array comprises a plurality of quad-pixels comprising a first set of quad-pixels associated with a first channel and a second set of quad-pixels associated with a second channel, wherein a reset wire for the first set of quad-pixels is disconnected from a reset wire for the second set of quad-pixels.
In a further implementation form of the second aspect, the method further comprises providing, by an ADC, a first analog gain to the first channel and a second analog gain to the second channel, and/or associating a first ADC with the first channel and a second ADC with the second channel.
In a further implementation form of the second aspect, the method further comprises obtaining, by at least one multiplexer, a plurality of input signals from one ADC and/or from a plurality of ADCs, and outputting digital number pixel values based on multiplexing the obtained input signals.
The method of the second aspect and its implementation forms provide the same advantages and effects as the image sensor of the first aspect and its respective implementation forms.
A third aspect of the invention provides a device comprising an image sensor according to the first aspect (and/or one of the implementation forms of the first aspect), wherein the device is configured to obtain image data of the image sensor, the image data comprising a first set of image data associated with a first channel and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data associated with a second channel and obtained based on a second exposure time period and/or a second analog gain, and perform a dynamic range normalization procedure on the first set of image data and the second set of image data considering their respective exposure time periods and/or their respective analog gains.
The device may be, or may be incorporated in, a digital camera, a digital video recorder, a mobile phone, a smartphone, an augmented reality device, a virtual reality device, a laptop, a tablet, etc.
The device may comprise a circuitry. The circuitry may comprise hardware and software. The hardware may comprise analog or digital circuitry, or both analog and digital circuitry. In some embodiments, the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors. The non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
In some embodiments, the device may comprise an ISP unit which may take this per-channel information into account when processing the obtained image data from the image sensor.
In an implementation form of the third aspect, the device is further configured to determine a longer exposure time period and a shorter exposure time period from the first and the second exposure time periods, determine an amplification gain based on a division of the longer exposure time period by the shorter exposure time period, and amplify the shorter exposure time period based on the determined amplification gain.
In particular, the amplification gain and the exposure time may be determined based on how much light a pixel receives. For example, the narrow band pixels may typically receive less light than the wide band pixels. Therefore, the narrow band pixels may use a longer exposure time (e.g., within the limits imposed by the quantity of motion in the image, the required frame rate, etc. ) and/or analog gain in order to fully exploit the dynamic range of the pixels (and therefore the best possible PSNR may be achieved) .
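For illustration only, the determination of the amplification gain from the two exposure time periods and its application to the shorter-exposure channel data may be sketched as follows (a minimal sketch assuming a linear sensor response; array and function names are hypothetical):

```python
import numpy as np

def normalize_exposures(data_a, exposure_a, data_b, exposure_b):
    """Bring two channel images captured with different exposure times onto a
    common scale by amplifying the shorter-exposure data.
    Assumes a linear sensor response; names are illustrative only."""
    if exposure_a >= exposure_b:
        gain = exposure_a / exposure_b                 # amplification gain
        return data_a.astype(np.float64), data_b.astype(np.float64) * gain
    else:
        gain = exposure_b / exposure_a
        return data_a.astype(np.float64) * gain, data_b.astype(np.float64)

# Example: wide-band channel exposed 5 ms, narrow-band channel exposed 20 ms
# wide, narrow = normalize_exposures(wide_raw, 5.0, narrow_raw, 20.0)
```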
In a further implementation form of the third aspect, the device is further configured to correct a black level offset of the first set of image data independently from a black level offset of the second set of image data.
In a further implementation form of the third aspect, the device is further configured to perform a noise reduction procedure on the first set of image data based on a first noise model, and perform a noise reduction procedure on the second set of image data based on a second noise model.
In a further implementation form of the third aspect, the device is further configured to construct a multi-channel image based on performing a demosaicing procedure on the noise-reduced first set of image data and the noise-reduced second set of image data, and convert the constructed multi-channel image to a Red/Green/Blue color space.
In some embodiments, the constructed multi-channel image may be converted to the Red/Green/Blue color space. In some embodiments, the constructed multi-channel image may be converted to any other color space which may be required for a desired application.
The device of the third aspect and its implementation forms enjoy the advantages and effects achieved by the image sensor of the first aspect and its implementation forms.
A fourth aspect of the invention provides a method for a device comprising an image sensor, wherein the method comprises obtaining image data of the image sensor, the image data comprising a first set of image data associated with a first channel and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data associated with a second channel and obtained based on a second exposure time period and/or a second analog gain, and performing a dynamic range normalization procedure on the first set of image data and the second set of image data considering their respective exposure time periods and/or their respective analog gains.
In an implementation form of the fourth aspect, the method further comprises determining a longer exposure time period and a shorter exposure time period from the first and the second exposure time periods, determining an amplification gain based on a division of the longer exposure time period by the shorter exposure time period, and amplifying the shorter exposure time period based on the determined amplification gain.
In a further implementation form of the fourth aspect, the method further comprises correcting a black level offset of the first set of image data independently from a black level offset of the second set of image data.
In a further implementation form of the fourth aspect, the method further comprises performing a noise reduction procedure on the first set of image data based on a first noise model, and performing a noise reduction procedure on the second set of image data based on a second noise model.
In a further implementation form of the fourth aspect, the method further comprises constructing a multi-channel image based on performing a demosaicing procedure on the noise-reduced first set of image data and the noise-reduced second set of image data, and converting the constructed multi-channel image to a Red/Green/Blue color space.
The method of the fourth aspect and its implementation forms provide the same advantages and effects as the device of the third aspect and its respective implementation forms.
A fifth aspect of the invention provides a computer program which, when executed by a computer, causes the method of the second aspect and/or the method of the fourth aspect and/or one of their implementation forms, to be performed.
In some embodiments, the computer program can be provided on a non-transitory computer-readable recording medium.
It has to be noted that all devices, elements, units and means described in the present application could be implemented in the software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.
BRIEF DESCRIPTION OF DRAWINGS
The above described aspects and implementation forms of the present invention will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which
FIG. 1 is a schematic view of an image sensor, according to an embodiment of the present invention.
FIG. 2 is a schematic view of a device comprising an image sensor, according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating the obtained signals of the first channel and the second channel in digital number pixel values.
FIG. 4 is a schematic view of a diagram schematically illustrating different FWCs in a pixel.
FIG. 5 is a schematic view of a diagram illustrating controlling an analog gain of the pixels of the first channel independently from the pixels of the second channel.
FIG. 6 is a flowchart of an image signal processing pipeline for processing image data per channel.
FIG. 7 is another schematic view of the image sensor, according to an embodiment of the present invention.
FIGS. 8A-C are a schematic view of an ISP module comprising a DRN unit (FIG. 8A), a diagram showing an exemplary input of the DRN unit (FIG. 8B), and a diagram showing an exemplary output of the DRN unit (FIG. 8C).
FIG. 9 is a flowchart of a method of operating an image sensor, according to an embodiment of the invention.
FIG. 10 is a flowchart of a method for a device comprising an image sensor, according to an embodiment of the invention.
FIG. 11 shows a schematic view of a conventional Bayer CFA.
FIGS. 12A-B schematically illustrate a conventional CFA comprising W/Y filters (FIG. 12A) and a conventional CFA comprising W filters (FIG. 12B).
DETAILED DESCRIPTION OF EMBODIMENTS
FIG. 1 is a schematic view of an image sensor 100, according to an embodiment of the present invention.
The image sensor 100 comprises a plurality of pixels 110 arranged in a two-dimensional array, the plurality of pixels 110 comprising a first set of pixels 111 associated with a first channel 131 and a second set of pixels 112 associated with a second channel 132.
The image sensor 100 further comprises a plurality of optical filters 120, wherein each optical filter is associated with one of the plurality of pixels 110, the plurality of optical filters 120 comprising a first set of optical filters 121 and a second set of optical filters 122,
wherein the first set of optical filters 121 is associated with the pixels 111 of the first channel 131, each optical filter of the first set 121 being configured to pass light in a first wavelength range, and wherein the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132, each optical filter of the second set 122 being configured to pass light in a second wavelength range.
The wavelength range may be, for example, a specific wavelength, a range of continuous wavelengths (i.e., from a first wavelength to a second wavelength), a combination of two or more wavelength ranges such as a first wavelength range and a second wavelength range which may (or may not) be continuous, etc.
The image sensor 100 is configured to control an exposure time period and/or an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
The first channel and/or the second channel may be color channels, etc., without limiting the present disclosure. Moreover, the image sensor 100 may be configured such that the analog gain/exposure time in the image sensor is controlled per (color) channel, and dedicated ADCs for different channels may be allocated such that each receives a well-utilized signal dynamic range.
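Purely by way of illustration, such independent per-channel control may be thought of as a small set of per-channel acquisition parameters; the following sketch uses hypothetical names and is not part of any real sensor driver interface:

```python
from dataclasses import dataclass

@dataclass
class ChannelControl:
    """Hypothetical per-channel acquisition settings for the image sensor."""
    exposure_time_us: float   # integration time for this channel's pixels
    analog_gain: float        # conversion gain applied before the ADC

# Example: narrow-band channels integrate longer than the wide-band channel
channel_settings = {
    "R": ChannelControl(exposure_time_us=16000.0, analog_gain=2.0),
    "B": ChannelControl(exposure_time_us=16000.0, analog_gain=2.0),
    "W": ChannelControl(exposure_time_us=4000.0,  analog_gain=1.0),
}
```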
The image sensor may comprise a circuitry (not shown in FIG. 1). The circuitry may comprise hardware and software. The hardware may comprise analog or digital circuitry, or both analog and digital circuitry. In some embodiments, the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors. The non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
Reference is now made to FIG. 2, which is a schematic view of a device 200 comprising an image sensor 100, according to an embodiment of the present invention.
The device 200 is configured to obtain image data 210 of the image sensor 100, the image data 210 comprising a first set of image data 211 associated with a first channel 131 and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data 212 associated with a second channel 132 and obtained based on a second exposure time period and/or a second analog gain.
The device 200 is further configured to perform a dynamic range normalization procedure on the first set of image data 211 and the second set of image data 212 considering their respective exposure time periods and/or their respective analog gains.
The device 200 may be, for example, an imaging device such as a digital camera comprising the image sensor 100.
The device 200 may comprise, for example, a specific ISP that takes this per-channel information into account when processing the image data obtained from the image sensor 100.
The device 200 may comprise a circuitry (not shown in FIG. 2) . The circuitry may comprise hardware and software. The hardware may comprise analog or digital circuitry, or both analog and digital circuitry. In some embodiments, the circuitry comprises one or more processors and a non-volatile memory connected to the one or more processors. The non-volatile memory may carry executable program code which, when executed by the one or more processors, causes the device to perform the operations or methods described herein.
Reference is now made to FIG. 3, which is a diagram 300 illustrating obtained signals of the first channel 131 and the second channel 132 in digital number pixel values.
The signals may be obtained by the image sensor 100 and/or the device 200 (comprising the image sensor 100) without limiting the present disclosure to a specific device or a specific configuration.
Diagram 300 of FIG. 3 illustrates the digital number pixel values for an R signal 301, a B signal 302 and a W signal 304.
As can be derived from diagram 300 of FIG. 3, both the wideband channel (e.g., a channel corresponding to the W signal 304) and the narrowband channels (e.g., channels corresponding to the R signal 301 and the B signal 302) are well exposed to achieve a good SNR at all times (i.e., exposure times), by using the full dynamic range of the pixel’s Full-Well Capacity (FWC) and a subsequent ADC (e.g., in the image sensor 100 and/or the device 200).
Moreover, from the diagram 300 of FIG. 3 it may be derived that the image sensor 100 and/or the device 200 control the exposure time and/or the analog gain of different (color) channels independently, and hence may provide an advantage over conventional devices (as discussed, conventional image sensors use the same analog gain/exposure time for all channels of the color filter and usually a shared ADC, as an area- and power-critical component, and thus impose the discussed trade-off).
Reference is now made to FIG. 4, which is a schematic view of the diagram 400 illustrating different FWCs in a pixel.
In diagram 400 it is assumed that the first channel 131 and the second channel 132 have the same FWC, but different fill level due to different exposure time.
For example, different FWCs may imply different pixel sizes. Moreover, the color filter (narrow band vs. wide band) may affect the rate at which the pixels are filled with electrons. Also, the exposure time determines for how long the pixels are allowed to be filled before their content is read. The exposure time and the analog gain may determine how the number of collected electrons is converted into a digital value.
Although FIG. 4 shows pixels of different FWC, in some embodiments a pixel of a given FWC may be filled up at different speeds and may further be read out at different times (moments). Moreover, different gains may be used in order to produce the best possible digital signal.
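As a rough illustrative model only (assuming a linear pixel response and ignoring noise), the interplay between fill rate, exposure time, FWC and analog gain may be sketched as follows; all names and numbers are hypothetical:

```python
import numpy as np

def pixel_digital_number(photo_rate_e_per_s, exposure_s, fwc_e,
                         analog_gain, adc_bits=12):
    """Toy model: electrons accumulate at photo_rate until the exposure ends
    or the full-well capacity (FWC) is reached; the analog gain then maps the
    collected charge onto the ADC's digital-number range."""
    electrons = min(photo_rate_e_per_s * exposure_s, fwc_e)   # fill, clipped at FWC
    dn_max = 2 ** adc_bits - 1
    dn = electrons / fwc_e * analog_gain * dn_max             # linear conversion
    return int(np.clip(dn, 0, dn_max))

# Example (hypothetical numbers): a pixel collecting 3e5 e/s for 16 ms with a
# 10,000 e FWC and unity gain yields a mid-range digital value.
# pixel_digital_number(3e5, 0.016, 1e4, 1.0)  # -> 1965
```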
For example, the image sensor 100 and/or the device 200 (comprising the image sensor 100) may control the exposure time period of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132, without limiting the present disclosure to a specific device or a specific configuration.
In the diagram 400 of FIG. 4, the first channel 131 is a channel whose FWC is well utilized (close to saturation), whereas the second channel 132 is a channel whose FWC is under-utilized.
The image sensor 100 and/or the device 200 may enable the exposure (integration) time of each channel 131, 132 to be independently controlled, and may further ensure that the electron FWC in each pixel is utilized well, while also ensuring that the channels do not overflow.
Reference is now made to FIG. 5, which is a schematic view of the diagram 500 illustrating controlling an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
For example, the image sensor 100 and/or the device 200 (comprising the image sensor 100) may control the analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132, without limiting the present disclosure to a specific device or a specific configuration.
The image sensor 100 further comprises the ADC 501. The ADC 501 is configured to control, independently, the analog (conversion) gain of each channel (e.g., the analog gain of the first channel 131 is controlled independently from the analog gain of the second channel 132). This ensures that, for example, the signal supplied to the ADC 501 will utilize its full digital number (DN) range.
The present disclosure is not limited to a specific number of ADCs or a specific configuration, although in FIG. 5 one ADC 501 is used. In some embodiments, it might be necessary to introduce a dedicated ADC per channel, or to introduce one ADC that supports multiple conversion gains, in order to read out and convert pixels fast enough, or the like.
Furthermore, controlling the analog gains independently, as shown in diagram 500 of FIG. 5, may be used in combination with controlling the exposure times independently, as shown in diagram 400 of FIG. 4. For example, it may be possible to minimize the exposure time to reduce motion blur and increase the gain accordingly. Moreover, it may be possible to (always) ensure that the dynamic range of the corresponding ADC is fully utilized (e.g., 0-4095 DN for 12-bit).
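A simplistic sketch of selecting a per-channel analog gain so that the corresponding 12-bit ADC range is well utilized may look as follows (illustrative only; it assumes a linear response and a known expected peak signal at unity gain):

```python
def select_analog_gain(expected_peak_dn_at_unity_gain, adc_bits=12,
                       headroom=0.95, max_gain=16.0):
    """Pick a per-channel analog gain so the expected peak signal lands near
    the top of the ADC's digital-number range without clipping."""
    dn_full_scale = 2 ** adc_bits - 1            # e.g., 4095 for a 12-bit ADC
    gain = headroom * dn_full_scale / max(expected_peak_dn_at_unity_gain, 1.0)
    return min(max(gain, 1.0), max_gain)

# Example: a short exposure chosen to limit motion blur yields a peak of
# about 600 DN at unity gain; a gain of roughly 6.5 brings it near full scale.
# select_analog_gain(600.0)
```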
Reference is now made to FIG. 6, which is a schematic view of a flowchart 600 of an image signal processing pipeline for processing image data per channel.
The flowchart 600 shows an exemplary ISP. The present disclosure is not limited to a specific ISP, and in some other embodiments, the device 200 may include a similar or another type of ISP.
The ISP may be provided in the device 200, which may comprise the image sensor 100. The device 200 further comprises the firmware 3A (which may, for example, provide low-level control for the image sensor 100 related to an Auto-Exposure (AE) part of the image sensor control).
The ISP module of the device 200 is specifically adapted for the image sensor 100, for example because the quantized color channel signals in DNs may be provided such that different channels have independent (and possibly different) dynamic ranges.
At step S601, the device 200 performs raw data correction on the obtained image data 210 of the image sensor 100. The image data 210 comprises the first set of image data 211 associated with the first channel 131 and obtained based on the first exposure time period and/or the first analog gain, and the second set of image data 212 associated with the second channel 132 and obtained based on the second exposure time period and/or the second analog gain.
The raw data correction comprises correcting the black level offset individually per channel (for example, the correction may substantially reduce the black level). For instance, the device 200 may correct a black level offset of the first set of image data 211 independently from a black level offset of the second set of image data 212.
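For illustration only, such a per-channel black level correction may be sketched as follows (array names and offset values are hypothetical):

```python
import numpy as np

def correct_black_level(channel_data, black_level):
    """Subtract the per-channel black level offset and clip at zero."""
    return np.clip(channel_data.astype(np.int32) - int(black_level), 0, None)

# Each channel gets its own offset because it was captured with its own
# exposure time and analog gain:
# first_corrected  = correct_black_level(first_set_raw,  black_level=64)
# second_corrected = correct_black_level(second_set_raw, black_level=60)
```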
At step S602, the device 200 performs a white balancing procedure (e.g., independently on the corrected first set of image data 211 and the corrected second set of image data 212) .
At step S603, the device 200 performs a lens correction procedure.
At step S604, the device 200 performs a noise reduction procedure.
The device 200 may estimate the correct per channel noise model in the denoising algorithm. For instance, the device 200 may perform a noise reduction procedure on the first set of image data 211 based on a first noise model, and perform a noise reduction procedure on the second set of image data 212 based on a second noise model.
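For illustration, a per-channel noise model is commonly parameterized by a signal-dependent term and a constant term; the following sketch estimates the expected noise standard deviation per channel under that assumption (all parameter values are hypothetical):

```python
import numpy as np

def noise_sigma(signal_dn, gain_term, read_term):
    """Signal-dependent noise model: variance = gain_term * signal + read_term."""
    return np.sqrt(gain_term * np.asarray(signal_dn, dtype=np.float64) + read_term)

# Different channels get different model parameters because they were captured
# with different exposure times and analog gains:
# sigma_first  = noise_sigma(first_corrected,  gain_term=0.8, read_term=4.0)
# sigma_second = noise_sigma(second_corrected, gain_term=0.3, read_term=2.5)
# A denoiser may then weight pixels per channel, e.g., by 1 / sigma**2.
```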
At step S605, the device 200 performs a dynamic range correction procedure.
For example, the device 200 may re-adjust gains to equalize all channels to a common dynamic range at an appropriate stage (for example, at step S605 or S602). Furthermore, it may be more efficient to expand/compress the bit depth at specific stages in the ISP to support the required total dynamic range across the channels than to do so in the sensor from the start.
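A minimal sketch of such a re-adjustment, equalizing the channels to a common dynamic range using their known exposure times and analog gains, may look as follows (illustrative only; assumes linear channel data):

```python
def equalize_channel(channel_data, exposure, analog_gain,
                     ref_exposure, ref_analog_gain):
    """Rescale a channel so all channels refer to the same exposure and gain."""
    scale = (ref_exposure * ref_analog_gain) / (exposure * analog_gain)
    return channel_data * scale

# Example: express the wide-band channel as if it had been captured with the
# narrow-band channel's (reference) exposure time and analog gain:
# equalized_w = equalize_channel(w_data, exposure=4.0,  analog_gain=1.0,
#                                ref_exposure=16.0, ref_analog_gain=2.0)
```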
At step S606, the device 200 performs a demosaicing procedure (i.e., color interpolation).
For example, the device 200 may reconstruct a correct full resolution multi-channel image during the demosaicing procedure.
At step S607, the device 200 creates a 3×3 color matrix.
Creating the 3×3 color matrix may comprise a conversion to standard RGB color space.
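In essence, applying the 3×3 color matrix is a per-pixel matrix multiplication; the following sketch illustrates this (the matrix coefficients shown are placeholders, not calibrated values):

```python
import numpy as np

# Placeholder 3x3 color correction matrix (rows produce output R, G, B).
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

def apply_color_matrix(image_hwc, matrix):
    """Apply a 3x3 color matrix to an H x W x 3 image."""
    return np.einsum('hwc,oc->hwo', image_hwc.astype(np.float64), matrix)

# rgb = apply_color_matrix(demosaiced_image, ccm)
```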
At step S608, the device 200 applies a Gamma LUT (Look-Up Table).
At step S609, the device 200 performs a sharpening procedure.
At step S610, the device 200 performs a digital image stabilization procedure.
At step S611, the device 200 converts from the RGB color space to YUV (444/422) .
Reference is made to FIG. 7 which is another schematic view of the image sensor 100, according to an embodiment of the present invention.
In FIG. 7, the image sensor 100 is, by way of example, based on a CMOS active pixel sensor array comprising a 2×2 CFA pattern.
The first set of pixels 111 is associated with the first channel 131, and the first set of optical filters 121 is associated with the pixels 111 of the first channel 131.
Moreover, the second set of pixels 112 is associated with the second channel 132, and the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132.
The image sensor 100 further comprises, by way of example, even/odd RST reset wires for alternate pixels in a row, so that the integration time for pixels belonging to different color channels can be individually controlled.
The image sensor 100 further comprises separate even/odd COL column select wires so that alternate pixel signals in a column can be routed to a dedicated ADC.
The image sensor 100 further comprises four dedicated ADCs, one for each pixel in the 2×2 pattern, so that each color channel’s analog gain can be individually controlled. The ADC 801 (ADC 11) is dedicated to a pixel of the first channel 131. The ADC 802 (ADC 21) is dedicated to a pixel of the second channel 132.
The image sensor 100 further comprises a multiplexer 803 configured to combine the dedicated ADC’s output of the pixel DN values in a sensor readout order.
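Conceptually, the multiplexer interleaves the per-ADC digital number streams back into the sensor readout order; a software analogy of this recombination for a 2×2 CFA pattern may be sketched as follows (illustrative only; names are hypothetical):

```python
import numpy as np

def interleave_quad_adcs(adc11, adc12, adc21, adc22):
    """Recombine four per-channel ADC outputs (each H/2 x W/2) into a single
    H x W mosaic in sensor readout order, following a 2x2 CFA pattern."""
    h2, w2 = adc11.shape
    mosaic = np.empty((2 * h2, 2 * w2), dtype=adc11.dtype)
    mosaic[0::2, 0::2] = adc11   # top-left pixel of each 2x2 cell
    mosaic[0::2, 1::2] = adc12   # top-right
    mosaic[1::2, 0::2] = adc21   # bottom-left
    mosaic[1::2, 1::2] = adc22   # bottom-right
    return mosaic
```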
In some embodiments, the image sensor 100 may comprise a first set of quad-pixels which may be associated with the first channel and a second set of quad-pixels which may be associated with the second channel. Moreover, a reset wire for the first set of quad-pixels may be disconnected from a reset wire for the second set of quad-pixels.
Reference is now made to FIG. 8A, FIG. 8B and FIG. 8C, which show a schematic view of an ISP module comprising a dynamic range normalization unit (FIG. 8A), a diagram 820b showing an exemplary input of the DRN unit (FIG. 8B), and a diagram 820c showing an exemplary output of the DRN unit (FIG. 8C).
The ISP module illustrated in FIG. 8A may be provided in the device 200. The ISP of the device 200 may consider the difference in exposure versus quantization that is received by the image sensor 100.
The ISP module of the device 200 comprises the white balance unit 801, the Dynamic Range Normalization (DRN) module 802 and the demosaic unit 803.
The DRN module 802 transforms the differently obtained image data (e.g., so that it appears as if it were coming from a CMOS sensor with a single ADC), and it may further provide a higher bit depth/dynamic range.
The DRN input comprises DN values of different channels, comprising the R channel (indicated with 821b) having exposure time E2, the B channel (indicated with 822b) having exposure time E3, and the W channel (indicated with 824b) having exposure time E1. Although FIGS. 8B and 8C are exemplarily illustrated for the R/B/W channels, the present disclosure is not limited to the R/B/W channels, and in some other embodiments any channels may be used.
As can be derived from diagram 820c (DRN output), the least sensitive channel (822c), which saturates at the longest exposure E3, is not amplified. Moreover, the channel (821c) with higher sensitivity that saturates at the shorter exposure (E2) is amplified by a gain E3/E2, and the channel (824c) with higher sensitivity that saturates at the shortest exposure (E1) is amplified by a gain E3/E1.
The bit width of the DRN output (in diagram 820c) is increased (this example assumes E3/E1 ≤ 4, so 2 bits are added to the 14-bit input), and the downstream ISP modules are adapted accordingly and can continue processing the image signal with known algorithms.
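For illustration only, the DRN amplification and the accompanying bit-width expansion may be sketched as follows (assuming linear 14-bit input data and the exposure times E1 ≤ E2 ≤ E3 described above; names are hypothetical):

```python
import numpy as np

def dynamic_range_normalize(channels, exposures, in_bits=14):
    """Amplify each channel by E_longest / E_channel so all channels share a
    common scale; the output needs extra bits to hold the amplified values."""
    e_longest = max(exposures.values())
    max_ratio = e_longest / min(exposures.values())
    extra_bits = int(np.ceil(np.log2(max_ratio)))        # e.g., 2 bits if ratio <= 4
    out_max = (2 ** (in_bits + extra_bits)) - 1
    out = {}
    for name, data in channels.items():
        gain = e_longest / exposures[name]               # 1.0 for the longest exposure
        out[name] = np.clip(np.round(data.astype(np.float64) * gain), 0, out_max)
    return out, in_bits + extra_bits

# normalized, out_bits = dynamic_range_normalize(
#     {"R": r_dn, "B": b_dn, "W": w_dn},
#     {"R": 8.0, "B": 16.0, "W": 4.0})   # here E3=16 (B), E2=8 (R), E1=4 (W)
```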
FIG. 9 shows a method 900 of operating an image sensor.
The method 900 may be a method of operating the image sensor 100 as described above, and/or of operating an image sensor incorporated in another device, such as the device 200, as described above.
Without limiting the present disclosure, in the following the steps of method 900 are exemplarily discussed as a method of operating the image sensor 100, wherein the image sensor 100 comprises a plurality of pixels 110 arranged in a 2D array, the plurality of pixels 110 comprising a first set of pixels 111 associated with a first channel 131 and a second set of pixels 112 associated with a second channel 132, and a plurality of optical filters 120, wherein each optical filter is associated with one of the plurality of pixels 110, the plurality of optical filters 120 comprising a first set of optical filters 121 and a second set of optical filters 122, wherein the first set of optical filters 121 is associated with the pixels 111 of the first channel 131, each optical filter of the first set 121 being configured to pass light in a first wavelength range, and wherein the second set of the optical filters 122 is associated with the pixels 112 of the second channel 132, each optical filter of the second set 122 being configured to pass light in a second wavelength range.
The method 900 comprises a step 901 of controlling an exposure time period and/or an analog gain of the pixels 111 of the first channel 131 independently from the pixels 112 of the second channel 132.
FIG. 10 shows a method 1000 according to an embodiment of the invention for a device 200 comprising an image sensor 100. The method 1000 may be carried out by the device 200, as it is described above.
The method 1000 comprises a step 1001 of obtaining image data 210 of the image sensor 100, the image data 210 comprising a first set of image data 211 associated with a first channel 131 and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data 212 associated with a second channel 132 and obtained based on a second exposure time period and/or a second analog gain.
The method 1000 further comprises a step 1002 of performing a dynamic range normalization procedure on the first set of image data 211 and the second set of image data 212 considering their respective exposure time periods and/or their respective analog gains.
The present invention has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those skilled in the art and practicing the claimed invention, from studies of the drawings, this disclosure and the independent claims. In the claims as well as in the description, the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.

Claims (17)

  1. An image sensor (100) comprising:
    a plurality of pixels (110) arranged in a two dimensional, 2D, array, the plurality of pixels (110) comprising a first set of pixels (111) associated with a first channel (131) and a second set of pixels (112) associated with a second channel (132) , and
    a plurality of optical filters (120) , wherein each optical filter is associated with one of the plurality of pixels (110) , the plurality of optical filters (120) comprising a first set of optical filters (121) and a second set of optical filters (122) ,
    wherein the first set of optical filters (121) is associated with the pixels (111) of the first channel (131) , each optical filter of the first set (121) being configured to pass light in a first wavelength range,
    wherein the second set of the optical filters (122) is associated with the pixels (112) of the second channel (132) , each optical filter of the second set (122) being configured to pass light in a second wavelength range, and
    wherein the image sensor (100) is configured to control an exposure time period and/or an analog gain of the pixels (111) of the first channel (131) independently from the pixels (112) of the second channel (132) .
  2. The image sensor (100) according to claim 1, wherein:
    a third set of the optical filters is associated with pixels of a third channel, each optical filter being configured to pass light in a third wavelength range, and
    the image sensor (100) is further configured to control an exposure time period and/or an analog gain of the pixels of the third channel independently from the pixels (111) of the first channel (131) and/or the pixels (112) of the second channel (132) .
  3. The image sensor (100) according to claim 1 or 2, wherein:
    each of the plurality of pixels (111, 112) is configured to detect light that passed through its associated optical filter (121, 122) .
  4. The image sensor (100) according to one of the claims 1 to 3, wherein:
    the first set (121) of the optical filters and/or the second set (122) of the optical filters is based on a narrow band optical filter comprising at least one of:
    - a Blue color optical filter,
    - a Green color optical filter,
    - a Red color optical filter.
  5. The image sensor (100) according to one of the claims 2 to 4, wherein:
    the third set of the optical filters is based on a wide band optical filter comprising at least one of:
    - a White color optical filter, in particular a clear or a panchromatic optical filter,
    - a Cyan optical filter,
    - a Magenta optical filter,
    - a Yellow color optical filter.
  6. The image sensor (100) according to one of the claims 1 to 5, wherein:
    the 2D array comprises a plurality of rows and a plurality of columns,
    a reset wire for odd pixels of a determined row is disconnected from a reset wire for even pixels of that determined row, and/or
    a reset wire for odd pixels of a determined column is disconnected from a reset wire for even pixels of that determined column.
  7. The image sensor (100) according to one of the claims 1 to 5, wherein:
    the 2D array comprises a plurality of quad-pixels comprising a first set of quad-pixels associated with a first channel (131) and a second set of quad-pixels associated with a second channel (132) , wherein
    a reset wire for the first set of quad-pixels is disconnected from a reset wire for the second set of quad-pixels.
  8. The image sensor (100) according to one of the claims 1 to 7, further comprising:
    an Analog-to-Digital Converter (501) , ADC, configured to provide a first analog gain to the first channel (131) and a second analog gain to the second channel (132) , and/or
    a first ADC (801) associated with the first channel (131) and a second ADC (802) associated with the second channel (132) .
  9. The image sensor (100) according to claim 8, further comprising:
    at least one multiplexer (803) configured to obtain a plurality of input signals from one ADC (501) and/or from a plurality of ADCs (801, 802) , and
    output digital number pixel values based on multiplexing the obtained input signals.
  10. A device (200) comprising an image sensor (100) according to one of the claims 1 to 9, wherein the device (200) is configured to:
    obtain image data (210) of the image sensor (100) , the image data (210) comprising a first set of image data (211) associated with a first channel (131) and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data (212) associated with a second channel (132) and obtained based on a second exposure time period and/or a second analog gain, and
    perform a dynamic range normalization procedure on the first set of image data (211) and the second set of image data (212) considering their respective exposure time periods and/or their respective analog gains.
  11. The device (200) according to claim 10, further configured to:
    determine a longer exposure time period and a shorter exposure time period from the first and the second exposure time periods,
    determine an amplification gain based on a division of the longer exposure time period by the shorter exposure time period, and
    amplify the shorter exposure time period based on the determined amplification gain.
  12. The device (200) according to one of the claims 10 to 11, further configured to:
    correct a black level offset of the first set of image data (211) independently from a black level offset of the second set of image data (212) .
  13. The device (200) according to one of the claims 10 to 12, further configured to:
    perform a noise reduction procedure on the first set of image data (211) based on a first noise model, and
    perform a noise reduction procedure on the second set of image data (212) based on a second noise model.
  14. The device (200) according to claim 13, further configured to:
    construct a multi-channel image based on performing a demosaicing procedure on the noise-reduced first set of image data and the noise-reduced second set of image data, and
    convert the constructed multi-channel image to a Red/Green/Blue color space.
  15. A method (900) of operating an image sensor (100) ,
    wherein the image sensor (100) comprises:
    a plurality of pixels (110) arranged in a two dimensional, 2D, array, the plurality of pixels (110) comprising a first set of pixels (111) associated with a first channel (131) and a second set of pixels (112) associated with a second channel (132) , and
    a plurality of optical filters (120) , wherein each optical filter is associated with one of the plurality of pixels (110) , the plurality of optical filters (120) comprising a first set of optical filters (121) and a second set of optical filters (122) ,
    wherein the first set of optical filters (121) is associated with the pixels (111) of the first channel (131) , each optical filter of the first set (121) being configured to pass light in a first wavelength range,
    wherein the second set of the optical filters (122) is associated with the pixels (112) of the second channel (132) , each optical filter of the second set (122) being configured to pass light in a second wavelength range, and
    wherein the method (900) comprises:
    controlling (901) an exposure time period and/or an analog gain of the pixels (111) of the first channel (131) independently from the pixels (112) of the second channel (132) .
  16. A method (1000) for a device (200) comprising an image sensor (100) , wherein the method (1000) comprises:
    obtaining (1001) image data (210) of the image sensor (100) , the image data (210) comprising a first set of image data (211) associated with a first channel (131) and obtained based on a first exposure time period and/or a first analog gain, and a second set of image data (212) associated with a second channel (132) and obtained based on a second exposure time period and/or a second analog gain, and
    performing (1002) a dynamic range normalization procedure on the first set of image data (211) and the second set of image data (212) considering their respective exposure time periods and/or their respective analog gains.
  17. A computer program which, when executed by a computer, causes the method (900) of claim 15 and/or the method (1000) of claim 16 to be performed.
PCT/CN2020/071160 2020-01-09 2020-01-09 Image sensor and device comprising an image sensor WO2021138869A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/071160 WO2021138869A1 (en) 2020-01-09 2020-01-09 Image sensor and device comprising an image sensor
EP20911692.0A EP4078943A4 (en) 2020-01-09 2020-01-09 Image sensor and device comprising an image sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/071160 WO2021138869A1 (en) 2020-01-09 2020-01-09 Image sensor and device comprising an image sensor

Publications (1)

Publication Number Publication Date
WO2021138869A1 true WO2021138869A1 (en) 2021-07-15

Family

ID=76787709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071160 WO2021138869A1 (en) 2020-01-09 2020-01-09 Image sensor and device comprising an image sensor

Country Status (2)

Country Link
EP (1) EP4078943A4 (en)
WO (1) WO2021138869A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100265370A1 (en) 2009-04-15 2010-10-21 Mrityunjay Kumar Producing full-color image with reduced motion blur
US20110032395A1 (en) 2005-10-03 2011-02-10 Konica Minolta Photo Imaging, Inc. Imaging unit and image sensor
US20110069200A1 (en) * 2009-09-22 2011-03-24 Samsung Electronics Co., Ltd. High dynamic range image generating apparatus and method
CN102348075A (en) * 2010-07-23 2012-02-08 美商豪威科技股份有限公司 Image sensor with dual element color filter array and three channel color output
US20150029355A1 (en) * 2013-07-25 2015-01-29 Samsung Electronics Co., Ltd. Image sensors and imaging devices including the same
US20150130967A1 (en) * 2013-11-13 2015-05-14 Nvidia Corporation Adaptive dynamic range imaging
US20150244916A1 (en) * 2014-02-21 2015-08-27 Samsung Electronics Co., Ltd. Electronic device and control method of the same
CN108269811A (en) * 2016-12-30 2018-07-10 豪威科技股份有限公司 High dynamic range color image sensor and related methods
US20190259795A1 (en) * 2018-02-21 2019-08-22 SK Hynix Inc. Image sensing device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4618342B2 (en) * 2008-05-20 2011-01-26 日本テキサス・インスツルメンツ株式会社 Solid-state imaging device
CN110649057B (en) * 2019-09-30 2021-03-05 Oppo广东移动通信有限公司 Image sensor, camera assembly and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4078943A4

Also Published As

Publication number Publication date
EP4078943A4 (en) 2023-01-18
EP4078943A1 (en) 2022-10-26

Similar Documents

Publication Publication Date Title
US10122951B2 (en) Imaging apparatus, imaging system, and image processing method
JP6584131B2 (en) Imaging apparatus, imaging system, and signal processing method
US10321081B2 (en) Solid-state imaging device
US9344637B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
JP5935876B2 (en) Image processing apparatus, imaging device, image processing method, and program
US9287316B2 (en) Systems and methods for mitigating image sensor pixel value clipping
US7777804B2 (en) High dynamic range sensor with reduced line memory for color interpolation
KR100617781B1 (en) Apparatus and method for improving image quality in a image sensor
US6734905B2 (en) Dynamic range extension for CMOS image sensors
US9793306B2 (en) Imaging systems with stacked photodiodes and chroma-luma de-noising
US8508631B2 (en) Pixel defect detection and correction device, imaging apparatus, pixel defect detection and correction method, and program
US8089533B2 (en) Fixed pattern noise removal circuit, fixed pattern noise removal method, program, and image pickup apparatus
EP3618430B1 (en) Solid-state image capturing device and electronic instrument
JP2008187249A (en) Solid-state imaging apparatus
US10229475B2 (en) Apparatus, system, and signal processing method for image pickup using resolution data and color data
JP2016213740A (en) Imaging apparatus and imaging system
JP5917160B2 (en) Imaging apparatus, image processing apparatus, image processing method, and program
WO2021138869A1 (en) Image sensor and device comprising an image sensor
CN108432239B (en) Solid-state imaging device, method for driving solid-state imaging device, and electronic apparatus
JP7457568B2 (en) Imaging device
WO2018079071A1 (en) Imaging device
JP4720130B2 (en) Imaging device
JP2005086630A (en) Imaging apparatus
JP2005110235A (en) Imaging apparatus
JP2023071586A (en) image sensing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911692

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020911692

Country of ref document: EP

Effective date: 20220719

NENP Non-entry into the national phase

Ref country code: DE