WO2012174752A1 - Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system - Google Patents

Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system

Info

Publication number
WO2012174752A1
WO2012174752A1 (application PCT/CN2011/076338, CN2011076338W)
Authority
WO
WIPO (PCT)
Prior art keywords
photosensitive
light
layer
pixel
spectrum
Application number
PCT/CN2011/076338
Other languages
English (en)
French (fr)
Inventor
胡笑平
Original Assignee
博立码杰通讯(深圳)有限公司
Application filed by 博立码杰通讯(深圳)有限公司
Priority to ES11868080.0T: ES2645020T3 (es)
Priority to KR1020147000198A: KR101572020B1 (ko)
Priority to HUE11868080A: HUE034756T2 (en)
Priority to PCT/CN2011/076338: WO2012174752A1 (zh)
Priority to RU2014102161A: RU2609106C2 (ru)
Priority to EP11868080.0A: EP2725616B1 (en)
Priority to US14/128,921: US9369641B2 (en)
Priority to CA2840267A: CA2840267C (en)
Priority to PL11868080T: PL2725616T3 (pl)
Priority to JP2014516163A: JP2014520397A (ja)
Publication of WO2012174752A1 (zh)

Classifications

    • H — ELECTRICITY
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H01L27/14621: Colour filter arrangements
    • H01L27/14603: Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
    • H01L27/14625: Optical elements or arrangements associated with the device
    • H01L27/14627: Microlenses
    • H01L27/14641: Electronic components shared by two or more pixel-elements, e.g. one amplifier shared by two pixel elements
    • H01L27/14647: Multicolour imagers having a stacked pixel-element structure, e.g. npn, npnpn or MQW elements
    • H01L27/14652: Multispectral infrared imagers, having a stacked pixel-element structure, e.g. npn, npnpn or MQW structures
    • H01L31/18: Processes or apparatus specially adapted for the manufacture or treatment of these devices or of parts thereof
    • H04N13/236: Image signal generators using stereoscopic image cameras using a single 2D image sensor using varifocal lenses or mirrors
    • H04N13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N23/671: Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • H04N25/46: Extracting pixel data from image sensors by controlling scanning circuits, by combining or binning pixels
    • H04N25/704: Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H04N25/705: Pixels for depth measurement, e.g. RGBZ
    • H04N25/778: Pixel circuitry comprising amplifiers shared between a plurality of pixels, i.e. at least one part of the amplifier must be on the sensor array itself

Definitions

  • Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system
  • The present invention relates to the field of light sensing, and in particular to a multi-depth-of-field photosensitive device, a photosensitive system using the device, a depth-of-field extension method, and an optical imaging system and method. Background art:
  • The present invention builds on the inventor's earlier application "Multi-spectral photosensitive device and a method for fabricating the same".
  • A photosensitive system captures a scene through an optical lens and records it with a photosensitive device such as a CMOS sensor.
  • The process of adjusting the lens so that a scene at a certain distance from the lens is imaged sharply is called focusing.
  • The point where that scene lies is called the focus point. Because "sharpness" is relative, scenes within a certain range in front of the focus point (closer to the lens) and behind it are also imaged acceptably sharply; the sum of this front and back range is called the depth of field.
  • The front depth of field is shallower than the back depth of field; that is, after precise focusing, the sharply imaged range in front of the focus point is smaller than the sharply imaged range behind it.
  • By the thin-lens imaging relation 1/U + 1/V = 1/F, where U is the object distance, F is the focal length, and V is the image distance, that is, the distance from the lens to the photosensitive device, it can be seen that dynamically adjusting the image distance is also one of the means of obtaining sharp images over a wide depth-of-field range.
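The image-distance relation just described can be made concrete with a short sketch. The thin-lens model and the 4 mm focal length below are illustrative assumptions, not values stated in the patent:

```python
# Thin-lens relation: 1/U + 1/V = 1/F, with U the object distance,
# V the image distance (lens to photosensitive device), F the focal length.

def image_distance(object_distance_mm: float, focal_length_mm: float) -> float:
    """Image distance V at which an object at distance U is in sharp focus."""
    if object_distance_mm == float("inf"):
        return focal_length_mm  # parallel rays converge at the focal plane
    return object_distance_mm * focal_length_mm / (object_distance_mm - focal_length_mm)

# Example with an assumed 4 mm lens: nearer objects focus farther behind the
# lens, which is why adjusting the image distance can bring them into focus.
v_inf = image_distance(float("inf"), 4.0)   # 4.0 mm
v_near = image_distance(500.0, 4.0)         # ~4.032 mm for an object at 50 cm
```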
  • The autofocus methods in existing photosensitive systems adopt one of the following two approaches:
  • the lens consists of a set of lens elements, and by adjusting the distances between the elements, the focal length of the lens or the distance between the lens and the photosensitive device can be adjusted (for optical zooming or focusing);
  • or the photosensitive device (e.g. the CMOS sensor) is displaced, thereby changing the image distance and achieving optical focusing.
  • Both approaches require a motor-driven mechanism and complex, precision mechanical components to drive the lens or the photosensitive device. This not only significantly increases the size of the system but also significantly increases its cost and power consumption. In many applications, such as mobile-phone photography and medical imaging, these are serious disadvantages.
  • Another approach is extended depth of focus (EDoF) technology.
  • For example, DXO uses a special lens design that focuses the red-sensitive pixels of the photosensitive device at infinity, the blue-sensitive pixels as close as possible (e.g. 50 cm), and the green-sensitive pixels at an intermediate position.
  • Taking the sharpest color channel as the main body, with the less sharp colors as auxiliary information, an image that is sharp over a wider range can be restored by computation.
  • The technical problem to be solved by the present invention is to provide a multi-depth-of-field photosensitive device, a photosensitive system using the device, a depth-of-field extension method, and an optical imaging system and method, which realize autofocus or multi-distance imaging by physical means, avoid motor-driven mechanisms and complex precision mechanical components, and provide good depth-of-field extension.
  • the present invention adopts the following technical solutions:
  • A multi-depth-of-field photosensitive device comprises at least two photosensitive pixel layers capable of sensing light, at least two of the photosensitive pixel layers being arranged at a predetermined distance interval such that a lens at a specific distance from the photosensitive device focuses different optical signals onto different photosensitive pixel layers.
  • The photosensitive pixel layers include at least one of a chemically coated photosensitive pixel layer and a semiconductor photosensitive pixel layer.
  • The chemically coated photosensitive pixel layer comprises quantum-dot photosensitive pixels.
  • The semiconductor photosensitive pixel layer comprises CMOS photodiodes, CMOS photogates, CCD photodiodes, CCD photogates, and CMOS and CCD photodiodes and photogates with bidirectional charge transfer.
  • the different optical signals include optical signals of different distances or optical signals of different wavelengths.
  • a shorter wavelength optical signal is focused onto a photosensitive pixel layer that is closer to the lens.
  • The photosensitive pixel layers may be two layers, with violet, blue, green, or cyan light focused onto the layer closer to the lens, and green, red, yellow, visible, or infrared light focused onto the layer farther from the lens.
  • The photosensitive pixel layers may be three layers, with ultraviolet, blue, or cyan light focused onto the layer closest to the lens; blue, green, red, or yellow light focused onto the middle layer; and red, yellow, visible, or infrared light focused onto the layer farthest from the lens.
  • The optical signal from a farther scene is focused onto a photosensitive pixel layer closer to the lens.
  • The photosensitive pixel layers may be two layers, with the optical signal from infinity focused onto the layer closer to the lens and the optical signal from the shortest distance of interest focused onto the layer farther from the lens.
  • Alternatively, violet, blue, green, or cyan light from infinity is focused onto the layer closer to the lens, and green, red, yellow, visible, or infrared light from the shortest distance of interest is focused onto the layer farther from the lens.
  • The photosensitive pixel layers may be three layers, with the optical signal from infinity focused onto the layer closest to the lens, the optical signal from the shortest distance of interest focused onto the layer farthest from the lens, and the optical signal from an intermediate distance between infinity and the shortest distance of interest focused onto the middle layer.
  • Alternatively, with three layers, ultraviolet, blue, or cyan light from infinity is focused onto the layer closest to the lens; red, yellow, visible, or infrared light from the shortest distance of interest is focused onto the layer farthest from the lens; and blue, green, red, or yellow light from an intermediate distance between infinity and the shortest distance of interest is focused onto the middle layer.
  • The shortest distance of interest comprises 2 mm, 5 mm, 7 mm, 1 cm, 2 cm, 3 cm, 5 cm, 7 cm, 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 100 cm, or 150 cm.
  • a light transmissive layer is disposed between at least two of the photosensitive pixel layers.
  • The photosensitive pixels in the photosensitive pixel layers are front-illuminated pixels, back-illuminated pixels, or upright photosensitive pixels.
  • The photosensitive readout takes the form of interlaced selection, time division, partition selection, or per-pixel selection.
  • The photosensitive pixels in the photosensitive pixel layers respectively sense complementary segments or sub-segments of the ultraviolet, visible, near-infrared, and far-infrared spectra; or the chemically coated photosensitive pixels and the semiconductor photosensitive pixels respectively sense orthogonal segments or sub-segments of the ultraviolet, visible, near-infrared, and far-infrared spectra.
  • The complementary spectral segments or sub-segments include the ultraviolet, blue, green, red, near-infrared, far-infrared, cyan, yellow, and white (visible) spectra, as well as near-infrared + far-infrared, red + near-infrared, red + near-infrared + far-infrared, yellow + near-infrared, yellow + near-infrared + far-infrared, visible + near-infrared + far-infrared, ultraviolet + visible, ultraviolet + visible + near-infrared, and ultraviolet + visible + near-infrared + far-infrared.
  • The orthogonal spectral segments or sub-segments include the same set: the ultraviolet, blue, green, red, near-infrared, far-infrared, cyan, yellow, and white spectra, as well as near-infrared + far-infrared, red + near-infrared, red + near-infrared + far-infrared, yellow + near-infrared, yellow + near-infrared + far-infrared, visible + near-infrared + far-infrared, ultraviolet + visible, ultraviolet + visible + near-infrared, and ultraviolet + visible + near-infrared + far-infrared.
  • The color arrangement in each photosensitive pixel layer comprises the same arrangement, horizontal arrangement, vertical arrangement, diagonal arrangement, generalized Bayer arrangement, YUV422 arrangement, lateral YUV422 arrangement, honeycomb arrangement, and cloth arrangement.
  • A filter film is disposed on the front side, back side, or both sides of some or all of the photosensitive pixels in at least one photosensitive pixel layer; the frequency-selection characteristics of the filter film include infrared cut-off filtering, blue band-pass, green band-pass, red band-pass, cyan band-pass, yellow band-pass, magenta band-pass, or visible band-pass.
  • two adjacent layers of the photosensitive pixel layer are each provided with a read circuit; or two adjacent layers of the photosensitive pixel layer share a read circuit.
  • the read circuit is an active pixel read circuit, a passive pixel read circuit, or an active pixel and passive pixel hybrid read circuit.
  • The active pixels comprise 3T, 4T, 5T, or 6T active pixels.
  • The sharing modes of the read circuit include a 4-point, 6-point, 8-point, or arbitrary N-point sharing mode, within a single layer or across upper and lower layers.
  • The read circuit includes a first merging unit that performs pairwise combined sampling of pixels in the pixel array of each photosensitive pixel layer, between adjacent-row or different-column pixels, to obtain sample data of first merged pixels; and a second merging unit that combines the sample data of the first merged pixels obtained by the first merging unit to obtain sample data of second merged pixels.
  • The read circuit further includes a third merging unit, configured to combine the sample data of the second merged pixels obtained by the second merging unit to obtain sample data of third merged pixels.
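As a rough illustration of the staged merging just described, the sketch below bins a raw pixel array in two passes, horizontal pairs first and then vertical pairs of the results. The function names and the signal-averaging choice are mine; the patent also allows charge addition and cross-color merging:

```python
import numpy as np

def first_merge(raw: np.ndarray) -> np.ndarray:
    """Stage 1: pairwise combine neighbouring pixels along each row (signal averaging)."""
    return (raw[:, 0::2] + raw[:, 1::2]) / 2.0

def second_merge(stage1: np.ndarray) -> np.ndarray:
    """Stage 2: pairwise combine the stage-1 samples along each column."""
    return (stage1[0::2, :] + stage1[1::2, :]) / 2.0

# Together the two stages amount to 2x2 binning of the raw array.
raw = np.arange(16, dtype=float).reshape(4, 4)
binned = second_merge(first_merge(raw))  # shape (2, 2)
```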
  • The pixel combining of the first merging unit or the second merging unit uses charge addition or signal averaging between pixels of the same or different colors; when pixels of different colors are combined, the combining follows a color-space transformation so as to meet the requirements of color reconstruction.
  • The color-space transformation comprises a transformation from RGB to CyYeMgX space, a transformation from RGB to YUV space, or a transformation from CyYeMgX to YUV space, where X is any one of R (red), G (green), and B (blue).
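As an illustration, a minimal RGB to YUV transform of the kind referred to above can be written as follows. The BT.601-style coefficients are a common choice and an assumption here; the patent does not fix a particular matrix:

```python
def rgb_to_yuv(r: float, g: float, b: float) -> tuple:
    """Transform one RGB sample to YUV (BT.601-style coefficients, assumed)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return (y, u, v)

# A neutral grey sample keeps its luminance and carries zero chroma,
# the property that cross-color merging must preserve for reconstruction.
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
```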
  • The charge addition is accomplished by directly connecting pixels in parallel or by simultaneously transferring their charge into a read capacitor (FD, floating diffusion).
  • The color-based merged sampling modes of the first or second merging unit include a same-color merging mode, a different-color merging mode, a hybrid merging mode, and a merging mode that selectively discards excess colors.
  • The merging modes of the first merging unit and the second merging unit are not both the same-color merging mode.
  • The location-based merged sampling modes of the first or second merging unit include at least one of the following: automatic averaging of signals output directly onto the bus; row-skipping or column-skipping mode; and sample-by-sample readout.
  • The combining methods of the third merging unit comprise at least one of a color-space conversion method and a back-end digital image scaling method.
  • A global electronic shutter with a cross-layer read function is included; the global electronic shutter comprises a plurality of non-photosensitive transfer-and-read pixels that can simultaneously transfer and read the charge or voltage values of one or more layers of photosensitive pixels.
  • The plurality of non-photosensitive transfer-and-read pixels are located either in a dedicated non-photosensitive pixel transfer-and-read layer or within the photosensitive pixel layers.
  • Each photosensitive pixel layer is provided with an adjacent non-photosensitive pixel transfer-and-read layer.
  • The non-photosensitive transfer-and-read pixels are made of semiconductor circuitry.
  • the invention also provides a depth of field expansion method, comprising:
  • A sharp image is computed from the images of different sharpness obtained from the different photosensitive pixel layers.
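One plausible reading of this step is a per-pixel selection of the locally sharper layer. The gradient-based sharpness measure below is my own illustrative choice, not a method stated in the patent:

```python
import numpy as np

def local_sharpness(img: np.ndarray) -> np.ndarray:
    """Simple sharpness proxy: gradient magnitude at each pixel."""
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def fuse_layers(layer_a: np.ndarray, layer_b: np.ndarray) -> np.ndarray:
    """Keep, per pixel, the sample from whichever layer is locally sharper."""
    mask = local_sharpness(layer_a) >= local_sharpness(layer_b)
    return np.where(mask, layer_a, layer_b)

# A layer containing a sharp edge wins over a featureless (defocused) layer.
edge = np.zeros((4, 4)); edge[:, 2:] = 10.0
flat = np.full((4, 4), 5.0)
fused = fuse_layers(edge, flat)
```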
  • the invention also provides an optical imaging method, comprising:
  • Provide a lens and a photosensitive device comprising at least two photosensitive pixel layers capable of sensing light; place the photosensitive device at a specific distance from the lens, with a predetermined distance interval between at least two of the photosensitive pixel layers, such that different optical signals from the lens are focused onto different photosensitive pixel layers.
  • the present invention also provides an optical imaging system comprising a lens and a multi-depth sensor device, the multi-depth sensor device being disposed at a specific distance from the lens, comprising at least two photosensitive pixel layers capable of sensing a light source, at least two The photosensitive pixel layers are arranged at a predetermined distance interval such that different optical signals from the lens are focused to different ones of the photosensitive pixel layers.
  • the different optical signals include optical signals of different distances or optical signals of different wavelengths.
  • a shorter wavelength optical signal is focused onto a photosensitive pixel layer that is closer to the lens.
  • the farther light signal is focused to a photosensitive pixel layer that is closer to the lens.
  • the present invention also provides a photosensitive system comprising the above-described photosensitive device.
  • the photosensitive system comprises a digital camera, a camera phone, a camera, a video or camera monitoring system, an image recognition system, a medical image system, a military, fire or downhole image system, an automatic tracking system, a stereo Imaging systems, machine vision systems, automotive vision or assisted driving systems, video game systems, webcams, infrared and night vision systems, multi-spectral imaging systems, and computer cameras.
  • The stroke of the lens must be more than 0.2 mm; that is, the difference between the image distance giving a sharp image at infinity and the image distance giving a sharp image at 10 cm is at least 0.2 mm, i.e. 200 um. It is well known that silicon and other semiconductor materials are nearly opaque: after light enters silicon, it is essentially fully absorbed within about 12 um. Therefore, even with an autofocus system, it is difficult for existing photosensitive systems to obtain a wide depth-of-field range.
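The 0.2 mm figure can be sanity-checked with the thin-lens relation. The 4 mm focal length below is an assumed, phone-typical value; slightly longer focal lengths push the required stroke past 0.2 mm:

```python
# Thin lens: 1/U + 1/V = 1/F.  The stroke is the difference between the
# image distance for focus at infinity and for focus at 10 cm.
F = 4.0                        # assumed focal length, mm
v_infinity = F                 # focus at infinity: V = F
U = 100.0                      # object at 10 cm, expressed in mm
v_near = U * F / (U - F)       # ~4.167 mm
stroke = v_near - v_infinity   # ~0.167 mm for F = 4 mm, on the order of the
                               # 0.2 mm cited, far beyond silicon's ~12 um
                               # absorption depth
```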
  • In the multi-depth-of-field photosensitive device of the present invention, the photosensitive system using it, the depth-of-field extension method, and the optical imaging system and method, at least two photosensitive pixel layers capable of sensing light are provided and arranged at predetermined distance intervals, so that different optical signals from a lens at a specific distance from the photosensitive device are focused onto different photosensitive pixel layers, and different photosensitive pixel layers thus obtain images of different depths of field.
  • The photosensitive device can be fabricated as a sensor chip, but from an application perspective, such as optical imaging, it usually needs to be used together with a lens.
  • the lens has different focusing characteristics depending on its size, material, and surface design.
  • For example, the depth of field of a fixed-focus lens usually extends from infinity to about 2 m; beyond this depth of field, autofocus technology, for example, must be used.
  • A sharp view of, for example, 50 cm to 30 cm can then be obtained by adjusting the distance from the photosensitive device to the lens, that is, by adjusting the image distance to a suitable value.
  • In the selected example the lens is a mobile-phone lens, and the photosensitive device has two photosensitive pixel layers, referred to as the first photosensitive pixel layer and the second photosensitive pixel layer. The photosensitive device is placed at a specific distance from the lens. The distance from the first photosensitive pixel layer to the lens is the first image distance, and the distance from the second photosensitive pixel layer to the lens is the second image distance; the first image distance is smaller than the second image distance.
  • The specific distance from the lens and the preset spacing between the two photosensitive pixel layers are chosen so that scenes within the depth-of-field range from infinity to 2 m are imaged sharply on the first photosensitive pixel layer, while scenes within the depth-of-field range from 50 cm to 30 cm are imaged sharply on the second photosensitive pixel layer. This achieves two depths of field, that is, a depth-of-field extension.
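The layer spacing implied by this two-layer example can be estimated with the thin-lens relation; the 4 mm focal length is again an illustrative assumption rather than a value from the patent:

```python
def image_distance(u_mm: float, f_mm: float) -> float:
    """Thin-lens image distance: 1/U + 1/V = 1/F."""
    return f_mm if u_mm == float("inf") else u_mm * f_mm / (u_mm - f_mm)

F = 4.0  # assumed focal length in mm
# First layer covers scenes from infinity to 2 m, second layer 50 cm to 30 cm.
layer1 = (image_distance(float("inf"), F), image_distance(2000.0, F))
layer2 = (image_distance(500.0, F), image_distance(300.0, F))

# The spacing between the layers is roughly the gap between the two focus
# bands, a few tens of micrometres for these numbers, which is compatible
# with stacking the layers inside one device.
gap_mm = layer2[0] - layer1[1]
```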
  • The above number of photosensitive pixel layers and depth-of-field ranges are only exemplary data.
  • The number of photosensitive pixel layers and the preset spacing between them can be adjusted to form continuous, overlapping, complementary, or orthogonal depth-of-field ranges. Together, the depth-of-field ranges of the multiple photosensitive pixel layers give the device a wide overall depth of field, enabling sharp images over a wide range without autofocus, avoiding motor-driven mechanisms and complex, sophisticated mechanical components, and saving significant space and cost.
  • Moreover, relatively complete image information can usually be obtained from at least one photosensitive pixel layer, so the image has relatively high sharpness without cumbersome mathematical computation.
  • The present invention will be described by way of examples of this novel and powerful hybrid multi-spectral photosensitive pixel group, photosensitive device, and system. These preferred embodiments merely illustrate the advantages and implementations of the invention and are not intended to limit its scope.
  • Figure 1 is a schematic diagram of a spectral distribution.
  • Visible light generally refers to light at wavelengths from 390 nm to 760 legs.
  • the wavelength of blue light seen from visible light by the spectroscopic effect of the prism is 440 - 490 nm
  • the wavelength of green light is 520 - 570 nm
  • the wavelength of red light is 630 - 740 nm.
  • in general, 390 - 500 nm is classified as blue, 500 - 610 nm as green, and 610 - 760 nm as red.
  • the division of red, green and blue is not absolute.
  • the waveforms of red, green, blue, cyan (a blue-green composite), and yellow (a green-red composite) in the figure are the ideal wavelength response curves required for primary-color or composite-color photosensitive pixels. If the primary-color photosensitive pixels, or the complementary-color (composite-color) photosensitive pixels used as primaries, do not have similar wavelength response curves, it is difficult to reconstruct most of the colors that humans can see.
  • Figure 2 is a 3T read circuit of a photosensitive pixel.
  • Figure 3 is a 4T read circuit of a photosensitive pixel.
  • Figure 4 shows the four-point shared read circuit proposed by the inventor in "A Multi-spectral Photosensitive Device and Its Sampling Method" (China Application No.: 200910105948.2) and "A Photosensitive Device and Its Reading Method and Reading Circuit" (China Application No.: 200910106477.7).
  • Figure 5 shows the two-layer six-point shared read circuit proposed in the same two applications.
  • Figure 6 shows the two-layer eight-point shared read circuit proposed in the same two applications.
  • Figure 7 shows the arbitrary N-point shared read circuit proposed by the inventor in "A Photosensitive Device and Its Reading Method and Reading Circuit" (China Application No.: 200910106477.7).
  • Figure 8 shows a structure presented in "Multispectral Photosensitive Device and Its Manufacturing Method" (China Application No.: 200810217270.2) and "Multi-spectral Photosensitive Device" (China Application No.: 200910105372.X).
  • Fig. 9 shows a subsampling method realizing charge combining between pixels of different colors, proposed in "A Multi-spectral Photosensitive Device and Its Sampling Method" (China Application No.: 200910105948.2). This method is equally applicable to the multi-depth-of-field photosensitive device of the present invention.
  • Fig. 10 shows a pixel merging and subsampling method implemented by color space transformation, proposed in "A Multi-spectral Photosensitive Device and Its Sampling Method" (China Application No.: 200910105948.2). This method is equally applicable to the multi-depth-of-field photosensitive device of the present invention.
  • Figure 11 is a schematic view of the structure of a two-layer photosensitive device for depth of field extension proposed by the present invention, wherein the thickness of the light-transmitting layer is determined by the desired image distance difference (V2-V1) between the two photosensitive planes.
  • Figure 12 is a schematic view of the structure of a three-layer photosensitive device for depth of field extension proposed by the present invention, wherein the thickness of each light-transmitting layer is determined by the image distance difference between adjacent photosensitive planes (V2-V1 or V3-V2).
  • Figure 13 is the first schematic diagram of the realization of depth of field expansion using a multilayer photosensor.
  • objects at different distances will be clearly focused on the different photosensitive pixel layers of the multilayer photosensor. Therefore, any object located between these three distances will be clearly or relatively clearly focused on one, or two photosensitive pixel layers, thereby achieving the effect of depth of field expansion.
  • U1, U2, U3 are the object distances (i.e., the distances from the object to the lens);
  • V1, V2, V3 are the image distances (i.e., the distances from the photosensitive pixel layers to the lens).
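The object-distance/image-distance relationship underlying Figure 13 is the standard Gaussian thin-lens equation (implicit in the figure, not stated explicitly in the text):

```latex
\frac{1}{f} \;=\; \frac{1}{U} + \frac{1}{V}
\qquad\Longrightarrow\qquad
V(U) \;=\; \frac{U f}{U - f},
\qquad
\frac{dV}{dU} \;=\; -\,\frac{f^{2}}{(U - f)^{2}} \;<\; 0 .
```

Since V decreases monotonically as U grows, nearer objects focus on photosensitive pixel layers farther from the lens, and an object at infinity focuses at the focal plane (V = f); this is why fixed layer spacings map directly to fixed depth-of-field bands.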
  • Fig. 14 is a schematic diagram showing the principle of using a special lens design method and a multilayer photosensitive device to achieve a better depth of field expansion effect.
  • the lens is specially designed so that shorter-wavelength light is focused on the photosensitive pixel layer closer to the lens (i.e., closer to the light source), longer-wavelength light is focused on the photosensitive pixel layer farther from the lens (i.e., farther from the light source), and medium-wavelength light is focused on the intermediate photosensitive pixel layer.
  • the imaging system combines the characteristics of multi-spectral and multi-image distance to greatly expand the depth of field. This system has an unparalleled advantage for macro photography.
  • Fig. 15 is a schematic view showing the principle of a photosensitive-pixel-level implementation of the multi-depth-of-field photosensitive device shown in Fig. 11.
  • the distance between the two photosensitive pixel layers can be adjusted so that the photosensitive pixels of the two photosensitive pixel layers correspond to different depths of field.
  • the upper and lower photosensitive pixel layers each employ a semiconductor photosensitive pixel layer.
  • Figures 16(a), (b), (c), and (d) are schematic diagrams showing the principle of another photosensitive pixel level for realizing the multi-depth sensor device shown in Figure 11.
  • the distance between the two photosensitive pixel layers can be adjusted so that the photosensitive pixels of the two photosensitive pixel layers correspond to different depths of field.
  • the upper photosensitive pixel layer is an electrolessly coated photosensitive pixel layer;
  • the lower photosensitive pixel layer is a semiconductor photosensitive pixel layer;
  • the two can be interchanged without affecting the realization of multiple depths of field.
  • Fig. 15 and Fig. 16 only depict the case of photosensitive pixels, and other reading circuits and auxiliary circuits are omitted since they can be the same as the prior art.
  • Figure 17 (a), (b), (c), (d) are schematic diagrams showing the principle of another photosensitive pixel level for realizing the multi-depth sensor of Figure 11 .
  • the distance between one photosensitive pixel layer of the upper layer and the other two photosensitive pixel layers of the lower layer can be adjusted, so that the photosensitive pixels of different photosensitive pixel layers are respectively corresponding to different Depth of field.
  • one photosensitive pixel layer of the upper layer is an electrolessly coated photosensitive pixel layer, and the other two photosensitive pixel layers of the lower layer are semiconductor photosensitive pixel layers. Note that the two semiconductor photosensitive pixel layers in Figs. 17(a) and (b) are disposed on the two sides of a semiconductor substrate, while the two semiconductor photosensitive pixel layers in Figs. 17(c) and (d) are disposed on one side of a semiconductor substrate.
  • the direction of illumination may be toward the front side or the back side of the semiconductor substrate. It should also be noted that, to remain transparent, the semiconductor substrate must generally be thin, and so generally cannot provide the distance between photosensitive pixel layers required for depth of field expansion; therefore, two semiconductor photosensitive pixel layers are more often used to meet multi-spectral requirements.
  • Figures 18(a) and (b) are schematic diagrams showing the principle of another photosensitive-pixel-level implementation of the multi-depth-of-field photosensitive device.
  • the distance between one photosensitive pixel layer of the upper layer and the other two photosensitive pixel layers of the lower layer can be adjusted, so that the photosensitive pixels of different photosensitive pixel layers are respectively corresponding to different Depth of field.
  • one photosensitive pixel layer of the upper layer is an electrolessly coated photosensitive pixel layer, and the other two photosensitive pixel layers of the lower layer are respectively a semiconductor photosensitive pixel layer and an electrolessly coated photosensitive pixel layer; the semiconductor layer may contain the read pixels and sampling circuitry necessary to read the three photosensitive pixel layers.
  • Figs. 19(a) and (b) are schematic diagrams showing the principle of a photosensitive-pixel-level implementation of the multi-depth-of-field photosensitive device of Fig. 12. Note that, in this example, the electrolessly coated photosensitive pixel layer, the first light-transmitting layer, the first semiconductor photosensitive pixel layer, the second light-transmitting layer, and the second semiconductor photosensitive pixel layer are disposed in this order from top to bottom.
  • the first semiconductor photosensitive pixel layer and the second semiconductor photosensitive pixel layer are realized on two different semiconductor base layers; the distance between the electrolessly coated photosensitive pixel layer and the first semiconductor photosensitive pixel layer is set by adjusting the thickness of the first light-transmitting layer, and the distance between the first and second semiconductor photosensitive pixel layers is set by adjusting the thickness of the second light-transmitting layer.
  • the read and sample circuits can be implemented in the first semiconductor photosensitive pixel layer located in the middle or in the two semiconductor photosensitive pixel layers.
  • Figures 20-23: for an embodiment having two layers of semiconductor photosensitive pixels, such as those of Figures 8 and 15, if the photosensitive pixels in one of the photosensitive pixel layers are removed, a layer dedicated to reading circuits and signal processing is formed, and a photosensitive device with a global electronic shutter (having a cross-layer reading function) as proposed by the present invention, as shown in Figs. 20-23, can be obtained.
  • Figures 20-23 show only the photosensitive pixel layer and the non-photosensitive transfer and read pixel layers of the photosensitive device with a global electronic shutter (having a cross-layer reading function); obviously, in combination with the foregoing, when a plurality of photosensitive pixel layers focused at different depths of field is retained, a multi-depth-of-field photosensitive device with a global electronic shutter (having a cross-layer reading function) can be obtained.
  • Figure 20 is a schematic diagram of transfer pixels (read capacitors) shared by two rows, as proposed by the present invention.
  • This is actually a new implementation of the interlaced scanning method of "A Photosensitive Device and Its Reading Method and Reading Circuit" (China Application No.: 200910106477.7). Here the transfer pixels and the photosensitive pixels are not in the same layer, so better utilization of the photosensitive area is obtained, while at the same time the shutter speed is doubled.
  • this method can be used for a photosensitive device in which a photosensitive material (e.g., a photosensitive film) is used as a photosensitive pixel layer.
  • Fig. 21 shows how the poor light transmission of a semiconductor can be exploited: the semiconductor base layer is thickened to such a thickness that the pixels of the lower layer do not sense light. Then, using metal vias or surface routing and external connections, the signals of the upper photosensitive pixels are led, through a diode or a read amplification switch circuit, to the read pixels of the non-photosensitive pixel layer, where sampling and reading are performed. A two-layer photosensitive device is thereby degraded into a single-layer photosensitive device with a global electronic shutter (having a cross-layer reading function). This device is double-layered in structure but single-layered in effect. When this method is used for the multilayer photosensitive device shown in Fig. 17(a), a multi-depth-of-field photosensitive device with a global electronic shutter (having a cross-layer reading function) can be obtained.
  • Figure 22 is a schematic diagram of a multi-spectral two-layer photosensitive device with a global electronic shutter (having a cross-layer reading function) based on a conventional (CMOS or CCD) semiconductor circuit, as proposed by the present invention. Similarly, the transfer of the photosensitive pixel signal to the non-photosensitive read pixel is controlled by a diode or an amplifying switch circuit.
  • FIG. 23 is a schematic diagram of another multi-spectral two-layer photosensitive device with a global electronic shutter (having a cross-layer reading function) based on a chemical photosensitive material (such as a quantum-dot photosensitive film) according to the present invention, wherein the photosensitive pixel layer is made of a chemical photosensitive material (such as a quantum-dot photosensitive film), and the reading circuit and signal processing layer is a CMOS semiconductor layer.
  • each of the photosensitive pixels corresponds to a non-photosensitive charge transfer pixel for implementing a global electronic shutter. This is also a deliberate degradation of the multilayer photosensitive device for the realization of a global electronic shutter.
  • Figure 24 shows a read circuit, proposed in "A Photosensitive Device and Its Reading Method and Reading Circuit" (China Application No.: 200910106477.7), that uses active pixels and passive pixels simultaneously to read the photosensitive pixel signals.
  • the advantage of this approach is that it greatly expands the dynamic range of the photosensitive device and substantially reduces the power consumption during image preview.
  • This hybrid read circuit is especially useful in high-sensitivity multi-layer multi-spectral photosensitive devices and in multi-spectral photosensitive devices with global electronic shutters.
  • Fig. 25 shows a sampling control circuit describing the pixel merging and subsampling method proposed in "A Multi-spectral Photosensitive Device and Its Sampling Method" (China Application No.: 200910105948.2). This novel method of pixel merging and subsampling will also be used in the present invention.

Detailed Description
  • the multi-depth-of-field photosensitive device proposed by the present invention is mainly used for depth of field expansion, currently called EDoF (Extended Depth of Focus) in the mobile phone industry.
  • Depth of field extension is especially widely used in camera phones.
  • current depth of field expansion mainly uses optical and mathematical means, usually optical zoom or autofocus to achieve depth of field adjustment; this requires the coordination of motor-driven mechanisms and complex, precise mechanical components and therefore adds significant space and cost.
  • the multi-depth-of-field photosensitive device, implemented in combination with a multi-layer photosensitive device, includes at least two photosensitive pixel layers capable of sensing the light source; at least two of the photosensitive pixel layers are arranged a preset distance apart, so that different optical signals from a lens at a specific distance from the photosensitive device are focused onto different photosensitive pixel layers, and the different photosensitive pixel layers thus constitute photosensitive planes at different image distances;
  • these photosensitive planes can focus on different depths of field, expanding the depth of field of the photosensitive device; this is equivalent to realizing autofocus by the physical means of multi-point optical focusing, which removes the coordination of motor-driven mechanisms and complex, precise mechanical components and significantly saves space and cost.
  • the photosensitive pixel layer includes at least one of an electroless plated photosensitive pixel layer and a semiconductor photosensitive pixel layer. That is, the at least two photosensitive pixel layers may all be electrolessly coated photosensitive pixel layers, or all of them are semiconductor photosensitive pixel layers, or partially electrolessly coated photosensitive pixel layers, and partially semiconductor photosensitive pixel layers. Wherein, the electrolessly coated photosensitive pixel layer comprises quantum dot photosensitive pixels.
  • the semiconductor photosensitive pixel layer includes a CMOS photodiode, a CMOS photoreceptor, a CCD photodiode, a CCD photoreceptor, and CMOS and CCD photodiodes and photoreceptors with bidirectional charge transfer.
  • the photosensitive pixel layers described above are used to sense different optical signals, respectively.
  • the characteristics of the optical signal concerned mainly include the spectral characteristics of the optical signal, that is, the wavelength of the light, and the distance characteristic of the optical signal, that is, the distance of the optical signal to the lens. Therefore, the difference between the optical signals refers to the difference between at least one of the two optical signals, that is, between the two optical signals, which may be different wavelengths, or different distances, or different wavelengths and distances.
  • a shorter wavelength optical signal is focused to a photosensitive pixel layer closer to the lens.
  • the light signal focused on the photosensitive pixel layer closer to the lens is ultraviolet, blue, cyan, or green light, and the light signal focused on the photosensitive pixel layer farther from the lens is green, red, yellow, visible (white), or infrared light.
  • the light signal focused on the photosensitive pixel layer closest to the lens is ultraviolet, blue, or cyan light; the light signal focused on the middle photosensitive pixel layer is green, blue, yellow, red, or visible (white) light; the light signal focused on the photosensitive pixel layer farthest from the light source is red, yellow, visible, or infrared light.
  • the farther distance light signals are focused to the photosensitive pixel layer closer to the lens.
  • when there are two photosensitive pixel layers, the light signal focused on the photosensitive pixel layer closer to the light source is the optical signal from infinity, and the light signal focused on the photosensitive pixel layer farther from the light source is the optical signal from the shortest distance of interest.
  • ultraviolet, blue, cyan, or green light from infinity is focused onto the photosensitive pixel layer closer to the light source; green, red, yellow, visible (white), or infrared light from the shortest distance of interest is focused onto the photosensitive pixel layer farther from the light source.
  • when there are three photosensitive pixel layers, the light signal focused on the photosensitive pixel layer closest to the light source is the optical signal from infinity, the light signal focused on the photosensitive pixel layer farthest from the light source is the optical signal from the shortest distance of interest, and the light signal focused on the middle photosensitive pixel layer is the optical signal from a distance intermediate between infinity and the shortest distance of interest.
  • ultraviolet, blue, cyan, or green light from any distance is focused onto the photosensitive pixel layer closest to the light source; green, blue, yellow, red, or visible (white) light from a distance intermediate between infinity and the shortest distance of interest is focused onto the middle photosensitive pixel layer; red, yellow, visible, or infrared light from the shortest distance of interest is focused onto the photosensitive pixel layer farthest from the light source.
  • the shortest distance of interest includes 2 mm, 5 mm, 7 mm, 1 cm, 2 cm, 3 cm, 5 cm, 7 cm, 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 100 cm, and 150 cm.
  • the shortest distance of interest refers to the closest distance from the scene of interest to the lens.
  • for example, a shortest distance of interest of 2 cm means that the closest distance from the scene of interest to the lens is 2 cm; when the distance from the scene to the lens is less than 2 cm, the scene is no longer of concern.
  • Figure 13 shows the relationship between the distance and the focus plane.
  • objects located at different distances will be clearly focused on different photosensitive pixel layers of the multilayer photosensitive device; therefore, any object located between these three distances will be clearly or relatively clearly focused on one or two photosensitive pixel layers, so that clear images of all of them can be obtained simultaneously from the same photosensitive device.
  • Figure 14 shows the relationship between the wavelength and the focal plane.
  • the shorter wavelength light has a shorter focal length. Therefore, by designing the lens, the shorter wavelength light can be focused on the photosensitive pixel layer closer to the lens; the longer wavelength light is focused on the photosensitive pixel layer farther from the lens; the medium wavelength light Focusing on the photosensitive pixel layer in the middle.
  • this imaging system combines the characteristics of multiple spectra and multiple image distances;
  • each photosensitive pixel layer has its own depth of field range, and for light of different wavelengths the depth of field position and range differ; the depths of field of the individual photosensitive pixel layers can be integrated, which greatly expands the overall depth of field and gives an unparalleled advantage for macro photography.
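The wavelength dependence of the focal length exploited here can be sketched by combining the lensmaker's scaling f ∝ 1/(n−1) with a two-term Cauchy dispersion model; the constants below (A = 1.50, B = 0.004 µm², 4 mm focal length at 550 nm) are illustrative, crown-glass-like assumptions, not values from the patent:

```python
def cauchy_index(wavelength_um, A=1.50, B=0.004):
    """Two-term Cauchy dispersion model n(lambda) = A + B/lambda^2
    (A, B are illustrative, crown-glass-like constants)."""
    return A + B / wavelength_um**2

def focal_length(wavelength_um, f_green_mm=4.0, green_um=0.55):
    """Thin-lens scaling f proportional to 1/(n - 1) from the lensmaker's
    equation, normalised so the green (550 nm) focal length is f_green_mm."""
    n_green = cauchy_index(green_um)
    n = cauchy_index(wavelength_um)
    return f_green_mm * (n_green - 1.0) / (n - 1.0)

f_blue  = focal_length(0.45)   # shorter: focuses nearer the lens
f_green = focal_length(0.55)   # reference: 4.0 mm
f_red   = focal_length(0.65)   # longer: focuses farther from the lens
print(f_blue < f_green < f_red)  # True
```

Under this model the blue focal length comes out roughly 2% shorter than the red one; that chromatic focal shift is what a specially designed lens can use to place short-wavelength focus on the layer nearer the lens and long-wavelength focus on the layer farther from it.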
  • Embodiments also include implementing, in the multi-depth sensing device described above, a global electronic shutter having a cross-layer reading function, including a plurality of non-photosensitive transfer and read pixels, the non-photosensitive transfer and Each of the read pixels can be used to transfer and read the charge or voltage value of at least one of the photosensitive pixels at the other layers.
  • a plurality of non-photosensitive transfer and read pixels can simultaneously transfer and read the charge or voltage values of one or more layers of photosensitive pixels.
  • a plurality of non-photosensitive transfer and read pixels can be in the same pixel layer as the photosensitive pixels, which obviously means a decrease in the sensitivity of the pixel layer.
  • a plurality of non-photosensitive transfer and read pixels can also be located in a different pixel layer from the photosensitive pixels, i.e., forming a separate photosensitive pixel layer and a separate non-photosensitive transfer and read pixel layer; this obviously means that the global electronic shutter with a cross-layer reading function can only be implemented in photosensitive devices with two or more layers.
  • An adjacent non-photosensitive pixel transfer and read layer can be provided for each photosensitive pixel layer; the non-photosensitive pixel transfer and read layer can simultaneously transfer the charge or voltage values of all pixels of the corresponding photosensitive pixel layer, or simultaneously transfer the charge or voltage values of the odd or even rows of pixels of the corresponding photosensitive pixel layer.
  • Figure 20 shows the design of a row of read capacitors shared by two rows to implement progressive scan.
  • the non-photosensitive transfer and read pixel layers can be made of a semiconductor circuit.
  • Fig. 21 shows a degraded implementation of a two-layer photosensitive device that yields a single-layer photosensitive device with a global electronic shutter.
  • This method utilizes the poor light transmission of the semiconductor material and thickens the semiconductor substrate between the two layers so that the underlying layer does not sense light and can only be used for pixel reading.
  • when this method is applied to the three-layer photosensitive device shown in Fig. 17(a), a two-layer multi-depth-of-field photosensitive device with a global electronic shutter can be obtained.
  • Figures 22 and 23 show an implementation of a pixel level of a global electronic shutter with cross-layer read capability.
  • the distance, up and down positional relationship appearing in the text refers to the relative position based on the light source.
  • the description of the upper photosensitive pixel layer and the lower photosensitive pixel layer means that the photosensitive pixel layer is horizontally placed, and the light source is vertically irradiated from above to the photosensitive pixel layer.
  • the upper-and-lower relationship in this text actually has a broader meaning: if, for example, the photosensitive surface is placed vertically and the light source irradiates it from the left or right side, or from the front or back side, the so-called upper-and-lower relationship is equivalent to a front-and-back or left-and-right relationship.
  • the top and bottom surfaces of the base layer described below have a similar meaning: with the base layer placed horizontally and the light source irradiating it vertically from above, the upper surface of the base layer is called the top surface and the lower surface is called the bottom surface;
  • when the base layer is placed vertically and the light source irradiates it from the left or right side, or from the front or back side, "top surface" may be replaced by, for example, "front surface".
  • the "photosensitive capability" of a photosensitive pixel layer means that its pixels are capable of sensing light; whether "a light source can be sensed" is the question of whether that capability actually comes into play. For example, because of the limited transparency of a semiconductor, when semiconductor photosensitive pixel layers are disposed on both the top and bottom surfaces of a semiconductor substrate whose thickness exceeds the transmission limit of the semiconductor, only the top semiconductor photosensitive pixel layer can sense a light source irradiating the substrate from above; the bottom semiconductor photosensitive pixel layer, limited by the substrate thickness, cannot sense the light source. The top layer is then a photosensitive pixel layer that can sense the light source, while the photosensitive capability of the bottom layer fails to come into play. Note that, in the following, a photosensitive pixel layer that cannot sense the light source can be used to form a non-photosensitive transfer and read pixel layer.
  • when an electrolessly coated photosensitive pixel or a semiconductor photosensitive pixel is a two-way photosensitive pixel, the problem of photosensitive alignment is involved: although the pixel is capable of sensing in two directions, it cannot accept illumination from both directions at the same time, and one direction must be selected at a time;
  • under light-source illumination, the sensing direction can be isolated or steered in a time-division, partition-directional, or pixel-directional manner; that is, time-division, per-region, or per-pixel selection of the sensing direction can be realized by, for example, occlusion with a light-shielding film.
  • the case of two-way illumination as shown in Figure 8.
  • a photosensitive pixel layer is substantially equivalent to a photosensitive plane perpendicular to the direction in which the light source irradiates; on a photosensitive plane, a plurality of photosensitive pixels (usually forming a pixel array of multiple rows and columns) are arranged;
  • each photosensitive pixel layer of a multi-layer photosensitive device may be of a planar hybrid type, that is, both electrolessly coated photosensitive pixels and semiconductor photosensitive pixels are disposed in it;
  • usually, however, only one kind of photosensitive pixel is disposed in the same photosensitive pixel layer, thus forming an electrolessly coated photosensitive pixel layer or a semiconductor photosensitive pixel layer.
  • photosensitive pixels of different layers at the same position of the photosensitive device (i.e., where light penetrating the pixel position of one photosensitive pixel layer illuminates the corresponding position on another photosensitive pixel layer) respectively sense complementary spectral segments or sub-segments among the ultraviolet, visible, near-infrared, and far-infrared, or respectively sense orthogonal spectral segments or sub-segments among the ultraviolet, visible, near-infrared, and far-infrared.
  • the complementary spectral segments or sub-segments include the ultraviolet spectrum, blue spectrum, green spectrum, red spectrum, near-infrared spectrum, far-infrared spectrum, cyan spectrum, yellow spectrum, white spectrum, near-infrared + far-infrared spectrum, red + near-infrared spectrum, red + near-infrared + far-infrared spectrum, yellow + near-infrared spectrum, yellow + near-infrared + far-infrared spectrum, visible + near-infrared + far-infrared spectrum, ultraviolet + visible spectrum, ultraviolet + visible + near-infrared spectrum, and ultraviolet + visible + near-infrared + far-infrared spectrum;
  • the orthogonal spectral segments or sub-segments include the ultraviolet spectrum, blue spectrum, green spectrum, red spectrum, near-infrared spectrum, far-infrared spectrum, cyan spectrum, yellow spectrum, white spectrum, near-infrared + far-infrared spectrum, red + near-infrared spectrum, red + near-infrared + far-infrared spectrum, yellow + near-infrared spectrum, yellow + near-infrared + far-infrared spectrum, visible + near-infrared + far-infrared spectrum, ultraviolet + visible spectrum, ultraviolet + visible + near-infrared spectrum, and ultraviolet + visible + near-infrared + far-infrared spectrum.
  • Embodiments include at least one layer of the photosensitive device sensing two different spectral (i.e., frequency) segments.
  • the color arrangement of its pixel array includes the same-color arrangement (all pixels in the pixel array have the same color), the horizontal arrangement (pixels in the same row have the same color), the vertical arrangement (pixels in the same column have the same color), the diagonal arrangement (pixels on the same diagonal have the same color), the generalized Bayer arrangement (the pixels on one diagonal of the pixel array have the same color, while the pixels on the other diagonal have different colors), the YUV422 arrangement, the horizontal YUV422 arrangement, the honeycomb arrangement, the uniform arrangement (four pixels evenly and equidistantly interlaced), and the like.
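Some of the arrangements listed above can be illustrated with small symbolic masks; the encodings below are hypothetical reconstructions for illustration, not taken from the patent figures:

```python
def color_mask(pattern, rows=4, cols=4):
    """Illustrative color-arrangement masks for a pixel array (not from the
    patent). Colors are symbolic labels; 'bayer' sketches the generalized
    Bayer idea with the classic 2x2 tile: one diagonal shares a color (G),
    the other diagonal carries two different colors (R, B)."""
    if pattern == "horizontal":          # same row -> same color
        return [[r % 2 for _ in range(cols)] for r in range(rows)]
    if pattern == "vertical":            # same column -> same color
        return [[c % 2 for c in range(cols)] for r in range(rows)]
    if pattern == "diagonal":            # same diagonal -> same color
        return [[(r + c) % 2 for c in range(cols)] for r in range(rows)]
    if pattern == "bayer":               # classic 2x2 Bayer tile: G R / B G
        tile = [["G", "R"], ["B", "G"]]
        return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]
    raise ValueError(pattern)

print(color_mask("bayer", 2, 2))  # [['G', 'R'], ['B', 'G']]
```

With multiple stacked layers, each layer can carry a different one of these masks, which is what gives the multi-layer device its extra freedom in color reconstruction.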
  • the term "arrangement" herein includes various fabrication processes for forming an electroless plated photosensitive pixel layer or a semiconductor photosensitive pixel layer on, for example, a semiconductor substrate or a light transmissive layer.
  • for example, the semiconductor substrate may be an N-type silicon substrate. A P-type impurity is implanted from the pixel surface into the substrate to a depth determined by the required color, forming a P-doped layer that serves as a semiconductor photosensitive pixel. If an N-type impurity is then implanted within that P-doped layer, the resulting N-doped layer forms another semiconductor photosensitive pixel (in a different photosensitive pixel layer from the P-doped pixel, but at the corresponding pixel position). Following the method provided in "Multi-spectral Photosensitive Device and Fabrication Method Thereof" (PCT/CN2007/071262), layering lines can be set near 390 nm, 500 nm, 610 nm, and 760 nm, so that the corresponding pixels on either side of a layering line sense complementary or orthogonal spectra.
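The layering lines quoted above (near 390 nm, 500 nm, 610 nm, and 760 nm) partition the spectrum into bands. A minimal sketch of that partition (band names assumed for illustration; the patent assigns bands per device design):

```python
# The four layering lines from the text split wavelengths into five bands.
LAYERING_LINES_NM = [390, 500, 610, 760]
BANDS = ["ultraviolet", "blue", "green", "red", "infrared"]

def band_of(wavelength_nm):
    """Return the spectral band a wavelength falls into."""
    for line, band in zip(LAYERING_LINES_NM, BANDS):
        if wavelength_nm < line:
            return band
    return BANDS[-1]

print(band_of(450))  # blue
print(band_of(650))  # red
```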
  • Figure 1 shows an example arrangement with layering lines, in which different colors are obtained by implanting impurities at different depths.
  • alternatively, an electroless plating solution is applied to the surface of the substrate to form an electroless-plated photosensitive pixel layer. Because of this variety of fabrication and processing options, the term "arrangement" is used.
  • arranging the above two layers of semiconductor photosensitive pixels at different depths allows the same pixel position on one surface of the substrate to sense at least two spectral segments. This provides better flexibility in the surface pixel-pattern arrangement and more room for pixel placement, which can greatly increase the sensitivity, resolution, and dynamic range of the photosensitive device.
  • preferably, only two layers of photosensitive pixels are stacked at the same position, because stacking three layers at one position is extremely difficult to process; moreover, since the leads between layers must be isolated from one another, three layers of leads would clearly make wiring difficult.
  • a plurality of such semiconductor photosensitive pixel layers may be disposed on the same surface, and color reconstruction can be performed in combination with the in-plane pixel-pattern arrangement, achieving better color sensitivity. Since the two semiconductor photosensitive pixel layers are arranged by depth doping on the same side, the difficulty of the three-dimensional process is significantly reduced and the wiring is relatively simple.
  • a single-sided or double-sided process can be used to form a single-sided or double-sided photosensitive device.
  • for the deep-doping process described above, a double-sided photosensitive device can instead place one of the two semiconductor photosensitive pixel layers on the top surface of the substrate and the other on the bottom surface. Each side is then processed as a planar process: after the photosensitive pixel layer on one side is completed, the substrate is flipped and the other photosensitive pixel layer is processed on the other side, likewise as a planar process. The processing is thus similar to that of existing single-sided, single-layer photosensitive devices, and is simpler than doping two layers on the same side as described above.
  • in this way, a plurality of photosensitive pixels can be disposed at a given position on the substrate.
  • the semiconductor photosensitive pixel layers are usually formed on a semiconductor base layer, while a light-transmissive layer (for example, a transparent glass layer) can also be used: one or more semiconductor photosensitive pixel layers are formed on the semiconductor substrate, a light-transmissive layer is placed on top, and an electroless-plated photosensitive pixel layer is then formed on the light-transmissive layer.
  • the thickness of the light-transmissive layer sets the predetermined distance between the electroless-plated photosensitive layer and the semiconductor photosensitive pixel layer, and choosing this thickness is one way of extending the depth of field.
  • embodiments include applying no filter film to the front side, back side, or both sides of the electroless-plated photosensitive layer or the semiconductor photosensitive pixel layer.
  • embodiments include the use of filter films.
  • the filter film is disposed on the front side, back side, or both sides of all or part of the photosensitive pixels in the electroless-plated photosensitive layer or the semiconductor photosensitive pixel layer.
  • the frequency-selection characteristics of the filter film include infrared cut-off, blue band-pass, green band-pass, red band-pass, cyan band-pass, yellow band-pass, magenta band-pass, or visible band-pass.
  • the filter film removes the influence of unwanted spectra at the cost of the sensitivity of a few pixels, reducing the crosstalk between upper and lower pixels, or yielding three primary-color signals with better orthogonality or purer complementary-color signals.
  • Embodiments include having adjacent layers of the multiple photosensitive pixel layers of the multi-depth-of-field sensing device each use their own reading circuit.
  • Embodiments include having adjacent layers of the multiple photosensitive pixel layers of the multi-depth-of-field sensing device share a reading circuit.
  • Embodiments include placing the reading circuit of the multi-depth-of-field sensing device in a semiconductor photosensitive layer, or in a separate reading-circuit layer.
  • Embodiments of the reading circuit of the multi-depth-of-field sensing device include the pixel reading and sub-sampling methods in "A Multi-spectral Photosensitive Device and Sampling Method Thereof" (China Application No. 200910105948.2) and "A Photosensitive Device and Reading Method and Reading Circuit Thereof" (China Application No. 200910106477.7).
  • Embodiments include employing an active-pixel reading circuit, a passive-pixel reading circuit, or a hybrid active/passive-pixel reading circuit in the signal-reading circuit of the multi-depth-of-field sensing device.
  • the active pixels comprise 3T, 4T, 5T, or 6T active pixels. The 3T and 4T active-pixel structures are shown in Figure 2 and Figure 3, respectively.
  • the sharing modes of the reading circuit include no sharing, 4-point sharing within a single layer or across upper and lower layers, 6-point sharing within a single layer or across upper and lower layers, 8-point sharing within a single layer or across upper and lower layers, and arbitrary-point sharing within a single layer or across upper and lower layers. The 4-point, 6-point, 8-point, and arbitrary-point sharing modes are shown in Fig. 4, Fig. 5, Fig. 6, and Fig. 7, respectively.
  • the reading circuit of the multi-depth-of-field sensing device includes a first merging unit, which performs merged sampling between adjacent pixels of the pixel array of each photosensitive pixel layer, whether in the same row or column, adjacent rows or columns, or different rows or columns, to obtain sample data of first merged pixels; and a second merging unit, which merges the sample data of the first merged pixels obtained by the first merging unit to obtain sample data of second merged pixels.
  • embodiments further include the reading circuit comprising a third merging unit, configured to merge the sample data of the second merged pixels obtained by the second merging unit to obtain sample data of third merged pixels.
  • the pixel-merging manner of the first or second merging unit is charge addition or signal averaging between pixels of the same or different colors; merging between pixels of different colors follows the rules of color-space conversion so as to meet the requirements of color reconstruction.
  • the first and second merged pixels described above arise from dividing sub-sampling into at least two processes: a first merge-sampling process and a second merge-sampling process.
  • the first and second merge-sampling processes generally occur between the row (merged) sampling and the column (merged) sampling of the pixels and operate mainly on analog signals; their order and content are usually interchangeable, except that charge addition can usually be done only in the first merged sampling.
  • a third merge-sampling process may also be included; it occurs after analog-to-digital conversion and operates mainly on digital signals.
  • in the first merge-sampling process, two immediately adjacent pixels in the pixel array are merged; the merged result is referred to as a first merged pixel. "First merged pixel" is used in this description only as a concept for the pixels after the first merging process; no "first merged pixel" physically exists in the pixel array. The data obtained by merging the two adjacent pixels is called the sample data of the first merged pixel. "Immediately adjacent" covers the same row and adjacent columns, the same column and adjacent rows, or adjacent rows and adjacent columns (diagonal).
  • after merging, the signal is the average of at least two pixels while the noise is reduced, so the signal-to-noise ratio increases by at least a factor of √2. The two merged colors can also differ, i.e., colors are added or averaged: from the principle of the three primary colors, the addition of two primary colors is the complementary color of the third primary. That is, merging pixels of two different primary colors produces the complementary color of the remaining primary; going from the primary-color space to the complementary-color space is merely a color-space transformation, and color reconstruction can still be completed from the different complementary colors. Pixels of different colors can therefore be merged to improve the signal-to-noise ratio while color reconstruction remains possible.
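The complementary-color argument above can be checked numerically. A minimal sketch (illustrative unit values, not the patent's circuit) showing that averaging two primaries yields a complementary color and that a primary remains recoverable from the complementary set:

```python
# Averaging two different primaries gives a complementary color
# (R+G -> yellow, G+B -> cyan, R+B -> magenta); this is only a
# color-space change, so RGB can still be reconstructed.

def merge(p1, p2):
    """Average two RGB pixels (the signal-averaging merge mode)."""
    return tuple((a + b) / 2 for a, b in zip(p1, p2))

R, G, B = (1, 0, 0), (0, 1, 0), (0, 0, 1)
Ye, Cy, Mg = merge(R, G), merge(G, B), merge(R, B)

# Recover a primary from the complementary measurements:
# Ye + Mg - Cy = (R+G)/2 + (R+B)/2 - (G+B)/2 = R
r = tuple(y + m - c for y, m, c in zip(Ye, Mg, Cy))
print(r)  # (1.0, 0.0, 0.0)
```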
  • the entire sub-sampling process is also optimized, making it better suited to the high-speed demands of large pixel arrays.
  • a basic requirement of the color-space transformation is that the transformed color combination can (by interpolation, etc.) reconstruct the required RGB (or YUV, or CYMK) colors.
  • the first merged sampling simply merges pixels two by two; obviously, the resulting first merged pixels are themselves numerous, and the color combinations used may be the same or different.
  • when the first merging is performed entirely between the same colors, we call it the same-color merging mode; when it is performed entirely between different colors, the heterochromatic merging mode; when it is partly between same colors and partly between different colors, the hybrid merging mode; and when some surplus colors in the pixel array are discarded (selectively, of course, e.g., without affecting color reconstruction), we call it the selective discarding of surplus colors.
  • the second merging process is an operation on a plurality of first merged pixels.
  • first merged pixels of the same color may be merged; first merged pixels of different colors may also be merged (though care is needed, since adding all three primary colors would make the color impossible to reconstruct).
  • same-color, heterochromatic, and hybrid merging above categorize merged sampling by color. From the viewpoint of position, the merged-sampling modes of the first and second merging processes include: automatic averaging of signals output directly to the bus, skipping (jump) modes, pixel-by-pixel sampling, and simultaneous use of two or three of these methods.
  • apart from their order, the first and second merging processes are identical and interchangeable, except that charge addition can usually be done only during the first merge-sampling process.
  • in the so-called automatic bus-output averaging mode, the signals to be merged (of the same or different colors) are output simultaneously onto the data-acquisition bus, and the average of those signals is obtained by the automatic balancing of the (voltage) signals.
  • the so-called skip (jump) mode skips some rows or columns, achieving (merged) sampling by reducing the amount of data.
  • the so-called pixel-by-pixel sampling mode actually performs no merging and reads out the original pixels or the first merged pixels as they are. Some of these three modes can be used simultaneously; for example, the skip mode can be combined with bus-output averaging or with pixel-by-pixel sampling.
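A toy sketch of these position-based modes (function names are hypothetical): skipping reduces the data volume, while outputting several signals onto one bus settles to their average:

```python
# Skip/jump mode: keep every `step`-th row and column of a frame.
def skip_sample(frame, step=2):
    return [row[::step] for row in frame[::step]]

# Bus-output averaging: voltages driven onto one bus balance to the mean.
def bus_average(signals):
    return sum(signals) / len(signals)

frame = [[r * 4 + c for c in range(4)] for r in range(4)]
print(skip_sample(frame))       # [[0, 2], [8, 10]]
print(bus_average([1.0, 3.0]))  # 2.0
```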
  • the sub-sampling methods of the third merge-sampling process include color-space conversion, back-end digital image scaling, and serial use of the two.
  • the first and second merging processes operate mainly on analog signals, while the third sub-sampling process operates mainly on digital signals, i.e., after analog-to-digital conversion.
  • Charge addition can be achieved during merged sampling. Existing merged sampling almost always realizes only the averaging of voltage or current signals, so the signal-to-noise ratio can be increased by a factor of at most √N: when N pixels of the same color share one output line, the voltage or current signals of the pixels are (automatically) averaged on that line, the merged noise is reduced to about 1/√N of its original level, and the signal-to-noise ratio therefore improves by at most √N.
  • the charge-addition method of the present invention, for example storing charge on a read capacitor so that charge accumulates, superimposes the signals so that the signal-to-noise ratio can be increased by about N times, i.e., √N times better than the signal-averaging method. In other words, merging N signals by charge addition theoretically achieves the effect of averaging N² signals, or better (as described below), which is a very significant means of improving the signal-to-noise ratio.
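A quick Monte-Carlo check of the claim above, under the simplifying assumption that the dominant noise is injected once per read-out (so averaging N noisy reads improves SNR by about √N, while accumulating the charge of N pixels before a single read improves it by about N). All numbers are illustrative:

```python
# Compare signal averaging vs. charge addition for N = 4 pixels,
# assuming independent Gaussian noise of equal strength per read.
import random

def snr(samples, true_value):
    """Empirical SNR: true signal over RMS deviation from it."""
    noise = (sum((s - true_value) ** 2 for s in samples) / len(samples)) ** 0.5
    return true_value / noise

random.seed(0)
N, signal, sigma, trials = 4, 100.0, 10.0, 20000

avg, add = [], []
for _ in range(trials):
    pix = [signal + random.gauss(0, sigma) for _ in range(N)]
    avg.append(sum(pix) / N)                         # averaging: N noisy reads
    add.append(N * signal + random.gauss(0, sigma))  # addition: one noisy read

print(round(snr(avg, signal), 1))      # ~ sqrt(N) * (signal/sigma) = ~20
print(round(snr(add, N * signal), 1))  # ~ N * (signal/sigma) = ~40
```

Relative to the single-pixel SNR of 10, averaging reaches about √N = 2 times that, while charge addition reaches about N = 4 times, matching the N²-averaging equivalence stated above.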
  • in full-frame sampling, i.e., sampling at the highest resolution of the image, the frame read-out rate is doubled when taking a single shot. If the number of A/D converters and line buffers is increased, the full-frame read-out rate can be raised even further. This method is very important for eliminating mechanical shutters.
  • the progressive-scan with progressive-read, interlaced-read, or cross-row read modes of the present invention differ from the interleaved scanning of conventional television systems. The traditional field scan is interlaced in both exposure and reading, so the odd field and the even field are one field time apart (whether in exposure or in read-out). Here, the pixels have exactly the same exposure timing as in the progressive-scan, progressive-read mode; only the read-out order of the rows is changed.
  • "A Multi-spectral Photosensitive Device and Sampling Method Thereof" (China Application No. 200910105948.2).
  • "A Photosensitive Device and Reading Method and Reading Circuit Thereof" (China Application No. 200910106477.7).
  • the color-space conversion comprises: RGB-to-CyYeMgX space transforms, RGB-to-YUV space transforms, or CyYeMgX-to-YUV space transforms, where X is R (red), G (green), or B (blue).
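As an illustration, a minimal RGB-to-YUV transform of the kind referred to above (BT.601-style coefficients assumed; the patent does not fix a specific matrix):

```python
# RGB -> YUV with BT.601-style luma weights: Y is a weighted sum of the
# primaries, U and V are scaled blue and red color differences.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)  # white: full luma, zero chroma
print(round(y, 3), round(u, 3), round(v, 3))  # 1.0 0.0 0.0
```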
  • Figure 10 shows a way to implement subsampling using color space transformation.
  • the charge addition described above is performed by directly connecting pixels in parallel or by transferring their charges simultaneously into a read capacitor (FD).
  • the color-based merged-sampling modes of the first or second merging unit include the same-color merging mode, the heterochromatic merging mode, the hybrid merging mode, and the selective discarding of surplus colors; the first and second merging units do not both adopt the same-color merging mode, i.e., at least one of the two merging units adopts a mode other than same-color merging.
  • the position-based merged-sampling modes of the first or second merging unit include at least one of the following: automatic averaging of signals output directly to the bus, the skip (jump) mode, and pixel-by-pixel sampling. That is, these position-based merged-sampling modes can be used alone or in combination.
  • the merged-sampling mode of the third merging unit can be realized by at least one of color-space conversion and back-end digital image scaling.
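The back-end digital scaling option can be as simple as a 2×2 box average applied after analog-to-digital conversion. A minimal sketch (hypothetical helper; even image dimensions assumed):

```python
# Halve each dimension of a digital image by averaging 2x2 blocks.
def downscale_2x2(img):
    return [
        [(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
         for c in range(0, len(img[0]), 2)]
        for r in range(0, len(img), 2)
    ]

img = [[0, 2, 4, 6],
       [2, 4, 6, 8],
       [4, 6, 8, 10],
       [6, 8, 10, 12]]
print(downscale_2x2(img))  # [[2.0, 6.0], [6.0, 10.0]]
```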
  • Figure 9 shows how the charges of heterochromatic pixels are merged.
  • the sub-sampling functions described above are implemented by the row-address decoding controller and the column-address decoding controller shown in the figure.
  • the row-address decoding controller outputs two types of signals: the row-selection signal Row[i] (one line per row) and the row-control vector signal RS[i] (one or more lines per row), where i is the index of the row.
  • the column-address decoding controller outputs two types of signals: the column-selection signal Col[j] (one line per column) and the column-control vector signal T[j] (one or more lines per column), where j is the index of the column.
  • the row-selection signal Row[i] makes the row selection and the column-selection signal Col[j] makes the column selection; these two groups of signals are fairly standard.
  • the row-control vector signal RS[i] is an extension of the existing CMOS row-control signals (extended from one line per row to multiple lines per row); as for the column-control vector signal T[j], some CMOS sensors have none at all, and even when present there is only one line per column.
  • RS[i] and T[j] are used to control the reset, clearing, exposure-time control, charge transfer, pixel merging, and pixel read-out of the photosensitive pixels. Owing to row/column symmetry, RS[i] and T[j] admit many specific implementations, which are not limited here.
  • the full-frame sampling modes of the multi-spectral photosensitive device include progressive scan with progressive read, and progressive scan with interlaced or cross-row read.
  • Embodiments also include making a photosensitive system comprising the multi-depth-of-field sensing device described above.
  • the photosensitive system is used to acquire a front, back, or bidirectional image.
  • the photosensitive system includes digital cameras, camera phones, video cameras, video or camera surveillance systems, image-recognition systems, medical-imaging systems, military, fire-fighting, and downhole imaging systems, automatic tracking systems, stereoscopic-imaging systems, machine-vision systems, and automotive-vision systems.
  • Embodiments also include implementing a depth-of-field extension method, comprising the steps of: providing at least two photosensitive pixel layers capable of sensing light in a photosensitive device, and arranging the at least two photosensitive pixel layers at predetermined distance intervals so that different optical signals from a lens at a specific distance from the photosensitive device are focused onto different photosensitive pixel layers.
  • embodiments further include an imaging method, or the application of the photosensitive device in imaging: providing a lens and a photosensitive device comprising at least two photosensitive pixel layers capable of sensing light; placing the photosensitive device at a specific distance from the lens; and arranging the at least two photosensitive pixel layers at predetermined distance intervals so that different optical signals from the lens are focused onto different photosensitive pixel layers.
  • embodiments further include an optical imaging system comprising a lens and a multi-depth-of-field sensing device disposed at a specific distance from the lens, the device including at least two photosensitive pixel layers capable of sensing light, the photosensitive pixel layers arranged at predetermined distance intervals so that different optical signals from the lens are focused onto different photosensitive pixel layers.
  • light of all wavelengths of interest, coming from different distances, may be focused on the respective photosensitive pixel layers; or, as shown in FIG. 14, light of different wavelengths from the same distance may be focused on the respective photosensitive pixel layers; or light of different wavelengths from different distances may be focused on the respective photosensitive pixel layers.
  • Embodiments include having the wavelength of the light focused on each photosensitive pixel layer increase from the layer nearest the optical lens to the layer farthest from it; or, within the set of photosensitive pixel layers, focusing optical signals from longer distances onto the layers closer to the lens.
  • when there are two photosensitive pixel layers, located at the first and second image distances of the lens respectively, the optical lens can be designed to focus ultraviolet, blue, green, cyan, or white light on the photosensitive pixel layer closest to the lens; correspondingly, blue, green, red, yellow, or infrared light is focused on the photosensitive pixel layer farthest from the lens.
  • when there are three photosensitive pixel layers, located at the first, second, and third image distances of the lens respectively, the optical lens can be designed to focus ultraviolet, blue, green, or cyan light on the layer closest to the lens; correspondingly, red, yellow, visible, or infrared light on the layer farthest from the lens; and green, yellow, visible, or red light on the middle photosensitive pixel layer.
  • when there are four photosensitive pixel layers, located at the first, second, third, and fourth image distances of the lens respectively, the optical lens can be designed to focus ultraviolet, blue, green, or cyan light on the layer closest to the lens; correspondingly, red, yellow, white, or infrared light on the layer farthest from the lens; blue, green, or cyan light on the second-closest layer; and green, red, white, or yellow light on the third-closest layer.
  • when there are two photosensitive pixel layers, located at the first and second image distances of the lens respectively, the optical lens can be designed to focus ultraviolet or white light on the layer closest to the lens, and white or infrared light on the layer farthest from the lens.
  • when there are three photosensitive pixel layers, located at the first, second, and third image distances of the lens respectively, the optical lens can be designed to focus ultraviolet or white light on the layer closest to the lens, white or infrared light on the layer farthest from the lens, and white light on the intermediate layer.
  • when there are four photosensitive pixel layers, located at the first, second, third, and fourth image distances of the lens respectively, the optical lens can be designed to focus ultraviolet or white light on the layer closest to the lens, white or infrared light on the layer farthest from the lens, white light on the second-closest layer, and white light on the third-closest layer.
  • here, if white light is said to be focused onto different photosensitive pixel layers, it generally originates from different distances: for example, the layer closest to the lens focuses white light from infinity, while the layer farthest from the lens focuses white light from the shortest distance of interest. That is, when the spectral characteristics of the optical signals focused by two photosensitive pixel layers are the same, they must have different distance characteristics.
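The placement of layers for white light from different distances follows the thin-lens relation 1/f = 1/u + 1/v quoted in the background section: a nearer object focuses at a longer image distance, so layers spaced accordingly each see a sharp image of a different object distance. A small numeric sketch (focal length and object distances are assumed for illustration only):

```python
# Thin-lens relation: 1/f = 1/u + 1/v  =>  v = 1 / (1/f - 1/u)
def image_distance(f_mm, u_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

f = 4.0  # assumed focal length in mm
for u in (500.0, 1000.0, 1e9):  # near, mid, ~infinity (mm)
    print(round(image_distance(f, u), 4))
```

With these numbers the three focal planes land near 4.032 mm, 4.016 mm, and 4.000 mm, i.e., layers tens of micrometres apart cover objects from half a metre to infinity.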
  • the multi-depth-of-field photosensitive device of the invention also has multi-spectral advantages and can simultaneously obtain multiple color signals and other optical signals. For example, a four-layer photosensitive device may be arranged along the optical path, from near to far from the light source, as: a first electroless-plated photosensitive layer sensing ultraviolet light; a first semiconductor photosensitive pixel layer sensing blue, green, or cyan light; a second semiconductor photosensitive pixel layer sensing red, yellow, or green light; and a second electroless-plated photosensitive pixel layer sensing infrared light.
  • the first and second semiconductor photosensitive pixel layers are implemented on two semiconductor base layers, with a light-transmissive layer of predetermined thickness disposed between them. The first electroless-plated photosensitive layer is placed above the top surface of the base layer carrying the first semiconductor photosensitive pixel layer, and the second electroless-plated photosensitive layer below the bottom surface of the base layer carrying the second semiconductor photosensitive pixel layer.
  • Such a four-layer multi-spectral photosensitive device is not very difficult to fabricate. Combined with the advanced sampling invented earlier by the present inventor and the sub-sampling circuits and methods featuring charge merging and color conversion, the complexity of the photosensitive device and system can be greatly reduced, providing great convenience and superior performance for various applications.
  • the first special use of the multi-depth-of-field sensing device of the present invention is depth-of-field extension. Existing EDoF mainly uses optical and mathematical means to extend the depth of field and generally requires special cooperation from, for example, the lens. The present invention extends the depth of field directly by the physical means of arranging the different photosensitive pixel layers of the device at predetermined distance intervals.
  • the second special application realized is the global electronic shutter. Existing global-shutter designs mainly rely on means in the reading circuit; the present invention uses non-photosensitive transfer and read pixels to enable high-speed, high-resolution photography without a mechanical shutter.
  • in addition to greatly improving sensitivity, the multi-depth-of-field sensing device of the present invention can greatly increase the system's depth of field by adjusting the distances between the photosensitive pixel layers, making images clearer, system response faster, and the range of applications wider, even eliminating the need for autofocus in some applications.
  • the multi-depth-of-field sensing device of the present invention can quickly acquire a clear image within the depth-of-field range it covers, without going through a focusing process. Besides reducing the difficulty and cost of autofocus, depth-of-field extension can, in applications such as mobile-phone photography, macro photography, or telephoto, completely eliminate the need for autofocus.
  • Depth-of-field extension also allows objects at different distances in the same photograph to be sharp at the same time, which is very useful in some special applications and is impossible with autofocus: an existing autofocus system can only image objects within a certain distance sharply and cannot make objects over a very wide range sharp. Therefore, even in a system with autofocus capability, the depth-of-field extension of the present invention still has great value.
  • the photographing speed can also be greatly increased, making it possible to remove the mechanical shutter in many applications.
  • an implementation of a global electronic shutter with a cross-layer reading function is also proposed, intended to replace the mechanical shutter that may otherwise be required in some applications. The global electronic shutter works by copying, in an instant, the charge or voltage values in the photosensitive pixels into non-photosensitive read pixels, from which the reading circuit can then read them out.
  • in this way, a high-performance, high-speed, high-pixel-count photosensitive system requiring neither autofocus nor a mechanical shutter can be implemented on a single chip, greatly reducing the size, complexity, power consumption, and cost of the system and making many new applications possible.
  • This photographic device with a global electronic shutter or multiple depths of field frees the photographic system from the mechanical shutter, or eliminates (or reduces) the need for an autofocus system, achieving a high-speed electronic shutter or clear imaging without speeding up the sensor clock.
  • the benefits of the present invention can be maximized by adopting a layout of two or more layers combined with advanced complementary or orthogonal color-pattern arrangements across two or more layers. The reading circuit, and the processing and computation in the reading-circuit layer, can then be made very fine and sophisticated, which greatly facilitates the fabrication of a single-chip photosensitive system.
  • Such a multi-depth-of-field sensing device can simultaneously obtain a large number of color signals and other spectral signals.
  • combined with the advanced sampling and the sub-sampling circuits and methods featuring charge merging and color conversion, the complexity of the photosensitive device and system can be greatly reduced, providing great convenience and high performance for a variety of applications.
  • This multi-depth-of-field sensing device can be used for front-side, back-side, or bidirectional sensing.
  • various preferred multi-spectral photosensitive devices can be produced, such as high-sensitivity color sensing devices, high-sensitivity color-and-infrared sensing devices, and high-sensitivity color or multi-spectral sensing devices free of the variegated colors caused by interpolation.
  • Ultra-low-power photosensitive devices can be obtained by combining active pixels with passive-pixel reading.


Abstract

提供一种多景深感光器件、系统、景深扩展方法及光学成像系统。多景深感光器件包括至少两个可感应到光源的感光象素层,至少两个感光象素层之间按预定的距离间隔设置,使来自距离感光器件特定距离的镜头的不同光信号被聚焦到不同的感光象素层。可以避免使用电动机构和机械部件而实现自动对焦。

Description

多景深感光器件、 系统、 景深扩展方法及光学成像系统 技术领域
本发明涉及感光领域, 具体的说, 涉及一种多景深感光器件、 利用该 多景深感光器件的感光系统、景深扩展方法以及一种光学成像系统与方法。 背景技术
本发明是本发明人稍早一点的《多光谱感光器件及其制作方法》
( PCT/CN2007/071262 ), 《多光谱感光器件及其制作方法》(中国申请号:
200810217270. 2 ) , 《多光谱感光器件》 (中国申请号: 200910105372. X) , 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 ), 《一种感光器件及其读取方法、读取电路》(中国申请号: 200910106477. 7) 的延续, 旨在提供更为具体而且优选的多光谱感光器件在芯片及系统级别的实现。
感光系统, 是一种通过光学镜头进行景物的捕捉、 收集, 并由感光器件, 例如 CMOS感光芯片进行景物的记录的系统。 感光系统工作时, 调节镜头, 使距离镜头一定距离的景物清晰成像的过程, 叫做对焦, 该景物所在的点, 称为对焦点, 因为 "清晰" 具有相对性, 所以对焦点前(靠近镜头)、 后一定距离内的景物的成像都可以是清晰的, 这个前后范围的总和称为景深。 通常前景深小于后景深, 即精确对焦后, 对焦点前只有很短一段距离内的景物能清晰成像, 而对焦点后很长一段距离内的景物都是清晰的。
获得宽景深清晰成像的系统是人们长期以来的研究目标之一。 研究表明, 景深的大小与镜头焦距有关, 焦距长的镜头景深小, 焦距短的镜头景深大。 可见, 镜头焦距调整是获得宽景深清晰成像的手段之一; 另外, 根据几何光学的基本成像公式( 1/f = 1/u + 1/v, 其中, f 为镜头焦距, u为物距,
即被摄物到镜头的距离, v为象距, 即镜头到感光器件的距离), 可见, 象距的动态调整也是获得宽景深清晰成像的手段之一。
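根据上述成像公式, 象距可直接解出为 v = f·u/(u−f)。下面用一段简短的 Python 演算(仅为示意, 焦距取 4mm 为假设值, 并非本发明限定的参数)说明物距变化时象距需要调整的量:

```python
def image_distance(f, u):
    """由高斯成像公式 1/f = 1/u + 1/v 解出象距 v (f、u、v 单位一致即可)。"""
    return f * u / (u - f)

f = 4.0                             # 假设的镜头焦距: 4mm
v_inf = f                           # 物距无穷远时, 象距等于焦距
v_near = image_distance(f, 100.0)   # 物距 10cm = 100mm
print(round(v_near - v_inf, 3))     # → 0.167, 即对焦所需的象距调整量(mm)
```

可见, 对焦从无穷远移到 10cm 时, 象距约需增加 0.17mm, 与下文所述 0.2mm 量级的镜头行程相符。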
因此, 现有感光系统中的自动对焦方式, 都采用上述两种手段之一。 例如, 镜头由一组镜片构成, 通过调整镜片之间的距离, 从而可以调整镜头焦距或(镜头与感光器件的)象距(而实现光学变焦或对焦); 或者通过驱动例如 CMOS感光器件位移, 从而改变象距(而实现光学对焦)。 然而, 显然的, 这两种方式的对焦, 都需要电动机构和复杂、 精密的机械部件来进行镜片或感光器件的位移驱动。 这样, 不仅显著增加了尺寸, 也显著增加了成本和功耗。 在很多的应用里, 如手机照相和医用照相里, 这些都是明显的不利因素。
一些不采用运动机构的宽景深系统因此被提出, 试图在某些应用里取 代自动对焦的需求。 这种系统在手机照相的应用里, 被称之为 EDoF (Extended Depth of Focus), 例如, DXO公司提出的一种 EDoF系统, 通过 特别的镜头设计, 让感光器件中的红色感光象素聚焦在无穷远处, 兰色感 光象素聚焦在尽可能的近距 (例如 50cm) 。 而绿色感光象素聚焦在某个中 间的位置。 这样, 无论物体处于哪个位置, 总有一个颜色的图像是清楚或 相对比较清楚的。 之后, 通过数学的手段, 以比较清楚的颜色为主体, 不 够清楚的颜色为辅助信息, 就能在比较宽的范围, 还原和计算出较清楚的 图像。
然而, 采用单层感光器件, 当红色感光象素聚焦在无穷远时, 兰色感光象素的聚焦距离一般很难做得比 50cm更小。 此外, 对于采用贝叶图案的感光器件, 红色象素和兰色象素都只占感光象素的 1/4。 因此, 当需要以红色或兰色作为清晰度计算的主体时, 图像的分辨率已经减少到以绿色为主体时的分辨率的一半以下。 可见, 这种方案有一定的局限性。
因此, 仍然有必要对现有感光器件或系统进行改进。 发明内容
本发明所要解决的技术问题是, 提供一种多景深感光器件、 利用该多 景深感光器件的感光系统、 景深扩展方法以及一种光学成像系统与方法, 其以物理的手段实现了自动对焦或多距离成像,避免使用电动机构和复杂、 精密的机械部件, 具有良好的景深扩展性能。
为解决上述技术问题, 本发明采用了以下技术方案:
一种多景深感光器件, 包括至少两个可感应到光源的感光象素层,至 少两个所述感光象素层之间按预设距离间隔布置, 使得来自距所述感光器 件特定距离的镜头的不同光信号被聚焦到不同的所述感光象素层。
在本发明的一种实施例中, 所述感光象素层包括化学镀膜感光象素层 和半导体感光象素层中的至少一者。 在本发明的一种实施例中, 所述化学镀膜感光象素层包括量子点感光 象素。
在本发明的一种实施例中, 所述半导体感光象素层包括 CMOS 光敏二 极管、 CMOS感光门、 CCD光敏二极管、 CCD感光门、 和具有双向电荷转移 功能的 CMOS和 CCD感光二极管和感光门。
在本发明的一种实施例中,所述不同光信号, 包括不同距离的光信号, 或者不同波长的光信号。
在本发明的一种实施例中, 波长更短的光信号被聚焦到离镜头更近的 感光象素层。
在本发明的一种实施例中, 所述感光象素层为两层, 紫色光、 兰色光、 绿色光、 或青色光被聚焦到离镜头较近的感光象素层, 绿色光、 红色光、 黄色光、 可见光、 或红外光被聚焦到离镜头较远的感光象素层。
在本发明的一种实施例中, 所述感光象素层为三层, 紫外光、 兰色光、 或青色光被聚焦到离镜头最近的感光象素层; 兰色光、 绿色光、 红色光、 或黄色光被聚焦到位于中间的感光象素层; 红色光、 黄色光、 可见光、 或 红外光被聚焦到离镜头最远的感光象素层。
在本发明的一种实施例中, 更远距离的光信号被聚焦到离镜头更近的 感光象素层。
在本发明的一种实施例中, 所述感光象素层为两层, 无穷远处的光信 号被聚焦到离镜头较近的感光象素层, 感兴趣最短距离的光信号被聚焦到 离镜头较远的感光象素层。
在本发明的一种实施例中, 无穷远处的紫色光、 兰色光、 绿色光、 或 青色光被聚焦到离镜头较近的感光象素层, 感兴趣最短距离的绿色光、 红 色光、 黄色光、 可见光、 或红外光被聚焦到离镜头较远的感光象素层。
在本发明的一种实施例中, 所述感光象素层为三层, 无穷远处的光信 号被聚焦到离镜头最近的感光象素层, 感兴趣最短距离的光信号被聚焦到 离镜头最远的感光象素层, 无穷远处与感兴趣最短距离之间的一个中间距 离的光信号被聚焦到位于中间的感光象素层。
在本发明的一种实施例中, 所述感光象素层为三层, 无穷远处的紫外光、 兰色光、 或青色光被聚焦到离镜头最近的感光象素层, 感兴趣最短距离的红色光、 黄色光、 可见光、 或红外光被聚焦到离镜头最远的感光象素层, 无穷远处与感兴趣最短距离之间的一个中间距离的兰色光、 绿色光、 红色光、 或黄色光被聚焦到位于中间的感光象素层。 在本发明的一种实施例中, 所述感兴趣最短距离包括 2mm, 5mm, 7mm, 1cm, 2cm, 3cm, 5cm, 7cm, 10cm, 20cm, 30cm, 40cm, 50cm, 60cm, 70cm, 80cm, 100cm, 或 150cm。
在本发明的一种实施例中, 至少两个所述感光象素层之间设置有透光 层。
在本发明的一种实施例中, 所述感光象素层中的感光象素为正面感光象素、 背面感光象素、 或双向感光象素。
在本发明的一种实施例中, 当感光象素为双向感光象素时, 其感光选 向方式为隔离选向、 分时选向、 分区选向、 或象素选向。
在本发明的一种实施例中, 所述感光象素层中的感光象素分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个互补谱段或子谱段; 或者所述化学镀膜感光象素和半导体感光象素分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个正交谱段或子谱段。
在本发明的一种实施例中, 所述互补谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱;
所述正交谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱。
在本发明的一种实施例中, 每一感光象素层中的色彩排列包括同一排 列、 水平排列、 垂直排列、 对角排列、 广义贝叶排列、 YUV422排列、 横向 YUV422排列、 蜂窝排列、 均布排列。
在本发明的一种实施例中, 至少一个所述感光象素层中的部分或全部 感光象素的正面、 背面或双面设置有滤光膜, 所述滤光膜的选频特性包括 红外截止滤波、 兰色带通、 绿色带通、 红色带通、 青色带通、 黄色带通、 品红色带通、 或可见光带通。 在本发明的一种实施例中, 所述感光象素层中的相邻两层各自设有读 取电路; 或者所述感光象素层的相邻两层共用读取电路。
在本发明的一种实施例中, 所述读取电路为主动象素读取电路、 被动 象素读取电路、 或主动象素与被动象素混合读取电路。
在本发明的一种实施例中, 所述主动象素包括 3T、 4T、 5T或 6T主动象素。
在本发明的一种实施例中, 所述读取电路的共用方式包括单层或上下 层 4点共享方式、 单层或上下层 6点共享方式、 单层或上下层 8点共享方 式、 或单层或上下层任意点共享方式。
在本发明的一种实施例中, 所述读取电路包括用于对每一感光象素层 的象素阵列中的紧邻的同行异列、 异行同列、 或异行异列的象素间进行两 两合并采样, 获得第一合并象素的采样数据的第一合并单元; 以及用于对 第一合并单元得到的第一合并象素的采样数据进行合并采样以获得第二合 并象素的采样数据的第二合并单元。
在本发明的一种实施例中, 所述读取电路还包括第三合并单元, 用于对第二合并单元得到的第二合并象素的采样数据进行合并采样以获得第三合并象素的采样数据。
在本发明的一种实施例中, 所述第一合并单元或第二合并单元的象素 合并方式为相同或不同色彩象素间的电荷相加方式或信号平均方式, 其中 不同色彩象素间的象素合并方式遵照色彩空间变换的方式, 以满足色彩重 建的要求。
在本发明的一种实施例中, 所述色彩空间变换包括 RGB到 CyYeMgX空 间的变换、 RGB到 YUV空间的变换、 或 CyYeMgX到 YUV空间的变换, 其中 X 为 R (红)、 G (绿)、 B (兰)中的任一种。
在本发明的一种实施例中, 所述电荷相加方式通过象素直接并联或将 电荷同时转移到读取电容(FD ) 中完成。
在本发明的一种实施例中, 所述第一合并单元或第二合并单元的基于 色彩的合并采样方式包括同色合并方式、 异色合并方式、 混杂合并方式、 或选择性抛弃多余色彩合并方式, 且第一合并单元和第二合并单元采用的 合并采样方式不同时为同色合并方式。
在本发明的一种实施例中, 所述第一合并单元或第二合并单元的基于位置的合并采样方式包括以下几种方式中的至少一种: 直接输出到总线的信号自动平均方式、 跳行或跳列方式、 和逐个采样方式。 在本发明的一种实施例中, 所述第三合并单元的合并采样方式包括: 色彩空间变换方式和后端数字图像缩放方式中的至少一种。
在本发明的一种实施例中, 包括具有跨层读取功能的全局电子快门, 所述全局电子快门包含多个可同时转移并读取一层或多层感光象素层的电 荷或电压值的不感光的转移和读取象素。
在本发明的一种实施例中, 所述多个不感光的转移和读取象素位于不 感光象素转移和读取层; 或者位于所述感光象素层。
在本发明的一种实施例中, 每一感光象素层, 设置有一个紧邻的不感 光象素转移和读取层。
在本发明的一种实施例中, 所述不感光的转移和读取象素由半导体电 路制成。
本发明也提供了一种景深扩展方法, 包括:
在感光器件中设置至少两个可感应到光源的感光象素层,并将至少两 个所述感光象素层按预设距离间隔布置, 使得来自距所述感光器件特定距 离的镜头的不同光信号被聚焦到不同的所述感光象素层。
在本发明的一种实施例中, 通过来自不同感光象素层的具有不同清晰 度的图像而获取一幅清晰图像。
本发明还提供了一种光学成像方法, 包括:
设置镜头和包括至少两个可感应到光源的感光象素层的感光器件; 将 所述感光器件放置在距所述镜头特定距离, 且至少两个所述感光象素层之 间按预设距离间隔布置, 使得来自镜头的不同光信号被聚焦到不同的所述 感光象素层。
本发明还提供了一种光学成像系统, 包括镜头和多景深感光器件, 所 述多景深感光器件布置在距所述镜头特定距离, 包括至少两个可感应到光 源的感光象素层, 至少两个所述感光象素层之间按预设距离间隔布置, 使 得来自所述镜头的不同光信号被聚焦到不同的所述感光象素层。
在本发明的一种实施例中,所述不同光信号, 包括不同距离的光信号, 或者不同波长的光信号。
在本发明的一种实施例中, 波长更短的光信号被聚焦到离镜头更近的 感光象素层。
在本发明的一种实施例中, 更远距离的光信号被聚焦到离镜头更近的 感光象素层。
本发明还提供了一种感光系统, 包括上述的感光器件。 在本发明的一种实施例中, 所述感光系统包括数码相机, 照相手机, 摄像机, 视频或照相监控系统, 图像识别系统, 医学图像系统, 军用、 消 防或井下图像系统, 自动跟踪系统, 立体影像系统, 机器视觉系统, 汽车 视觉或辅助驾驶系统, 电子游戏系统, 网络摄像头, 红外和夜视系统, 多 光谱成像系统, 和电脑摄像头中的一种。
现有的感光系统, 不仅自动对焦系统需要电动机构和复杂、 精密的机 械部件, 而且对于 6mm直径以上的镜头而言, 为了实现在 10cm至无穷远 的宽距自动对焦, 镜头的行程必须在 0. 2mm 以上, 也就是说, 无穷远处 清晰成像的象距与 10cm处清晰成像的象距差, 至少是 0.2mm, 即 200um。 众所周知, 硅或其它半导体材料都是不透明的。 光进入硅后, 大约在 12um 处, 就已经被吸收得所剩无几了。 因此即使使用自动对焦系统, 现有的感 光系统也很难获得较宽的景深范围。
本发明的多景深感光器件、 利用该多景深感光器件的感光系统、 景深扩展方法, 以及光学成像系统及方法, 通过设置至少两个可感应到光源的感光象素层,至少两个所述感光象素层之间按预设距离间隔布置,使得来自距感光器件特定距离的镜头的不同光信号被聚焦到不同的感光象素层, 从而不同感光象素层可以得到不同景深的图像。 从产品角度, 感光器件可以制作成感光芯片独立存在, 但从应用角度, 例如光学成像, 感光器件通常需要与镜头配合使用。 镜头, 依其大小、 材料、 曲面设计等具有不同的聚焦特性, 例如以普通的手机镜头为例, 其景深范围通常是无穷远到 2m, 超出该景深范围, 则需要采用自动对焦技术, 例如通过调整感光器件到镜头的距离, 也即调整象距到某一合适数据, 才能获得例如 50cm-30cm的清晰景象。 而在本发明中, 如果选定的应用镜头是手机镜头, 则可以按下例在感光器件中设置两个感光象素层 (称第一感光象素层和第二感光象素层), 当与该选定的手机镜头配合时, 感光器件被放置在距该镜头特定距离, 此时, 第一感光象素层到镜头的距离为第一象距, 第二感光象素层到镜头的距离为第二象距(第一象距小于第二象距), 此时, 距镜头的特定距离以及两个感光象素层之间的预设距离,使得从无穷远到 2m的景深范围的景物将可在第一感光象素层清晰成像, 从 50cm-30cm的景深范围的景物将可在第二感光象素层清晰成像。 由此, 实现了两个景深或景深扩展。 需要注意的是, 上述说明例中, 感光象素层的数量和景深范围均仅为示例性数据, 可以理解, 通过对感光象素层的数量和相互间的预设距离调整, 可以形成连续、 交叠、 互补、 或正交的景深范围, 多个感光象素层各自的景深范围叠加将使感光器件具有相当宽的景深范围, 从而无需自动对焦即可获得宽景深范围内的清晰图像, 避免使用电动机构和复杂、 精密的机械部件, 显著的节约了空间和成本。 另一方面, 在本发明中, 通常至少可以从一个感光象素层获得较为完整的图像信息, 从而图像具有相当高的清晰度, 且无需繁琐的数学计算。
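上述说明例中两个感光象素层所需的预设间隔, 可以按成像公式做一个数量级估算(以下为示意演算, 镜头焦距取 4mm 仅为假设, 非本发明限定的参数):

```python
def image_distance(f, u):
    # 高斯成像公式 1/f = 1/u + 1/v  =>  v = f*u/(u - f)
    return f * u / (u - f)

f = 4.0                               # mm, 假设的手机镜头焦距
v_far = image_distance(f, 2000.0)     # 第一感光象素层: 对焦约 2m 处的远景
v_near = image_distance(f, 400.0)     # 第二感光象素层: 对焦约 40cm 处的近景
gap = v_near - v_far                  # 两层之间所需的预设间隔(透光层厚度)
print(round(gap * 1000, 1))           # 以微米计, 约 32.4 µm
```

即在该假设下, 两层之间相隔约 0.03mm 厚的透光层, 即可让两层分别覆盖远景与近景两个景深范围。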
本发明将通过实施例描述这种新型的、 威力巨大的混合多光谱感光象 素组、 感光器件和系统。 这些优选实现方式, 仅仅是作为举例来说明本发 明的优点和实现方法而已, 而不是为了限制这些发明的保护范围。
对于相关业界的有识之士而言, 本发明的上述及其它目的和优点, 在 阅读过下面的优选的带有多个插图解释的实现案例的细节描述之后, 将是 十分明显的。 附图说明
图 1是一个光谱分布的示意图。 可见光一般是指 390nm到 760nm波长的光。 一般由棱镜的分光效应而从可见光分出来看到的兰光波长在 440 - 490nm, 绿光波长在 520 - 570nm, 红光波长在 630 - 740nm, 而在感光器件的设计中, 一般把 390 - 500nm划为兰色区, 500 - 610nm划为绿色区, 610 - 760nm 划为红色区, 但这种红、 绿、 兰的谱段的划分并不是绝对的。 图中的红、 绿、 兰、 青(兰绿复合)、 和黄(绿红复合)的波形, 是原色感光象素或补色 (复合色)感光象素所需求的理想的波长响应曲线。 如果作为基色的原色感光象素或补色 (复合色)感光象素不具备类似的波长响应曲线, 则很难重建人类所能看到绝大部分色彩。
图 2是感光象素的 3T读取电路。
图 3是感光象素的 4T读取电路。
图 4 是本人在 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 )和《一种感光器件及其读取方法、 读取电路》 (中国申请 号: 200910106477. 7)提出的四点共享读取电路。
图 5 是本人在 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 )和《一种感光器件及其读取方法、 读取电路》 (中国申请 号: 200910106477. 7)提出的两层六点共享读取电路。
图 6 是本人在 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 )和《一种感光器件及其读取方法、 读取电路》 (中国申请 号: 200910106477. 7)提出的两层八点共享读取电路。
图 7是本人在《一种感光器件及其读取方法、读取电路》(中国申请号: 200910106477. 7)提出的任意 N点共享读取电路。
图 8 是本人在 《多光谱感光器件及其制作方法》 (中国申请号: 200810217270. 2 )和《多光谱感光器件》 (中国申请号: 200910105372. X) 中提出的, 上下层感光象素在感兴趣的光谱上互补或正交的双层感光器件的示意图。 这种感光器件通过采用精心选择的彩色图案和排列, 可以得到非常多优秀的双层感光器件。这些感光器件可以用于正面感光, 背面感光, 和双向感光。这些方法和原理, 同样也可适用于本发明的多光谱感光器件。
图 9 是本人在 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 ) 中提出的一种在不同色彩象素间实现电荷合并的子采样方法。 这种方法同样适用于本发明的多光谱感光器件。
图 10是本人在 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 ) 中提出的采用色彩空间变换来实现的一种象素合并和子采样方法。 这种方法同样适用于本发明的多光谱感光器件。
图 11 是本发明提出的一种用于景深扩展的两层感光器件的结构示意 图, 其中透光层的厚度, 由所希望的两个感光平面的象距差(V2-V1 )来决 定。
图 12 是本发明提出的一种用于景深扩展的三层感光器件的结构示意 图, 其中透光层的厚度, 由所希望的两个感光平面的象距差 (V2-V1 或 V3-V2 ) 来决定。
图 13是第一种利用多层感光器件实现景深扩展的原理示意图。在这个示意图里, 位于不同距离的物体, 将清晰地聚焦在多层感光器件的不同感光象素层。 因此, 位于这三个距离之间的任一物体, 都将清楚或者比较清楚地聚焦在一个,或者两个感光象素层, 从而达到景深扩展的效果。 图中, U1, U2, U3是物距(即物体到镜头的距离), V1, V2, V3是象距(即感光象素层到镜头的距离)。
图 14是第二种同时利用特殊的镜头设计方法和多层感光器件来实现 更好的景深扩展效果的原理示意图。对一般光学系统而言, 波长更短的光, 焦距更短。 因此, 对镜头进行特别的设计,可以让波长更短的光,聚焦在离 镜头更近的感光象素层,或者说, 离光源更近的感光象素层; 而将波长更长 的光, 聚焦在离镜头更远的感光层,或者说, 离光源更远的感光象素层; 而 将波长中等的光,聚焦在中间的感光层。这样, 这种成像系统同时结合多光 谱和多象距的特点, 就能极大地扩展景深。 这种系统, 对于微距照相, 有 着无比的优势。
图 15是实现图 11所示的多景深感光器件的感光象素级别的原理示意 图。 通过调整透光层的厚度, 可以调整两个感光象素层之间的距离, 从而 让两个感光象素层的感光象素, 分别对应不同的景深。 该例中, 上下两个 感光象素层均采用半导体感光象素层。
图 16 ( a )、 (b )、 (c )、 (d )是实现图 11所示的多景深感光器件的另一种感光象素级别的原理示意图。 同样, 通过调整透光层的厚度, 可以调整两个感光象素层之间的距离, 从而让两个感光象素层的感光象素, 分别对应不同的景深。 该例中, 上层的感光象素层采用化学镀膜感光象素层, 而下层的感光象素层采用半导体感光象素层, 显然二者可以互换而不影响其实现多景深的效果。
图 15和图 16都只描绘了感光象素的情况,其它读取电路和辅助电路, 因为可以与现有的相同, 都被省略。
图 17 ( a )、 (b )、 (c )、 (d )是实现图 11所示的多景深感光器件的另一种感光象素级别的原理示意图。 同样, 通过调整透光层的厚度, 可以调整上层的一个感光象素层与下层的另两个感光象素层之间的距离, 从而让不同感光象素层的感光象素, 分别对应不同的景深。 在该例中, 上层的一个感光象素层采用化学镀膜感光象素层, 下层的另两个感光象素层采用半导体感光象素层, 注意到, 图 17 ( a )、 ( b ) 中的两个半导体感光象素层布置在一个半导体基层的两面, 图 17 ( c )、 ( d ) 中的两个半导体感光象素层布置在一个半导体基层的一面。 照射方向可以是该半导体基层的正面或是背面。 另需注意的是, 由于半导体的透光性限制, 半导体基层厚度一般较薄, 通常不满足景深扩展所需的感光象素层间隔距离的需求。 因此, 两个半导体感光象素层更多的用来实现多光谱的需求。
图 18 ( a )、 (b ) 是实现图 11所示的多景深感光器件的另一种感光象 素级别的原理示意图。 同样, 通过调整透光层的厚度, 可以调整上层的一 个感光象素层与下层的另两个感光象素层之间的距离, 从而让不同感光象 素层的感光象素, 分别对应不同的景深。 在该例中, 上层的一个感光象素 层采用化学镀膜感光象素层, 下层的另两个感光象素层分别采用半导体感 光象素层和化学镀膜感光象素层。 中间的半导体感光象素层(的两面)可 以含有读取三个感光象素层所必需的读取象素和采样电路。
图 19 ( a )、 (b ) 是实现图 12所示的多景深感光器件的一种感光象素 级别的原理示意图。 注意到, 在该例中, 从上到下依次布置化学镀膜感光 象素层、 第一透光层、 第一半导体感光象素层、 第二透光层、 第二半导体 感光象素层。 第一半导体感光象素层和第二半导体感光象素层分别实现在 不同的两个半导体基层上, 化学镀膜感光象素层与第一半导体感光象素层 之间的距离通过调整第一透光层的厚度实现, 第一半导体感光象素层与第 二半导体感光象素层之间的距离通过调整第二透光层的厚度实现。 读取和 采样电路可以在位于中间的第一半导体感光象素层实现 , 也可以分布在两 个半导体感光象素层。
对于具有两层半导体感光象素层的实施例, 例如图 8和图 15等, 如 果将其中一个感光象素层中的感光象素去掉, 做成专门用于读取电路和信 号处理的层, 则可以得到如图 20-23 所示的本发明提出的带有(具有跨层 读取功能的)全局电子快门的感光器件。 图 20-23仅是展示了该带有(具有 跨层读取功能的)全局电子快门的感光器件的感光象素层和不感光的转移 和读取象素层, 显然, 结合前述, 当保留定焦在不同景深的多个感光象素 层时, 则可以得到带有(具有跨层读取功能的)全局电子快门的多景深感光 器件。
图 20是本发明提出的两行共用一行转移象素(读取电容)的示意图。 这实际上是 《一种感光器件及其读取方法、 读取电路》 (中国申请号: 200910106477. 7)中隔行扫描方式的一种新的实现。在这里,转移象素和感 光象素不在同一层, 因而可以获得更好的感光面积的使用效率, 但同时增 加了一倍的快门速度。重要的是,这种方式可以用于以化学感光材料 (如量 子感光膜)为感光象素层的感光器件。
图 21是利用半导体的不良透光性能,将半导体基层厚度增加到一定厚 度, 使得下层的象素感不到光。 然后, 利用金属穿孔或是表面走线, 外部 连接的方式, 将上层的感光象素信号, 通过二极管或读取放大开关电路引 到不感光象素层的读取象素上, 在那里进行采样读取, 从而将一个两层的 感光器件退化成一个(具有跨层读取功能的)全局电子快门的单层感光器 件。 这个器件在结构上是双层的, 但在效果上却是单层的。 当这种方式用 于图 17 (a)所示的多层感光器件时, 就能得到(具有跨层读取功能的)全局 电子快门的多景深感光器件。
图 22是本发明提出的以传统的(CMOS和 CCD)半导体电路为基础的带有(具有跨层读取功能的)全局电子快门的多光谱两层感光器件的示意图。 同样地, 感光象素信号到不感光读取象素上的转移, 由二极管或放大开关电路来控制的。
图 23是本发明提出的另一种以化学感光材料 (如量子感光膜)为基础的带有 (具有跨层读取功能的) 全局电子快门的多光谱两层感光器件的示意图, 其中感光象素层采用化学感光材料(如量子感光膜),而读取电路和信号处理层则是 CMOS半导体层。注意到在该图示例中,每一个感光象素都对应有一个不感光的电荷转移象素, 用于实现全局电子快门。 这也是多层感光器件的为简单实现全局电子快门而特意进行的一种退化。
图 24是本人在《一种感光器件及其读取方法、 读取电路》 (中国申请号: 200910106477. 7)提出的一种同时采用主动象素和被动象素来读取感光象素信号的读取电路。 采用这种方法的好处是, 能够极大地扩大感光器件的动态范围, 并且成倍地节省图像预览时的功耗。 这种混合读取电路在高灵敏度的多层多光谱感光器件和带有全局电子快门的多光谱感光器件里尤其有用。
图 25 是 《一种多光谱感光器件及其采样方法》 (中国申请号: 200910105948. 2 ) 中用来描述该发明中提出的象素合并和子采样方法的采样控制电路示意图。 本发明也将用到这种新型的象素合并和子采样方法。 具体实施方式
下面通过具体实施方式结合附图对本发明作进一步详细说明。
本发明将提出的多景深感光器件, 其主要用途是景深扩展, 即手机行业里目前所称的 EDoF (即 Extended Depth of Focus)。 景深扩展尤其在照相手机中有着非常广泛的应用, 然而, 目前的景深扩展主要使用光学和数学手段, 通常是利用光学变焦或自动对焦的方式实现景深调整, 这要求电动机构和复杂、 精密的机械部件的配合, 因而将显著增加空间和成本。
本发明实施例所提出的多景深感光器件, 结合多层感光器件的实现, 包括至少两个可感应到光源的感光象素层,至少两个所述感光象素层之间 按预设距离间隔布置, 使得来自距感光器件特定距离的镜头的不同光信号 被聚焦到不同的感光象素层, 从而不同感光象素层构成了具有不同象距的 感光平面, 可以定焦不同的景深, 由此扩展了感光器件的景深范围, 相当 于从多点光学对焦这一物理手段上实现了自动对焦, 相应可以去除电动机 构和复杂、 精密的机械部件的配合, 显著地节省了空间和成本。
所述感光象素层, 包括化学镀膜感光象素层和半导体感光象素层中的 至少一者。 即上述至少两个感光象素层,可以全部是化学镀膜感光象素层, 或者全部是半导体感光象素层, 或者部分是化学镀膜感光象素层, 部分是 半导体感光象素层。 其中, 化学镀膜感光象素层包括量子点感光象素。 半 导体感光象素层包括 CMOS光敏二极管、 CMOS感光门、 CCD光敏二极管、 CCD 感光门、 和具有双向电荷转移功能的 CMOS和 CCD感光二极管和感光门。
上述的感光象素层, 分别用来感应不同的光信号。 本发明中, 关注的 光信号特性主要包括光信号的频谱特性, 即光的波长, 以及光信号的距离 特性, 即光信号到镜头的距离。 因此, 光信号的不同, 是指两个光信号之 间, 上述两个特性中至少有一者不同, 即两个光信号之间, 可能是波长不 同, 或者距离不同, 或者波长和距离都不同。 当然, 由于自然界常见复色 光, 例如白光, 因此如果需要得到不同波长的光, 一般需要配合镜头设计, 利用例如棱镜的分光效应, 镜头材料或曲面设计对不同波长的光的不同折 射率从而使不同波长的光分离并聚焦在不同的感光象素层上。
在聚焦不同波长的光信号的时候, 一般地, 波长更短的光信号被聚焦 到离镜头更近的感光象素层。 例如, 感光象素层为两层, 则聚焦在离镜头 较近的感光象素层的光信号是紫色、 兰色光、 青色、 或绿色光, 聚焦在离 镜头较远的感光象素层的光信号是绿色光、 红色光、 黄色光、 可见光(白色 光)、 或红外光。 又如, 感光象素层为三层, 则聚焦在离镜头最近的感光象 素层的光信号是紫外光、 兰色光、 或青色光; 聚焦在位于中间的感光象素 层的光信号是绿色光、 兰色光、 黄色光、 红色光、 或可见光(白色光); 聚 焦在离光源最远的感光象素层的光信号是红色光、 黄色光、 可见光、 或红 外光。
在聚焦不同距离的光信号的时候, 一般地, 更远距离的光信号被聚焦 到离镜头更近的感光象素层。 例如, 感光象素层为两层, 聚焦在离光源较 近的感光象素层的是无穷远处的光信号, 聚焦在离光源较远的感光象素层 的是感兴趣最短距离的光信号。 结合聚焦不同波长的光, 还可以进一步设 置为: 无穷远处的紫外光、 兰色光、 青色光、 或绿色光被聚焦到离光源较 近的感光象素层; 感兴趣最短距离的绿色光、 红色光、 黄色光、 可见光(白 色光)、 或红外光被聚焦到离光源较远的感光象素层。
又如, 感光象素层为三层, 聚焦在离光源最近的感光象素层的是无穷 远处的光信号, 聚焦在离光源最远的感光象素层的是感兴趣最短距离的光 信号, 聚焦在位于中间的感光象素层的是无穷远处与感兴趣最短距离的一 个中间距离的光信号。 结合聚焦不同波长的光, 还可以进一步设置为: 无 穷远处的紫外光、 兰色光、 青色光、 或绿色光被聚焦到离光源最近的感光 象素层; 无穷远处与感兴趣最短距离的一个中间距离的绿色光、 兰色光、 黄色光、 红色光、 或可见光(白色光)被聚焦到位于中间的感光象素层; 感 兴趣最短距离的红色光、 黄色光、 可见光、 或红外光被聚焦到离光源最远 的感光象素层。
实施方式中,感兴趣最短距离包括 2mm, 5mm, 7mm, 1cm, 2cm, 3cm, 5cm, 7cm, 10cm, 20cm, 30cm, 40cm, 50cm, 60cm, 70cm, 80cm, 100cm, 150cm。 所谓感兴趣最短距离, 是指用户关注的景物到镜头的最近距离。 例如, 感兴趣最短距离是 2mm,是指用户关注的景物到镜头的最近距离是 2mm, 当景物到镜头的距离小于 2mm, 则不再关注。
图 13显示了距离与聚焦平面的关系。在该图中, 位于不同距离的物体, 将清晰地聚焦在多层感光器件的不同感光象素层。 因此, 位于这三个距离之间的任一物体, 都将清楚或者比较清楚地聚焦在一个,或者两个感光象素层, 从而从同一个感光器件, 能够同时得到它们的清晰图像。
图 14显示了波长与聚焦平面的关系, 对一般光学系统而言, 波长更短的光, 焦距更短。 因此, 通过对镜头进行设计, 可以让波长更短的光, 聚焦在离镜头更近的感光象素层; 波长更长的光, 聚焦在离镜头更远的感光象素层; 波长中等的光,聚焦在中间的感光象素层。这样, 处于不同距离的物体, 总有一个颜色, 在一个感光层中是清楚的。 于是, 这种成像系统同时结合了多光谱和多象距的特点, 每个感光层有自己的景深范围, 并且对于不同波长的光, 景深距离和范围不同,可以将各个感光层的景深范围进行整合,能够极大地扩展景深, 对于微距照相, 有着无比的优势。
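"波长越短、焦距越短"这一点, 可以用柯西色散近似和薄透镜关系做一个粗略的数值验证(以下色散系数取 BK7 玻璃的常用近似值, 仅作示意, 并不代表本发明的具体镜头设计):

```python
def refractive_index(lam_nm, A=1.5046, B=4200.0):
    """柯西色散近似 n(λ) ≈ A + B/λ² (λ 以 nm 计; 系数为 BK7 的常用近似)。"""
    return A + B / (lam_nm ** 2)

def focal_length(lam_nm, f_ref=4.0, lam_ref=550.0):
    """薄透镜 1/f ∝ (n - 1): 以绿光(550nm)焦距 f_ref 为基准换算其它波长的焦距。"""
    return f_ref * (refractive_index(lam_ref) - 1) / (refractive_index(lam_nm) - 1)

# 兰光焦距 < 绿光焦距 < 红光焦距, 故短波长的光聚焦在离镜头更近的感光象素层
print(focal_length(450.0), focal_length(550.0), focal_length(650.0))
```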
由于可以从不同感光象素层得到具有不同清晰度的多个图像, 因而这 些图像之间可以相互参照, 通过图像的整合、 选用和舍弃, 以及插值、 增 强、 或反卷积等数学处理,而获得清晰的图像。
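上述整合、选用过程可以用如下极简的 Python 草图示意(假设各层图像已经对齐, 清晰度以局部梯度能量近似; 函数名与具体算法均为示例, 并非本发明规定的数学处理):

```python
import numpy as np

def fuse_by_sharpness(layers):
    """逐象素选取梯度能量(清晰度的一种近似)最大的图层, 融合成一幅图。"""
    stack = np.stack([np.asarray(im, dtype=float) for im in layers])
    gy, gx = np.gradient(stack, axis=(1, 2))
    energy = gx ** 2 + gy ** 2          # 每层每个象素的"清晰度"
    best = np.argmax(energy, axis=0)    # 每个象素最清晰的层号
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# 玩具例子: 层 a 只在左半有细节(清晰), 层 b 只在右半有细节
a = np.zeros((8, 8)); a[:, :4] = np.arange(4) * 3.0
b = np.zeros((8, 8)); b[:, 4:] = np.arange(4) * 3.0
fused = fuse_by_sharpness([a, b])       # 左半取自 a, 右半取自 b
```

实际系统中还可在此基础上加入插值、 增强或反卷积等后处理。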
实施方式还包括在上述多景深感光器件中,实现一种具有跨层读取功能 的全局电子快门,包括多个不感光的转移和读取象素,所述不感光的转移和 读取象素的每一个能够用来转移和读取处在其它层的至少一个感光象素的 电荷或电压值。 从而多个不感光的转移和读取象素可同时转移并读取一层 或多层感光象素层的电荷或电压值。 多个不感光的转移和读取象素可以与 感光象素位于同一象素层,显然,这意味着该象素层的感光灵敏度的下降。 多个不感光的转移和读取象素也可以与感光象素位于不同的象素层, 即形 成独立的感光象素层和独立的不感光的转移和读取象素层, 显然, 这意味 该具有跨层读取功能的全局电子快门只能在两层或多层感光器件中实现。
可以为每一个感光象素层设置一个紧邻的对应的不感光象素转移和读 取层,所述不感光象素转移和读取层可同时转移对应的感光象素层的所有 象素的电荷或电压值; 或者可同时转移对应的感光象素层的奇数行或偶数 行象素的电荷或电压值。 图 20显示了两行共用一行读取电容的设计,以实 现逐行扫描的功能。 不感光的转移和读取象素层, 可以半导体电路做成。
图 21显示了一种两层感光器件的退化实现,以获得带有全局电子快门的单层感光器件。 这种方法利用了半导体材料的不良透光性, 将半导体基层加厚,使得底层感不到光, 而只能用于做象素读取。将这个方法用于图 17(a)所示的三层感光器件时, 就能得到带全局电子快门的两层多景深感光器件。
图 22和图 23显示了一种具有跨层读取功能的全局电子快门的象素级 别的实现。
文中出现的远近,上下等位置关系,均是指以光源为基准的相对位置。 例如, 上层感光象素层和下层感光象素层的描述, 是指感光象素层水平放置, 光源从上方垂直照射向感光象素层而言。 显而易见, 本文中的上下关系实际上具有更加广泛的含义, 即, 例如感光面垂直放置, 光源从左侧或右侧, 或者从前侧或后侧垂直照射向感光面, 则所谓的上下关系, 则等同于前后关系或者左右关系。 不失一般性, 本领域技术人员可以理解, 其中的上方、 下方等描述, 可以用左侧、 右侧、 前侧、 后侧等描述进行等同的替代。 不同种类的感光象素层, 如化学镀膜感光象素层或半导体感光象素层, 其上下关系并无限制, 何者处于上方, 何者处于下方是可以根据需要任意设置的。 下文所述的基层顶面和底面, 同样表达类似含义, 即以水平放置, 光源从上方垂直照射向基层而言, 此时, 位于上方的基层表面称为顶面, 位于下方的基层表面称为底面。 可以理解, 当基层垂直放置, 光源从左侧或右侧, 或者从前侧或后侧垂直照射向基层时, 可等同替代为前侧面和后侧面, 左侧面和右侧面的表述。
此外, 也需特别注意术语 "可感应到光源" 和 "感光" 之间的差异, 感光象素层的 "感光", 指的是该象素具有感光能力, "可感应到光源", 指 的是感光象素是否能够感应到光源的结果, 即感光象素的感光能力是否得 到发挥, 例如, 由于半导体的透光性限制, 当在一个半导体基层的顶面和 底面各布置一个半导体感光象素层时, 如果该半导体基层厚度超出半导体 的透光性限制, 则光源向该半导体基层照射时, 只有顶面的半导体感光象 素层能够感应到光源, 而底面的半导体感光象素层受半导体基层厚度限制 而无法感应到光源, 则称顶面的半导体感光象素层为可感应到光源的感光 象素层, 即感光象素的感光能力得以发挥; 称底面的半导体感光象素层为 不可感应到光源的感光象素层, 即感光象素的感光能力未能发挥。 注意到 在后文中, 利用不可感应到光源的感光象素层, 可以形成不感光的转移和 读取象素层。
当化学镀膜感光象素或半导体感光象素为双向感光象素时, 则涉及到 感光选向问题, 即尽管能够双向感光, 但不能接受同一时刻两个方向的光 照, 需要在一个时刻选择一个方向的光源照射, 感光选向方式可以为隔离 选向、 分时选向、 分区选向、 或象素选向等等, 也就是说, 可以通过例如 遮光膜遮挡等方式来实现分时刻、 分区域、 分象素的感光选向。 双向照射 的情况,例如图 8所示。
感光象素层, 大致相当于垂直于光源照射方向的感光平面, 在这样的 感光平面中, 布置有多个感光象素(通常形成为多行多列的象素阵列), 对 于多个感光象素层中的每一感光象素层, 其可能是平面混合型, 也就是既 布置有化学镀膜感光象素, 也布置有半导体感光象素。 另外的情形是, 同 一感光象素层中仅布置一种感光象素,如此,将形成化学镀膜感光象素层, 或者半导体感光象素层。
在实施方式中, 感光器件的同一位置(即由一层感光象素层的该象素位置上穿透而来的光照射到另一感光象素层上的位置)但不同层的感光象素, 分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个互补谱段或子谱段; 或者分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个正交谱段或子谱段。其中,互补谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱;
正交谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱。
实施方式包括让感光器件中至少一层感应两个不同的光谱(即射频)谱段。对于每一感光象素层而言,其象素阵列的色彩排列包括同一排列(象素阵列中的象素色彩相同)、 水平排列 (象素阵列中的同一行象素色彩相同)、 垂直排列 (象素阵列中的同一列象素色彩相同)、 对角排列 (象素阵列中的同一对角线的象素色彩相同)、广义贝叶排列(象素阵列中的一条对角线上的象素色彩相同, 另外一条对角线上的象素色彩不同)、 YUV422 排列、横向 YUV422排列、蜂窝排列、均布排列(四个象素均匀交错等距排列)等。
需要注意的是, 本文中的术语 "布置", 包含了各种在例如半导体基 层或透光层上形成化学镀膜感光象素层或半导体感光象素层的制作工艺。 例如, 半导体基层为一个 N 型硅晶体基板,在该基板一面上的一个象素位 置, 根据色彩的深度需求, 由该象素位置表面向基板内部做一定深度的 P 杂质置入,形成一个 P掺杂层,该 P掺杂层即形成为一个半导体象素,如果 在该 P摻杂层做另一定深度的 N杂质置入,形成在 P掺杂层中的 N掺杂层, 该 N摻杂层即形成为另一半导体感光象素 (与前一 P掺杂层的半导体感光 象素在不同感光象素层, 但象素位置相对应), 可以按照《多光谱感光器件 及其制作方法》(PCT/CN2007/ 071262 )提供的方法, 在 390nm附近, 500nm 附近, 610nm附近, 和 76 Onm附近设置分层线, 使得任一分层线上下的对 应点象素感应互补或正交的光谱。 图 1给出了一个分层线的设置的例子, 即通过不同深度的杂质掺入, 形成不同的色彩。 在基板的该面进行化学镀 膜溶液的涂抹加工, 可以形成化学镀膜感光象素层, 由于制作或者加工工 艺的多样性, 在本文中均以 "布置" 进行表述。
上述的两层半导体感光象素在不同深度上的布置, 实现了在基板一个 表面上的同一象素位置可以感应到至少两个语段, 从而提供了在该表面上 的象素图案排列上的更好的灵活性以及更多的象素布置, 能够大幅提高感 光器件的灵敏度,解析度,和动态范围。
对于上述在半导体基板某一面的不同深度的掺杂加工, 其同一位置最 多布置两层感光象素, 这是由于在同一位置布置 3层, 加工上难度极大, 同时在布线上, 由于各层间的引线需要相互隔离, 3 层引线显然造成了布 线上的困难。 而在本发明中, 采用同一面上最多布置两层上述的半导体感 光象素层, 并可结合平面上的象素图案排列完成彩色重建, 因而可以实现 更好的彩色感光性能。 由于同一面上最多以深度掺杂方式布置两个半导体 感光象素层, 因而明显降低了立体加工工艺的难度, 且在布线上, 也相对 简单。
对于基板, 可以采用单面或者双面加工工艺, 从而形成单面感光器件或者双面感光器件。 双面感光器件, 对于上述的深度掺杂加工, 如果采用两个半导体感光象素层一者布置在基板的顶面, 另一者布置在基板底面的双面布置方式, 则对于每一面而言, 其都简化为平面加工工艺, 可以在一面上完成一个感光象素层的平面加工后, 将基板进行翻转而在另一面同样以平面加工工艺完成另一感光象素层的加工, 使得加工工艺近似于现有的单面单层感光器件的加工工艺, 相对于上述的同一面的两层掺杂的立体加工而言, 更为简单。 另一方面, 沿光照方向, 在基板的某一位置, 可以布置多层感光象素。
半导体感光象素层通常制作在半导体基层上, 为了实现景深扩展, 在 实际制作中, 通常采用增设透光层 (例如透明玻璃层) 来调整不同感光象 素层之间的距离, 例如, 在一半导体基层上制作一层或多层半导体感光象 素层, 而后在该半导体基层上放置一透光层, 而后, 在该透光层上涂抹加 工出一层化学镀膜感光象素层。 通过透光层的不同厚度设置, 相当于预设 化学镀膜感光象素层和半导体感光象素层之间的间隔距离, 进而实现景深 扩展。
在很多的应用里, 在化学镀膜感光象素层或半导体感光象素层的正面, 背面, 或双面并不涂任何滤光膜。 但在另外一些应用里, 如对于色彩还原要求特别高的专业相机或摄像机, 实施方式包括使用滤光膜的方式。 滤光膜设置在化学镀膜感光象素层或半导体感光象素层中的全部或部分感光象素的正面, 背面, 或双面。 滤光膜的选频特性包括红外截止滤波、 兰色带通、 绿色带通、 红色带通、 青色带通、 黄色带通、 品红色带通、 或可见光带通。 滤光膜的使用是通过牺牲少数象素的灵敏度, 来去除不想要的光谱的影响, 减少上下左右象素之间的干涉 (crosstalk),或者获得正交性更好的三原色或是更纯正的补色信号。
实施方式包括让所述多景深感光器件的多层感光象素层的相邻两层 各自使用自己的读取电路。
实施方式包括让所述多景深感光器件的多层感光象素层的相邻两层 共用置于其中一层的读取电路。
实施方式包括让所述多景深感光器件的读取电路位于半导体感光象 素层,或独立的读取电路层。
所述多景深感光器件的读取电路的实施方式包括采用 《一种多光谱感 光器件及其采样方法》(中国申请号: 200910105948. 2 )和《一种感光器件 及其读取方法、 读取电路》 (中国申请号: 200910106477. 7)中的象素读取 和子采样方法。
实施方式包括在所述多景深感光器件的信号读取电路里采用主动象 素读取电路、 被动象素读取电路、 或主动象素与被动象素混合读取电路。
所述主动象素包括 3T、 4T、 5T、 或 6T主动象素。 3T和 4T的主动象素结构分别如图 2和图 3所示。
所述读取电路的共享方式包括无共享方式、 单层或上下层 4点共享方 式、 单层或上下层 6点共享方式、 单层或上下层 8点共享方式、 或单层或 上下层任意点共享方式。 4点共享方式, 6点共享方式, 8点共享方式, 和 任意点共享方式,分别如图 4,图 5, 图 6, 和图 7所示。
实施方式中, 多景深感光器件的所述读取电路包括用于对每一感光象 素层的象素阵列中的紧邻的同行异列、 异行同列、 或异行异列的象素间进 行两两合并采样, 获得第一合并象素的采样数据的第一合并单元; 以及用 于对第一合并单元得到的第一合并象素的采样数据进行合并采样以获得第 二合并象素的采样数据的第二合并单元。
实施方式还包括: 读取电路还包括第三合并单元, 用于对第二合并单 元得到的第二合并象素的采样数据进行合并采样以获得第三合并象素的采 样数据。
在本发明的一种实施例中, 所述的感光器件, 所述第一合并单元或第 二合并单元的象素合并方式为相同或不同色彩象素间的电荷相加方式或信 号平均方式, 其中不同色彩象素间的象素合并方式遵照色彩空间变换的方 式, 以满足色彩重建的要求。
上述的第一合并象素和第二合并象素来自于将子采样至少分为两个过 程的处理, 即第一合并釆样过程和第二合并采样过程。 第一合并采样过程 和第二合并采样过程, 通常发生在象素的行(合并)采样和列 (合并)采 样之间, 主要对模拟信号进行, 除电荷相加部分通常只在第一合并采样过 程中做以外, 其次序和内容通常是可以交换的。 此外, 也可以包括第三合 并采样过程, 第三合并釆样过程发生在模数转换之后, 主要对数字信号进 行。
对于第一合并采样过程,是取象素阵列中两个紧邻的象素来进行合并。 一方面, 完成了紧邻象素的合并, 将合并后的象素称为第一合并象素, 需要理解的是, 第一合并象素只是为本发明描述之便, 利用该概念来指代进行第一合并过程后的象素, 而不代表物理上, 在象素阵列中存在一个 "第一合并象素";将两个紧邻象素合并采样后的数据称为第一合并象素的采样数据。 紧邻, 系指两个象素之间从水平, 垂直, 或对角方向上来看紧挨着, 中间没有其它象素。 紧邻的情况包含同行异列, 异行同列, 或异行异列。 一般而言, 在这种合并中, 信号将至少是两个象素的信号平均, 而噪声则会降低, 因此, 合并后, 至少可以将信噪比提高 √2 倍, 且这种合并可以在相同或不同色彩的象素之间进行。 另一方面, 由于两个合并的色彩可以不同, 即色彩相加或平均, 从色彩的三原色原理可知, 两种原色的相加是另一种原色的补色, 就是说, 两个不同原色的象素合并, 产生另一种原色的补色, 从原色空间, 变换到了补色空间, 仅仅是发生了色彩空间变换, 仍然可以通过不同的补色而完成彩色重建。 也即通过这种方式, 既能实现不同色彩的象素合并以提高信噪比, 同时又能够进行彩色重建。 整个子采样过程也因此得到优化, 更加适应大数据量的象素阵列的高速需求。 色彩空间变换的一个基本要求是, 变换后的色彩的组合, 能够 (通过插值等手段) 重建所需要的 RGB (或 YUV, 或 CYMK ) 色彩。
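上述"两个不同原色相加得到补色、且仍可完成彩色重建"的原理, 可以用一段简短的 Python 验证: 补色空间 CyYeMg 与 RGB 之间是线性可逆的(示意代码, 假设简单的加性色彩模型, 函数名为示例):

```python
def rgb_to_cyyemg(r, g, b):
    """紧邻的不同原色象素两两(电荷)相加: 青=绿+兰, 黄=红+绿, 品红=红+兰。"""
    return g + b, r + g, r + b            # (Cy, Ye, Mg)

def cyyemg_to_rgb(cy, ye, mg):
    """补色空间到 RGB 的线性逆变换, 说明合并后仍满足色彩重建要求。"""
    r = (ye + mg - cy) / 2
    g = (cy + ye - mg) / 2
    b = (cy + mg - ye) / 2
    return r, g, b

cy, ye, mg = rgb_to_cyyemg(0.8, 0.5, 0.2)
print(cyyemg_to_rgb(cy, ye, mg))          # 在浮点误差内还原 (0.8, 0.5, 0.2)
```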
需要了解, 由于通常象素阵列包含多个象素, 第一合并采样只是将两 个象素进行合并, 显然, 合并形成的第一合并象素也具有多个。 对于不同 的第一合并象素, 其采用的色彩合并方式可以相同, 也可以不同。 当第一 合并全部在相同的色彩间进行时, 我们将之称为同色合并方式; 当第一合 并全部在不同的色彩间进行时, 我们将之称为异色合并方式; 当第一合并 部分在相同色彩间进行、 部分在不同色彩间进行, 我们将之称为混杂合并 方式; 当对象素阵列中的一些多余的色彩进行抛弃 (当然, 抛弃是选择性 的, 例如, 不能因此而影响到彩色重建), 这样的色彩合并方式称为选择性 抛弃多余彩色方式。
显然的, 第二合并过程是对多个第一合并象素的操作, 同样的, 可以 将色彩相同的第一合并象素进行合并; 也可以将色彩不同的第一合并象素 进行合并 (当然, 这种情况下可能导致三原色的全部相加而无法重建出彩 色)。
上述的同色合并、 异色合并、 混杂合并等方式, 是将合并采样做基于色彩的分类, 另外, 从合并采样的位置选取的角度, 第一合并过程和第二合并过程的合并采样方式包括: 直接输出到总线的信号自动平均方式、 跳行或跳列方式, 逐个采样方式, 以及这些方式的两种或三种的同时使用。 除电荷相加部分通常只能在第一合并采样过程中做以外, 第一合并过程和第二合并过程, 除了次序的不同外, 其方式都是相同和可以交换的。
所谓直接输出到总线的信号自动平均方式, 就是, 将需要合并的信号 (色彩相同或是不同), 同时输出到数据采集总线上去, 通过(电压)信号 的自动平衡, 来获得需要合并信号的平均值。 所谓跳行或跳列方式就是跳 过一些行或列, 从而通过减少数据量的方式来实现 (合并) 采样。 所谓逐 个采样方式, 实际上就是不做任何合并, 依此读取原来的象素或第一合并 象素。 这三个方式有一些是可以同时使用的, 例如, 跳行或跳列方式可与 直接输出到总线的信号自动平均方式或逐个采样方式同时使用。
第三合并采样过程的子采样方式包括色彩空间变换方式、 后端数字图 像缩放方式、 以及这两个方式的串行使用。 第一和第二合并过程主要是在 模拟信号上进行, 而第三子采样过程主要是在数字信号上进行, 即模数转 换之后进行。 通过将处于不同空间位置的三个或四个色彩象素, 当作同一 个点上的值而转换到另一个色彩空间, 就又可实现水平和 (或) 垂直方向 上的数据减少, 从而达到子采样的效果。 而数字图像缩放方式, 是最为直 观常用的子采样方式。
在合并采样时可以实现电荷相加。 目前的合并采样几乎都是只做到了电压或电流信号的平均, 这种方式在合并 N点时, 最多只能将信噪比提高 √N 倍。 这是因为现有的合并采样都是 N个相同色彩的象素共用一根输出线的方式进行合并采样, 在这根输出线上, 各个象素的电压或电流信号必然要进行(自动的)平均, 因此, 其信噪比的提高只是在于噪声合并后降低了, 从而使信噪比提高最多 √N 倍。 而采用本发明的电荷相加方式, 例如通过读取电容存储电荷, 实现电荷的累加, 从而信号可以进行叠加而使得信噪比可以提高至少 N倍, 比信号平均的方法高至少 √N 倍。 也就是说, 将 N个信号以电荷相加的方法合并, 理论上最高可以达到 N²个信号相平均的效果或更好 (如下面所述),这是效果非常显著的提高信噪比的手段。
紧邻象素相加, 还带来另外一个显著的效果, 就是, 象素之间的相互干扰(cross-talking)效果被减弱。这是由于本来相互干扰的色彩,现在是合法的一体, 也就是说, 原来属于噪声的一部分信号, 现在成了有效的信号部分, 因此, N 个信号电荷相加带来信噪比的改进, 可以接近理论上的上限, 即 N 倍, 从而, 相当于 N²个信号相平均的效果。
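信号平均与电荷相加的信噪比差别, 可以用一个简单的数值演算说明(假设噪声以每次读出引入的读出噪声为主, N=4, 数值均为示意假设):

```python
import math

s, sigma = 100.0, 10.0   # 单象素信号与读出噪声标准差(假设读出噪声为主)
N = 4                    # 合并的象素数

snr_single = s / sigma                        # 单象素信噪比: 10
snr_avg = s / (sigma / math.sqrt(N))          # 信号平均: N 次读出再平均, 提高 √N 倍
snr_add = (N * s) / sigma                     # 电荷相加: 先累加电荷, 只读出一次, 提高 N 倍
print(snr_avg / snr_single, snr_add / snr_single)   # → 2.0 4.0
```

即 N 倍的信噪比增益, 相当于把 N² 个信号做平均的效果, 与上文的结论一致。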
在全图采样 (即对一个图像的按最高分辨率进行采样)时, 可以采用 逐行扫描、 隔行或跨行读取的方式,不需要提高时钟速度和采用帧緩存器, 将大阵列图像的全图读取帧率在拍单张照时翻倍。如果增加 AD转换器和行 緩存, 那么, 全图读取帧率还可以提高更多。 这个方法对于省去机械快门 有非常重要的价值。
请注意本发明的逐行扫描、 隔行或跨行读取的方式, 与传统电视系统里的场扫描方式( interleaved scanning )是不同的。 传统的场扫描方式, 是隔行扫描, 隔行读取, 因此, 奇数场和偶数场 (无论是感光还是读取) 在时间上差了一场, 即半帧。 而本发明的逐行扫描、 隔行或跨行读取的方式, 象素在感光时间顺序上却是与逐行扫描、逐行读取方式是完全一样的, 只是行的读取次序做了变化。 细节描述请见 《一种多光谱感光器件及其采样方法》(中国申请号: 200910105948. 2 )和《一种感光器件及其读取方法、 读取电路》 (中国申请号: 200910106477. 7) 。
在本发明的一种实施例中, 所述的感光器件, 所述色彩空间变换包括 RGB到 CyYeMgX空间的变换、 RGB到 YUV空间的变换、 或 CyYeMgX到 YUV 空间的变换, 其中 X为 R (红)、 G (绿)、 B (兰)中的任一种。 图 10显示了一 种利用色彩空间变换来实现子采样的一种方式。
实施方式中包括, 上述电荷相加方式通过象素直接并联或将电荷同时 转移到读取电容(FD ) 中完成。
如上所述, 多景深感光器件中, 第一合并单元或第二合并单元的基于 色彩的合并采样方式包括同色合并方式、 异色合并方式、 混杂合并方式、 或选择性抛弃多余色彩合并方式, 且第一合并单元和第二合并单元采用的 合并采样方式不同时为同色合并方式, 也即两个合并单元中至少有一个合 并单元不采用同色合并方式。
如上所述, 第一合并单元或第二合并单元的基于位置的合并采样方式 包括以下几种方式中的至少一种: 直接输出到总线的信号自动平均方式、 跳行或跳列方式、 和逐个采样方式。 亦即这几种基于位置的合并采样方式 可以单独使用, 也可以组合使用。
如上所述, 在上述感光器件中, 可以用色彩空间变换方式和后端数字 图像缩放方式中的至少一种来实现所述第三合并采样单元的合并采样方 式。
图 9显示了一种异色象素电荷合并的方式。
实现上述子采样功能的是如图 25 所示的行地址解码控制器和列地址解码控制器。 行地址解码控制器将输出两类信号, 行选信号 Row [i] (每行一条线) 和行控制矢量信号 RS [i] (每行一条或多条线), 其中 i 为行的标号。 类似地, 列地址解码控制器将输出两类信号, 列选信号 Col [j] (每列一条线) 和列控制矢量信号 T [j] (每列一条或多条线), 其中 j为列的标号。
行选信号 Row [ i]是用来做行的选择,而列选信号 Col [j]是用来做列的 选择。 这是两组相对标准的信号。 行控制矢量信号 RS [ i]是对现有 CMOS行 控制信号的扩展 (每行一条线扩展到每行多条线), 而列控制矢量信号 T [j] , 有的 CMOS感光器件根本没有, 即使有, 也是一列只有一个。
RS [ i]和 T [j]用来控制感光象素的复位, 清零, 感光时间控制, 电荷 转移, 象素合并, 和象素读取。 由于行列的对称性, RS [ i]和 T [j]有很多 种具体的实现方式。 这些信号的具体实现方式并不受限。
如上所述, 多光谱感光器件的全图采样方式包括逐行扫描、 逐行读取方式或逐行扫描、 隔行或跨行读取方式。
实施方式还包括制作一种感光系统,包括上述的多景深感光器件。 所述感光系统用于获取正面,背面,或双向的图像。
所述感光系统包括数码相机,照相手机, 摄像机, 视频或照相监控系 统, 图像识别系统, 医学图像系统, 军用、 消防、 和井下图像系统, 自动 跟踪系统, 立体影像系统, 机器视觉系统,汽车视觉或辅助驾驶系统,电子 游戏系统, 网络摄像头, 红外和夜视系统, 多光谱成像系统, 和电脑摄像 头。 实施方式还包括实现一种景深扩展方法, 包括步骤: 在感光器件中设 置至少两个可感应到光源的感光象素层,并将至少两个所述感光象素层按 预设距离间隔布置, 使得来自距所述感光器件特定距离的镜头的不同光信 号, 被聚焦到不同的感光象素层。
景深扩展方法中, 通过来自不同感光象素层的具有不同清晰度的图像 而获取一幅清晰图像。
实施方式中还包括一种成像方法, 或者说所述感光器件在成像中的应 用, 是设置镜头和包括至少两个可感应到光源的感光象素层的感光器件; 将所述感光器件放置在距所述镜头特定距离, 且至少两个所述感光象素层 之间按预设距离间隔布置, 使得来自镜头的不同光信号被聚焦到不同的所 述感光象素层。
参见图 11-14 , 实施方式中还包括了一种光学成像系统, 包括镜头和 多景深感光器件, 所述多景深感光器件布置在距所述镜头特定距离, 包括 至少两个可感应到光源的感光象素层; 至少所述感光象素层之间按预设距 离间隔布置, 来自距所述感光器件特定距离的镜头的不同光信号, 被聚焦 到不同的感光象素层。
如图 13 , 可以是不同距离上的所有感兴趣波长的光, 分别聚焦在每一 感光象素层; 或者如图 14 , 可以是相同距离上的不同波长的光, 分别聚焦 在每一感光象素层; 也可以是不同距离上的不同波长的光, 分别聚焦在每 一感光象素层。
实施方式包括各个感光象素层聚焦的光, 其波长按各个感光象素层距 所述光学镜头从近到远逐渐增长。 或者在各个感光象素层中, 更远距离的 光信号, 被聚焦在离镜头更近的感光象素层。
例如, 当包括两个可感应到光源的感光象素层, 两个感光象素层分别 位于镜头的第一象距和第二象距, 可以通过光学镜头设计, 将紫外光、 兰 色光、绿色光、青色光、或白色光聚焦在离镜头最近的感光象素层;相应的, 将兰色光、 绿色光、 红色光、 黄色光、 或红外光聚焦在离镜头最远的感光 象素层。
又如, 当包括三个可感应到光源的感光象素层, 三个感光象素层分别 位于镜头的第一象距、 第二象距、 第三象距, 可以通过光学镜头设计, 将 紫外光、 兰色光、 绿色光、 或青色光聚焦在离镜头最近的感光象素层;相应 的, 将红色光、 黄色光、 可见光、 或红外光聚焦在离镜头最远的感光象素 层;相应的, 将绿色光、 黄色光、 可见光、 或红色光聚焦在中间的感光象素 层。
又如, 当包括四个可感应到光源的感光象素层, 四个感光象素层分别 位于镜头的第一象距、 第二象距、 第三象距、 第四象距, 可以通过光学镜 头设计, 将紫外光、 兰色光、 绿色光、 或青色光聚焦在离镜头最近的感光 象素层; 相应的, 将红色光、 黄色光、 白色光、 或红外光聚焦在离镜头最 远的感光象素层;相应的, 将兰色光、 绿色光、 或青色光聚焦在离镜头第二 近的感光象素层;相应的, 将绿色光、 红色光、 白色光、 或黄色光聚焦在离 镜头第三近的感光象素层。
又如, 当包括两个可感应到光源的感光象素层, 两个感光象素层分别 位于镜头的第一象距、 第二象距, 可以通过光学镜头设计, 将紫外光或可 见光聚焦在离镜头最近的感光象素层;将可见光或红外光聚焦在离镜头最 远的感光象素层。
又如, 当包括三个可感应到光源的感光象素层, 三个感光象素层分别 位于镜头的第一象距、 第二象距、 第三象距, 可以通过光学镜头设计, 将 紫外光或白色光聚焦在离镜头最近的感光象素层;将白色光或红外光聚焦 在离镜头最远的感光象素层;将白色光聚焦在中间的感光象素层。
又如, 当包括四个可感应到光源的感光象素层, 四个感光象素层分别 位于镜头的第一象距、 第二象距、 第三象距、 第四象距, 可以通过光学镜 头设计,将紫外光或白色光聚焦在离镜头最近的感光象素层;将白色光或红 外光聚焦在离镜头最远的感光象素层;将白色光聚焦在离镜头第二近的感 光象素层;将白色光聚焦在离镜头第三近的感光象素层。
需要注意的是, 在上述例子中, 对于包括所有感兴趣波长的光, 例如 白色光, 如果说明白色光被聚焦到不同感光象素层, 则其一般来源于不同 距离, 即例如离镜头最近的感光象素层聚焦的是无穷远处的白色光, 离镜 头最远的感光象素层聚焦的是感兴趣最短距离的白色光。 即当两个感光象 素层聚焦的光信号的频谱特性相同, 则其必须具有不同的距离特性。
本发明的多景深感光器件, 兼具多光谱的优异特性, 可同时获得众多的彩色信号和其它光谱信号, 例如在一种四层感光器件中, 可沿光路按距光源从近到远布置一个感应紫外光的第一化学镀膜感光象素层、 一个感应兰光, 绿光, 或青光的第一半导体感光象素层, 一个感应红光, 黄光, 或绿光的第二半导体感光象素层, 一个感应红外光的第二化学镀膜感光象素层。 其中, 第一半导体感光象素层和第二半导体感光象素层分别实现在两个半导体基层上, 两个半导体基层之间设置一个具有预设厚度的透光层, 第一化学镀膜感光象素层布置在第一半导体感光象素层所在基层的顶面上方; 第二化学镀膜感光象素层布置在第二半导体感光象素层所在基层的底面下方。 由此, 不仅实现了景深扩展, 且可几乎最大程度的利用入射光能量, 在得到彩色的同时, 也得到全光谱的信息, 充分发挥不同感光材料的特点。 这样一种四层多光谱感光器件, 制作难度并不太高。 如果结合采用前文中本人之前发明的先进采样和以电荷合并和色彩变换为显著特征的子采样电路和方法, 更能够大幅地降低感光器件和系统的复杂度,从而为各种应用提供巨大的方便和崇高的性能。
本发明的多景深感光器件, 首先实现的第一种特殊用途是景深扩展, 现有的 EDoF主要是使用了光学和数学的手段实现景深扩展,一般需要借助例如镜头等进行自动对焦, 相比之下, 本发明是直接的通过器件内的不同感光象素层按预设距离间隔设置这样的物理手段实现景深扩展。 其次实现的第二种特殊用途是实现全局电子快门, 现有的全局电子快门(Global Shutter) 主要是使用了读取电路的手段, 相比之下,本发明利用不感光的转移和读取象素,可以在不用机械快门的情况下, 实现高象素的高速拍照。 当这两种实现(即景深扩展和全局电子快门), 在同一个感光器件上得以集成的时候, 多层多光谱感光器件的巨大威力就得以充分的发挥。 因此, 本发明在很多指标和性能上, 大幅超越现有的方法。
本发明的多景深感光器件, 借助于调整不同感光象素层的距离, 除了大幅提高灵敏度以外,还能大幅提高系统景深范围, 从而让图像更清楚, 系统反应速度更快, 应用面更宽, 乃至消除某些应用中的自动对焦需求。 本发明的多景深感光器件在其涵盖的景深范围, 能够快速地获取清晰的图像, 而不需要经过一个调焦的过程。 景深扩展除了能够降低自动对焦的难度和成本外, 甚至在某些应用中如手机照相,微距照相, 或远距照相, 能够彻底消除自动对焦的需求。 景深扩展还能让同一张照片里处于不同距离的物体同时清楚, 这在一些特殊应用中也是极有用的, 而这是自动对焦所不能做到的, 因为现有的自动对焦系统, 只能让某个距离内的物体清楚成像, 而不能让非常宽的一个范围内的物体都清楚。 因此, 本发明的景深扩展实现, 在具有自动对焦能力的系统里, 也仍然具备很大的价值。
由于本发明的多景深感光器件的高灵敏度,其感光速度也可大幅提高, 从而为在很多应用中取下机械快门提供了可能。 由此, 在本发明的多景深 感光器件中, 还提出一种具有跨层读取功能的全局电子快门的实现, 以期 取代某些应用中可能需要的机械快门。全局电子快门作用是在一瞬间,将感 光象素里的电荷或电压值拷贝到不感光的读取象素里去, 以便读取电路从 容的读出。
结合景深扩展全局电子快门的实现,将二者集成在一个感光器件上时, 一个不需要自动对焦和机械快门的高性能、 高速度、 高象素感光系统, 就 可以芯片的方式来实现, 大大地降低系统的尺寸、 复杂度、 功耗和成本, 为很多新的应用提供了可能。
这种带有全局电子快门的或多景深的感光器件,可为感光系统省去机 械快门或者省去自动对焦系统(或者降低对自动对焦系统的要求), 并在不 加快感光器件时钟的情况下, 实现高速的电子快门或清晰成像。
在极大地简化了感光系统的机械复杂度的要求的同时, 本发明采用两层或多层的布局,结合先进的两层或多层互补或正交色彩图案排列的方法, 能够最大化的利用入射光子的能量, 不用或只用少许的彩色滤光膜, 从而达到或接近到达光电转换效率的理论上限, 并在完整地重建彩色的同时, 获得其它光谱的图像, 包括紫外线图像, 近红外图像, 和远红外图像。
当感光象素层与读取电路分层后, 读取电路层的读取电路和处理计算 可以做得非常精细和复杂,为单芯片感光系统的制作, 提供了巨大的便利。
这种多景深感光器件可同时获得众多的彩色信号和其它光谱信号 , 采 用本人之前发明的先进的采样和以电荷合并和色彩变换为显著特征的子采 样电路和方法, 能够大幅地降低感光器件和系统的复杂度,从而为各种应 用提供巨大的方便和崇高的性能。
这种多景深感光器件, 可用于正面感光, 背面感光, 或双向感光。 通过精细布置各层感光器件的象素感应光谱段和各层色彩图案的合理布局, 可以产生各种优选的多光谱感光器件, 如高灵敏度彩色感光器件, 高灵敏度彩色和红外感光器件, 无杂色 (杂色由插值引起的) 的高灵敏度彩色或多光谱感光器件等等。
采用主动象素和被动象素读取相结合的手段, 可以获得超低功耗感光 器件, 超高动态范围感光器件。
以上内容是结合具体的实施方式对本发明所作的进一步详细说明, 不能认定本发明的具体实施只局限于这些说明。 对于本发明所属技术领域的普通技术人员来说, 在不脱离本发明构思的前提下, 还可以做出若干简单推演或替换, 都应当视为属于本发明的保护范围。

Claims

权 利 要 求 书
1. 一种多景深感光器件, 其特征在于, 包括至少两个可感应到光源的 感光象素层,至少两个所述感光象素层之间按预设距离间隔布置, 使 得来自距所述感光器件特定距离的镜头的不同光信号被聚焦到不同 的所述感光象素层。
2. 如权利要求 1 所述的感光器件, 其特征在于, 所述感光象素层包括 化学镀膜感光象素层和半导体感光象素层中的至少一者。
3. 如权利要求 2 所述的感光器件, 其特征在于, 所述化学镀膜感光象素层包括量子点感光象素。
4. 如权利要求 2 所述的感光器件, 其特征在于, 所述半导体感光象素 层包括 CMOS光敏二极管、 CMOS感光门、 CCD光敏二极管、 CCD感光 门、 和具有双向电荷转移功能的 CMOS和 CCD感光二极管和感光门。
5. 如权利要求 1 -4 任一所述的感光器件, 其特征在于, 所述不同光信 号, 包括不同距离的光信号, 或者不同波长的光信号。
6. 如权利要求 5 所述的感光器件, 其特征在于, 波长更短的光信号被 聚焦到离镜头更近的感光象素层。
7. 如权利要求 6 所述的感光器件, 其特征在于, 所述感光象素层为两 层, 紫色光、 兰色光、 绿色光、 或青色光被聚焦到离镜头较近的感 光象素层, 绿色光、 红色光、 黄色光、 可见光、 或红外光被聚焦到 离镜头较远的感光象素层。
8. 如权利要求 6 所述的感光器件, 其特征在于, 所述感光象素层为三 层, 紫外光、 兰色光、 或青色光被聚焦到离镜头最近的感光象素层; 兰色光、 绿色光、 红色光、 或黄色光被聚焦到位于中间的感光象素 层; 红色光、 黄色光、 可见光、 或红外光被聚焦到离镜头最远的感 光象素层。
9. 如权利要求 5 所述的感光器件, 其特征在于, 更远距离的光信号被 聚焦到离镜头更近的感光象素层。
10. 如权利要求 9 所述的感光器件, 其特征在于, 所述感光象素层为两 层, 无穷远处的光信号被聚焦到离镜头较近的感光象素层, 感兴趣 最短距离的光信号被聚焦到离镜头较远的感光象素层。
11. 如权利要求 10所述的感光器件, 其特征在于, 无穷远处的紫色光、 兰色光、 绿色光、 或青色光被聚焦到离镜头较近的感光象素层, 感 兴趣最短距离的绿色光、 红色光、 黄色光、 可见光、 或红外光被聚 焦到离镜头较远的感光象素层。
12. 如权利要求 9 所述的感光器件, 其特征在于, 所述感光象素层为三 层, 无穷远处的光信号被聚焦到离镜头最近的感光象素层, 感兴趣 最短距离的光信号被聚焦到离镜头最远的感光象素层, 无穷远处与 感兴趣最短距离之间的一个中间距离的光信号被聚焦到位于中间的 感光象素层。
1 3. 如权利要求 12所述的感光器件, 其特征在于, 所述感光象素层为三 层, 无穷远处的紫外光、 兰色光、 或青色光被聚焦到离镜头最近的 感光象素层, 感兴趣最短距离的红色光、 黄色光、 可见光、 或红外 光被聚焦到离镜头最远的感光象素层, 无穷远处与感兴趣最短距离 之间的一个中间距离的兰色光、 绿色光、 红色光、 或黄色光被聚焦 到位于中间的感光象素层。
14. 如权利要求 10-13 任一所述的感光器件, 其特征在于, 所述感兴趣最短距离包括 2mm, 5mm, 7mm, 1cm, 2cm, 3cm, 5cm, 7cm, 10cm, 20cm, 30cm, 40cm, 50cm, 60cm, 70cm, 80cm, 100cm, 或 150cm。
15. 如权利要求 1-14任一所述的感光器件, 其特征在于, 至少两个所述 感光象素层之间设置有透光层。
16. 如权利要求 1-15任一所述的感光器件, 其特征在于, 所述感光象素 层中的感光象素为正面感光象素、 背面感光象素、 或双向感光象素。
17. 如权利要求 16所述的感光器件, 其特征在于, 当感光象素为双向感 光象素时, 其感光选向方式为隔离选向、 分时选向、 分区选向、 或 象素选向。
18. 如权利要求 1-17任一所述的感光器件, 其特征在于, 所述感光象素层中的感光象素分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个互补谱段或子谱段; 或者所述化学镀膜感光象素和半导体感光象素分别感应包含紫外线、 可见光、 近红外、 和远红外中的一个正交谱段或子谱段。
19. 如权利要求 18所述的感光器件, 其特征在于, 所述互补谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱;
所述正交谱段或子谱段包含紫外光谱, 兰色光谱, 绿色光谱, 红色光谱, 近红外光谱, 远红外光谱, 青色光谱, 黄色光谱, 白色光谱, 近红外光谱+远红外光谱, 红色光谱+近红外光谱, 红光谱+近红外光谱+远红外光谱, 黄色光谱+近红外光谱, 黄色光谱+近红外光谱+远红外光谱, 可见光谱+近红外光谱+远红外光谱, 紫外光谱+可见光谱, 紫外光谱+可见光谱+近红外光谱, 紫外光谱+可见光谱+近红外光谱+远红外光谱。
20. 如权利要求 1-19任一所述的感光器件, 其特征在于, 每一感光象素 层中的色彩排列包括同一排列、 水平排列、 垂直排列、 对角排列、 广义贝叶排列、 YUV422排列、 横向 YUV422排列、 蜂窝排列、 均布排 列。
21. 如权利要求 1-20任一所述的感光器件, 其特征在于, 至少一个所述 感光象素层中的部分或全部感光象素的正面、 背面或双面设置有滤 光膜, 所述滤光膜的选频特性包括红外截止滤波、 兰色带通、 绿色 带通、 红色带通、 青色带通、 黄色带通、 品红色带通、 或可见光带 通。
22. 如权利要求 1-21任一所述的感光器件, 其特征在于, 所述感光象素 层中的相邻两层各自设有读取电路; 或者所述感光象素层的相邻两 层共用读取电路。
23. 如权利要求 22所述的感光器件, 其特征在于, 所述读取电路为主动 象素读取电路、 被动象素读取电路、 或主动象素与被动象素混合读 取电路。
24. 如权利要求 23所述的感光器件,其特征在于,所述主动象素包括 3T、
4T、 5T或 6T主动象素。
25. 如权利要求 22所述的感光器件, 其特征在于, 所述读取电路的共用 方式包括单层或上下层 4点共享方式、 单层或上下层 6点共享方式、 单层或上下层 8点共享方式、 或单层或上下层任意点共享方式。
26. 如权利要求 22-25 所述的感光器件, 其特征在于, 所述读取电路包括用于对每一感光象素层的象素阵列中的紧邻的同行异列、 异行同列、 或异行异列的象素间进行两两合并采样, 获得第一合并象素的采样数据的第一合并单元; 以及用于对第一合并单元得到的第一合并象素的采样数据进行合并采样以获得第二合并象素的采样数据的第二合并单元。
27. 如权利要求 26所述的感光器件, 其特征在于, 所述读取电路还包括 第三合并单元, 用于对第二合并单元得到的第二合并象素的采样数 据进行合并采样以获得第三合并象素的采样数据。
28. 如权利要求 26或 27所述的感光器件, 其特征在于, 所述第一合并 单元或第二合并单元的象素合并方式为相同或不同色彩象素间的电 荷相加方式或信号平均方式, 其中不同色彩象素间的象素合并方式 遵照色彩空间变换的方式, 以满足色彩重建的要求。
29. 如权利要求 28所述的感光器件, 其特征在于, 所述色彩空间变换包 括 RGB到 CyYeMgX空间的变换、 RGB到 YUV空间的变换、 或 CyYeMgX 到 YUV空间的变换, 其中 X为 R (红)、 G (绿)、 B (兰)中的任一种。
30. The photosensitive device according to claim 29, wherein the charge addition is accomplished by directly connecting pixels in parallel or by transferring their charges simultaneously into a readout capacitor (floating diffusion, FD).
31. The photosensitive device according to any one of claims 26-30, wherein the color-based merged sampling manner of the first merging unit or the second merging unit comprises same-color merging, different-color merging, mixed merging, or merging with selective discarding of redundant colors, and the merged sampling manners adopted by the first merging unit and the second merging unit are not both same-color merging.
32. The photosensitive device according to any one of claims 26-31, wherein the position-based merged sampling manner of the first merging unit or the second merging unit comprises at least one of the following: automatic signal averaging by direct output onto the bus, row skipping or column skipping, and pixel-by-pixel sampling.
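The row-skipping or column-skipping manner of claim 32 amounts to subsampling the pixel array with a stride. A minimal sketch (the step values are illustrative parameters, not values fixed by the claim):

```python
def skip_sample(img, row_step=2, col_step=2):
    """Subsample an array by skipping rows and/or columns at the given strides."""
    return [row[::col_step] for row in img[::row_step]]

grid = [[4 * r + c for c in range(4)] for r in range(4)]  # 4x4 test pattern
print(skip_sample(grid))  # [[0, 2], [8, 10]]
```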
33. The photosensitive device according to any one of claims 27-32, wherein the merged sampling manner of the third merging unit comprises at least one of a color-space conversion manner and a back-end digital image scaling manner.
34. The photosensitive device according to any one of claims 1-33, comprising a global electronic shutter with a cross-layer readout function, the global electronic shutter comprising a plurality of non-photosensitive transfer-and-readout pixels capable of simultaneously transferring and reading out the charge or voltage values of one or more photosensitive pixel layers.
35. The photosensitive device according to claim 34, wherein the plurality of non-photosensitive transfer-and-readout pixels are located in a non-photosensitive pixel transfer-and-readout layer, or are located in the photosensitive pixel layers.
36. The photosensitive device according to claim 35, wherein each photosensitive pixel layer is provided with an immediately adjacent non-photosensitive pixel transfer-and-readout layer.
37. The photosensitive device according to any one of claims 34-36, wherein the non-photosensitive transfer-and-readout pixels are made of semiconductor circuits.
38. A depth-of-field extension method, comprising:
disposing, in a photosensitive device, at least two photosensitive pixel layers capable of sensing a light source, and arranging the at least two photosensitive pixel layers at preset distance intervals, such that different light signals from a lens at a specific distance from the photosensitive device are focused onto different ones of the photosensitive pixel layers.
39. The depth-of-field extension method according to claim 38, wherein one sharp image is obtained from images of different sharpness coming from the different photosensitive pixel layers.
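Claim 39 leaves the fusion rule open. One common focus-stacking heuristic, used here purely as an assumed illustration and not as the claimed method, keeps for each pixel the value from the layer whose neighborhood shows the highest local contrast:

```python
def local_contrast(img, r, c):
    """Absolute discrete Laplacian as a simple sharpness measure (one of many choices)."""
    return abs(4 * img[r][c] - img[r - 1][c] - img[r + 1][c]
               - img[r][c - 1] - img[r][c + 1])

def fuse(layers):
    """Per interior pixel, keep the value from the layer with the highest local contrast."""
    h, w = len(layers[0]), len(layers[0][0])
    out = [row[:] for row in layers[0]]  # borders default to the first layer
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            best = max(layers, key=lambda img: local_contrast(img, r, c))
            out[r][c] = best[r][c]
    return out

sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # high local contrast at the centre
blur = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]   # low local contrast everywhere
print(fuse([blur, sharp])[1][1])  # 9: centre pixel taken from the sharper layer
```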
40. An optical imaging method, comprising:
providing a lens and a photosensitive device comprising at least two photosensitive pixel layers capable of sensing a light source; and placing the photosensitive device at a specific distance from the lens, with the at least two photosensitive pixel layers arranged at preset distance intervals, such that different light signals from the lens are focused onto different ones of the photosensitive pixel layers.
41. An optical imaging system, comprising a lens and a multi-depth-of-field photosensitive device, the multi-depth-of-field photosensitive device being arranged at a specific distance from the lens and comprising at least two photosensitive pixel layers capable of sensing a light source, the at least two photosensitive pixel layers being arranged at preset distance intervals, such that different light signals from the lens are focused onto different ones of the photosensitive pixel layers.
42. The optical imaging system according to claim 41, wherein the different light signals comprise light signals from different distances, or light signals of different wavelengths.
43. The optical imaging system according to claim 42, wherein light signals of shorter wavelengths are focused onto photosensitive pixel layers closer to the lens.
44. The optical imaging system according to claim 42, wherein light signals from farther distances are focused onto photosensitive pixel layers closer to the lens.
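The geometry behind claim 44 follows from the thin-lens equation 1/v = 1/f - 1/u: as the object distance u grows, the image distance v shrinks toward the focal length f, so a farther object comes to focus on a layer closer to the lens. A numeric sketch, with a 50 mm focal length assumed for illustration:

```python
def image_distance(f, u):
    """Thin-lens equation: 1/v = 1/f - 1/u (f = focal length, u = object distance, metres)."""
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.05  # 50 mm lens; an assumed example value
near = image_distance(f, 1.0)   # object 1 m away
far = image_distance(f, 10.0)   # object 10 m away
print(round(near * 1000, 3), round(far * 1000, 3))  # image planes in mm
assert far < near  # the farther object focuses closer to the lens
```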
45. A photosensitive system, comprising the photosensitive device according to any one of claims 1-35.
46. The photosensitive system according to claim 45, wherein the photosensitive system comprises one of: a digital camera, a camera phone, a video camera, a video or photographic surveillance system, an image recognition system, a medical imaging system, a military, fire-fighting, or down-hole imaging system, an automatic tracking system, a stereoscopic imaging system, a machine vision system, an automotive vision or driver-assistance system, an electronic gaming system, a webcam, an infrared or night-vision system, a multi-spectral imaging system, and a computer camera.
PCT/CN2011/076338 2011-06-24 2011-06-24 Multi-depth-of-field light-sensing device, system, depth of field extension method, and optical imaging system WO2012174752A1 (zh)

Priority Applications (10)

Application Number Priority Date Filing Date Title
ES11868080.0T ES2645020T3 (es) 2011-06-24 2011-06-24 Dispositivo fotosensible de múltiples profundidades de escena, sistema del mismo, método de expansión de profundidad de escena, y sistema de obtención de imágenes ópticas
KR1020147000198A KR101572020B1 (ko) 2011-06-24 2011-06-24 다중 피사계 심도 감광소자, 시스템, 피사계 심도 확장방법 및 광학 화상 시스템
HUE11868080A HUE034756T2 (en) 2011-06-24 2011-06-24 Multi-depth light sensor, its system, depth of field enhancement and optical imaging system
PCT/CN2011/076338 WO2012174752A1 (zh) 2011-06-24 2011-06-24 Multi-depth-of-field light-sensing device, system, depth of field extension method, and optical imaging system
RU2014102161A RU2609106C2 (ru) 2011-06-24 2011-06-24 Светочувствительное устройство с множественной глубиной резкости, система, способ расширения глубины резкости и оптическая система формирования изображений
EP11868080.0A EP2725616B1 (en) 2011-06-24 2011-06-24 Multi scene depth photo sensitive device, system thereof, scene depth expanding method, and optical imaging system
US14/128,921 US9369641B2 (en) 2011-06-24 2011-06-24 Multi-depth-of-field light-sensing device, and system and method for use thereof having plural light-sensing pixel layers sense light signals with different wavelength bands from different distances
CA2840267A CA2840267C (en) 2011-06-24 2011-06-24 Multi-depth-of-field light-sensing device, system, depth of field extension method, and optical imaging system
PL11868080T PL2725616T3 (pl) 2011-06-24 2011-06-24 Urządzenie światłoczułe o wieloscenowej przedmiotowej głębi ostrości, jego układ, sposób rozszerzania głębi sceny i system obrazowania optycznego
JP2014516163A JP2014520397A (ja) 2011-06-24 2011-06-24 マルチ被写界深度感光デバイス、システム、被写界深度拡大方法及び光学イメージングシステム

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/076338 WO2012174752A1 (zh) 2011-06-24 2011-06-24 Multi-depth-of-field light-sensing device, system, depth of field extension method, and optical imaging system

Publications (1)

Publication Number Publication Date
WO2012174752A1 true WO2012174752A1 (zh) 2012-12-27

Family

ID=47422011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/076338 WO2012174752A1 (zh) 2011-06-24 2011-06-24 多景深感光器件、系统、景深扩展方法及光学成像系统

Country Status (10)

Country Link
US (1) US9369641B2 (zh)
EP (1) EP2725616B1 (zh)
JP (1) JP2014520397A (zh)
KR (1) KR101572020B1 (zh)
CA (1) CA2840267C (zh)
ES (1) ES2645020T3 (zh)
HU (1) HUE034756T2 (zh)
PL (1) PL2725616T3 (zh)
RU (1) RU2609106C2 (zh)
WO (1) WO2012174752A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2974302A4 (en) * 2013-03-15 2016-11-16 Sionyx Llc THREE-DIMENSIONAL IMAGING WITH STACKED IMAGE ELEMENTS AND ASSOCIATED PROCESSES
US9762830B2 (en) 2013-02-15 2017-09-12 Sionyx, Llc High dynamic range CMOS image sensor having anti-blooming properties and associated methods
US9761739B2 (en) 2010-06-18 2017-09-12 Sionyx, Llc High speed photosensitive devices and associated methods
US9905599B2 (en) 2012-03-22 2018-02-27 Sionyx, Llc Pixel isolation elements, devices and associated methods
US9911781B2 (en) 2009-09-17 2018-03-06 Sionyx, Llc Photosensitive imaging devices and associated methods
US9939251B2 (en) 2013-03-15 2018-04-10 Sionyx, Llc Three dimensional imaging utilizing stacked imager devices and associated methods
US10229951B2 (en) 2010-04-21 2019-03-12 Sionyx, Llc Photosensitive imaging devices and associated methods
US10244188B2 (en) 2011-07-13 2019-03-26 Sionyx, Llc Biometric imaging devices and associated methods
US10269861B2 (en) 2011-06-09 2019-04-23 Sionyx, Llc Process module for increasing the response of backside illuminated photosensitive imagers and associated methods
US10347682B2 (en) 2013-06-29 2019-07-09 Sionyx, Llc Shallow trench textured regions and associated methods
US10361083B2 (en) 2004-09-24 2019-07-23 President And Fellows Of Harvard College Femtosecond laser-induced formation of submicrometer spikes on a semiconductor substrate
US10361232B2 (en) 2009-09-17 2019-07-23 Sionyx, Llc Photosensitive imaging devices and associated methods
US10374109B2 (en) 2001-05-25 2019-08-06 President And Fellows Of Harvard College Silicon-based visible and near-infrared optoelectric devices
US10692904B2 (en) 2015-03-13 2020-06-23 Sony Semiconductor Solutions Corporation Solid-state image sensing device, drive method, and electronic apparatus

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2786760C (en) * 2010-06-01 2018-01-02 Boly Media Communications (Shenzhen) Co., Ltd. Multi-spectrum photosensitive device
US10136107B2 (en) * 2013-11-21 2018-11-20 Semiconductor Components Industries, Llc Imaging systems with visible light sensitive pixels and infrared light sensitive pixels
JP6633268B2 (ja) * 2014-09-03 2020-01-22 Glory Ltd. Sensor module and paper sheet processing apparatus
JP2017112169A (ja) 2015-12-15 2017-06-22 Sony Corp Image sensor, imaging system, and method of manufacturing image sensor
US11506766B2 (en) * 2017-07-27 2022-11-22 Maxell, Ltd. Image-capturing device, image-capturing apparatus and method of acquiring distance image
EP3474328B1 (en) 2017-10-20 2021-09-29 Samsung Electronics Co., Ltd. Combination sensors and electronic devices

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005526458A (ja) * 2002-05-13 2005-09-02 Micron Technology, Inc. Color filter imaging array and method of manufacturing the same
US20060266921A1 (en) * 2005-05-25 2006-11-30 Kang Shin J Image sensor for semiconductor light-sensing device and image processing apparatus using the same
WO2008131313A2 (en) * 2007-04-18 2008-10-30 Invisage Technologies, Inc. Materials systems and methods for optoelectronic devices
CN101740587A (zh) * 2008-11-05 2010-06-16 博立码杰通讯(深圳)有限公司 Multi-spectrum photosensitive device and manufacturing method thereof
CN101345248B (zh) * 2007-07-09 2010-07-14 博立码杰通讯(深圳)有限公司 Multi-spectrum photosensitive device and manufacturing method thereof
CN101807590A (zh) * 2009-02-16 2010-08-18 博立码杰通讯(深圳)有限公司 Multi-spectrum photosensitive device
CN101834974A (zh) * 2009-03-09 2010-09-15 博立码杰通讯(深圳)有限公司 Multi-spectrum photosensitive device and sampling method thereof
CN102244083A (zh) * 2010-05-13 2011-11-16 博立多媒体控股有限公司 Hybrid multi-spectrum photosensitive pixel group, photosensitive device, and photosensitive system
CN102263114A (zh) * 2010-05-24 2011-11-30 博立多媒体控股有限公司 Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5737336A (en) * 1980-08-19 1982-03-01 Asahi Optical Co Ltd Automatic focus detector for camera
JPS61135280A (ja) * 1984-12-06 1986-06-23 Toshiba Corp Three-dimensional image pickup element
JPH10150603A (ja) * 1996-11-15 1998-06-02 Nikon Corp Ultra-compact electronic camera
JP2002232789A (ja) * 2001-02-05 2002-08-16 Univ Tokyo Imaging method and imaging apparatus
US6555854B2 (en) * 2001-04-25 2003-04-29 Umax Data Systems Inc. Charge coupled device with multi-focus lengths
US7593556B2 (en) * 2002-10-29 2009-09-22 National Institute Of Radiological Sciences Sample picture data processing method and sample inspection system and method
JP4578797B2 (ja) * 2003-11-10 2010-11-10 Panasonic Corp Imaging apparatus
JP2005258995A (ja) * 2004-03-15 2005-09-22 Natl Rehabilitation Center For The Disabled Support apparatus for persons with cognitive dysfunction
JP4515796B2 (ja) * 2004-03-17 2010-08-04 Hitachi Kokusai Electric Inc. Imaging apparatus
JP2005268609A (ja) 2004-03-19 2005-09-29 Fuji Photo Film Co Ltd Multilayer stacked multi-pixel imaging element and television camera
JP2006005762A (ja) * 2004-06-18 2006-01-05 Sharp Corp Image input device and portable information terminal provided therewith
JP4911445B2 (ja) 2005-06-29 2012-04-04 Fujifilm Corp Organic-inorganic hybrid photoelectric conversion element
JP4384198B2 (ja) * 2007-04-03 2009-12-16 Sharp Corp Solid-state imaging device, manufacturing method thereof, and electronic information apparatus
JP5365221B2 (ja) * 2009-01-29 2013-12-11 Sony Corp Solid-state imaging device, manufacturing method thereof, and imaging apparatus
TW201119019A (en) * 2009-04-30 2011-06-01 Corning Inc CMOS image sensor on stacked semiconductor-on-insulator substrate and process for making same
JP2011049524A (ja) * 2009-07-27 2011-03-10 Sony Corp Solid-state imaging element and method of manufacturing solid-state imaging element
JP4547462B1 (ja) * 2009-11-16 2010-09-22 Acutelogic Corp Imaging element, imaging element drive device, imaging element drive method, image processing device, program, and imaging device
JP6027832B2 (ja) * 2012-09-21 2016-11-16 Olympus Corp Imaging apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2725616A4

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10374109B2 (en) 2001-05-25 2019-08-06 President And Fellows Of Harvard College Silicon-based visible and near-infrared optoelectric devices
US10361083B2 (en) 2004-09-24 2019-07-23 President And Fellows Of Harvard College Femtosecond laser-induced formation of submicrometer spikes on a semiconductor substrate
US10741399B2 (en) 2004-09-24 2020-08-11 President And Fellows Of Harvard College Femtosecond laser-induced formation of submicrometer spikes on a semiconductor substrate
US10361232B2 (en) 2009-09-17 2019-07-23 Sionyx, Llc Photosensitive imaging devices and associated methods
US9911781B2 (en) 2009-09-17 2018-03-06 Sionyx, Llc Photosensitive imaging devices and associated methods
US10229951B2 (en) 2010-04-21 2019-03-12 Sionyx, Llc Photosensitive imaging devices and associated methods
US9761739B2 (en) 2010-06-18 2017-09-12 Sionyx, Llc High speed photosensitive devices and associated methods
US10505054B2 (en) 2010-06-18 2019-12-10 Sionyx, Llc High speed photosensitive devices and associated methods
US10269861B2 (en) 2011-06-09 2019-04-23 Sionyx, Llc Process module for increasing the response of backside illuminated photosensitive imagers and associated methods
US10244188B2 (en) 2011-07-13 2019-03-26 Sionyx, Llc Biometric imaging devices and associated methods
US10224359B2 (en) 2012-03-22 2019-03-05 Sionyx, Llc Pixel isolation elements, devices and associated methods
US9905599B2 (en) 2012-03-22 2018-02-27 Sionyx, Llc Pixel isolation elements, devices and associated methods
US9762830B2 (en) 2013-02-15 2017-09-12 Sionyx, Llc High dynamic range CMOS image sensor having anti-blooming properties and associated methods
EP2974302A4 (en) * 2013-03-15 2016-11-16 Sionyx Llc THREE-DIMENSIONAL IMAGING WITH STACKED IMAGE ELEMENTS AND ASSOCIATED PROCESSES
US9939251B2 (en) 2013-03-15 2018-04-10 Sionyx, Llc Three dimensional imaging utilizing stacked imager devices and associated methods
US10347682B2 (en) 2013-06-29 2019-07-09 Sionyx, Llc Shallow trench textured regions and associated methods
US11069737B2 (en) 2013-06-29 2021-07-20 Sionyx, Llc Shallow trench textured regions and associated methods
US10692904B2 (en) 2015-03-13 2020-06-23 Sony Semiconductor Solutions Corporation Solid-state image sensing device, drive method, and electronic apparatus

Also Published As

Publication number Publication date
KR101572020B1 (ko) 2015-11-26
JP2014520397A (ja) 2014-08-21
EP2725616A1 (en) 2014-04-30
US20140183337A1 (en) 2014-07-03
RU2609106C2 (ru) 2017-01-30
EP2725616B1 (en) 2017-07-26
CA2840267C (en) 2017-11-14
KR20140041679A (ko) 2014-04-04
CA2840267A1 (en) 2012-12-27
US9369641B2 (en) 2016-06-14
HUE034756T2 (en) 2018-02-28
ES2645020T3 (es) 2017-12-01
PL2725616T3 (pl) 2018-01-31
RU2014102161A (ru) 2015-07-27
EP2725616A4 (en) 2015-03-18

Similar Documents

Publication Publication Date Title
WO2012174752A1 (zh) Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system
WO2012174751A1 (zh) Hybrid multi-spectrum photosensitive pixel group, photosensitive device, and photosensitive system
CA2786760C (en) Multi-spectrum photosensitive device
US10032810B2 (en) Image sensor with dual layer photodiode structure
US8035708B2 (en) Solid-state imaging device with an organic photoelectric conversion film and imaging apparatus
US7916180B2 (en) Simultaneous multiple field of view digital cameras
JP5538553B2 (ja) Solid-state imaging element and imaging apparatus
JP2022027813A (ja) Solid-state imaging element, manufacturing method thereof, and electronic device
US20170201663A1 (en) Devices, methods, and systems for expanded-field-of-view image and video capture
US8208052B2 (en) Image capture device
TW201336302A (zh) Solid-state imaging element and camera system
CN104241310A (zh) CMOS image pixel array having dual micro-lens layers
JP6017311B2 (ja) Multi-spectral photosensitive member and manufacturing method thereof
CN102263114B (zh) Multi-depth-of-field photosensitive device, system, depth-of-field extension method, and optical imaging system
CN102244083B (zh) Hybrid multi-spectrum photosensitive pixel group, photosensitive device, and photosensitive system
WO2015062141A1 (zh) Pixel unit of a 3D CMOS image sensor
KR20220077735A (ko) Image sensor and pixel array of image sensor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11868080; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2014516163; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2840267; Country of ref document: CA)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20147000198; Country of ref document: KR; Kind code of ref document: A)
REEP Request for entry into the european phase (Ref document number: 2011868080; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2014102161; Country of ref document: RU; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 14128921; Country of ref document: US)