EP4000096A1 - Image sensor pixel - Google Patents

Image sensor pixel

Info

Publication number
EP4000096A1
Authority
EP
European Patent Office
Prior art keywords
pixel
image sensor
photodetector
photodetectors
organic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20739697.9A
Other languages
English (en)
French (fr)
Inventor
Camille DUPOIRON
Benjamin BOUTHINON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Isorg SA
Original Assignee
Isorg SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Isorg SA filed Critical Isorg SA
Publication of EP4000096A1

Classifications

    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14625Optical elements or arrangements associated with the device
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14625Optical elements or arrangements associated with the device
    • H01L27/14627Microlenses
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14665Imagers using a photoconductor layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/703SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/705Pixels for depth measurement, e.g. RGBZ
    • HELECTRICITY
    • H10SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10KORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K30/00Organic devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation
    • H10K30/80Constructional details
    • H10K30/87Light-trapping means
    • HELECTRICITY
    • H10SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10KORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K39/00Integrated devices, or assemblies of multiple devices, comprising at least one organic radiation-sensitive element covered by group H10K30/00
    • H10K39/30Devices controlled by radiation
    • H10K39/32Organic image sensors
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/50Photovoltaic [PV] energy
    • Y02E10/549Organic PV cells

Definitions

  • the present application relates to an image sensor or electronic imager.
  • Image sensors are currently used in many fields, in particular in electronic devices. Image sensors are found in particular in human-machine interface and image-taking applications. Areas of use for such image sensors are, for example, smart phones, automobiles, drones, robotics and virtual or augmented reality systems.
  • the same electronic device can have several image sensors of different types.
  • a device can thus comprise, for example, a first color image sensor, a second infrared image sensor, a third image sensor making it possible to evaluate a distance, with respect to the device, of various points of a scene or subject, etc.
  • One embodiment provides for a pixel comprising:
  • One embodiment provides for an image sensor comprising several pixels as described.
  • One embodiment provides for a method of manufacturing such a pixel or such an image sensor, comprising steps consisting in:
  • said organic photodetectors are coplanar.
  • said organic photodetectors are separated from each other by a dielectric
  • each organic photodetector comprises a first electrode, distinct from the first electrodes of the other organic photodetectors, formed on the surface of the CMOS support.
  • each first electrode is coupled, preferably connected, to a read circuit, each read circuit preferably comprising three transistors formed in the CMOS support.
  • said organic photodetectors are suitable for evaluating a distance by time of flight.
  • the pixel or the sensor as described is adapted to operate:
  • in high dynamic range (HDR) imaging mode.
  • each pixel further comprises, under the lens, a color filter allowing electromagnetic waves to pass in a frequency range of the visible spectrum and in the infrared spectrum.
  • the sensor as described is suitable for capturing a color image.
  • each pixel comprises exactly:
  • the first organic photodetector and the second organic photodetector are rectangular in shape and are jointly inscribed in a square.
  • the first organic photodetector is connected to a second electrode
  • the second organic photodetector is connected to a third electrode.
  • One embodiment provides for a sensor in which:
  • the second electrode is common to all the first organic photodetectors of the pixels of the sensor.
  • the third electrode is common to all the second organic photodetectors of the pixels of the sensor.
  • FIG. 1 is an exploded perspective view, schematic and partial, of an embodiment of an image sensor
  • FIG. 2 is a top view, schematic and partial, of the image sensor of FIG. 1;
  • FIG. 3 is an electrical diagram of an embodiment of circuits for reading two pixels of the image sensor of FIGS. 1 and 2;
  • FIG. 4 is a timing diagram of signals of an example of operation of the image sensor having the read circuits of FIG. 3;
  • Figure 5 is a sectional view, schematic and partial, of a step of an embodiment of a method for producing the image sensor of Figures 1 and 2;
  • Figure 6 is a sectional view, schematic and partial, of another step of the embodiment of the method of making the image sensor of Figures 1 and 2;
  • FIG. 7 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor of FIGS. 1 and 2;
  • FIG. 8 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor of FIGS. 1 and 2;
  • FIG. 9 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor of FIGS. 1 and 2;
  • FIG. 10 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor of FIGS. 1 and 2;
  • Figure 11 is a sectional view, schematic and partial, of a variant of the embodiment of the method of making the image sensor of Figures 1 and 2;
  • FIG. 12 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor of FIGS. 1 and 2;
  • Figure 13 is a sectional view, schematic and partial, of yet another step of the embodiment of the method of making the image sensor of Figures 1 and 2;
  • Figure 14 is a sectional view along the plane AA, schematic and partial, of the image sensor of Figures 1 and 2;
  • Figure 15 is a sectional view along the plane BB, schematic and partial, of the image sensor of Figures 1 and 2;
  • Figure 16 is a sectional view, schematic and partial, of another embodiment of an image sensor.
  • binary signal is used to mean a signal which alternates between a first constant state, for example a low state, denoted "0", and a second constant state, for example a high state, denoted "1".
  • the high and low states of different binary signals of the same electronic circuit can be different.
  • the binary signals may correspond to voltages or currents which may not be perfectly constant in the high or low state.
  • "insulator" and "conductor" mean "electrically insulating" and "electrically conductive" respectively.
  • the transmittance of a layer to a radiation corresponds to the ratio between the intensity of the radiation leaving the layer and the intensity of the radiation entering the layer, the rays of the incoming radiation being perpendicular to the layer.
  • a layer or a film is said to be opaque to radiation when the transmittance of the radiation through the layer or the film is less than 10%.
  • a layer or a film is said to be transparent to radiation when the transmittance of the radiation through the layer or the film is greater than 10%.
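  • The transmittance, opacity and transparency criteria above lend themselves to a direct numerical check. The short Python sketch below simply restates those definitions; the function names and example intensities are illustrative and not part of the application.

```python
def transmittance(intensity_out: float, intensity_in: float) -> float:
    """Ratio of the radiation intensity leaving the layer to the intensity
    entering it, the incoming rays being perpendicular to the layer."""
    return intensity_out / intensity_in


def classify_layer(intensity_out: float, intensity_in: float) -> str:
    """A layer is 'opaque' below 10% transmittance and 'transparent' above."""
    return "opaque" if transmittance(intensity_out, intensity_in) < 0.10 else "transparent"


# Hypothetical example: 4 intensity units leave the layer for every 100 entering it.
print(classify_layer(4.0, 100.0))   # -> opaque  (transmittance = 0.04)
print(classify_layer(55.0, 100.0))  # -> transparent
```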
  • electromagnetic radiation whose wavelength is between 400 nm and 700 nm is called visible light, and electromagnetic radiation whose wavelength is between 700 nm and 1 mm is called infrared radiation.
  • within infrared radiation, one distinguishes in particular near infrared radiation, whose wavelength is between 700 nm and 1.7 µm.
  • One pixel of an image corresponds to the unitary element of the image captured by an image sensor.
  • the optoelectronic device is a color image sensor, it generally comprises, for each pixel of the color image to be acquired, at least three components. These three components each acquire light radiation substantially in a single color, i.e. in a range of wavelengths less than 130 nm wide (e.g. red, green, and blue) . Each component can in particular comprise at least one photodetector.
  • FIG. 1 is an exploded perspective view, schematic and partial, of an embodiment of an image sensor 1.
  • This image sensor 1 comprises a matrix of coplanar pixels. To simplify, only four pixels 10, 12, 14 and 16 of the image sensor 1 have been shown in FIG. 1, it being understood that, in practice, the image sensor 1 may have more pixels.
  • the image sensor 1 comprises, for example, several million or even several tens of millions of pixels.
  • the pixels 10, 12, 14 and 16 are located on the surface of a CMOS support 3, for example a piece of silicon wafer on and inside which integrated circuits (not shown ) were produced using CMOS (Complementary Metal Oxide Semiconductor) technology.
  • These integrated circuits form, in this example, a matrix of read circuits associated with the pixels 10, 12, 14 and 16 of the image sensor 1.
  • a "read circuit" is understood to mean a set of read, addressing and control transistors associated with each pixel.
  • each pixel comprises a first photodetector, identified by the suffix "A”, and a second photodetector, identified by the suffix "B". More precisely, in the example of figure 1:
  • pixel 10 includes a first photodetector 10A and a second photodetector 10B;
  • pixel 12 includes a first photodetector 12A and a second photodetector 12B;
  • pixel 14 includes a first photodetector 14A and a second photodetector 14B;
  • pixel 16 includes a first photodetector 16A and a second photodetector 16B.
  • photodetectors 10A, 10B, 12A, 12B, 14A, 14B, 16A and 16B can correspond to organic photodiodes (Organic Photodiode - OPD) or to organic photoresistors. In the remainder of the description, it is considered that the photodetectors of the pixels of the image sensor 1 correspond to organic photodiodes.
  • each photodetector comprises an active layer included or "sandwiched" between two electrodes. More precisely, in the example of FIG. 1 where only side faces of organic photodetectors 10A, 10B, 14A, 14B and 16B are visible:
  • the first photodetector 10A consists of an active layer 100A comprised between a first electrode 102A and a second electrode 104A;
  • the second photodetector 10B consists of an active layer 100B lying between a first electrode 102B and a second electrode 104B;
  • the first photodetector 14A consists of an active layer 140A included between a first electrode 142A and a second electrode 144A;
  • the second photodetector 14B consists of an active layer 140B included between a first electrode 142B and a second electrode 144B;
  • the second photodetector 16B consists of an active layer 160B included between a first electrode 162B and a second electrode 164B.
  • the first photodetector 12A consists of an active layer 120A (not visible in FIG. 1) included between a first electrode 122A (not visible in FIG. 1) and a second electrode 124A (not visible in FIG. 1);
  • the second photodetector 12B consists of an active layer 120B (not visible in FIG. 1) lying between a first electrode 122B (not visible in FIG. 1) and a second electrode 124B (not visible in FIG. 1); and
  • the first photodetector 16A consists of an active layer 160A (not visible in FIG. 1) included between a first electrode 162A (not visible in FIG. 1) and a second electrode 164A (not visible in FIG. 1).
  • first electrodes will also be designated by the expression “lower electrodes” while the second electrodes will also be designated by the expression “upper electrodes”.
  • the upper electrode of each organic photodetector constitutes an anode electrode, while the lower electrode of each organic photodetector constitutes a cathode electrode.
  • each photodetector of each pixel of the image sensor 1 is individually coupled, preferably connected, to a read circuit (not shown) of the CMOS support 3. Each photodetector of the image sensor 1 is therefore addressed individually by its lower electrode.
  • each photodetector has a lower electrode distinct from the lower electrodes of all the other photodetectors.
  • each photodetector of a pixel has a separate lower electrode:
  • the upper electrodes 104A, 124A, 144A and 164A, belonging respectively to the first photodetectors 10A, 12A, 14A and 16A, are interconnected or form a common first upper electrode;
  • the upper electrodes 104B, 124B, 144B and 164B, belonging respectively to the second photodetectors 10B, 12B, 14B and 16B, are interconnected or form a second common upper electrode, distinct from the first common upper electrode.
  • each pixel comprises a lens 18, also called microlens 18 because of its dimensions.
  • the pixels 10, 12, 14 and 16 each comprise a lens 18.
  • Each lens 18 thus covers all or part of the first and second photodetectors of each pixel of the image sensor 1. More precisely, the lens 18 physically covers the upper electrodes of the first and second photodetectors of the pixel.
  • Figure 2 is a top view, schematic and partial, of the image sensor 1 of Figure 1.
  • the first and second photodetectors are represented by rectangles and the microlenses by circles. More precisely, in Figure 2:
  • a microlens 18 covers the upper electrode 104A, respectively 104B, of the photodetector 10A, respectively 10B, of the pixel 10;
  • a microlens 18 covers the upper electrode 124A, respectively 124B, of the photodetector 12A, respectively 12B, of the pixel 12;
  • a microlens 18 covers the upper electrode 144A, respectively 144B, of the photodetector 14A, respectively 14B, of the pixel 14;
  • a microlens 18 covers the upper electrode 164A, respectively 164B, of the photodetector 16A, respectively 16B, of the pixel 16.
  • the lenses 18 completely cover the respective electrodes of the pixels with which they are associated.
  • the pixels are substantially of square shape, preferably of square shape. All the pixels of the image sensor 1 preferably have identical dimensions, except for manufacturing dispersions.
  • the square formed by each pixel of the image sensor 1, seen from above in FIG. 2, has a side between approximately 0.8 µm and 10 µm, preferably between approximately 0.8 µm and 3 µm, even more preferably between 0.8 µm and 3 µm.
  • the first photodetector and the second photodetector belonging to the same pixel are both rectangular in shape. These photodetectors have substantially the same dimensions and are jointly inscribed in the square formed by the pixel to which they belong.
  • the rectangle formed by each photodetector of each pixel of the image sensor 1 has a length substantially equal to the side of the square formed by each pixel and a width substantially equal to half of the side of the square formed by each pixel. A space is however left between the first and the second photodetector of each pixel, so that their respective lower electrodes are separate.
  • each microlens 18 has, viewed from above in FIG. 2, a diameter substantially equal, preferably equal, to the side of the square formed by the pixel to which it belongs.
  • each pixel comprises a microlens 18.
  • Each microlens 18 of the image sensor 1 is preferably centered with respect to the square formed by the photodetectors that it covers.
  • each microlens 18 can be replaced by another type of micrometric-sized optical element, in particular a micrometric-sized Fresnel lens, a micrometric-sized gradient index lens or a diffraction grating of micrometric size.
  • the microlenses 18 are convergent lenses each having a focal length f of between 1 µm and 100 µm, preferably between 1 µm and 10 µm. According to one embodiment, all the microlenses 18 are substantially identical.
  • the microlenses 18 can be made of silica, polymethyl methacrylate (PMMA), a positive photosensitive resin, polyethylene terephthalate (PET), polyethylene naphthalate (PEN), cycloolefin copolymer (COP), polydimethylsiloxane (PDMS)/silicone, or epoxy resin.
  • the microlenses 18 can be formed by reflowing (creep of) blocks of a photosensitive resin.
  • Microlenses 18 can further be formed by molding on a layer of PET, PEN, COP, PDMS / silicone or epoxy resin.
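  • The pixel geometry described above (a square pixel holding two rectangular photodetectors that jointly fill the square, under a microlens whose diameter is substantially equal to the pixel side) can be illustrated with a small sketch. The pixel pitch and the gap separating the two lower electrodes used below are assumed example values, not figures from the application.

```python
def pixel_geometry(pitch_um: float, gap_um: float) -> dict:
    """Approximate in-plane dimensions, in micrometres, of one square pixel
    laid out as described above: two rectangular photodetectors as long as
    the pixel side and roughly half as wide, separated by a small gap so
    that their lower electrodes remain distinct, under a microlens whose
    diameter is substantially equal to the pixel side."""
    return {
        "photodetector_length_um": pitch_um,
        "photodetector_width_um": (pitch_um - gap_um) / 2.0,
        "microlens_diameter_um": pitch_um,
    }


# Hypothetical 1.5 um pixel with a 0.1 um gap between the two photodetectors.
print(pixel_geometry(pitch_um=1.5, gap_um=0.1))
```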
  • FIG. 3 is an electric diagram of an embodiment of circuits for reading two pixels of the image sensor of FIGS. 1 and 2.
  • each photodetector is associated with a read circuit. More precisely, in figure 3:
  • the first photodetector 10A of pixel 10 is associated with a first read circuit 20A;
  • the second photodetector 10B of pixel 10 is associated with a second read circuit 20B;
  • the first photodetector 12A of pixel 12 is associated with a first read circuit 22A;
  • the second photodetector 12B of pixel 12 is associated with a second read circuit 22B.
  • the first read circuit 20A of the first photodetector 10A of the pixel 10 and the second read circuit 20B of the second photodetector 10B of the pixel 10 jointly form a read circuit 20 of the pixel 10.
  • the first circuit of reading 22A of the first photodetector 12A of pixel 12 and the second reading circuit 22B of the second photodetector 12B of pixel 12 jointly form a reading circuit 22 of pixel 12.
  • each read circuit 20A, 20B, 22A, 22B comprises three MOS transistors. Such a circuit is commonly designated, with its photodetector, by the expression “3T sensor”.
  • each read circuit 20A, 22A associated with a first photodetector comprises a MOS transistor 200 in a source-follower configuration, in series with a MOS selection transistor 202, between two terminals 204 and 206A.
  • each read circuit 20B, 22B associated with a second photodetector comprises a MOS transistor 200 in a source-follower configuration, in series with a MOS selection transistor 202, between two terminals 204 and 206B.
  • Each terminal 204 is connected to a source of a high reference potential, denoted Vpix, in the case where the transistors of the read circuits are N-channel MOS transistors.
  • Each terminal 204 is connected to a source of a low reference potential, for example ground, in the case where the transistors of the read circuits are P-channel MOS transistors.
  • Each terminal 206A is connected to a first conductive track 208A.
  • the first conductive track 208A can be connected to all the first photodetectors of the same column.
  • the first conductive track 208A is preferably connected to all the first photodetectors of the image sensor 1.
  • each terminal 206B is connected to a second conductive track 208B.
  • the second conductive track 208B can be connected to all the second photodetectors of the same column.
  • the second conductor track 208B is preferably connected to all the second photodetectors of the image sensor 1.
  • the second conductor track 208B is preferably separate from the first conductor track 208A.
  • the first conductive track 208A is connected to a first current source 209A which is not part of the read circuits 20, 22 of the pixels 10, 12 of the image sensor 1.
  • the second conductive track 208B is connected to a second current source 209B which is not part of the read circuits 20, 22 of the pixels 10, 12 of the image sensor 1.
  • the current sources 209A and 209B of the image sensor 1 are external to the pixels and read circuits.
  • the gate of transistor 202 is intended to receive a signal, denoted SEL_R1, for selecting pixel 10 in the case of read circuit 20 of pixel 10.
  • the gate of transistor 202 is intended to receive another signal, denoted SEL_R2, for selecting pixel 12 in the case of read circuit 22 of pixel 12.
  • the gate of transistor 200 associated with the first photodetector 10A of pixel 10 is connected to a node FD_1A;
  • the gate of transistor 200 associated with the second photodetector 10B of pixel 10 is connected to a node FD_1B;
  • the gate of transistor 200 associated with the first photodetector 12A of pixel 12 is connected to a node FD_2A;
  • the gate of transistor 200 associated with the second photodetector 12B of pixel 12 is connected to a node FD_2B.
  • Each node FD_1A, FD_1B, FD_2A, FD_2B is connected, by a reset MOS transistor 210, to a terminal of application of a reset potential Vrst, this potential possibly being identical to the potential Vpix.
  • the gate of transistor 210 is intended to receive a signal RST for resetting the photodetector, making it possible in particular to reset the node FD_1A, FD_1B, FD_2A or FD_2B substantially to the potential Vrst.
  • the node FD_1A is connected to the cathode electrode 102A of the first photodetector 10A of the pixel 10;
  • the node FD_1B is connected to the cathode electrode 102B of the second photodetector 10B of the pixel 10;
  • the node FD_2A is connected to the cathode electrode 122A of the first photodetector 12A of pixel 12; and the node FD_2B is connected to the cathode electrode 122B of the second photodetector 12B of the pixel 12.
  • the anode electrode 104A of the first photodetector 10A of the pixel 10 is connected to a source of a reference potential Vtop_Cl;
  • the anode electrode 104B of the second photodetector 10B of the pixel 10 is connected to a source of a reference potential Vtop_C2;
  • the anode electrode 124A of the first photodetector 12A of the pixel 12 is connected to a source of the reference potential Vtop_Cl;
  • the anode electrode 124B of the second photodetector 12B of the pixel 12 is connected to a source of the reference potential Vtop_C2.
  • the potential Vtop_Cl is applied to the first upper electrode common to all the first photodetectors.
  • the potential Vtop_C2 is, for its part, applied to the second upper electrode common to all the second photodetectors.
  • VFD_1A denotes the voltage present at node FD_1A;
  • VFD_1B denotes the voltage present at node FD_1B;
  • VSEL_R1 denotes the voltage applied to the gate of the transistors 202 of pixel 10, that is to say the voltage applied to the gate of the transistor 202 of the first photodetector 10A and the voltage applied to the gate of the transistor 202 of the second photodetector 10B;
  • VSEL_R2 denotes the voltage applied to the gate of the transistors 202 of pixel 12, that is to say the voltage applied to the gate of the transistor 202 of the first photodetector 12A and the voltage applied to the gate of the transistor 202 of the second photodetector 12B.
  • the application of the voltage VSEL_R1, respectively VSEL_R2, is controlled by the binary signal denoted SEL_R1, respectively SEL_R2.
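  • To make the 3T read circuit described above easier to follow, here is a very simplified behavioural sketch in Python: the reset transistor 210 ties the sense node FD to Vrst, photogenerated charge then lowers the node voltage during integration, and the source follower 200 together with the selection transistor 202 copies the node voltage onto the column track (biased by the external current source 209A or 209B) when the pixel is selected. All numeric values (node capacitance, photocurrent, follower gain) are illustrative assumptions, not figures from the application.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReadCircuit3T:
    """Behavioural model of one 3T read circuit (transistors 200, 202, 210)."""
    v_rst: float = 2.8    # reset potential Vrst, assumed value in volts
    c_fd: float = 2e-15   # sense-node capacitance, assumed value in farads
    v_fd: float = 0.0     # voltage on the sense node FD

    def reset(self) -> None:
        # RST high: the reset transistor 210 conducts and ties FD to Vrst.
        self.v_fd = self.v_rst

    def integrate(self, photocurrent_a: float, duration_s: float) -> None:
        # Photogenerated charge discharges the sense node: dV = I * t / C.
        self.v_fd -= photocurrent_a * duration_s / self.c_fd

    def read(self, selected: bool, follower_gain: float = 0.85) -> Optional[float]:
        # SEL high: the selection transistor 202 conducts and the follower 200
        # drives the column track biased by the external current source.
        return follower_gain * self.v_fd if selected else None


# Hypothetical sequence for one photodetector: reset, integrate for 100 us, read.
rc = ReadCircuit3T()
rc.reset()
rc.integrate(photocurrent_a=50e-15, duration_s=100e-6)
print(rc.read(selected=True))
```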
  • FIG. 4 is a timing diagram of signals of an example of operation of the image sensor 1 having the read circuits of FIG. 3.
  • the timing diagram of FIG. 4 corresponds, more particularly, to an example of operation of the image sensor 1 in “time of flight” mode (Time of Flight - ToF).
  • the pixels of the image sensor 1 are used to evaluate a distance separating them from a subject (object, scene, face, etc.) placed or located facing this image sensor 1.
  • a light pulse is emitted in the direction of the subject by an associated emission system, not described in this text. This light pulse is generally obtained by briefly illuminating the subject with radiation from a source, for example near infrared radiation from a light-emitting diode. This light pulse is then at least partially reflected by the subject, then picked up by the image sensor 1.
  • a time taken by the light pulse to travel back and forth between the source and the subject is then calculated or measured.
  • the image sensor 1 being advantageously located near the source, this duration corresponds approximately to twice the time taken by the light pulse to travel the distance separating the subject from this image sensor 1.
  • the timing diagram of FIG. 4 illustrates an example of the evolution of the binary signals RST and SEL_R1 as well as of the potentials Vtop_Cl, Vtop_C2, VFD_1A and VFD_1B of two photodetectors of the same pixel of the image sensor 1, for example the first photodetector 10A and the second photodetector 10B of pixel 10.
  • FIG. 4 also represents, in dotted lines, the binary signal SEL_R2 of another pixel of image sensor 1, for example pixel 12.
  • the timing diagram of FIG. 4 was established by considering that the MOS transistors of the read circuit 20 of the pixel 10 are N-channel transistors.
  • the signal SEL_R1 is in the low state so that the transistors 202 of the pixel 10 are off (non-conducting).
  • a reset phase is then initiated.
  • the RST signal is kept high so that the reset transistors 210 of pixel 10 are on.
  • the charges accumulated in the photodiodes 10A and 10B are then evacuated to the source of the potential Vrst.
  • the potential Vtop_Cl is, still at the instant t0, at a high level.
  • This high level corresponds to a bias of the first photodetector 10A under a voltage greater than a voltage resulting from the application of a potential called “intrinsic potential” (built-in potential).
  • This intrinsic potential is equivalent to the difference between the anode work function and the cathode work function.
  • the potential Vtop_Cl is set to a low level.
  • This low level corresponds to biasing the first photodetector 10A at a negative voltage, that is to say a voltage lower than 0 V. This thus allows the first photodetector 10A to integrate photogenerated charges.
  • What has been described previously in relation to the biasing of the first photodetector 10A by the potential Vtop_Cl can be transposed to the biasing of the second photodetector 10B by the potential Vtop_C2.
  • a first infrared light pulse (IR light emitted) is emitted toward a scene comprising one or more objects whose distance is to be measured, which makes it possible to acquire a depth map of the scene.
  • This first infrared light pulse has a duration denoted tON.
  • the RST signal is set low, so that the reset transistors 210 of pixel 10 are turned off, and the potential Vtop_C2 is set high.
  • the integration phase of a pixel denotes the phase during which the pixel collects charges under the effect of incident radiation.
  • at an instant subsequent to the instant t1 and separated from this instant t1 by a duration denoted tD, a second infrared light pulse (IR light received) begins to be received, resulting from the reflection of the first infrared light pulse by an object in the scene, or by a point of an object, whose distance with respect to pixel 10 is to be measured.
  • the duration tD is therefore a function of the distance of the object from the sensor 1.
  • a first charge collection phase, denoted CCA, then begins.
  • the first phase of charge collection causes a drop in the level of the potential VFD_1A at the node FD_1A of the read circuit 20A.
  • the first infrared light pulse ceases to be emitted.
  • the potential Vtop_Cl is simultaneously set high, thus marking the end of the first phase of integration, and therefore of the first phase of charge collection.
  • the potential Vtop_C2 is set to a low level.
  • a second integration phase, denoted ITB, and a second charge collection phase, denoted CCB, then begin.
  • the second phase of charge collection causes a drop in the level of the potential VFD_1B at the node FD_1B of the read circuit 20B.
  • the second light pulse ceases to be picked up by the second photodetector 10B of the pixel 10.
  • the second phase of charge collection is therefore completed at this instant t4.
  • the potential Vtop_C2 is set to the high level. This marks the end of the second phase of integration.
  • a reading phase is carried out, denoted RT, during which a measurement of the quantity of charges collected by the photodiodes of the pixels of the image sensor 1 is carried out.
  • the rows of pixels of the image sensor 1 are read sequentially.
  • the signals SEL_R1 and SEL_R2 are successively set to the high state in order to read, in turn, the pixels 10 and 12 of the image sensor 1.
  • a new reset phase (RESET) is started.
  • the RST signal is set high so that the reset transistors 210 of pixel 10 are on.
  • the charges accumulated in the photodiodes 10A and 10B are then evacuated to the source of the potential Vrst.
  • the duration tD which separates the start of the first light pulse emitted from the start of the second light pulse received, is calculated using the following formula:
  • the magnitude denoted ΔVFD_1A corresponds to the drop in potential VFD_1A during the integration phase of the first photodetector 10A.
  • the magnitude denoted ΔVFD_1B corresponds to the drop in potential VFD_1B during the integration phase of the second photodetector 10B.
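  • The formula referred to above is not reproduced in this text. Under the usual two-tap indirect time-of-flight assumption (the charge collected in each of the two integration windows, and hence each potential drop ΔVFD, is proportional to the part of the reflected pulse falling in that window, the emitted pulse lasting tON), the duration tD and the corresponding distance can be estimated as in the sketch below. This is a reconstruction of a standard relation, not necessarily the exact formula of the application.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0


def estimate_time_of_flight(delta_vfd_1a: float, delta_vfd_1b: float, t_on_s: float) -> float:
    """Assumed two-tap indirect ToF relation (see lead-in): the fraction of the
    reflected pulse collected during the second window gives tD as a fraction
    of the emitted pulse width tON."""
    return t_on_s * delta_vfd_1b / (delta_vfd_1a + delta_vfd_1b)


def estimate_distance(t_d_s: float) -> float:
    """The pulse travels back and forth, so the round trip lasts about twice
    the one-way time: distance = c * tD / 2 (as described above)."""
    return SPEED_OF_LIGHT_M_S * t_d_s / 2.0


# Hypothetical measurement: equal potential drops on both nodes and a 30 ns
# pulse give tD = 15 ns, i.e. a distance of about 2.25 m.
t_d = estimate_time_of_flight(delta_vfd_1a=0.2, delta_vfd_1b=0.2, t_on_s=30e-9)
print(estimate_distance(t_d))
```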
  • a new distance evaluation is started by the emission of a second light pulse.
  • This new distance evaluation includes times t2' and t4' similar to times t2 and t4, respectively.
  • the operation of the image sensor 1 has been illustrated above in relation to an example of operation in time-of-flight mode, in which the photodetectors of the same pixel are driven in a non-synchronized manner.
  • An advantage of the image sensor 1 is that it can also operate in other modes, in particular modes in which the photodetectors of the same pixel are controlled in a synchronized manner.
  • the image sensor 1 can, for example, be driven in global shutter mode, that is to say that this image sensor 1 can also implement an image acquisition method in which the beginnings and ends of the pixel integration phases are simultaneous.
  • the image sensor 1 is therefore able to operate alternately according to different modes.
  • the image sensor 1 can, for example, operate alternately in time-of-flight mode and in global shutter imaging mode.
  • the reading circuits of the photodetectors of the image sensor 1 are driven alternately in other operating modes, for example modes where the image sensor 1 is adapted to operate:
  • in high dynamic range (HDR) imaging mode.
  • the image sensor 1 can thus be used to acquire different types of images without loss of resolution, because the different imaging modes that can be implemented by this image sensor 1 use the same number of pixels.
  • FIGS. 5 to 13 illustrate successive steps of an embodiment of a method for producing the image sensor 1 of Figures 1 and 2.
  • FIGS. 5 to 13 illustrate the production of a single pixel of the image sensor 1, for example the pixel 12 of the image sensor 1.
  • this method can be extended to the production of any number of pixels of an image sensor similar to the image sensor 1.
  • FIG. 5 is a sectional view, schematic and partial, of a step of an embodiment of a method for producing the image sensor 1 of FIGS. 1 and 2.
  • CMOS support 3 comprising in particular the read circuits (not shown) of the pixel 12.
  • This CMOS support 3 also comprises, on the upper surface 30, contact recovery elements 32A and 32B.
  • These contact recovery elements 32A and 32B have, seen in section in Figure 5, a "T" shape including:
  • a horizontal part extends on the upper surface 30 of the CMOS support 3;
  • a vertical part extends downwards from the upper surface 30 of the CMOS support 3 to come into contact with lower metallization levels (not shown) of the CMOS support 3 coupled or connected to the read circuits (not shown).
  • the contact recovery elements 32A and 32B are for example made from conductive tracks formed on the upper surface 30 of the CMOS support 3 (horizontal parts of the contact recovery elements 32A and 32B) and conductive vias (vertical parts of the contact recovery elements 32A and 32B) contacting these conductive tracks.
  • the conductive tracks and the conductive vias can be made of a metallic material, for example silver (Ag), aluminum (Al), gold (Au), copper (Cu), nickel (Ni), titanium (Ti) and chromium (Cr), or titanium nitride (TiN).
  • the conductive tracks and the conductive vias can have a monolayer or multilayer structure.
  • these conductive tracks may consist of a stack of conductive layers separated by insulating layers. The vias then pass through these insulating layers.
  • the conductive layers may be of a metallic material from the above list and the insulating layers may be of silicon nitride (SiN) or of silicon oxide (SiO2).
  • the CMOS support 3 is cleaned in order to remove any impurities found on its surface 30.
  • This cleaning is carried out, for example, by plasma. The cleaning thus makes it possible to obtain satisfactory cleanliness of the CMOS support 3 before carrying out a series of successive deposits, detailed in relation to the figures below.
  • FIG. 6 is a sectional view, schematic and partial, of another step of the embodiment of the method for producing the image sensor 1 of FIGS. 1 and 2 from the structure as described in relation to figure 5.
  • a deposition, on the surface of the contact recovery elements 32A and 32B, of an electron injector material is then carried out. Preferably, a material is deposited which attaches or binds selectively to the surface of the contact recovery elements 32A and 32B to constitute a self-assembled monolayer (SAM).
  • one proceeds to a full plate deposition of an electron injector material having a sufficiently low lateral conductivity so as not to create conduction paths between two neighboring contact recovery elements.
  • the lower electrodes 122A and 122B constitute electron injection layers (Electron Injection Layer - EIL) of the photodetectors 12A and 12B, respectively. These lower electrodes 122A and 122B are also called cathodes of photodetectors 12A and 12B.
  • the lower electrodes 122A and 122B are preferably produced by spin coating or by dip coating.
  • the material making up the lower electrodes 122A and 122B is chosen from the group comprising:
  • a metal or a metal alloy for example of silver (Ag), aluminum (Al), lead (Pb), palladium (Pd), gold (Au), copper (Cu), nickel (Ni), tungsten ( W), molybdenum (Mo), titanium (Ti) or chromium (Cr) or an alloy of magnesium and silver (MgAg);
  • a transparent conductive oxide (Transparent Conductive Oxide - TCO), for example tin-doped indium oxide (Indium Tin Oxide - ITO), aluminum zinc oxide (Aluminum Zinc Oxide - AZO) or gallium zinc oxide (Gallium Zinc Oxide - GZO);
  • an ITO/Ag/ITO multilayer, an ITO/Mo/ITO multilayer, an AZO/Ag/AZO multilayer or a ZnO/Ag/ZnO multilayer;
  • polyethyleneimine (PEI) or ethoxylated polyethyleneimine (PEIE).
  • the lower electrodes 122A and 122B can have a single-layer or multi-layer structure.
  • Figure 7 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor 1 of Figures 1 and 2 from the structure such as described in relation to figure 6.
  • a non-selective deposition of a first layer 120 is carried out on the side of the upper surface 30 of the CMOS support 3.
  • This deposition is referred to as a "full plate" deposition because it covers the entire upper surface 30 of the CMOS support 3 as well as the free surfaces of the contact recovery elements 32A, 32B and of the lower electrodes 122A and 122B.
  • the deposition of the first layer 120 is preferably carried out by spin coating.
  • the first layer 120 is intended to form the future active layers 120A and 120B of the photodetectors 12A and 12B of the pixel 12.
  • the active layers 120A and 120B of the photodetectors 12A and 12B of the pixel 12 have , preferably, a composition and a thickness identical to those of the first layer 120.
  • the first layer 120 can comprise small molecules, oligomers or polymers. They may be organic or inorganic materials, in particular comprising quantum dots.
  • the first layer 120 may comprise an ambipolar semiconductor material, or a mixture of an N-type semiconductor material and a P-type semiconductor material, for example in the form of superimposed layers or of an intimate mixture at the nanometric scale, so as to form a bulk heterojunction.
  • the thickness of the first layer 120 can be between 50 nm and 2 µm, for example of the order of 300 nm.
  • Examples of P-type semiconductor polymers suitable for making the layer 120 are:
  • poly(3-hexylthiophene) (P3HT);
  • PCDTBT (a copolymer comprising 4,7-di-2-thienyl-2',1',3'-benzothiadiazole units).
  • Examples of N-type semiconductor materials suitable for producing the layer 120 are fullerenes, in particular C60, methyl [6,6]-phenyl-C61-butanoate ([60]PCBM), methyl [6,6]-phenyl-C71-butanoate ([70]PCBM), perylene diimide, zinc oxide (ZnO) or nanocrystals allowing the formation of quantum dots.
  • FIG. 8 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor 1 of FIGS. 1 and 2 from the structure such as described in relation to figure 7.
  • a non-selective deposition of a second layer 124 is carried out on the side of the upper surface 30 of the CMOS support 3.
  • This deposition is referred to as a "full plate" deposition because it covers the entire structure.
  • the deposition of the second layer 124 is preferably carried out by spin coating.
  • the second layer 124 is intended to form the future upper electrodes 124A and 124B of the photodetectors 12A and 12B of the pixel 12.
  • the upper electrodes 124A and 124B of the photodetectors 12A and 12B of the pixel 12 have , preferably, a composition and a thickness identical to those of the second layer 124.
  • the second layer 124 is at least partially transparent to the light radiation that it receives.
  • the second layer 124 may be of a conductive and transparent material, for example of conductive and transparent oxide (Transparent Conductive Oxide - TCO), of carbon nanotubes, of graphene, of a conductive polymer, of a metal, or of a mixture or an alloy of at least two of these compounds.
  • the second layer 124 can have a single-layer or a multi-layer structure.
  • Examples of TCOs suitable for producing the second layer 124 are indium tin oxide (Indium Tin Oxide - ITO), aluminum zinc oxide (Aluminum Zinc Oxide - AZO), gallium zinc oxide (Gallium Zinc Oxide - GZO), titanium nitride (TiN), molybdenum oxide (MoO3) and tungsten oxide (WO3).
  • Examples of conductive polymers suitable for making the second layer 124 are the polymer known under the name PEDOT:PSS, which is a mixture of poly(3,4)-ethylenedioxythiophene and sodium polystyrene sulfonate, and polyaniline, also called PAni.
  • Examples of metals suitable for making the second layer 124 are silver, aluminum, gold, copper, nickel, titanium and chromium.
  • An example of a multilayer structure suitable for making the second layer 124 is a multilayer structure of AZO and silver of the AZO / Ag / AZO type.
  • the thickness of the second layer 124 can be between 10 nm and 5 µm, for example of the order of 30 nm. In the case where the second layer 124 is metallic, the thickness of this second layer 124 is less than or equal to 20 nm, preferably less than or equal to 10 nm.
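  • The thickness ranges stated above for the active layer 120 and the second layer 124 can be gathered into a small consistency check. The ranges below reproduce the figures of this description; the function itself and its return format are merely illustrative.

```python
def check_stack_thicknesses(active_nm: float, top_electrode_nm: float,
                            top_electrode_is_metal: bool) -> list:
    """Compare layer thicknesses with the ranges given in this description:
    active layer 120 between 50 nm and 2 um (e.g. about 300 nm); second layer
    124 between 10 nm and 5 um (e.g. about 30 nm), and at most 20 nm
    (preferably at most 10 nm) when it is metallic. Returns a list of issues."""
    issues = []
    if not 50 <= active_nm <= 2000:
        issues.append("active layer 120 outside 50 nm to 2 um")
    if not 10 <= top_electrode_nm <= 5000:
        issues.append("second layer 124 outside 10 nm to 5 um")
    if top_electrode_is_metal and top_electrode_nm > 20:
        issues.append("metallic second layer 124 thicker than 20 nm")
    return issues


# Hypothetical stack close to the example values of the description.
print(check_stack_thicknesses(active_nm=300, top_electrode_nm=30,
                              top_electrode_is_metal=False))  # -> []
```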
  • FIG. 9 is a sectional view, schematic and partial, of yet another step of the mode of implementation of the method for producing the image sensor of FIGS. 1 and 2 from the structure as described. in relation to figure 8.
  • three vertical openings 340, 342 and 344 are made through the first layer 120 and the second layer 124 down to the upper surface 30 of the CMOS support 3. These openings are preferably made by etching after masking of the areas to be protected, for example by deposition of a photosensitive resin, exposure through a mask, then dry etching, for example reactive ion etching, or wet etching, for example chemical etching.
  • the deposition of the etching mask is carried out locally, for example by screen printing, heliography, nano-imprinting or flexography, and the etching is carried out by dry etching, for example reactive ion etching, or by wet etching, for example chemical etching.
  • the vertical openings 340 and 342 are located on either side of the first contact recovery element 32A (respectively to the left and to the right of the first contact recovery element 32A);
  • the vertical openings 342 and 344 are located on either side of the second contact recovery element 32B (respectively to the left and to the right of the second contact recovery element 32B).
  • These vertical openings 340, 342 and 344 are intended to separate photodetectors belonging to the same row of the image sensor 1.
  • the openings 340, 342 and 344 are, for example, produced by photolithography.
  • the openings 340, 342 and 344 are produced by reactive ionic etching or by chemical etching using a suitable solvent.
  • the active layer 120A of the first photodetector 12A of the pixel 12 entirely covers the free faces of the first contact recovery element 32A and of the lower electrode 122A;
  • the active layer 120B of the second photodetector 12B of the pixel 12 entirely covers the free faces of the second contact recovery element 32B and of the lower electrode 122B;
  • the opening 340 is interposed between, on the one hand, the active layer 120A and the upper electrode 124A of the first photodetector 12A of the pixel 12 and, on the other hand, an active layer and an upper electrode of a second photodetector belonging to to a neighboring pixel (not shown);
  • the opening 342 is interposed between, on the one hand, the active layer 120A and the upper electrode 124A of the first photodetector 12A of the pixel 12 and, on the other hand, the active layer 120B and the upper electrode 124B of the second photodetector 12B of pixel 12;
  • the opening 344 is interposed between, on the one hand, the active layer 120B and the upper electrode 124B of the second photodetector 12B of the pixel 12 and, on the other hand, the active layer 160A and the upper electrode 164A of the first photodetector 16A of pixel 16 (partially visible in FIG. 9).
  • the upper electrodes 124A and 124B constitute hole injection layers (Hole Injection Layer - HIL) of the photodetectors 12A and 12B, respectively. These upper electrodes 124A and 124B are also called anodes of the photodetectors 12A and 12B.
  • the upper electrodes 124A and 124B are preferably made of the same material as the layer 124 in which they are formed, as explained in relation to FIG. 8.
  • FIG. 10 is a sectional view, schematic and partial, of yet another step of the mode of implementation of the method for producing the image sensor 1 of FIGS. 1 and 2 from the structure as described in relation to FIG. 9.
  • the openings 340, 342 and 344 are filled in with a third insulation layer 35 of which only parts 350, 352 and 354 are visible in FIG. 10.
  • the parts 350, 352 and 354 of the third insulation layer 35 respectively fill the openings 340, 342 and 344.
  • the purpose of the parts 350, 352 and 354 of the third layer 35 is to electrically isolate neighboring photodetectors belonging to the same line of the image sensor 1.
  • the parts 350, 352 and 354 of the third layer 35 at least partially absorb the light received by the image sensor 1 in order to optically isolate the photodetectors of a same row.
  • the third insulating layer can be made from a resin whose absorption covers at least the wavelengths of the photodiodes (visible and infrared). Such a resin, with a black appearance, is then qualified as “black resin”.
  • the part 352 electrically and optically isolates the first photodetector 12A from the second photodetector 12B of the pixel 12.
  • the third insulating layer 35 may be made of an inorganic material, for example of silicon oxide (SiO2) or of silicon nitride (SiN). In the case where the third insulating layer 35 is made of silicon nitride, this material is preferably obtained by physical vapor deposition (PVD) or by plasma-enhanced chemical vapor deposition (PECVD).
  • the third insulating layer 35 can be made of a fluoropolymer, in particular the fluoropolymer known under the trade name "Cytop" from the company Bellex, of polyvinylpyrrolidone (PVP), polymethyl methacrylate (PMMA), polystyrene (PS), parylene, polyimide (PI), acrylonitrile butadiene styrene (ABS), polydimethylsiloxane (PDMS), a photolithography resin, an epoxy resin, an acrylate resin or a mixture of at least two of these compounds.
  • the third insulating layer 35 can be made from another inorganic dielectric, in particular from aluminum oxide (Al2O3).
  • Aluminum oxide can be deposited by atomic layer deposition (Atomic Layer Deposition - ALD).
  • the maximum thickness of the third insulating layer 35 may be between 50 nm and 2 µm, for example of the order of 100 nm.
  • a fourth layer 360 is then deposited over the entire structure on the side of the upper surface 30 of the CMOS support 3.
  • This fourth layer 360 is preferably a so-called "planarization” layer making it possible to obtain a structure having a flat upper surface before encapsulation of the photodetectors.
  • the fourth planarization layer 360 may be made of a dielectric material based on polymers.
  • the planarization layer 360 can alternatively contain a mixture of silicon nitride (SiN) and silicon oxide (SiO2), this mixture being obtained by sputtering, by physical vapor deposition (PVD) or by plasma-enhanced chemical vapor deposition (PECVD).
  • the planarization layer 360 can also be made of a fluoropolymer, in particular the fluoropolymer known under the trade name "Cytop" from the company Bellex, of polyvinylpyrrolidone (PVP), polymethyl methacrylate (PMMA), polystyrene (PS), parylene, polyimide (PI), acrylonitrile butadiene styrene (ABS), polydimethylsiloxane (PDMS), a photolithography resin, an epoxy resin, an acrylate resin or a mixture of at least two of these compounds.
  • Figure 11 is a sectional view, schematic and partial, of a variant of the embodiment of the method for producing the image sensor 1 of Figures 1 and 2 from the structure as described in relation with figure 9.
  • This variant differs from the step explained in relation to FIG. 10 mainly in that the openings 340, 342 and 344 are not here filled by the parts 350, 352 and 354 of the third insulation layer 35 but by a layer 360', preferably made of a material identical to that of the fourth layer 360.
  • the variant illustrated in FIG. 11 thus amounts to omitting the deposition of the third insulation layer 35 and proceeding directly to the deposition of the fourth layer 360 to form the fifth layer 360'.
  • the transparent materials listed for the fourth layer 360 as explained in relation to FIG. 10 are suitable for forming the fifth layer 360 '.
  • the fifth layer 360' is not made of black resin.
  • FIG. 12 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor 1 of Figures 1 and 2 from the structure such as described in relation to figure 10.
  • a sixth layer 370 is deposited over the entire structure on the side of the upper surface 30 of the CMOS support 3.
  • the purpose of this sixth layer 370 is to encapsulate the organic photodetectors of the image sensor 1.
  • the sixth layer 370 thus makes it possible to avoid the degradation, due to exposure to water or to the humidity contained in the ambient air, of the organic materials constituting the photodetectors of the image sensor 1.
  • the sixth layer 370 covers the entire free top surface of the fourth planarization layer 360.
  • the sixth layer 370 may consist of alumina (Al2O3) obtained by atomic layer deposition (ALD), of silicon nitride (Si3N4) or of silicon oxide (SiO2) obtained by physical vapor deposition (PVD), or of silicon nitride obtained by plasma-enhanced chemical vapor deposition (PECVD).
  • the sixth layer 370 may alternatively consist of PET, PEN, COP or CPI.
  • the sixth layer 370 makes it possible to further improve the surface condition of the structure before production of the microlenses.
  • Figure 13 is a sectional view, schematic and partial, of yet another step of the embodiment of the method for producing the image sensor 1 of Figures 1 and 2 from the structure such as described in relation to figure 12.
  • the microlens 18 of the pixel 12 is produced directly above the photodetectors 12A and 12B. In the example of FIG. 13, this microlens 18 is substantially centered relative to the opening 342 separating the two photodetectors 12A, 12B.
  • the microlens 18 is also approximately aligned with the portion 352 of the third insulation layer 35 (Figure 10). The pixel 12 of the image sensor 1 is thus obtained.
  • the process for forming the layers of the image sensor 1 may correspond to a so-called additive process, for example by direct printing of the material composing the organic layers at the desired locations, in particular in sol-gel form, for example by inkjet printing, heliography, screen printing, flexography, spray coating or drop-casting.
  • the process for forming the layers of the image sensor may correspond to a so-called subtractive process, in which the material making up the organic layers is deposited over the entire structure and the unused portions are then removed, for example by photolithography or laser ablation.
  • the deposition over the entire structure can be carried out, for example, by liquid deposition, by cathode sputtering or by evaporation. The methods may in particular be of the spin coating, spray coating, heliography, slot-die coating, blade coating, flexography or screen printing type.
  • when the layers are metallic, the metal is, for example, deposited by evaporation or by cathode sputtering over the entire support and the metallic layers are then delimited by etching.
  • at least some of the layers of the image sensor can be produced by printing techniques.
  • the materials of these layers described above can be deposited in liquid form, for example in the form of conductive and semiconductor inks using inkjet printers.
  • material in liquid form is understood here also to mean gel materials which can be deposited by printing techniques.
  • Annealing steps are optionally provided between the depositions of the different layers, but the annealing temperatures may not exceed 150 ° C., and the deposition and any annealing may be carried out at atmospheric pressure.
  • Figure 14 is a sectional view along plane AA ( Figure 2), schematic and partial, of the image sensor 1 of Figures 1 and 2.
  • the section plane AA corresponds to a section plane parallel to a row of pixels of the image sensor.
  • In FIG. 14, only the pixels 12 and 16 of the image sensor 1 have been represented. These pixels 12 and 16 belong to the same row of pixels of the image sensor 1.
  • the photodetectors 12A, 12B of pixel 12 and the photodetectors 16A, 16B of pixel 16 are separated from each other. Thus, along the same row of the image sensor 1, each photodetector is isolated from the neighboring photodetectors.
  • Figure 15 is a sectional view along plane BB ( Figure 2), schematic and partial, of the image sensor 1 of Figures 1 and 2.
  • the section plane BB corresponds to a section plane parallel to a column of pixels of the image sensor.
  • In FIG. 15, only the first photodetectors 10A and 12A of the pixels 10 and 12, respectively, are visible.
  • the lower electrode 102A of the first photodetector 10A of the pixel 10 is separated from the lower electrode 122A of the first photodetector 12A of the pixel 12;
  • the active layer 100A of the first photodetector 10A of the pixel 10 and the active layer 120A of the first photodetector 12A of the pixel 12 are formed by the same continuous deposit; and the upper electrode 104A of the first photodetector 10A of the pixel 10 and the upper electrode 124A of the first photodetector 12A of the pixel 12 are formed by another single continuous deposit.
  • all the first photodetectors of the pixels belonging to the same column of pixels of the image sensor 1 have a common active layer and a common upper electrode.
  • the common upper electrode thus makes it possible to address all the first photodetectors of the pixels of the same column, while the lower electrodes make it possible to address each first photodetector individually.
  • all the second photodetectors of the pixels belonging to the same column of pixels of the image sensor 1 have another common active layer, distinct from the common active layer of the first photodetectors of these same pixels, and another common upper electrode, distinct from the common upper electrode of the first photodetectors of these same pixels.
  • this other common upper electrode thus makes it possible to address all the second photodetectors of the pixels of the same column, while the lower electrodes make it possible to address each second photodetector individually.
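The column-wise addressing described above can be illustrated with a minimal sketch. This is not the patent's circuitry; the classes, names and values are hypothetical, and only the idea of a shared upper electrode per column combined with individual lower electrodes comes from the description.

```python
# Illustrative sketch only: first photodetectors of one pixel column share an
# upper electrode, while each keeps its own lower electrode for individual readout.

class Photodetector:
    def __init__(self, row):
        self.row = row
        self.charge = 0.0  # accumulated photo-generated charge (arbitrary units)

class Column:
    """One pixel column: a common upper electrode plus one first photodetector per row."""
    def __init__(self, n_rows):
        self.upper_electrode_biased = False              # common to the whole column
        self.first_pds = [Photodetector(r) for r in range(n_rows)]

    def read(self):
        # Biasing the common upper electrode addresses all first photodetectors
        # of the column at once...
        self.upper_electrode_biased = True
        # ...while each individual lower electrode lets one photodetector be read at a time.
        values = [pd.charge for pd in self.first_pds]
        self.upper_electrode_biased = False
        return values

col = Column(n_rows=4)
print(col.read())  # one value per first photodetector of the column
```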
  • FIG. 16 is a sectional view, schematic and partial, of another embodiment of an image sensor 4.
  • the image sensor 4 shown in FIG. 16 is analogous to the image sensor 1 described in relation to FIGS. 1 and 2. This image sensor 4 differs from the image sensor 1 mainly in that:
  • the pixels 10, 12, 14 and 16 of the image sensor 4 belong to the same row or to the same column of this image sensor 4 (while the pixels 10, 12, 14 and 16 of the image sensor 1 (FIG. 1) are distributed over two rows and two different columns of this image sensor 1); and each pixel 10, 12, 14 and 16 of the image sensor 4 has a color filter 41R, 41G or 41B under its microlens 18 and on a passivation layer 43.
  • the four pixels 10, 12, 14 and 16 of the image sensor 4 are monochromatic pixels.
  • the image sensor 4 comprises:
  • a first green filter 41G interposed between the microlens 18 of the pixel 10 and the passivation layer 43;
  • a second green filter 41G interposed between the microlens 18 of the pixel 14 and the passivation layer 43;
  • a red filter 41R interposed between the microlens 18 of the pixel 12 and the passivation layer 43;
  • a blue filter 41B interposed between the microlens 18 of the pixel 16 and the passivation layer 43.
  • the color filters 41R, 41G and 41B of the image sensor 4 pass electromagnetic waves in different ranges of the visible spectrum and also pass electromagnetic waves of the infrared spectrum.
  • the color filters 41R, 41G and 41B may correspond to blocks of colored resin.
  • each color filter 41R, 41G and 41B is adapted to pass infrared radiation, for example at wavelengths between 700 nm and 1 mm, and, for at least some of the color filters, to pass a range of wavelengths of visible light.
  • the image sensor 4 can include:
  • at least one pixel, for example the pixel 16, whose color filter 41B is adapted to pass infrared radiation and blue light, for example in the wavelength range from 430 nm to 490 nm;
  • at least one pixel, for example the pixels 10 and 14, whose color filter 41G is adapted to pass infrared radiation and green light, for example in the wavelength range from 510 nm to 570 nm; and
  • at least one pixel, for example the pixel 12, whose color filter 41R is adapted to pass infrared radiation and red light, for example in the wavelength range from 600 nm to 720 nm.
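The passbands listed above can be summarised in a small lookup table. The sketch below is purely illustrative and not part of the patent text; only the wavelength ranges (each filter passing infrared plus one visible band) are taken from the description, and the helper function is hypothetical.

```python
# Illustrative sketch only: hypothetical passband table for the filters 41B, 41G, 41R.
# Ranges are in nanometres; 1 mm = 1_000_000 nm.

FILTER_PASSBANDS_NM = {
    "41B": [(430, 490), (700, 1_000_000)],   # blue + infrared
    "41G": [(510, 570), (700, 1_000_000)],   # green + infrared
    "41R": [(600, 720), (700, 1_000_000)],   # red + infrared
}

def passes(filter_name, wavelength_nm):
    """True if the named filter transmits the given wavelength."""
    return any(lo <= wavelength_nm <= hi for lo, hi in FILTER_PASSBANDS_NM[filter_name])

print(passes("41B", 460))   # True: blue light
print(passes("41R", 460))   # False: outside the red passband
print(passes("41R", 940))   # True: infrared passes every filter
```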
  • each pixel 10, 12, 14, 16 of the image sensor 4 has a first and a second photodetector.
  • Each pixel thus comprises two photodetectors, represented very schematically in FIG. 16 by the same block (OPD). More precisely, in FIG. 16:
  • pixel 10 includes two organic photodetectors (block 90, OPD);
  • pixel 12 has two organic photodetectors (block 92, OPD);
  • pixel 14 includes two organic photodetectors (block 94, OPD); and
  • pixel 16 has two organic photodetectors (block 96, OPD).
  • the photodetectors of each pixel 10, 12, 14 and 16 are coplanar and each associated with a read circuit as explained in relation to FIG. 3. These read circuits are produced on and inside the CMOS support 3.
  • the image sensor 4 is thus capable, for example, of alternately estimating distances by time of flight in the infrared and capturing color images.
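The alternation between infrared time-of-flight ranging and color capture can be sketched as a simple frame-scheduling loop. This is not the patent's driver; the function names and return values are hypothetical placeholders, and only the idea of alternating the two capture modes comes from the description.

```python
# Illustrative sketch only: alternate infrared time-of-flight distance
# estimation with color image capture, frame by frame.

from itertools import cycle

def capture_color_frame():
    # Placeholder for reading the visible-light response of the photodetectors.
    return "color frame"

def estimate_distance_tof():
    # Placeholder for an infrared time-of-flight measurement.
    return "depth frame"

def run(n_frames):
    frames = []
    for mode in cycle(["color", "tof"]):
        if len(frames) == n_frames:
            break
        frames.append(capture_color_frame() if mode == "color" else estimate_distance_tof())
    return frames

print(run(4))  # ['color frame', 'depth frame', 'color frame', 'depth frame']
```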

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Solid State Image Pick-Up Elements (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
EP20739697.9A 2019-07-19 2020-07-16 Bildsensorpixel Pending EP4000096A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1908251A FR3098989B1 (fr) 2019-07-19 2019-07-19 Pixel de capteur d’images
PCT/EP2020/070072 WO2021013666A1 (fr) 2019-07-19 2020-07-16 Pixel de capteur d'images

Publications (1)

Publication Number Publication Date
EP4000096A1 true EP4000096A1 (de) 2022-05-25

Family

ID=69172849

Family Applications (2)

Application Number Title Priority Date Filing Date
EP20186097.0A Pending EP3767677A1 (de) 2019-07-19 2020-07-16 Pixel eines bildsensors
EP20739697.9A Pending EP4000096A1 (de) 2019-07-19 2020-07-16 Bildsensorpixel

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP20186097.0A Pending EP3767677A1 (de) 2019-07-19 2020-07-16 Pixel eines bildsensors

Country Status (8)

Country Link
US (1) US20220262863A1 (de)
EP (2) EP3767677A1 (de)
JP (1) JP2022541305A (de)
KR (1) KR20220032096A (de)
CN (2) CN114270521A (de)
FR (1) FR3098989B1 (de)
TW (1) TW202118031A (de)
WO (1) WO2021013666A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210013508A (ko) * 2019-07-26 2021-02-04 삼성디스플레이 주식회사 광 센서, 광 센서의 제조 방법 및 광 센서를 포함하는 표시 장치

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5556823B2 (ja) * 2012-01-13 2014-07-23 株式会社ニコン 固体撮像装置および電子カメラ
JP2016058559A (ja) * 2014-09-10 2016-04-21 ソニー株式会社 固体撮像装置およびその駆動方法、並びに電子機器
US9967501B2 (en) * 2014-10-08 2018-05-08 Panasonic Intellectual Property Management Co., Ltd. Imaging device
KR20160100569A (ko) * 2015-02-16 2016-08-24 삼성전자주식회사 이미지 센서 및 이미지 센서를 포함하는 촬상 장치
KR20170098089A (ko) * 2016-02-19 2017-08-29 삼성전자주식회사 전자 장치 및 그의 동작 방법
EP3579021B1 (de) * 2017-02-06 2021-03-03 Panasonic Intellectual Property Management Co., Ltd. Dreidimensionale bewegungserfassungsvorrichtung und dreidimensionales bewegungserfassungsverfahren

Also Published As

Publication number Publication date
TW202118031A (zh) 2021-05-01
CN213304142U (zh) 2021-05-28
JP2022541305A (ja) 2022-09-22
US20220262863A1 (en) 2022-08-18
KR20220032096A (ko) 2022-03-15
WO2021013666A1 (fr) 2021-01-28
EP3767677A1 (de) 2021-01-20
FR3098989A1 (fr) 2021-01-22
FR3098989B1 (fr) 2023-08-25
CN114270521A (zh) 2022-04-01

Similar Documents

Publication Publication Date Title
EP3931872B1 (de) Farb- und infrarotbildsensor
EP3767679B1 (de) Pixel eines bildsensors
WO2020178498A1 (fr) Capteur d'images couleur et infrarouge
WO2021013666A1 (fr) Pixel de capteur d'images
EP4026172A1 (de) Bildschirmpixel
EP3931873B1 (de) Farb- und infrarotbildsensor
WO2021013667A1 (fr) Pixel de capteur d'images
WO2022184409A1 (fr) Capteur hybride
EP4053900A1 (de) Hybridsensor
FR3120472A1 (fr) Capteur hybride
EP4073842A1 (de) Bildsensor zum korrigieren des elektronischen rauschens eines sensors
EP3942613A1 (de) Bildsensor mit einem winkelfilter

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220214

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)