WO2020259929A1 - Imaging system and detection method - Google Patents

Imaging system and detection method

Info

Publication number
WO2020259929A1
WO2020259929A1 (PCT/EP2020/063943)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging system
detector array
light
image
frame
Prior art date
Application number
PCT/EP2020/063943
Other languages
English (en)
Inventor
Jens Hofrichter
Original Assignee
Ams International Ag
Priority date
Filing date
Publication date
Application filed by Ams International Ag filed Critical Ams International Ag
Priority to EP20725576.1A priority Critical patent/EP3990942A1/fr
Priority to CN202080056778.8A priority patent/CN114599990A/zh
Priority to US17/621,664 priority patent/US20220357434A1/en
Publication of WO2020259929A1 publication Critical patent/WO2020259929A1/fr

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4861Circuits for detection, sampling, integration or read-out
    • G01S7/4863Detector arrays, e.g. charge-transfer gates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4865Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/499Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using polarisation effects
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14625Optical elements or arrangements associated with the device
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14643Photodiode arrays; MOS imagers
    • H01L27/14649Infrared imagers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/77Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components

Definitions

  • the present disclosure is related to an imaging system and a detection method.
  • the disclosure relates to the field of light detection and ranging systems, abbreviated LIDAR systems. Moreover, it relates to imaging systems. More specifically, it relates to imaging systems with integrated LIDAR capability for distance measurements. Particularly, it relates to high-resolution imagers with distance measurement capabilities.
  • the imaging system may be realized as a LIDAR system, employing image sensors with in-pixel modulation.
  • Prior art deals with optical ranging systems mainly through the so-called time-of-flight, TOF, approach, where an optical pulse is emitted and its travel time is measured with a time-to-digital converter, TDC.
  • Scanning LIDAR systems with movable parts and sequential scanning of the surface are for example employed for autonomous driving.
  • Figure 12 and Figure 13 both show the working principle of prior art approaches (1) and (2) discussed above.
  • Figure 12 shows a LIDAR system wherein a light source (e.g., a laser) is emitting light pulses towards an object where the light pulse is reflected.
  • Figure 13 shows the LIDAR system wherein the backward travelling light pulse is detected by a photodetector.
  • the time difference between the light pulse emission and the receiving light pulse can be measured with a time-to-digital converter, TDC.
  • a light pulse is emitted by the light source (typically a laser suitable for emitting light pulses of very short duration, e.g. picoseconds) and deflected onto several objects, where it is then reflected.
  • the backward travelling light pulse is then collected by an appropriate photodetector, e.g., an avalanche photodetector or SPAD suitable for fast light pulse detection.
  • the time difference between the light pulse emission and the received light pulse is measured or counted by a time-to-digital converter, TDC.
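The prior-art time-of-flight relation described above can be expressed directly: the TDC measures the round-trip time, and distance follows from half the round trip. A minimal sketch (function name and values are illustrative, not from the patent):

```python
# Prior-art direct time-of-flight: the TDC counts the interval
# between pulse emission and pulse reception; the distance is
# half the round-trip path travelled at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting target from the TDC time stamp."""
    return C * round_trip_time_s / 2.0

# A round trip of about 66.7 ns corresponds to roughly 10 m.
print(tof_distance(66.7e-9))
```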
  • the backward travelling light is collected with a CMOS image sensor or sensor array.
  • the image sensor itself is configured such that it has a modulator built on top of it to modulate the intensity or phase of the incoming signal.
  • an unmodulated image Idc is recorded.
  • the modulated image Imod is acquired.
  • the modulation is made such that at the beginning of the frame the modulator is attenuating the signal. At the end of the frame the modulator allows for full transparency and the signal is not attenuated anymore.
  • the distance image is computed as a function of the modulated image Imod and the unmodulated image Idc, e.g. as their ratio.
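The normalization step above can be sketched as follows; this is a minimal illustration assuming the distance is encoded in the pixel-wise ratio Imod/Idc (the helper name and array values are illustrative):

```python
def ratio_image(i_mod, i_dc, eps=1e-9):
    """Pixel-wise ratio of the modulated frame to the unmodulated
    frame. The unmodulated frame I_dc normalizes out target
    reflectivity and illumination, so the ratio encodes only the
    modulation state seen by the returning light."""
    return [[m / max(d, eps) for m, d in zip(row_m, row_d)]
            for row_m, row_d in zip(i_mod, i_dc)]

i_dc = [[4.0, 2.0], [8.0, 1.0]]   # unmodulated intensities I_dc
i_mod = [[1.0, 1.5], [6.0, 0.9]]  # modulated intensities I_mod
print(ratio_image(i_mod, i_dc))   # values between 0 and 1
```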
  • the following relates to an improved concept in the field of light detection and ranging, LIDAR, systems.
  • Modulation of sensitivity of the receiver side can be achieved through in pixel modulation of the detector array.
  • the improved concept suggests that a modulation is performed within the pixel, rather than using a modulator on top of or attached to the receiving element.
  • the pixels can be a modified four-transistor (4T) pixel, for example.
  • the detector array can be complemented with one transistor or several transistors attached to the photodiode. Said transistor or transistors can be operated as an element to modulate the effective responsivity of the photodiode by introducing a controlled leakage path.
  • the photodiodes can be complemented with an integrated polarizer.
  • the polarizer can be considered any array of either horizontal or vertical lines, which may be made from metal but also from other dielectric or semiconductor materials. Adjacent photodiodes might have different orientations of the polarizers to detect different polarization states of the light. Only one metal level may be present, optionally metal 1, where the remainder of the back-end-of-line is clear.
  • polarizers comprising semiconductor materials or dielectric materials can be contemplated.
  • high-contrast gratings could be integrated in the CMOS process by using polysilicon or other gate materials for instance.
  • an imaging system comprises a light emitter, a detector array and a synchronization circuit.
  • the detector array comprises pixels, which have a built-in modulation function.
  • the synchronization circuit is operable to synchronize the acquisition performed by the detector array with the emission by means of the light source.
  • the light emitter comprises a light source such as a light emitting diode or semiconductor laser diode, for example.
  • a semiconductor laser diode includes surface emitting lasers, such as the vertical-cavity surface-emitting laser, or VCSEL, or edge emitter lasers.
  • VCSELs combine properties such as surface emission with design flexibility.
  • the light emitter is connected to a laser driver circuit.
  • Another option for the light emitter is the use of light-emitting diodes (LEDs). LEDs are also surface-emitting devices, but the radiation is non-coherent and typically has Lambertian emission characteristics, as opposed to a directed laser beam.
  • the light source, or light emitter has one or more characteristic emission wavelengths.
  • an emission wavelength of the light emitter lies in the near infrared, NIR.
  • Other wavelengths may include visible, ultraviolet (UV), infrared (IR) or far-infrared (FIR), for example.
  • In one embodiment, the wavelength of the light emitter might be at visible wavelengths.
  • the light emission might be in the infrared wavelength range, which is invisible to the human eye.
  • the emission wavelength can be 850 nm or 940 nm.
  • a narrow wavelength bandwidth (especially over temperature) allows for more effective filtering at the receiver side, e.g. at the detector array, resulting in an improved signal-to-noise ratio (SNR).
  • This type of laser allows for emitting a vertical cylindrical beam, which renders the integration into the imaging system more straightforward.
  • the emission of the laser can be made homogenous using one or a multitude of diffusers to illuminate the scene uniformly.
  • the detector array comprises one or more photodetectors, denoted pixels hereinafter.
  • the detector array may be implemented with solid-state or semiconductor photodetectors, which are arranged or integrated such as to form an array. Examples include CCD or CMOS image sensors, an array of single-photon avalanche diodes, SPADs, or other types of avalanche photodiodes, APDs. These types of photodetectors are sensitive to light in the VIS, NIR and UV ranges, which facilitates using narrow pulse widths in emission by means of the light emitter.
  • the detector array is arranged to acquire one or more consecutive images, denoted frames hereinafter. In order to acquire a frame the detector array may integrate incident light, using the pixels of the array, for the duration of an exposure time. Acquisition of consecutive frames may proceed with a frame rate, expressed in frames per second, for example.
  • the pixels have a built-in modulation function.
  • modulation relates to a defined change of sensitivity or responsivity of pixels during the acquisition of a frame. In other words, detection by the detector array is modulated rather than the emission of light by the light emitter.
  • This modulation can be monotonic in a mathematical sense, i.e., the change of sensitivity can be described by means of a monotonic function, e.g., a monotonic function of time.
  • a function is called monotonic if and only if it is either entirely non-increasing, or entirely non-decreasing.
  • modulation may be monotonically increasing or decreasing.
  • the duration of a single frame may be limited to the time during which the sensitivity is modulated.
  • an advantage of the proposed concept lies in the fact that the modulation time can also be, and in practice often is, shorter than the duration of the frame. This renders the proposed concept different from common time-of-flight (ToF), which relies on the ability to conduct an actual time measurement of the emitted pulse and the corresponding reflected pulse in a stop-watch manner, e.g., by means of time-to-digital converters (TDCs).
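The monotonic in-pixel sensitivity modulation described above can be illustrated with a simple ramp; the linear shape and the frame duration are assumptions, as the text only requires monotonicity:

```python
def sensitivity(t, t_frame):
    """Monotonically increasing sensitivity ramp over one frame:
    0 at the start of the frame, 1 (full sensitivity) at its end."""
    t = min(max(t, 0.0), t_frame)  # clamp to the frame interval
    return t / t_frame

T_FRAME = 1.0e-3  # assumed 1 ms modulation window
samples = [sensitivity(k * T_FRAME / 10, T_FRAME) for k in range(11)]
# entirely non-decreasing, i.e. monotonic in the mathematical sense
print(all(a <= b for a, b in zip(samples, samples[1:])))
```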
  • the synchronization circuit is arranged to synchronize the emission of light by means of the light emitter on one side and detection of light by means of the detector array on the other side.
  • the synchronization circuit may control a delay between emission of light, e.g., pulses, and a time frame for detection. For example, a delay between the beginning of the light pulse and the beginning of detection may be set, e.g., depending on a distance or distance range to be detected. The time it takes to finish a distance measurement cycle depends on the distance between the imaging system and an external target.
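The delay between pulse emission and the start of detection follows from the round-trip time to the nearest distance of interest. A sketch (function name and values are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def detection_delay(d_min_m: float) -> float:
    """Delay after pulse emission before the detector starts
    integrating, so that only targets at distances of at least
    d_min contribute to the acquired frame."""
    return 2.0 * d_min_m / C

# Light returning from 15 m arrives about 100 ns after emission.
print(detection_delay(15.0))
```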
  • the light emitter emits light during operation of the imaging system, e.g., towards a scene including an external target or object.
  • the emitted light eventually hits the external target, gets reflected or scattered at the target, and returns to the imaging system, where the detector array eventually detects the returning light.
  • Light returning from a closer target in the scene meets a different stage of modulation in the receiver than light returning from a target farther away in the scene.
  • the distance is encoded in the signals generated by the pixels, or by the image created from the array.
  • the distance to the point of reflection or scattering can be calculated from the image, e.g., by evaluating the recorded intensities relative to an unmodulated reference frame.
  • the imaging system according to the improved concept allows for reducing the need for complex receiver architectures. Instead, the system complexity is reduced by introducing in pixel modulation at the receiver.
  • Such an in-pixel modulation scheme can be fabricated very cost-effectively within a CMOS image sensor process and does not require Pockels cells made from lithium niobate type materials or polymer films exhibiting an electro-optical effect.
  • the LIDAR system can be compact and cost-efficient, and at the same time has a resolution of that of modern CMOS image sensors (e.g., several Mpixels).
  • the optical beam path is simplified thus enabling integration of the polarization beam splitter (PBS) into the CMOS image sensor.
  • the complexity of the receiver can be largely reduced, alleviating the need for complex Pockels cells and other types of costly (lithium niobate) or otherwise complex components.
  • the detector array and the synchronization circuit are integrated into a same chip.
  • the chip comprises a common integrated circuit into which at least the detector array and synchronization circuit are integrated.
  • the light emitter and the common integrated circuit are arranged on and electrically contacted to each other via a shared carrier or substrate.
  • the light emitter may also be integrated into the common integrated circuit. Integration allows for compact design, reducing board space requirements and enabling low-profile system designs.
  • an imaging system which can observe a complete field of view (FOV) at once is called a Flash system.
  • the imaging system may emit pulses of light, such as in the infrared. A portion of that energy is returned and converted into distance and, optionally, speed, for example.
  • the detector array comprises pixels which have a polarizing function. For example, integrated polarization-sensitive photodiodes can be used.
  • the detector array may have on-chip polarizers associated with the pixels, respectively.
  • Such a structure may be integrated using CMOS technology.
  • the polarizers can be placed on-chip using an air-gap nano-wire grid coated with an anti-reflection material that suppresses flaring and ghosting. Such an on-chip design reduces polarization crosstalk and improves extinction ratios.
  • the detector array may be complemented with a layer of polarizers arranged above the pixels. The layer of polarizers may not necessarily be integrated with the detector array; it comprises a number of polarizers which are associated with the pixels, respectively.
  • A detector array with pixels which have a polarizing function is disclosed in EP 3261130 A1, which is hereby incorporated by reference.
  • the scene comprises several external targets which each may reflect or scatter light.
  • the reflection or scattering of light off of the targets typically produces polarized light, such as linearly polarized light in a plane perpendicular to the incident light.
  • Polarization may further be affected by material properties of the external targets, e.g. properties of their surfaces. Polarization may thus be visible in an image acquired by the detector array.
  • pixels which have a polarizing function may further increase contrast and signal-to-noise ratio in the resulting image so that different external targets can be distinguished in the image more easily.
  • a light source such as a VCSEL may generate polarized light.
  • the object or external target may change or rotate the polarized light.
  • polarization is able to generate more information about the object and may increase accuracy of the distance measurement.
  • adjacent pixels have orthogonal polarization functions. Using adjacent pixels with different polarization functions allows for the detection of more linear angles of polarized light, e.g. 0° and 90°, or 180° and 270°. This is possible through comparing the rise and fall in intensities transmitted between adjacent pixels, for example.
  • the adjacent pixels may be considered as groups in order to calculate degree and/or direction of polarization.
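Degree and direction of polarization from a group of adjacent pixels can be sketched with the usual Stokes-parameter arithmetic. Note the 45°/135° channels used here go beyond the orthogonal pair named above and are an assumption of this sketch:

```python
import math

def polarization_from_group(i0, i45, i90, i135):
    """Degree and angle of linear polarization from a group of
    pixels behind 0/45/90/135 degree polarizers (Stokes parameters)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal minus vertical
    s2 = i45 - i135                      # diagonal components
    dolp = math.hypot(s1, s2) / s0       # degree of linear polarization
    aolp = 0.5 * math.atan2(s2, s1)      # angle of polarization, rad
    return dolp, aolp

# Fully horizontally polarized light: all of it passes the 0-degree
# polarizer, none passes the 90-degree one, half passes 45/135.
dolp, aolp = polarization_from_group(1.0, 0.5, 0.0, 0.5)
print(dolp, math.degrees(aolp))
```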
  • the modulation function is achieved by a modulating element.
  • the modulating element controls sensitivity and/or responsivity of pixels of the detector array.
  • the modulating element introduces a leakage current which is linear or non-linear in the applied voltage.
  • Voltage can be regulated using known components such as voltage regulators. This allows for a controlled and reproducible modulation.
  • a leakage current that flows through the modulating element has a first value at a start of a frame and a second value at the end of the frame, wherein the first value is higher than the second value.
  • the first value is lower than the second value.
  • modulation is encoded in the image which is acquired by the detector array. For example, intensities in the resulting image allow for determining distance information.
  • a leakage current that flows through the modulating element monotonically decreases from a first value to a second value during a frame.
  • the modulating element is a transistor.
  • An example involves a leakage control transistor.
  • the emission wavelength of the light emitter is larger than 800 nm and smaller than 10,000 nm.
  • the emission wavelength of the light emitter is in between 840 nm and 1610 nm. Detection in these spectral ranges lies in the infrared and results in robust emission and detection. Furthermore, light in these spectral ranges is essentially invisible to the human eye.
  • a scene is illuminated with a first and a second light pulse. From the first light pulse a first image is acquired by a detector array having a constant sensitivity. From the second light pulse a second image is acquired by the detector array having a sensitivity increasing with time.
  • a distance of objects of the scene is determined depending on the first and second image. For example, the distance of objects of the scene is inferred from a ratio of the second image and the first image.
  • a LIDAR image is inferred from a ratio of the second image to the first image and distance information of the scene is determined from the LIDAR image.
  • a distance to one or more objects can be inferred from relative intensities in the LIDAR image. Since the sensitivity of the detector is modulated, the distance and relative distance of objects are encoded in the LIDAR image. For example, close objects show a different intensity than those farther away. These differences may be apparent in the LIDAR image and provide a measure of distance.
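Assuming the second exposure uses a sensitivity ramp rising linearly from 0 to 1 and detection starts at pulse emission (both assumptions of this sketch, not stated in the text), the ratio of the two images maps back to a distance as follows:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_ratio(ratio: float, t_ramp_s: float) -> float:
    """Invert a linear sensitivity ramp. The first frame (constant
    sensitivity) normalizes out reflectivity, so for a ramp rising
    from 0 to 1 the ratio equals t_arrival / t_ramp. The arrival
    time is the round trip, hence the factor 1/2."""
    t_arrival = ratio * t_ramp_s
    return C * t_arrival / 2.0

# A ratio of 0.5 over a 200 ns ramp means arrival at 100 ns,
# i.e. a target at roughly 15 m.
print(distance_from_ratio(0.5, 200e-9))
```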
  • the scene is illuminated by a light source and the first and the second light pulse have identical duration and height.
  • a vehicle, such as a car or other motor vehicle, comprises an imaging system according to the improved concept discussed above.
  • on-board electronics, such as an Advanced Driver Assistance System, ADAS, are embedded in the vehicle.
  • the imaging system is arranged to provide an output signal to the board electronics.
  • Possible applications include automotive such as autonomous driving, collision prevention, security and surveillance, as well as industry and automation and consumer electronics.
  • Figure 1 shows an example of the imaging system.
  • Figure 2 shows an example embodiment of a detector array with polarization function
  • Figure 3 shows a cross section of an example detector array with a high-contrast grating polarizer
  • Figure 4 shows an example embodiment of a modulation element
  • Figure 5 shows another example embodiment of a modulation element
  • Figure 6 shows another example embodiment of a modulation element
  • Figure 7 shows another example embodiment of a modulation element
  • Figure 8 shows another example embodiment of a modulation element
  • Figure 9 shows an example embodiment of a detection method
  • Figure 10 shows an example embodiment of a detection method
  • Figure 11 shows an example timing diagram of the light
  • Figure 12 shows an example embodiment of a prior art LIDAR detection method
  • Figure 13 shows another example embodiment of a prior art LIDAR detection method.
  • FIG. 1 shows an example imaging system.
  • the imaging system comprises a light source LS, a detector array DA and a synchronization circuit SC, which are arranged contiguous with and electrically coupled to a carrier CA.
  • the carrier comprises a substrate to provide electrical connectivity and mechanical support.
  • the detector array and the synchronization circuit are integrated into a same chip CH which constitutes a common integrated circuit.
  • the light source and the common integrated circuit are arranged on and electrically contacted to each other via the carrier.
  • the components of the imaging system are embedded in a sensor package (not shown). Further components, such as a processing unit, e.g. a processor or microprocessor to execute the detection method, ADCs, etc., are also arranged in the sensor package and may be integrated into the same integrated circuit.
  • the light source LS comprises a light emitter such as a surface emitting laser, e.g., a vertical-cavity surface-emitting laser, or VCSEL.
  • the light emitter has one or more characteristic emission wavelengths.
  • an emission wavelength of the light emitter lies in the near infrared, NIR, e.g. larger than 800 nm and smaller than 10,000 nm.
  • LIDAR applications may rely on an emission wavelength of the light emitter in between 840 nm and 1610 nm, which results in robust emission and detection. This range can be offered by the VCSEL.
  • the light source, such as a VCSEL, may generate polarized light.
  • An external target or object may change or rotate the polarized light.
  • an array with pixels having different polarization is able to generate more information about the object and to increase accuracy of the distance measurement.
  • the detector array DA comprises one or more photodetectors, or pixels.
  • the array of pixels forms an image sensor.
  • the photodetectors may be photodiodes, e.g. a pinned photodiode, a PIN photodiode or another type of photodiode.
  • the detector array comprises an array of detecting elements such as an array of photodiodes.
  • the pixels are polarization sensitive. Adjacent pixels of the image sensor are polarization sensitive, each having an orthogonal state of polarization arranged in a checker-board pattern. This will be discussed in more detail below.
  • the synchronization circuit SC is arranged in the same sensor package, and, in fact, integrated in the common integrated circuit.
  • the synchronization circuit SC is operable to synchronize the emission of light with the acquisition of images by the detector array, e.g. as frames A and B.
  • Figure 2 shows an example embodiment of a detector array with polarization function.
  • the detector array DA or image sensor, comprises pixels which are arranged in a pixel map as shown.
  • the image sensor can be characterized in that adjacent pixels have different states of polarization.
  • the drawing shows a detector array with on-chip polarizers associated with the pixels, respectively. Such a structure may be integrated using CMOS technology. Adjacent pixels have orthogonal polarization functions, e.g., pixels PxH with horizontal polarization and pixels PxV with vertical polarization.
  • Embodiments of said detector array are disclosed in EP 3261130 A1, which is hereby incorporated by reference.
  • Figure 3 shows a cross section of an example detector array with a high-contrast grating polarizer.
  • The cross section corresponds to Figure 1 of EP 3261130 A1 and is cited here for easy reference.
  • the remaining embodiments of detector arrays in EP 3261130 A1 are not excluded but rather also contemplated.
  • the photodetector device, detector array, shown in Figure 3 comprises a substrate 1 of semiconductor material, which may be silicon, for instance.
  • the photodetectors, or pixels, of the array are suitable for detecting electromagnetic radiation.
  • the detector array may comprise any conventional photodetector structure and is therefore only schematically represented in Figure 3 by a sensor region 2 in the substrate 1.
  • the sensor region 2 may extend continuously as a layer of the substrate 1, or it may be divided into sections according to a photodetector array.
  • the substrate 1 may be doped for electric conductivity at least in a region adjacent to the sensor region 2, and the sensor region 2 may be doped, either entirely or in separate sections, for the opposite type of electric conductivity. If the substrate 1 has p-type conductivity the sensor region 2 has n-type conductivity, and vice versa. Thus a pn-junction 8 or a plurality of pn-junctions 8 is formed at the boundary of the sensor region 2 and can be operated as a photodiode or array of photodiodes by applying a suitable voltage. This is only an example, and the photodetector array may comprise different structures.
  • a contact region 10 or a plurality of contact regions 10 comprising an electric conductivity that is higher than the conductivity of the adjacent semiconductor material may be provided in the substrate 1 outside the sensor region 2, especially by a higher doping concentration.
  • a further contact region 20 or a plurality of further contact regions 20 comprising an electric conductivity that is higher than the conductivity of the sensor region 2 may be arranged in the substrate 1 contiguous to the sensor region 2 or a section of the sensor region 2.
  • An electric contact 11 can be applied on each contact region 10 and a further electric contact 21 can be applied on each further contact region 20 for external electric connections.
  • An isolation region 3 may be formed above the sensor region 2.
  • the isolation region 3 is transparent or at least partially transparent to the radiation to be detected.
  • the isolation region 3 comprises a dielectric material like a field oxide, for instance. If the semiconductor material is silicon, the field oxide can be produced at the surface of the substrate 1 by local oxidation of silicon (LOCOS). As the volume of the material increases during oxidation, the field oxide protrudes from the plane of the substrate surface as shown in Figure 3.
  • Grid elements 4 are arranged at a distance d from one another on the surface 13 of the isolation region 3 above the sensor region 2.
  • the grid elements 4 can be arranged immediately on the surface 13 of the isolation region 3.
  • the grid elements 4 may have the same width w, and the distance d may be the same between any two adjacent grid elements 4.
  • the sum of the width w and the distance d is the pitch p, which is a minimal period of the regular lattice formed by the grid elements 4.
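The lattice relation above, pitch p = w + d, can be stated as a trivial computation; the concrete dimensions below are illustrative, not from the patent:

```python
def pitch(width_nm: float, distance_nm: float) -> float:
    """Pitch p of the regular grating: sum of the grid element
    width w and the gap d between adjacent elements."""
    return width_nm + distance_nm

# Illustrative sub-wavelength grating dimensions for NIR light
# (values assumed for this sketch).
w, d = 200.0, 150.0  # nm
p = pitch(w, d)
print(p)  # 350.0 nm, well below an 850 nm emission wavelength
```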
  • the length l of the grid elements 4, which is perpendicular to their width w, is indicated in Figure 3 for one of the grid elements 4 in a perspective view showing the hidden contours by broken lines.
  • the grid elements 4 are transparent or at least partially transparent to the electromagnetic radiation that is to be detected and have a refractive index for the relevant wavelength range that is higher than the refractive index of the surrounding regions.
  • the grid elements 4 may comprise polysilicon, silicon nitride or niobium pentoxide, for instance.
  • the use of polysilicon for the grid elements 4 has the advantage that the grid elements 4 can be formed in a CMOS process together with the formation of polysilicon electrodes or the like.
  • the refractive index of the isolation region 3 is lower than the refractive index of the grid elements 4.
  • the isolation region 3 is an example of the region of lower refractive index recited in the claims.
  • the grid elements 4 are covered by a further region of lower refractive index.
  • the grid elements 4 are covered by a dielectric layer 5 comprising a refractive index that is lower than the refractive index of the grid elements 4.
  • the dielectric layer 5 may especially comprise borophosphosilicate glass (BPSG), for instance, or silicon dioxide, which is employed in a CMOS process to form inter-metal dielectric layers of the wiring.
  • BPSG borophosphosilicate glass
  • An antireflective coating 7 may be applied on the grid elements 4. It may be formed by removing the dielectric layer 5 above the grid elements 4, depositing a material that is suitable for the antireflective coating 7, and filling the openings with the dielectric material of the dielectric layer 5.
  • the antireflective coating 7 may especially be provided to match the phase of the incident radiation to its propagation constant in the substrate 1.
  • the substrate 1 comprises silicon
  • the refractive index of the antireflective coating 7 may be at least approximately the square root of the refractive index of silicon. Silicon nitride may be used for the antireflective coating 7, for instance.
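The square-root rule above can be checked numerically. As a brief sketch (the silicon index value below is an assumed round number for the near infrared, not taken from the text), the matching condition puts the coating index near the square root of the substrate index:

```python
import math

# Sketch: antireflective coating index ~ sqrt(n_substrate).
# n_Si ~ 3.5 in the near infrared is an illustrative assumption.
n_si = 3.5
n_arc = math.sqrt(n_si)
print(f"target coating index: {n_arc:.2f}")
```

The result (about 1.87) is consistent with the document's suggestion of silicon nitride, whose index is close to 2.0.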
  • the array of grid elements 4 forms a high-contrast grating, which is comparable to a resonator comprising a high quality factor.
  • For the vector component of the electric field vector that is parallel to the longitudinal extension of the grid elements 4, i.e., perpendicular to the plane of the cross sections shown in Figure 3, the high-contrast grating acts as follows.
  • the optical path length of an incident electromagnetic wave is different in the grid elements 4 and in the sections of the further region of lower refractive index 5, 15 located between the grid elements 4.
  • an incident electromagnetic wave reaches the surface 13, 16 of the region of lower refractive index 3, 6, which forms the base of the high-contrast grating, with a phase shift between the portions that have passed a grid element 4 and the portions that have propagated between the grid elements 4.
  • the high-contrast grating can be designed to make the phase shift n or 180° for a specified wavelength, so that the portions in question cancel each other.
  • the high-contrast grating thus constitutes a reflector for a specified wavelength and for this state of polarization.
  • for the orthogonal state of polarization, the electromagnetic wave passes the grid elements 4 essentially undisturbed and is absorbed within the substrate 1 underneath.
  • electron-hole pairs are generated in the semiconductor material.
  • the charge carriers generated by the incident radiation produce an electric current, by which the radiation is detected.
  • a voltage is applied to the pn-junction 8 in the reverse direction.
  • the grid elements 4 may comprise a constant width w, and the distance d between adjacent grid elements 4 may also be constant, so that the high-contrast grating forms a regular lattice.
  • the pitch p of such a grating, which defines a shortest period of the lattice, is the sum of the width w of one grid element 4 and the distance d.
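The pitch relation is simple arithmetic and can be sketched directly; the dimensions and the 940 nm target wavelength below are illustrative assumptions:

```python
# Sketch: pitch p = width w + distance d of the regular grid lattice.
# For a sub-wavelength grating the pitch stays below the wavelength to be
# detected. All dimensions are illustrative assumptions (nanometres).

def pitch(width_nm: float, distance_nm: float) -> float:
    """Minimal period of the regular lattice of grid elements."""
    return width_nm + distance_nm

p = pitch(250.0, 250.0)
print(p, p < 940.0)  # assumed target wavelength: 940 nm
```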
  • the pitch p is typically smaller than the wavelength of the electromagnetic radiation that is to be detected.
  • the detector array with high-contrast grating polarizer can be used for a broad range of applications. Further advantages include an improved extinction coefficient for states of polarization that are to be excluded and an enhanced responsivity for the desired state of polarization.
  • FIG. 4 shows an example embodiment of a modulation element.
  • the drawing shows a circuit layout of a pixel architecture, or 4T pixel cell, of the detector array DA implemented as an image sensor.
  • the pixel Px is connected as a 4T pixel architecture.
  • the 4T pixel architecture comprises a transfer transistor Tt with transfer gate TG, a first reset transistor Tr1 connected to a first floating diffusion Fd1, a source follower Sf, and a column select transistor Cs, which provides an output terminal Out.
  • the pixel Px is further connected to the modulation element ME which comprises a leakage control element LC, a second reset transistor Tr2 and a second floating diffusion Fd2.
  • Vdd indicates the supply rail.
  • the modulation element ME comprises a leakage control element LC, which is responsible for re-routing a certain amount of charge per unit time to a position different from a floating diffusion. This re-routing can be controlled using the gate of the leakage control element LC.
  • the floating diffusion of the 4T pixel cell holds the relevant charge information. That charge is intentionally re-routed so that the responsivity is reduced. In effect, a leakage path to the second floating diffusion Fd2 is introduced, reducing the responsivity of the photodetector Px.
  • the modulation proceeds during the acquisition of a frame.
  • FIG. 5 shows another example embodiment of a modulation element.
  • the drawing shows a circuit layout of a pixel architecture of the detector array DA implemented as an image sensor.
  • the 4T pixel cell is that of Figure 4, but modified as follows.
  • the pixel Px is connected to the modulation element ME which comprises the leakage control element LC but not a second reset transistor Tr2 or a second floating diffusion Fd2.
  • the leakage control element LC is connected to the supply rail Vdd.
  • the leakage control element LC redirects charge to the supply rails (e.g. Vdd) during the acquisition of a frame.
  • the leakage control can be monotonic such that a frame starts with a reduced sensitivity which then increases monotonically. This concept enables objects further away from the imaging system to contribute more signal.
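The effect of a monotonically increasing sensitivity can be sketched numerically: pulses returning from farther objects arrive later and are therefore weighted more strongly. The frame duration, distances, helper names and the linear ramp shape below are illustrative assumptions:

```python
# Sketch of the monotonic in-pixel modulation idea: sensitivity starts low
# and rises during the frame, so later-arriving (farther) echoes count more.

C = 3e8  # speed of light in m/s

def arrival_time(distance_m: float) -> float:
    """Round-trip time of flight of the reflected pulse."""
    return 2.0 * distance_m / C

def sensitivity(t_s: float, frame_s: float) -> float:
    """Linear, monotonically increasing ramp from 0 to 1 over the frame."""
    return min(max(t_s / frame_s, 0.0), 1.0)

FRAME = 1e-6  # assumed 1 us detection frame
weight_near = sensitivity(arrival_time(10.0), FRAME)   # object at 10 m
weight_far = sensitivity(arrival_time(100.0), FRAME)   # object at 100 m
print(weight_near, weight_far)
```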
  • Figure 6 shows another example embodiment of a modulation element.
  • the drawing shows the circuit layout of Figure 5.
  • the leakage control element LC is connected to ground instead of supply rail Vdd.
  • the leakage control element is optionally linear. This means that the leakage current is proportional to the control voltage applied to this element, e.g. via its gate.
  • the leakage control element can be non-linear.
  • the sensitivity of the photodetector during a frame may rise non-linearly or linearly. In both cases, the change of sensitivity during a given frame is monotonic, i.e. either monotonically increasing or monotonically decreasing.
  • FIG. 7 shows another example embodiment of a modulation element.
  • the drawing shows a circuit layout of a pixel architecture similar to that of Figure 4.
  • the 4T pixel cell is that of Figure 4.
  • the modulation element ME is modified.
  • the pixel Px is connected to the modulation element ME which comprises the leakage control element LC but not a second reset transistor Tr2 or a second floating diffusion Fd2.
  • the leakage control element LC is connected to the supply rail Vdd via a voltage source VS to apply a control voltage Vleak.
  • the voltage between the leakage control element and the supply rail (VDD) can be varied, e.g. linearly or non-linearly, during acquisition of a frame.
  • the leakage control element may be implemented as a MOSFET, for instance.
  • MOSFET metal-oxide-semiconductor field-effect transistor
  • if the control voltage is varied linearly in time, the leakage current Ileak is varied linearly in time as well.
  • Figure 8 shows another example embodiment of a modulation element.
  • the transfer gate Tg of the transfer transistor Tt of the 4T pixel cell is operated as a leakage control element in part of the acquisition of a frame.
  • the transfer gate can be kept slightly opened, thus introducing a time-dependent leakage path to the first floating diffusion Fd1, which is reset by the first reset transistor Tr1 before the actual charge transfer.
  • during the remaining part of the acquisition, the transfer gate Tg is operated conventionally as in a 4T pixel cell, namely it isolates the photodetector from the first floating diffusion Fd1 so as not to alter the conversion gain, e.g. of a read-out circuit.
  • Figure 9 shows an example embodiment of a detection method, e.g. a LIDAR detection method.
  • the light source, e.g. a VCSEL, an LED, another type of laser or a flash lamp, emits an unmodulated light pulse.
  • the light pulse is characterized in that it is synchronized with the photodetector. After being emitted, the light pulse travels until it is reflected or scattered at one or more objects of the scenery. The reflected or scattered light returns to the imaging system, where the detector array eventually detects the returning light.
  • the backward travelling path is shown in Figure 10.
  • Figure 10 shows an example embodiment of a detection method, e.g. a LIDAR detection method.
  • This drawing shows the backward path of the LIDAR system.
  • a light pulse emitted by the light emitter travels until it is reflected or scattered at one or more objects of the scenery.
  • the reflected or scattered light returns to the imaging system where the detector array eventually detects the returning light.
  • the reflected or scattered light pulse is modulated within a photodetector of the detector array during detection.
  • the modulation function is integrated in the pixel.
  • modulation affects the sensitivity or responsivity of a pixel. For example, the sensitivity during a frame is increased from low to high.
  • the modulated sensitivity in the drawing starts at low and increases to high sensitivity.
  • the imaging system, due to its arrangement, e.g. in a compact sensor package which may also include dedicated optics, allows for observing a complete field of view (FOV) at once; this is called a Flash system. Flash typically works well for short to mid-range (0-100 m), and by capturing a complete scene at once, several objects and objects with high relative speeds can also be detected properly.
  • the synchronization circuit controls a delay between emission of light and a time frame for detection, e.g. the delay between emission of the pulses and a time frame for detection. The delay between an end of the pulse and a beginning of detection may be set, e.g. depending on a distance or distance range of interest.
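One way such a delay could be derived is from the round-trip time of flight to the nearest distance of interest. A minimal sketch follows; the 15 m value and the helper name are illustrative assumptions:

```python
# Sketch: delay between the end of the emitted pulse and the start of the
# detection window, chosen so detection begins when light from the nearest
# distance of interest can first return.

C = 3e8  # speed of light in m/s

def detection_delay(min_distance_m: float) -> float:
    """Round-trip time of flight to the minimum distance of interest."""
    return 2.0 * min_distance_m / C

print(detection_delay(15.0))  # delay in seconds
```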
  • the detection may involve this example sequence of operation:
  • Figure 11 shows an example timing diagram of the light source.
  • the drawing shows leakage voltage, leakage current and sensitivity as functions of time, respectively.
  • first, a steady-state image or "constant image" is acquired in a frame A; the purpose is to acquire a steady-state or dc image.
  • Such image may be grayscale or color depending on the application.
  • the "modulated image” is acquired.
  • a voltage e.g. the leakage voltage Vleak
  • Vleak the leakage control element LC
  • sensitivity of the pixels is modulated, for example
  • the distance image is then composed by dividing the modulated image from frame B by the non-modulated image from frame A, or by performing another mathematical calculation.
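The pixel-wise division described above can be sketched as follows. The function name, the divide-by-zero guard and the tiny example images are illustrative assumptions, not part of the patent text:

```python
# Sketch: compose the distance image by dividing the modulated image
# (frame B) pixel-wise by the non-modulated image (frame A). Pixels with no
# usable signal in frame A are marked with None.

def distance_image(frame_b, frame_a, eps=1e-12):
    """Pixel-wise ratio B/A; None marks pixels without usable signal."""
    out = []
    for row_b, row_a in zip(frame_b, frame_a):
        out.append([b / a if a > eps else None
                    for b, a in zip(row_b, row_a)])
    return out

B = [[0.2, 0.6], [0.9, 0.0]]  # modulated frame (illustrative values)
A = [[1.0, 1.0], [1.0, 0.0]]  # non-modulated frame (illustrative values)
print(distance_image(B, A))
```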
  • the "LIDAR image” items being further away in the scene show a different intensity than close objects.
  • the light source generates the light pulse with a duration that is smaller than a duration of frame A and a duration of frame B.
  • the duration of the light pulse may be less than 2 ns or less than 1 ns or less than 0.5 ns.
  • the duration of the frame B may be higher than 1 ms or higher than 5 ms or higher than 10 ms or higher than 20 ms.
  • the photodetector measures (e.g. integrates) during the frame B.
  • the photodetector may detect light during the complete duration of the frame B or during most of the duration of the frame B.
  • the light source generates the light pulse at the start of frame B or shortly after the start of frame B.
  • the object reflects the light pulse.
  • a point of time at which the reflected light pulse reaches the photodetector depends on the distance of the object from the photodetector.
  • the photodetector has a sensitivity that increases with time. In case of a short distance, the photodetector has a low sensitivity when the reflected light pulse hits the photodetector. In case of a long distance, the photodetector has a high sensitivity when the reflected light pulse hits the photodetector.
  • a value of the signal generated by the photodetector at the end of frame B depends on the amount of light in the reflected light pulse and the point of time at which the reflected light pulse arrives at the photodetector.
  • during frame A, the photodetector has a constant sensitivity.
  • a value of the signal generated by the photodetector at the end of frame A depends on the amount of light in the reflected light pulse and is independent from the point of time at which the reflected light pulse arrives at the photodetector.
  • the distance of the object from the photodetector can be calculated by using the value of the signal generated by the photodetector at the end of frame B and the value of the signal generated by the photodetector at the end of frame A or A'.
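Under the simplifying assumption of a perfectly linear sensitivity ramp during frame B and constant sensitivity during frame A, the calculation reduces to a ratio. This is a sketch, not the patent's definitive method; the frame duration, function name and 45 m example are illustrative:

```python
# Sketch: frame A (constant sensitivity) gives the pulse amplitude; frame B
# (linearly rising sensitivity) weights the pulse by its arrival time, so
# the ratio of the two signals encodes the time of flight.

C = 3e8  # speed of light in m/s

def distance_from_signals(signal_b: float, signal_a: float,
                          frame_s: float) -> float:
    """With a linear ramp, signal_b = signal_a * (t_arrival / frame_s),
    so t_arrival = frame_s * signal_b / signal_a and the distance is
    C * t_arrival / 2 (round trip)."""
    t_arrival = frame_s * signal_b / signal_a
    return C * t_arrival / 2.0

# Round-trip consistency check for an object at 45 m and a 1 us frame B:
t = 2.0 * 45.0 / C
sig_a = 1.0
sig_b = sig_a * t / 1e-6
print(distance_from_signals(sig_b, sig_a, 1e-6))
```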
  • the transfer gate Tg of the transfer transistor Tt of the 4T pixel cell is operated as a leakage control element in part of the acquisition of a frame.
  • the leakage voltage corresponds to a control voltage to be applied at the transfer gate Tg of the transfer transistor Tt.
  • light emitter, detector array and synchronization circuit may all be arranged in a same sensor package, wherein at least the detector array and synchronization circuit are integrated in the same integrated circuit.
  • Further components such as a microprocessor to execute the detection method and ADCs, etc. may also be arranged in a same sensor package and integrated into the same integrated circuit.
  • Figures 1 to 11 represent example embodiments of the improved imaging system and detection method; they do not constitute a complete list of all embodiments.
  • An actual imaging system and detection method may vary from the embodiments shown in terms of circuit parts, shape, size and materials, for example .
  • the leakage control element comprises a transistor with a relatively long gate length.
  • complex photodetectors need not be implemented in low-cost applications where size and cost are critical.
  • the applications of the imaging system may e.g. comprise
  • This imaging system can be used for LIDAR and TOF systems; rather than operating from a point-cloud perspective, it enables high-resolution LIDAR systems suitable for various applications, in particular automotive, autonomous driving, robotics, drones and industrial applications.
  • the imaging system realizes a control of the VCSEL power which could be implemented in CMOS image sensor designs.
  • the detection method may be implemented by the imaging system described above.
  • the light source may be named light emitter.
  • the image sensor may be named detector array.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

In at least one embodiment, an imaging system comprises a light emitter, a detector array and a synchronization circuit. The detector array comprises pixels which have an integrated modulation function. The synchronization circuit is operable to synchronize the acquisition performed by the detector array with an emission by means of the light source.
PCT/EP2020/063943 2019-06-27 2020-05-19 Système d'imagerie et procédé de détection WO2020259929A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20725576.1A EP3990942A1 (fr) 2019-06-27 2020-05-19 Système d'imagerie et procédé de détection
CN202080056778.8A CN114599990A (zh) 2019-06-27 2020-05-19 成像系统和检测方法
US17/621,664 US20220357434A1 (en) 2019-06-27 2020-05-19 Imaging system and detection method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19182966 2019-06-27
EP19182966.2 2019-06-27

Publications (1)

Publication Number Publication Date
WO2020259929A1 true WO2020259929A1 (fr) 2020-12-30

Family

ID=67211503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/063943 WO2020259929A1 (fr) 2019-06-27 2020-05-19 Système d'imagerie et procédé de détection

Country Status (4)

Country Link
US (1) US20220357434A1 (fr)
EP (1) EP3990942A1 (fr)
CN (1) CN114599990A (fr)
WO (1) WO2020259929A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114994704B (zh) * 2022-08-04 2022-12-27 中国科学院西安光学精密机械研究所 基于圆周扫描路径的非视域成像方法、系统及存储介质
DE102023200418A1 (de) 2023-01-20 2024-07-25 Robert Bosch Gesellschaft mit beschränkter Haftung Lidar-Sensor und Umfelderfassungssystem mit einem solchen Lidar-Sensor

Citations (5)

Publication number Priority date Publication date Assignee Title
US20030076484A1 (en) * 2000-11-09 2003-04-24 Canesta, Inc. Systems for CMOS-compatible three-dimensional image sensing using quantum efficiency modulation
US20040118624A1 (en) * 2002-12-20 2004-06-24 Motorola, Inc. CMOS camera with integral laser ranging and velocity measurement
US20170248796A1 (en) * 2016-02-29 2017-08-31 Tetravue, Inc. 3d imaging system and method
EP3261130A1 (fr) 2016-06-20 2017-12-27 ams AG Dispositif photodétecteur à polariseur de grille à fort contraste intégré
US20180209846A1 (en) * 2017-01-25 2018-07-26 Apple Inc. SPAD Detector Having Modulated Sensitivity

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN101718585B (zh) * 2009-11-13 2011-11-02 武汉大学 提高阵列光电检测器探测灵敏度的阿达玛变换调制方法
CN102175321B (zh) * 2011-01-31 2012-08-29 重庆大学 基于光栅平动式光调制器的多目标成像光谱仪
CN104931974A (zh) * 2015-06-15 2015-09-23 中国科学院上海光学精密机械研究所 基于光源调制解调的icmos高速三维成像激光雷达
CN107219199A (zh) * 2017-07-24 2017-09-29 深圳大学 基于4f系统的新型角度调制spr成像系统

Also Published As

Publication number Publication date
EP3990942A1 (fr) 2022-05-04
US20220357434A1 (en) 2022-11-10
CN114599990A (zh) 2022-06-07

Similar Documents

Publication Publication Date Title
US11467286B2 (en) Methods and systems for high-resolution long-range flash lidar
KR102494430B1 (ko) 물체까지의 거리를 결정하기 위한 시스템 및 방법
EP3625589B1 (fr) Système et procédé pour mesurer une distance par rapport à un objet
EP3365700B1 (fr) Système et procédé pour mesurer une distance par rapport à un objet
Möller et al. Robust 3D measurement with PMD sensors
KR102451010B1 (ko) 물체까지의 거리를 결정하기 위한 시스템
US10302492B2 (en) Optoelectronic sensor device and method to operate an optoelectronic sensor device
US7947939B2 (en) Detection of optical radiation using a photodiode structure
US11604259B2 (en) Scanning LIDAR receiver with a silicon photomultiplier detector
US8552379B2 (en) Radiation sensor
US20120132809A1 (en) Radiation sensor
US10852400B2 (en) System for determining a distance to an object
EP3550329A1 (fr) Système et procédé pour mesurer une distance par rapport à un objet
US11531094B2 (en) Method and system to determine distance using time of flight measurement comprising a control circuitry identifying which row of photosensitive image region has the captured image illumination stripe
US20220357434A1 (en) Imaging system and detection method
US9497440B2 (en) Burst-mode time-of-flight imaging
JP7133523B2 (ja) 光検出装置及び電子装置
US20220357452A1 (en) Imaging system and detection method
US20230408699A1 (en) Time-of-flight image sensor with quantom dot photodetectors
WO2021144340A1 (fr) Appareil et procédé de détection d'absorption à deux photons
Tisa et al. A 32× 32 photon counting camera: Combining spatial, timing, and single‐photon resolution
Bellisai et al. 1024 pixels single photon imaging array for 3D ranging
Hergert et al., "Novel Detectors: SPADs offer possible photodetection solution for ToF lidar applications," Jan. 28, 2020. Will single-photon avalanche photodiodes become the standard in time-of-flight lidar photodetection?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20725576

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2020725576

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2020725576

Country of ref document: EP

Effective date: 20220127