WO2021052859A1 - Sensor unit for a LiDAR system - Google Patents

Sensor unit for a LiDAR system

Info

Publication number
WO2021052859A1
WO2021052859A1 · PCT/EP2020/075345 · EP2020075345W
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
optical sensor
optical
active
units
Prior art date
Application number
PCT/EP2020/075345
Other languages
German (de)
English (en)
Inventor
Frederik ANTE
Steffen Fritz
Original Assignee
Robert Bosch Gmbh
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of WO2021052859A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements
    • G01S7/4816Constructional features, e.g. arrangements of optical elements of receivers alone
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4861Circuits for detection, sampling, integration or read-out
    • G01S7/4863Detector arrays, e.g. charge-transfer gates

Definitions

  • the present invention relates to a sensor unit for a LiDAR system.
  • LiDAR systems are a key technology for highly automated driving.
  • Various versions of LiDAR systems are currently known. This includes in particular macro scanners, galvo mirrors, MEMS mirrors or solid-state-based systems. Each of these variants has different advantages and disadvantages in terms of integration, complexity, size and field of view.
  • The time-of-flight (ToF) method is typically used as the measuring principle in LiDAR systems: a laser pulse is emitted by a transmitter, reflected off an object, and the time until its detection at a sensor unit is measured.
  • Typical detectors used are CCDs (charge-coupled devices), APDs (avalanche photodetectors) and SPADs (single-photon avalanche detectors).
  • SPAD- and CCD-based systems are susceptible to saturation of individual pixels, so that no statement can then be made about the detection of a nearby, highly reflective object.
  • So-called line flashes can be used in combination with CCD chips: a vertical laser line is emitted and the reflected photons are first resolved vertically in a pixel array on the CCD chip.
  • The temporal resolution, from which the distance to the object is derived, is achieved by rapidly shifting the charges along inactive pixels. By rotating the system, horizontal coverage of the environment can also be achieved.
  • The sensor device according to the invention for a LiDAR system comprises a sensor array with a multiplicity of optical sensor units which are arranged next to one another on a sensor surface in order to each capture one pixel when the LiDAR system is in operation.
  • Each of the sensor units comprises a first optical sensor with a first active sensor surface and a second optical sensor with a second active sensor surface, the first active sensor surface having a larger area than the second active sensor surface.
  • A single image point is thus recorded by each individual optical sensor unit.
  • A pixel describes a distance to a point in the surroundings of the LiDAR system. The distance is preferably determined from the transit time of an optical signal, i.e. the time difference between the transmission of the optical signal and its reception by the optical sensor unit.
  • A scanning point is therefore to be understood as an image point.
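The transit-time relation above can be sketched in a few lines. This is only an illustration of the ToF principle; the patent does not prescribe any particular implementation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_tof(t_emit_s: float, t_receive_s: float) -> float:
    """One-way distance from the round-trip transit time of an optical signal.

    The factor 1/2 accounts for the signal travelling to the object and back.
    """
    return C * (t_receive_s - t_emit_s) / 2.0

# A round trip of 1 microsecond corresponds to an object roughly 150 m away.
print(distance_from_tof(0.0, 1e-6))
```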
  • each of the optical sensor units is set up to receive a reflected optical signal from the surroundings of the LiDAR system in order to provide transit time information for the previously transmitted optical signal.
  • this does not exclude that different points in the vicinity of the LiDAR system are scanned by a sensor unit at different times.
  • Each of the optical sensor units includes a first optical sensor and a second optical sensor. Since each of the optical sensor units is set up to detect a respective pixel, this means that both the first optical sensor and the second optical sensor are used to detect an individual pixel.
  • The first optical sensor and the second optical sensor are therefore arranged on the sensor surface in particular in such a way that they cover a common solid angle, that is, they can receive an optical signal radiating onto the sensor unit from a common direction.
  • the plurality of optical sensor units is set up, in particular, to detect incident light from different directions, preferably each of the optical sensor units is assigned to a specific direction of the different directions.
  • An active sensor surface is the surface of the respective optical sensor that reacts to the incidence of light.
  • the first active sensor surface is larger than the second active sensor surface. This means that the first optical sensor reacts more sensitively to the incidence of light than the second optical sensor. At the same time, this means that the first optical sensor typically goes into saturation sooner than the second optical sensor. It is thus made possible that in particular reflected light with low intensity can be detected by the first optical sensor and that in particular reflected light with high intensity can be received by the second optical sensor.
  • The optical sensor unit is thus adapted both to detect high-intensity light without going into saturation, which is achieved by means of the second optical sensor, and to detect low-intensity light, for which a sufficiently large active sensor surface is required, which is provided by the first optical sensor.
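The complementary behaviour of the two sensors can be illustrated with a toy saturation model. All numbers (full-well capacity, areas, fluxes) are invented for illustration, not values from the patent:

```python
FULL_WELL = 100.0  # illustrative saturation level, identical for both pixels

def pixel_signal(photon_flux: float, active_area: float) -> float:
    """Collected signal grows with the active sensor surface until the
    pixel saturates (clips at the full-well capacity)."""
    return min(photon_flux * active_area, FULL_WELL)

# Weak return from a low-reflectivity object: only the large first sensor
# collects a clearly usable signal.
weak_large = pixel_signal(5.0, active_area=4.0)
weak_small = pixel_signal(5.0, active_area=1.0)

# Strong return from a highly reflective object: the large sensor clips,
# while the small second sensor still delivers an unsaturated reading.
strong_large = pixel_signal(60.0, active_area=4.0)
strong_small = pixel_signal(60.0, active_area=1.0)
```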
  • a sensor array is in particular an arrangement of several optical sensor units which can be arranged next to one another in any desired manner.
  • The sensor surface can be a flat surface, but it can also have a curvature.
  • the optical sensor units are set up in particular to detect an optical signal from the environment of the LiDAR system at the same time or in chronological order.
  • the sensor device is preferably set up so that the first optical sensor and the second optical sensor of an optical sensor unit are active at the same time, in particular over a limited period of time.
  • The sensor array according to the invention enables reliable detection of both low-reflectivity and highly reflective objects.
  • Low-reflective objects lead to an incidence of light of low intensity on the sensor array and highly reflective objects lead to an incidence of light with high intensity on the sensor array.
  • The reflected light can thus be detected successfully: it is prevented both that the optical sensor unit fails to deliver a reliable detection because it goes into saturation, and that the reflected optical signal goes undetected because the active sensor surface is too small.
  • the sensor array is preferably a CCD chip.
  • the sensor array is a SPAD chip.
  • Such sensors are particularly susceptible to saturation of individual sensor units. It is therefore particularly advantageous in the case of such sensors if the second optical sensor provides an active sensor surface through which saturation of the second optical sensor and thus of the optical sensor unit can be avoided.
  • The first active sensor surface is arranged directly adjacent to the second active sensor surface.
  • This allows further optical means to be provided for each image point which can be used both by the first optical sensor and by the second optical sensor.
  • For example, a lens can be used jointly by the first optical sensor and the second optical sensor.
  • the optical sensor units are arranged in a row, with a first optical sensor following a second optical sensor alternately in the row.
  • Such an arrangement is particularly advantageous because measured values provided by the first optical sensor and the second optical sensor can be tapped particularly easily, since supply lines do not have to cross the other of the optical sensors.
  • a particularly compact sensor device can thus be created.
  • The first optical sensor and the second optical sensor, in particular their active sensor surfaces, preferably have the same width; that is, they extend equally far in the direction on the sensor surface that is perpendicular to the direction along which the row extends.
  • the sensor units are arranged in a row, the first optical sensors being arranged in a first row and the second optical sensors being arranged in a second row, the first row being parallel to the second row.
  • In this way, a distance between the optical sensor units can be minimized and thus a particularly high angular resolution achieved, since the optical sensor units can be arranged particularly close to one another along the direction in which the first row and the second row extend.
  • The first optical sensor and the second optical sensor, in particular their active sensor surfaces, preferably have the same height; that is, they extend equally far in the direction on the sensor surface along which the row extends.
  • Each of the optical sensor units comprises a first storage element which is associated with the first optical sensor and is set up to store a first charge output by the first optical sensor, or a value representing this first charge, and a second storage element which is associated with the second optical sensor and is set up to store a second charge output by the second optical sensor, or a value representing this second charge.
  • the first storage element and the second storage element are in particular each a capacitor or a register. For example, the charge emitted by the respective optical sensor is stored in the capacitor or the capacitor is discharged by the optical sensor when an optical signal is received by the latter. Alternatively, the optical signal received by the optical sensors is converted into a digital value and stored in the register.
  • the storage elements are preferably also arranged on the sensor surface.
  • the brightness values detected by the optical sensor units and thus by the first and second optical sensors can be quickly stored by the nearby storage elements, whereby a high scanning frequency of the sensor array can be achieved.
  • each of the optical sensor units is set up to transfer a value stored in the first memory element to a first adjacent memory element and to transfer a value stored in the second memory element to a second adjacent memory element.
  • measured values can be recorded by each of the sensor units in a particularly rapid chronological sequence.
  • a particularly high sampling frequency of the sensor units and thus of the sensor array can thus be achieved.
  • the first adjacent memory element and the second adjacent memory element are preferably the memory elements of an adjacent optical sensor unit. The optical sensor units can thus pass on their measured values via the storage elements of adjacent sensor units, which simplifies reading out the individual sensor units, since no separate contact is required for each of the optical sensor units.
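The pass-along readout described above behaves like a bucket brigade: each unit stores one value per optical sensor and hands both values on to its neighbour on every clock. A minimal sketch, in which the class name and field names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SensorUnitMemory:
    """Hypothetical model of one optical sensor unit's two storage elements."""
    first: float = 0.0   # value from the large first optical sensor
    second: float = 0.0  # value from the small second optical sensor

def shift_once(units: list) -> None:
    """Move every stored value into the adjacent unit's storage elements,
    freeing each unit to record the next sample."""
    for i in range(len(units) - 1, 0, -1):
        units[i].first = units[i - 1].first
        units[i].second = units[i - 1].second
    units[0].first = units[0].second = 0.0

units = [SensorUnitMemory(12.0, 3.0), SensorUnitMemory(), SensorUnitMemory()]
shift_once(units)
# The measured pair has moved one position down the chain:
# units[1] now holds the values, units[0] is empty again.
```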
  • the sensor device comprises an evaluation unit which is set up to determine for each of the optical sensor units based on a first measured value provided by the first optical sensor and based on a second measured value provided by the second optical sensor whether a reflected optical signal was received.
  • the reflected optical signal is in particular a signal that was previously sent by an associated LiDAR system and was reflected in an environment of the LiDAR system. Signals from the first optical sensor and signals from the second optical sensor are thus used in combination in order to determine whether a reflected optical signal has been received.
  • each of the sensor units in combination with the evaluation unit can determine whether an optical signal, which was reflected either on a low-reflective or a high-reflective object, was thrown back to the sensor array.
  • The evaluation unit detects that a reflected optical signal has been received when the first measured value and/or the second measured value lies in an associated predetermined value interval.
  • The value intervals can be variable, for example dependent on the ambient brightness, or dependent on the first or second measured value. In particular, the second measured value can be used if the first measured value indicates saturation of the first optical sensor. Alternatively or additionally, the first measured value can be used if the second measured value indicates that no optical signal was received by the second optical sensor.
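One plausible reading of this combination rule can be sketched as follows. The threshold names and values (`SATURATION`, `NOISE_FLOOR`) are illustrative assumptions, not values from the patent:

```python
SATURATION = 100.0   # reading at or above this indicates a saturated sensor
NOISE_FLOOR = 3.0    # readings below this are treated as "no signal"

def signal_detected(first: float, second: float) -> bool:
    """Combine both sensors of one unit into a single detection decision."""
    if first >= SATURATION:
        # Large sensor saturated (e.g. nearby, highly reflective object):
        # fall back to the small second sensor.
        return NOISE_FLOOR <= second < SATURATION
    if second < NOISE_FLOOR:
        # Nothing registered on the small sensor: a weak return may still
        # show up on the more sensitive large sensor.
        return first >= NOISE_FLOOR
    return True  # both readings lie in a plausible, unsaturated interval
```

For instance, a saturated first sensor combined with a mid-range second reading still counts as a detection, while two readings below the noise floor do not.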
  • a LiDAR system which comprises the sensor unit according to the invention has all the advantages of the sensor unit according to the invention.
  • Figure 1 shows a sensor device according to a first embodiment of the invention
  • Figure 2 shows a sensor device according to a second embodiment of the invention
  • FIG. 3 shows a schematic illustration of a sensor device which comprises a first and a second memory element
  • Figure 4 shows a sensor device according to a third embodiment of the invention
  • Figure 1 shows a sensor device 1 according to a first embodiment of the invention.
  • the sensor device 1 comprises a sensor array 2, which comprises a multiplicity of optical sensor units 3a, 3b, 3c, 3d, which are arranged next to one another on a sensor surface in order to capture one pixel in each case when the LiDAR system is in operation.
  • The sensor surface extends over the image plane shown in Figure 1.
  • The sensor surface of the sensor array 2 is, for example, a substrate. The sensor surface can be a flat surface or also have a curvature.
  • the sensor device 1 is a CCD chip or a SPAD chip.
  • the sensor array 2 shown in FIG. 1 has a first optical sensor unit 3a, a second optical sensor unit 3b, a third sensor unit 3c and a fourth optical sensor unit 3d.
  • the number of optical sensor units 3a, 3b, 3c, 3d is only to be understood as an example.
  • the first to fourth optical sensor units 3a to 3d are structurally identical optical sensor units. Therefore, only the structure of the first optical sensor unit 3a of the optical sensor units 3a, 3b, 3c, 3d is described below.
  • The first optical sensor unit 3a comprises a first optical sensor 4 and a second optical sensor 5.
  • the first optical sensor 4 has a first active sensor surface 6.
  • the first active sensor surface 6 lies on the sensor surface of the sensor array 2.
  • the second optical sensor 5 is arranged next to the first optical sensor 4 and has a second active sensor surface 7.
  • the second active sensor surface 7 also lies on the sensor surface of the sensor array 2.
  • The first active sensor surface 6 is designed such that it has a larger area than the second active sensor surface 7; that is, the first active sensor surface 6 extends over a larger area of the sensor surface of the sensor array 2 than the second active sensor surface 7.
  • The first active sensor surface 6 and the second active sensor surface 7 have the same width, but the first active sensor surface 6 has a greater height than the second active sensor surface 7.
  • The first active sensor surface 6 is arranged directly adjacent to the second active sensor surface 7 on the sensor surface of the sensor array 2.
  • the width is an extension in the direction along the sensor surface of the sensor array 2 in an X direction and the height is an extension in the direction along the sensor surface of the sensor array 2 in a Y direction.
  • the second sensor unit 3b, the third sensor unit 3c and the fourth sensor unit 3d are constructed in the same way as the first optical sensor unit 3a.
  • the optical sensor units 3a, 3b, 3c, 3d are arranged in a row on the sensor surface of the sensor array 2.
  • a first optical sensor 4 and a second optical sensor 5 are arranged alternately in the row.
  • It can be seen in Figure 1 that the optical sensor units 3a, 3b, 3c, 3d are arranged in a row which extends from bottom to top.
  • a first optical sensor 4 is initially arranged, which is followed by a second optical sensor 5.
  • the first optical sensor 4 of the second optical sensor unit 3b is arranged next to the second optical sensor 5 of the first optical sensor unit 3a.
  • the result is that, starting from above, a first optical sensor 4 is always arranged next to a second optical sensor 5 and, in turn, a first optical sensor 4 of a subsequent optical sensor unit is arranged next to the second optical sensor 5.
  • Figure 2 shows a sensor device 1 according to a second embodiment of the invention.
  • The second embodiment of the invention corresponds to the first embodiment of the invention, but the first optical sensors 4 and the second optical sensors 5 are each arranged next to one another in separate rows.
  • The optical sensor units 3a, 3b, 3c, 3d are arranged in a row, the first optical sensors 4 being arranged in a first row and the second optical sensors 5 in a second row, the first row being parallel to the second row.
  • the first active sensor surface 6 of the first optical sensor 4 has the same height on the sensor surface as the second active sensor surface 7 of the second optical sensor 5.
  • the second active sensor surface 7 has a smaller width than the first active sensor surface 6 of the first optical sensor 4.
  • The first optical sensors 4 are arranged in a row which is located on the right in Figure 2, and the second optical sensors 5 are arranged in a row which is located on the left in Figure 2. It can be seen that a height of the sensor surface can be reduced in this way.
  • Each of the sensor units 3a, 3b, 3c, 3d has a first storage element 8 and a second storage element 9.
  • the first storage element 8 is a storage element which is associated with the first optical sensor 4, the first storage element 8 being set up to store a first charge output by the first optical sensor 4 or a value representing this first charge.
  • the second storage element 9 is a storage element which is associated with the second optical sensor 5, the second storage element 9 being set up to store a second charge output by the second optical sensor 5 or a value representing this second charge.
  • the first charge represents a measured value of the first optical sensor 4.
  • the second charge represents a measured value of the second optical sensor 5.
  • the first storage element 8 is preferably arranged on the sensor surface of the sensor array 2 between the first active sensor surface 6 and the second active sensor surface 7.
  • the second memory element 9 is preferably also arranged on the sensor surface, the second memory element 9 preferably being arranged on a side of the second active sensor surface 7 that is opposite the side on which the first memory element 8 is arranged.
  • the arrangement of the storage elements 8, 9 with respect to the active sensor surfaces 6, 7 is shown in FIG. 3 merely as an example. Other arrangements of the storage elements 8, 9 with respect to the active sensor surfaces 6, 7 are also advantageous. It is particularly advantageous if the active sensor surfaces 6, 7 are arranged adjacent to one another, which makes it possible for common optics to enable light to be focused on the active sensor surfaces 6, 7. Since the memory elements 8, 9 do not form an active sensor surface, it is also advantageous if the memory elements 8, 9 are not arranged on the sensor surface, but rather are arranged, for example, behind the active sensor surfaces 6, 7 on a rear side of the sensor array 2.
  • the first storage element 8 is, for example, a capacitor in which a charge provided by the first optical sensor 4 is stored, or a capacitor which is discharged by incident light on the first active sensor surface 6.
  • Alternatively, the first storage element 8 is a register, an analog-to-digital converter preferably being connected between the first optical sensor 4 and the first storage element 8 in order to store a measured value detected by the first optical sensor 4 as a digital value in the register.
  • the second storage element 9 is, for example, a capacitor in which a charge provided by the second optical sensor 5 is stored, or a capacitor which is discharged by incident light on the second active sensor surface 7.
  • the second storage element 9 is a register, an analog-to-digital converter preferably being connected between the second optical sensor 5 and the second storage element 9 in order to store a measured value acquired by the second optical sensor 5 as a digital value in the register.
  • each of the optical sensor units 3a, 3b, 3c, 3d is preferably set up to transfer a value stored in the first storage element 8 or a charge stored in the first storage element 8 to a first adjacent storage element 10.
  • each of the optical sensor units 3a, 3b, 3c, 3d is preferably set up to transfer a value stored in the second storage element 9 or a charge stored in the second storage element 9 to a second adjacent storage element 11.
  • the first adjacent storage element 10 and the second adjacent storage element 11 are preferably arranged adjacent to the first storage element 8 and the second storage element 9. In this way, the first memory element 8 and the second memory element 9 can be read out in a rapid manner and the optical sensor units 3a to 3d are ready to receive a further optical signal.
  • FIG. 4 shows a sensor device 1 according to a third embodiment of the invention.
  • the sensor device 1 comprises the optical sensor units 3a, 3b, 3c, 3d, as they are also described in the previously described embodiments.
  • the optical sensor units 3a, 3b, 3c, 3d are arranged as a group of active image points 12 on the sensor surface and form one of several columns of a sensor matrix.
  • a group of sliding pixels 20 are arranged on the sensor surface of the sensor array 2.
  • the shift pixels 20 are either active optical sensor units which are constructed in accordance with the first to fourth optical sensor units 3a to 3d or are passive optical sensor units which do not detect an optical signal.
  • a sensor matrix is formed by the optical sensor units 3a, 3b, 3c, 3d and the shift pixels 20.
  • each of the optical sensor units 3a, 3b, 3c, 3d has associated with it a multiplicity of shift pixels which, in particular, lie in the same row of the sensor matrix.
  • a first shift pixel 21, a second shift pixel 22, a third shift pixel 23 and a fourth shift pixel 24 are associated with the first optical sensor unit 3a.
  • the sliding pixels 21 to 24 belonging to the first optical sensor unit 3a are arranged in a row with the first optical sensor unit 3a.
  • Each of the optical sensor units 3a, 3b, 3c, 3d and each of the shift pixels 20 comprises a first memory element 8 and a second memory element 9, for example in accordance with the embodiment shown in Figure 3. If an optical signal is detected by the first optical sensor unit 3a, it is output by the first optical sensor 4 as a charge or a digital value and stored in the first memory element 8 of the first optical sensor unit 3a. The charge or digital value stored in the first memory element 8 is referred to below as the first measured value. If an optical signal is detected by the first optical sensor unit 3a, it is likewise output by the second optical sensor 5 as a charge or a digital value and stored in the second memory element 9 of the first optical sensor unit 3a. The charge or digital value stored in the second memory element 9 is referred to below as the second measured value.
  • the first measured value is shifted from the first memory element 8 of the first optical sensor unit 3a into a first adjacent memory element 10, which is a first memory element of the first shift pixel 21.
  • the second measured value stored in the second memory element 9 of the first optical sensor unit 3a is transferred to a second, adjacent memory element 11, which is the second memory element 9 of the first shift pixel 21 is.
  • The measured values recorded by the optical sensor units 3a to 3d are shifted step by step through the shift pixels 20 to a digitization unit 30.
  • The first measured value stored in the first memory element 8 of the first optical sensor unit 3a is first shifted into the first memory element of the first shift pixel 21, from there into the first memory element of the second shift pixel 22, and so on, until it is pushed into the digitization unit 30.
  • If the first measured value has already been converted into a digital value beforehand, it can merely be buffered in the digitization unit 30, or the digitization unit 30 can be dispensed with.
  • The second measured value, which was stored in the second memory element 9 of the first optical sensor unit 3a, is first shifted into the second memory element 9 of the first shift pixel 21, from there into the second memory element of the second shift pixel 22, and so on.
  • If the second measured value has already been converted into a digital value beforehand, it can merely be buffered in the digitization unit 30, or the digitization unit 30 can be dispensed with.
  • The measured values detected by the first optical sensor unit 3a are converted into digital values by a first analog-digital converter 31 and a second analog-digital converter 32 and transferred to a multiplexer 40. From there, the measured values are preferably transmitted to an evaluation unit 50. In a corresponding manner, the measured values recorded by the second to fourth optical sensor units 3b, 3c, 3d are transmitted via the shift pixels associated with these optical sensor units to corresponding analog-digital converters and from there likewise to the multiplexer 40. The measured values of the second to fourth optical sensor units 3b, 3c, 3d are thus preferably also transferred to the evaluation unit 50.
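The stepwise transport through the shift pixels toward the digitization stage behaves like a first-in-first-out queue: each sample pair is delayed by the length of the shift-pixel chain but arrives in order. A sketch of this behaviour, in which the function name and the tuple representation of the two measured values are assumptions for illustration:

```python
from collections import deque

def read_out_column(samples, n_shift_pixels):
    """Push each (first, second) measured-value pair of the active unit into
    a chain of shift-pixel memories; one pair leaves the far end of the
    chain per clock and is handed to the ADC / multiplexer stage."""
    chain = deque([(0.0, 0.0)] * n_shift_pixels)  # empty shift-pixel memories
    digitized = []
    for pair in list(samples) + [(0.0, 0.0)] * n_shift_pixels:
        chain.appendleft(pair)         # new sample enters the chain
        digitized.append(chain.pop())  # oldest content reaches the ADC
    # Discard the empty placeholder pairs that flushed the chain.
    return [p for p in digitized if p != (0.0, 0.0)]

# Two samples taken in succession by the active unit arrive at the ADC in
# order, delayed by the four shift pixels between unit and digitization.
out = read_out_column([(12.0, 3.0), (80.0, 20.0)], n_shift_pixels=4)
```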
  • the evaluation unit 50 is a component of the CCD chip or a separate structural component.
  • the shift pixels 20 can also be active pixels. In this way, the number of optical sensor units on the sensor surface of the sensor array 2 can be increased. By shifting the measured values from memory element to memory element, it is possible for the optical sensor units 3a to 3d and the shift pixels 20 to be read out. It should be noted that the number of optical sensor units 3a to 3d and also the number of shift pixels 20 in FIG. 4 are selected merely as examples.
  • the evaluation unit 50 therefore has at least the first and second measured values recorded by the optical sensor units 3a to 3d.
  • The evaluation unit 50 is set up to determine for each of the sensor units 3a, 3b, 3c, 3d, based on the first measured value provided by the first optical sensor 4 and the second measured value provided by the second optical sensor 5, whether a reflected optical signal has been received.
  • a reflected optical signal has been received when the first measured value and / or the second measured value lies in a respectively associated predetermined value interval.
  • A reflected optical signal was received when the first measured value indicates saturation of the first optical sensor 4 and the second measured value indicates that an optical signal was received by the second optical sensor 5, but the second optical sensor 5 is not in saturation.
  • A reflected optical signal has been received when the first measured value indicates that an optical signal was received by the first optical sensor 4 without the first optical sensor 4 being in saturation, and the second measured value indicates that no optical signal was received by the second optical sensor 5.
  • a CCD chip is thus designed in such a way that it enables reliable detection, in particular for LiDAR applications in the field of automated driving, for both low-reflective and high-reflective objects.
  • A large pixel array provides high sensitivity; a small pixel array provides low sensitivity.
  • the large pixel array is formed by the first optical sensors 4 and the small pixel array is formed by the second optical sensors 5.
  • the small pixel areas have a lower sensitivity to the incident photons, which is why a significantly higher photon density is required for the saturation of these pixels.
  • One pixel corresponds to one image point. This means that the small pixels are less susceptible to saturation by highly reflective objects.
  • It is advantageous here that the aperture of the receiving optics is optimized for the large pixel arrays.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention relates to a sensor device (1) for a LiDAR system, comprising a sensor array (2) with a plurality of optical sensor units (3a, 3b, 3c, 3d) which are arranged next to one another on a sensor surface in order to capture a respective pixel during operation of the LiDAR system, each of the sensor units (3a, 3b, 3c, 3d) comprising a first optical sensor (4) and a second optical sensor (5), the first optical sensor (4) having a first active sensor surface (6) which converts a received optical signal into a first measurement signal, and the second optical sensor (5) having a second active sensor surface (7) which converts a received optical signal into a second measurement signal, the first active sensor surface (6) having a larger area than the second active sensor surface (7).
PCT/EP2020/075345 2019-09-18 2020-09-10 Unité de capteur pour un système lidar WO2021052859A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019214211.7 2019-09-18
DE102019214211.7A DE102019214211A1 (de) 2019-09-18 2019-09-18 Sensor unit for a LiDAR system

Publications (1)

Publication Number Publication Date
WO2021052859A1 true WO2021052859A1 (fr) 2021-03-25

Family

ID=72473557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/075345 WO2021052859A1 (fr) 2019-09-18 2020-09-10 Unité de capteur pour un système lidar

Country Status (2)

Country Link
DE (1) DE102019214211A1 (fr)
WO (1) WO2021052859A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006015906A1 * 2004-08-04 2006-02-16 Siemens Aktiengesellschaft Optical module for a driver assistance system that monitors the exterior space ahead of a motor vehicle in the direction of travel
DE102011005740A1 * 2011-03-17 2012-09-20 Robert Bosch Gmbh Measuring device for measuring a distance between the measuring device and a target object by means of optical measuring radiation
DE102016013861A1 * 2016-11-22 2017-05-18 Daimler Ag Optical sensor for a motor vehicle, and motor vehicle

Also Published As

Publication number Publication date
DE102019214211A1 (de) 2021-03-18

Similar Documents

Publication Publication Date Title
EP3611535B1 Light detection using a plurality of avalanche photodiode elements
EP3537180B1 Receiver device for receiving light pulses, LiDAR module, and method for receiving light pulses
EP3185038B1 Optoelectronic sensor and method for measuring a distance
DE102017101501B3 Optoelectronic sensor and method for determining the distance of an object in a monitored area
EP3279685B2 Optoelectronic sensor and method for detecting an object
EP3450915B1 Electronic tacheometer or theodolite with scanning function and adjustable receiving ranges of the receiver
DE212018000118U1 LiDAR readout circuit
EP3633405B1 Measuring apparatus for geometric 3D scanning of an environment with a plurality of transmission channels and semiconductor photomultiplier sensors
DE112005003698B4 Detection of optical radiation
DE202018002044U1 Time-of-flight depth mapping with parallax compensation
WO2019121437A1 Multi-pulse LiDAR system for multi-dimensional detection of objects
EP3557286B1 Optoelectronic sensor and method for detecting and determining the distance of an object
EP2708913A1 Optoelectronic sensor and method for detecting an object
EP3557284A2 Optoelectronic sensor and method for determining distance
EP3451021A1 Measuring apparatus with scanning function and adjustable receiving ranges of the receiver
DE102006029025A1 Apparatus and method for determining distance
EP1860462A1 Distance measuring method and distance measuring device for recording the spatial dimensions of a target
DE102010043768B3 Time-of-flight camera
DE202013105389U1 Optoelectronic sensor with avalanche photodiode elements operated in Geiger mode
EP3861374A1 Imaging sensor
WO2020114740A1 LiDAR system and motor vehicle
DE102017222974A1 Arrangement and method for determining the distance of at least one object using light signals
WO2021052859A1 Sensor unit for a LiDAR system
DE102018126631A1 Method for determining the distance of an object using an optical detection device, and optical detection device
EP3650888B1 Optoelectronic detector and method for detecting and determining the distance of objects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20771536

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20771536

Country of ref document: EP

Kind code of ref document: A1