WO2023247475A1 - Method for operating a flash LiDAR system for a vehicle, flash LiDAR system and vehicle - Google Patents

Method for operating a flash LiDAR system for a vehicle, flash LiDAR system and vehicle

Info

Publication number
WO2023247475A1
WO2023247475A1 (PCT/EP2023/066536)
Authority
WO
WIPO (PCT)
Prior art keywords
reception
phase
lidar system
variables
matrix
Prior art date
Application number
PCT/EP2023/066536
Other languages
German (de)
English (en)
Inventor
Hansjoerg Schmidt
Johannes Michael
Christoph Parl
Thorsten BEUTH
Gerhard Schunk
Original Assignee
Valeo Detection Systems GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valeo Detection Systems GmbH filed Critical Valeo Detection Systems GmbH
Publication of WO2023247475A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491 Details of non-pulse systems
    • G01S7/4912 Receivers
    • G01S7/4918 Controlling received signal intensity, gain or exposure of sensor
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/32 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491 Details of non-pulse systems
    • G01S7/493 Extracting wanted echo signals

Definitions

  • the invention also relates to a vehicle with at least one flash LiDAR system which is designed to carry out an indirect time-of-flight method.
  • Performing operations by a circuit of the LIDAR device including:
  • the invention is based on the object of designing a method, a flash LiDAR system and a vehicle of the type mentioned at the outset, in which any scattered light effects on the reception areas can be corrected better, in particular more efficiently and/or more precisely.
  • any scattered light effects on reception areas are corrected based on the determined phase variables.
  • the light coming from the at least one monitoring area can contain at least one received light signal originating from at least one transmission signal reflected into the at least one monitoring area.
  • the position of the object in the at least one monitoring area can be determined via the position of the reception areas hit by the received light signal within the reception matrix. With the reception matrix, a spatially resolved LiDAR measurement is possible.
  • a respective phase variable is determined for at least some of the reception areas.
  • the light that hits the relevant reception area is converted into the corresponding phase variable during at least two recording time ranges.
  • the at least two recording time ranges are started out of phase with respect to a modulation period of the transmitted light signals.
  • Two phase variables, namely one phase variable for each of the at least two recording time ranges, are determined for each of the reception areas used.
  • for each reception area used, the distance of the detected object from which the received light signals hitting that reception area originate can be determined.
  • measurements with a Flash LiDAR system use correspondingly long integration times when detecting the light, in particular the reflected received light signals, with the receiving areas.
  • the integration times can be achieved using appropriate shutter speeds.
  • the use of flash LiDAR systems with long integration times, in particular long shutter speeds, can lead not only to oversaturation of reception areas hit by received light signals originating from highly reflective objects, but also to corruption of the signals of reception areas in the vicinity of the reception areas hit by the received light signals from the highly reflective objects. This corruption is called blooming or glare. Blooming in the signal leads to an error in determining the distance of objects near highly reflective objects. The distance of such objects is distorted towards the distance of the highly reflective object.
  • highly reflective objects such as traffic signs with retroreflective properties, and weaker reflective objects, such as pedestrians, obstacles such as walls, or the like
  • the received light signals from highly reflective objects, in particular retroreflective objects, can be smeared across the reception matrix, in particular due to internal reflections in the optical reception path and/or due to optical crosstalk in the receiving device.
  • the Flash LiDAR system works according to an indirect time-of-flight method.
  • in an indirect time-of-flight method, a shift of the received light signal relative to the transmitted light signal, caused by the flight time of the amplitude-modulated transmitted light signal and the corresponding received light signal, can be determined. From this shift, the distance of the object on which the corresponding transmitted light signal was reflected can be determined.
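The relationship between phase shift and distance in an indirect time-of-flight measurement can be sketched as follows. The formula is the standard iToF relation (round trip of the light over one modulation period), not taken verbatim from the application, and the modulation frequency is an assumed example value:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase_shift(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Distance of the reflecting object from the phase shift of the
    received light signal relative to the transmitted light signal.
    The light travels to the object and back, hence the factor 4*pi."""
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

# Example: 10 MHz amplitude modulation (assumed value), half a
# modulation period (pi) of phase shift.
d = distance_from_phase_shift(math.pi, 10e6)
```

With these assumed numbers the unambiguous range is C / (2 f_mod), and a shift of half a modulation period corresponds to half of that range.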
  • the Flash LiDAR system can be used to detect stationary or moving objects, in particular vehicles, people, animals, plants, obstacles, uneven roads, especially potholes or stones, road boundaries, traffic signs, open spaces, especially parking spaces, precipitation or the like, and/or movements and/or gestures.
  • at least one short measurement and at least one long measurement of the same scene in the at least one monitoring area can be carried out with at least part of the reception areas. The at least one short measurement can be carried out with a short integration period, during which the light coming from the at least one monitoring area, in particular at least one received light signal originating from a reflected transmitted light signal, can be received with the reception areas in at least two different recording time ranges and converted into corresponding short phase variables. The at least one long measurement can be carried out with a long integration period that is greater than the short integration period, during which the light coming from the at least one monitoring area, in particular at least one received light signal originating from a reflected transmitted light signal, can be received in the at least two different recording time ranges and converted into corresponding long phase variables. For at least a part of the short phase variables, a respective correction phase variable, which can be assigned to the reception area and the recording time range, can be generated by means of at least one mathematical convolution with the at least one correction function.
  • the correction phase variables are determined by mathematical convolution of the respective short phase variables with the at least one correction function.
  • Phase shifts compared to a reference event of the at least one transmitted light signal enable a more precise assignment of the at least one received light signal to the at least one transmitted light signal.
  • An intensity minimum of the at least one transmitted light signal and a zero crossing of an electrical transmitted signal can be easily defined.
  • scattered light effects of transmitted light signals that are reflected on highly reflective objects, in particular retroreflective objects, in the at least one monitoring area can be corrected.
  • the determination of distances of objects whose received light signals hit reception areas adjacent to the reception areas hit by strong received light signals from highly reflective objects, in particular retroreflective objects, can be improved. Without the correction according to the invention, the scattered light effects caused by reflections on highly reflective, in particular retroreflective, objects incorrectly shift the distances of less strongly reflective objects towards the distances of the highly reflective objects.
  • the short integration period can be set so that only the most highly reflective objects, in particular retroreflective objects, in the scene lead to a short phase variable that can be distinguished from noise in at least one of the recording time ranges. In this way, those reception areas that are affected by scattered light effects due to strong received light signals can be identified, and the corresponding phase variables can be corrected.
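The selection of affected reception areas via the short measurement can be sketched as a simple threshold test; the strip size, phase values and noise threshold below are made-up illustration data, not values from the application:

```python
# Short phase variables of a 1 x 5 strip of reception areas (made-up
# values); only the retroreflective target at index 2 rises clearly
# above the noise floor of the short measurement.
short_phase = [0.01, 0.02, 9.50, 0.03, 0.01]
NOISE_THRESHOLD = 0.10  # assumed noise floor

# Reception areas whose short phase variable is distinguishable from
# noise, i.e. candidates for causing scattered light effects.
affected = [i for i, a in enumerate(short_phase) if a > NOISE_THRESHOLD]
```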
  • the at least one correction function can be specified for detecting retroreflective road signs or road markings.
  • the Flash LiDAR system can be set for use in road traffic, in particular for use in a vehicle.
  • a respective distance of an object detected in the at least one monitoring area can be determined for at least one reception area from at least some of the final phase variables belonging to this reception area. Additionally or alternatively, for at least one reception area which is hit by received light signals from an object, directional information characterizing the direction of the reflecting object relative to the flash LiDAR system can be determined from the position of the reception area within the reception matrix. In this way, distances and/or directions of detected objects relative to the Flash LiDAR system can be determined.
  • phase variables can advantageously be assigned to a corresponding one-dimensional or two-dimensional phase variable matrix. In this way, the phase variables can be more easily assigned to the corresponding reception areas.
  • at least one correction function can advantageously be implemented with a one-dimensional or a two-dimensional correction variable matrix. In this way, the correction function can be implemented according to the dimensions of the reception matrix.
  • the object is achieved in the flash LiDAR system in that the flash LiDAR system has means for carrying out the method according to the invention.
  • the Flash LiDAR system has means with which the light coming from the at least one monitoring area can be converted into respective reception variables in the form of phase variables during at least two recording time ranges, wherein at least two of the recording time ranges can be started out of phase with respect to a modulation period of the at least one transmitted light signal, and with which corrected final phase variables can be determined for at least some of the phase variables using the at least one correction function.
  • retroreflective objects such as street signs, road markings or the like
  • distances from less reflective objects such as pedestrians, other vehicles, cyclists, obstacles or the like
  • the object is achieved in the vehicle in that the vehicle has at least one flash LiDAR system according to the invention.
  • with a Flash LiDAR system, at least one monitoring area in the surroundings of the vehicle and/or in the interior of the vehicle can be monitored for objects.
  • the Flash LiDAR system can be used to determine distances to detected objects.
  • Figure 2 shows a functional representation of a part of the vehicle with the driver assistance system and the flash LiDAR system from Figure 1;
  • Figure 3 shows a front view of a reception matrix of a reception device of the Flash LiDAR system from Figures 1 and 2, the reception matrix having a plurality of reception areas;
  • Figure 4 shows a signal strength-time diagram with four exemplary phase images DCS0 to DCS3, which are determined with respective phase shifts of 90° from a received light signal of a reflected transmitted light signal of the Flash LiDAR system from Figures 1 and 2, and whose amplitudes serve as phase variables for determining distances of objects;
  • Figure 5 shows a distance image of a scene with several objects in grayscale representation, one of the objects being a retroreflective street sign, which is covered here;
  • Figure 6 is a distance image of the scene from Figure 5, where the retroreflective street sign is not covered and leads to scattered light effects;
  • Figure 7 is a distance image of the scene from Figure 5, with the scattered light effects corrected here;
  • Figure 8 is a grayscale matrix representation of a process for determining correction phase variables with a convolution of short phase variables with a correction function
  • Figure 9 is a grayscale matrix representation of a process for determining final phase variables as the difference between long phase variables and the correction phase variables from Figure 8;
  • Figure 10 shows the flow chart for the process of determining the correction phase variables from Figure 8 for the short phase variables of the phase images DCS0 and DCS1 for the reception areas of a row of the reception matrix;
  • Figure 11 shows the flow chart for the process of determining the final phase variables from Figure 9 for the long phase variables of the phase images DCS0 and DCS1 for the reception areas of a row of the reception matrix.
  • a vehicle 10 is shown as an example in the form of a passenger car in the front view.
  • Figure 2 shows a functional representation of a part of the vehicle 10.
  • the vehicle 10 has a LiDAR system 12, which is designed as a flash LiDAR system.
  • the LiDAR system 12 is, for example, arranged in the front bumper of the vehicle 10.
  • a monitoring area 14 in the direction of travel 16 in front of the vehicle 10 can be monitored for objects 18.
  • the LiDAR system 12 can also be arranged elsewhere on the vehicle 10 and aligned differently.
  • object information, for example distances D, directions and speeds of objects 18 relative to the vehicle 10 or to the LiDAR system 12, can be determined.
  • the objects 18 can be stationary or moving objects, for example other vehicles, people, animals, plants, obstacles, bumps in the road, for example potholes or stones, road boundaries, traffic signs, open spaces, for example parking spaces, precipitation or the like.
  • each object 18 is equated with a single object target.
  • An object target is a location on an object 18 at which transmitted light signals 20, which are sent from the LiDAR system 12 into the monitoring area 14, can be reflected.
  • Each object 18 typically has several such object targets.
  • the LiDAR system 12 is connected to a driver assistance system 22. With the driver assistance system 22, the vehicle 10 can be operated autonomously or semi-autonomously.
  • the LiDAR system 12 includes, for example, a transmitting device 24, a receiving device 26 and a control and evaluation device 28.
  • the control and evaluation device 28 is, for example, an electronic control and evaluation device, for example in the form of one or more processors.
  • the functions of the control and evaluation device 28 can be implemented centrally or decentrally using software and/or hardware. Parts of the functions of the control and evaluation device 28 can also be integrated in the transmitting device 24 and/or the receiving device 26.
  • Electrical transmission signals can be generated with the control and evaluation device 28.
  • the transmitting device 24 can be controlled with the electrical transmission signals so that it sends amplitude-modulated transmission light signals 20 in the form of light pulses into the monitoring area 14.
  • the transmitting device 24 has, for example, a laser as a light source.
  • the laser can be used to generate transmitted light signals 20 in the form of laser pulses.
  • the transmitting device 24 has an optical device with which the transmitted light signals 20 are expanded so that they can spread into the entire monitoring area 14 - similar to a flash. In this way, the entire monitoring area 14 can be illuminated with each transmitted light signal 20.
  • the receiving device 26 can optionally have a received light signal deflection device with which the received light signals 30 are directed to a receiving matrix 32 of the receiving device 26 shown in FIG. 3.
  • the reception matrix 32 is realized, for example, with an area sensor in the form of a CCD sensor with a large number of reception areas 34. Each reception area 34 can be implemented, for example, by a group of pixels.
  • the reception matrix 32 has, for example, 320 x 240 reception areas 34. For the sake of clarity, only 7 x 7 of the reception areas 34 are indicated in FIG. 3.
  • a different type of area sensor, for example an active pixel sensor or the like, can also be used.
  • each reception area 34 can be activated via suitable shutter means for the detection of received light signals 30 for defined recording time ranges TB0, TB1, TB2 and TB3, which for the sake of simplicity can also be referred to below as recording time ranges TBi.
  • each recording time range TBi is defined by a starting time and an integration period t_INT.
  • the integration periods t_INT of the recording time ranges TBi are significantly shorter than the period length t_MOD of the modulation period MP of the transmitted light signal 20.
  • the time intervals between two defined recording time ranges TBi are smaller than the period length t_MOD of the modulation period MP.
  • FIG. 4 shows a modulation period MP of a reception envelope 36 of the phase images DCS0, DCS1, DCS2 and DCS3 in a common signal strength-time diagram.
  • the signal strength axis is labeled “S” and the time axis is labeled “t”.
  • the reception envelope 36 can be approximated by, for example, four support points in the form of the four phase images DCS0, DCS1, DCS2 and DCS3. Alternatively, the reception envelope 36 can also be approximated by more or fewer support points in the form of phase images.
  • the recording time ranges TB0, TB1, TB2 and TB3 are each started based on a reference event, for example in the form of a trigger signal for the electrical transmission signal at the starting time ST.
  • the reference event can, for example, be a zero crossing of the electrical signal with which the laser is controlled to generate the transmitted light signal 20.
  • the modulation period MP of the transmitted light signal 20 extends over 360°.
  • the recording time ranges TB0, TB1, TB2 and TB3 each start at a distance of 90° from one another in relation to the modulation period MP.
  • the recording time ranges TB0, TB1, TB2 and TB3 start with phase shifts of 0°, 90°, 180° and 270°, respectively, compared to the starting time ST.
  • a distance D of a detected object 18 can be determined mathematically, for example, from the amplitudes, i.e. the phase variables A0, A1, A2 and A3, of the phase images DCS0, DCS1, DCS2 and DCS3 for a respective reception area 34 in a manner of no further interest here.
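The application leaves the exact evaluation open. A common four-phase iToF evaluation, given here purely as an assumed illustration and not as the method of the application, derives the phase shift from the four amplitudes via an arctangent and maps it to a distance (the modulation frequency is an assumed example value):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_dcs(a0, a1, a2, a3, f_mod_hz):
    """Common four-phase iToF evaluation (assumed, not from the
    application): the amplitudes A0..A3 of the phase images DCS0..DCS3,
    sampled 90 degrees apart, yield the phase shift, which maps to a
    distance within one modulation period."""
    phase = math.atan2(a3 - a1, a0 - a2)  # phase shift in [-pi, pi]
    phase = phase % (2.0 * math.pi)       # wrap into [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)
```

For example, amplitudes (1, 0, -1, 0) correspond to zero phase shift and hence zero distance, while (0, -1, 0, 1) correspond to a quarter of the unambiguous modulation period.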
  • FIG. 5 shows a distance image of a scene in grayscale representation, which was captured with the LiDAR system 12.
  • the 320 columns of the reception matrix 32 are indicated in the horizontal dimension of the distance image. Each column characterizes the horizontal direction from which the received light signals 30 received with the reception areas 34 of that column come, i.e. in which the corresponding object 18 is located.
  • the 240 rows of the reception matrix 32 are indicated in the vertical dimension of the distance image. Each row characterizes the vertical direction from which the received light signals 30 received with the reception areas 34 of that row come, i.e. in which the corresponding object 18 is located.
  • the direction of the object 18 from which the corresponding received light signals 30 come can be characterized via the position of a reception area 34 in the reception matrix 32 hit by received light signals 30.
  • the distances D of the detected objects 18 are defined in gray levels according to a gray level scale shown next to the distance image.
  • for better distinction, the highly reflective object 18R is provided with a normally reflective cover in the scene shown in FIG. 5.
  • the same scene as in Figure 5 is shown in Figure 6.
  • the cover of the highly reflective object 18R, namely the street sign, is removed.
  • the use of the LiDAR system 12 with long integration periods t_INT, which are required to detect even weakly reflecting objects 18, such as the person and the board in Figures 5 and 6, in the monitoring area 14, can lead not only to oversaturation of reception areas 34 that are hit by received light signals 30 reflected on highly reflective objects 18R, but also to a falsification of the signals of the reception areas 34 in the vicinity of the reception areas 34 hit by the received light signals 30 from the highly reflective objects 18R.
  • this falsification is known as blooming or glare. Blooming leads to an error in determining the distance D of objects 18 near highly reflective objects 18R.
  • the distance D of the normally or weakly reflecting objects 18 is, as shown in FIG. 6, falsified in the direction of the distance D of the strongly reflecting objects 18R.
  • both a short measurement and a long measurement are carried out of the same scene in the monitoring area 14 with the reception areas 34.
  • the short measurement is carried out with a short integration period t_INT,k, during which the light coming from the monitoring area 14, including received light signals 30, is received with the reception areas 34 in the four recording time ranges TB0, TB1, TB2 and TB3.
  • the length of the short integration period t_INT,k is chosen so that even the strong received light signals 30 from the retroreflective object 18R do not lead to overloading in the reception matrix 32.
  • the short integration period t_INT,k is approximately 1 µs.
  • the received light is converted into phase variables A0,k, A1,k, A2,k and A3,k assigned to the respective reception area 34, which for the sake of simplicity can also be referred to below as short phase variables Ai,k.
  • the short phase variables A0,k, A1,k, A2,k and A3,k are each assigned to one of four short phase variable matrices 40_0, 40_1, 40_2 and 40_3 shown in FIG. 8, in accordance with their respective recording time ranges TB0, TB1, TB2 and TB3, which for the sake of simplicity can also be referred to below as short phase variable matrices 40i.
  • in Figure 8, from left to right, a flow chart for convolving the short phase variables Ai,k with a correction function 42 is shown in grayscale representation.
  • Figure 10 shows the corresponding flow diagram from top to bottom as an example for row 175 of the two short phase variable matrices 40_0 and 40_1 in an intensity profile representation.
  • in each of the short phase variable matrices 40_0, 40_1, 40_2, 40_3, only that short phase variable A0,k, A1,k, A2,k or A3,k is provided with a reference number which corresponds to the reception area 34, for example the reception area 34 in row 175, column 170, that is hit by a received light signal 30 coming from the highly reflective object 18R.
  • each of the short phase variable matrices 40_0, 40_1, 40_2, 40_3 contains as many short phase variables A0,k, A1,k, A2,k and A3,k as the reception matrix 32 contains reception areas 34, namely 320 x 240.
  • the y-axis and the z-axis are shown on the left of the short phase variable matrices 40_0, 40_1, 40_2, 40_3 in FIG. 8, analogous to the representation of the reception matrix 32 from FIG. 3. However, the short phase variable matrices 40_0, 40_1, 40_2, 40_3 do not relate to a spatial orientation.
  • the respective short phase variables A0,k, A1,k, A2,k and A3,k are visualized, for example, as intensities INT using grayscale.
  • the corresponding grayscale scale is shown to the right of the short phase variable matrices 40_0, 40_1, 40_2, 40_3.
  • the illustration on the left in FIG. 8 is merely an exemplary visualization of the short phase variable matrices 40i.
  • the short phase variable matrices 40i are actually each a group of tuples, the number of which corresponds to the number of reception areas 34 of the reception matrix 32, namely 320 x 240.
  • each tuple contains at least one assignment variable, for example coordinates, which enables the assignment to the corresponding reception area 34, and the short phase variable Ai,k assigned to the corresponding reception area 34.
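The tuple representation described above can be sketched as follows; the matrix is reduced from 320 x 240 to 3 x 2 and the phase values are set to zero purely for illustration:

```python
# Each entry of a phase variable matrix: an assignment variable
# (here: column/row coordinates of the reception area) together with
# the phase variable assigned to that reception area.
short_phase_matrix_0 = [
    ((col, row), 0.0)        # phase variable, placeholder value
    for row in range(2)      # 2 rows instead of 240
    for col in range(3)      # 3 columns instead of 320
]

# Look up the phase variable assigned to reception area (column 1, row 0).
lookup = dict(short_phase_matrix_0)
value = lookup[(1, 0)]
```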
  • an example of an intensity profile of row 175 of the short phase variable matrix 40_0 is shown in FIG. 10 at the top left.
  • Figure 10 at the top right shows an intensity profile of row 175 of the short phase variable matrix 40_1.
  • for at least some of the short phase variables, a respective correction phase variable A0,c, A1,c, A2,c and A3,c is determined by means of a mathematical convolution with a correction function 42, which for the sake of simplicity can also be referred to below as correction phase variables Ai,c.
  • the correction function 42 is shown in the middle of FIG. 8 in a two-dimensional grayscale representation as a correction variable matrix and in the middle of FIG. 10 in a three-dimensional grid representation.
  • the correction function 42 can be implemented with a kernel.
  • the correction function 42 can be implemented as a group of tuples with column values, row values and intensity values.
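The convolution of a short phase variable matrix with such a kernel can be sketched in pure Python. The kernel weights and the phase values below are made-up illustration data, not the correction function 42 of the application:

```python
def convolve2d_same(matrix, kernel):
    """2D convolution with zero padding, output the same size as the
    input. matrix and kernel are lists of lists of floats; the kernel
    here is symmetric, so convolution and correlation coincide."""
    mh, mw = len(matrix), len(matrix[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * mw for _ in range(mh)]
    for y in range(mh):
        for x in range(mw):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    my, mx = y + ky - oy, x + kx - ox
                    if 0 <= my < mh and 0 <= mx < mw:
                        acc += matrix[my][mx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# Short phase variables of a 3 x 3 patch: one strongly hit reception area.
short = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],
         [0.0, 0.0, 0.0]]
# Made-up stray-light kernel: the strong signal smears onto neighbours.
kernel = [[0.05, 0.10, 0.05],
          [0.10, 0.40, 0.10],
          [0.05, 0.10, 0.05]]
correction = convolve2d_same(short, kernel)
```

The resulting matrix estimates how much of the strong signal spills into neighbouring reception areas, which is the role of the correction phase variables.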
  • the correction phase variables A0,c, A1,c, A2,c and A3,c are each assigned to one of four correction phase variable matrices 44_0, 44_1, 44_2 and 44_3, according to their respective recording time ranges TB0, TB1, TB2 and TB3, which for the sake of simplicity can also be referred to below as correction phase variable matrices 44i.
  • the correction phase variables A0,c, A1,c, A2,c and A3,c are also assigned to the corresponding reception areas 34 and the corresponding recording time ranges TB0, TB1, TB2 and TB3, which correspond to the short phase variables A0,k, A1,k, A2,k and A3,k used for the convolution.
  • the correction phase variables A0,c, A1,c, A2,c and A3,c are visualized, for example, as intensities INT in grayscale.
  • the corresponding grayscale scale is shown to the right of the correction phase variable matrices 44i.
  • an example of an intensity profile through row 175 of the correction phase variable matrix 44_0 is shown in FIG. 10 at the bottom left.
  • Figure 10 at the bottom right shows an intensity profile through row 175 of the correction phase variable matrix 44_1.
  • the long measurement is carried out with a long integration period t_INT,l, which is longer than the short integration period t_INT,k.
  • the phase image DCS1 determined with the long integration period t_INT,l is indicated by dashed lines in FIG. 4, with the scales of the short integration period t_INT,k and the long integration period t_INT,l being exaggerated there.
  • the length of the long integration period t_INT,l is chosen so that the received light signals 30 from the normally or weakly reflecting objects 18 also lead to signals of the correspondingly illuminated reception areas 34 which can be distinguished from noise.
  • the long integration period t_INT,l is approximately 1000 µs.
  • the received light is converted into phase variables A0,l, A1,l, A2,l and A3,l, which for the sake of simplicity can also be referred to below as long phase variables Ai,l.
  • the long phase variables A0,l, A1,l, A2,l and A3,l are each assigned to one of four long phase variable matrices 46_0, 46_1, 46_2 and 46_3, according to their respective recording time ranges TB0, TB1, TB2 and TB3, which for the sake of simplicity can also be referred to below as long phase variable matrices 46i.
  • in Figure 9, from left to right, a flow chart for forming the difference between the long phase variables Ai,l and the correction phase variables Ai,c is shown in grayscale representation.
  • Figure 11 shows the corresponding flow diagram from top to bottom as an example for row 175 of the two long phase variable matrices 46_0 and 46_1 in an intensity profile representation.
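The difference formation described for Figure 9, in which final phase variables follow from the long phase variables and the correction phase variables, can be sketched as an elementwise subtraction per reception area and recording time range; the values below are made-up illustration data:

```python
# Long phase variables and correction phase variables of a 1 x 4 strip
# of reception areas (made-up values).
long_phase       = [5.0, 6.0, 9.5, 5.5]
correction_phase = [0.2, 0.4, 3.6, 0.3]

# Final phase variables: long measurement minus the stray-light estimate
# obtained from the convolution of the short measurement.
final_phase = [l - c for l, c in zip(long_phase, correction_phase)]
```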
  • In each of the long phase variable matrices 46₀, 46₁, 46₂, 46₃, analogous to the short phase variable matrices 40₀, 40₁, 40₂, 40₃ in Figure 8 on the left, only those long phase variables A₀,l, A₁,l, A₂,l and A₃,l are provided with a reference number that are determined with the reception area 34, for example the reception area 34 in row 175, column 170, which is hit by a received light signal 30 coming from the highly reflective object 18R.
  • Each of the long phase variable matrices 46₀, 46₁, 46₂, 46₃ contains as many long phase variables A₀,l, A₁,l, A₂,l and A₃,l as the reception matrix 32 has reception areas 34 and as the short phase variable matrices 40₀, 40₁, 40₂, 40₃ contain short phase variables A₀,k, A₁,k, A₂,k and A₃,k, namely 320 x 240.
  • The illustration in Figure 9 on the left is merely an exemplary visualization of the long phase variable matrices 46ᵢ.
  • The long phase variable matrices 46ᵢ, like the short phase variable matrices 40ᵢ, are actually each a group of tuples whose number corresponds to the number of reception areas 34 of the reception matrix 32, namely 320 x 240.
  • Each tuple contains at least one assignment variable, for example coordinates, which enables the assignment to the corresponding reception area 34, and the long phase variable Aᵢ,l assigned to the corresponding reception area 34.
  • Each of the final phase variable matrices 48₀, 48₁, 48₂, 48₃ contains as many final phase variables A₀,e, A₁,e, A₂,e and A₃,e as the long phase variable matrices 46₀, 46₁, 46₂, 46₃ contain long phase variables A₀,l, A₁,l, A₂,l and A₃,l, namely 320 x 240.
  • The representation in Figure 9 on the right is merely a visualization of the final phase variable matrices 48ᵢ.
  • The final phase variable matrices 48ᵢ, like the short phase variable matrices 40ᵢ, the long phase variable matrices 46ᵢ and the correction phase variable matrices 44ᵢ, are actually each a group of tuples whose number corresponds to the number of reception areas 34 of the reception matrix 32, namely 320 x 240.
  • Each tuple contains at least one assignment variable, for example coordinates, which enables the assignment to the corresponding reception area 34, and the final phase variable Aᵢ,e assigned to the corresponding reception area 34.
  • An example of an intensity profile through row 175 of the final phase variable matrix 48₀ is shown at the bottom left of FIG. 11.
  • The bottom right of FIG. 11 shows an intensity profile through row 175 of the final phase variable matrix 48₁.
  • The distances D of the objects 18 and 18R detected with the reception areas 34 are determined mathematically for each reception area 34 from the final phase variables A₀,e, A₁,e, A₂,e and A₃,e.
  • Figure 7 shows the distance image of the scene from Figure 6 after the correction of the scattered light effects described above. In Figure 7 it can be seen that the blooming area 38 is reduced compared to the representation in Figure 6.
  • After the correction, the distances D for the weakly and normally reflecting objects 18, namely for the person and for the board, are specified more accurately, as shown in FIG. 7.
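The tuple-group representation of the phase variable matrices described above can be sketched as follows. This is an illustrative sketch only; the helper names (make_phase_matrix, get_phase) are not from the patent, which only specifies that each matrix is a group of 320 x 240 tuples, each holding an assignment variable (here: row/column coordinates) and the phase variable for one reception area.

```python
# Sketch (not from the patent) of one phase-variable matrix stored as a group
# of tuples, one per reception area of the 320 x 240 reception matrix.

ROWS, COLS = 240, 320  # 320 x 240 reception areas

def make_phase_matrix(phase_value=0.0):
    """Build one phase-variable matrix as a list of (row, col, value) tuples."""
    return [(r, c, phase_value) for r in range(ROWS) for c in range(COLS)]

def get_phase(matrix, row, col):
    """Look up the phase variable assigned to reception area (row, col)."""
    return matrix[row * COLS + col][2]

matrix_40_0 = make_phase_matrix()     # e.g. short phase variables A0,k
assert len(matrix_40_0) == 320 * 240  # one tuple per reception area
```

In this layout, the assignment variable is implicit in the tuple itself, so any of the short, long, correction or final matrices can be held in the same structure.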
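The difference forming shown in Figure 9 can be illustrated as a plain elementwise operation over two such matrices. The exact combination of the long and short measurements, including the role of the correction function 42, is defined by the patent and not reproduced here; this sketch only assumes a simple per-reception-area difference between hypothetical long and short phase values.

```python
# Illustrative sketch only (not the patent's algorithm): elementwise
# difference between long phase variables Ai,l and short phase variables
# Ai,k over two 320 x 240 matrices stored as flat row-major lists.

ROWS, COLS = 240, 320  # reception matrix: 320 columns x 240 rows

def phase_difference(long_phase, short_phase):
    """Elementwise difference A_long - A_short, one value per reception area."""
    assert len(long_phase) == len(short_phase) == ROWS * COLS
    return [a_l - a_k for a_l, a_k in zip(long_phase, short_phase)]

# Example with uniform (hypothetical) matrices:
long_46_0 = [10.0] * (ROWS * COLS)   # hypothetical long phase variables A0,l
short_40_0 = [2.5] * (ROWS * COLS)   # hypothetical short phase variables A0,k
diff = phase_difference(long_46_0, short_40_0)
```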
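The text states that the distances D are determined mathematically from the four final phase variables but does not spell out the formula. A commonly used four-phase indirect time-of-flight relation, assumed here purely for illustration, recovers the phase angle via an arctangent of the two differential pairs and scales it by the modulation frequency:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def itof_distance(a0, a1, a2, a3, f_mod):
    """Distance from four phase variables via the standard four-phase
    indirect time-of-flight relation (an assumption; the patent does not
    reproduce its formula): phi = atan2(A3 - A1, A0 - A2),
    D = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)  # phase in [0, 2*pi)
    return C * phi / (4.0 * math.pi * f_mod)

# With an assumed 20 MHz modulation, a phase of pi/2 corresponds to a quarter
# of the unambiguous range c / (2 * f_mod), i.e. about 1.87 m:
d = itof_distance(1.0, 0.0, 1.0, 1.0, 20e6)
```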


Abstract

The invention relates to a method for operating a flash LiDAR system for a vehicle according to an indirect time-of-flight method, to a flash LiDAR system, and to a vehicle. In the method, at least one amplitude-modulated transmission light signal is transmitted into at least one monitoring region using at least one transmission device of the flash LiDAR system (12). Light from the at least one monitoring region is received by at least some of the reception areas of a reception matrix of a reception device of the flash LiDAR system and converted into reception variables (A0,k, A1,k, A2,k, A3,k). The reception variables (A0,k, A1,k, A2,k, A3,k) can each be assigned to the reception areas, each characterize a quantity of light received in the corresponding reception area, and can be processed using a processor of the flash LiDAR system. Any stray light effects on at least some of the reception areas are corrected using at least one correction function (42). The light from the at least one monitoring region is converted with the reception areas into reception variables in the form of phase variables (A0,k, A1,k, A2,k, A3,k) during at least two recording time ranges. At least two of the recording time ranges begin with a phase offset relative to a modulation period of the at least one transmission light signal. Corrected final phase variables are determined for at least some of the phase variables (A0,k, A1,k, A2,k, A3,k) using the at least one correction function (42).
PCT/EP2023/066536 2022-06-20 2023-06-20 Procédé de fonctionnement de système lidar flash pour véhicule, système lidar flash et véhicule WO2023247475A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022115268.5A DE102022115268A1 (de) 2022-06-20 2022-06-20 Verfahren zum Betreiben eines Flash-LiDAR-Systems für ein Fahrzeug, Flash- LiDAR-System und Fahrzeug
DE102022115268.5 2022-06-20

Publications (1)

Publication Number Publication Date
WO2023247475A1 true WO2023247475A1 (fr) 2023-12-28

Family

ID=87059754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/066536 WO2023247475A1 (fr) 2022-06-20 2023-06-20 Procédé de fonctionnement de système lidar flash pour véhicule, système lidar flash et véhicule

Country Status (2)

Country Link
DE (1) DE102022115268A1 (fr)
WO (1) WO2023247475A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200072946A1 (en) 2018-08-29 2020-03-05 Sense Photonics, Inc. Glare mitigation in lidar applications


Also Published As

Publication number Publication date
DE102022115268A1 (de) 2023-12-21

Similar Documents

Publication Publication Date Title
DE102012221563B4 (de) Funktionsdiagnose und validierung eines fahrzeugbasierten bildgebungssystems
DE102010012811B4 (de) Verfahren zur Messung von Geschwindigkeiten und Zuordnung der gemessenen Geschwindigkeiten zu angemessenen Fahrzeugen durch Erfassen und Zusammenführen von Objekt-Trackingdaten und Bild-Trackingdaten
DE102004010197A1 (de) Verfahren zur Funktionskontrolle einer Positionsermittlungs- oder Umgebungserfassungseinrichtung eines Fahrzeugs oder zur Kontrolle einer digitalen Karte
EP1068992A2 (fr) Aide pour reculer
DE102009007408A1 (de) Vorrichtung zur Umfelderfassung eines Kraftfahrzeugs
EP4139709A1 (fr) Procédé et dispositif d'identification de l'efflorescence dans une mesure lidar
DE102022115267A1 (de) Verfahren zur Ermittlung wenigstens einer Korrekturfunktionen für ein LiDARSystem, LiDAR-System, Fahrzeug mit wenigstens einem LiDAR-System, Messanlage
EP3994497A1 (fr) Dispositif d'adaptation et dispositif de mesure lidar
DE102013007859B3 (de) Time-of-Flight-System mit räumlich voneinander getrennten Sendeeinrichtungen und Verfahren zur Abstandsmessung von einem Objekt
WO2023247475A1 (fr) Procédé de fonctionnement de système lidar flash pour véhicule, système lidar flash et véhicule
DE102010055865A1 (de) Kameravorrichtung für einen Kraftwagen
DE102019102672A1 (de) Intersensorisches lernen
WO2021001339A1 (fr) Dispositif de mesure optique pour la détermination d'informations d'objet pour des objets dans au moins une zone de surveillance
DE102020206152A1 (de) Vorrichtung zur Bestimmung, ob eine Parklücke in einem seitlichen Frontbereich und/oder Heckbereich eines Fahrzeugs für das Fahrzeug geeignet ist, ein Fahrerassistenzsystem mit solch einer Vorrichtung sowie ein Fahrzeug mit solch einem Fahrerassistenzsystem
WO2023247395A1 (fr) Procédé de fonctionnement d'un système lidar à correction de lumière parasite, système lidar correspondant et véhicule
DE102020124017A1 (de) Verfahren zum Betreiben einer optischen Detektionsvorrichtung, optische Detektionsvorrichtung und Fahrzeug mit wenigstens einer optischen Detektionsvorrichtung
WO2021078552A1 (fr) Dispositif d'étalonnage pour l'étalonnage d'au moins un dispositif de détection optique
WO2023247304A1 (fr) Procédé de fonctionnement d'un système lidar, système lidar, et véhicule comprenant au moins un système lidar
WO2023247474A1 (fr) Procédé de fonctionnement d'un système lidar, système lidar et véhicule comprenant au moins un système lidar
DE102019134985B4 (de) Verfahren zum Erfassen wenigstens eines Verkehrsteilnehmers
DE102023108881A1 (de) Optische empfangseinheit für ein lidar-system, lidar-system für ein fahrzeug sowie verfahren zu betreiben eines lidar-systems
DE102020124023A1 (de) Verfahren zum Detektieren von Objekten und Detektionsvorrichtung
EP3945339A1 (fr) Dispositif de détection optique permettant de surveiller au moins une zone de surveillance au niveau d'objets et procédé de fonctionnement d'un dispositif de détection optique
DE102023004656A1 (de) Verfahren zur Kalibrierung einer an oder in einem Fahrzeug angeordneten Event-Kamera
WO2023083737A1 (fr) Procédé de fonctionnement d'un dispositif de détection pour la surveillance d'au moins une zone surveillée en résolution spatiale, dispositif de détection et véhicule comprenant au moins un dispositif de détection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23734547

Country of ref document: EP

Kind code of ref document: A1