CN112352169A - Method and device for detecting an environment and vehicle having such a device

Method and device for detecting an environment and vehicle having such a device

Info

Publication number
CN112352169A
Authority
CN
China
Prior art keywords
laser scanner, exposure, environment, exposure time, time
Prior art date
Legal status
Granted
Application number
CN201980039872.XA
Other languages
Chinese (zh)
Other versions
CN112352169B (en)
Inventor
M·舍费尔
M·格雷斯曼
H·埃格斯
P·雷纳
M·梅因克
Current Assignee
DAIMLER
Robert Bosch GmbH
Original Assignee
DAIMLER
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by DAIMLER and Robert Bosch GmbH
Publication of CN112352169A
Application granted
Publication of CN112352169B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/89: Lidar systems specially adapted for mapping or imaging
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06F18/251: Pattern recognition; fusion techniques of input or preprocessed data
    • G06V20/584: Recognition of moving objects, obstacles or traffic objects exterior to a vehicle, of vehicle lights or traffic lights
    • G08G1/04: Traffic control systems for road vehicles; detecting movement of traffic using optical or ultrasonic detectors
    • H04N23/73: Camera circuitry for compensating brightness variation in the scene by influencing the exposure time

Abstract

The invention relates to a method for detecting an environment, in particular the environment of a vehicle, wherein the environment in a first detection area is periodically detected by means of a laser scanner L and the environment in a second detection area is detected by means of an optical camera K, the first and second detection areas at least partially overlapping one another. The optical sensor of the camera K is exposed at least twice in one period of the laser scanner L, and a first exposure time for a first exposure of the at least two exposures is selected and synchronized with the laser scanner L such that the first exposure takes place in a first time window in which the laser scanner L detects the first detection area. In one example the period of the laser scanner L is 90 ms, during which the laser scanner scans the entire scanning area from 0° to 360°; a first detection region of 120° is scanned within 30 ms. Without limiting the generality, the first detection region here starts at 0° and ends at 120°. The first exposure of the optical sensor of the camera K is arranged temporally in the middle region of the first time window, while the second exposure is set outside the first time window within the period of the laser scanner. After 90 ms, i.e. after a scan angle S of 360°, the method continues periodically. The second exposure time for the second exposure is preferably selected to be longer than the first exposure time for the first exposure. In particular, it is thereby ensured that, on the one hand, the first exposure time is short enough to achieve a sufficiently sharp image for semantic segmentation and to apply the semantic segmentation to the 3D data acquired by the laser scanner L and, on the other hand, that the second exposure time is long enough to reliably detect pulse-width-modulated light signals, for example traffic lights operated with light-emitting diodes.

Description

Method and device for detecting an environment and vehicle having such a device
Technical Field
The invention relates to a method and a device for detecting an environment, in particular a vehicle environment, and a vehicle having such a device.
Background
It is known to combine optical cameras with laser scanning methods or laser scanners, in particular in vehicles, to detect the environment. For example, U.S. patent application US2016/0180177 A1 discloses a system for estimating lane boundaries that is provided in a motor vehicle and has a camera on the one hand and a lidar detector on the other hand. The system determines a first probability model of a lane boundary from the camera data and a second probability model from the lidar data, combines the two probability models into a combined probability model, and estimates the lane boundary therefrom.
From German patent application DE102017108248 A1, a computer-implemented method for identifying road features is known, in which images originating from a camera system connected to a vehicle on the road are received and further processed. The data so obtained may be combined with other sensor data, such as lidar sensor data, to improve the accuracy and reliability of detection and classification.
Typically, optical cameras provide high-resolution photographic images of the detected environment, but no distance information. Laser scanners, by contrast, determine a 3D point cloud of the scanned environment and can therefore provide highly accurate distance information, but their resolution is significantly lower.
Furthermore, it is known to semantically segment images captured by optical cameras, in particular by means of neural networks, wherein objects in the image are recognized and a category label, such as "lane", "vehicle" or "pedestrian", is assigned to each pixel. In this way the image is divided into semantic segments. It is advantageous to combine this semantic segmentation information with the 3D point cloud obtained by the laser scanner, so that the 3D points of the point cloud are also assigned the corresponding class labels. This significantly improves the interpretation of the scene around the vehicle and the prediction of the behaviour of other traffic participants. For this purpose, however, the scanning of the environment by the laser scanner on the one hand and the imaging of the environment by the optical camera on the other hand must be precisely synchronized, and the shortest possible exposure time must be selected for the optical camera in order to obtain sufficiently sharp images even while the vehicle is moving.
However, such short exposure times are problematic if the camera images are also to be used for recognizing light signals, such as traffic lights, traffic signs designed as light signals, brake lights, turn signals, etc. This is because modern light signals are usually operated with light-emitting diodes which emit only short, rapidly successive light pulses, the light-emitting diodes being driven in particular by means of pulse width modulation. If the exposure time is too short, there is a risk that the exposure falls between the light pulses of the light signal, i.e. into a dark phase, in which case the state of the light signal, such as the switching state of a traffic light or the type of traffic sign displayed, is no longer recognizable in the image.
Reliable semantic segmentation of the 3D point cloud by means of camera images on the one hand and reliable recognition of light signals on the other hand therefore appear to be conflicting, mutually incompatible objectives.
Disclosure of Invention
The object on which the invention is based is to provide a method and a device for detecting an environment, in particular the environment of a vehicle, as well as a vehicle having such a device, in which the disadvantages mentioned above do not occur.
The object is achieved by the solution of the independent claims. Advantageous embodiments are given by the dependent claims.
The object is achieved, in particular, by a method for detecting an environment, in particular a vehicle environment, in which the environment in a first detection area is periodically detected by means of a laser scanner and the environment in a second detection area is detected by means of an optical camera. The first detection region and the second detection region at least partially overlap one another. The optical sensor of the camera is exposed at least twice in one cycle of the laser scanner, and a first exposure time for a first of the at least two exposures of the optical sensor is selected and synchronized with the laser scanner such that the first exposure is performed within a first time window within which the laser scanner detects the first detection area. In this way it is ensured that the image capture by the optical camera takes place synchronously with the detection of the first detection area; since the first and second detection areas at least partially overlap, the laser scanner on the one hand and the optical camera on the other hand image at least partially the same area of the environment. The first exposure time within the first time window is selected to be short enough to obtain a sharp image even while the vehicle is moving. In other words, the exposure of the camera is synchronized with the laser scanner and the camera is exposed at least twice within the period of the laser scanner, one exposure being performed simultaneously with the scanning of the environment by the laser scanner, in particular simultaneously with the scanning of the common, overlapping detection area. Arranging a second exposure of the optical sensor within the same period of the laser scanner also makes it possible to reliably detect light signals operated by pulse width modulation, since the second exposure time can in principle be placed arbitrarily within the period of the laser scanner and can therefore in particular be selected to be longer than the first exposure time within the first time window. Thus, within one cycle of the laser scanner, both data allowing semantic segmentation of the optical camera image and its assignment to the 3D point cloud acquired by the laser scanner and optical data allowing reliable recognition of light signals can be obtained.
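The timing logic described above can be illustrated with a short sketch. The following Python snippet is purely illustrative and not part of the claimed subject matter; the function name, the parameter layout and the settling gap after the first time window are assumptions chosen for the example, not values prescribed by the method.

```python
def schedule_exposures(period_ms, window_start_ms, window_end_ms,
                       t_exp1_ms, t_exp2_ms, gap_ms=28.0):
    """Return the start times (in ms, relative to the start of one
    laser-scanner period) of the two camera exposures: the first is
    centred on the midpoint of the first time window, i.e. the interval
    in which the scanner sweeps the overlapping detection area; the
    second is placed outside that window. gap_ms is an assumed delay
    after the window, not a value prescribed by the method."""
    window_mid = 0.5 * (window_start_ms + window_end_ms)
    start_1 = window_mid - 0.5 * t_exp1_ms  # centre exposure 1 on the window midpoint
    start_2 = window_end_ms + gap_ms        # exposure 2 anywhere outside the window
    # Exposure 1 must lie inside the first time window; exposure 2 must
    # end before the next scanner period begins.
    assert window_start_ms <= start_1 and start_1 + t_exp1_ms <= window_end_ms
    assert start_2 + t_exp2_ms <= period_ms
    return start_1, start_2

# With the example values used further below (90 ms period, first time
# window from 0 ms to 30 ms, 4 ms and 12 ms exposures) this reproduces the
# timing of the described embodiment: exposure 1 starts at t = 13 ms,
# exposure 2 at t = 58 ms.
print(schedule_exposures(90.0, 0.0, 30.0, 4.0, 12.0))  # (13.0, 58.0)
```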
A detection region is understood here to mean, in particular, an angular range, i.e. an azimuth range, about a vertical axis, in particular the vertical axis of the vehicle. The first detection area is the angular range of the environment actually detected by the laser scanner; it is preferably smaller than the scanning area covered by the laser scanner during one cycle. The laser scanner can in particular be designed as a laser scanner rotating about an axis, preferably about a vertical axis, in which case it sweeps a 360° scanning region in one cycle. It preferably does not acquire data over the entire scanning area, but only over the smaller first detection area.
The second detection area is accordingly an angular range given by the opening angle of the optics of the optical camera.
That the first detection area and the second detection area at least overlap each other means that they coincide at least in partial areas. Preferably, the detection areas substantially coincide; particularly preferably they overlap completely, in which case in particular one detection area, selected from the first and second detection areas, may lie entirely within the other. If the detection areas overlap each other everywhere, they coincide completely. The terms "overlap" and "coincide" are used here with particular reference to azimuth.
A vertical axis is understood here to mean, in particular, the direction of the gravity vector and/or an axis perpendicular to the support plane or lane plane on which a vehicle designed to carry out the method is parked or driven.
Preferably, a lidar detector (Light Detection And Ranging) is used as the laser scanner.
An optical camera is understood to be an optical image recording device designed to record still or moving images, in particular two-dimensional still or moving images; it can be designed in particular as a photo camera or a video camera. However, an optical 3D camera may also be used.
The laser scanner preferably scans the environment continuously and periodically. The optical sensor of the camera is preferably exposed at least twice in each of a plurality of cycles of the laser scanner. It is particularly preferred that the optical sensor is exposed at least twice in each cycle of the laser scanner.
Preferably, the optical sensor of the camera is exposed more than twice within one period of the laser scanner. Multiple exposures may be used in an advantageous manner to obtain additional or more accurate information about the environment and, in particular, to reduce the dead time of image acquisition. In particular, more than one exposure can be placed outside the first time window, and more than one image can be captured with an exposure time longer than the first exposure time within the first time window.
According to a further development of the invention, it is provided that a second exposure of the at least two exposures of the optical sensor takes place outside the first time window in a second time window in which the laser scanner preferably does not detect the first detection region. In general, an arbitrary time window can be selected for the second exposure within the period of the laser scanner, since synchronization with the detection of the laser scanner is not required for this purpose. The positioning of the second exposure at a time outside the first time window enables particularly clear separation of data intended for synchronization with the scanning of the laser scanner from data not intended for this.
According to one embodiment of the invention, the first exposure is carried out temporally in a middle region of the first time window, the middle region in particular being symmetric about the midpoint of the first time window. Since, according to a preferred embodiment, the laser scanner detects the first detection region at a constant detection speed, in particular at a constant angular speed, this choice results in the first exposure being carried out exactly while the laser scanner passes through the middle of the first detection region. A particularly good correspondence between the optical image on the one hand and the 3D points acquired by the laser scanner on the other hand is thereby achieved. This is especially true if the second detection area of the optical camera is also centrally aligned with the first detection area.
According to one embodiment of the invention, the second exposure time for the second exposure is selected to be longer than the first exposure time for the first exposure. The second exposure time is selected to be long enough that light signals operated by pulse width modulation, in particular traffic lights, illuminated traffic signs, brake lights, turn signals, etc., can be reliably detected, i.e. long enough that at least one bright phase (Hell-Phase) is captured during the second exposure time, in particular also for the pulse-width-modulated light signals with the shortest known bright phases.
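To make the requirement on the second exposure time concrete: if the second exposure spans at least one full period of the pulse-width-modulated signal, it captures at least one bright phase regardless of the phase position of the light pulses. The following sketch assumes a 100 Hz LED signal and a 2 ms safety margin purely for illustration; neither value is taken from the description.

```python
def min_second_exposure_ms(pwm_frequency_hz, margin_ms=2.0):
    """Shortest second exposure (in ms) guaranteed to span one full PWM
    period of the light signal, plus an assumed safety margin."""
    pwm_period_ms = 1000.0 / pwm_frequency_hz
    return pwm_period_ms + margin_ms

# For an assumed 100 Hz LED traffic light this yields 12 ms, matching the
# second exposure time used in the embodiment described below.
print(min_second_exposure_ms(100.0))  # 12.0
```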
The first exposure time is preferably selected to be so short that sufficiently sharp images are obtained even when the vehicle is moving, which images can be segmented semantically in a meaningful manner and assigned to the 3D point cloud captured by the laser scanner.
According to one embodiment of the invention, the first exposure time and/or the second exposure time is adapted to the period of the laser scanner. In this way, on the one hand, precise synchronization of the optical detection with the laser scanner is ensured and, on the other hand, it is ensured that in particular the second exposure time does not exceed the period of the laser scanner.
The first exposure time is preferably at most 10ms, preferably at most 8ms, preferably at most 7ms, preferably at most 6ms, preferably at most 5ms, preferably at most 4ms, preferably 4 ms.
The second exposure time is preferably at most 45 ms, preferably at most 40 ms, preferably at most 30 ms, preferably at most 20 ms, preferably at most 15 ms, preferably 12 ms, preferably more than 10 ms, preferably at least 11 ms to at most 45 ms, preferably at least 11 ms to at most 40 ms, preferably at least 11 ms to at most 30 ms, preferably at least 11 ms to at most 20 ms, preferably at least 11 ms to at most 15 ms.
In a preferred embodiment, the period of the laser scanner is 90 ms, within which the laser scanner scans a 360° scanning area. The first detection area is preferably an angular range of 120°, which is therefore scanned within 30 ms at a constant angular velocity of the laser scanner; the laser scanner requires the remaining 60 ms of the cycle to complete the rotation. Without limiting the generality, the first detection region lies in the angular range from 0° to 120°. If the first exposure time is 4 ms, the optical camera preferably performs the first exposure when the laser scanner is located at about 60°, i.e. in the middle of the first detection area. Without limiting the generality, if the laser scanner starts scanning from 0° at time t = 0, the exposure of the camera preferably begins at t = 13 ms and lasts 4 ms. During this time, at t = 15 ms, the laser scanner passes through the middle of the first detection region. The entire first detection area has been scanned after 30 ms.
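Assuming the constant angular velocity stated above (360° in 90 ms, i.e. 4°/ms), the relationship between time and scan angle in this example can be checked with a few lines; the snippet below is an illustrative aid, not part of the method.

```python
PERIOD_MS = 90.0  # example period from the preferred embodiment

def scan_angle_deg(t_ms):
    """Scan angle of the laser scanner at time t, assuming a constant
    angular velocity of 360 degrees per 90 ms period."""
    return 360.0 * (t_ms % PERIOD_MS) / PERIOD_MS

print(scan_angle_deg(15.0))  # 60.0: the middle of the 0° to 120° detection region
print(scan_angle_deg(13.0), scan_angle_deg(17.0))  # 52.0 68.0: swept during exposure 1
```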
In principle, it does not matter when the second exposure of the optical sensor is performed. For example, if the second exposure starts at t = 58 ms, a constant camera cycle time of 45 ms can be achieved. For detecting the presently known pulse-width-modulated light signals, a duration of 12 ms is advantageous for the second exposure time. A new measurement period starts every 90 ms.
The exposures and exposure times of the optical camera and the scanning by the laser scanner are thus coordinated as follows: during the scan, the optical camera is briefly exposed if and only if the current scanning direction of the laser scanner coincides with the viewing direction of the camera. Since the period of the laser scanner is typically longer than the exposure time of the optical camera plus the dead time between exposures caused by the rotating unit, the camera can make a further exposure with a longer exposure time while the laser scanner is not detecting the environment or is scanning an area outside the camera's second detection area. This yields one image with a shorter exposure time, synchronized with the laser scanner, and one image with a longer exposure time for the purpose of recognizing light signals.
It is not absolutely necessary for the laser scanner to refrain from detecting the environment outside the first detection area; in principle, the laser scanner can also scan the environment over the entire scanning area. In this case, the first detection area is that part of the scanning area of the laser scanner which overlaps the second detection area of the optical camera.
According to one embodiment of the invention, at least one image captured by the optical sensor during the first exposure time is subjected to a semantic segmentation, which is applied to the data of the laser scanner. In particular, the semantic segmentation of the image captured during the first exposure time is applied to those data of the laser scanner that were acquired in the same cycle of the laser scanner as that image. This allows the semantic segmentation of the optical image to be accurately assigned to the data of the laser scanner.
The semantic segmentation is preferably performed by means of a neural network, in particular a so-called convolutional neural network, in particular by means of deep learning.
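As an illustration of how per-pixel class labels can be transferred to the 3D points, the following sketch projects each lidar point into the segmented camera image. The camera intrinsics K, the lidar-to-camera extrinsics (R, t) and all names are assumptions, since the description does not prescribe a particular implementation; a meaningful label transfer additionally presupposes the temporal synchronization described above.

```python
import numpy as np

def label_point_cloud(points_lidar, labels_img, K, R, t):
    """points_lidar: (N, 3) points in lidar coordinates.
    labels_img: (H, W) integer class labels from the semantic segmentation.
    K: (3, 3) camera intrinsics; R, t: assumed lidar-to-camera extrinsics.
    Returns an (N,) label array, with -1 for points that do not project
    into the image."""
    h, w = labels_img.shape
    pts_cam = points_lidar @ R.T + t    # transform into camera coordinates
    labels = np.full(len(points_lidar), -1, dtype=int)
    in_front = pts_cam[:, 2] > 0        # keep points in front of the camera
    uvw = pts_cam[in_front] @ K.T       # perspective projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = labels_img[v[valid], u[valid]]
    return labels
```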
Finally, the object is also achieved by a device for detecting an environment, in particular a vehicle environment, having a laser scanner which is designed to periodically detect a first detection region of the environment, and having an optical camera which is designed to detect the environment in a second detection region, the first and second detection regions at least overlapping one another. The device also has a control device which is operatively connected to the laser scanner on the one hand and to the optical camera on the other hand in order to control the laser scanner and the optical camera, the control device being designed to carry out the method according to the invention or the method according to one of the preceding embodiments. The advantages already described in connection with the method result in particular with regard to the device.
The optical camera and the laser scanner are preferably oriented parallel to one another with respect to their main axes. A main axis here is the axis pointing in the direction of the respective detection region. The main axis of the laser scanner extends, in particular, along the axis of symmetry of the first detection region. The main axis of the optical camera is, in particular, the optical axis of the camera optics. If their main axes are oriented parallel to one another, the optical camera and the laser scanner have the same viewing direction.
The object is also achieved by providing a vehicle having a device according to the invention or a device according to one of the above-described embodiments. The advantages already described in connection with the method result in particular with regard to the vehicle.
According to one embodiment of the invention, the vehicle is designed as a motor vehicle, in particular as a passenger car, truck or commercial vehicle.
Drawings
The invention is explained in more detail below with reference to the drawings. The attached drawings are as follows:
FIG. 1 shows a schematic diagram of an embodiment of an apparatus for detecting an environment; and
FIG. 2 shows a schematic diagram of an embodiment of a method for detecting an environment.
Detailed Description
Fig. 1 shows a schematic illustration of an exemplary embodiment of a device 1 for detecting an environment, in particular the environment of a schematically illustrated vehicle 3, which may preferably be designed as a motor vehicle, in particular as a passenger car, truck or commercial vehicle.
The device 1 has a laser scanner 5, which is designed to periodically detect an environment (here the environment of the vehicle 3) in a first detection region 7. The device 1 also has an optical camera 9, which is designed to detect the environment in a second detection region 11. The first detection region 7 and the second detection region 11 at least partially overlap here. The device 1 also has a control device 13 which is operatively connected to the laser scanner 5 on the one hand and to the optical camera 9 on the other hand, so that the control device 13 can control the laser scanner 5 and the optical camera 9. The control device 13 is designed here to carry out the method which will be described below.
The optical camera 9 and the laser scanner 5 are oriented parallel to one another with respect to their main axes. Fig. 1 shows a first main axis 15 of the laser scanner 5 and a second main axis 17 of the camera 9, which are oriented parallel to one another, so that the laser scanner 5 on the one hand and the optical camera 9 on the other hand have the same viewing direction.
Both detection areas 7, 11 are angular ranges of azimuth angles.
The laser scanner 5 has, in particular, a scanning area larger than the first detection area 7; in particular, the scanning area of the laser scanner comprises an omnidirectional angular range of 360°. The laser scanner periodically scans the entire scanning area and scans the first detection area 7 of the environment within one such period.
The first detection region is here, for example, an angular range of 120°, which in fig. 1, without limiting the generality, extends from 0° to 120°. The first main axis 15 bisects the first detection region 7, i.e. at 60°. The laser scanner 5 preferably scans the scanning area at a constant angular velocity.
Within the scope of the method for detecting the environment, the optical sensor 19 of the camera 9 is exposed at least twice in one cycle of the laser scanner 5. The first exposure time for the first of the at least two exposures of the optical sensor 19 is selected and synchronized with the laser scanner 5 in such a way that the first exposure takes place within a first time window in which the laser scanner 5 scans and thus detects the first detection region 7.
The second of the at least two exposures of the optical sensor 19 may be carried out outside the first time window within a second time window in which the laser scanner 5 preferably does not scan the first detection region 7.
Preferably, the first exposure is carried out temporally in a middle region of the first time window, the middle region in particular being symmetric about the midpoint of the window. This ensures that the first exposure takes place exactly while the laser scanner 5 detects the middle region of the first detection region 7, which in fig. 1 is in particular symmetric about the 60° mark, i.e. about the first main axis 15.
The second exposure time for the second exposure is preferably selected to be longer than the first exposure time for the first exposure. In particular, a suitable selection of the exposure times can ensure, on the one hand, that the first exposure time is short enough to achieve a sufficiently sharp image for semantic segmentation and for applying this semantic segmentation to the 3D data acquired by the laser scanner and, on the other hand, that the second exposure time is long enough to reliably detect pulse-width-modulated light signals.
The first exposure time and/or the second exposure time are adapted to the period of the laser scanner 5.
At least one image of the optical sensor 19 or of the optical camera 9 taken during the first exposure time is semantically segmented, which is applied to the data of the laser scanner 5, in particular to the data acquired in the same period of the laser scanner 5 as the image taken during the first exposure time.
Figure 2 shows a schematic diagram of one embodiment of the method. The time t in ms is shown here on the lowest axis. The exposure K of the camera 9 is shown on the uppermost axis; the scanning of the environment by the laser scanner 5 in the first detection area 7 is shown on a second axis from above as laser scanning L; the scanning angle S of the laser scanner 5 over the entire scanning area from 0 ° to 360 ° is shown on the third axis from above.
Purely by way of example and without limiting generality, the period of the laser scanner 5 is here 90 ms, within which the entire scanning area from 0° to 360° is scanned. The first detection region 7 of 120° is scanned within 30 ms. Without limiting the generality, the first detection region 7 begins here at 0° and ends at 120°.
The first exposure time of the optical sensor 19 is 4 ms here. The first exposure is positioned such that it lies temporally in the middle region of the first time window (i.e. from t = 0 ms to t = 30 ms): it starts at t = 13 ms and ends at t = 17 ms. The temporal midpoint of the first exposure therefore coincides with the point in time at which the laser scanner 5 reaches the 60° mark of the first detection region 7, i.e. t = 15 ms.
The second exposure time is 12 ms. It is arranged outside the first time window within the period of the laser scanner 5; here, the second exposure starts at t = 58 ms, i.e. at a time interval of 45 ms from the start of the first exposure.
After 90 ms, i.e. after a scan angle S of 360°, the method continues periodically.
In summary, with the method, the device 1 and the vehicle 3 proposed here, not only meaningful semantic segmentation of the camera image and the 3D data of the laser scanner 5 is possible, but also a pulse-width-modulated light signal can be reliably recognized.

Claims (10)

1. A method for detecting an environment, in particular of a vehicle (3),
periodically detecting the environment in a first detection area (7) by means of a laser scanner (5),
detecting the environment in a second detection area (11) by means of an optical camera (9),
the first detection area (7) and the second detection area (11) at least overlap each other,
the optical sensor (19) of the camera (9) is exposed at least twice in one period of the laser scanner (5), and
selecting a first exposure time for a first exposure of the at least two exposures of the optical sensor (19) and synchronizing it with the laser scanner (5) in such a way that
the first exposure is carried out within a first time window within which the laser scanner (5) detects the first detection area (7).
2. Method according to claim 1, characterized in that a second exposure of the at least two exposures of the optical sensor (19) is carried out outside the first time window, in a second time window in which the laser scanner (5) preferably does not detect the first detection area (7).
3. Method according to one of the preceding claims, characterized in that the first exposure is carried out temporally in a middle region of the first time window, the middle region in particular being symmetric about the midpoint of the first time window.
4. Method according to any of the preceding claims, characterized in that the second exposure time for the second exposure is selected to be longer than the first exposure time for the first exposure.
5. Method according to any of the preceding claims, characterized in that the first exposure time and/or the second exposure time is adapted to the period of the laser scanner (5).
6. Method according to any one of the preceding claims, characterized in that at least one image taken by the optical sensor (19) during a first exposure time is semantically segmented, the semantic segmentation thus obtained being applied to the data of the laser scanner (5).
7. Device (1) for detecting an environment, in particular of a vehicle (3), comprising:
a laser scanner (5) configured for periodically detecting an environment in a first detection area (7),
an optical camera (9) configured to detect the environment in a second detection area (11),
the first detection area (7) and the second detection area (11) at least partially overlapping one another, and comprising
a control device (13) operatively connected to the laser scanner (5) and the optical camera (9) for controlling the laser scanner (5) and the optical camera (9),
the control device (13) is designed to carry out the method according to any one of claims 1 to 6.
8. Device (1) according to claim 7, characterized in that the optical camera (9) and the laser scanner (5) are oriented parallel to each other with respect to their respective main axes (15, 17).
9. A vehicle (3) having a device (1) according to claim 7 or 8.
10. Vehicle (3) according to claim 9, characterized in that the vehicle is configured as a motor vehicle, in particular as a car, truck or commercial vehicle.
CN201980039872.XA 2018-06-15 2019-05-09 Method and device for detecting an environment and vehicle having such a device Active CN112352169B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102018004782.3 2018-06-15
DE102018004782.3A DE102018004782A1 (en) 2018-06-15 2018-06-15 Method and device for detecting an environment, and vehicle with such a device
PCT/EP2019/061904 WO2019238319A1 (en) 2018-06-15 2019-05-09 Method and apparatus for detecting surroundings, and vehicle comprising such an apparatus

Publications (2)

Publication Number Publication Date
CN112352169A (en) 2021-02-09
CN112352169B (en) 2024-03-19

Family

ID=66484057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980039872.XA Active CN112352169B (en) 2018-06-15 2019-05-09 Method and device for detecting an environment and vehicle having such a device

Country Status (4)

Country Link
US (1) US11443529B2 (en)
CN (1) CN112352169B (en)
DE (1) DE102018004782A1 (en)
WO (1) WO2019238319A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11557127B2 (en) 2019-12-30 2023-01-17 Waymo Llc Close-in sensing camera system
US11493922B1 (en) 2019-12-30 2022-11-08 Waymo Llc Perimeter sensor housings
CN112040213B (en) * 2020-09-11 2021-09-14 梅卡曼德(北京)机器人科技有限公司 Modulation method, device and system for imaging scanning signal synchronization

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9530062B2 (en) 2014-12-23 2016-12-27 Volkswagen Ag Fused raised pavement marker detection for autonomous driving using lidar and camera
DE102017108248A1 (en) 2016-04-19 2017-10-19 GM Global Technology Operations LLC STREET FEATURE RECOGNITION WITH A VEHICLE CAMERA SYSTEM
US20180136332A1 (en) * 2016-11-15 2018-05-17 Wheego Electric Cars, Inc. Method and system to annotate objects and determine distances to objects in an image
US10841496B2 (en) 2017-10-19 2020-11-17 DeepMap Inc. Lidar to camera calibration based on edge detection

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6600168B1 (en) * 2000-02-03 2003-07-29 Genex Technologies, Inc. High speed laser three-dimensional imager
EP1615051A1 (en) * 2004-07-07 2006-01-11 Nissan Motor Co., Ltd. Object detection apparatus, especially for vehicles
US20110286661A1 (en) * 2010-05-20 2011-11-24 Samsung Electronics Co., Ltd. Method and apparatus for temporally interpolating three-dimensional depth image
US20170219693A1 (en) * 2013-11-30 2017-08-03 Bae Systems Information & Electronic Systems Integration Inc. Laser detection and image fusion system and method
CN105549023A (en) * 2014-10-23 2016-05-04 现代摩比斯株式会社 Object detecting apparatus, and method of operating the same
US20170041562A1 (en) * 2015-08-07 2017-02-09 Omnivision Technologies, Inc. Method and system to implement a stacked chip high dynamic range image sensor
WO2017024869A1 (en) * 2015-08-12 2017-02-16 杭州思看科技有限公司 Hand-held laser three-dimensional scanner performing projection using blinking method
CN108139483A (en) * 2015-10-23 2018-06-08 齐诺马蒂赛股份有限公司 For determining the system and method for the distance of object

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114035201A (en) * 2021-09-23 2022-02-11 阿里巴巴达摩院(杭州)科技有限公司 Multi-sensor synchronization method and device and vehicle-mounted control system

Also Published As

Publication number Publication date
DE102018004782A1 (en) 2019-12-19
US20210271908A1 (en) 2021-09-02
WO2019238319A1 (en) 2019-12-19
US11443529B2 (en) 2022-09-13
CN112352169B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN112352169B (en) Method and device for detecting an environment and vehicle having such a device
CN108957478B (en) Multi-sensor synchronous sampling system, control method thereof and vehicle
US11287523B2 (en) Method and apparatus for enhanced camera and radar sensor fusion
EP3367361B1 (en) Method, device and system for processing startup of front vehicle
CN107161141B (en) Unmanned automobile system and automobile
KR20230004425A (en) Autonomous Vehicle Environment Cognitive Software Architecture
US8164432B2 (en) Apparatus, method for detecting critical areas and pedestrian detection apparatus using the same
US11061122B2 (en) High-definition map acquisition system
WO2016125014A1 (en) Vehicle speed detection
WO2019208101A1 (en) Position estimating device
US11146719B2 (en) Camera system having different shutter modes
KR19990072061A (en) Vehicle navigation system and signal processing method for the navigation system
CN114556249A (en) System and method for predicting vehicle trajectory
US11663834B2 (en) Traffic signal recognition method and traffic signal recognition device
EP3650271A1 (en) Lane recognition for automotive vehicles
JP2019511725A (en) In particular, a method for identifying the attitude of a vehicle that is at least partially automated using a landmark selected and transmitted from a back-end server
CN109300313B (en) Illegal behavior detection method, camera and server
CN117173666A (en) Automatic driving target identification method and system for unstructured road
US8965142B2 (en) Method and device for classifying a light object located ahead of a vehicle
CN112580489A (en) Traffic light detection method and device, electronic equipment and storage medium
CN105247571A (en) Method and apparatus for creating a recording of an object which lights up in a pulsed manner
JP2021190848A (en) Detector, detection system, and detection method
CN114677658B (en) Billion-pixel dynamic large scene image acquisition and multi-target detection method and device
CN113917453A (en) Multi-sensor fusion method based on radar and video
Krajewski et al. Drone-based Generation of Sensor Reference and Training Data for Highly Automated Vehicles

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant