DE102019213930B4 - Method for optimizing the detection of surroundings in a vehicle - Google Patents
Method for optimizing the detection of surroundings in a vehicle
- Publication number
- DE102019213930B4 (application DE102019213930.2A; published as DE102019213930A1)
- Authority
- DE
- Germany
- Prior art keywords
- vehicle
- detection
- optimizing
- surroundings
- base point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3602—Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/507—Depth or shape recovery from shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
In the method for optimizing the detection of surroundings in a vehicle, a general image of the surroundings is generated from camera and/or map and terrain data. Under natural incident light, a shadow image of objects on surfaces is detected and captured by means of cameras and corresponding software; the base point of an object is calculated from the known position of the sun and the captured shadow image of the object, whereby depth information is obtained.
Description
The present invention relates to a method for optimizing the detection of surroundings in a vehicle according to the preamble of claim 1.
It is known from the prior art to use sensors in vehicles for detecting the surroundings, in particular sensors which provide depth information or 3D data with sufficient resolution, accuracy and range. Such sensors can be, for example, radar sensors, LIDAR sensors, ToF (time-of-flight) sensors, ultrasonic sensors, stereo cameras, or a combination of several such sensors.
Using front and surround-view cameras for detecting the surroundings, however, the depth of real-world objects that appear in a 2D camera image, such as trees or people, can disadvantageously only be estimated very roughly. One reason for this is that the size of the object, for example the diameter of a tree or the height of a person, is not known and therefore cannot be used in the evaluation. Furthermore, an estimate based on the object's position proves difficult, since the terrain, with its relative height and shape, is not necessarily known. In addition, the view of the base point may be obscured.
The object of the present invention is to specify a method for optimizing the detection of surroundings in a vehicle.
This object is achieved by the features of claim 1. Further embodiments and advantages according to the invention emerge from the subclaims.
Accordingly, a method is proposed for optimizing the detection of surroundings in a vehicle, in which a general image of the surroundings is generated from camera and/or map and terrain data. Under natural incident light, a shadow image of objects on surfaces, such as on the roadway, is detected and captured by means of cameras and corresponding software; the base point of the object is calculated from the known position of the sun and the captured shadow image of the object, whereby depth information is obtained. Advantageously, the base point of the object does not have to be visible.
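The shadow geometry just described can be sketched in a few lines. This is a minimal illustration only: flat ground, a known sun azimuth and elevation, an assumed object height, and an east/north coordinate convention are simplifications chosen here, not prescribed by the patent.

```python
import math

def base_point_from_shadow(shadow_tip, sun_azimuth_deg, sun_elevation_deg, object_height):
    """Estimate the ground-plane base point of an object from the tip of its
    shadow, the sun position, and an (assumed) object height.

    On flat ground, the shadow of the object's top extends from the base
    point away from the sun, with length height / tan(elevation).
    Coordinates: x = east, y = north; azimuth is measured from north.
    """
    shadow_len = object_height / math.tan(math.radians(sun_elevation_deg))
    az = math.radians(sun_azimuth_deg)
    # Unit vector pointing from the shadow tip back toward the sun,
    # i.e. toward the object's base point.
    toward_sun = (math.sin(az), math.cos(az))
    return (shadow_tip[0] + shadow_len * toward_sun[0],
            shadow_tip[1] + shadow_len * toward_sun[1])

# Sun due south (azimuth 180°) at 45° elevation, 2 m tall object:
# the shadow points north, so the base lies 2 m south of the shadow tip.
bp = base_point_from_shadow((0.0, 2.0), 180.0, 45.0, 2.0)
```

In practice the object height would not be given directly; the patent instead recovers the base point from the full shadow silhouette and the sun position, so the known-height parameter here is purely an assumption for the sketch.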
In a further development of the invention, object detection can be carried out before the proposed calculation of an object's base point from the known position of the sun and the object's shadow image; on the basis of this object detection, the shadow image of the object can be recognized more precisely.
According to a further development of the invention, a transformation into 3D space can then be carried out.
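One common way such a transformation into 3D space can be realized is by back-projecting an image point onto a flat ground plane with a calibrated pinhole camera. The patent does not specify this model; the level camera, flat ground, and intrinsic parameters below are assumptions for illustration.

```python
import numpy as np

def pixel_to_ground(u, v, K, cam_height):
    """Back-project pixel (u, v) onto a flat ground plane.

    Camera coordinates: x right, y down, z forward; the camera is mounted
    level at height `cam_height` (metres) above the ground. K is the 3x3
    pinhole intrinsic matrix.
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    if d[1] <= 0:
        raise ValueError("pixel lies at or above the horizon")
    t = cam_height / d[1]  # ray scale at which it intersects the ground
    p = t * d
    return float(p[0]), float(p[2])  # lateral offset, forward distance (m)

# Example: 500 px focal length, principal point (320, 240), camera 1.5 m up.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
x, z = pixel_to_ground(320, 490, K, 1.5)  # → (0.0, 3.0): 3 m straight ahead
```

A detected shadow point or computed base point in the image can be mapped to metric ground coordinates this way, which is what turns the shadow observation into usable depth information.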
The detection of the shadow image and the calculation of the base point are computer-based, for example using map and terrain data, detection algorithms and/or neural networks.
The determined data on the base point of an object can also be fed to a sensor-fusion system, which further optimizes the quality of the surroundings detection.
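As a toy illustration of feeding the shadow-derived base point into a fusion stage, independent distance estimates can be combined by inverse-variance weighting. This is one common fusion rule chosen here for the sketch; the patent does not prescribe a particular fusion method.

```python
def fuse_estimates(estimates):
    """Fuse independent scalar measurements by inverse-variance weighting.

    `estimates` is a list of (value, variance) pairs, e.g. a shadow-based
    distance estimate alongside one from another depth sensor. Returns the
    fused value and its (reduced) variance.
    """
    w_total = sum(1.0 / var for _, var in estimates)
    fused = sum(val / var for val, var in estimates) / w_total
    return fused, 1.0 / w_total

# Two distance estimates of 10 m and 12 m, each with variance 4 m²:
value, variance = fuse_estimates([(10.0, 4.0), (12.0, 4.0)])  # → (11.0, 2.0)
```

The fused variance is smaller than either input variance, which is the sense in which adding the shadow-based estimate can improve the overall quality of the surroundings model.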
The concept according to the invention enables other stationary or moving vehicles to be recognized better. Advantageously, the method according to the invention can also detect vehicles that are concealed, for example by a truck. In addition, objects that emerge from side streets, or that stand or are parked there, can be recognized.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019213930.2A DE102019213930B4 (en) | 2019-09-12 | 2019-09-12 | Method for optimizing the detection of surroundings in a vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
DE102019213930A1 DE102019213930A1 (en) | 2021-03-18 |
DE102019213930B4 true DE102019213930B4 (en) | 2021-05-12 |
Family
ID=74686258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
DE102019213930.2A Active DE102019213930B4 (en) | 2019-09-12 | 2019-09-12 | Method for optimizing the detection of surroundings in a vehicle |
Country Status (1)
Country | Link |
---|---|
DE (1) | DE102019213930B4 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021213881A1 (en) | 2021-12-07 | 2023-06-07 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method and device for operating a vehicle |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE202014104937U1 (en) * | 2014-10-16 | 2014-10-28 | Deutsches Zentrum für Luft- und Raumfahrt e.V. | Navigation system for flying objects |
DE102017127343A1 (en) * | 2016-11-22 | 2018-05-24 | Ford Global Technologies, Llc | VEHICLE OVERVIEW |
DE102018107938A1 (en) * | 2017-04-11 | 2018-10-11 | Toyota Jidosha Kabushiki Kaisha | Automatic driving system |
JP2019091327A (en) * | 2017-11-16 | 2019-06-13 | クラリオン株式会社 | Three-dimensional object detection device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102019112002A1 (en) | SYSTEMS AND METHOD FOR THE AUTOMATIC DETECTION OF PENDING FEATURES | |
EP3038011B1 (en) | Method for determining the distance between an object and a motor vehicle by means of a monocular imaging device | |
DE102014207802B3 (en) | Method and system for proactively detecting an action of a road user | |
WO2013029722A2 (en) | Method for representing surroundings | |
DE112016001150T5 (en) | ESTIMATION OF EXTRINSIC CAMERA PARAMETERS ON THE BASIS OF IMAGES | |
DE102016003261A1 (en) | Method for self-localization of a vehicle in a vehicle environment | |
DE102017108254A1 (en) | All-round vision camera system for object recognition and tracking | |
DE102009022278A1 (en) | Obstruction-free area determining method for drive assistance system in vehicle, involves considering surface-measuring point on ground plane during formation of depth map such that evidence of obstruction-measuring points is reduced | |
WO2017178232A1 (en) | Method for operating a driver assistance system of a motor vehicle, computing device, driver assistance system, and motor vehicle | |
DE102006044615A1 (en) | Image capturing device calibrating method for vehicle, involves locating objects in vehicle surrounding based on image information detected by image capturing device, and evaluating geometrical relations of objects | |
DE102019213930B4 (en) | Method for optimizing the detection of surroundings in a vehicle | |
DE102011084588A1 (en) | Method for supporting driver while parking vehicle in e.g. parking region surrounded by boundary, involves determining objective parking position as function of object, and initiating park process to park vehicle at objective position | |
EP3721371A1 (en) | Method for position determination for a vehicle, controller and vehicle | |
DE102013021840A1 (en) | Method for generating an environment model of a motor vehicle, driver assistance system and motor vehicle | |
DE102015006569A1 (en) | Method for image-based recognition of the road type | |
DE102012024879A1 (en) | Driver assistance system for at least partially decelerating a motor vehicle, motor vehicle and corresponding method | |
DE102011055441A1 (en) | Method for determining spacing between preceding and forthcoming motor cars by using mono camera in e.g. adaptive cruise control system, involves determining spacing between cars based on information about license plate number | |
DE102010049214A1 (en) | Method for determining lane course for vehicle for e.g. controlling lane assistance device, involves determining global surrounding data from fusion of lane course data with card data, and determining lane course of vehicle from global data | |
DE102013022050A1 (en) | Method for tracking a target vehicle, in particular a motorcycle, by means of a motor vehicle, camera system and motor vehicle | |
DE102018202753A1 (en) | Method for determining a distance between a motor vehicle and an object | |
DE102021107904A1 (en) | Method and system for determining ground level with an artificial neural network | |
DE102008042726A1 (en) | Method for detecting objects of surrounding e.g. traffic signs, of car, involves realizing projected image of sensors with actual detected image of sensors, and keeping actual three-dimensional module of environment based on adjustments | |
DE102019102561A1 (en) | Process for recognizing a plaster marking | |
DE102017214973A1 (en) | Method and apparatus for image based object identification for a vehicle | |
DE102019001610A1 (en) | Method for operating a motor vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
R012 | Request for examination validly filed | ||
R016 | Response to examination communication | ||
R018 | Grant decision by examination section/examining division | ||
R020 | Patent grant now final |