EP2396746A2 - Method for detecting objects - Google Patents
Method for detecting objects
- Publication number
- EP2396746A2 (Application EP10703018A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- determined
- segments
- segment
- height
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Definitions
- the invention relates to a method for object detection according to the preamble of claim 1.
- a distance image is determined by means of a sensor system via horizontal and vertical angles, wherein a depth map of an environment is determined from the distance image.
- a free space boundary line is identified that bounds an obstacle-free area of the environment; the depth map is segmented outside and along the free space boundary line by forming segments of suitable, equal width from pixels of equal or similar distance to a plane, and a height of each segment is estimated as part of an object located outside the obstacle-free area, so that each segment is characterized by a two-dimensional position of a foot point (e.g. given by distance and angle to a vehicle longitudinal axis) and by its height.
- the three-dimensional environment described by the distance image and the depth map is approximated by the obstacle-free area (also called the free space area).
- the obstacle-free area is, for example, a drivable area, which, however, does not necessarily have to be planar.
- the obstacle-free area is limited by the rod-like segments, which in their entirety model the objects surrounding the obstacle-free area. In the simplest case these segments stand on the ground and approximate a mean height of the object in the region of the respective segment. Objects of variable height, such as a cyclist seen from the side, are thus described by a piecewise constant height function.
- the resulting segments represent a compact and robust representation of the objects and require only a limited amount of data, regardless of the density of the stereo correspondence analysis used to create the depth map.
- Location and height are stored for each stixel. This representation is optimally suited for any subsequent steps, such as object formation and scene interpretation.
- the stixel representation represents an ideal interface between application-independent stereo analysis and application-specific evaluations.
- Fig. 1 is a two-dimensional representation of an environment with a free space boundary line and a number of segments for modeling objects in the environment.
- FIG. 1 shows a two-dimensional representation of an environment 1 with a free-space delimitation line 2 and a number of segments 3 for modeling objects 4.1 to 4.6 in the environment 1.
- the segments 3, or stixels, model the objects 4.1 to 4.6 that bound the free driving space defined by the free space boundary line 2.
- a method is used in which two images of an environment are recorded and a disparity image is determined by means of stereo image processing.
- for stereo image processing, for example, the method described in [H. Hirschmüller: "Accurate and efficient stereo processing by semi-global matching and mutual information", CVPR 2005, San Diego, CA, Volume 2 (June 2005), pp. 807-814] can be used.
- a depth map of the environment is determined, for example as described in [H. Badino, U. Franke, R. Mester: "Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming", Workshop on Dynamic Vision, ICCV 2007, Rio de Janeiro, Brazil].
- the free space boundary line 2 is identified which delimits the obstacle-free area of the surroundings 1.
- the depth map is segmented by forming the segments 3 with a predetermined width of pixels of equal or similar distance to an image plane of a camera or multiple cameras.
- the segmentation may be accomplished, for example, by the method described in [H. Badino, U. Franke, R. Mester: "Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming", Workshop on Dynamic Vision, ICCV 2007, Rio de Janeiro, Brazil].
- An approximation of the found free space boundary line 2 by segments 3 (bars, stixels) of predetermined width (any width may be specified) provides the distance of the segments; with a known orientation of the camera relative to the environment (for example, a road in front of a vehicle on which the camera is mounted) and a known 3D course, a respective foot point of the segments 3 in the image results.
- each segment 3 is characterized by a two-dimensional position of a foot point and its height.
- Height estimation is most easily accomplished by histogram-based analysis of all 3D points in the segment area. This step can be solved by dynamic programming.
- Areas without segments 3 are those in which the free space analysis found no objects.
- Multiple images can be sequentially acquired and processed, and from changes in the depth map and disparity image, motion information can be extracted and assigned to segments 3.
- moving scenes can also be represented and, for example, used to predict an expected movement of the objects 4.1 to 4.6.
- This kind of motion tracking is also called tracking.
- the vehicle's own motion (ego-motion) can be determined and used for compensation.
- the compactness and robustness of the segments 3 results from the integration of many pixels in the area of the segment 3 and - in the tracking variant - from the additional integration over time.
- the membership of each of the segments 3 to one of the objects 4.1 to 4.6 can also be stored with the remaining information about each segment. However, this is not mandatory.
- the movement information can be obtained, for example, by integration of the optical flow, so that a real movement can be estimated for each of the segments 3.
- Corresponding methods are known, for example, from work on 6D vision published in DE 102005008131 A1. This motion information further simplifies the grouping into objects, since movements can be checked for compatibility.
- the position of the foot point, the height and the motion information of the segment 3 can be determined by means of Scene Flow.
- scene flow is a class of methods that attempts to determine, from at least two consecutive stereo image pairs, the correct motion in space plus the 3D position for as many pixels as possible; see [Sundar Vedula, Simon Baker, Peter Rander, Robert Collins, and Takeo Kanade, "Three-Dimensional Scene Flow", Proc. 7th International Conference on Computer Vision (ICCV), Corfu, Greece, September 1999].
- information for a driver assistance system can be generated in a vehicle on which the cameras are arranged.
- a remaining time until collision of the vehicle with an object 4.1 to 4.6 formed by segments 3 can be estimated.
- a driving corridor 5 to be used by the vehicle can be placed in the obstacle-free area, a lateral distance of at least one of the objects 4.1 to 4.6 from the driving corridor 5 being determined.
- Information from other sensors can be combined with the driver assistance system information associated with segments 3 (sensor fusion).
- in particular, active sensors such as a lidar are suitable for this purpose.
- the segments 3 have unique neighborhood relationships, which makes them very easy to group into objects 4.1 to 4.6. In the simplest case, only distance and height need to be transmitted for each segment 3; with a known segment width, the angle (the columns in the image) follows from an index.
- the distance image can be determined by means of any sensor system over horizontal and vertical angles, the depth map of the environment being determined from the distance image.
- two images of the surroundings (1) can each be recorded by means of one camera and a disparity image can be determined by means of stereo image processing, the distance image and the depth map being determined from the disparities determined.
- a photonic mixer device and / or a three-dimensional camera and / or a lidar and / or a radar can be used as the sensor system.
Abstract
The invention relates to a method for detecting objects, wherein two images of surroundings (1) are recorded and a disparity image is determined by means of stereo image processing, wherein a depth map of the surroundings (1) is determined from the determined disparities, wherein a free space delimiting line (2) is identified which delimits an unobstructed region of the surroundings (1), wherein, outside of and along the free space delimiting line (2), the depth map is segmented by forming segments (3) of suitable, equal width from pixels of equal or similar distance to an image plane, and wherein a height of each segment (3) is estimated as part of an object (4.1 to 4.6) located outside of the unobstructed region, such that each segment (3) is characterized by the two-dimensional position of its foot point (for example, distance and angle to the longitudinal axis of the vehicle) and by its height.
Description
Method for object detection
The invention relates to a method for object detection according to the preamble of claim 1.
Modern stereo methods, but also distance-measuring sensors such as PMD sensors, lidar or high-resolution radars, generate a three-dimensional image of the environment. The relevant objects, for example stationary or moving obstacles, have to be extracted from these data. In practice, the step from raw data to objects proves to be very large and often leads to many special heuristic solutions. For further processing, a compact abstraction with small amounts of data is therefore sought.
Known methods of stereo image processing work with non-dense stereo maps and extract objects directly using heuristics deemed suitable. An abstraction level that supports this step generally does not exist.
From US 2007/0274566 A1, a method for the detection of pedestrians is known in which an image of a scene in front of a vehicle is recorded. Subsequently, a speed and a direction of pixels each representing characteristic points are calculated in the image. The coordinates of the pixels determined in this way are converted to a plan view. It is determined whether the characteristic points represent a two-dimensional or a three-dimensional object. If it is a three-dimensional object, it is determined whether the object is moving. Based on the changes in the speed at which the object moves, it is determined whether the object is a pedestrian. During the extraction of the characteristic points, edges of the objects are detected and eroded, and thus the center of each edge is determined. The eroded edge is subsequently expanded again so that the edge has a predetermined width, for example three pixels, so that all object edges have the same width.
It is an object of the invention to provide an improved method for object detection.
According to the invention, this object is achieved by a method having the features of claim 1.
Advantageous developments are the subject of the dependent claims.
In a method according to the invention for object detection, a distance image is determined by means of a sensor system over horizontal and vertical angles, and a depth map of an environment is determined from the distance image. According to the invention, a free space boundary line is identified that bounds an obstacle-free area of the environment, and the depth map is segmented outside and along the free space boundary line by forming segments of suitable, equal width from pixels of equal or similar distance to a plane. A height of each segment is estimated as part of an object located outside the obstacle-free area, so that each segment is characterized by a two-dimensional position of a foot point (e.g. given by distance and angle to a vehicle longitudinal axis) and by its height.
The three-dimensional environment described by the distance image and the depth map is approximated by the obstacle-free area (also called the free space area). The obstacle-free area is, for example, a drivable area, which, however, does not necessarily have to be planar. The obstacle-free area is limited by the rod-like segments, which in their entirety model the objects surrounding the obstacle-free area. In the simplest case these segments stand on the ground and approximate a mean height of the object in the region of the respective segment. Objects of variable height, such as a cyclist seen from the side, are thus described by a piecewise constant height function.
The segments obtained in this way (also called stixels) represent a compact and robust representation of the objects and require only a limited amount of data, regardless of the density of the stereo correspondence analysis used to create the depth map. Location and height are stored for each stixel. This representation is optimally suited for any subsequent steps, such as object formation and scene interpretation. The stixel representation constitutes an ideal interface between application-independent stereo analysis and application-specific evaluations.
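The stixel interface just described (a two-dimensional foot-point position plus a height per segment) can be sketched as a minimal data structure. The field names below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Stixel:
    """One vertical segment: 2D foot-point position plus height."""
    distance_m: float   # distance of the foot point from the vehicle
    angle_rad: float    # angle to the vehicle longitudinal axis
    height_m: float     # estimated object height in this column

# A whole scene is just a list of stixels: a compact representation
# of all obstacles, independent of the density of the stereo data.
scene = [Stixel(12.5, -0.10, 1.6), Stixel(12.4, -0.05, 1.7)]
print(len(scene))
```

Any subsequent step (object formation, scene interpretation) then operates on this small list rather than on the full depth map.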
In the following, an embodiment of the invention is explained in more detail with reference to a drawing.
The drawing shows:
Fig. 1 a two-dimensional representation of an environment with a free space boundary line and a number of segments for modeling objects in the environment.
FIG. 1 shows a two-dimensional representation of an environment 1 with a free space boundary line 2 and a number of segments 3 for modeling objects 4.1 to 4.6 in the environment 1. The segments 3, or stixels, model the objects 4.1 to 4.6 that bound the free driving space defined by the free space boundary line 2.
To create the representation shown, a method is used in which two images of an environment are recorded in each case and a disparity image is determined by means of stereo image processing. For stereo image processing, for example, the method described in [H. Hirschmüller: "Accurate and efficient stereo processing by semi-global matching and mutual information", CVPR 2005, San Diego, CA, Volume 2 (June 2005), pp. 807-814] can be used.
From the disparities determined, a depth map of the environment is computed, for example as described in [H. Badino, U. Franke, R. Mester: "Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming", Workshop on Dynamic Vision, ICCV 2007, Rio de Janeiro, Brazil].
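The relation between a measured disparity and depth underlying such a depth map is the standard pinhole-stereo formula Z = f * B / d. The focal length and baseline values in the example are illustrative assumptions, not values from the patent:

```python
def disparity_to_depth(d_px, focal_px, baseline_m):
    """Standard pinhole-stereo relation: depth Z = f * B / d."""
    if d_px <= 0:
        return float("inf")   # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / d_px

# e.g. 820 px focal length, 0.30 m baseline, 10 px disparity -> 24.6 m
print(disparity_to_depth(10.0, 820.0, 0.30))
```

Larger disparities thus map to nearer points, which is why near obstacles are resolved more finely than distant ones.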
The free space boundary line 2 is identified, which delimits the obstacle-free area of the surroundings 1. Outside of and along the free space boundary line 2, the depth map is segmented by forming the segments 3 with a predetermined width from pixels of equal or similar distance to an image plane of one camera or of several cameras.
The segmentation can be carried out, for example, by means of the method described in [H. Badino, U. Franke, R. Mester: "Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming", Workshop on Dynamic Vision, ICCV 2007, Rio de Janeiro, Brazil].
An approximation of the found free space boundary line 2 by segments 3 (bars, stixels) of predetermined width (any width may be specified) provides the distance of the segments; with a known orientation of the camera relative to the environment (for example, a road in front of a vehicle on which the camera is mounted) and a known 3D course, a respective foot point of the segments 3 in the image results.
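As a rough sketch of this approximation step, the per-column free-space distances along the boundary line can be collapsed into fixed-width segments, for example by averaging. This is a simplified illustration under that averaging assumption, not the patent's exact procedure:

```python
def boundary_to_segments(column_dist, width_px=5):
    """Collapse a per-column free-space boundary (one distance per image
    column) into fixed-width segments by averaging the distances."""
    segments = []
    for i in range(0, len(column_dist), width_px):
        group = column_dist[i:i + width_px]
        segments.append(sum(group) / len(group))
    return segments

# ten columns, 5-px segments: one stixel at ~10 m, one at ~20 m
dists = [10.0, 10.2, 9.8, 10.0, 10.0, 20.0, 20.0, 20.0, 20.0, 20.0]
print(boundary_to_segments(dists))  # -> [10.0, 20.0]
```

Each resulting distance, together with the known camera orientation, fixes the foot point of one segment in the image.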
Subsequently, a height of each segment 3 is estimated, so that each segment 3 is characterized by a two-dimensional position of a foot point and its height.
The height can be estimated most easily by a histogram-based evaluation of all 3D points in the area of the segment. This step can be solved by dynamic programming.
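One plausible reading of such a histogram-based height estimate can be sketched as follows. The bin size and the outlier threshold are assumptions made for illustration; the patent does not specify them:

```python
def estimate_height(point_heights, bin_m=0.25):
    """Histogram the heights of all 3D points in a segment column and
    return the top of the highest sufficiently populated bin."""
    if not point_heights:
        return 0.0
    bins = {}
    for h in point_heights:
        b = int(h // bin_m)
        bins[b] = bins.get(b, 0) + 1
    # ignore sparsely populated bins (likely stereo mismatches / outliers)
    threshold = max(1, len(point_heights) // 20)
    top_bin = max(b for b, n in bins.items() if n >= threshold)
    return (top_bin + 1) * bin_m

# 100 points around 1.5 m, two outliers at 5.0 m -> outliers are ignored
print(estimate_height([1.5] * 100 + [5.0] * 2))  # -> 1.75
```

Pooling many points per segment in this way is what makes the height estimate robust against individual stereo errors.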
Areas without segments 3 are those in which the free space analysis found no objects.
Multiple images can be acquired and processed sequentially, whereby motion information can be extracted from changes in the depth map and in the disparity image and assigned to the segments 3. In this way, moving scenes can also be represented and used, for example, to predict an expected movement of the objects 4.1 to 4.6. This kind of motion estimation is also called tracking. In this case, to determine the movement of the segments 3, the vehicle's own motion can be determined and used for compensation. The compactness and robustness of the segments 3 results from the integration of many pixels in the area of the segment 3 and, in the tracking variant, from the additional integration over time.
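The ego-motion compensation mentioned above amounts to subtracting the vehicle's own velocity from the velocity observed for a segment, leaving the object's real motion over ground. A minimal sketch under that assumption:

```python
def compensate_ego_motion(observed_v, ego_v):
    """Subtract the vehicle's own velocity from the velocity observed
    for a segment to obtain the object's real motion over ground.
    Velocities are (lateral, longitudinal) tuples in m/s."""
    return tuple(o - e for o, e in zip(observed_v, ego_v))

# A segment that appears to approach at 12 m/s while the vehicle drives
# at 15 m/s is actually moving away at 3 m/s in the same direction.
print(compensate_ego_motion((0.0, -12.0), (0.0, -15.0)))  # -> (0.0, 3.0)
```

Only after this compensation can compatible movements be used to group segments into moving objects.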
The membership of each of the segments 3 in one of the objects 4.1 to 4.6 can also be stored with the remaining information about each segment. However, this is not mandatory.
The motion information can be obtained, for example, by integration of the optical flow, so that a real movement can be estimated for each of the segments 3. Corresponding methods are known, for example, from work on 6D vision published in DE 102005008131 A1. This motion information further simplifies the grouping into objects, since movements can be checked for compatibility.
The position of the foot point, the height and the motion information of the segment 3 can be determined by means of scene flow. Scene flow is a class of methods that attempts to determine, from at least two consecutive stereo image pairs, the correct motion in space plus the 3D position for as many pixels as possible; see [Sundar Vedula, Simon Baker, Peter Rander, Robert Collins, and Takeo Kanade, "Three-Dimensional Scene Flow", Proc. 7th International Conference on Computer Vision (ICCV), Corfu, Greece, September 1999].
On the basis of the identified segments 3, information for a driver assistance system can be generated in a vehicle on which the cameras are arranged.
For example, a remaining time until collision of the vehicle with an object 4.1 to 4.6 formed by segments 3 can be estimated.
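Under a constant-closing-speed assumption, such a time-to-collision estimate reduces to distance divided by closing speed. A minimal sketch of that simplification, not the patent's exact computation:

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Constant-velocity time-to-collision estimate for a tracked
    segment: remaining time = distance / closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")   # object not approaching
    return distance_m / closing_speed_mps

# An obstacle 30 m ahead, approached at 10 m/s -> 3 s remain
print(time_to_collision(30.0, 10.0))  # -> 3.0
```

The distance comes directly from the segment's foot point; the closing speed comes from the tracking described above.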
Furthermore, a driving corridor 5 to be used by the vehicle can be placed in the obstacle-free area, a lateral distance of at least one of the objects 4.1 to 4.6 from the driving corridor 5 being determined.
Likewise, critical, in particular moving, objects 4.1 to 4.6 can be identified to support a turn assist system and/or an automatic headlight control and/or a pedestrian protection system and/or an emergency braking system.
Information from other sensors can be combined with the information supporting the driver assistance system that is assigned to the segments 3 (sensor fusion). In particular, active sensors such as a lidar are suitable for this purpose.
The width of the segments 3 can, for example, be set to five pixels. For an image with VGA resolution, this results in at most 640/5 = 128 segments, which are uniquely described by distance and height. The segments 3 have unique neighborhood relationships, which makes them very easy to group into objects 4.1 to 4.6. In the simplest case, only distance and height need to be transmitted for each segment 3; with a known segment width, the angle (the columns in the image) follows from an index.
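The index-to-angle relationship described here can be sketched as follows; the focal length and principal point below are illustrative pinhole-camera assumptions, not values from the patent:

```python
import math

def stixel_column(index, width_px=5):
    """Central image column of the stixel with the given index; with a
    known segment width, the column (and thus the angle) need not be
    transmitted explicitly."""
    return index * width_px + width_px // 2

def column_to_angle(u, focal_px=820.0, principal_u=320.0):
    """Pinhole model: viewing angle of image column u relative to the
    optical axis (assumed camera parameters)."""
    return math.atan2(u - principal_u, focal_px)

# VGA image, 5-px stixels: 640 / 5 = 128 segments
print(640 // 5)           # -> 128
print(stixel_column(0))   # -> 2
```

Transmitting only (distance, height) per index therefore suffices to reconstruct the full two-dimensional foot-point position on the receiving side.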
The distance image can be determined by means of any sensor system over horizontal and vertical angles, the depth map of the environment being determined from the distance image.
In particular, two images of the surroundings 1 can each be recorded by means of one camera and a disparity image can be determined by means of stereo image processing, the distance image and the depth map being determined from the disparities determined.
Likewise, a photonic mixer device and/or a three-dimensional camera and/or a lidar and/or a radar can be used as the sensor system.
List of reference signs
1 environment
2 free space boundary line
3 segment
4.1 to 4.6 object
5 driving corridor
Claims
1. A method for object detection, in which a distance image over horizontal and vertical angles is determined by means of a sensor system, a depth map of an environment (1) being determined from the distance image, characterized in that a free space boundary line (2) is identified in the distance image, which bounds an obstacle-free area of the environment (1), wherein the depth map is segmented outside of and along the free space boundary line (2) by forming segments (3) of equal width from pixels of equal or similar distance to a plane, a height of each segment (3) being estimated as part of an object (4.1 to 4.6) located outside the obstacle-free area, so that each segment (3) is characterized by a two-dimensional position of a foot point and its height.
2. The method according to claim 1, characterized in that two images of the environment (1) are recorded, each by means of a camera, and a disparity image is determined by means of stereo image processing, the distance image and the depth map being determined from the determined disparities.
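The conversion from disparity to distance underlying claim 2 follows standard stereo triangulation, Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal helper, with parameter values in the usage note chosen purely for illustration:

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Triangulate metric distance from a stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed focal length of 1000 px and a 0.3 m baseline, a 6 px disparity corresponds to a distance of 50 m; applying this per pixel of the disparity image yields the distance image and depth map of the claim.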
3. The method according to claim 1, characterized in that a photonic mixer device and/or a three-dimensional camera and/or a lidar and/or a radar is used as the sensor system.
4. The method according to any one of claims 1 to 3, characterized in that a plurality of distance images are sequentially determined and processed, motion information being extracted from changes in the depth map and assigned to the segments (3).
5. The method according to claim 4, characterized in that the motion information is obtained by integration of the optical flow.
6. The method according to any one of claims 1 to 5, characterized in that the affiliation of the segments (3) with one of the objects (4.1 to 4.6) is determined and the segments (3) are provided with information about their affiliation with one of the objects (4.1 to 4.6).
7. The method according to any one of claims 4 to 6, characterized in that the position of the foot point, the height and the motion information of the segment (3) are determined by means of scene flow.
8. The method according to any one of the preceding claims, characterized in that the height of the segment (3) is determined by means of a histogram-based evaluation of all three-dimensional points in the region of the segment (3).
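The histogram-based height evaluation of claim 8 can be sketched as follows. The bin size and the minimum support count are assumed values for illustration; the idea is that voting point heights into bins suppresses isolated outliers above the true object top.

```python
import numpy as np

def segment_height(point_heights, bin_size=0.1, min_count=3):
    """Estimate a segment's height from a histogram of 3-D point heights.

    point_heights: heights (in metres, above the ground plane) of all
        three-dimensional points falling into the segment's image region.
    Returns the upper edge of the highest bin that still holds at least
    `min_count` points, so single stray points do not inflate the height.
    """
    heights = np.asarray(point_heights, dtype=float)
    edges = np.arange(0.0, heights.max() + bin_size, bin_size)
    counts, edges = np.histogram(heights, bins=edges)
    supported = np.nonzero(counts >= min_count)[0]
    if supported.size == 0:
        return 0.0
    return float(edges[supported[-1] + 1])
```

With points densely covering an object up to 1.5 m and a single outlier at 3.0 m, the estimate stays at roughly 1.5 m.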
9. The method according to any one of the preceding claims, characterized in that, on the basis of the identified segments (3), information is generated for a driver assistance system in a vehicle on which cameras for recording the images are arranged.
10. The method according to claim 9, characterized in that a remaining time until the collision of the vehicle with an object (4.1 to 4.6) formed by segments (3) is estimated.
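A simple constant-speed estimate of the remaining time to collision in claim 10 divides the object's foot-point distance by the closing speed. This is only one possible sketch under a constant-velocity assumption; the claim itself does not fix the estimation model.

```python
def time_to_collision(distance_m, ego_speed_mps, object_speed_mps=0.0):
    """Remaining time (s) until collision with a segment-backed object.

    Assumes constant speeds along the ego vehicle's longitudinal axis;
    a positive object speed means the object moves in the ego direction
    (i.e. away from the vehicle), reducing the closing speed.
    """
    closing = ego_speed_mps - object_speed_mps
    if closing <= 0:
        return float("inf")  # object recedes: no collision on current course
    return distance_m / closing
```

For instance, an object 30 m ahead moving at 5 m/s, approached at 15 m/s, yields a 3 s time to collision.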
11. The method according to claim 9 or 10, characterized in that a driving corridor (5) is placed in the obstacle-free region, a lateral distance of at least one of the objects (4.1 to 4.6) from the driving corridor (5) being determined.
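The lateral distance of claim 11 reduces, for a straight corridor, to the offset of an object's foot point beyond the nearer corridor edge. A minimal sketch with an assumed straight corridor and an assumed default width; the claim is not limited to this geometry.

```python
def lateral_clearance(object_y_m, corridor_center_y_m=0.0, corridor_width_m=3.5):
    """Lateral distance from an object's foot point to the driving corridor.

    object_y_m: lateral position of the object's foot point (metres,
        positive to one side of the vehicle's longitudinal axis).
    Returns 0.0 if the object lies inside the corridor, otherwise the
    gap between the object and the nearer corridor edge.
    """
    offset = abs(object_y_m - corridor_center_y_m)
    half_width = corridor_width_m / 2.0
    return max(0.0, offset - half_width)
```

An object 3.0 m to the side of a 3.5 m corridor thus has a 1.25 m clearance, while an object inside the corridor has none.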
12. The method according to any one of claims 9 to 11, characterized in that critical objects (4.1 to 4.6) are identified to support a turn assistance system and/or an automatic headlight control and/or a pedestrian protection system and/or an emergency braking system, and/or a driver is supported when driving on narrow lanes.
13. The method according to any one of claims 9 to 12, characterized in that information from further sensors is combined with the information assigned to the segments (3) to support the driver assistance system.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102009009047A DE102009009047A1 (en) | 2009-02-16 | 2009-02-16 | Method for object detection |
PCT/EP2010/000671 WO2010091818A2 (en) | 2009-02-16 | 2010-02-04 | Method for detecting objects |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2396746A2 (en) | 2011-12-21
Family
ID=42338731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10703018A Withdrawn EP2396746A2 (en) | 2009-02-16 | 2010-02-04 | Method for detecting objects |
Country Status (5)
Country | Link |
---|---|
US (1) | US8548229B2 (en) |
EP (1) | EP2396746A2 (en) |
CN (1) | CN102317954B (en) |
DE (1) | DE102009009047A1 (en) |
WO (1) | WO2010091818A2 (en) |
Families Citing this family (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10776635B2 (en) | 2010-09-21 | 2020-09-15 | Mobileye Vision Technologies Ltd. | Monocular cued detection of three-dimensional structures from depth images |
JP5316572B2 (en) * | 2011-03-28 | 2013-10-16 | トヨタ自動車株式会社 | Object recognition device |
DE102011111440A1 (en) | 2011-08-30 | 2012-06-28 | Daimler Ag | Method for representation of environment of vehicle, involves forming segments of same width from image points of equal distance in one of image planes, and modeling objects present outside free space in environment |
CN103164851B (en) * | 2011-12-09 | 2016-04-20 | 株式会社理光 | Lane segmentation object detecting method and device |
DE102012000459A1 (en) | 2012-01-13 | 2012-07-12 | Daimler Ag | Method for detecting object e.g. vehicle in surrounding area, involves transforming segments with classification surfaces into two-dimensional representation of environment, and searching and classifying segments in representation |
WO2013109869A1 (en) | 2012-01-20 | 2013-07-25 | Magna Electronics, Inc. | Vehicle vision system with free positional virtual panoramic view |
US8824733B2 (en) | 2012-03-26 | 2014-09-02 | Tk Holdings Inc. | Range-cued object segmentation system and method |
US8768007B2 (en) | 2012-03-26 | 2014-07-01 | Tk Holdings Inc. | Method of filtering an image |
CN103390164B (en) * | 2012-05-10 | 2017-03-29 | 南京理工大学 | Method for checking object based on depth image and its realize device |
TWI496090B (en) * | 2012-09-05 | 2015-08-11 | Ind Tech Res Inst | Method and apparatus for object positioning by using depth images |
US9349058B2 (en) | 2012-10-31 | 2016-05-24 | Tk Holdings, Inc. | Vehicular path sensing system and method |
DE102012021617A1 (en) | 2012-11-06 | 2013-05-16 | Daimler Ag | Method for segmentation and recognition of object e.g. cyclist around vehicle, involves forming classifiers for classification of objects based on the frequency statistics and object models |
US20140139632A1 (en) * | 2012-11-21 | 2014-05-22 | Lsi Corporation | Depth imaging method and apparatus with adaptive illumination of an object of interest |
CN103871042B (en) * | 2012-12-12 | 2016-12-07 | 株式会社理光 | Continuous object detecting method and device in parallax directions based on disparity map |
WO2014152470A2 (en) | 2013-03-15 | 2014-09-25 | Tk Holdings, Inc. | Path sensing using structured lighting |
CN104723953A (en) * | 2013-12-18 | 2015-06-24 | 青岛盛嘉信息科技有限公司 | Pedestrian detecting device |
GB201407643D0 (en) * | 2014-04-30 | 2014-06-11 | Tomtom Global Content Bv | Improved positioning relatie to a digital map for assisted and automated driving operations |
US10024965B2 (en) | 2015-04-01 | 2018-07-17 | Vayavision, Ltd. | Generating 3-dimensional maps of a scene using passive and active measurements |
US9928430B2 (en) * | 2015-04-10 | 2018-03-27 | GM Global Technology Operations LLC | Dynamic stixel estimation using a single moving camera |
KR102650541B1 (en) * | 2015-08-03 | 2024-03-26 | 톰톰 글로벌 콘텐트 비.브이. | Method and system for generating and using location reference data |
EP3324359B1 (en) | 2015-08-21 | 2019-10-02 | Panasonic Intellectual Property Management Co., Ltd. | Image processing device and image processing method |
US9761000B2 (en) * | 2015-09-18 | 2017-09-12 | Qualcomm Incorporated | Systems and methods for non-obstacle area detection |
US10482331B2 (en) * | 2015-11-20 | 2019-11-19 | GM Global Technology Operations LLC | Stixel estimation methods and systems |
CN106909141A (en) * | 2015-12-23 | 2017-06-30 | 北京机电工程研究所 | Obstacle detection positioner and obstacle avoidance system |
KR101795270B1 (en) | 2016-06-09 | 2017-11-07 | 현대자동차주식회사 | Method and Apparatus for Detecting Side of Object using Information for Ground Boundary of Obstacle |
CN105974938B (en) * | 2016-06-16 | 2023-10-03 | 零度智控(北京)智能科技有限公司 | Obstacle avoidance method and device, carrier and unmanned aerial vehicle |
US10321114B2 (en) * | 2016-08-04 | 2019-06-11 | Google Llc | Testing 3D imaging systems |
EP3293668B1 (en) | 2016-09-13 | 2023-08-30 | Arriver Software AB | A vision system and method for a motor vehicle |
US10535142B2 (en) | 2017-01-10 | 2020-01-14 | Electronics And Telecommunication Research Institute | Method and apparatus for accelerating foreground and background separation in object detection using stereo camera |
US10445928B2 (en) | 2017-02-11 | 2019-10-15 | Vayavision Ltd. | Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types |
US10474908B2 (en) * | 2017-07-06 | 2019-11-12 | GM Global Technology Operations LLC | Unified deep convolutional neural net for free-space estimation, object detection and object pose estimation |
JP6970577B2 (en) * | 2017-09-29 | 2021-11-24 | 株式会社デンソー | Peripheral monitoring device and peripheral monitoring method |
DE102017123984A1 (en) | 2017-10-16 | 2017-11-30 | FEV Europe GmbH | Driver assistance system with a nanowire for detecting an object in an environment of a vehicle |
DE102017123980A1 (en) | 2017-10-16 | 2017-11-30 | FEV Europe GmbH | Driver assistance system with a frequency-controlled alignment of a transmitter for detecting an object in an environment of a vehicle |
DE102018202244A1 (en) | 2018-02-14 | 2019-08-14 | Robert Bosch Gmbh | Method for imaging the environment of a vehicle |
DE102018202753A1 (en) | 2018-02-23 | 2019-08-29 | Audi Ag | Method for determining a distance between a motor vehicle and an object |
DE102018114987A1 (en) | 2018-06-21 | 2018-08-09 | FEV Europe GmbH | Driver assistance system for determining a color of an object in a vehicle environment |
DE102018005969A1 (en) | 2018-07-27 | 2020-01-30 | Daimler Ag | Method for operating a driver assistance system with two detection devices |
DE102018214875A1 (en) * | 2018-08-31 | 2020-03-05 | Audi Ag | Method and arrangement for generating an environmental representation of a vehicle and vehicle with such an arrangement |
DE102018128538A1 (en) | 2018-11-14 | 2019-01-24 | FEV Europe GmbH | Driver assistance system with a transmitter with a frequency-controlled emission direction and a frequency matching converter |
DE102019107310A1 (en) | 2019-03-21 | 2019-06-19 | FEV Europe GmbH | Driver assistance system for detecting foreign signals |
DE102019211582A1 (en) * | 2019-08-01 | 2021-02-04 | Robert Bosch Gmbh | Procedure for creating an elevation map |
CN110659578A (en) * | 2019-08-26 | 2020-01-07 | 中国电子科技集团公司电子科学研究院 | Passenger flow statistical method, system and equipment based on detection and tracking technology |
US11669092B2 (en) * | 2019-08-29 | 2023-06-06 | Rockwell Automation Technologies, Inc. | Time of flight system and method for safety-rated collision avoidance |
EP3882813A1 (en) | 2020-03-20 | 2021-09-22 | Aptiv Technologies Limited | Method for generating a dynamic occupancy grid |
EP3905105A1 (en) | 2020-04-27 | 2021-11-03 | Aptiv Technologies Limited | Method for determining a collision free space |
EP3905106A1 (en) | 2020-04-27 | 2021-11-03 | Aptiv Technologies Limited | Method for determining a drivable area |
DE102020208066B3 (en) | 2020-06-30 | 2021-12-23 | Robert Bosch Gesellschaft mit beschränkter Haftung | Process object recognition Computer program, storage medium and control device |
DE102020208068A1 (en) | 2020-06-30 | 2021-12-30 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for recognizing an object appearing in a surveillance area, computer program, storage medium and control device |
CA3125716C (en) | 2020-07-21 | 2024-04-09 | Leddartech Inc. | Systems and methods for wide-angle lidar using non-uniform magnification optics |
EP4185888A1 (en) | 2020-07-21 | 2023-05-31 | Leddartech Inc. | Beam-steering device particularly for lidar systems |
EP4185924A1 (en) | 2020-07-21 | 2023-05-31 | Leddartech Inc. | Beam-steering device particularly for lidar systems |
DE102020210816A1 (en) * | 2020-08-27 | 2022-03-03 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for detecting three-dimensional objects, computer program, machine-readable storage medium, control unit, vehicle and video surveillance system |
EP4009228A1 (en) * | 2020-12-02 | 2022-06-08 | Aptiv Technologies Limited | Method for determining a semantic free space |
DE102022115447A1 (en) | 2022-06-21 | 2023-12-21 | Bayerische Motoren Werke Aktiengesellschaft | Method and assistance system for supporting vehicle guidance based on a driving route and a boundary estimate and motor vehicle |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9913687D0 (en) * | 1999-06-11 | 1999-08-11 | Canon Kk | Image processing apparatus |
CN1343551A (en) * | 2000-09-21 | 2002-04-10 | 上海大学 | Hierarchical modular model for robot's visual sense |
US7822241B2 (en) * | 2003-08-21 | 2010-10-26 | Koninklijke Philips Electronics N.V. | Device and method for combining two images |
WO2006011593A1 (en) * | 2004-07-30 | 2006-02-02 | Matsushita Electric Works, Ltd. | Individual detector and accompaniment detection device |
KR100778904B1 (en) * | 2004-09-17 | 2007-11-22 | 마츠시다 덴코 가부시키가이샤 | A range image sensor |
DE102005008131A1 (en) | 2005-01-31 | 2006-08-03 | Daimlerchrysler Ag | Object e.g. road sign, detecting method for use with e.g. driver assistance system, involves determining position and movement of relevant pixels using filter and combining relevant pixels to objects under given terms and conditions |
JP4797794B2 (en) | 2006-05-24 | 2011-10-19 | 日産自動車株式会社 | Pedestrian detection device and pedestrian detection method |
US8385599B2 (en) * | 2008-10-10 | 2013-02-26 | Sri International | System and method of detecting objects |
2009
- 2009-02-16 DE DE102009009047A patent/DE102009009047A1/en not_active Withdrawn
2010
- 2010-02-04 US US13/201,241 patent/US8548229B2/en not_active Expired - Fee Related
- 2010-02-04 EP EP10703018A patent/EP2396746A2/en not_active Withdrawn
- 2010-02-04 WO PCT/EP2010/000671 patent/WO2010091818A2/en active Application Filing
- 2010-02-04 CN CN201080007837.9A patent/CN102317954B/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See references of WO2010091818A2 * |
Also Published As
Publication number | Publication date |
---|---|
US20110311108A1 (en) | 2011-12-22 |
US8548229B2 (en) | 2013-10-01 |
CN102317954A (en) | 2012-01-11 |
CN102317954B (en) | 2014-09-24 |
WO2010091818A2 (en) | 2010-08-19 |
DE102009009047A1 (en) | 2010-08-19 |
WO2010091818A3 (en) | 2011-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2396746A2 (en) | Method for detecting objects | |
DE102013209415B4 (en) | Dynamic clue overlay with image cropping | |
DE60308782T2 (en) | Device and method for obstacle detection | |
DE102007002419B4 (en) | Vehicle environment monitoring device, method and program | |
DE69937699T2 (en) | Device for monitoring the environment of a vehicle | |
DE10030421B4 (en) | Vehicle environment monitoring system | |
EP2394234B1 (en) | Method and device for determining an applicable lane marker | |
DE10251880B4 (en) | Image recognition device | |
DE102005056645B4 (en) | Vehicle environment monitoring device | |
WO2013029722A2 (en) | Method for representing surroundings | |
DE102015203016B4 (en) | Method and device for optical self-localization of a motor vehicle in an environment | |
DE102012101014A1 (en) | Vehicle detection device | |
DE102017218366A1 (en) | METHOD AND PEDESTRIAN DETECTION APPROACH IN A VEHICLE | |
DE102015115012A1 (en) | Method for generating an environment map of an environment of a motor vehicle based on an image of a camera, driver assistance system and motor vehicle | |
DE112018007484T5 (en) | Obstacle detection device, automatic braking device using an obstacle detection device, obstacle detection method, and automatic braking method using an obstacle detection method | |
DE102012103908A1 (en) | Environment recognition device and environment recognition method | |
DE112018007485T5 (en) | Road surface detection device, image display device using a road surface detection device, obstacle detection device using a road surface detection device, road surface detection method, image display method using a road surface detection method, and obstacle detection method using a road surface detection method | |
DE102014112820A1 (en) | Vehicle exterior environment recognition device | |
DE102012000459A1 (en) | Method for detecting object e.g. vehicle in surrounding area, involves transforming segments with classification surfaces into two-dimensional representation of environment, and searching and classifying segments in representation | |
EP2023265A1 (en) | Method for recognising an object | |
DE102009022278A1 (en) | Obstruction-free area determining method for drive assistance system in vehicle, involves considering surface-measuring point on ground plane during formation of depth map such that evidence of obstruction-measuring points is reduced | |
DE102018121008A1 (en) | CROSS TRAFFIC RECORDING USING CAMERAS | |
DE102016218852A1 (en) | Detection of objects from images of a camera | |
WO2018059630A1 (en) | Detection and validation of objects from sequential images from a camera | |
DE102015211871A1 (en) | Object detection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20110702 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20160901 |