WO2015090691A1 - Method for generating an environment model of a motor vehicle, driver assistance system and motor vehicle - Google Patents
- Publication number
- WO2015090691A1 (PCT application PCT/EP2014/072969)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- objects
- motor vehicle
- image
- properties
- environment model
- Prior art date
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods involving models
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06T2207/30236—Traffic on road, railway or crossing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; vicinity of vehicle
- G06T2207/30256—Lane; road marking
- G06T2207/30261—Obstacle
- G06V2201/08—Detecting or categorising vehicles
Definitions
- The invention relates to a method for operating a driver assistance system of a motor vehicle, in which an image of an environmental region of the motor vehicle is provided by means of a camera of the driver assistance system and a plurality of objects in the image is detected by an image processing device.
- The invention also relates to a driver assistance system which is designed to carry out such a method, as well as a motor vehicle with such a driver assistance system.
- The interest is directed in particular to the tracking of target vehicles with the aid of a front camera of a motor vehicle.
- Front cameras for motor vehicles are already known from the prior art and usually capture images of a surrounding area in front of the motor vehicle.
- This sequence of images is processed by means of an electronic image processing device, which detects target objects in the images.
- For this purpose, the images are subjected to an object detection algorithm.
- Such object detection algorithms are already state of the art and are based, for example, on pattern recognition.
- Characteristic points can first be extracted from the image, and a target object can then be identified on the basis of these characteristic points.
- Known examples of such detection algorithms are AdaBoost and HOG-SVM.
- If a target object is identified in an image of the camera, this target object can also be tracked over the subsequent images of the sequence.
- The target object is detected in each image, whereby the detection in the current image must be assigned to the detection from the previous image.
- The current position of the target object in the image frame, and thus also the current relative position of the target object with respect to the motor vehicle, is thereby always known.
- As a tracking algorithm, for example, the Lucas-Kanade method can be used.
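As an illustration only (the patent gives no implementation), a single-window Lucas-Kanade step can be sketched in Python with NumPy; the synthetic blob frames and the one-window least-squares formulation are assumptions of this example:

```python
import numpy as np

def lucas_kanade_step(prev, curr):
    """Estimate the translation (dx, dy) of one tracking window between
    two frames using the classic Lucas-Kanade least-squares formulation.
    prev, curr: 2D float arrays of equal shape."""
    # Spatial gradients (central differences) and temporal gradient.
    Iy, Ix = np.gradient(prev)
    It = curr - prev
    # Stack the linear system A [dx, dy]^T = -It over all pixels.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic example: a smooth blob shifted by one pixel in x.
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
frame0 = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 40.0)
frame1 = np.exp(-((x - 16) ** 2 + (y - 15) ** 2) / 40.0)  # moved +1 px in x
dx, dy = lucas_kanade_step(frame0, frame1)
```

Real trackers apply this window-wise and pyramidally for larger motions; the sketch only shows the core least-squares displacement estimate.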
- Such a camera system with a front camera can be used as a collision warning system, by means of which the driver can be warned of a risk of collision with the target object.
- A collision warning system can output warning signals, for example, in order to inform the driver acoustically and/or visually and/or haptically about the detected risk of collision.
- The camera system can also be used as an automatic brake assist system, which is designed to perform automatic braking interventions due to the detected risk of collision.
- As a measure of the risk of collision, the so-called time to collision can be used, that is to say a time period which is presumably required by the motor vehicle in order to reach the target object.
- This time to collision may be determined from the estimated distance of the target object as well as from the relative speed.
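The time-to-collision computation described above can be sketched as follows; the function name and the convention that a positive relative speed means a closing gap are assumptions of this example:

```python
def time_to_collision(distance_m, relative_speed_mps):
    """Time to collision: the period presumably needed to reach the
    target object, from the estimated distance and the relative speed
    (closing speed > 0 means the gap is shrinking)."""
    if relative_speed_mps <= 0.0:   # not closing in, so no collision expected
        return float("inf")
    return distance_m / relative_speed_mps

ttc = time_to_collision(30.0, 10.0)  # 30 m gap closing at 10 m/s -> 3 s
```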
- The known detection algorithms which serve to detect target vehicles in the image usually give, as a result of the detection, a rectangular bounding box which surrounds the detected target vehicle.
- In the bounding box, the detected target vehicle is thus depicted, wherein the bounding box indicates the current position as well as the width and the height of the target vehicle in the image.
- However, the bounding box only vaguely indicates the current position of the target vehicle in the image as well as the width and the height of the target vehicle. The size of the bounding box may also vary across the sequence of images, which in turn reduces the accuracy of tracking the target vehicle across the images. Accordingly, the relative position of the target objects relative to the motor vehicle can be determined only inaccurately.
- WO 2005/037619 A1 discloses a method for initiating emergency braking, in which the environment of a vehicle is detected, an object detection is carried out, and emergency braking is triggered when a predetermined event occurs.
- A reference object is specified, and the detected objects are compared with the reference object. Only objects that are larger than the reference object are considered for the evaluation of the event occurrence.
- A collision warning system for a motor vehicle is further known from document US 8 412 448 B2. Here, a three-dimensional model of a target object relative to the motor vehicle is assumed.
- The challenge now is to detect such detection errors and, if possible, to correct them.
- This object is achieved by a method, by a driver assistance system, and by a motor vehicle according to the independent claims.
- An inventive method is used to operate a driver assistance system of a motor vehicle.
- By means of a camera of the driver assistance system, an image of a surrounding area of the motor vehicle is provided.
- A plurality of objects in the image is then detected by an image processing device, in particular using a detection algorithm.
- In principle, an arbitrary detection algorithm can be used, so that in the present case the detection algorithm will not be discussed in greater detail.
- For example, a detection algorithm can be used which outputs for each detected object a so-called bounding box in which the detected object is imaged. For each object, the image processing device determines at least one property of the respective object based on the image.
- As a property, for example, the position of the respective object relative to the motor vehicle can be determined.
- The image processing device then generates an environment model of the surrounding area of the motor vehicle from the determined properties of the objects.
- The environment model is generated using a predetermined optimization algorithm, for example an error minimization algorithm, which minimizes the errors between the determined properties of the objects and the generated environment model.
- The driver assistance system can thus be operated particularly reliably, since application errors due to incorrectly determined properties of the objects can be prevented. For example, it can be prevented that the driver of the motor vehicle is unnecessarily warned by the driver assistance system or that braking interventions are performed unnecessarily.
- The camera is preferably a front camera, which is arranged in particular behind a windshield of the motor vehicle, for example directly on the windshield in the interior of the motor vehicle.
- The front camera then detects the environment in the direction of travel or in the vehicle longitudinal direction in front of the motor vehicle. This may mean in particular that a camera axis extending perpendicular to the plane of the image sensor is oriented parallel to the vehicle longitudinal axis.
- The camera is preferably a video camera which can provide a plurality of frames per second.
- The camera can be a CCD camera or a CMOS camera.
- The camera system may be a collision warning system, by means of which a degree of danger with respect to a collision of the motor vehicle with the target vehicle is determined and, depending on the current degree of danger, a warning signal is output with which the risk of collision is signaled to the driver.
- The camera system can also be designed as an automatic brake assist system, by means of which braking interventions are automatically performed as a function of the degree of danger.
- As a degree of danger, for example, the time to collision and/or the distance of the target vehicle from the motor vehicle can be used.
- Preferably, a position of the respective objects relative to the motor vehicle is determined as a property of the objects. This means that for each detected object the respective position relative to the motor vehicle is determined, and the environment model is generated from the relative positions of the objects. Thus, errors in determining the relative positions of the objects can be detected and optionally corrected.
- The relative position of the objects with respect to the motor vehicle can be determined, for example, as a function of an estimated width and/or the determined type of the respective object.
- In principle, the estimate of the relative position of the objects can be made in any way based on the image.
- An estimated real width of the respective object can also be determined as a property of the objects.
- This width can be determined, for example, depending on the width of the above-mentioned bounding box, which is output by the detection algorithm. If the real width of the respective object is known, a distance of the object from the motor vehicle can also be estimated. This embodiment therefore has the advantage that, on the one hand, an error in the determination of the real width of the objects and, on the other hand, also errors in determining the distance of the objects from the motor vehicle can be detected and optionally corrected.
- A width of the respective object is preferably understood to mean a dimension of the object in the vehicle transverse direction.
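The distance estimate from a known real width can be illustrated with the standard pinhole-camera relation; the patent does not specify a formula, so this is a hedged sketch in which the focal length in pixels and the example numbers are assumptions:

```python
def distance_from_width(real_width_m, bbox_width_px, focal_length_px):
    """Pinhole-camera range estimate: an object of known real width W
    that appears w pixels wide in the image lies at roughly
    f * W / w metres in front of the camera."""
    return focal_length_px * real_width_m / bbox_width_px

# e.g. a ~1.8 m wide car imaged 90 px wide by a camera with f = 1000 px
d = distance_from_width(1.8, 90.0, 1000.0)
```

An error in the estimated real width propagates proportionally into the distance, which is exactly the kind of error the comparison with the environment model is meant to catch.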
- As a property of the objects, a type of the respective object, in particular a vehicle type, can be determined. In principle, it is possible to distinguish, for example, between the following types of objects: a target vehicle, a pedestrian, a tree and the like. If a target vehicle is detected, it is possible, for example, to distinguish between a passenger car, a truck and a motorcycle.
- As the optimization algorithm, an error minimization algorithm can be used, in which the error between the determined properties of the objects on the one hand and the environment model on the other hand is minimized.
- Preferably, the RANSAC algorithm is implemented, which has the advantage that large outliers do not lead to a falsification of the environment model, and the algorithm can thus be applied to noisy values. Overall, the RANSAC algorithm is very robust against outliers.
- Additionally or alternatively, an algorithm based on regression analysis and/or fitting analysis can be used as the optimization algorithm.
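A minimal RANSAC sketch, here fitting a line to 2D points merely to illustrate the outlier robustness described above (the real environment model has more parameters; the iteration count, tolerance and sample data are assumptions of this example):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Minimal RANSAC: fit y = a*x + b while ignoring gross outliers.
    Repeatedly fits a model to a random minimal sample and keeps the
    model with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers:
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -25.0)]
model, inliers = ransac_line(pts)
```

Because the model is estimated only from consensus samples, the two gross outliers do not falsify the fit, which is the property the text attributes to RANSAC.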
- Target vehicles in particular can be detected as objects. In fact, it is typically the target vehicles that pose the greatest collision risk to the ego vehicle. It is thus particularly advantageous if the characteristics of target vehicles, and in particular the relative position of the target vehicles, are determined particularly precisely and any errors occurring are detected.
- Additionally, longitudinal markings of a carriageway on which the motor vehicle is located can be detected as objects.
- The detection of the longitudinal markings in turn allows conclusions to be drawn as to which lane the motor vehicle is currently on and whether there are also other objects on this lane, in particular other vehicles.
- When generating the environment model, relations between the objects can be taken into account in addition to the at least one property of the objects. These relations can also preferably be determined on the basis of the image, it being additionally or alternatively also possible to use sensor data from other sensors, in particular from distance sensors, to determine the mutual relations between the objects. Thus, in total, also relative constraints or conditions between the objects are taken into account when creating the environment model. These relations between the objects can be determined without much effort on the basis of the image, in particular on the basis of the determined position of the objects relative to the motor vehicle. As a relation between two objects, it can be determined, for example, which of the two objects is closer to the motor vehicle and which of the objects is further away from the motor vehicle.
- These relations can then be used as additional constraints when the optimization algorithm is applied to find the best possible environment model.
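The use of pairwise relations as constraints can be illustrated by a simple consistency check between a candidate environment model and the nearer/farther relations extracted from the image; the object names and coordinates below are invented for the example:

```python
def relation_violations(positions, relations):
    """Count pairwise relations violated by a candidate environment model.
    positions: {object_id: (x, y)} with x the longitudinal distance ahead
    of the ego vehicle; relations: list of (nearer_id, farther_id) pairs
    extracted from the image ('object a is closer than object b')."""
    return sum(1 for a, b in relations
               if positions[a][0] >= positions[b][0])

model = {"car_a": (12.0, -1.5), "car_b": (35.0, 0.2), "car_c": (20.0, 1.8)}
# Relations observed in the image: a nearer than c, c nearer than b.
bad = relation_violations(model, [("car_a", "car_c"), ("car_c", "car_b")])
```

An optimizer can use such a count (or a soft penalty derived from it) to prefer environment models that respect the observed ordering of the objects.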
- The image processing device can determine the at least one property of the objects for each image or for every n-th image, with n > 1.
- When generating the current environment model, the properties of the objects from previous images can then also be taken into account. In this way, a temporal filtering of the environment model can be performed, which further improves the accuracy in the generation of the current environment model and the reliability of the error detection.
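One common realization of such temporal filtering is exponential smoothing of a per-frame property; the patent does not prescribe a specific filter, so this is only a sketch with an assumed smoothing factor:

```python
def smooth_property(history, alpha=0.4):
    """Temporal filtering of one object property across frames:
    exponential smoothing of per-frame estimates, so the current value
    also reflects the values from previous images."""
    filtered = history[0]
    for value in history[1:]:
        filtered = alpha * value + (1.0 - alpha) * filtered
    return filtered

# Noisy per-frame distance estimates for one target vehicle (metres);
# the last frame contains an outlier that is damped by the filter:
est = smooth_property([20.0, 20.6, 19.5, 20.2, 26.0])
```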
- Preferably, the image processing device, in generating the environment model, assumes that the motor vehicle and the objects are located on a common plane. This assumption is also referred to as the "flat world assumption". Thus, a flat surface is used as the basis for the environment model, and it is assumed that all objects and the motor vehicle itself are located on this common flat surface.
- However, the motor vehicle itself can have a non-zero pitch angle and/or a non-zero roll angle.
- Preferably, the properties of the objects are each weighted with an associated weighting factor, and the environment model is generated from the weighted properties.
- Each object can be assigned a different weighting factor.
- Thus, important objects and/or objects having larger confidence values can have a stronger influence than other objects in generating the environment model.
- When detecting objects in the image, the detection algorithm usually also calculates so-called confidence values, which indicate the accuracy or probability of the detection and thus represent a confidence measure. These confidence values can now be used to determine the weighting factors: the larger the confidence value, the larger the associated weighting factor can be. Additionally or alternatively, information on the application level can also be used to determine the weighting factors, in particular a distance of the respective object from the motor vehicle.
- For example, a close object which has already been tracked for some time may have a larger weighting factor than a distant object or a newly detected object.
- The environment model can thus be generated even more precisely and according to need.
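The confidence-based weighting can be sketched as a weighted average of one model parameter; the specific parameter (a lateral lane-boundary offset) and the numbers are assumptions for illustration:

```python
def weighted_estimate(values, confidences):
    """Confidence-weighted fusion of one scalar model parameter from
    several detections: detections with larger confidence values get
    larger weighting factors, so they dominate the estimate."""
    total = sum(confidences)
    return sum(v * c for v, c in zip(values, confidences)) / total

# Three detections of the same lane boundary with detector confidences;
# the low-confidence outlier (3.5 m) barely shifts the fused offset:
offset = weighted_estimate([1.9, 2.1, 3.5], [0.9, 0.8, 0.1])
```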
- The invention additionally relates to a driver assistance system for a motor vehicle, with at least one camera for providing an image of an environmental region of the motor vehicle, and with an image processing device which is designed to carry out a method according to the invention.
- An inventive motor vehicle, in particular a passenger car, comprises a driver assistance system according to the invention.
- The preferred embodiments presented with reference to the method according to the invention, and their advantages, apply correspondingly to the driver assistance system according to the invention and to the motor vehicle according to the invention.
- FIG. 1 is a schematic representation of a motor vehicle with a driver assistance system according to an embodiment of the invention;
- FIG. 2 shows a schematic representation of an exemplary image, which is provided by means of a camera of the driver assistance system;
- FIG. 3 is a schematic representation of an environment model, which is generated using an optimization algorithm;
- FIG. 5 shows a further flow chart of the method, wherein the detection of incorrectly determined properties is illustrated.
- A motor vehicle 1 shown in FIG. 1 is, in the embodiment, a passenger car.
- The motor vehicle 1 comprises a driver assistance system 2, which serves, for example, as a collision warning system, by means of which the driver of the motor vehicle 1 can be warned of a risk of collision. Additionally or alternatively, the driver assistance system 2 may be formed as an automatic emergency braking system, by means of which the motor vehicle 1 is braked automatically due to a detected risk of collision.
- The driver assistance system 2 comprises a camera 3, which is designed as a front camera.
- The camera 3 is arranged in the interior of the motor vehicle 1 at a windshield of the motor vehicle 1 and detects a surrounding area 4 in front of the motor vehicle 1.
- The camera 3 is, for example, a CCD camera or a CMOS camera.
- The camera 3 is a video camera, which provides a sequence of images of the surrounding area 4 and transmits it to an image processing device, not shown in the figures.
- The image processing device and the camera 3 can optionally also be integrated in a common housing.
- As shown in Fig. 1, an object 6, here in the form of a target vehicle 7, is located on a roadway 5 in front of the motor vehicle 1.
- The image processing device is arranged so that it can apply to the images of the surrounding area 4 a detection algorithm which is designed for the detection of objects 6.
- This object detection algorithm can be stored, for example, in a memory of the image processing device and can be based, for example, on the AdaBoost algorithm.
- Such detection algorithms are already state of the art and will not be described in detail here. If the object 6 is detected, it can be tracked by the image processing device over time.
- Tracking the target vehicle 7 over time means that in each frame or every n-th frame of the sequence (n > 1) the target vehicle 7 is detected and thus its current position in the respective frame is known. The target vehicle 7 is thus tracked across the sequence of images.
- An exemplary image 8 of the camera 3 is shown in FIG. 2.
- In the image 8, the detection algorithm detects a plurality of objects 6: target vehicles 7a, 7b, 7c, 7d, 7e on the one hand and longitudinal markings 10 of the roadway 5 on the other hand.
- The image processing device determines the following properties for each object 6:
- a real width of the respective object 6, which can be determined from the width of the respective bounding box 9a to 9e;
- a position of the respective object 6 relative to the motor vehicle 1, wherein in the determination of the relative position the distance of the respective object 6 from the motor vehicle 1 can first be determined as a function of the width of the object 6;
- a type of the respective object 6, in particular also a vehicle type of the target vehicles 7a to 7e, wherein the type is likewise used to determine the exact relative position of the respective object 6 relative to the motor vehicle 1.
- All properties of all detected objects 6, and in addition the mutual relations between the objects 6, are then used as input parameters for the generation of an environment model of the surrounding area 4.
- As a relation between two objects 6, it can be determined, for example, which of the objects is closer and which is farther away from the motor vehicle 1, and/or which of the objects 6 is further to the left and which further to the right of the motor vehicle 1.
- An exemplary environment model 11 is shown in Fig. 3. The environment model 11 represents a digital environment map of the motor vehicle 1 in a two-dimensional coordinate system x, y.
- In the environment model 11, it is assumed that the motor vehicle 1 and all other objects 6 are located on a common plane 12 and thus on a flat surface.
- To generate the environment model 11, the optimization algorithm is applied to the input parameters (the properties of the objects 6 and in particular the mutual relations).
- The RANSAC algorithm is preferably used as the optimization algorithm. However, it is also possible to use a different fitting algorithm and/or regression algorithm.
- In addition, the properties of the objects 6 from the previous images can be used, i.e. the previous environment models 11, which were created for the previous images.
- In this way, a temporal filtering of the environment model 11 takes place.
- Furthermore, each object 6 can be assigned a different, and thus a separate, weighting factor, which can be determined, for example, as a function of a confidence value of the respective detected object 6 and/or depending on a distance of this object 6 from the motor vehicle 1 and/or depending on a time duration for which this object 6 has already been tracked by the image processing device.
- When generating the environment model 11, several input parameters are initially provided according to step S1.
- In step S12, camera parameters of the camera 3, or a so-called camera model, are provided.
- In step S13, the longitudinal markings 10 are detected in the image 8 and their position relative to the motor vehicle 1 is determined.
- In step S14, the vehicle types of the target vehicles 7a to 7e are determined.
- In step S15, the respective real width of the target vehicles 7a to 7e is determined.
- In step S16, the respective position of the target vehicles 7a to 7e relative to the motor vehicle 1 is determined.
- In step S17, the above-mentioned relations of the objects 6 with each other are determined.
- In step S21, the required input parameters are selected.
- In step S22, the estimation of the environment model 11 according to the said optimization algorithm takes place.
- In step S23, the determined properties of the objects 6 are compared with the generated environment model 11.
- The originally determined properties of the objects 6, which were used to generate the optimized environment model 11, are thus used again in order to determine the correctness of the determination of these properties.
- For this, the originally determined properties, for example the relative position of the objects 6 with respect to the motor vehicle 1, are compared with the environment model 11, which was generated according to the optimization algorithm.
- According to step S101, the properties of the detected objects 6 are determined. These properties are then used in step S102 to generate the optimized environment model 11. The properties are then compared with the determined environment model according to step S103. The result of this comparison is the set of properties that were originally determined incorrectly and thus have an error. These properties are outliers.
- In the example shown, the relative position of the target vehicles 7c, 7d relative to the motor vehicle 1 was determined incorrectly. While the relative position of the target vehicle 7c has been erroneously determined only in the x-direction, the relative position of the target vehicle 7d is erroneous in both the x-direction and the y-direction. These errors are detected on the basis of the comparison of the determined relative positions of the target vehicles 7c, 7d with the optimized environment model 11.
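The comparison step that flags erroneously determined positions (as for target vehicles 7c and 7d) can be sketched as a per-axis residual test; the tolerance and the coordinates below are invented for the example:

```python
def flag_outliers(measured, modeled, tol=1.5):
    """Compare the originally determined object positions with the
    optimized environment model and flag, per axis, properties whose
    residual exceeds a tolerance.
    Returns {object_id: (x_is_wrong, y_is_wrong)}."""
    flagged = {}
    for obj_id, (mx, my) in measured.items():
        ox, oy = modeled[obj_id]
        flagged[obj_id] = (abs(mx - ox) > tol, abs(my - oy) > tol)
    return flagged

# Measured positions vs. positions in the optimized environment model:
measured = {"7c": (28.0, 1.0), "7d": (40.0, 9.0)}
modeled = {"7c": (22.0, 1.1), "7d": (33.0, 3.4)}
errs = flag_outliers(measured, modeled)
```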
Abstract
The invention relates to a method for operating a driver assistance system of a motor vehicle (1), comprising the steps of: providing an image of a surrounding area of the motor vehicle (1) by means of a camera of the driver assistance system; detecting a plurality of objects (6) in the image by means of an image processing device; determining for each object (6) at least one property of the respective object (6) from the image by means of the image processing device; generating an environment model (11) for the surrounding area of the motor vehicle (1) from the properties of the objects (6) using a predetermined optimization algorithm by means of the image processing device; comparing the properties of the objects (6) with the environment model (11) by means of the image processing device; and detecting erroneously determined properties on the basis of the comparison.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102013021840.3A DE102013021840A1 (de) | 2013-12-21 | 2013-12-21 | Verfahren zum Erzeugen eines Umgebungsmodells eines Kraftfahrzeugs, Fahrerassistenzsystem und Kraftfahrzeug |
DE102013021840.3 | 2013-12-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015090691A1 (fr) | 2015-06-25 |
Family
ID=51845395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2014/072969 WO2015090691A1 (fr) | 2013-12-21 | 2014-10-27 | Procédé de génération d'un modèle d'environnement d'un véhicule automobile, système d'assistance au conducteur et véhicule automobile |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE102013021840A1 (fr) |
WO (1) | WO2015090691A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110954111A (zh) * | 2018-09-26 | 2020-04-03 | 罗伯特·博世有限公司 | 定量表征对象的对象属性误差的至少一个时间序列的方法 |
CN112001344A (zh) * | 2020-08-31 | 2020-11-27 | 深圳市豪恩汽车电子装备股份有限公司 | 机动车目标检测装置及方法 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016007389A1 (de) | 2016-06-17 | 2016-12-15 | Daimler Ag | Verfahren zur Erfassung einer Umgebung eines Fahrzeugs |
DE102017207658A1 (de) * | 2017-05-08 | 2018-11-08 | Continental Automotive Gmbh | Verfahren zum Identifizieren eines Fahrzeugs mit einem Zu-satzkennzeichen, und eine Vorrichtung zur Durchführung des Verfahrens |
DE102017207958B4 (de) | 2017-05-11 | 2019-03-14 | Audi Ag | Verfahren zum Generieren von Trainingsdaten für ein auf maschinellem Lernen basierendes Mustererkennungsverfahren für ein Kraftfahrzeug, Kraftfahrzeug, Verfahren zum Betreiben einer Recheneinrichtung sowie System |
US12030507B2 (en) | 2020-06-29 | 2024-07-09 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Method and system for predicting a trajectory of a target vehicle in an environment of a vehicle |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005037619A1 (fr) | 2003-09-24 | 2005-04-28 | Daimlerchrysler Ag | Systeme de freinage d'urgence |
US20100315505A1 (en) * | 2009-05-29 | 2010-12-16 | Honda Research Institute Europe Gmbh | Object motion detection system based on combining 3d warping techniques and a proper object motion detection |
US8121348B2 (en) | 2006-07-10 | 2012-02-21 | Toyota Jidosha Kabushiki Kaisha | Object detection apparatus, method and program |
EP2562681A1 (fr) * | 2011-08-25 | 2013-02-27 | Delphi Technologies, Inc. | Procédé de suivi d'objet pour un système d'assistance du conducteur à caméra |
US8412448B2 (en) | 2010-02-08 | 2013-04-02 | Hon Hai Precision Industry Co., Ltd. | Collision avoidance system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19534230A1 (de) * | 1995-09-15 | 1997-03-20 | Daimler Benz Ag | Verfahren zur Erkennung von Objekten in zeitveränderlichen Bildern |
DE19926559A1 (de) * | 1999-06-11 | 2000-12-21 | Daimler Chrysler Ag | Verfahren und Vorrichtung zur Detektion von Objekten im Umfeld eines Straßenfahrzeugs bis in große Entfernung |
DE10324895A1 (de) * | 2003-05-30 | 2004-12-16 | Robert Bosch Gmbh | Verfahren und Vorrichtung zur Objektortung für Kraftfahrzeuge |
DE102009033852A1 (de) * | 2009-07-16 | 2010-05-06 | Daimler Ag | Verfahren und Vorrichtung zur Erfassung einer Umgebung eines Fahrzeugs |
- 2013-12-21: DE DE102013021840.3A patent/DE102013021840A1/de, not_active, Withdrawn
- 2014-10-27: WO PCT/EP2014/072969 patent/WO2015090691A1/fr, active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
DE102013021840A1 (de) | 2015-06-25 |
Legal Events
- 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14792785; Country of ref document: EP; Kind code of ref document: A1)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 122: EP: PCT application non-entry in European phase (Ref document number: 14792785; Country of ref document: EP; Kind code of ref document: A1)