EP2266073A1 - Traffic object detection system, method for detecting a traffic object, and method for setting up a traffic object detection system - Google Patents
Info
- Publication number
- EP2266073A1 (application EP08873947A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- traffic
- objects
- model
- situation
- pattern recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096716—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information does not generate an automatic action on the vehicle control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096733—Systems involving transmission of highway information, e.g. weather, speed limits where a selection of the information might take place
- G08G1/096758—Systems involving transmission of highway information, e.g. weather, speed limits where a selection of the information might take place where no selection takes place on the transmitted or the received information
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096766—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
- G08G1/096791—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is another vehicle
Definitions
- Traffic object recognition system, method for recognizing a traffic object, and method for setting up a traffic object recognition system
- The present invention relates to a method for setting up a traffic object recognition system, to a traffic object recognition system, in particular for a motor vehicle, and to a method for recognizing a traffic object.
- The traffic object recognition system according to the invention for recognizing one or more traffic objects in a traffic situation comprises at least one sensor for detecting a traffic situation and a pattern recognition device for recognizing the traffic object(s) in the detected traffic situation.
- The pattern recognition device is trained on the basis of three-dimensional virtual traffic situations that contain the traffic object(s).
- The method according to the invention for recognizing one or more traffic objects in a traffic situation uses the following steps: detecting a traffic situation with at least one sensor, and recognizing the traffic object(s) in the detected traffic situation with a pattern recognition device that is trained on the basis of three-dimensional virtual traffic situations containing the traffic object(s).
- The method according to the invention for setting up such a traffic object recognition system provides the following method steps.
- A scene generator generates three-dimensional simulations of different traffic situations with at least one of the traffic objects.
- A projection device generates signals which correspond to those which the sensor would detect in a traffic situation corresponding to the three-dimensional simulation.
- The signals are fed to the evaluation device for recognizing traffic objects, and the pattern recognition device is trained on the basis of a deviation between the traffic objects simulated in the three-dimensional simulations of the traffic situations and the traffic objects recognized therein.
- The relative arrangement of the traffic objects to the sensor in space can be implemented verifiably in the simulation. All phenomena that can lead to a changed perception of the traffic object, e.g. rain or uneven illumination of signs by the shadows of trees, can be simulated directly via the causative objects, i.e. the rain and the trees themselves. This facilitates the training of the pattern recognition device, since only a small amount of time is required.
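The deviation-based training described above can be sketched as follows. The toy feature vectors and the simple perceptron update rule are illustrative assumptions only; the patent does not prescribe a specific learning algorithm.

```python
# Minimal sketch of training a classifier on simulated traffic situations:
# the classifier is adapted whenever its result deviates from the ground
# truth known from the simulation. Feature vectors and the perceptron-style
# update are assumptions for illustration.

def train_on_simulations(samples, epochs=50, lr=0.1):
    """samples: list of (feature_vector, ground_truth_label) pairs, where
    the ground truth comes directly from the simulation."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, truth in samples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            recognized = 1 if score > 0 else 0
            deviation = truth - recognized      # train on the deviation
            if deviation != 0:
                weights = [w + lr * deviation * x
                           for w, x in zip(weights, features)]
                bias += lr * deviation
    return weights, bias

def classify(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy simulated samples: [mean_red_intensity, circularity] -> "traffic sign?"
simulated_samples = [
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0),
]
w, b = train_on_simulations(simulated_samples)
```

Because the ground truth is generated by the simulation itself, no manual labeling is needed in this loop.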
- FIG. 1 is a diagram for explaining classifier training.
- FIG. 2 shows a first embodiment for the synthetic training of classifiers.
- FIG. 3 shows a further embodiment for testing and/or training a classifier.
- FIG. 4 schematically shows an automatic adaptation of the scene generator.
- FIG. 5 shows a method sequence for synthesizing digital samples for video-based classifiers.
- The following embodiments describe video-based image recognition systems.
- The signals for these image recognition systems are provided by cameras.
- Depending on the device, the image recognition system should recognize different traffic objects in the signals, e.g. vehicles, pedestrians, traffic signs, etc.
- Other detection systems are based on radar or ultrasound sensors that output signals corresponding to a traffic situation by an appropriate scanning of the surroundings.
- The traffic object recognition system is based on pattern recognition. For each traffic object, one or more classifiers are provided. These classifiers are compared with the incoming signals. If the signals agree with the classifiers, or if the signals meet the conditions of the classifiers, the corresponding traffic object is considered recognized.
- The embodiments described below deal in particular with the determination of suitable classifiers.
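The matching step described above — one or more classifiers per traffic object, each imposing conditions on the incoming signal — can be sketched as follows. The feature names and thresholds are illustrative assumptions, not values from the patent.

```python
# Sketch of the classifier matching: each traffic object has a set of
# conditions on the incoming signal's features; if all conditions are met,
# the object is considered recognized. All names/thresholds are assumptions.

CLASSIFIERS = {
    "speed_limit_sign": [
        lambda f: f["circularity"] > 0.8,       # round shape
        lambda f: f["red_border_ratio"] > 0.5,  # red ring
    ],
    "pedestrian": [
        lambda f: f["aspect_ratio"] > 1.5,      # upright silhouette
        lambda f: f["circularity"] < 0.5,
    ],
}

def recognize(features):
    """Return the traffic objects whose classifier conditions the signal meets."""
    return [obj for obj, conditions in CLASSIFIERS.items()
            if all(cond(features) for cond in conditions)]
```

A signal may satisfy the classifiers of zero, one, or several traffic objects; the embodiments below are concerned with how to determine such conditions automatically.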
- FIG. 1 shows a first approach to training classifiers for pattern recognition.
- One or more cameras 1 generate a video data stream.
- From this, a so-called learning sample 2 is generated.
- For the image data, the corresponding meaning information (ground truth) 3 is generated.
- The corresponding meaning information may indicate whether the image data reproduce a traffic object, and if so which traffic object, at which relative position, with which relative speed, etc.
- The corresponding meaning information 3 can be entered manually by an operator 7. Subsequent embodiments show how the corresponding meaning information can also be generated automatically.
- The image data 10 and the corresponding meaning information 3 of the learning sample 2 are repeatedly supplied to a training module 4 of the pattern recognition.
- The training module 4 adapts the classifiers of the pattern recognition until sufficient correspondence is achieved between the corresponding meaning information 3, i.e. the traffic objects contained in the image data, and the traffic objects recognized by the pattern recognition.
- A test sample 5 is also generated.
- The test sample can be generated in the same way as the learning sample 2.
- The test sample 5, with the image data 11 and the corresponding meaning information 6 contained therein, is used to test the quality of the previously trained classifier.
- The individual samples of the test sample 5 are fed to the previously trained classifier 40, and the detection rate of the traffic objects is evaluated statistically.
- An evaluation device 9 determines the recognition rates and the error rates of the classifier 40.
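The statistical evaluation over the test sample can be sketched as follows; the sample format and the toy threshold classifier are illustrative assumptions.

```python
# Sketch of the evaluation device: run the trained classifier over the test
# sample and accumulate recognition and error rates. The (signal, truth)
# pair format and the toy classifier are assumptions for illustration.

def evaluate(classifier, test_sample):
    """test_sample: list of (signal, ground_truth) pairs; classifier maps a
    signal to a predicted label. Returns (recognition_rate, error_rate)."""
    hits = sum(1 for signal, truth in test_sample
               if classifier(signal) == truth)
    recognition_rate = hits / len(test_sample)
    error_rate = 1.0 - recognition_rate
    return recognition_rate, error_rate

# Toy classifier: "traffic object present" if the signal exceeds a threshold.
toy_classifier = lambda signal: signal > 0.5
test_sample = [(0.9, True), (0.8, True), (0.2, False), (0.6, False)]
rec, err = evaluate(toy_classifier, test_sample)
```

Keeping the test sample separate from the learning sample, as the text describes, is what makes these rates a meaningful quality measure.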
- FIG. 2 shows an embodiment for training classifiers in which the meaning information is generated automatically.
- A scene generator 26 generates three-dimensional simulations of various traffic situations.
- A central control device 25 can control which scenes the scene generator 26 should simulate.
- The control device 25 can be instructed via a protocol which meaning information 28, i.e. which traffic objects, should be contained in the simulated traffic situations.
- The central control device 25 can select between different modules 20 to 24, which are connected to the scene generator 26.
- Each module 20 to 24 contains a physical description of traffic objects, other objects, weather conditions, lighting conditions, and possibly also of the sensors used.
- A movement of the motor vehicle or of the receiving sensor can be taken into account by a movement model 22.
- The simulated traffic situation is projected.
- The projection can take place on a screen or another kind of projection surface.
- The camera or another sensor detects the projected simulation of the traffic situation.
- The signals from the sensor may be added to a learning sample 27 or, optionally, to a test sample.
- The corresponding meaning information 28, i.e. the depicted traffic objects to be recognized, is known from the simulation.
- The central control device 25 or the scene generator 26 outputs the corresponding meaning information 28 in sync with the detected image data of the learning sample 27.
- The sensor can also be simulated by a module.
- The module generates the signals which correspond to those which the real sensor would detect during the traffic situation corresponding to the simulation.
- The projection or imaging of the three-dimensional simulation can thus take place within the scope of the simulation.
- The further processing of the generated signals as a learning sample and of the associated meaning information 28 takes place as described above.
- The learning sample 27 and the associated meaning information are supplied to a training module 4 for training a classifier.
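The synchronized emission of sensor signals and meaning information can be sketched as follows. The scene description, the stubbed rendering, and all numeric values are illustrative assumptions; a real system would rasterize the 3D simulation and capture it with the (real or simulated) sensor.

```python
# Sketch of synthetic sample generation: the scene generator renders each
# simulated traffic situation and emits, in sync, the sensor signal (learning
# sample) and the meaning information known from the simulation (ground
# truth). Rendering is stubbed; all values are assumptions.

import random

def render_scene(scene):
    """Stub for projection + sensor capture of a simulated scene."""
    noise = random.uniform(-0.05, 0.05)       # simple sensor noise model
    return {"brightness": scene["illumination"] + noise,
            "object_pixels": 100 if scene["object"] else 0}

def generate_learning_sample(scenes):
    sample, meaning_info = [], []
    for scene in scenes:
        sample.append(render_scene(scene))               # learning sample
        meaning_info.append({"object": scene["object"],  # meaning information
                             "position": scene.get("position")})
    return sample, meaning_info

scenes = [
    {"object": "speed_limit_60", "position": (12.0, 3.5), "illumination": 0.8},
    {"object": None, "position": None, "illumination": 0.4},
]
sample, meaning_info = generate_learning_sample(scenes)
```

Because both lists are produced from the same scene description, the ground truth is exact and requires no operator.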
- FIG. 3 shows a further embodiment for testing and/or training a classifier.
- A scene simulator 30 generates a learning sample 27 with associated corresponding meaning information 28.
- The learning sample is generated synthetically, as described in the previous embodiment in connection with FIG. 2.
- In addition, a learning sample 37 is provided based on real image data. With a camera 1, for example, a video data stream can be recorded.
- A processing device, typically with the assistance of an operator, determines the corresponding meaning information 38.
- A classifier is trained by means of a training module 42 both with the synthetic learning sample 27 and with the real learning sample 37.
- An evaluation device 35 can analyze how high the recognition rate of the classifier is with regard to certain simulated traffic situations.
- For this purpose, the scene generator 30 stores simulation parameters 29 in addition to the simulated signals for the learning sample 27 and the associated meaning information 28.
- The simulation parameters 29 include in particular the modules used and their settings.
- An analogous evaluation of the recognition rate of the classifier can take place for the real image data.
- For the acquired image data, not only the associated meaning information but also further information 39 belonging to the image data is determined and stored.
- This further information may relate to the general traffic situation, the relative position of the traffic object to be recognized to the sensor, the weather conditions, lighting conditions, etc.
- FIG. 4 schematically shows how an automatic adaptation of the scene generator 26 can take place.
- Synthetically generated patterns 27, 30 and real patterns 36, 37 of the samples are fed to the classifier 42.
- The classifier 42 classifies the patterns.
- The result of the classification is compared with the ground truth, i.e. the meaning information 31, 38. Deviations are determined in the comparison module 60.
- The system has a learning component 63 that enables retraining of the classifier 62 using synthetic or real training patterns 61.
- The training patterns 61 may be selected from the patterns in which the comparison module 60 has determined deviations between the meaning information and the classification by the classifier 42.
- The training patterns 61 may also include other patterns that, while not leading to erroneous recognition, may still be improved upon.
- The detected deviations may also be used to improve the synthesis 26 and the associated input modules 20 to 24.
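The selection step of this feedback loop can be sketched as follows; the pattern and label representation and the toy threshold classifier are illustrative assumptions.

```python
# Sketch of the feedback loop: the comparison module determines deviations
# between classification result and meaning information (ground truth), and
# the learning component selects exactly those patterns for retraining.
# Pattern/label representation is an assumption for illustration.

def find_deviations(patterns, meanings, classifier):
    """Comparison module: return (pattern, meaning) pairs where the
    classifier's result deviates from the meaning information."""
    return [(p, m) for p, m in zip(patterns, meanings) if classifier(p) != m]

# Toy classifier: 'traffic object present' if the signal exceeds a threshold.
classifier = lambda p: p >= 10
patterns = [12, 3, 15, 9]                # incoming signals
meanings = [True, False, False, True]    # ground truth
retrain_set = find_deviations(patterns, meanings, classifier)
```

The resulting set corresponds to the training patterns that the learning component would feed back into training; the same deviations can also guide adjustments to the synthesis modules.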
- A traffic object, e.g. a traffic sign, is represented by an object model 20 in its physical dimensions and physical appearance.
- A scene model 21 specifies the relative placement and movement of the traffic object with respect to the imaginary sensor.
- The scene model may include other objects, e.g. trees, houses, the road, etc.
- The lighting model 23 and the scene model specify the illumination 80. This influences the synthesized object 81, which is additionally controlled by the object model 20 and the scene model 21.
- The realistically illuminated object passes through the optical channel 82, which is given by the illumination model and the scene model.
- After optical interference 83, which may be predetermined by the camera model 24, the exposure 84 and the camera image 85 take place.
- The motion model of the camera 22 controls the exposure and imaging in the camera 85, which is essentially determined by the camera model 24.
- The camera image 85, or the projection, is subsequently used as a sample for the training of the classifiers.
- The test of the classifier can be carried out, as described, on synthetic and real signals. Testing on real data, as described in connection with FIG. 3, can evaluate the quality of the synthetic training for a real situation.
- An object model 20 for a traffic object can be designed so that it describes the traffic object ideally. However, it is also preferable to integrate smaller disturbances into the object model 20.
- An object model may include, but is not limited to, a geometric description of the object. For flat objects, e.g. traffic signs, a graphic definition of the sign can be selected in a suitable form. For voluminous objects, e.g. a vehicle or a pedestrian, the object model preferably includes a three-dimensional description.
- The smaller disturbances mentioned may include, in the object geometry, a bending of the object, concealment by other objects, or missing individual parts of the object.
- A missing part can be, for example, a missing bumper.
- The object model may also describe the surface properties of the object. This includes the pattern of the surface, color, symbols, etc.
- Texture properties of the objects may be integrated in the object model.
- The object model advantageously comprises a reflection model for incident light beams and a possible self-illuminating characteristic (e.g. in the case of traffic lights, turn signals, etc.). Dirt, snow, scratches, holes, or graphical alterations of the surface may also be described by the object model.
- The position of the object in space can also be integrated in the object model; alternatively, its position can be described in the scene model 21 described below.
- The position includes a static location, an orientation in space, and the relative position.
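An object model with the properties listed above can be sketched as a small data structure; all field names are illustrative assumptions, not terminology from the patent.

```python
# Sketch of an object model: geometry, surface properties, optional
# self-illumination, position, and smaller disturbances such as bending,
# dirt, or occlusion. Field names are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    name: str
    geometry: str                       # "flat" graphic or "3d" description
    surface: dict = field(default_factory=dict)       # color, symbols, texture
    self_illuminating: bool = False     # e.g. traffic lights, turn signals
    disturbances: list = field(default_factory=list)  # bending, dirt, snow, ...
    position: tuple = (0.0, 0.0, 0.0)   # may alternatively live in scene model

    def with_disturbance(self, disturbance):
        """Return a copy of the ideal model with an added disturbance."""
        return ObjectModel(self.name, self.geometry, dict(self.surface),
                           self.self_illuminating,
                           self.disturbances + [disturbance], self.position)

ideal_sign = ObjectModel("speed_limit_60", geometry="flat",
                         surface={"border": "red", "symbol": "60"})
dirty_sign = ideal_sign.with_disturbance("dirt")
```

Deriving disturbed variants from one ideal model, as sketched here, is one way to realize the text's suggestion of an ideal description plus integrated smaller disturbances.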
- The scene model includes, for example: a road model, e.g. the lane and road course; a weather model, with information about dry weather, a rain model (drizzle, light rain, heavy rain, downpour, etc.), a snow model, a hail model, a fog model, and a visibility simulation; a landscape model with surface and terrain models; a vegetation model including trees, leaves, etc.; a building model; and a sky model including clouds, direct and indirect light, diffuse light, the sun, and times of day and night.
- A model of the sensor 22 can be moved within the simulated scene.
- The sensor model may include a motion model of the sensor for this purpose.
- The following parameters can be taken into account: speed, steering angle, steering-wheel angle, steering-angle velocity, pitch angle, pitch rate, yaw angle, yaw rate, roll angle, roll rate.
- A realistic dynamic motion model of the vehicle to which the sensor is attached may also be considered, for which a model of the vehicle's pitching behavior can be used.
- Modeling common driving maneuvers such as cornering, lane changes, braking and acceleration, and forward and reverse driving is also possible.
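A minimal motion model using a few of the parameters listed above can be sketched as follows. The patent lists the parameters without prescribing dynamics, so the planar kinematic integration step here is purely an assumption.

```python
# Sketch of a sensor/vehicle motion model: a planar kinematic state using
# speed and yaw rate from the parameter list above. The integration scheme
# is an assumption; a realistic model would add pitch, roll, and suspension.

import math

def step_motion(state, dt):
    """Advance a planar kinematic vehicle state by dt seconds.
    state: dict with x, y [m], heading [rad], speed [m/s], yaw_rate [rad/s]."""
    new = dict(state)
    new["heading"] = state["heading"] + state["yaw_rate"] * dt
    new["x"] = state["x"] + state["speed"] * math.cos(new["heading"]) * dt
    new["y"] = state["y"] + state["speed"] * math.sin(new["heading"]) * dt
    return new

# Straight-ahead driving at 10 m/s for 0.1 s moves the sensor 1 m forward.
state = {"x": 0.0, "y": 0.0, "heading": 0.0, "speed": 10.0, "yaw_rate": 0.0}
state = step_motion(state, dt=0.1)
```

Stepping such a state per rendered frame gives the scene generator a moving viewpoint, so the same simulated traffic object appears under changing perspectives.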
- The illumination model 23 describes the illumination of the scene with all existing light sources. Among others, the following characteristics can be considered: the illumination spectrum of the respective light source; illumination by the sun with a blue sky; different sun positions; diffuse light with, e.g., a cloudy sky; backlight; reflected light; twilight. Furthermore, the light cones of vehicle headlamps in parking light, low beam, and high beam from the different types of headlights, e.g. halogen light, xenon light, sodium-vapor light, mercury-vapor light, etc., are taken into account.
- A model of the sensor 24 includes, for example, a video-based sensor with the imaging properties of the camera, the optics, and the beam path immediately in front of the optics.
- It can model the exposure properties of the camera pixels, their characteristic response to illumination, and their dynamic range.
- The modeling of the optics can include the spectral properties, the focal length, the f-number, the calibration, the distortion (pincushion, barrel distortion) within the optics, stray light, etc. Furthermore, the spectral filter characteristics of a windscreen pane, as well as smears, streaks, drops, water, and other impurities, can be taken into account.
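One optics property named above, radial lens distortion, can be sketched with the common polynomial model; the coefficient values and the sign convention used here are assumptions, not values from the patent.

```python
# Sketch of radial lens distortion (pincushion vs. barrel) in the common
# polynomial form r_d = r * (1 + k1 * r^2), applied to normalized image
# coordinates. Coefficient values are assumptions for illustration.

def distort(x, y, k1):
    """Apply radial distortion to a normalized image point (x, y).
    In this convention, k1 > 0 pushes points outwards (pincushion) and
    k1 < 0 pulls them towards the centre (barrel)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# Barrel distortion pulls the point towards the image centre:
xb, yb = distort(0.5, 0.5, k1=-0.2)
# Pincushion distortion pushes it outwards:
xp, yp = distort(0.5, 0.5, k1=0.2)
```

Rendering synthesized objects through such a distortion (and the other camera properties listed) makes the synthetic samples resemble what the real camera would deliver.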
- The scene generator 26 merges the data of the various models and generates the synthesized data from them. In a first variant, the appearance of the entire three-dimensional simulation can be determined and stored as a sequence of video images. The associated meaning information and synthesis parameters are stored as well. In another variant, only the appearance of the respective traffic object to be recognized is determined and stored. The latter can be done faster and saves storage space; however, the classifier can then be trained only on the individual traffic object.
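The two storage variants described above can be sketched as follows; the representation of a frame as a list of pixel rows and the bounding-box format are assumptions for illustration.

```python
# Sketch of the two storage variants: keeping the whole synthesized frame
# versus keeping only the crop containing the traffic object. Frame and
# bounding-box representations are assumptions for illustration.

def store_full_frame(frame, meaning, params, store):
    """Variant 1: store the entire synthesized image together with the
    meaning information and the synthesis parameters."""
    store.append({"frame": frame, "meaning": meaning, "params": params})

def store_object_only(frame, bbox, meaning, params, store):
    """Variant 2: store only the crop containing the traffic object -
    faster to produce and cheaper to keep."""
    x0, y0, x1, y1 = bbox
    crop = [row[x0:x1] for row in frame[y0:y1]]
    store.append({"frame": crop, "meaning": meaning, "params": params})

frame = [[0] * 8 for _ in range(6)]           # toy 8x6 image
frame[2][3] = frame[2][4] = frame[3][3] = 1   # 'traffic sign' pixels
full, cropped = [], []
store_full_frame(frame, "sign", {"weather": "dry"}, full)
store_object_only(frame, (3, 2, 5, 4), "sign", {"weather": "dry"}, cropped)
```

The trade-off mirrors the text: the cropped variant is smaller and faster, but discards the surrounding scene context that full-frame training could exploit.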
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102008001256A DE102008001256A1 (en) | 2008-04-18 | 2008-04-18 | A traffic object recognition system, a method for recognizing a traffic object, and a method for establishing a traffic object recognition system |
PCT/EP2008/065793 WO2009127271A1 (en) | 2008-04-18 | 2008-11-19 | Traffic object detection system, method for detecting a traffic object, and method for setting up a traffic object detection system |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2266073A1 true EP2266073A1 (en) | 2010-12-29 |
Family
ID=40225250
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP08873947A Withdrawn EP2266073A1 (en) | 2008-04-18 | 2008-11-19 | Traffic object detection system, method for detecting a traffic object, and method for setting up a traffic object detection system |
Country Status (4)
Country | Link |
---|---|
US (1) | US20110184895A1 (en) |
EP (1) | EP2266073A1 (en) |
DE (1) | DE102008001256A1 (en) |
WO (1) | WO2009127271A1 (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102010013943B4 (en) * | 2010-04-06 | 2018-02-22 | Audi Ag | Method and device for a functional test of an object recognition device of a motor vehicle |
DE102010055866A1 (en) | 2010-12-22 | 2011-07-28 | Daimler AG, 70327 | Recognition device i.e. image-processing system, testing method for motor car, involves generating and analyzing output signal of device based on input signal, and adapting input signal based on result of analysis |
DE102011107458A1 (en) | 2011-07-15 | 2013-01-17 | Audi Ag | Method for evaluating an object recognition device of a motor vehicle |
DE102012008117A1 (en) | 2012-04-25 | 2013-10-31 | Iav Gmbh Ingenieurgesellschaft Auto Und Verkehr | Method for representation of motor car environment, for testing of driver assistance system, involves processing stored pictures in image database to form composite realistic representation of environment |
US9092696B2 (en) * | 2013-03-26 | 2015-07-28 | Hewlett-Packard Development Company, L.P. | Image sign classifier |
US20140306953A1 (en) * | 2013-04-14 | 2014-10-16 | Pablo Garcia MORATO | 3D Rendering for Training Computer Vision Recognition |
WO2014170757A2 (en) * | 2013-04-14 | 2014-10-23 | Morato Pablo Garcia | 3d rendering for training computer vision recognition |
DE102013217827A1 (en) * | 2013-09-06 | 2015-03-12 | Robert Bosch Gmbh | Method and control device for recognizing an object in image information |
US9610893B2 (en) | 2015-03-18 | 2017-04-04 | Car1St Technologies, Llc | Methods and systems for providing alerts to a driver of a vehicle via condition detection and wireless communications |
US10328855B2 (en) | 2015-03-18 | 2019-06-25 | Uber Technologies, Inc. | Methods and systems for providing alerts to a connected vehicle driver and/or a passenger via condition detection and wireless communications |
US9740944B2 (en) * | 2015-12-18 | 2017-08-22 | Ford Global Technologies, Llc | Virtual sensor data generation for wheel stop detection |
CN108604252B (en) | 2016-01-05 | 2022-12-16 | 六科股份有限公司 | Computing system with channel change based triggering features |
US10474964B2 (en) * | 2016-01-26 | 2019-11-12 | Ford Global Technologies, Llc | Training algorithm for collision avoidance |
DE102016205392A1 (en) * | 2016-03-31 | 2017-10-05 | Siemens Aktiengesellschaft | Method and system for validating an obstacle detection system |
DE102016008218A1 (en) * | 2016-07-06 | 2018-01-11 | Audi Ag | Method for improved recognition of objects by a driver assistance system |
US20180011953A1 (en) * | 2016-07-07 | 2018-01-11 | Ford Global Technologies, Llc | Virtual Sensor Data Generation for Bollard Receiver Detection |
DE102017221765A1 (en) | 2017-12-04 | 2019-06-06 | Robert Bosch Gmbh | Train and operate a machine learning system |
US10769461B2 (en) * | 2017-12-14 | 2020-09-08 | COM-IoT Technologies | Distracted driver detection |
US11726210B2 (en) | 2018-08-05 | 2023-08-15 | COM-IoT Technologies | Individual identification and tracking via combined video and lidar systems |
CN109190504B (en) * | 2018-08-10 | 2020-12-22 | 百度在线网络技术(北京)有限公司 | Automobile image data processing method and device and readable storage medium |
GB2581523A (en) * | 2019-02-22 | 2020-08-26 | Bae Systems Plc | Bespoke detection model |
EP3948328A1 (en) | 2019-03-29 | 2022-02-09 | BAE SYSTEMS plc | System and method for classifying vehicle behaviour |
DE102019124504A1 (en) * | 2019-09-12 | 2021-04-01 | Bayerische Motoren Werke Aktiengesellschaft | Method and device for simulating and evaluating a sensor system for a vehicle as well as method and device for designing a sensor system for environment detection for a vehicle |
DE102021200452A1 (en) | 2021-01-19 | 2022-07-21 | Psa Automobiles Sa | Method and training system for training a camera-based control system |
DE102021202083A1 (en) | 2021-03-04 | 2022-09-08 | Psa Automobiles Sa | Computer-implemented method for training at least one algorithm for a control unit of a motor vehicle, computer program product, control unit and motor vehicle |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5914720A (en) * | 1994-04-21 | 1999-06-22 | Sandia Corporation | Method of using multiple perceptual channels to increase user absorption of an N-dimensional presentation environment |
DE19829527A1 (en) * | 1998-07-02 | 1999-02-25 | Kraiss Karl Friedrich Prof Dr | View-based object identification and data base addressing method |
JP4361389B2 (en) * | 2004-02-26 | 2009-11-11 | 本田技研工業株式会社 | Road traffic simulation device |
WO2008019299A2 (en) * | 2006-08-04 | 2008-02-14 | Ikonisys, Inc. | Image processing method for a microscope system |
-
2008
- 2008-04-18 DE DE102008001256A patent/DE102008001256A1/en not_active Withdrawn
- 2008-11-19 WO PCT/EP2008/065793 patent/WO2009127271A1/en active Application Filing
- 2008-11-19 EP EP08873947A patent/EP2266073A1/en not_active Withdrawn
- 2008-11-19 US US12/988,389 patent/US20110184895A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO2009127271A1 * |
Also Published As
Publication number | Publication date |
---|---|
DE102008001256A1 (en) | 2009-10-22 |
US20110184895A1 (en) | 2011-07-28 |
WO2009127271A1 (en) | 2009-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2266073A1 (en) | Traffic object detection system, method for detecting a traffic object, and method for setting up a traffic object detection system | |
CN103770708B (en) | The dynamic reversing mirror self adaptation dimming estimated by scene brightness is covered | |
DE102018201054A1 (en) | System and method for image representation by a driver assistance module of a vehicle | |
EP2087459B1 (en) | Device, method, and computer program for determining a position on the basis of a camera image | |
DE102007034657B4 (en) | Image processing device | |
DE19860676A1 (en) | Visualization device for imaging area illuminated by at headlights of moving vehicle has camera that collects data during journey of vehicle while scenes are simulated by illumination from headlight | |
DE102019115459A1 (en) | METHOD AND DEVICE FOR EVALUATING A ROAD SURFACE FOR MOTOR VEHICLES | |
WO2022128014A1 (en) | Correction of images from a panoramic-view camera system in the case of rain, incident light and contamination | |
DE102017106152A1 (en) | Determine an angle of a trailer with optimized template | |
DE10124005A1 (en) | Method and device for improving visibility in vehicles | |
DE102006053109B3 (en) | Method for pictorial representation of a vehicle environment and image capture system | |
DE102013220839B4 (en) | A method of dynamically adjusting a brightness of an image of a rear view display device and a corresponding vehicle imaging system | |
DE102007008542B4 (en) | Method and device for controlling the light output of a vehicle | |
WO2022128013A1 (en) | Correction of images from a camera in case of rain, incident light and contamination | |
DE102022213200A1 (en) | Method for transmissivity-aware chroma keying | |
DE102022002766B4 (en) | Method for three-dimensional reconstruction of a vehicle environment | |
DE102018214875A1 (en) | Method and arrangement for generating an environmental representation of a vehicle and vehicle with such an arrangement | |
WO2008135295A1 (en) | Method and device for recognising road signs | |
DE102021103367A1 (en) | Generation of realistic image-based data for developing and testing driver assistance systems | |
DE102007008543A1 (en) | Method and device for determining the state of motion of objects | |
WO2017092734A2 (en) | Method for reproducing a simulated environment | |
DE102013021958A1 (en) | Method for testing and reproducing dynamic light distributions | |
DE102023002809A1 (en) | Process for improving the image quality of a multi-camera system | |
DE112021001771T5 (en) | RENDERING SYSTEM AND AUTOMATED DRIVING VERIFICATION SYSTEM | |
DE102018116173A1 (en) | Operating a driver assistance system for a motor vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20101118 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA MK RS |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20130304 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20141114 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20150325 |