EP3895415A1 - Transfer of additional information between camera systems (Transfer von Zusatzinformation zwischen Kamerasystemen)
- Publication number
- EP3895415A1 (application EP19797243.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- source
- pixels
- image
- target
- additional information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/107—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using stereoscopic cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/304—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images
Definitions
- the present invention relates to a method for processing images that have been recorded with different camera systems.
- the method can be used in particular for driver assistance systems and systems for at least partially automated driving.
- US Pat. No. 8,958,630 B1 discloses a method for producing a classifier for the semantic classification of image pixels that belong to different object types.
- the database of the learning data is enlarged in an unsupervised learning process.
- the additional information is linked to a source image, which a source camera system has recorded of the same scenery from a different perspective, or to source pixels of this source image.
- the source image is thus already provided with this additional information.
- the additional information can be of any type.
- it can contain physical measurement data that were acquired in connection with the acquisition of the source image.
- the source camera system can be a camera system that includes a source camera that is sensitive to visible light and a thermal imaging camera that is oriented to the same observation area. This source camera system can then record a source image with visible light, and each pixel of the source image is then assigned additional information as an intensity value from the thermal image recorded at the same time.
- the source pixels of the source image are assigned 3D locations in three-dimensional space, which correspond to the positions of the source pixels in the source image.
- a three-dimensional representation of the scenery is thus determined, which, when imaged with the source camera system, leads to the input source image.
- This representation does not have to be continuous and/or complete in three-dimensional space like a conventional three-dimensional scenery, especially since a unique three-dimensional scenery cannot be inferred from a single two-dimensional image.
- the three-dimensional representation obtained from a single source image can thus be, for example, a point cloud in three-dimensional space in which there are as many points as the source image has source pixels and in which the three-dimensional space is otherwise assumed to be empty.
- the three-dimensional volume is thus sparsely populated.
- Additional information that is assigned to source pixels is assigned to the respectively associated 3D locations.
- each point in the three-dimensional point cloud that corresponds to the source image is assigned the intensity value of the thermal image associated with the corresponding pixel in the source image.
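The lifting of source pixels to 3D locations (steps 110 and 120) can be sketched as follows. This is a minimal illustration, not part of the patent: it assumes a simple pinhole camera model with known intrinsics K and a known camera pose (R, t), plus a per-pixel depth map such as a stereo source camera system would supply; all names are chosen for illustration.

```python
import numpy as np

def backproject_to_world(depth, K, R, t):
    """Assign a 3D location to every source pixel.

    depth : (H, W) per-pixel depth in metres
    K     : (3, 3) source camera intrinsics
    R, t  : source camera pose, with X_world = R @ X_cam + t
    Returns an (H*W, 3) point cloud with one point per source pixel.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T          # viewing ray per pixel, camera frame
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    return pts_cam @ R.T + t                 # 3D locations in world coordinates

# Per-pixel additional information, e.g. a semantic label map of shape (H, W),
# travels along: labels.reshape(-1) yields one label per 3D location.
```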
- the 3D locations are now assigned those target pixels of the target image whose positions in the target image correspond to the 3D locations. It is determined which target pixels in the target image the 3D locations are mapped to when the three-dimensional scenery is recorded with the target camera system. This assignment results from the interaction of the arrangement of the target camera system in space with the imaging properties of the target camera system.
- the additional information that is assigned to the 3D locations is now assigned to the associated target pixels.
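Conversely, the assignment of 3D locations to target pixels (steps 130 and 140) amounts to projecting the labelled point cloud through the target camera. The following sketch again assumes a pinhole model with illustrative names; a real implementation would additionally let the nearest point along each ray win (z-buffering), so that occluded 3D locations do not overwrite visible ones.

```python
import numpy as np

def transfer_to_target(points, labels, K_t, R_t, t_t, shape):
    """Project labelled 3D locations into the target image.

    points : (N, 3) world-frame point cloud from the source image
    labels : (N,)   additional information attached to each 3D location
    K_t, R_t, t_t : target camera intrinsics and pose (X_world = R_t @ X_cam + t_t)
    shape  : (H, W) of the target image
    Returns an (H, W) label map; -1 marks target pixels that no 3D
    location was mapped to.
    """
    pts_cam = (points - t_t) @ R_t                 # world -> target camera frame
    front = pts_cam[:, 2] > 0                      # keep points in front of camera
    pix = pts_cam[front] @ K_t.T
    u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
    v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
    H, W = shape
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)   # inside the target image
    out = np.full(shape, -1, dtype=int)
    out[v[ok], u[ok]] = labels[front][ok]          # transfer additional information
    return out
```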
- the additional information that was originally developed in connection with the source image can be transferred to the target image. It is therefore possible to provide the target image with this additional information without having to physically record the additional information.
- the additional information, such as the infrared intensity from the thermal image in the above example, is not primarily physically linked to the source pixel of the source image, but to the associated 3D location in three-dimensional space: there is matter at this 3D location that emits infrared radiation. This 3D location is merely mapped to different positions in the source image and in the target image, since the source camera and the target camera view the 3D location from different perspectives.
- the method takes advantage of this connection by reconstructing 3D locations in a three-dimensional “world coordinate system” for source pixels of the source image and then assigning these 3D locations to target pixels of the target image.
- Such a semantic classification can, for example, assign information to each pixel of the type of the object to which the pixel belongs.
- the object can be, for example, a vehicle, a roadway, a roadway marking, a roadway boundary, a structural obstacle or a traffic sign.
- the semantic classification is often carried out with neural networks or other KI (artificial intelligence) modules. These KI modules are trained by feeding them a variety of learning images for which the correct semantic classification is known as "ground truth". It is checked to what extent the classification issued by the KI module corresponds to the "ground truth", and lessons are learned from the deviations by optimizing the processing of the KI module accordingly.
- Ground truth is usually obtained by having humans semantically classify a large number of images.
- People mark in the images which pixels belong to objects of which classes. This process, called "labeling", is time-consuming and expensive. So far, the additional information entered by people in this way has always been bound to the very camera system with which the learning images were taken. If one switched to a different type of camera system, such as from a normal perspective camera to a fish-eye camera, or merely changed the perspective of the existing camera system, the labeling process had to start all over again. Since the semantic classification already available for the source images recorded with the source camera system can now be transferred to the target images recorded with the target camera system, the work previously invested in connection with the source images can be reused.
- Driver assistance systems and systems for at least partially automated driving are using more and more cameras and more and more different camera perspectives.
- the source pixels can be assigned to 3D locations in any way.
- the associated 3D location for at least one source pixel can be determined from a time program, according to which at least one source camera of the source camera system moves in space.
- a “structure from motion” algorithm can be used to convert the time program of the movement of a single source camera into an assignment of the source pixels to 3D locations.
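As an illustration of this variant, a two-frame structure-from-motion reconstruction from a single moving source camera could look as follows, here sketched with OpenCV (an assumption; the patent does not prescribe any particular library, and the point arrays are illustrative inputs). Monocular reconstruction is only determined up to scale; the known time program of the camera motion fixes that scale.

```python
import cv2
import numpy as np

# pts0, pts1: (N, 2) float32 arrays of matched source pixels in two frames
# of the moving source camera; K: (3, 3) camera intrinsics.
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K)     # relative camera motion

P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first frame at the origin
P1 = K @ np.hstack([R, t])                         # second frame after the motion
pts4d = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
pts3d = (pts4d[:3] / pts4d[3]).T                   # 3D locations, up to scale
```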
- a source camera system with at least two source cameras is selected.
- the 3D locations associated with source pixels can then be determined by stereoscopic evaluation of source images that were recorded by both source cameras.
- the at least two source cameras can in particular be contained in a stereo camera system that supplies depth information for each pixel. This depth information can be used to assign the source pixels of the source image directly to 3D locations.
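For a rectified stereo pair, this depth information follows from the measured disparity via the classic relation Z = f·B/d, as in this small sketch (names illustrative, not from the patent):

```python
import numpy as np

def depth_from_disparity(disparity, f, baseline):
    """Depth per pixel from a rectified stereo pair: Z = f * B / d.

    disparity : (H, W) disparities between the two source cameras, in pixels
    f         : focal length in pixels
    baseline  : distance between the two source cameras in metres
    """
    d = np.where(disparity > 0, disparity, np.nan)  # mark invalid matches
    return f * baseline / d
```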
- source pixels from source images that were recorded by both source cameras can also be combined in order to assign additional information to more target pixels of the target image. Since the perspectives of the source camera system and the target camera system differ, the two camera systems do not depict exactly the same section of the three-dimensional scenery. Thus, if the additional information is transferred from all source pixels of a single source image to target pixels of the target image, not all target pixels of the target image will be covered. There will therefore be target pixels to which no additional information has yet been assigned. If several source cameras are used, preferably two or three source cameras, such gaps in the target image can be filled. However, this is not absolutely necessary for training a neural network or other KI module on the basis of the target image. In particular, in such a training, target pixels of the target image for which there is no additional information can be excluded from the evaluation during training.
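Excluding unlabelled target pixels from the evaluation is straightforward in common deep-learning frameworks; for example with PyTorch (one possible realisation, not prescribed by the patent), the -1 placeholder used in the projection sketch above can serve as an ignore index:

```python
import torch
import torch.nn.functional as F

# logits: (B, C, H, W) output of the KI module for a batch of target images.
# target_labels: (B, H, W) long tensor of labels transferred from the source
# images, with -1 wherever no additional information reached the target pixel.
loss = F.cross_entropy(logits, target_labels, ignore_index=-1)
# Pixels labelled -1 contribute neither to the loss nor to the gradients.
```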
- in principle, any 3D sensor can deliver a point cloud that is suitable, with an appropriate calibration, for obtaining the 3D structure observed by both the source and the target camera system.
- the calibration procedure locates both the source pixels and the target pixels in 3D space, thus ensuring that the training information can be transferred from the source system to the target system.
- additional 3D sensors that determine the connecting 3D structure of the observed scenery only for the training could be, for example, time-of-flight (TOF) cameras.
- a source image and a target image are selected which have been recorded simultaneously. In this way it is ensured that, especially in the case of dynamic scenery with moving objects, the source image and the target image show the same state of the scenery apart from the different camera perspective. If, on the other hand, there is a temporal offset between the source image and the target image, an object that was still present in one image may already have left the detection range by the time the other image is captured.
- a source camera system and a target camera system are selected, which are mounted on the same vehicle in a fixed relative orientation to one another.
- the fixed connection of the two camera systems ensures that the difference in perspective between the two camera systems remains constant while driving.
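Because both camera systems are rigidly mounted, the source-to-target transform can be computed once from the two vehicle-frame extrinsics and then reused for every frame of the drive. A minimal sketch, assuming poses of the form X_vehicle = R @ X_cam + t (an assumed convention, not specified by the patent):

```python
import numpy as np

def source_to_target(R_s, t_s, R_t, t_t):
    """Constant transform between two cameras fixed to the same vehicle.

    R_s, t_s / R_t, t_t: vehicle-frame extrinsics of the source and target
    camera, with X_vehicle = R @ X_cam + t. Because of the fixed relative
    orientation, the returned (R, t) with X_target = R @ X_source + t
    holds for every frame while driving.
    """
    R = R_t.T @ R_s
    t = R_t.T @ (t_s - t_t)
    return R, t
```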
- the invention also relates to a method for training a KI module which assigns additional information to an image taken by a camera system, and/or to pixels of such an image, by processing in an internal processing chain.
- This additional information can in particular be a classification of image pixels.
- the internal processing chain of the KI module can in particular include an artificial neural network (KNN).
- the behavior of the internal processing chain is determined by parameters. These parameters are optimized when training the KI module. For a KNN, for example, the parameters can be weights with which the inputs received by a neuron are weighted relative to one another.
- an error function (loss function) can depend on the deviation determined in the comparison, and the parameters can be optimized with the aim of minimizing this error function. Any multivariate optimization method can be used for this, such as a gradient descent method.
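A minimal training step along these lines, sketched with PyTorch and plain stochastic gradient descent (`model` and `loader` are assumed placeholders for the KI module and the learning-data source, not part of the patent):

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient descent

for images, labels in loader:          # learning images with learning labels
    logits = model(images)             # internal processing chain of the KI module
    loss = F.cross_entropy(logits, labels, ignore_index=-1)  # error function
    optimizer.zero_grad()
    loss.backward()                    # deviations from ground truth -> gradients
    optimizer.step()                   # optimise the parameters
```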
- the additional learning information has been at least partially transferred to the learning images with the previously described method.
- the methods can in particular be carried out on a computer and / or on a control device and can be embodied in software to that extent.
- This software is an independent product with customer benefits.
- the invention therefore also relates to a computer program with machine-readable instructions which, when executed on a computer and / or a control device, cause the computer and / or the control device to carry out one of the methods described.
- FIG. 2 exemplary source image 21;
- FIG. 3 exemplary translation of the source image 21 into a point cloud in three-dimensional space;
- FIG. 4 exemplary target image 31 with additional information 4, 41, 42 transferred from the source image 21;
- FIG. 5 exemplary arrangement of a source camera system 2 and a target camera system 3 on a vehicle 6;
- FIG. 6 exemplary embodiment of the method 200.
- source pixels 21a of a source image 21 are assigned 3D locations 5 in three-dimensional space.
- the associated 3D location 5 for at least one source pixel 21a can be determined from a time program, according to which at least one source camera of the source camera system 2 moves in space.
- the associated 3D location 5 for at least one source pixel 21a can be determined by stereoscopic evaluation of source images 21, which were recorded by two source cameras.
- a source camera system with at least two source cameras was selected in step 105.
- a source image 21 and a target image 31 can be selected which have been recorded simultaneously.
- a source camera system 2 and a target camera system 3 can also be selected, which are mounted on the same vehicle 6 in a fixed relative orientation 61 to one another.
- in step 120, the additional information 4, 41, 42 that is assigned to the source pixels 21a of the source image 21 is assigned to the respectively associated 3D locations 5.
- in step 130, those target pixels 31a of the target image 31 whose positions in the target image 31 correspond to the 3D locations 5 are assigned to the 3D locations.
- in step 140, the additional information 4, 41, 42 that is assigned to the 3D locations 5 is assigned to the associated target pixels 31a.
- FIG. 2 shows a two-dimensional source image 21 with coordinate directions x and y, which a source camera system 2 has recorded from a scenery 1.
- the source image 21 was semantically segmented. In the example shown in FIG. 2, for a subarea of the source image 21 the additional information 4, 41 was acquired that this subarea belongs to a vehicle 11 present in the scenery 1.
- for further subareas, the additional information 4, 42 was acquired that these subareas belong to road markings 12 present in the scenery 1.
- a single pixel 21a of the source image 21 is marked as an example in FIG. 2.
- in FIG. 3, the source pixels 21a are translated into 3D locations 5 in three-dimensional space; this is indicated by the reference symbol 5 for the source pixel 21a from FIG. 2.
- if the additional information 4, 41 stored for a source pixel 21a indicates that this source pixel 21a belongs to a vehicle 11, then this additional information 4, 41 was also assigned to the corresponding 3D location 5. If the additional information 4, 42 stored for a source pixel 21a indicates that this source pixel 21a belongs to a road marking 12, then this additional information 4, 42 was also assigned to the corresponding 3D location 5. This is represented by the different symbols with which the respective 3D locations 5 are drawn in the point cloud shown in FIG. 3.
- FIG. 3 also shows that the source image 21 shown in FIG. 2 was taken from perspective A.
- the target image 31 is taken from the perspective B drawn in FIG. 3.
- This exemplary target image 31 is shown in FIG. 4. It is shown here by way of example that the source pixel 21a was ultimately assigned to the target pixel 31a via the detour over the associated 3D location 5. Accordingly, every target pixel 31a in FIG. 4 for which there is an associated source pixel 21a with stored additional information 4, 41, 42 has been provided with this additional information 4, 41, 42 via the same detour over the associated 3D location 5. The work invested so far in the semantic segmentation of the source image 21 was therefore fully reused.
- The additional information 4, 41 that source pixels 21a belong to the vehicle 11 was, however, only recorded with respect to the rear area of the vehicle 11 visible in FIG. 2. Thus, the front area of the vehicle 11 shown in dashed lines in FIG. 4 is not provided with this additional information 4, 41.
- This extreme, constructed example shows that it is advantageous to combine source images 21 from several source cameras in order to provide as many target pixels 31a of the target image 31 as possible with additional information 4, 41, 42.
- FIG. 5 shows an exemplary arrangement of a source camera system 2 and a target camera system 3, both of which are mounted on the same vehicle 6 in a fixed relative orientation 61 to one another. In the example shown in FIG. 5, this fixed relative orientation 61 is established by the rigid test vehicle.
- the source camera system 2 observes the scenery 1 from a first perspective A'.
- the target camera system 3 observes the same scenery 1 from a second perspective B '.
- the described method 100 enables additional information 4, 41, 42, which was acquired in connection with the source camera system 2, to be used in the context of the target camera system 3.
- FIG. 6 shows an exemplary embodiment of the method 200 for training a KI module 50.
- the KI module 50 comprises an internal processing chain 51, the behavior of which is determined by parameters 52.
- in step 210 of the method 200, learning images 53 with pixels 53a are input into the KI module 50.
- the KI module 50 assigns additional information 4, 41, 42 to these learning images 53 and/or to their pixels 53a.
- in step 220, the additional information 4, 41, 42 actually supplied by the KI module 50 is compared with the additional learning information 54.
- the result 220a of this comparison 220 is used in step 230 to optimize the parameters 52 of the internal processing chain 51 of the KI module 50.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102018221625.8A DE102018221625A1 (de) | 2018-12-13 | 2018-12-13 | Transfer von Zusatzinformation zwischen Kamerasystemen |
PCT/EP2019/079535 WO2020119996A1 (de) | 2018-12-13 | 2019-10-29 | Transfer von zusatzinformation zwischen kamerasystemen |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3895415A1 (de) | 2021-10-20 |
Family
ID=68424887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19797243.3A Withdrawn EP3895415A1 (de) | 2018-12-13 | 2019-10-29 | Transfer von zusatzinformation zwischen kamerasystemen |
Country Status (5)
Country | Link |
---|---|
US (1) | US20210329219A1 (de) |
EP (1) | EP3895415A1 (de) |
CN (1) | CN113196746A (de) |
DE (1) | DE102018221625A1 (de) |
WO (1) | WO2020119996A1 (de) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102020211808A1 (de) | 2020-09-22 | 2022-03-24 | Robert Bosch Gesellschaft mit beschränkter Haftung | Erzeugen gestörter Abwandlungen von Bildern |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10246355A1 (de) * | 2002-10-04 | 2004-04-15 | Rust, Georg-Friedemann, Dr. | Interaktive virtuelle Endoskopie |
CN101443817B (zh) * | 2006-03-22 | 2013-06-12 | 皮尔茨公司 | 用于确定场景的三维重建时的对应关系的方法和装置 |
US8330801B2 (en) | 2006-12-22 | 2012-12-11 | Qualcomm Incorporated | Complexity-adaptive 2D-to-3D video sequence conversion |
US8958630B1 (en) | 2011-10-24 | 2015-02-17 | Google Inc. | System and method for generating a classifier for semantically segmenting an image |
US9414048B2 (en) | 2011-12-09 | 2016-08-09 | Microsoft Technology Licensing, Llc | Automatic 2D-to-stereoscopic video conversion |
US20140071240A1 (en) * | 2012-09-11 | 2014-03-13 | Automotive Research & Testing Center | Free space detection system and method for a vehicle using stereo vision |
WO2014115817A1 (ja) * | 2013-01-23 | 2014-07-31 | 株式会社東芝 | 動作情報処理装置 |
JP7018566B2 (ja) * | 2017-04-28 | 2022-02-14 | パナソニックIpマネジメント株式会社 | 撮像装置、画像処理方法及びプログラム |
JP2018188043A (ja) * | 2017-05-10 | 2018-11-29 | 株式会社ソフトウェア・ファクトリー | 操船支援装置 |
US10977818B2 (en) * | 2017-05-19 | 2021-04-13 | Manor Financial, Inc. | Machine learning based model localization system |
CN111238494B (zh) * | 2018-11-29 | 2022-07-19 | 财团法人工业技术研究院 | 载具、载具定位系统及载具定位方法 |
- 2018
  - 2018-12-13 DE DE102018221625.8A patent/DE102018221625A1/de not_active Ceased
- 2019
  - 2019-10-29 EP EP19797243.3A patent/EP3895415A1/de not_active Withdrawn
  - 2019-10-29 WO PCT/EP2019/079535 patent/WO2020119996A1/de unknown
  - 2019-10-29 US US17/271,046 patent/US20210329219A1/en not_active Abandoned
  - 2019-10-29 CN CN201980082462.3A patent/CN113196746A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2020119996A1 (de) | 2020-06-18 |
DE102018221625A1 (de) | 2020-06-18 |
CN113196746A (zh) | 2021-07-30 |
US20210329219A1 (en) | 2021-10-21 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
2021-07-13 | 17P | Request for examination filed | Effective date: 20210713 |
 | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | DAV | Request for validation of the european patent (deleted) | |
 | DAX | Request for extension of the european patent (deleted) | |
2023-05-09 | P01 | Opt-out of the competence of the unified patent court (upc) registered | Effective date: 20230509 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
2024-02-27 | 18W | Application withdrawn | Effective date: 20240227 |