WO2021245151A1 - Unsupervised learning of a common representation of data from sensors of different modality - Google Patents

Unsupervised learning of a common representation of data from sensors of different modality

Info

Publication number
WO2021245151A1
WO2021245151A1 (PCT/EP2021/064828)
Authority
WO
WIPO (PCT)
Prior art keywords
measurement data
sensor
representations
trainable
encoders
Prior art date
Application number
PCT/EP2021/064828
Other languages
German (de)
English (en)
Inventor
Fabian TIMM
Alexandru Paul Condurache
Rainer Stal
Sebastian Muenzner
Florian Faion
Claudius Glaeser
Florian Drews
Jasmin Ebert
Thomas Gumpp
Michael Ulrich
Lars Rosenbaum
Original Assignee
Robert Bosch Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch Gmbh
Publication of WO2021245151A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present invention relates to the processing of sensor data, for example for the at least partially automated driving of vehicles in road traffic.
  • DE 102017223206 A1 discloses a device that uses an artificial neural network to determine movement paths of objects.
  • a device for processing measurement data from at least two sensors comprises at least one first trainable encoder, which is designed to map measurement data from the first sensor onto representations in a work space, as well as a second trainable encoder, which is designed to map measurement data from the second sensor onto representations in this same work space.
  • the device further comprises an evaluator, which is designed to determine, from a representation supplied by one of the encoders for specific measurement data, a work result relating to this specific measurement data.
  • the work space can also be referred to as a “latent space” and is determined by the concrete encoders. Representations in the work space can only be obtained by feeding data to at least one of the encoders, which maps the data onto a representation. If, for example, two concrete encoders each generate representations characterized by vectors or matrices with a certain number of elements, the work space is not identical to the space of all vectors or matrices with this number of elements, but is a subspace thereof. If a vector or matrix with the number of elements in question is drawn at random, there is no guarantee that it lies in the work space.
  • the work space typically has a smaller dimensionality than the space of possible measurement data. For example, an image that assigns intensity values to a large number of pixels in several color channels is usually mapped onto a representation characterized by fewer numerical values than there were intensity values originally. However, this is not strictly necessary.
  • a trainable module, which can be used as an encoder, for example, is understood in particular as a module that embodies a function parameterized with adaptable parameters and having great power to generalize.
  • the parameters can in particular be adapted in such a way that the output of the trainable module is optimal according to a quality function or cost function with regard to a predetermined goal.
  • the trainable module can in particular contain an artificial neural network, ANN, and/or it can be an ANN or part of one.
  • the encoder and the decoder can in particular be combined in a common ANN; a minimal sketch follows below.
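  • As an illustration of such a combined ANN, the following is a minimal sketch only: PyTorch, 3x64x64 camera images and a 128-dimensional work space are all assumptions, since the patent prescribes no concrete framework, layer sizes or latent dimensionality.

```python
# Hedged sketch of an encoder and decoder combined in one common ANN.
import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    """Encoder and decoder combined in one common ANN (assumed shapes)."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: 3*64*64 = 12288 intensity values -> 128 numbers, i.e.
        # the representation has far fewer values than the original image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: reconstructs the image from the representation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # representation in the work space
        return z, self.decoder(z)  # plus the reconstructed measurement data
```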
  • the measurement data can be, for example, data from vehicle surroundings sensors: camera images, radar reflections, LIDAR point clouds and/or ultrasonic echoes.
  • the surroundings of a vehicle are often observed with several of these sensor modalities at the same time, since no single sensor modality always enables optimal recording of all relevant information in every driving situation.
  • the encoders can already be pre-trained to a large extent with measurement data from several sensors, independently of any specific evaluator.
  • Such training can be aimed, for example, at reconstructing the originally input measurement data from the representations generated by the encoders.
  • the training can, for example, also be aimed at reconstructing, from a representation that goes back to measurement data from a first sensor (such as a camera), the measurement data that a second sensor (such as a radar sensor) recorded at the same time.
  • the evaluator specific to the respective application can then be trained in the usual way: learning measurement data are fed to the encoders, and the evaluator is trained so that, when the representation fed to it goes back to certain learning measurement data, it produces the learning work result associated with these learning measurement data.
  • the pre-trained encoders already contain a great deal of learned knowledge about recognizing features in measurement data of one sensor modality, as well as about recognizing features common to both sensor modalities. As a consequence, the training of the evaluator makes do with less training data “labeled” for its specific application, while ultimately achieving the same or better accuracy.
  • both encoders can learn how to identify certain basic features in the respective measurement data, such as edges, both individually and together.
  • the evaluator then no longer has to start “from scratch”, but receives, in the representations, a refined form of the measurement data in which the said basic features have already been worked out. This simplifies the work of the evaluator and thus also the training required.
  • the encoder training can also be reused for a large number of evaluators.
  • the aforementioned working out of basic features in the representations can be used equally by an evaluator specialized in recognizing traffic signs and by an evaluator specialized in recognizing other vehicles.
  • the same representations can also be supplied to several different evaluators.
  • an evaluator can also be exchanged more easily for an improved new version.
  • the evaluator is designed to assign the measurement data underlying a representation to one or more classes of a predetermined classification, and/or to determine a semantic segmentation of this measurement data.
  • the evaluator is designed to determine, from a representation that goes back to measurement data from one sensor, realistic measurement data of another sensor.
  • such a sensor simulation can in particular be used, for example, to obtain realistic training data for training an evaluation of measurement data from this other sensor; a minimal usage sketch follows below.
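  • A minimal usage sketch of such a sensor simulation, under the assumption of already trained modules `camera_encoder` and `radar_decoder`; the names, interfaces and the PyTorch framing are illustrative assumptions only:

```python
# The encoder of one modality chained with the decoder of the other
# modality acts as a sensor simulation, e.g. to generate training data
# for an evaluator of radar measurement data.
import torch

@torch.no_grad()
def simulate_radar_from_camera(camera_encoder, radar_decoder, images):
    z = camera_encoder(images)   # representations in the common work space
    return radar_decoder(z)      # realistic, simulated radar measurement data
```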
  • the invention therefore relates very generally to a method for training at least two trainable encoders for use in the device described above.
  • the first encoder is used to determine representations in a workspace from measurement data from a first sensor.
  • the second encoder is used to determine representations in the workspace from measurement data from a second sensor.
  • with a first trainable decoder, measurement data of the first sensor are reconstructed from representations that were supplied by at least one of the encoders.
  • with a second trainable decoder, measurement data of the second sensor are reconstructed from representations that were supplied by at least one of the encoders. It is not necessary to use both decoders for all measurement data processed by at least one encoder. For example, part of the measurement data may go back to only one sensor and, after processing by an encoder, be reconstructed by just one decoder. Over the totality of the measurement data, however, both decoders are each used at least once.
  • Parameters that characterize the behavior of the trainable encoders and the trainable decoders are optimized with the aim that the reconstructed measurement data of the first sensor are realistic measurement data of the first sensor and the reconstructed measurement data of the second sensor are realistic measurement data of the second sensor; an illustrative training step is sketched below.
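  • The following sketch illustrates one possible joint pre-training step toward these goals, combining self-reconstruction and cross-reconstruction; the mean-squared-error loss, the optimizer handling, the module names and the camera/radar pairing are assumptions, since the patent does not fix these details:

```python
# Hedged sketch: one joint pre-training step for two encoders and two
# decoders on simultaneous observations of the same scene. No labels needed.
import torch.nn.functional as F

def pretraining_step(enc_cam, enc_rad, dec_cam, dec_rad, optimizer,
                     cam_batch, rad_batch):
    z_cam = enc_cam(cam_batch)  # representation 5a of measurement data 3a
    z_rad = enc_rad(rad_batch)  # representation 5b of measurement data 3b

    # Self-reconstruction: each modality from its own representation.
    loss = F.mse_loss(dec_cam(z_cam), cam_batch) \
         + F.mse_loss(dec_rad(z_rad), rad_batch)

    # Cross-reconstruction: each modality from the other representation;
    # this pushes common features of both modalities into the work space.
    loss = loss + F.mse_loss(dec_rad(z_cam), rad_batch) \
                + F.mse_loss(dec_cam(z_rad), cam_batch)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```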
  • the check of whether the reconstructed measurement data of a sensor are actually realistic measurement data of this sensor does not require any additional information in the form of “labels”. Rather, this check can be carried out using the measurement data itself.
  • the training of the encoders is then interlinked with the training of the decoders in such a way that a self-consistent solution results. In many cases, only the encoders from this training result are still required in the device that is ultimately implemented.
  • the decoders, which served as “training wheels” during the training of the encoders, much as when learning to ride a bicycle, are only required in a few cases in the finished device, such as in sensor simulation.
  • measurement data from a sensor are reconstructed from representations that go back to measurement data from this same sensor.
  • the reconstructed measurement data are compared with the measurement data of the sensor on which they are based.
  • each encoder can be trained to work out, in its representation in the common work space, precisely those features that are important for a successful reconstruction of the measurement data from this representation.
  • measurement data from another sensor are reconstructed from representations that are based on measurement data from one sensor.
  • the reconstructed measurement data are compared with measurement data from this other sensor.
  • the encoders are thus also trained with a focus on the aspect of multimodal cooperation. They are thereby forced, as it were, to work out commonalities of the sensor modalities in the representations they generate.
  • Measurement data from the sensors that relate to the simultaneous observation of one and the same scene are particularly advantageously used during training. Reconstructed measurement data from a sensor can then best be compared with actually recorded measurement data from this sensor. If the measurement data from the sensors relate to different observation periods and/or different scenes, it is still possible, for example, to check whether the reconstructed measurement data are realistic measurement data of the respective sensor at all.
  • the parameters that characterize the behavior of the trainable encoders are additionally optimized with the aim of making representations that go back to measurement data from the first sensor difficult to distinguish from representations that go back to measurement data from the second sensor. This increases the tendency of the representations supplied by both encoders to preferably contain features common to the sensor modalities used.
  • as a measure of this distinguishability, the transinformation (“mutual information”) between representations going back to measurement data from the first sensor on the one hand and representations going back to measurement data from the second sensor on the other hand can advantageously be used; one possible estimator is sketched below.
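  • The patent does not fix how the transinformation is to be estimated. As one common, trainable proxy, the following sketch uses the InfoNCE bound, a standard lower bound on the mutual information between paired representations; this concrete estimator is an assumption:

```python
# Hedged sketch: InfoNCE as a trainable lower bound on the mutual
# information. z_cam[i] and z_rad[i] must come from the same scene;
# all other pairs in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(z_cam, z_rad, temperature: float = 0.1):
    z_cam = F.normalize(z_cam, dim=1)
    z_rad = F.normalize(z_rad, dim=1)
    logits = z_cam @ z_rad.t() / temperature  # pairwise similarities
    targets = torch.arange(z_cam.size(0), device=z_cam.device)
    # Minimizing this cross-entropy maximizes the InfoNCE bound on the
    # transinformation between the two sets of representations.
    return F.cross_entropy(logits, targets)
```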
  • a trainable discriminator is trained to distinguish representations that go back to measurement data from the first sensor from representations that go back to measurement data from the second sensor. This discriminator evaluates how difficult it is to distinguish between these representations. In this way, the encoders can be trained towards minimal distinguishability in the style of a Generative Adversarial Network, GAN; see the sketch below.
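  • A sketch of such GAN-style training of a latent discriminator 10; the architecture, the binary cross-entropy losses and the camera/radar naming are assumptions:

```python
# Hedged sketch: the discriminator learns to tell camera representations
# from radar representations, while the encoders are rewarded for
# fooling it, analogous to a GAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDiscriminator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # logit for "representation comes from camera"
        )

    def forward(self, z):
        return self.net(z).squeeze(1)

def adversarial_losses(disc, z_cam, z_rad):
    ones = torch.ones(z_cam.size(0), device=z_cam.device)
    zeros = torch.zeros(z_rad.size(0), device=z_rad.device)
    # Discriminator loss: separate the two modalities as well as possible.
    d_loss = F.binary_cross_entropy_with_logits(disc(z_cam.detach()), ones) \
           + F.binary_cross_entropy_with_logits(disc(z_rad.detach()), zeros)
    # Encoder loss: make the representations hard to tell apart.
    g_loss = F.binary_cross_entropy_with_logits(disc(z_cam), zeros) \
           + F.binary_cross_entropy_with_logits(disc(z_rad), ones)
    return d_loss, g_loss
```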
  • the invention also relates to a method for training the complete device described above. As part of this method, the trainable encoders are pre-trained with the previously described method. Learning measurement data and the associated learning work results are provided. The learning measurement data are processed with the pre-trained encoders into representations in a work space. Parameters that characterize the behavior of the evaluator are optimized with the aim that the evaluator maps representations that go back to the learning measurement data onto the learning work results.
  • the pre-training of the encoders in this context has the effect that the training of the evaluator can already fall back on generally learned knowledge about recognizing features in the measurement data. Therefore, only the part of the evaluation that is specific to the concrete application of the evaluator needs to be trained. In particular, after the evaluator is exchanged, all learned knowledge that can also be used for the new evaluator is retained and does not have to be learned again.
  • parameters that characterize the behavior of at least one encoder of the device can additionally be further optimized with the aim that the evaluator maps representations that go back to the learning measurement data onto the learning work results; both variants are sketched below.
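  • A sketch of this evaluator training, covering both the frozen-encoder case and the optional joint fine-tuning of the encoder parameters; the classification loss, the Adam optimizer and the interfaces are assumptions:

```python
# Hedged sketch: train the evaluator on representations produced by the
# pre-trained encoder; `loader` is assumed to yield pairs of learning
# measurement data and learning work results (labels).
import torch
import torch.nn as nn

def train_evaluator(encoder, evaluator, loader,
                    finetune_encoder: bool = False, epochs: int = 10):
    params = list(evaluator.parameters())
    if finetune_encoder:
        # Optional refinement: also adapt the encoder parameters.
        params += list(encoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()  # e.g. for a classification task
    for _ in range(epochs):
        for measurements, labels in loader:
            z = encoder(measurements)  # representations in the work space
            if not finetune_encoder:
                z = z.detach()         # keep the pre-trained encoder frozen
            loss = loss_fn(evaluator(z), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return evaluator
```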
  • the invention therefore also relates to a computer program with machine-readable instructions which, when they are executed on one or more computers, cause the computer or computers to carry out one of the described methods.
  • control devices for vehicles and embedded systems for technical devices that are likewise able to execute machine-readable instructions are also to be regarded as computers.
  • the invention also relates to a machine-readable data carrier and / or to a download product with the computer program.
  • a download product is a digital product that can be transmitted via a data network, i.e., downloaded by a user of the data network, and that can be offered for sale in an online shop, for example, for immediate download.
  • a computer can be equipped with the computer program, with the machine-readable data carrier or with the download product.
  • FIG. 1 exemplary embodiment of the device 1;
  • FIG. 2 exemplary embodiment of the method 100;
  • FIG. 3 exemplary feedback paths for training the encoders 4a, 4b;
  • FIG. 4 exemplary embodiment of the method 200.
  • FIG. 1 shows an exemplary embodiment of the device 1.
  • the device 1 comprises two trainable encoders 4a and 4b and an evaluator 7.
  • Measurement data 3a from a first sensor 2a are mapped onto a first representation 5a in a work space 6 by the first trainable encoder 4a. From this representation 5a, the evaluator 7 determines a work result 8 in relation to the measurement data 3a.
  • Measurement data 3b from a second sensor 2b are mapped onto a second representation 5b in the work space 6 by the second trainable encoder 4b. From this representation 5b, the evaluator 7 in turn determines a work result 8 in relation to the measurement data 3b.
  • FIG. 2 shows an exemplary embodiment of the method 100 for training the encoders 4a, 4b.
  • in step 110, measurement data 3a from a first sensor 2a are converted into representations 5a in a work space 6.
  • in step 120, measurement data 3b from a second sensor 2b are converted into representations 5b in the work space 6.
  • in step 130, measurement data 3a* of the first sensor 2a are reconstructed with a first trainable decoder 9a from representations 5a, 5b that were supplied by at least one of the encoders 4a, 4b.
  • representations 5a can be used which are based on measurement data 3a of the first sensor 2a.
  • alternatively or in combination with this, representations 5b that go back to measurement data 3b of the second sensor 2b can be used.
  • in step 140, a second trainable decoder 9b is used to reconstruct measurement data 3b* of the second sensor 2b from representations 5a, 5b that were supplied by at least one of the encoders 4a, 4b.
  • representations 5b that go back to measurement data 3b of the second sensor 2b can be used.
  • alternatively or in combination with this, representations 5a that go back to measurement data 3a of the first sensor 2a can be used.
  • in step 150, parameters 4a*, 4b*, 9a*, 9b*, which characterize the behavior of the trainable encoders 4a, 4b and the trainable decoders 9a, 9b, are optimized with the aim that the reconstructed measurement data 3a* of the first sensor 2a are realistic measurement data 3a of the first sensor 2a and the reconstructed measurement data 3b* of the second sensor 2b are realistic measurement data 3b of the second sensor 2b.
  • reconstructed measurement data 3a*, 3b* can be used which, according to blocks 131 and 141, each relate to the sensor 2a, 2b from whose measurement data 3a, 3b the representations 5a, 5b used for the reconstruction originate.
  • measurement data 3a*, 3b* reconstructed according to block 152 can also be used which, according to blocks 132 and 142, each relate to a different sensor 2a, 2b than the one to whose measurement data 3a, 3b the representations 5a, 5b used for the reconstruction go back.
  • the parameters 4a*, 4b*, which characterize the behavior of the trainable encoders 4a, 4b, can additionally be optimized with the aim that representations 5a, which go back to measurement data 3a of the first sensor 2a, are difficult to distinguish from representations 5b, which go back to measurement data 3b of the second sensor 2b.
  • this can be measured using the transinformation 11 between the representations 5a and 5b.
  • a trainable discriminator 10 can be trained to distinguish representations 5a, which are based on measurement data 3a of the first sensor 2a, from representations 5b, which are based on measurement data 3b of the second sensor 2b. This discriminator 10 can then be used in accordance with block 153c to evaluate how difficult it is to distinguish the representations 5a, 5b from one another.
  • FIG. 3 illustrates several ways in which feedback for training the trainable encoders 4a, 4b can be obtained.
  • Measurement data 3a from the first sensor 2a are converted by the first encoder 4a into a representation 5a in the working space 6.
  • This representation 5a can be converted with the first decoder 9a into reconstructed measurement data 3a * of the first sensor 2a, which can be compared with the original measurement data 3a.
  • the representation 5a can, however, also be converted with the second decoder 9b into reconstructed measurement data 3b * of the second sensor 2b, which can be compared with measurement data 3b of the second sensor 2b.
  • these measurement data 3b should ideally relate to the same scenery and the same observation time as the measurement data 3a.
  • measurement data 3b from the second sensor 2b are transferred by the second encoder 4b into a representation 5b in the working space 6.
  • This representation 5b can be converted with the second decoder 9b into reconstructed measurement data 3b * of the second sensor 2b, which can be compared with the original measurement data 3b.
  • the representation 5b can, however, also be converted with the first decoder 9a into reconstructed measurement data 3a * from the first sensor 2a, which can be compared with measurement data 3a from the first sensor 2a. Then these measurement data 3a should ideally relate to the same scenery and the same observation time as the measurement data 3b.
  • the transinformation 11 between the representations 5a, 5b can also be used as feedback for training the encoders 4a, 4b.
  • a discriminator 10 can be used to evaluate how difficult or how easy it is to distinguish the representations 5a, 5b from one another. This can also be used as feedback for training the encoders 4a, 4b.
  • FIG. 4 shows an exemplary embodiment of the method 200 for training the device 1.
  • trainable encoders 4a, 4b are pre-trained with the method 100 described above, resulting in a correspondingly trained state of the parameters 4a*, 4b* that characterize the behavior of these encoders 4a, 4b.
  • learning measurement data 3a#, 3b# and the associated learning work results 8#, which the device 1 is nominally supposed to determine from the learning measurement data 3a#, 3b#, are provided.
  • in step 230, the learning measurement data 3a#, 3b# are processed with the pre-trained encoders 4a, 4b into representations 5a, 5b in a work space 6.
  • in step 240, parameters 7*, which characterize the behavior of the evaluator 7, are optimized with the aim that the evaluator 7 maps representations 5a, 5b that go back to the learning measurement data 3a#, 3b# onto the learning work results 8#.
  • parameters 4a*, 4b*, which characterize the behavior of at least one encoder 4a, 4b of the device 1, can also be further optimized with the aim that the evaluator 7 maps representations 5a, 5b that go back to the learning measurement data 3a#, 3b# onto the learning work results 8#.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a device (1) for processing measurement data (3a, 3b) from at least two sensors (2a, 2b), comprising at least one first trainable encoder (4a), designed to map measurement data (3a) from the first sensor (2a) onto representations (5a) in a work space (6); a second trainable encoder (4b), designed to map measurement data (3b) from the second sensor (2b) onto representations (5b) in this same work space (6); and an evaluator (7), designed to determine, from a representation (5a, 5b) supplied by one of the encoders (4a, 4b) for specific measurement data (3a, 3b), a work result (8) relating to this specific measurement data (3a, 3b). The invention also relates to a method (100) for training at least two trainable encoders (4a, 4b) for use in the device (1), and to a method (200) for training the device (1), in which the evaluator (7) is also trained.
PCT/EP2021/064828 2020-06-04 2021-06-02 Unsupervised learning of a common representation of data from sensors of different modality WO2021245151A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020206990.5 2020-06-04
DE102020206990.5A DE102020206990A1 (de) 2020-06-04 2020-06-04 Vorrichtung zur Verarbeitung von Sensordaten und Trainingsverfahren

Publications (1)

Publication Number Publication Date
WO2021245151A1 (fr) 2021-12-09

Family

ID=76325539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/064828 WO2021245151A1 (fr) 2020-06-04 2021-06-02 Unsupervised learning of a common representation of data from sensors of different modality

Country Status (2)

Country Link
DE (1) DE102020206990A1 (fr)
WO (1) WO2021245151A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022201073A1 (de) 2022-02-02 2023-08-03 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren zur Objekterkennung, Bilderkennungsvorrichtung, Computerprogramm und Speichereinheit

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017223206A1 (de) 2017-12-19 2019-06-19 Robert Bosch Gmbh Niederdimensionale Ermittlung von abgegrenzten Bereichen und Bewegungspfaden

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
G. E. HINTON ET AL: "Reducing the Dimensionality of Data with Neural Networks", SCIENCE, vol. 313, no. 5786, 28 July 2006 (2006-07-28), US, pages 504 - 507, XP055313000, ISSN: 0036-8075, DOI: 10.1126/science.1127647 *
GUO WENZHONG ET AL: "Deep Multimodal Representation Learning: A Survey", IEEE ACCESS, vol. 7, 23 May 2019 (2019-05-23), pages 63373 - 63394, XP011726992, DOI: 10.1109/ACCESS.2019.2916887 *
HIASA YUTA ET AL: "Cross-modality image synthesis from unpaired data using CycleGAN: Effects of gradient consistency loss and training data size", 31 July 2018 (2018-07-31), pages 1 - 10, XP055836318, Retrieved from the Internet <URL:https://arxiv.org/pdf/1803.06629.pdf> [retrieved on 20210831] *
LIU MING-YU ET AL: "Coupled Generative Adversarial Networks", 20 September 2016 (2016-09-20), pages 1 - 32, XP055836349, Retrieved from the Internet <URL:https://arxiv.org/pdf/1606.07536.pdf> [retrieved on 20210831] *
MAHAJAN SHWETA ET AL: "Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings", 14 September 2019 (2019-09-14), pages 1 - 15, XP055835966, Retrieved from the Internet <URL:https://arxiv.org/pdf/1909.06635.pdf> [retrieved on 20210830] *
NGIAM JIQUAN ET AL: "Multimodal Deep Learning", May 2011 (2011-05-01), pages 1 - 8, XP055836369, Retrieved from the Internet <URL:https://icml.cc/2011/papers/399_icmlpaper.pdf> [retrieved on 20210831] *
ZHU JUN-YAN ET AL: "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", 15 November 2018 (2018-11-15), pages 1 - 18, XP055836312, Retrieved from the Internet <URL:https://arxiv.org/pdf/1703.10593v6.pdf> [retrieved on 20210831] *

Also Published As

Publication number Publication date
DE102020206990A1 (de) 2021-12-09

Similar Documents

Publication Publication Date Title
DE102018128289B4 (de) Verfahren und vorrichtung für eine autonome systemleistung und zur einstufung
DE102018220941A1 (de) Auswertung von Messgrößen mit KI-Modulen unter Berücksichtigung von Messunsicherheiten
EP3701434A1 (fr) Procédé et dispositif destinés à produire automatiquement un réseau neuronal artificiel
WO2020051618A1 (fr) Analyse de scénarios spatiaux dynamiques
WO2021245151A1 (fr) 2021-12-09 Apprentissage non surveillé d'une présentation commune de données provenant de capteurs de modalité différente
DE102019214200A1 (de) Übersetzung von Trainingsdaten zwischen Beobachtungsmodalitäten
DE102020214596A1 (de) Verfahren zum Erzeugen von Trainingsdaten für ein Erkennungsmodell zum Erkennen von Objekten in Sensordaten einer Umfeldsensorik eines Fahrzeugs, Verfahren zum Erzeugen eines solchen Erkennungsmodells und Verfahren zum Ansteuern einer Aktorik eines Fahrzeugs
DE102021203587A1 (de) Verfahren und Vorrichtung zum Trainieren eines Stilencoders eines neuronalen Netzwerks und Verfahren zum Erzeugen einer einen Fahrstil eines Fahrers abbildenden Fahrstilrepräsentation
DE102017201796A1 (de) Steuervorrichtung zum Ermitteln einer Eigenbewegung eines Kraftfahrzeugs sowie Kraftfahrzeug und Verfahren zum Bereitstellen der Steuervorrichtung
WO2019196986A1 (fr) 2019-10-17 Système de fusion pour la fusion d'informations d'environnement pour un véhicule automobile
DE102018216561A1 (de) Verfahren, Vorrichtung und Computerprogramm zum Ermitteln einer Strategie eines Agenten
DE102021202933A1 (de) Verfolgung mehrerer Objekte in Zusammenarbeit mehrerer neuronaler Netzwerke
DE102020212366A1 (de) Transformieren von Messdaten zwischen verschiedenen Konfigurationen von Messsystemen
WO2021245153A1 (fr) Entrainement régularisé de réseaux neuronaux
DE102020208765A1 (de) Bildklassifikator mit variablen rezeptiven Feldern in Faltungsschichten
DE102016208076A1 (de) Verfahren und vorrichtung zur auswertung eines eingabewerts in einem fahrerassistenzsystem, fahrerassistenzsystem und testsystem für ein fahrerassistenzsystem
DE102019217225A1 (de) Verfahren zum Trainieren eines maschinellen Lernsystems für eine Objekterkennungsvorrichtung
DE102019216927A1 (de) Synthetische Erzeugung von Radar-, LIDAR- und Ultraschallmessdaten
EP3895415A1 (fr) 2021-10-20 Transfert d'une information supplémentaire entre des systèmes de caméra
DE102019210167A1 (de) Robusteres Training für künstliche neuronale Netzwerke
DE102018216172A1 (de) Verfahren zum automatischen Erzeugen eines Labels zum Trainieren eines selbstlernenden Systems sowie Kraftfahrzeug
DE102018205241A1 (de) Fusion von Umfeldinformation eines Kraftfahrzeugs
DE102022212901A1 (de) Automatisches Ermitteln einer optimalen Architektur für ein neuronales Netzwerk
DE102020207887A1 (de) Konvertierung von Messdaten zwischen Messmodalitäten
DE102021202934A1 (de) Verfolgung mehrerer Objekte mit neuronalen Netzwerken, lokalen Speichern und einem gemeinsamen Speicher

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21730881; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 21730881; Country of ref document: EP; Kind code of ref document: A1)