WO2020193126A1 - Method and device for determining a trained decision network - Google Patents


Info

Publication number
WO2020193126A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
network
signal vector
sensor signal
vector
Prior art date
Application number
PCT/EP2020/056416
Other languages
German (de)
English (en)
Inventor
Florian Maile
Johannes Paas
Michael Grunwald
Original Assignee
Zf Friedrichshafen Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zf Friedrichshafen Ag filed Critical Zf Friedrichshafen Ag
Publication of WO2020193126A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9323Alternative operation using light waves

Definitions

  • The present invention relates to a method and a device for determining a trained decision network according to the main claims.
  • For evaluating sensor signals, components are often used that contain evaluation algorithms specially designed for the respective sensor type. In this case, however, several or even a large number of separate evaluation algorithms must be kept available for evaluating sensor signals from sensors of different sensor technologies, which can drastically increase the numerical and/or circuit complexity of the signal processing of the sensor signals.
  • The present invention creates an improved method and an improved device for determining a trained decision network, as well as an improved method and an improved device for recognizing at least one object from data of at least a first sensor signal vector and/or a second sensor signal vector, according to the main claims.
  • Advantageous refinements result from the subclaims and the following description.
  • The approach presented here creates a method for determining a trained decision network, the method having the following steps:
  • Reading in a first sensor signal vector and a second sensor signal vector, the first sensor signal vector having at least one measured value of a first sensor of a first sensor technology and the second sensor signal vector having at least one measured value of a second sensor of a second sensor technology different from the first sensor technology, wherein the first sensor signal vector represents an object from the viewing angle of the first sensor and the second sensor signal vector represents the object from the viewing angle of the second sensor;
  • A sensor signal vector can be understood to be a one-dimensional or multi-dimensional sensor signal.
  • Sensor technology can be understood as the physical measuring principle by means of which the sensor records measured values of physical quantities.
  • For example, the sensor technology of a radar sensor can be based on the measuring principle of evaluating a radar beam reflected on an object in order to obtain a three-dimensional position and/or movement of the object in relation to the radar sensor.
  • The sensor technology of a lidar sensor can, for example, be based on the measuring principle of evaluating a laser beam reflected on an object in order to detect a three-dimensional position and/or movement of the object in relation to the lidar sensor.
  • The position detected three-dimensionally with the lidar sensor can be mapped in a Cartesian coordinate system, whereas the position detected three-dimensionally with the radar sensor is mapped in spherical coordinates.
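To relate the two representations just mentioned, the spherically coded radar position can be converted into the Cartesian frame the lidar sensor uses. The following sketch shows one common convention; the function name and the angle convention are illustrative assumptions, not taken from the patent.

```python
import math

def spherical_to_cartesian(r, azimuth, elevation):
    # Assumed convention: azimuth measured in the x-y plane from the
    # x axis, elevation measured upward from the x-y plane (radians).
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A target straight ahead at 10 m lies on the x axis.
x, y, z = spherical_to_cartesian(10.0, 0.0, 0.0)
```

With matched conventions, measurements from both sensor technologies can then be compared in a single coordinate frame.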
  • Both the first sensor and the second sensor should each output a sensor signal vector that images the same object, but from the respective viewing angle of the relevant sensor.
  • Alternatively, a sensor can be designed as an image sensor, for example a camera, or as an ultrasonic sensor, and supply corresponding data as sensor signal vectors.
  • A network such as the first network and/or the second network can be understood, for example, as a trainable descriptor that enables a reduction in the complexity and redundancy of the sensor signal vector, but still allows an extraction of features that permit conclusions about the presence of the object.
  • A network can be a feature-learning network, for example.
  • A network can be implemented as a neural network, an artificial neural network or, in particular, an auto-encoder.
  • Adaptation can be understood to mean the change of at least one parameter, such as the weight of a node in the network, for example.
  • One of the sensor signal vectors can be used as an input signal or input vector, whereas the other sensor signal vector is used as a reference vector, so that the network or the parameters of the network are adapted in such a way that the first sensor signal vector is mapped onto the second sensor signal vector and vice versa.
  • GAN: Generative Adversarial Network
  • Such a decision network can also be implemented as a neural network or as an alternative structure from the field of artificial intelligence. It is also conceivable that the decision network is at least partially identical or similar in structure or parameters to a structure or parameters of the aforementioned networks, or contains one network or a combination of the aforementioned first and second networks.
  • The approach proposed here is based on the insight that, by training the parameters of the decision network, a significantly more compact and simpler form of evaluating data from sensors of different sensor technologies becomes possible.
  • A very general model can be determined as a decision network in order to be able to recognize one or more objects from the sensor signal vectors in a subsequent application of this decision network.
  • A large number of different detection algorithms for evaluating the data from sensors with different sensor technologies can advantageously be dispensed with. This not only makes it possible to reduce the numerical or circuit complexity of object detection, but also to make more efficient use of the available memory.
  • The parameters of the decision network can be adapted using disturbance variables that are added to state values of the first network and/or state values of the second network.
  • Such an embodiment offers the advantage of enabling a particularly fast and reliable determination or corresponding training of the parameters of the decision network, so that when the decision network is used later, objects that are mapped by features in the sensor signal vectors can be identified quickly and reliably.
  • An embodiment of the approach presented here is particularly favorable in which, in the training step, the parameters of the decision network are trained by a GAN structure using the first and second networks.
  • Such a generative adversarial network can be understood as an algorithm for unsupervised learning.
  • Such a network includes, for example, two artificial neural networks that play a zero-sum game, with the first network as a generator creating candidates and the second network as a discriminator evaluating those candidates. The aim of the generator network is to learn to generate results according to a certain distribution.
  • The discriminator is trained to distinguish the results of the generator from the data of the real, given distribution.
  • The goal of the generator network is to produce results that the discriminator cannot distinguish from real data.
  • In this way, the generated distribution should gradually approach the real distribution.
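The zero-sum dynamic described above can be illustrated with a deliberately minimal toy: a one-parameter generator and a threshold discriminator. This is only a sketch of the alternating objective, not the GAN structure used in the patent; all names and values are illustrative.

```python
real_mean = 4.0   # mean of the "real" data distribution
mu = 0.0          # generator parameter: the generator emits values around mu
step = 0.1

for _ in range(200):
    # Discriminator step: with one real and one generated mean, the
    # best separating threshold lies halfway between them.
    threshold = (real_mean + mu) / 2.0
    # Generator step: move mu toward the side the discriminator labels
    # "real", i.e. across the threshold toward the real distribution.
    if mu < threshold:
        mu += step
    elif mu > threshold:
        mu -= step

# mu has drifted toward real_mean, so the discriminator can no longer
# separate the two distributions by their means.
```

In a real GAN both players are neural networks updated by gradient descent, but the alternating pattern of the loop is the same.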
  • In the training step, the parameters of the decision network can be adapted in such a way that the first network is used to determine input values of the decision network and the second network is used to provide reference variables of the decision network.
  • Alternatively, the parameters of the decision network can be adapted in the adapting step such that the second network is used to determine input values of the decision network and the first network is used to provide reference variables of the decision network.
  • An embodiment of the approach proposed here is particularly favorable in which, in the training step, the parameters of the decision network are initially trained using the first network to determine input values of the decision network and the second network to provide reference variables of the decision network, and subsequently using the second network to determine input values of the decision network and the first network to provide reference variables of the decision network.
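The alternating role assignment of this embodiment can be sketched as a training pass that first uses the first network for inputs and the second for references, then swaps them. The helper names here are hypothetical stand-ins.

```python
def train_epoch(params, net_a, net_b, update):
    # First pairing: net_a supplies inputs, net_b supplies references.
    params = update(params, inputs=net_a, references=net_b)
    # Second pairing: the roles are swapped.
    params = update(params, inputs=net_b, references=net_a)
    return params

# Toy 'update' that only records which pairing was used, to make the
# role swap visible; a real update would adjust network parameters.
record = lambda p, inputs, references: p + [(inputs, references)]
history = train_epoch([], "first_network", "second_network", record)
```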
  • According to a further embodiment, the first and second networks can be trained simultaneously and/or in parallel in the adapting step.
  • An embodiment of the approach presented here is also favorable in which, in the adapting step, the first and / or second network to be adapted is designed as a neural network, in particular as an artificial neural network and / or as an auto-encoder.
  • Such an embodiment offers the advantage of being able to use sophisticated algorithms or structures for the first and / or second network which can deliver input values or reference values with high precision, with which the decision network is subsequently trained.
  • The approach presented here can be used particularly advantageously in the field of evaluating signals from vehicle sensors, in order to also achieve a reduction in the data rates that are required, for example, for mobile applications such as the localization of a vehicle using a reference map stored in a central computer.
  • Measured values from the first sensor of a vehicle can be read in as the first sensor signal vector and measured values from the second sensor of a vehicle can be read in as the second sensor signal vector, in particular the first sensor being designed as a radar sensor and the second sensor as a lidar sensor, or the first sensor being designed as a lidar sensor and the second sensor as a radar sensor.
  • One embodiment of the approach presented here is particularly simple and inexpensive to implement, in which, in the reading-in step, a distance, an azimuth angle, an elevation angle, a Doppler frequency shift value between a transmitted radar beam and a radar beam reflected by the object, or a value representing a radar cross section is read in as a measured value from a sensor designed as a radar sensor.
  • An intensity value of a laser beam reflected from the object, an SNR value, and an x value, a y value and a z value of a laser beam reflected from the object can be used as measured values from a sensor designed as a lidar sensor, the x, y and z values representing the position in a sensor coordinate system.
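The measured values listed above can be grouped into the two sensor signal vectors. The field names below are purely illustrative stand-ins, not taken from the patent:

```python
# First sensor signal vector ST1: radar measured values.
radar_vector = {
    "distance_m": 42.7,         # distance to the object
    "azimuth_rad": 0.12,        # azimuth angle
    "elevation_rad": -0.03,     # elevation angle
    "doppler_shift_hz": 310.0,  # shift between transmitted and reflected beam
    "rcs_dbsm": 4.5,            # value representing the radar cross section
}

# Second sensor signal vector ST2: lidar measured values.
lidar_vector = {
    "intensity": 0.83,  # intensity of the reflected laser beam
    "snr_db": 17.2,     # signal-to-noise ratio
    "x_m": 42.3,        # position in the sensor coordinate system
    "y_m": 5.1,
    "z_m": -1.2,
}
```

Both vectors describe the same object, but in the native quantities of the respective sensor technology.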
  • The advantages of the approach presented here can also be realized as a method for recognizing at least one object from data of at least one first sensor signal vector and/or one second sensor signal vector.
  • The method comprises the following steps:
  • Reading in the first sensor signal vector and the second sensor signal vector, wherein the first sensor signal vector has at least one measured value from a first sensor of a first sensor technology and the second sensor signal vector has at least one measured value from a second sensor of a second sensor technology different from the first sensor technology, the first sensor signal vector representing an object from the perspective of the first sensor and the second sensor signal vector representing the object from the perspective of the second sensor, wherein in the reading-in step a decision network trained according to a variant presented here is also read in; and
  • The previously trained decision network can be used very efficiently, so that, for example, the aforementioned reduction in the data rate when transmitting features or recognized objects can be implemented very easily.
  • An embodiment of the approach presented here is particularly efficient, in which a step of marking the detected object is provided in a map representing the surroundings of the first and / or second sensor.
  • The detected objects can be integrated into the map, for example for later evaluation or transmission, so that, for example, other vehicles can adapt their operation or function to the detected objects in the vicinity of the vehicle.
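Marking a detected object in such an environment map can be sketched as a simple map entry; the data layout is a hypothetical illustration, not the patent's format.

```python
# Hypothetical environment map representing the surroundings of the sensors.
environment_map = {"objects": []}

def mark_object(env_map, object_id, position):
    # Enter the detected object with its position so that later
    # evaluation, transmission, or other vehicles can react to it.
    env_map["objects"].append({"id": object_id, "position": position})

mark_object(environment_map, "object_110", (42.3, 5.1, -1.2))
```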
  • The first and/or second sensor signal vector, which was provided by a corresponding first or second sensor of a vehicle, can be read in in the reading-in step.
  • The approach presented here can also be implemented as a device that is set up to carry out, in corresponding units, the steps of a variant of the method for determining a trained decision network and/or the steps of a variant of the method for recognizing at least one object from data of at least one first sensor signal vector and/or one second sensor signal vector, and/or to control these units.
  • A device can be an electrical device that processes electrical signals, for example sensor signals, and outputs control signals as a function thereof.
  • The device can have one or more suitable interfaces, which can be designed in terms of hardware and/or software.
  • The interfaces can, for example, be part of an integrated circuit in which functions of the device are implemented.
  • The interfaces can also be separate integrated circuits or at least partially consist of discrete components.
  • The interfaces can also be software modules that are present, for example, on a microcontroller alongside other software modules.
  • Also of advantage is a computer program product with program code that can be stored on a machine-readable carrier such as a semiconductor memory, a hard disk or an optical memory, and that is used to carry out the method according to one of the embodiments described above when the program is executed on a computer or a device.
  • FIG. 1 shows a schematic representation of a vehicle with a device according to an exemplary embodiment for recognizing at least one object;
  • FIG. 2 shows a block diagram of an exemplary AutoEncoder as an exemplary embodiment of part of a network architecture that can be used for the first and/or second network;
  • FIG. 3 shows a schematic representation of the procedure according to an exemplary embodiment for a first step of the method or approach proposed here;
  • FIG. 4 shows a schematic representation of a procedure according to an exemplary embodiment for a second step of the method or approach proposed here;
  • FIG. 5 shows a schematic representation of a structure or mode of operation of the linking network;
  • FIG. 6 shows a flow chart of an exemplary embodiment of a method for determining a trained decision network; and
  • FIG. 7 shows a flow chart of an exemplary embodiment of a method for recognizing at least one object.
  • FIG. 1 shows a schematic representation of a vehicle 100 with a device 105 for recognizing at least one object 110 from data of at least one first sensor signal vector ST1 and/or one second sensor signal vector ST2.
  • The object 110 can be, for example, a further vehicle which is to be recognized by the first sensor 115 and/or the second sensor 120 and to be taken into account, for example, in an autonomous driving operation of the vehicle 100.
  • The first sensor signal vector ST1 is provided by a first sensor 115, which in the present case is designed, for example, as a radar sensor.
  • The second sensor signal vector ST2 is provided by a second sensor 120, which in the present case is designed, for example, as a LIDAR sensor.
  • The first sensor 115 and the second sensor 120 are thus each designed in different sensor technologies, so that, for example, a position and speed of the object 110 in relation to the first sensor 115 and/or the second sensor 120 is detected by different physical measurement methods. In the event of a disturbance in the measurement of parameters for the object 110 by one sensor, such a disturbance can then possibly be reduced or compensated for by measuring the parameters for the object 110 with the second sensor.
  • For example, a measurement with the second sensor 120 embodied as a LIDAR sensor could be subject to interference due to the occurrence of fog, whereas a measurement with the first sensor 115 embodied as a radar sensor is not subject to such interference due to the different measurement method.
  • The first sensor signal vector ST1 is fed to a first network 125, in which the complexity of the first sensor signal vector ST1 is reduced, in order to be able to draw conclusions about features or parameters of the object 110 in relation to the first sensor 115 (for example a distance or position of the object 110 in relation to the first sensor 115) from a first output signal 127 of the first network 125 corresponding to the state values of a first descriptor 126.
  • Likewise, the second sensor signal vector ST2 is fed to a second network 130, in which a reduction in the complexity of the second sensor signal vector ST2 is carried out, in order to be able to draw conclusions about features or parameters of the object 110 in relation to the second sensor 120 (for example a distance or position of the object 110 in relation to the second sensor 120) from a second output signal 132 of the second network 130 corresponding to the state values of a second descriptor 131.
  • The first network 125 and the second network 130 can be trained in such a way that it is no longer possible to draw conclusions about the sensor technology of the first sensor 115 from the first output signal 127, and/or no conclusions about the sensor technology of the second sensor 120 can be drawn from the second output signal 132.
  • In this way, a general representation of the object 110, or of the parameters of the object 110 in relation to the first sensor 115 or second sensor 120, can be achieved, which can be used for detection in a subsequent unit 135, so that the object 110 can be detected from the first sensor signal vector ST1 and/or the second sensor signal vector ST2.
  • The first network 125 and the second network 130 can be part of the trained decision network 140.
  • A feature extractor can be implemented in a first sub-unit 145, which extracts, for example from the first output signal 127 and/or the second output signal 132, features of the object 110 in relation to the first sensor 115 and/or the second sensor 120, such as a distance, a position, a size or a movement of this object 110 in relation to the corresponding sensor.
  • In a map management unit 150, as a further sub-unit of the detection unit 135, the features extracted by the feature extractor 145 can then be read and, for example, stored in a corresponding digital map 155, so that, for example, information about the position of the object 110 is available for further operation of the vehicle 100. It is also conceivable for the detection unit 135 to output a corresponding feature signal 160, which, for example, forwards a feature extracted by the feature extractor 145 directly to other or further components of the vehicle 100.
  • Advantageously, a trained decision network 140 is used in which a sensor-technology-independent descriptor is used as the first descriptor 126 and second descriptor 131, for example in order to carry out map matching using at least one feature recognizer and to store the corresponding information in the map 155.
  • The features pointing to the object 110 can also be extracted from just one of the sensor signal vectors ST1 or ST2 of the respective sensors 115 or 120. On the one hand, this avoids having to maintain a sensor-specific network or a corresponding descriptor for each sensor; on the other hand, information is obtained that would otherwise be lost in the event of a fault in one of the sensors 115 or 120.
  • Here, a feature extraction network (i.e., the first half of the feature-learning network 125 or 130) is used for each sensor technology in order to extract the features required for feature recognition from the radar/LIDAR/etc. input signal used as sensor signal vector ST1 or ST2.
  • The advantage of the approach/method proposed here compared to an individual feature extraction/feature representation is a significantly more compact map or algorithm, since only one feature representation or the corresponding descriptor needs to be stored that matches both sensor signal vectors.
  • Another application is the association of features between sensor signal vectors for fusion applications. In previous applications, by contrast, an individual localization layer with its features is stored in a map for each sensor technology.
  • The approach proposed here, or the method presented here, reduces the amount of data by using a generic feature descriptor that is sensor-technology-independent and can be extracted from sensor data such as LIDAR/radar/etc. sensor data.
  • A training of the corresponding or relevant components can be carried out, for example, in a laboratory environment in a separate device outside the vehicle 100 from FIG. 1, but using, for example, exemplary signal values from two sensors whose sensor technologies differ and which were recorded, for example, during a trip of a vehicle.
  • In the following, the functions of a feature-learning network that can be used as the first network 125 or the second network 130 from FIG. 1 will first be discussed in general.
  • FIG. 2 shows a block diagram of an exemplary AutoEncoder 200 as an exemplary embodiment of part of a network architecture that can be used for the first and/or second network, with an encoder 210, a bottleneck 215 and a decoder 220.
  • The encoder 210 reduces the dimension of the input space, i.e., the signal space of the input variable x, through the use of convolution layers such as those used in neural networks.
  • The “bottleneck” 215 represents the greatest reduction in the dimension of the input x.
  • This representation of the input variable x in a lower dimension is often called a “feature vector” or “descriptor”.
  • In a training step of the network, the network, here the AutoEncoder 200, learns how the current input signal x can be represented or reconstructed using only a low-dimensional feature vector/descriptor.
  • The next part of the network architecture is the “decoder” 220, which increases the dimension of the space step by step in order to obtain a reconstructed input signal x' which advantageously has the same dimension as the input signal x.
  • Ideally, the result x' of the decoder is identical to the input x, as shown in FIG. 2.
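The dimensional flow through encoder, bottleneck and decoder can be illustrated with a trivial fixed mapping. This sketch only shows the shapes involved; a real AutoEncoder learns both mappings, and the halving scheme here is an arbitrary assumption.

```python
def encoder(x):
    # Halve the dimension by averaging neighbouring pairs
    # (a fixed stand-in for the learned convolution layers of encoder 210).
    return [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]

def decoder(code):
    # Expand step by step back to the original dimension
    # (a fixed stand-in for decoder 220).
    out = []
    for c in code:
        out += [c, c]
    return out

x = [1.0, 1.0, 4.0, 4.0, 9.0, 9.0]  # input signal x
code = encoder(x)                    # low-dimensional feature vector (bottleneck 215)
x_rec = decoder(code)                # reconstructed input signal x'
```

For this particular input the reconstruction is exact; training drives a real AutoEncoder toward the same property for arbitrary inputs.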
  • For the approach presented here, the first network 125 and the second network 130 are used, which correspond to the components of the encoder 210 and the bottleneck 215, in which the complexity-reduced variant of the input signal x is contained.
  • The content of the bottleneck 215 can correspond to the first output signal 127 or the second output signal 132 if the first network 125 and/or the second network 130 correspond to a configuration of the network 230 shown in FIG. 2.
  • FIG. 3 shows a schematic representation of a procedure according to an exemplary embodiment for a first step of the method or approach proposed here, in which at least two feature-learning networks such as the first network 125 and the second network 130 from FIG. 1 are trained or adapted.
  • The sensor signal vector ST1 is fed to the first feature-learning network 125 as an input variable, the first network 125 then being trained against the data/measured values of the second sensor signal vector ST2; the second sensor signal vector ST2 is thus used as the reference vector and the first sensor signal vector ST1 as the input vector for the first network 125.
  • Correspondingly, the sensor signal vector ST2 is fed to the second feature-learning network 130 as an input variable, with the second network 130 then being trained against the data/measured values of the first sensor signal vector ST1.
  • The aim of such network training is to ensure that the descriptors/feature vectors 300a and 300b are able to completely represent/reconstruct the corresponding information of the other sensor signal vector in its original dimension and resolution.
  • The first sensor signal vector ST1 is thus intended to be reduced to a first descriptor 300a by the first network 125, so that the second sensor signal vector ST2 can be generated from this first descriptor 300a.
  • Conversely, the second sensor signal vector ST2 is to be reduced to a second descriptor 300b by the second network 130, so that the first sensor signal vector ST1 can be generated from this second descriptor 300b.
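The cross-training target just described, reducing one sensor signal vector to a descriptor from which the other sensor's vector can be generated, amounts to scoring the reconstruction against the other sensor's vector. A minimal sketch with hypothetical helper names:

```python
def mse(a, b):
    # Mean squared error between two equally long vectors.
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def cross_reconstruction_loss(st1, st2, encode, decode):
    # Encode ST1 to a descriptor, decode it, and compare the result
    # with the OTHER sensor's vector ST2, which acts as the reference.
    return mse(decode(encode(st1)), st2)

# Identity stand-ins just to show the data flow; training would adjust
# encode/decode so that this loss shrinks.
identity = lambda v: list(v)
loss = cross_reconstruction_loss([1.0, 2.0], [1.0, 2.5], identity, identity)
```

Swapping the arguments gives the mirror objective for the second network, with ST1 as the reference.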
  • Current high-performance radar sensors recognize around 6,000 targets in each measurement cycle.
  • I: intensity of the reflected laser beam
  • SNR: signal-to-noise ratio
  • FIG. 4 shows a schematic representation of a procedure according to an exemplary embodiment for a second step of the method or approach proposed here, in which several descriptors such as the descriptors 300a and 300b are generalized to a common representation as descriptor 400, which then contains input values of an object 110 from any sensor, such as the sensors 115 and 120, and describes them in the same way for the relevant sensor signal vectors ST1 and ST2.
  • This generalized descriptor 400 can then be used as the first descriptor 126 or second descriptor 131 in the trained decision network 140 from FIG.
  • a linking network 410 is used in order to distinguish, from the output values of the two feature-learning networks 125 and 130, features that belong to the same object in the input values, i.e., the sensor signal vectors ST1 and ST2. These features are then stored in the generalized descriptor 400 or linked accordingly in the networks 125 and 130 in order to obtain a corresponding reduction in complexity from the sensor signal vectors ST1 and ST2 to the generalized descriptor 400, which can then be used for the evaluation of both the sensor signal vector ST1 and the sensor signal vector ST2.
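  • one minimal way to picture such a generalization step is the following sketch. It is an illustrative assumption, not the patent's linking network: the second descriptors are modeled as an unknown linear re-encoding of the first, a single least-squares map aligns them, and the generalized descriptor is taken as the mean of the two aligned views.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative descriptors of the same objects from the two feature-learning
# networks; desc_b is an unknown linear re-encoding of desc_a, standing in
# for "same features, different sensor view".
n_objects, d = 50, 4
desc_a = rng.normal(size=(n_objects, d))       # first descriptors (300a)
mix = rng.normal(size=(d, d))
desc_b = desc_a @ mix                          # second descriptors (300b)

# A toy "linking network": one linear map, fitted by least squares, that
# pulls the second descriptors into the representation of the first.
align, *_ = np.linalg.lstsq(desc_b, desc_a, rcond=None)
desc_b_aligned = desc_b @ align

# Generalized descriptor (here standing in for 400): one common
# representation per object, simply the mean of the two aligned views.
desc_400 = 0.5 * (desc_a + desc_b_aligned)

residual = float(np.mean((desc_b_aligned - desc_a) ** 2))
```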
  • the first descriptor 300a can then be used as a training set and the second descriptor 300b as a reference set towards which the training set is to be optimized.
  • a loss function 500 simulating disturbances is added to the reference set.
  • a discriminator 510 is trained in such a way that the objects of the corresponding feature-learning network which are mapped in the second descriptor 300b and which contain an error that is too great are recognized by the discriminator 510 and discarded.
  • the quality of the feature recognition can be increased by the discriminator network used here in order to maximize the similarity of the descriptors 300a and 300b and to obtain the generalized descriptor 400 from this.
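  • the rejection role of the discriminator 510 can be sketched with a deliberately simple stand-in, not the patent's discriminator network: a logistic-regression classifier trained on a one-dimensional per-object reconstruction error, where a few objects carry injected disturbances (standing in for the disturbance-simulating loss function 500). All data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-object reconstruction errors: clean objects have small error,
# disturbed objects (label 1) a much larger one.
n = 200
labels = (rng.uniform(size=n) < 0.2).astype(float)     # 1 = disturbed
errors = rng.normal(0.2, 0.05, size=n) + labels * 2.0

# Logistic-regression "discriminator" trained by gradient descent to flag
# objects whose mapped error is too great, so they can be discarded.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * errors + b)))        # P(disturbed)
    w -= lr * float(np.mean((p - labels) * errors))
    b -= lr * float(np.mean(p - labels))

rejected = (1.0 / (1.0 + np.exp(-(w * errors + b)))) > 0.5
accuracy = float(np.mean(rejected == labels.astype(bool)))
```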
  • the feature-learning networks 125 and 130 can be trained serially or in parallel in the linking network 410, i.e., first the first descriptor 300a is used as a training set and the second descriptor 300b as a reference set, and in parallel or subsequently the second descriptor 300b is used as a training set and the first descriptor 300a as a reference set.
  • the first feature-learning network 125, for example, is then trained again while the second feature-learning network 130 serves as a reference, and vice versa.
  • data from other sensors such as cameras or ultrasound can also be used.
  • Other methods of feature learning networks and interconnection networks can also be used.
  • FIG. 6 shows a flow diagram of an exemplary embodiment of a method 600 for determining a trained decision network.
  • the method includes a step 610 of reading in a first sensor signal vector and a second sensor signal vector, wherein the first sensor signal vector has at least one measured value from a first sensor of a first sensor technology and the second sensor signal vector has at least one measured value from a second sensor of a second sensor technology different from the first sensor technology, and wherein the first sensor signal vector represents an object from a perspective of the first sensor and the second sensor signal vector represents the object from a perspective of the second sensor.
  • the method 600 further comprises a step 620 of adapting a first network with the first sensor signal vector as the input vector and the second sensor signal vector as the reference vector and adapting a second network with the second sensor signal vector as the input vector and the first sensor signal vector as the reference vector.
  • the method 600 comprises a step 630 of training parameters of a decision network using the first and second networks in order to obtain the trained decision network.
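  • the three steps of method 600 can be pictured end to end in the following sketch. It is an illustrative assumption, not the patented implementation: each "network" is a single linear map fitted in closed form (SVD encoder, least-squares decoder and decision layer) purely for brevity, and all function names, dimensions and data are hypothetical.

```python
import numpy as np

def read_in(rng, n=100, d1=8, d2=6, k=3):
    """Step 610: toy stand-in for reading in ST1 and ST2 for the same objects."""
    latent = rng.normal(size=(n, k))
    return latent @ rng.normal(size=(k, d1)), latent @ rng.normal(size=(k, d2))

def adapt_network(inp, ref, k=3):
    """Step 620: fit one linear feature-learning network with the *other*
    sensor's vector as reference (closed form instead of iterative training)."""
    _, _, vt = np.linalg.svd(inp, full_matrices=False)
    enc = vt[:k].T                                   # encoder -> descriptor
    dec, *_ = np.linalg.lstsq(inp @ enc, ref, rcond=None)
    return enc, dec

def train_decision_network(desc, targets):
    """Step 630: fit a linear decision layer on the descriptors."""
    w, *_ = np.linalg.lstsq(desc, targets, rcond=None)
    return w

rng = np.random.default_rng(3)
ST1, ST2 = read_in(rng)                              # step 610
enc1, dec1 = adapt_network(ST1, ST2)                 # step 620, first network
enc2, dec2 = adapt_network(ST2, ST1)                 # step 620, second network
targets = (ST1[:, :1] > 0).astype(float)             # toy object label
w = train_decision_network(ST1 @ enc1, targets)      # step 630
```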
  • FIG. 7 shows a flowchart of an exemplary embodiment of a method 700 for recognizing at least one object from data of at least one first sensor signal vector and / or a second sensor signal vector.
  • the method 700 includes a step 710 of reading in the first sensor signal vector and the second sensor signal vector, wherein the first sensor signal vector has at least one measured value from a first sensor of a first sensor technology and the second sensor signal vector has at least one measured value from a second sensor of a second sensor technology different from the first sensor technology, and wherein the first sensor signal vector represents an object from a viewing angle of the first sensor and the second sensor signal vector represents the object from a viewing angle of the second sensor; in the step of reading, a decision network trained according to a variant described here is also read in.
  • the method 700 further includes a step 720 of detecting the object from the first and / or the second sensor signal vector as input vectors of the trained decision network. Furthermore, a step 730 of marking the detected object in a map representing the surroundings of the first and / or second sensor can be provided.
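  • the inference-side steps 710 to 730 can be sketched as follows. All pieces here are hypothetical stand-ins, not the trained networks of the patent: the encoder and decision vector are random placeholders, detection is a simple sign test on the descriptor, and the "map" is a small occupancy grid with toy coordinates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical trained pieces: an encoder producing the descriptor and a
# linear decision vector (random stand-ins, not a real trained network).
enc = rng.normal(size=(8, 3))
decision_w = rng.normal(size=3)

def detect(sensor_vec, enc, w):
    """Step 720: descriptor + linear decision -> object present yes/no."""
    return float(sensor_vec @ enc @ w) > 0.0

def mark_in_map(grid, x, y):
    """Step 730: mark the detected object in a map of the surroundings."""
    grid[y, x] = 1
    return grid

grid = np.zeros((10, 10), dtype=int)
ST1 = rng.normal(size=8)                     # step 710: read in sensor vector
if detect(ST1, enc, decision_w):             # step 720
    grid = mark_in_map(grid, 4, 7)           # step 730 (toy coordinates)
```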
  • if an exemplary embodiment comprises an "and/or" link between a first feature and a second feature, this can be read in such a way that the exemplary embodiment according to one embodiment has both the first feature and the second feature and, according to a further embodiment, has either only the first feature or only the second feature.

Abstract

The invention relates to a method (600) for determining a trained decision network (140), the method (600) comprising a step (610) of reading in a first sensor signal vector (ST1) and a second sensor signal vector (ST2). The first sensor signal vector (ST1) comprises at least one measured value from a first sensor (115) of a first sensor technology, and the second sensor signal vector (ST2) comprises at least one measured value from a second sensor (120) of a second sensor technology different from the first. The first sensor signal vector (ST1) represents an object (110) from a viewing angle of the first sensor (115), and the second sensor signal vector (ST2) represents the object (110) from a viewing angle of the second sensor (120). The method (600) also comprises a step (620) of adapting a first network (125) with the first sensor signal vector (ST1) as input vector and the second sensor signal vector (ST2) as reference vector, and adapting a second network (130) with the second sensor signal vector (ST2) as input vector and the first sensor signal vector (ST1) as reference vector. Finally, the method comprises a step (630) of training parameters of a decision network (140) using the first and the second network (125, 130) in order to obtain the trained decision network (140).
PCT/EP2020/056416 2019-03-22 2020-03-11 Procédé et dispositif permettant de déterminer un réseau de décision formé par apprentissage WO2020193126A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019203953.7 2019-03-22
DE102019203953.7A DE102019203953A1 (de) 2019-03-22 2019-03-22 Verfahren und Vorrichtung zur Ermittlung eines trainierten Entscheidungsnetzwerks

Publications (1)

Publication Number Publication Date
WO2020193126A1 true WO2020193126A1 (fr) 2020-10-01

Family

ID=69804886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/056416 WO2020193126A1 (fr) 2019-03-22 2020-03-11 Procédé et dispositif permettant de déterminer un réseau de décision formé par apprentissage

Country Status (2)

Country Link
DE (1) DE102019203953A1 (fr)
WO (1) WO2020193126A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5247584A (en) * 1991-01-10 1993-09-21 Bodenseewerk Geratetechnik Gmbh Signal processing unit for classifying objects on the basis of signals from sensors

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10410113B2 (en) * 2016-01-14 2019-09-10 Preferred Networks, Inc. Time series data adaptation and sensor fusion systems, methods, and apparatus

Also Published As

Publication number Publication date
DE102019203953A1 (de) 2020-09-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20710898

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20710898

Country of ref document: EP

Kind code of ref document: A1