CN113447921A - Method for identifying a vehicle environment - Google Patents

Method for identifying a vehicle environment

Info

Publication number
CN113447921A
CN113447921A CN202110312295.6A
Authority
CN
China
Prior art keywords
object hypothesis
hypothesis
vehicle
sensor data
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110312295.6A
Other languages
Chinese (zh)
Inventor
S·鲁特尔
A·海尔
T·古斯纳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of CN113447921A
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for identifying a vehicle environment, wherein the vehicle has a sensor system and an evaluation unit for evaluating sensor data of the sensor system. The method comprises the following steps: receiving the sensor data in the evaluation unit; inputting the sensor data into a fusion algorithm that determines, based on the sensor data, a probability distribution for features of an object in the environment of the vehicle and generates and outputs a first object hypothesis assigned to the object based on the probability distribution; inputting the sensor data into a machine learning algorithm that has been trained to generate and output a second object hypothesis assigned to the object based on the sensor data; deciding whether the first object hypothesis should be rejected; if the first object hypothesis should not be rejected: updating an environment model representing the vehicle environment using the first object hypothesis; if the first object hypothesis should be rejected: updating the environment model using the second object hypothesis.

Description

Method for identifying a vehicle environment
Technical Field
The invention relates to a method for identifying a vehicle environment. The invention also relates to an evaluation unit, a computer program and a computer-readable medium for performing the method, and a corresponding vehicle system.
Background
In order to identify the environment of a vehicle, the sensor data of different sensors of the vehicle can be combined by means of a suitable algorithm into a common representation of the environment, which is also referred to as sensor data fusion. The goal of sensor data fusion is to combine the respective sensor data such that the respective strengths of the sensors complement one another and their respective weaknesses are mitigated. A Kalman filter, for example, may be used for such sensor data fusion; its identification performance may, however, be limited in certain situations.
Disclosure of Invention
Against this background, methods, evaluation units, computer programs and computer-readable media according to the independent claims are presented with the solution presented here. Advantageous developments and improvements of the solution proposed here emerge from the description and are described in the dependent claims.
Advantages of the invention
By combining a probabilistic model-based fusion algorithm with a machine learning algorithm for pattern recognition, embodiments of the present invention advantageously make it possible to improve the robustness of sensor data fusion. This can prevent, for example, false positive or false negative identification results.
Another advantage is that the machine learning algorithm can be used, for example, only when the recognition performance of the fusion algorithm reaches its limit, i.e., when the fusion algorithm provides uncertain, ambiguous, or contradictory results. In this case, the machine learning algorithm need only be trained using a relatively small training data set.
A first aspect of the invention relates to a computer-implemented method for identifying a vehicle environment, wherein the vehicle has a sensor system with at least two sensor units for detecting the vehicle environment, and an evaluation unit for evaluating sensor data of the sensor system. The method comprises the following steps, which can be carried out in particular in the order indicated: receiving in the evaluation unit sensor data generated by the at least two sensor units; inputting the sensor data into a fusion algorithm configured to determine, based on the sensor data, a probability distribution for features of an object in the vehicle environment, and to generate and output a first object hypothesis assigned to the object based on the probability distribution; inputting the sensor data into a machine learning algorithm that has been trained to generate and output a second object hypothesis assigned to the object based on the sensor data; deciding whether the first object hypothesis should be rejected; if the first object hypothesis should not be rejected: updating an environment model representing the vehicle environment using the first object hypothesis; if the first object hypothesis should be rejected: updating the environment model using the second object hypothesis.
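Purely as a non-authoritative illustration (not part of the original disclosure), the sequence of steps could be sketched in Python as follows; the data structures, the scalar rejection criterion and all function names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class ObjectHypothesis:
    position: tuple      # e.g. (x, y) in metres
    velocity: tuple      # e.g. (vx, vy) in m/s
    confidence: float    # scalar trustworthiness in [0, 1]

def identify_environment(sensor_data, fusion_algorithm, ml_algorithm,
                         environment_model, reject_threshold=0.5):
    """Sketch of the claimed sequence: fuse, classify, decide, update."""
    first_hypotheses = fusion_algorithm(sensor_data)   # first object hypotheses
    second_hypotheses = ml_algorithm(sensor_data)      # second object hypotheses
    # Simplification: assume a one-to-one correspondence between both lists.
    for first, second in zip(first_hypotheses, second_hypotheses):
        if first.confidence < reject_threshold:
            # First object hypothesis rejected: update the environment model
            # with the hypothesis from the machine learning algorithm instead.
            environment_model.update(second)
        else:
            environment_model.update(first)
    return environment_model
```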
A vehicle may generally be understood as a machine that is moved partially or fully automatically. For example, the vehicle may be a passenger car, truck, bus, motorcycle, robot, or the like.
The vehicle may include a vehicle system designed to control the vehicle in a partially or fully automated manner. For this purpose, the vehicle system can actuate a corresponding actuator system of the vehicle, for example a steering or braking actuator or an engine control device.
The sensor unit may be, for example, a radar sensor, a lidar sensor, an ultrasonic sensor, or a camera. The sensor system may comprise sensor units of the same type (e.g. redundant) or of different types (e.g. complementary). For example, a combination of a radar sensor and a camera or a combination of a plurality of radar sensors having different detection directions is conceivable. For example, the sensor system may also comprise three or more sensor units or sensor types.
The sensor data may comprise features recognized by the respective sensor unit, such as the position, velocity, acceleration, extent or object class of objects in the vehicle environment. These features can be extracted from the raw data of the respective sensor unit.
The sensor data may be fused with each other to identify the environment of the vehicle. Such sensor data fusion can be understood as the following process: information from different sensor units or sensor types is used to detect and classify objects in the vehicle environment (also referred to as object discrimination) and to estimate the respective states of the objects, i.e. to predict the respective states of the objects with a certain probability (also referred to as trajectory estimation).
An object hypothesis may be understood as a model of a real object located in the vehicle environment, such as an observed vehicle, a road sign, a pedestrian, etc. The object hypotheses may be stored in an environmental model representing the vehicle environment. By comparing the predicted state of the object with the current measurement, the environmental model may be continuously updated based on the sensor data.
The fusion algorithm may be, for example, a Bayes filter, a Kalman filter or a particle filter, or a combination of at least two of the mentioned examples. For example, the fusion algorithm may be configured to generate a plurality of assignment matrices, each of which describes and weights possible assignments between features (more precisely, feature hypotheses) and object hypotheses, and to select from these assignment matrices an assignment matrix suitable for updating the environment model (also referred to as a multiple-hypothesis scheme).
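As one possible, hedged illustration of such a probabilistic fusion step, a linear Kalman measurement update for an object state might look like the following; the state layout, the noise matrices and the function name are assumptions and are not prescribed by the text:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: fuse a sensor measurement z into the
    state estimate x with covariance P (the probability distribution over
    the object features)."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: 4D state [px, py, vx, vy]; a radar sensor measures position only.
x = np.array([10.0, 2.0, 5.0, 0.0])
P = np.eye(4)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
R = 0.25 * np.eye(2)                 # assumed radar measurement noise
z = np.array([10.4, 1.9])            # position measured by the radar
x, P = kalman_update(x, P, z, H, R)
```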
The machine learning algorithm may include a classifier into which the sensor data may be input to assign features of the object to a particular class of objects.
The machine learning algorithm may be an artificial neural network, such as a multi-layer perceptron, a recurrent neural network, a long short-term memory network or a convolutional neural network. However, the machine learning algorithm may also be a Bayesian classifier, a support vector machine, a k-nearest-neighbor algorithm, a decision tree, a random forest or a combination of at least two of the mentioned examples.
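As a hedged sketch of such a classifier (here a random forest, one of the options mentioned above), scikit-learn could be used; the feature layout, the toy training data and the class labels are purely illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors extracted from the sensor data:
# [distance, radial_velocity, radar_cross_section, camera_box_height]
X_train = np.array([
    [25.0, -3.0, 10.0, 80.0],   # vehicle
    [40.0, -1.5, 12.0, 60.0],   # vehicle
    [15.0,  0.0,  0.5, 30.0],   # pedestrian
    [12.0,  0.2,  0.4, 35.0],   # pedestrian
])
y_train = ["vehicle", "vehicle", "pedestrian", "pedestrian"]

classifier = RandomForestClassifier(n_estimators=50, random_state=0)
classifier.fit(X_train, y_train)

# Assign the features of a newly observed object to an object class; the
# class probability can serve as confidence of the second object hypothesis.
features = np.array([[22.0, -2.5, 9.0, 75.0]])
predicted_class = classifier.predict(features)[0]
class_probability = classifier.predict_proba(features).max()
```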
For example, the machine learning algorithm may be activated only if the fusion algorithm provides uncertain, ambiguous, or contradictory results.
In other words, the fusion algorithm can be used for most of the situations to be identified, whereas the machine learning algorithm is used only in special cases, for example in situations that the fusion algorithm cannot identify unambiguously. This has the advantage that the machine learning algorithm can be trained with relatively little effort. In addition, the system can thus be expanded easily.
It is possible that the machine learning algorithm influences the fusion algorithm in a corresponding manner, for example by selecting or modifying object hypotheses generated by the fusion algorithm, or by adding new object hypotheses.
It is also possible to input the sensor data into at least two different machine learning algorithms. The at least two machine learning algorithms may in this case be trained using mutually different training data.
For example, whether the first object hypothesis should be rejected may be decided based on the trustworthiness of the first object hypothesis. The trustworthiness may be determined from statistical parameters such as a covariance, a confidence level or an (average) existence probability, or also from whether the sensor units provide results that are consistent with or divergent from one another. In principle, the trustworthiness can be understood as a measure of the correspondence between the respective first object hypothesis and the real object.
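A minimal, hedged sketch of such a rejection decision; the thresholds, the field names and the use of the covariance trace as uncertainty measure are assumptions:

```python
import numpy as np

def should_reject(hypothesis, max_cov_trace=4.0, min_existence_prob=0.6):
    """Decide whether a first object hypothesis should be rejected, using the
    statistical parameters named in the text (covariance, existence
    probability); the concrete thresholds are illustrative."""
    too_uncertain = np.trace(hypothesis["covariance"]) > max_cov_trace
    too_unlikely = hypothesis["existence_probability"] < min_existence_prob
    return too_uncertain or too_unlikely

hypothesis = {
    "covariance": np.diag([0.4, 0.4, 1.0, 1.0]),   # state uncertainty
    "existence_probability": 0.45,
}
reject = should_reject(hypothesis)   # True: existence probability too low
```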
For example, the first object hypothesis may be compared to the second object hypothesis.
If the first object hypothesis should be rejected, the environment model may be updated using the second object hypothesis, in particular with the first object hypothesis excluded. Conversely, if the first object hypothesis should not be rejected, the environment model may be updated using the first object hypothesis, with the second object hypothesis excluded.
A second aspect of the invention relates to an evaluation unit configured to perform the methods described above and below. Features of the method may also be features of the evaluation unit and vice versa.
A third aspect of the invention relates to a vehicle system configured to perform the methods described above and below. Features of the method may also be features of the vehicle system and vice versa.
Further aspects of the invention relate to a computer program which, when executed by a computer such as the above-mentioned evaluation unit, performs the methods described above and below, and to a computer-readable medium on which such a computer program is stored.
The computer-readable medium may be a volatile or non-volatile data memory. The computer-readable medium may be, for example, a hard disk, a USB memory device, a RAM, a ROM, an EPROM or a flash memory. The computer-readable medium may also be a data communication network enabling download of the program code, such as the Internet or a data cloud.
Features of the method as described above and below may also be features of the computer program and/or the computer readable medium and vice versa.
Ideas relating to embodiments of the invention can be regarded, among other things, as being based on the concepts and findings described below.
According to one embodiment, at least one further object hypothesis is generated based on the first object hypothesis and the second object hypothesis. Here, the at least one further object hypothesis is used for updating the environment model.
In other words, the respective outputs of the fusion algorithm and the machine learning algorithm may be fused with each other to provide update data suitable for updating the environmental model. The at least one other object hypothesis may be generated, for example, by other fusion algorithms and/or other machine learning algorithms.
Then, if the first object hypothesis should be rejected, the at least one other object hypothesis may be used, for example, for updating the environment model. In particular, the at least one further object hypothesis may be used for updating the environmental model excluding the first object hypothesis.
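One simple, hedged way to generate such a further object hypothesis is an inverse-covariance-weighted combination of the first and second object hypotheses; the patent does not prescribe this particular rule, and the state layout is assumed:

```python
import numpy as np

def fuse_hypotheses(mean1, cov1, mean2, cov2):
    """Combine a first and a second object hypothesis into one further
    object hypothesis by inverse-covariance weighting."""
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    fused_cov = np.linalg.inv(info1 + info2)
    fused_mean = fused_cov @ (info1 @ mean1 + info2 @ mean2)
    return fused_mean, fused_cov

# First hypothesis (fusion algorithm) and second hypothesis (machine learning
# algorithm) for the position of the same object:
m1, C1 = np.array([10.0, 2.0]), np.diag([1.0, 1.0])
m2, C2 = np.array([10.6, 1.8]), np.diag([0.5, 0.5])
m_fused, C_fused = fuse_hypotheses(m1, C1, m2, C2)   # lies between m1 and m2
```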
According to one embodiment, it is determined whether the first object hypothesis is erroneous by comparing the first object hypothesis with the second object hypothesis. If the first object hypothesis is erroneous, it is decided that the first object hypothesis should be rejected. Additionally or alternatively, if the first object hypothesis is not erroneous, it is decided that the first object hypothesis should not be rejected.
For example, it may be determined on the basis of the second object hypothesis whether the first object hypothesis represents a false positive or a false negative result.
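A hedged sketch of such a comparison, in which unmatched first object hypotheses are flagged as possible false positives and unmatched second object hypotheses as possible false negatives; the gating distance is an assumed parameter:

```python
import numpy as np

def classify_errors(first_positions, second_positions, gate=2.0):
    """Compare first and second object hypotheses by position: a first
    hypothesis without a nearby second hypothesis is a false-positive
    candidate; a second hypothesis without a nearby first hypothesis is a
    false-negative candidate."""
    first = np.asarray(first_positions, dtype=float)
    second = np.asarray(second_positions, dtype=float)
    false_positive = [
        (np.linalg.norm(second - f, axis=1).min() if len(second) else np.inf) > gate
        for f in first
    ]
    false_negative = [
        (np.linalg.norm(first - s, axis=1).min() if len(first) else np.inf) > gate
        for s in second
    ]
    return false_positive, false_negative

fp, fn = classify_errors([[10.0, 2.0], [50.0, 0.0]], [[10.3, 2.1]])
# fp == [False, True]: the hypothesis at (50, 0) has no counterpart from the
# machine learning algorithm and is therefore a false-positive candidate.
```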
According to an embodiment, the fusion algorithm is configured to weight the first object hypothesis. Here, the weight of at least one of the first object hypotheses is changed based on the second object hypothesis.
In other words, the second object hypothesis may be used to influence the fusion algorithm in a suitable way. The recognition accuracy of the fusion algorithm can thereby be improved.
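A minimal sketch of such a reweighting; the boost/damp factors, the gating distance and the dictionary layout of the hypotheses are chosen only for the example:

```python
import math

def reweight_first_hypotheses(first_hypotheses, second_hypotheses,
                              boost=1.5, damp=0.5, gate=2.0):
    """Change the weight of each first object hypothesis depending on whether
    the machine learning algorithm produced a matching second hypothesis."""
    for first in first_hypotheses:
        confirmed = any(
            math.dist(first["position"], second["position"]) < gate
            for second in second_hypotheses
        )
        first["weight"] *= boost if confirmed else damp
    return first_hypotheses

hypotheses_204 = [{"position": (10.0, 2.0), "weight": 0.8},
                  {"position": (50.0, 0.0), "weight": 0.7}]
hypotheses_206 = [{"position": (10.3, 2.1), "weight": 0.9}]
reweight_first_hypotheses(hypotheses_204, hypotheses_206)
# The hypothesis near (10, 2) is confirmed and boosted; the other is damped.
```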
According to one embodiment, the machine learning algorithm is trained using training data that includes, at least in large part, features of objects that are difficult for the fusion algorithm to recognize.
In other words, the machine learning algorithm may have been trained to identify only specific types of situations in which the vehicle may find itself. In particular, these may be situations in which fusing the sensor data by means of the fusion algorithm would provide uncertain, ambiguous or contradictory results. A possible example is a manhole cover from which steam is rising. In this case, the sensor data of a radar sensor, whose radar radiation is reflected by the manhole cover, may falsely suggest an object that poses a hazard to the vehicle (false positive identification). The presence of such an object may likewise be inferred from the sensor data of a lidar sensor, whose laser beam is reflected by the steam. The camera-based sensor data, in contrast, may correctly indicate that no object posing a potential hazard to the vehicle is present. These contradictory identifications may result in the fusion algorithm failing to identify the situation correctly.
According to one embodiment, the fusion algorithm comprises a Kalman filter. Additionally or alternatively, the machine learning algorithm comprises a multi-layer perceptron, for example a convolutional neural network or the like.
Drawings
Embodiments of the invention are described below with reference to the accompanying drawings, wherein neither the drawings nor the description should be construed as limiting the invention.
FIG. 1 shows a vehicle having a vehicle system according to an embodiment of the invention.
Fig. 2 shows a block diagram of the evaluation unit of fig. 1.
Fig. 3 shows a block diagram of an evaluation unit according to another embodiment of the invention.
Fig. 4 shows a flow chart of a method according to an embodiment of the invention.
The figures are merely schematic and are not drawn to scale. In these figures, the same reference numerals indicate features of the same or similar function.
Detailed Description
Fig. 1 shows a vehicle 100 with a vehicle system 102 having a sensor system with a first sensor unit 104 (here a radar sensor), a second sensor unit 106 (here a lidar sensor), and a third sensor unit 108 (here a camera) for detecting objects in the environment of the vehicle 100, and an evaluation unit 110 for evaluating respective sensor data 112 of the three sensor units 104, 106, 108. Illustratively, the sensor system in fig. 1 detects a vehicle 113 traveling in front.
Additionally, the vehicle system 102 may include an actuator system 114, such as a steering or braking actuator or an engine control device of the vehicle 100. The evaluation unit 110 can actuate the actuator system 114 in a suitable manner on the basis of the sensor data 112, for example in order to control the vehicle 100 fully automatically.
In order to recognize the vehicle 113 traveling ahead as an object, the sensor data 112 of the different sensor units 104, 106, 108 are suitably fused with one another in the evaluation unit 110. The identified objects are stored in an environment model and are continuously updated based on the sensor data 112, which is also referred to as tracking. In each time step, the future states of the objects identified in the environment model are estimated and compared with the current sensor data 112.
Fig. 2 shows the evaluation unit 110 from fig. 1 in a block diagram. The modules described below may be hardware modules and/or software modules, respectively.
The evaluation unit 110 comprises a fusion module 200 and a machine learning module 202, into each of which the sensor data 112 of the three sensor units 104, 106, 108 are input. The fusion module 200 executes a fusion algorithm, for example a Kalman filter, which determines, based on the sensor data 112, a probability distribution for features of an object in the environment of the vehicle 100 (for example of the vehicle 113 traveling ahead) and generates first object hypotheses 204 therefrom. One of the first object hypotheses 204 represents, for example, the vehicle 113 traveling ahead. Depending on the situation, the first object hypotheses 204 may be more or less trustworthy.
For example, the fusion algorithm may determine a plurality of alternative assignment variants in the form of assignment matrices, wherein in each assignment matrix exactly one object hypothesis is assigned to each feature (more precisely, to each feature hypothesis), and each of these assignments is weighted. An assignment matrix suitable for updating the environment model can then be selected from the set of assignment matrices according to the weights.
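A hedged, brute-force sketch of such a selection; real implementations prune the enumeration, and the likelihood matrix as well as the assumption of equally many features and object hypotheses are chosen only for the example:

```python
import numpy as np
from itertools import permutations

def select_assignment(likelihood):
    """likelihood[i, j]: how well feature hypothesis i fits object hypothesis j.
    Enumerate one-to-one assignment matrices, weight each by the product of
    its likelihoods and select the best-weighted one."""
    n = likelihood.shape[0]            # assumes a square likelihood matrix
    best_weight, best_matrix = -np.inf, None
    for perm in permutations(range(n)):
        matrix = np.zeros_like(likelihood)
        matrix[range(n), perm] = 1.0                  # one assignment per feature
        weight = np.prod(likelihood[range(n), perm])  # weight of this matrix
        if weight > best_weight:
            best_weight, best_matrix = weight, matrix
    return best_matrix, best_weight

# Two features, two object hypotheses: feature 0 fits hypothesis 1 best, etc.
L = np.array([[0.2, 0.7],
              [0.6, 0.1]])
matrix, weight = select_assignment(L)   # selects the anti-diagonal assignment
```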
The environment model may be implemented, for example, in a planner module 208, which plans the trajectory of the vehicle 100 based on digital map data and the current geographic location of the vehicle 100, taking the environment model into account.
These features can be extracted, for example, by the respective sensor unit 104, 106 or 108 in the course of preprocessing the sensor raw data and can therefore already be contained in the sensor data 112. Alternatively or additionally, the preprocessing of the sensor raw data may be performed by the fusion module 200 and/or the machine learning module 202.
The machine learning module 202 executes a machine learning algorithm, such as a convolutional neural network or a support vector machine. The machine learning algorithm has been trained to assign a specific feature or combination of features contained in the sensor data 112 to a specific object class, here for example to the object class "vehicle traveling ahead". Based on this assignment, the machine learning algorithm generates corresponding second object hypotheses 206, each representing an identified object in the environment of the vehicle 100. One of the second object hypotheses 206 represents, for example, the vehicle 113 traveling ahead.
The monitoring module 210 is configured to evaluate the trustworthiness of the result of the sensor data fusion performed in the fusion module 200. For example, the result may be rated as less trustworthy if it is uncertain, i.e. for example has a high covariance or a low confidence level, if it is ambiguous, i.e. if at least one of the sensor units 104, 106, 108 provides a result that differs from that of the other two sensor units, or if the (average) existence probability of an identified object is too low.
It is possible to additionally or alternatively evaluate the plausibility in the machine learning module 202. In this case, the machine learning module 202 may send corresponding assessment information to the monitoring module 210.
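A hedged sketch of such a plausibility check in the monitoring module, combining cross-sensor agreement with the existence probability; thresholds and data layout are assumptions:

```python
import numpy as np

def fusion_result_trustworthy(per_sensor_positions, existence_probability,
                              agreement_gate=1.5, min_existence=0.6):
    """Rate the fused result as untrustworthy if one sensor unit clearly
    deviates from the others or if the (average) existence probability of
    the identified object is too low."""
    positions = np.asarray(per_sensor_positions, dtype=float)
    deviations = np.linalg.norm(positions - positions.mean(axis=0), axis=1)
    ambiguous = deviations.max() > agreement_gate   # one sensor disagrees
    uncertain = existence_probability < min_existence
    return not (ambiguous or uncertain)

# Radar and lidar roughly agree; the camera reports a clearly different position:
ok = fusion_result_trustworthy([[10.0, 2.0], [10.2, 2.1], [14.0, 5.0]], 0.8)
# ok == False -> the first object hypothesis 204 would be rejected and the
# hypotheses 206 or 214 would be used instead.
```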
The first object hypotheses 204 and the second object hypotheses 206 may additionally be input into another fusion module 212, which executes another fusion algorithm, for example another Kalman filter. This other fusion algorithm is configured to generate one or more other object hypotheses 214 by fusing the first object hypotheses 204 with the second object hypotheses 206. The other object hypotheses 214 may have a higher trustworthiness than the object hypotheses 204, 206. One of the other object hypotheses 214 may, for example, represent the vehicle 113 traveling ahead.
If the monitoring module 210 determines that the trustworthiness is too low, the first object hypothesis 204 is rejected. In this case, the monitoring module 210 prevents forwarding of the first object hypothesis 204 to the planner module 208 and instead causes the other object hypotheses 214 to be input into the planner module 208. Alternatively, as shown in Fig. 3, the monitoring module 210 causes the second object hypothesis 206 to be input into the planner module 208 instead of the first object hypothesis 204.
Additionally, the monitoring module 210 may send information about its decisions to a safety module 216. The safety module 216 may influence the planner module 208 depending on the decision of the monitoring module 210, for example by correspondingly changing the trajectory of the vehicle 100 if the trustworthiness is too low, or by braking or stopping the vehicle 100.
For example, the monitoring module 210 may compare the first object hypotheses 204 and the second object hypotheses 206 with each other in order to determine whether a first object hypothesis 204 represents a false positive or a false negative identification. If a false positive or false negative identification is determined, the corresponding first object hypothesis 204 is rejected. Otherwise, the first object hypothesis 204 is forwarded to the planner module 208.
The machine learning module 202 may be configured to, for example, change the weights of the respective first object hypotheses 204 and thereby improve the confidence of the sensor data fusion.
Fig. 3 shows a possible variant of the evaluation unit 110 without the other fusion module 212. Here, the first object hypothesis 204 or the second object hypothesis 206 may be input into the planner module 208.
The decision as to whether the machine learning algorithm should be used may also be made by a further machine learning algorithm. For example, this further machine learning algorithm may be implemented in the monitoring module 210 or in the machine learning module 202. It may, for example, have been trained to identify, based on the sensor data 112, specific critical situations that the fusion algorithm cannot identify unambiguously. If such a critical situation is identified, the input of the planner module 208 may be switched automatically to the output of the machine learning module 202 or of the other fusion module 212.
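A hedged sketch of this switching logic; the gating classifier, its label set and the generic predict interface are assumptions made for the example:

```python
def select_planner_input(sensor_features, critical_situation_classifier,
                         first_hypotheses, second_hypotheses):
    """If a further (gating) model classifies the current situation as one
    that the fusion algorithm cannot resolve unambiguously, forward the
    machine-learning hypotheses to the planner module; otherwise keep the
    fusion result."""
    label = critical_situation_classifier.predict([sensor_features])[0]
    if label == "critical":
        return second_hypotheses   # output of the machine learning module
    return first_hypotheses        # default: output of the fusion module
```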
The machine learning algorithm may, for example, have been trained using training data that predominantly or exclusively contains such critical conditions.
It is possible that, in addition to the machine learning module 202, the evaluation unit 110 has at least one further machine learning module, which may, for example, recognize different situations than the machine learning module 202. The decision as to which of the respective outputs of the machine learning modules should be used to update the environment model may be made, for example, according to the respective trustworthiness of the outputs.
Fig. 4 shows a flow chart of a method 400, which may be performed by the evaluation unit 110 of fig. 1 to 3.
In a first step 410, the sensor data 112 generated by the three sensor units 104, 106, 108 are received in the evaluation unit 110.
In a second step 420, the sensor data 112 is input into a fusion algorithm of the fusion module 200, which generates and outputs the first object hypotheses 204 based on the sensor data 112. Further, the sensor data 112 is input into a machine learning algorithm of the machine learning module 202, which generates and outputs a second object hypothesis 206 based on the sensor data 112.
In a third step 430 it is decided whether the first object hypothesis 204 should be rejected.
If it should be rejected, the second object hypothesis 206 or other object hypothesis 214 is input into the planner module 208 in step 440. If not, the first object hypothesis 204 is input into the planner module 208 in step 450.
Finally it is noted that terms such as "having", "including", etc., do not exclude other elements or steps, and that terms such as "a" or "an" do not exclude a plurality. Reference signs in the claims shall not be construed as limiting.

Claims (10)

1. A method (400) for identifying an environment of a vehicle (100), wherein the vehicle (100) has a sensor system with at least two sensor units (104, 106, 108) for detecting the environment of the vehicle (100), and an evaluation unit (110) for evaluating sensor data (112) of the sensor system, wherein the method (400) comprises:
receiving (410), in the evaluation unit (110), sensor data (112) generated by the at least two sensor units (104, 106, 108);
inputting (420) the sensor data (112) into a fusion algorithm configured to determine a probability distribution about a feature of an object (113) in the environment of the vehicle (100) based on the sensor data (112), and to generate and output a first object hypothesis (204) assigned to the object (113) based on the probability distribution;
inputting (420) the sensor data (112) into a machine learning algorithm that has been trained to generate and output a second object hypothesis (206) assigned to the object (113) based on the sensor data (112);
deciding (430) whether the first object hypothesis (204) should be rejected;
if the first object hypothesis (204) should not be rejected: updating (440) an environment model representative of an environment of the vehicle (100) using the first object hypothesis (204);
if the first object hypothesis (204) should be rejected: updating (450) the environment model using the second object hypothesis (206).
2. The method (400) of claim 1,
wherein at least one other object hypothesis (214) is generated based on the first object hypothesis (204) and the second object hypothesis (206);
wherein the at least one other object hypothesis (214) is used to update the environmental model.
3. The method (400) of any of the preceding claims,
wherein it is determined whether the first object hypothesis (204) is erroneous by comparing the first object hypothesis (204) with the second object hypothesis (206);
if the first object hypothesis (204) is erroneous: deciding that the first object hypothesis (204) should be rejected; and/or
if the first object hypothesis (204) is not erroneous: deciding that the first object hypothesis (204) should not be rejected.
4. The method (400) of any of the preceding claims,
wherein the fusion algorithm is configured to weight the first object hypothesis (204);
wherein the weight of at least one of the first object hypotheses (204) is changed based on the second object hypothesis (206).
5. The method (400) of any of the preceding claims,
wherein the machine learning algorithm is trained using training data that consists, at least for the most part, of features of objects that are difficult for the fusion algorithm to recognize.
6. The method (400) of any of the preceding claims,
wherein the fusion algorithm comprises a Kalman filter; and/or
wherein the machine learning algorithm comprises a multi-layer perceptron.
7. An evaluation unit (110) configured to perform the method (400) according to any one of the preceding claims.
8. A vehicle system (102), comprising:
a sensor system having at least two sensor units (104, 106, 108) for detecting an environment of a vehicle (100); and
the evaluation unit (110) of claim 7.
9. A computer program comprising instructions which, when executed by a computer, cause the computer to perform the method (400) according to any one of claims 1 to 6.
10. A computer-readable medium, on which a computer program according to claim 9 is stored.
CN202110312295.6A 2020-03-25 2021-03-24 Method for identifying a vehicle environment Pending CN113447921A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020203828.7 2020-03-25
DE102020203828.7A DE102020203828A1 (en) 2020-03-25 2020-03-25 Method for recognizing the surroundings of a vehicle

Publications (1)

Publication Number Publication Date
CN113447921A (en) 2021-09-28

Family

ID=77658503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110312295.6A Pending CN113447921A (en) 2020-03-25 2021-03-24 Method for identifying a vehicle environment

Country Status (2)

Country Link
CN (1) CN113447921A (en)
DE (1) DE102020203828A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022207598A1 (en) 2022-07-26 2024-02-01 Robert Bosch Gesellschaft mit beschränkter Haftung Method and control device for controlling an automated vehicle

Also Published As

Publication number Publication date
DE102020203828A1 (en) 2021-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination