EP4066156A1 - Method for calculating a quality measure for evaluating an object detection algorithm (original German title: Verfahren zur Berechnung eines Qualitätsmaßes zur Bewertung eines Objektdetektionsalgorithmus) - Google Patents

Info
- Publication number
- EP4066156A1 (application number EP20800096.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- object detection
- detection algorithm
- annotations
- determined
- detections
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/414—Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
Definitions
- The invention relates to a method for calculating a quality measure for evaluating a computer-implemented object detection algorithm, a device set up to execute the method, a computer program for executing the method, and a machine-readable storage medium on which this computer program is stored.
- Computer-implemented object detection algorithms are often used as part of the environment recognition of partially, highly or fully automated robots, in particular automated vehicles.
- The algorithms used for this are not perfect and can produce false detections of varying severity.
- An object detection algorithm in an automatically operated vehicle can, for example, detect an object at a different location than where it actually is and thereby generate a faulty environment model. Before such a system can be released, it is therefore essential that the quality of the object detection algorithm is assessed and classified as sufficiently good.
- Averaged metrics, for example intersection over union, are generally used to evaluate an object detection algorithm. From a safety perspective, however, averaged metrics are problematic because they average over the safest and the most dangerous behavior. For the release of a safety-critical product, such averaged metrics are therefore not sufficient.
- The invention describes a computer-implemented method for calculating a quality measure of a computer-implemented object detection algorithm, which can be used in particular to release the object detection algorithm for partially, highly or fully automated robots, the method comprising the following steps:
- The quality measure represents a probability with which the deviation of an object detection from its assigned annotation exceeds or falls below a predefined threshold value.
- The object detection algorithm can be released for use, in particular in an at least partially automated robot, if the quality measure exceeds or falls below a predefined threshold.
- A robot can be understood to mean, for example, an industrial robot, an automated work machine or an automatically operated vehicle.
- The latter can be understood to mean a partially, highly or fully automated vehicle which can, at least temporarily, carry out driving operations without human intervention, in particular adjustments of the longitudinal and/or lateral movement.
- Data sets with annotations are used for the method, with annotations being understood in particular to be bounding boxes.
- A bounding box can be understood in particular as a rectangle that encloses an object to be detected.
- For example, the bounding box can mark an area of an image in which a person is located.
- The bounding box can also be cuboid if objects are to be detected in three-dimensional space. This can be useful, for example, if, in the above example, the position coordinates of the person are to be detected directly in the real world.
- Several objects to be detected, and thus several annotations, can exist for each record in the data set. For example, several people can be visible in one image, all of which are to be detected.
- Different kinds of data can be used.
- For example, image data recorded by one or more cameras can be used.
- Data from other sensors, for example radar, lidar or ultrasonic sensors, or from microphones, can also be used.
- Acoustic signals, in particular visualized noise spectra or the like, can be used as a basis for object recognition.
- Annotations can be obtained in different ways. If data sets are obtained from external sources such as the Internet, they often already come with annotations that can then be read out accordingly. Alternatively, the annotations can be created manually and linked to the data set. Another alternative is the automatic and/or semi-automatic creation of annotations. In semi-automatic annotation, images are labeled by an annotation algorithm, and in a second step only the correctness of the labels is checked by a person.
- The task of the object detection algorithm is to detect objects as accurately as possible.
- For this purpose, the object detection algorithm calculates object detections, with object detections being understood in particular as bounding boxes.
- Both annotations and object detections can therefore be represented by bounding boxes.
- The difference is that the bounding box of an annotation marks an object to be detected, while the bounding box of an object detection is a bounding box determined by the object detection algorithm.
- The object detection algorithm can first be applied to a selected data set in order to calculate object detections for that data set.
- The object detections generated in this way can then be assigned to the annotations of the data set. This can be done by assigning to each annotation the object detection that has the highest overlap with it.
- Alternatively, the distance between an object detection and an annotation can be used as the assignment criterion, by assigning to each annotation the object detection that has the smallest distance to it.
- The first possible outcome is that an object detection was not assigned to any annotation, for example because it does not overlap with any of the annotations. This is known as a false positive.
- The second possibility is that no object detection was assigned to an annotation. This is known as a false negative.
- The third case is that an assignment was made and there is a pair of object detection and annotation. This case is called a match.
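The matching step described above can be sketched as follows. This is a minimal illustration assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max) tuples; the function names `iou` and `match_detections` are illustrative and not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(annotations, detections, min_iou=0.0):
    """Greedy assignment: each annotation receives the as yet unused
    detection with the highest IoU. Returns index lists
    (matches, false_positives, false_negatives)."""
    matches, used = [], set()
    for a_idx, ann in enumerate(annotations):
        best_d, best_v = None, min_iou
        for d_idx, det in enumerate(detections):
            if d_idx in used:
                continue
            v = iou(ann, det)
            if v > best_v:  # strict: zero-overlap detections never match
                best_d, best_v = d_idx, v
        if best_d is not None:
            used.add(best_d)
            matches.append((a_idx, best_d))
    matched_anns = {a for a, _ in matches}
    false_positives = [d for d in range(len(detections)) if d not in used]
    false_negatives = [a for a in range(len(annotations)) if a not in matched_anns]
    return matches, false_positives, false_negatives
```

The same skeleton works with the alternative center-distance criterion by replacing `iou` with a distance function and inverting the comparison.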
- A quality measure can now be determined which reflects the accuracy with which an object detection captures an annotation, and thus an object to be detected.
- A quality measure can be understood to mean a probability with which the distance between an object detection and the annotation assigned to it falls below a predetermined distance threshold value.
- A distance can be understood here as the distance between a point on an edge of an object detection and a point on an edge of the associated annotation. In particular, it can be the smallest or largest distance between the edge of an object detection and the edge of the associated annotation.
- The advantage of the invention is that it makes it possible to determine the probability with which the object detection algorithm poses a safety risk through its object detections.
- The safety risk can be understood here as the probability that the object detections no longer completely enclose their correspondingly assigned annotations.
- The invention can be used to classify an object detection algorithm as safe with respect to object detection if the determined probability falls below a predefined value.
- The deviation is a distance between a point of the object detection and a point of its assigned annotation.
- The object detection algorithm is not limited to object detections whose sides are parallel to the sides of the annotations.
- For example, the object detection algorithm can output an object detection that is rotated relative to the annotation assigned to it.
- In this case, the deviation can be understood as the distance between a corner of the annotation and a side of the object detection.
- The deviation represents a shift between a point of the object detection and a point of its assigned annotation, the shift being a signed scalar whose magnitude represents a distance and whose sign represents the direction in which the point of the annotation is shifted relative to the point of the object detection.
- The advantage of this extension is that, for example, the smallest shift can be used to determine the extent to which the annotation lies within the object detection assigned to it or, otherwise, how far the annotation protrudes from the object detection. In the event that the object detection completely encloses the annotation, the smallest shift is greater than zero. In the event that parts of the annotation are outside the object detection, the smallest shift is less than zero. This can be used to determine how likely it is that part or all of the annotation lies outside the object detection.
- The deviation corresponds to the smallest shift from a set of shifts.
- The advantage of this embodiment is that a given object detection can be characterized by its potentially most dangerous deviation from the annotation, seen from a safety point of view. For a safety argument, the most dangerous deviations of all object detections can be used and evaluated statistically.
- The set of displacements consists of the displacements of the sides of the annotation to the corresponding sides of the assigned object detection, the displacements being orthogonal to the respective side.
- Corresponding sides are understood to mean those sides of an object detection and an annotation that represent the same boundary. In the case of two-dimensional object detections and annotations, these are the left, right, upper and lower sides. For example, the left side of an object detection corresponds to the left side of the annotation assigned to it.
- For each pair of corresponding sides, the shift orthogonal to the respective annotation side is determined, by which that side is displaced from the corresponding side of the object detection. The smallest shift is then the smallest of these signed shifts.
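A minimal sketch of this side-shift computation, again assuming axis-aligned boxes as (x_min, y_min, x_max, y_max) in image coordinates (y grows downwards); the helper names are hypothetical.

```python
def side_shifts(annotation, detection):
    """Signed shifts of the four corresponding sides. A shift is negative
    where the annotation protrudes beyond the detection and positive where
    the detection extends past the annotation."""
    ax0, ay0, ax1, ay1 = annotation
    dx0, dy0, dx1, dy1 = detection
    return {
        "left":   ax0 - dx0,   # negative: annotation sticks out on the left
        "top":    ay0 - dy0,   # negative: annotation sticks out on top
        "right":  dx1 - ax1,   # negative: annotation sticks out on the right
        "bottom": dy1 - ay1,   # negative: annotation sticks out at the bottom
    }

def smallest_shift(annotation, detection):
    """The safety-relevant deviation: greater than zero exactly when the
    detection fully encloses the annotation, less than zero when any part
    of the annotation protrudes."""
    return min(side_shifts(annotation, detection).values())
```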
- The deviation can also be understood as an area, namely the part of the annotation that does not overlap with the object detection.
- The advantage of this embodiment is that an area can, in some cases, better capture the extent to which several deviations (e.g. in height and width) from the annotation can be safety-critical.
- In the three-dimensional case, the deviation can accordingly be indicated by a volume, namely the part of the annotation volume that does not overlap with the volume of the object detection.
- The probability is calculated based on a model which is determined from the determined deviations.
- The model can be, for example, an instance of a probability distribution.
- The determination of this model can likewise be based on the determined deviations.
- Its parameters can be determined using a known method, in particular maximum likelihood estimation or Bayesian parameter estimation.
- Alternatively, the parameters can be set based on expert knowledge so that the model shows a desired behavior. The advantage is that suitable assumptions can be integrated into the determination of the probability via a model selected in this way.
- The model can also extract knowledge from the determined deviations alone and output a probability accordingly.
- Known machine learning methods, in particular neural networks, can be used for this.
- The model described above can be a parameterizable model, in particular a parameterizable probability distribution, the parameters of which can be determined from the determined deviations.
- The parameterizable model described above can be an instance of a generalized extreme value distribution, the parameters defining the specific distribution.
- The invention further describes a computer-implemented method which can be used to adapt a computer-implemented object detection algorithm that determines object detections.
- The method comprises the following steps:
- Adaptation of the object detection algorithm based on the calculated quality measure in such a way that a renewed execution of the object detection algorithm results in a scaling of the object detections determined by means of the object detection algorithm.
- The determination of the annotations and object detections, as well as the calculation of the quality measure, can be carried out analogously to the method described above.
- Data sets can be used from which the annotations are extracted.
- The objects to be detected are then predicted by the object detection algorithm in the form of object detections.
- The object detections are then assigned to the annotations, and one of the methods described above for determining a quality measure is used to evaluate the predictions.
- The calculated quality measure can be used to make the underlying object detection algorithm more reliable.
- A predicted object detection of an object detection algorithm can be understood as safety-critical if the annotation assigned to it lies wholly or partially outside the object detection.
- The predicted object detection can, for example, be scaled, that is, changed in shape and size, in such a way that the assigned annotation is completely enclosed.
- The advantage of this method is that an object detection algorithm can be adapted in a measurable manner, by increasing the quality measure, in such a way that it enables better or more reliable detection of objects.
- The method can therefore be used as a component of a safety argument for the release of a product, for example an automated driving function and/or a driver assistance function, that is based on the object detection algorithm.
- The steps of determining the object detections, calculating the quality measure and adapting the object detection algorithm are repeated with the respectively adapted object detection algorithm until the quality measure falls below or exceeds a predefined quality value and/or a predefined number of repetitions has been reached.
- The advantage of this embodiment is that object detection algorithms based on iterative methods can be adapted very easily in order to increase the reliability of the predicted object detections.
- The object detection algorithm can be based on a parameterizable model, in particular a neural network.
- The advantage of this embodiment is that the currently best-performing object detection algorithms are based on neural networks. This embodiment allows the safety of a neural network to be assessed using one of the quality measures described above.
- The scaling can take place based on properties of the determined object detections, in particular their size, proportions and/or position in the image. For example, it can be stipulated that smaller object detections have to be scaled differently than larger ones, since deviations of the object detections from the assigned annotations are more safety-critical for larger annotations than for small ones, and/or vice versa.
- The position of an object detection can also be used to determine the scaling. In the case of an autonomous vehicle, it can be assumed, for example, that objects at the upper edge of a video image are further away from the vehicle and are therefore less safety-critical.
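As an illustration of such a position-dependent rule, a hypothetical scaling factor could be interpolated over the vertical image position. All names and values here are assumptions for illustration, not taken from the patent.

```python
def position_dependent_factor(box, image_height,
                              far_factor=1.02, near_factor=1.10):
    """Hypothetical rule: detections near the top of the image (assumed far
    from the vehicle) are enlarged less than those near the bottom (close).
    `box` is (x_min, y_min, x_max, y_max) in image coordinates."""
    _, y_min, _, y_max = box
    center_y = (y_min + y_max) / 2.0
    t = center_y / image_height  # 0.0 at the top edge, 1.0 at the bottom
    return far_factor + t * (near_factor - far_factor)
```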
- Alternatively, the scaling can take place independently of the determined object detections, in particular based on a predefined factor.
- The advantage of this embodiment is that the factor can be optimized based on the quality measure alone, without further assumptions, and represents a computationally economical measure by means of which an existing object detection algorithm can be made measurably more reliable in a simple and fast manner.
- The object detection algorithm can be based on a parameterizable model, in particular a neural network, the adaptation being based on a change of the parameters of the parameterizable model, comprising the steps:
- The core idea of this embodiment is that a neural network is trained in such a way that it outputs already scaled object detections, which no longer require subsequent scaling in order to enclose the assigned annotation. For this, scaled annotations are required first. Scaled annotations are understood to mean annotations that were generated from the originally extracted annotations by scaling. These scaled annotations can then be used to train the neural network, whereby the neural network learns to carry out the scaling intrinsically. It has been shown empirically that neural networks currently represent the best-performing object detection algorithms. The advantage of this embodiment is therefore that, in addition to high performance, a high degree of safety with regard to the prediction of object detections can be achieved.
- The individual steps of the previous embodiment can be repeated with the respectively adapted parameters until a predefined error threshold value is undershot and/or a predefined number of repetitions is reached.
- The advantage of this embodiment is that the neural network can be adapted iteratively. It is known that this iterative procedure enables the best predictive performance when training neural networks.
- Also described is a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out one of the above-mentioned methods.
- FIG. 1 shows a schematic process diagram for determining a quality measure of the object detection algorithm.
- FIG. 2 shows an example of the relationships between annotation, object detection and scaling of an object detection.
- FIG. 3 shows, by way of example, the determination of displacements of corresponding sides of an annotation and the object detection assigned to it.
- FIG. 4 schematically shows a generalized extreme value distribution with a threshold value.
- FIG. 5 shows the schematic sequence for improving a quality measure of an object detection algorithm.
- In a first exemplary embodiment, a quality measure of an object detection algorithm is determined by means of a computer-implemented method.
- The object detection algorithm is designed in such a way that it can recognize predefined objects by marking them with a bounding box in image data recorded by means of a camera. This is shown schematically in FIG. 2a, in which a vehicle with an annotation 201 and a bounding box 202a determined by means of the object detection algorithm are shown.
- In this exemplary embodiment, a set of images is used in which objects are annotated and for which the object detection algorithm has determined bounding boxes for the annotated objects.
- This data set is used for the method, shown schematically in FIG. 1, for determining a quality measure of the object detection algorithm.
- The object detections that were determined by means of the object detection algorithm are assigned to the annotations (201) comprised by the image data.
- An annotation can protrude beyond an associated object detection (202a); this case is shown by way of example in FIG. 2a.
- Alternatively, the annotation is completely enclosed by the object detection (202b), which is shown schematically in FIGS. 2b and 3.
- The special case in which the annotation corresponds exactly to the object detection can optionally be assigned, for the following steps, to either of the two categories shown in FIG. 2.
- The annotation is assigned to an object detection via the so-called intersection over union, that is, the ratio of the area of the intersection of the two bounding boxes to the area of their union.
- Alternatively, the distance between the center points of the two bounding boxes can be used at this point to carry out the assignment.
- In step 102, the smallest deviation is determined for each pair of annotation and associated object detection.
- The smallest deviation is determined from a set of deviations of the object detection from the associated annotation, which is shown schematically in FIG. 3.
- The deviations are the respective shifts of the corresponding sides of an object detection and the annotation assigned to it. This means that shifts are determined for the left (301), upper (302), right (303) and lower (304) corresponding sides. Each shift is orthogonal to the corresponding side of the annotation (201).
- The sign of a shift indicates the direction in which the object detection is shifted relative to the annotation (201). In the event that the annotation (201) protrudes from the object detection on one side, the corresponding shift is negative (301). Otherwise the shift is positive (302, 303, 304).
- The smallest of the four shifts (301, 302, 303, 304) is then determined.
- In step 103, the quality measure is calculated.
- For this purpose, a model (401) is determined from the deviations determined in step 102, which represents the distribution of the deviations.
- A generalized extreme value distribution is used for this purpose.
- The maximum likelihood estimation method is used to determine the parameters of the generalized extreme value distribution.
- The cumulative distribution function of the generalized extreme value distribution is then evaluated at the value 0 (402). This step is shown schematically in FIG. 4, where the displacement is plotted on the x axis and the probability density of the extreme value distribution on the y axis.
- The result of this evaluation corresponds to the probability that an annotation protrudes from the object detection assigned to it.
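Assuming SciPy is available, steps 102 and 103 can be sketched end to end as follows; the synthetic shift sample stands in for the smallest shifts measured on a real data set, and all variable names are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical sample of smallest shifts (in pixels), one per matched
# annotation/detection pair; in practice these come from step 102.
rng = np.random.default_rng(0)
smallest_shifts = rng.normal(loc=3.0, scale=1.5, size=500)

# Maximum likelihood fit of a generalized extreme value distribution
# (scipy's fit() performs MLE by default), returning shape, location, scale.
shape, loc, scale = genextreme.fit(smallest_shifts)

# Quality measure: the CDF evaluated at 0, i.e. the probability that the
# smallest shift is negative and the annotation protrudes from its
# assigned object detection.
p_protrude = genextreme.cdf(0.0, shape, loc=loc, scale=scale)
```

The Bayesian variant of the next paragraph would replace the `fit` call with a posterior estimate of the same three parameters.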
- In a second exemplary embodiment, a Bayesian parameter estimation is carried out in step 103 instead of the maximum likelihood estimation.
- In a third exemplary embodiment, an object detection algorithm is changed in such a way that it becomes more reliable.
- For this, annotations are manually generated for a data set of camera-based sensor data.
- Alternatively, the annotations can also be generated semi- or fully automatically.
- In step 502, object detections are determined for the sensor data using the object detection algorithm; these are then assigned to the annotations in step 503.
- The assignment takes place as in the first exemplary embodiment.
- In step 504, the quality measure of the object detection algorithm is determined. This also happens as in the first exemplary embodiment.
- In step 505, the object detection algorithm is adapted in such a way that the probability that an annotation protrudes from the object detection assigned to it becomes smaller.
- For this purpose, all object detections are scaled with a fixed factor in such a way that they enclose the annotations assigned to them.
- In a fourth exemplary embodiment, the same steps are carried out as in the third exemplary embodiment, but lidar-based sensor data are used instead of camera-based sensor data. The remaining steps are identical.
- In a fifth exemplary embodiment, the same steps take place as in the third exemplary embodiment, with step 505 modified as follows:
- The object detections are scaled with a fixed factor and the quality measure for the scaled object detections is calculated. If the quality measure does not meet a predefined threshold value, the already scaled object detections are scaled again with a factor that makes the object detections larger. This adaptation of the size with the aid of a scaling factor is repeated until the quality measure falls below a predefined probability.
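This iterative fixed-factor scaling can be sketched as follows; `quality_measure` is a hypothetical callable returning the estimated probability that an annotation protrudes, and the threshold and factor values are purely illustrative.

```python
def scale_box(box, factor):
    """Grow a box (x_min, y_min, x_max, y_max) about its centre by `factor`."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) / 2.0 * factor, (y1 - y0) / 2.0 * factor
    return (cx - hw, cy - hh, cx + hw, cy + hh)

def scale_until_safe(detections, quality_measure, threshold=0.01,
                     factor=1.05, max_rounds=50):
    """Repeatedly enlarge all detections until the quality measure (the
    probability of a protruding annotation) falls below `threshold`."""
    for _ in range(max_rounds):
        if quality_measure(detections) < threshold:
            break
        detections = [scale_box(b, factor) for b in detections]
    return detections
```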
- In a sixth exemplary embodiment, the object detection algorithm is based on a neural network.
- Step 505 is modified as follows: the neural network is trained with sensor data and annotations of a second data set in such a way that it intrinsically outputs larger object detections.
- To this end, the annotations of the second data set are scaled so that they become larger.
- In this way, the neural network learns to predict larger object detections.
- The modified neural network is then applied again to the first data set and the quality measure is determined again. If the quality measure is above a predefined probability value, the neural network is trained on the second data set with annotations scaled even larger. The adaptation of the neural network and the evaluation of the quality measure are repeated until the quality measure falls below the predefined probability value.
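This retraining loop could be sketched as below; `train_fn` and `quality_fn` are hypothetical placeholders for training the network on the (scaled) second data set and for evaluating the quality measure on the first data set, and the numeric defaults are assumptions.

```python
def grow_annotations(annotations, factor):
    """Scale every annotation box (x_min, y_min, x_max, y_max) about its
    centre by `factor`, so the network is trained on enlarged targets."""
    grown = []
    for x0, y0, x1, y1 in annotations:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        hw, hh = (x1 - x0) / 2.0 * factor, (y1 - y0) / 2.0 * factor
        grown.append((cx - hw, cy - hh, cx + hw, cy + hh))
    return grown

def retrain_until_safe(train_fn, quality_fn, annotations,
                       factor_step=1.05, p_max=0.01, max_rounds=10):
    """Train with progressively larger annotations until the quality measure
    (probability of a protruding annotation) drops below `p_max`."""
    factor, model = 1.0, None
    for _ in range(max_rounds):
        model = train_fn(grow_annotations(annotations, factor))
        if quality_fn(model) < p_max:
            break
        factor *= factor_step  # enlarge the training targets and try again
    return model
```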
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019218483.9A DE102019218483A1 (de) | 2019-11-28 | 2019-11-28 | Verfahren zur Berechnung eines Qualitätsmaßes zur Bewertung eines Objektdetektionsalgorithmus |
PCT/EP2020/080377 WO2021104789A1 (de) | 2019-11-28 | 2020-10-29 | VERFAHREN ZUR BERECHNUNG EINES QUALITÄTSMAßES ZUR BEWERTUNG EINES OBJEKTDETEKTIONSALGORITHMUS |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4066156A1 (de) | 2022-10-05 |
Family
ID=73040083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20800096.8A Pending EP4066156A1 (de) | 2019-11-28 | 2020-10-29 | VERFAHREN ZUR BERECHNUNG EINES QUALITÄTSMAßES ZUR BEWERTUNG EINES OBJEKTDETEKTIONSALGORITHMUS |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220398837A1 (de) |
EP (1) | EP4066156A1 (de) |
CN (1) | CN114787877A (de) |
DE (1) | DE102019218483A1 (de) |
WO (1) | WO2021104789A1 (de) |
- 2019-11-28: DE application DE102019218483.9A filed (DE102019218483A1, pending)
- 2020-10-29: US application US17/777,222 filed (US20220398837A1, pending)
- 2020-10-29: CN application CN202080083444.XA filed (CN114787877A, pending)
- 2020-10-29: PCT application PCT/EP2020/080377 filed (WO2021104789A1)
- 2020-10-29: EP application EP20800096.8A filed (EP4066156A1, pending)
Also Published As
Publication number | Publication date |
---|---|
WO2021104789A1 (de) | 2021-06-03 |
CN114787877A (zh) | 2022-07-22 |
US20220398837A1 (en) | 2022-12-15 |
DE102019218483A1 (de) | 2021-06-02 |
Legal Events

Date | Code | Description
---|---|---
 | STAA | Status: UNKNOWN
 | STAA | Status: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
 | PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
 | STAA | Status: REQUEST FOR EXAMINATION WAS MADE
2022-06-28 | 17P | Request for examination filed
 | AK | Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
 | DAV | Request for validation of the European patent (deleted)
 | DAX | Request for extension of the European patent (deleted)
 | STAA | Status: EXAMINATION IS IN PROGRESS