CN118196732A - Data drift identification for sensor systems - Google Patents

Data drift identification for sensor systems

Info

Publication number
CN118196732A
CN118196732A (application CN202311600130.4A)
Authority
CN
China
Prior art keywords
detection
category
dataset
iou
ground truth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311600130.4A
Other languages
Chinese (zh)
Inventor
S. Bhaskar
Jinesh Jain
Nikita Jaipuria
Shreyasha Paudel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC filed Critical Ford Global Technologies LLC
Publication of CN118196732A publication Critical patent/CN118196732A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides "data drift identification for sensor systems". A system and method of identifying data drift in a trained object detection Deep Neural Network (DNN) includes: receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category; measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold; performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and upon determining that the temperature T exceeds a preset second threshold, identifying that data drift has occurred.

Description

Data drift identification for sensor systems
Technical Field
The present disclosure relates to systems and methods for identifying data drift in a trained object detection Deep Neural Network (DNN).
Background
Autonomous driving may use a Deep Neural Network (DNN) for various perception tasks and rely on scores output by the perception DNN to determine the uncertainty associated with the predicted output.
Disclosure of Invention
An object detection Deep Neural Network (DNN) may be trained to determine objects in image data acquired by sensors in systems including vehicle guidance, robotic operation, security, manufacturing, and product tracking. The vehicle guidance may include the vehicle operating in an autonomous or semi-autonomous mode in an environment including a plurality of objects. Robotic guidance may include guiding a robotic end effector (e.g., a gripper) to pick up parts and orient the parts for assembly in an environment that includes multiple parts. A security system includes features in which a computer obtains video data from a camera observing a secure area to provide access rights to authorized users and to detect unauthorized access in an environment including multiple users. In a manufacturing system, a DNN may determine a position and orientation of one or more parts in an environment that includes a plurality of parts. In a product tracking system, a DNN may determine the location and orientation of one or more packages in an environment that includes a plurality of packages.
Such tasks may use the object detection DNN for various perception tasks and rely on confidence scores output by the perception DNN to determine the uncertainty or reliability associated with the predicted output. Calibration of the DNN means that the DNN can predict uncertainty, i.e., the probability that the DNN output accurately represents the ground truth. The calibration of the DNN may depend on various factors such as the architecture, the dataset, and the selection of training parameters. Miscalibration error is a measure of the deviation of the DNN's predicted uncertainty score from its true performance accuracy, i.e., miscalibration means that the DNN cannot accurately predict the certainty or uncertainty with which its perception output matches the ground truth. When the confidence score is higher than the accuracy of the model, the score is referred to as overconfident. When the confidence score is below the accuracy of the model, the score is referred to as underconfident.
DNN calibration may be improved as described herein. For example, the present disclosure includes white-box temperature scaling (WB-TS) to provide calibration of object detection DNN.
As used herein with respect to calibration of a DNN, black-box calibration refers to calibrating data after the non-maximum suppression (NMS) step, while white-box calibration refers to calibrating raw data, i.e., prior to any NMS step. In addition, Platt scaling (a parametric method for calibration) may be employed. The non-probabilistic predictions of a classifier are used as features for a logistic regression model that is trained on the validation set to return probabilities. In the context of a NN, Platt scaling learns scalar parameters a, b ∈ R and outputs q_i = σ(a·z_i + b) as the calibrated probability. The parameters a and b can be optimized using Negative Log Likelihood (NLL) loss on the validation set.
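For illustration only, a minimal NumPy sketch of the Platt scaling transform described above (fitting a and b by minimizing NLL on the validation set is omitted; the function name is illustrative, not from the patent):

```python
import numpy as np

def platt_scale(z, a, b):
    # Platt scaling: calibrated probability q_i = sigmoid(a * z_i + b).
    return 1.0 / (1.0 + np.exp(-(a * z + b)))
```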
Temperature scaling is the simplest extension of Platt scaling and uses a single scalar parameter T > 0 for all K categories (where K > 2). Given the logit vector z_i, the confidence prediction is:

q_i = max_k σ_SM(z_i / T)^(k)    (equation 1)

where σ_SM is the softmax function. T is called the temperature, and it "softens" the softmax (i.e., increases the output entropy) when T > 1. As T → ∞, the probability q_i approaches 1/K, which represents maximum uncertainty. With T = 1, the original probability p_i is recovered. As T → 0, the probability collapses to a point mass (i.e., q_i = 1). The temperature T is optimized with respect to the NLL loss on the validation set. Thus, as used herein, the term "temperature" refers to a scalar parameter used in calibration (rather than to the degree or intensity of heat present in a substance or object).
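As a hedged illustration of equation 1, a minimal NumPy sketch (function names are illustrative, not from the patent):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    # Equation 1: q_i = max_k softmax(z_i / T)^(k).
    probs = softmax(logits / T)
    return probs.max(axis=-1)  # calibrated confidence of the predicted class

# Example: one overconfident logit vector for K = 3 categories.
z = np.array([[4.0, 1.0, 0.5]])
print(temperature_scale(z, T=1.0))  # ~0.93 (T = 1 recovers the original p_i)
print(temperature_scale(z, T=3.0))  # ~0.60 ("softened" by T > 1)
```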
White-box temperature scaling (WB-TS) may be used to address calibration in object detection DNNs by scaling the logit vectors of the detection boxes before non-maximum suppression (pre-NMS) with the temperature value T. Here, the temperature T is obtained during the calibration phase using the validation dataset as the calibration dataset. The calibrated scores have been found to enable reliable uncertainty estimation. However, it has also been found that the level of calibration of incoming and evolving datasets varies depending on factors such as geography and time of day, week, or year. In practice, such data drift may be related to concept drift due to a new location, which affects only certain kinds of data, or may be related to covariate shift (such as shadowing). As described herein, the object detection DNN may advantageously be calibrated from an incoming dataset of a changing environment. Moreover, such adjustment to the changing environment allows for continued training of the DNN, where the new incoming data may be further used to retrain the DNN model for unseen, new, and/or out-of-distribution (OOD) data points.
Vehicle guidance will be described herein as a non-limiting example of using object detection DNNs to detect, for example, vehicles and pedestrians in a traffic scene. A traffic scene is an environment of a traffic infrastructure system or the surroundings of a vehicle, which may include a portion of a road, objects including vehicles and pedestrians, etc. For example, a computing device in a traffic infrastructure may be programmed to obtain one or more images from one or more sensors included in the traffic infrastructure system and detect objects in the images using DNNs. The images may be acquired from still or video cameras and may include range data acquired from a range sensor including a lidar sensor. The images may also be acquired from sensors included in the vehicle. The DNNs may be trained to label and locate objects and to determine trajectories and uncertainties in the image data or range data. Computing devices included in the traffic infrastructure system may use the detected trajectories and uncertainties of objects to determine a vehicle path on which to operate the vehicle in an autonomous or semi-autonomous mode. The vehicle may operate by determining commands based on the vehicle path to instruct the powertrain, brake, and steering components of the vehicle to operate the vehicle to travel along the path.
Vehicles operating based on a vehicle path determined by a deep neural network may benefit from detecting objects on or near the vehicle path and determining whether to continue, stop, or determine a new vehicle path that avoids the objects on the vehicle path.
In one or more implementations of the present disclosure, a system includes a computer including a processor and a memory storing instructions executable by the processor to identify data drift in a trained object detection Deep Neural Network (DNN). This is achieved by: receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category; measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold; performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and upon determining that the temperature T exceeds a preset second threshold, identifying that the data drift has occurred.
In one implementation, the system may further include instructions for calibrating the incoming data using the extracted temperature T when a data drift is identified.
In another implementation, incoming data may be calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer.
In further implementations, the system may further include instructions for performing additional learning on the object detection DNN when the data drift is identified.
In one implementation, the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of samples conditioned on the IoU threshold, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
In one implementation, the particular IoU threshold may be set the same as the IoU threshold used to train the object detection DNN.
In another implementation, the instructions for performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T may include instructions for: retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category; determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold; correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and determining a single scalar parameter for the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
In one implementation, the preset first threshold may be in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation dataset, and the preset second threshold may be in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset.
In another implementation, the system may include instructions to: performing non-maximum suppression on the calibrated confidence score using the corresponding bounding box prediction after the Sigmoid/Softmax layer to obtain a final detection; and actuating a vehicle component based on the object detection determination of the object detection DNN.
In further implementations, the instructions for correcting the class imbalance may include instructions for: determining an average number of pre-NMS detection boxes in the non-BG classes as k; and extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.
In one or more implementations of the present disclosure, a method of identifying data drift in a trained object detection Deep Neural Network (DNN) may be performed by: receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category; measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold; performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and upon determining that the temperature T exceeds a preset second threshold, identifying that the data drift has occurred.
In one implementation, the method may further include calibrating the incoming data using the extracted temperature T when the data drift is identified.
In another implementation, incoming data may be calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer.
In further implementations, the method may further include performing additional learning on the object detection DNN when the data drift is identified.
In another implementation, the IoU-ECE may be

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of samples conditioned on the IoU threshold, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
In further implementations, the particular IoU threshold may be set the same as the IoU threshold used to train the object detection DNN.
In one implementation, performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T may include: retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category; determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold; correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and determining a single scalar parameter for the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
In another implementation, the preset first threshold may be in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation dataset, and the preset second threshold may be in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset.
In one implementation, the method may further include: performing non-maximum suppression on the calibrated confidence score using the corresponding bounding box prediction after the Sigmoid/Softmax layer to obtain a final detection; and actuating a vehicle component based on the object detection determination of the object detection DNN.
In another implementation, correcting the class imbalance may include: determining an average number of pre-NMS detection boxes in the non-BG classes as k; and extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.
Drawings
Fig. 1 is an example of a vehicle system for using a deep neural network.
Fig. 2 is an exemplary traffic scenario.
Fig. 3 shows a box plot of Expected Calibration Error (ECE) values obtained with different percentages of Background (BG) class samples to show the effect of class imbalance on ECE values.
Fig. 4 shows an example of a data drift detection process flow.
Fig. 5 shows an exemplary flow chart of the data drift process flow.
Fig. 6 is an exemplary flowchart of a white-box temperature scaling (WB-TS) process.
Fig. 7 is an exemplary flow chart of a data calibration process.
Detailed Description
Fig. 1 is a diagram of an object detection system 100 that may include a traffic infrastructure system 105 that includes a server computer 120 and sensors 122. The object detection system 100 includes a vehicle 110 that is operable in an autonomous ("autonomous" itself means "fully autonomous" in this disclosure) mode, a semi-autonomous mode, and an occupant driving (also referred to as non-autonomous) mode. The computing device 115 of one or more vehicles 110 may receive data regarding the operation of the vehicle 110 from the sensors 116. Computing device 115 may operate vehicle 110 in an autonomous mode, a semi-autonomous mode, or a non-autonomous mode.
The computing device 115 includes a processor and memory such as are known. Further, the memory includes one or more forms of computer-readable media and stores instructions executable by the processor to perform operations including as disclosed herein. For example, the computing device 115 may include one or more of programming to operate vehicle braking, propulsion (e.g., controlling acceleration of the vehicle 110 by controlling one or more of an internal combustion engine, an electric motor, a hybrid engine, etc.), steering, climate control, interior and/or exterior lights, etc., and to determine whether and when the computing device 115 (rather than a human operator) is controlling such operations.
The computing device 115 may include or be communicatively coupled to more than one computing device (e.g., a controller included in the vehicle 110 for monitoring and/or controlling various vehicle components, etc. (e.g., powertrain controller 112, brake controller 113, steering controller 114, etc.)) for example, via a vehicle communication bus as described further below. The computing device 115 is typically arranged for communication over a vehicle communication network (e.g., including a bus in the vehicle 110, such as a Controller Area Network (CAN), etc.); additionally or alternatively, the vehicle 110 network may include, for example, known wired or wireless communication mechanisms, such as ethernet or other communication protocols.
The computing device 115 may transmit and/or receive messages to and/or from various devices in the vehicle (e.g., controllers, actuators, sensors (including sensor 116), etc.) via a vehicle network. Alternatively or additionally, where computing device 115 actually includes multiple devices, a vehicle communication network may be used to communicate between devices represented in this disclosure as computing device 115. In addition, as mentioned below, various controllers or sensing elements (such as sensors 116) may provide data to the computing device 115 via a vehicle communication network.
In addition, the computing device 115 may be configured to communicate with a remote server computer 120 (such as a cloud server) via a network 130 through a vehicle-to-infrastructure (V2I) interface 111, which includes hardware, firmware, and software that permit the computing device 115 to communicate with the remote server computer 120 via a network 130 such as, for example, wireless Internet (Wi-Fi®) or cellular networks. Thus, the V2I interface 111 may include processors, memory, transceivers, etc., configured to utilize various wired and/or wireless networking technologies (e.g., cellular, Bluetooth®, and wired and/or wireless packet networks). The computing device 115 may be configured to communicate with other vehicles 110 through the V2I interface 111 using a vehicle-to-vehicle (V2V) network (e.g., according to Dedicated Short Range Communications (DSRC) and/or the like), e.g., formed on a mobile ad hoc network basis among nearby vehicles 110 or formed through infrastructure-based networks. The computing device 115 also includes non-volatile memory such as is known. The computing device 115 may log data by storing the data in non-volatile memory for later retrieval and transmission to the server computer 120 or a user mobile device 160 via the vehicle communication network and the vehicle-to-infrastructure (V2I) interface 111.
As already mentioned, programming for operating one or more vehicle 110 components (e.g., braking, steering, propulsion, etc.) without human operator intervention is typically included in instructions stored in memory and executable by a processor of computing device 115. Using data received in computing device 115 (e.g., sensor data from sensors 116, server computer 120, etc.), computing device 115 may make various determinations and/or control various vehicle 110 components and/or operations to operate vehicle 110 without a driver. For example, computing device 115 may include programming to adjust vehicle 110 operational behaviors (i.e., physical manifestations of vehicle 110 operation), such as speed, acceleration, deceleration, steering, etc., as well as strategic behaviors (i.e., controlling operational behaviors in a manner that is generally intended to achieve efficient traversal of a route).
The term controller as used herein includes computing devices that are typically programmed to monitor and/or control specific vehicle subsystems. Examples include a powertrain controller 112, a brake controller 113, and a steering controller 114. The controller may be, for example, a known Electronic Control Unit (ECU), possibly including additional programming as described herein. The controller is communicatively connected to the computing device 115 and receives instructions from the computing device to actuate the subsystems according to the instructions. For example, brake controller 113 may receive instructions from computing device 115 to operate brakes of vehicle 110.
The one or more controllers 112, 113, 114 for the vehicle 110 may include known Electronic Control Units (ECUs) or the like, including, as non-limiting examples, one or more powertrain controllers 112, one or more brake controllers 113, and one or more steering controllers 114. Each of the controllers 112, 113, 114 may include a respective processor and memory and one or more actuators. The controllers 112, 113, 114 may be programmed and connected to a vehicle 110 communication bus, such as a Controller Area Network (CAN) bus or a Local Interconnect Network (LIN) bus, to receive instructions from the computing device 115 and to control actuators based on the instructions.
The sensors 116 may include a variety of devices known to provide data via a vehicle communication bus. For example, a radar fixed to a front bumper (not shown) of the vehicle 110 may provide a distance from the vehicle 110 to a next vehicle in front of the vehicle 110, or a Global Positioning System (GPS) sensor provided in the vehicle 110 may provide geographic coordinates of the vehicle 110. For example, distances provided by radar and/or other sensors 116 and/or geographic coordinates provided by GPS sensors may be used by computing device 115 to autonomously or semi-autonomously operate vehicle 110.
The vehicle 110 is typically a ground-based vehicle 110 (e.g., passenger car, pickup truck, etc.) capable of autonomous and/or semi-autonomous operation and having three or more wheels. The vehicle 110 includes one or more sensors 116, a V-to-I interface 111, a computing device 115, and one or more controllers 112, 113, 114. The sensors 116 may collect data related to the vehicle 110 and the operating environment of the vehicle 110. By way of example and not limitation, the sensor 116 may include, for example, altimeters, cameras, laser radars (LIDARs), radars, ultrasonic sensors, infrared sensors, pressure sensors, accelerometers, gyroscopes, temperature sensors, pressure sensors, hall sensors, optical sensors, voltage sensors, current sensors, mechanical sensors (such as switches), and the like. The sensor 116 may be used to sense an operating environment of the vehicle 110, for example, the sensor 116 may detect phenomena such as weather conditions (rainfall, ambient temperature, etc.), road grade, road location (e.g., using road edges, lane markings, etc.), or location of a target object (such as adjacent the vehicle 110). The sensors 116 may also be used to collect data, including dynamic vehicle 110 data related to the operation of the vehicle 110, such as speed, yaw rate, steering angle, engine speed, brake pressure, oil pressure, power level applied to the controllers 112, 113, 114 in the vehicle 110, connectivity between components, and accurate and timely performance of the components of the vehicle 110.
The vehicle may be equipped to operate in both an autonomous mode and an occupant driving mode. Semi-autonomous mode or fully autonomous mode means an operating mode in which the vehicle may be driven partially or fully by a computing device that is part of a system having sensors and a controller. The vehicle may be occupied or unoccupied, but in either case, the vehicle may be driven partially or fully without occupant assistance. For purposes of this disclosure, autonomous mode is defined as a mode in which each of vehicle propulsion (e.g., via a powertrain including an internal combustion engine and/or an electric motor), braking, and steering is controlled by one or more vehicle computers; in semi-autonomous mode, the vehicle computer controls one or more of vehicle propulsion, braking, and steering. In the non-autonomous mode, none of these are controlled by the computer.
Fig. 2 is an illustration of an image of a traffic scene 200. The image of the traffic scene 200 may be acquired by the sensor 122 included in the traffic infrastructure system 105 or the sensor 116 included in the vehicle 110. The image of the traffic scene 200 includes vehicles 214 on a road 236. Also included in the traffic scene 200 are objects 212 that can be identified and located in order to determine the likely need for a new path.
Image classification includes predicting a class of objects in an image. In contrast, object detection for perception purposes (e.g., for vehicle operation) also includes object localization, which refers to identifying the location of one or more objects in an image and drawing a bounding box around their extent.
As discussed above, in temperature scaling, a single scalar value (i.e., the temperature T) is used to scale the non-probabilistic output of the classification network before the Softmax layer, i.e., the logit vector z_i ∈ R^C corresponding to the input image I_i, where C is the number of categories. In equation 1, q_i is the calibrated model score for the predicted class of image I_i. (Logit herein has its standard mathematical definition as the inverse of the standard logistic function.) T is obtained by optimizing for the Negative Log Likelihood (NLL) on the calibration dataset. For input random variable X and class label Y ∈ {1, …, C} with ground truth joint distribution π(X, Y), given the probability model π̂ and n samples, the negative log likelihood is defined by equation 2:

NLL = −Σ_{i=1}^{n} log π̂(y_i | x_i)    (equation 2)
Advantageously, calibrating the performance of the DNN model in combination with the temperature T extracted for a new (incoming) dataset can enable detection of miscalibration. The detection may be used to calibrate the object detection DNN model to the evolving dataset and to use the new incoming samples for continuous or continued learning. This is particularly useful in the autonomous vehicle domain, where deployed object detection DNNs are typically exposed to new geographic locations and can operate at different times of day, week, and year (where lighting, shadows, foliage, traffic, etc. may vary). White-box temperature scaling (WB-TS) uses the Expected Calibration Error (ECE) as a scalar summary measure of calibration. ECE measures miscalibration by quantifying the gap between accuracy and confidence, as shown in equation 3:

ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|    (equation 3)

In equation 3, n is the number of samples, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
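For illustration, a minimal NumPy sketch of the binned ECE of equation 3, assuming `confs` holds predicted scores and `correct` holds 0/1 indicators of whether each prediction matches the ground truth (names are illustrative):

```python
import numpy as np

def expected_calibration_error(confs, correct, M=15):
    # Equation 3: ECE = sum_m (|B_m| / n) * |acc(B_m) - conf(B_m)|.
    confs = np.asarray(confs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confs)
    ece = 0.0
    for m in range(1, M + 1):
        lo, hi = (m - 1) / M, m / M
        in_bin = (confs > lo) & (confs <= hi)  # bin B_m over I_m = ((m-1)/M, m/M]
        if in_bin.any():
            acc = correct[in_bin].mean()       # accuracy acc(B_m)
            conf = confs[in_bin].mean()        # average confidence conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```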
In one example, a DNN in the form of a Single Shot multibox Detector (SSD) MobilenetV2 model (see Liu, Wei, et al., "SSD: Single shot multibox detector," European Conference on Computer Vision, Springer, Cham, 2016) was trained on VOC 2007 (V07) (Everingham, Mark, et al., "The Pascal Visual Object Classes (VOC) challenge," International Journal of Computer Vision 88.2 (2010): 303-338) and VOC 2012 (V12) (http://host.robots.ox.ac.uk/pascal/VOC/VOC2012/). WB-TS calibration was performed and miscalibration error was tested in various tests, as Table 1 shows.
Table 1. WB-TS calibration performance for the model trained on V07 and V12, calibrated using six different calibration (cal.) sets; IoU-ECE is shown for both the calibration set and the held-out test set.
To study the effect of different calibration sets on the temperature T, the trained SSD model was calibrated on six different sets S_1 to S_6. S_1, S_2, and S_3 are subsets of the first 40%, the last 40%, and a randomly sampled 40% of the V07 validation set, respectively. S_4 is the same as the V07 validation set. S_5 and S_6 are 70% subsets of the V12 and MSCOCO 2017 (C17) (Lin, Tsung-Yi, et al., "Microsoft COCO: Common objects in context," European Conference on Computer Vision, Springer, Cham, 2014) validation sets, respectively (the portion remaining after sampling a 30% subset of each set for testing). It should be noted that, consistent with Guo et al., T is always > 1, which indicates that the model is overconfident at the outset. For S_1 to S_5, the T and IoU-ECE of the calibration set are similar before and after calibration. This is because S_1 to S_5 are derived from the same data distribution as the training set (i.e., the V07 and V12 validation sets). Changing the calibration set to S_6, derived from a completely different dataset (i.e., C17), resulted in a significant increase in T (3.076) and IoU-ECE (56.71%). Such a significant increase in both the T value and the pre-calibration IoU-ECE indicates that the combination of the two can serve as a reliable predictor of data drift.
However, this is only the case in the absence of label shift. A main challenge that arises when using the ECE metric is the effect of class imbalance on the output metric. For example, fig. 3 shows a box plot of ECE values obtained with different percentages of Background (BG) class samples in the V07 test set for SSD models trained on the V07 and V12 datasets. It can be seen that when the test set includes 50% BG class samples, the ECE value drops to almost half of the value observed in the absence of BG class samples, thus underestimating the miscalibration caused by the non-BG classes. Therefore, in order to correctly detect data drift in an evolving dataset, there should be no label shift caused by class imbalance.
Once such dataset shift is detected in the evolving dataset, the new T value may not only ensure a robust calibration for the new setting, but the incoming dataset samples may also be used to further adapt the deployed object detection model to the new geography/scene via advanced continuous learning methods. This is illustrated in fig. 4, where an autonomous vehicle 110 (forming part of a distributed vehicle network) moves from location 1 to location 2. The new incoming data (the evolving dataset) may be uploaded to the network 130, where it may be obtained by the server computer 120 and processed, for example, according to the data drift detection process flow 300 as described with respect to the other figures.
Referring to fig. 5, a flow chart of the data drift detection process flow 300 is shown. At a first block 310, the data drift detection process flow 300 receives new incoming data, such as from the vehicle 110 at location 2.
Next, at block 315, pre-calibration IoU-ECE values are measured.
At block 320, a determination is made as to whether the IoU-ECE value is above a preset first threshold. The preset first threshold may be in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation/calibration dataset. For example, if the IoU-ECE value calculated for the held-out validation/calibration dataset is 10, the preset first threshold may be set to 3 times that value, i.e., 30.
If it is determined at block 320 that the IoU-ECE value is not above the preset first threshold ("NO"), then no data drift is detected at block 340. If it is determined at block 320 that IoU-ECE values are above the preset first threshold ("Yes"), a WB-TS calibration is performed on the incoming dataset to extract the temperature value T at block 325.
The value T extracted in block 325 is then compared against a preset second threshold at block 330. The preset second threshold may be in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset. For example, if the temperature T extracted from the held-out validation/calibration dataset is 1.2, the preset second threshold may be set to 3 times this value, i.e., 3.6.
If the value of T is not above the preset second threshold ("NO"), no data drift is detected at block 340. If the value of T is above the preset second threshold ("Yes") at block 330, a data drift is detected at block 335. In this case, the user may be notified of data drift, and further actions may be taken, such as using the extracted T to calibrate future incoming data and/or performing continuous learning, both of which enable robustness to such data drift. This robustness allows for reliable predictions and corresponding uncertainty estimates through the DNN's output confidence score. This is useful for autonomous driving, where the vehicle is constantly exposed to new environments/settings based on geographic and time changes.
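For illustration, a compact sketch of the decision logic of blocks 315 to 340, under the assumption that the IoU-ECE measurement and WB-TS temperature extraction are available as values or callables; the 3x multipliers are one choice within the 2 to 4 times range described above, and all names and numbers are illustrative:

```python
def detect_data_drift(iou_ece_incoming, baseline_iou_ece, baseline_T,
                      extract_temperature, k1=3.0, k2=3.0):
    """Flag data drift when both the pre-calibration IoU-ECE (block 320)
    and the extracted WB-TS temperature T (block 330) exceed preset
    multiples of the held-out validation/calibration values."""
    if iou_ece_incoming <= k1 * baseline_iou_ece:  # block 320
        return False, None                         # block 340: no drift
    T = extract_temperature()                      # block 325: WB-TS calibration
    if T <= k2 * baseline_T:                       # block 330
        return False, T                            # block 340: no drift
    return True, T                                 # block 335: drift detected

# Illustrative numbers matching the examples above: baseline IoU-ECE 10
# (threshold 30) and baseline T 1.2 (threshold 3.6).
drift, T = detect_data_drift(35.0, 10.0, 1.2, extract_temperature=lambda: 4.0)
print(drift, T)  # True 4.0 -> drift detected; T can then calibrate incoming data
```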
When combined with IoU-ECE, WB-TS can enable detection of data drift in a dataset. In addition to detecting such drift, WB-TS can also correct the miscalibration in the evolving dataset with the retrieved temperature T. For the SSD MobilenetV2 model described above, incorrect detections with high confidence were observed prior to calibration when testing on OOD samples. After calibration, the output scores were observed to be more reliable, with incorrect detections removed.
Referring to fig. 6, an example of a flow chart of a WB-TS process 500 is shown.
At a first block 515, an incoming (new) dataset is retrieved. The incoming dataset includes scores associated with all object categories in the image, including Background (BG) categories.
Next, at block 520, the method parses the ground truth in the incoming dataset using an IoU threshold set the same as the intersection-over-union (IoU) threshold used to train the object detection DNN. Here, background ground truth boxes in the incoming dataset are determined by comparing the ground truth boxes with the detection boxes generated by the object detection DNN using the IoU threshold.
To correct possible class imbalance in cases where there are many more BG-class detections than non-BG-class detections, at block 525 the number of BG-class boxes per image is limited to be approximately the same as the number of non-BG-class boxes. For example, the average number of pre-NMS detection boxes in the non-BG classes may be determined to be k, and the top k pre-NMS detection boxes in the BG class may be selected using the corresponding model scores.
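A minimal sketch of this balancing step, assuming `bg_scores` holds the per-box BG-class model scores for one image and `non_bg_counts` holds the pre-NMS box counts of the non-BG classes (names are illustrative):

```python
import numpy as np

def balance_background_boxes(bg_scores, non_bg_counts):
    # Block 525: keep only the top-k BG boxes, where k is the average
    # number of pre-NMS detection boxes across the non-BG classes.
    k = int(round(np.mean(non_bg_counts)))
    top_k_idx = np.argsort(bg_scores)[::-1][:k]  # k highest-scoring BG boxes
    return top_k_idx
```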
Next, at block 530, a scalar value for temperature T may be determined by optimizing for NLL loss. Since the temperature T is determined on the dataset that includes BG categories and prior to any non-maximum suppression, this is a White Box (WB) calibration.
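As a sketch of block 530, the temperature can be fitted by minimizing the NLL over the class-balanced pre-NMS logits, here with a PyTorch optimizer (the choice of optimizer, step count, and learning rate are assumptions, not prescribed by the patent):

```python
import torch

def fit_temperature(logits, labels, steps=200, lr=0.01):
    # Block 530: find scalar T > 0 minimizing the NLL of softmax(logits / T).
    log_T = torch.zeros(1, requires_grad=True)  # optimize log T so that T > 0
    optimizer = torch.optim.Adam([log_T], lr=lr)
    nll = torch.nn.CrossEntropyLoss()           # NLL of the scaled softmax
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nll(logits / log_T.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_T.exp().item()
```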
Referring to fig. 7, an example of a flow chart of a corrective action procedure 600 is shown. As described above with respect to fig. 5, if the value of T is above a preset second threshold ("yes") at block 330, a data drift is detected at block 335. When a data drift is detected, one corrective action is to calibrate future incoming data using the determined temperature T.
At a first block 635, the logit vectors of the pre-NMS detections are scaled using the determined temperature T.
Next, at block 640, the logit vector values are normalized to values between 0 and 1, such as with a Sigmoid or Softmax layer.
At block 645, the method performs non-maximum suppression on the calibrated score and bounding box predictions to obtain a final prediction from the object detection DNN.
At block 650, the calibrated final prediction from the object detection DNN may be used to activate a component, such as a steering or braking component of a vehicle. For example, the computing device 115 in the vehicle may perform programming to actuate vehicle components based on predictions and/or other data, e.g., to avoid objects, to keep the vehicle on a path, etc.
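Putting blocks 635 to 645 together, a hedged PyTorch sketch of the calibrated inference path (the per-class NMS call uses torchvision's standard routine; shapes, thresholds, and names are illustrative assumptions, and filtering of BG-class boxes is omitted for brevity):

```python
import torch
from torchvision.ops import batched_nms

def calibrated_final_detections(logits, boxes, T, score_thresh=0.5, iou_thresh=0.5):
    # Blocks 635-640: scale the pre-NMS logits by T and normalize with Softmax.
    probs = torch.softmax(logits / T, dim=-1)
    scores, classes = probs.max(dim=-1)          # calibrated confidence per box
    keep = scores >= score_thresh
    boxes, scores, classes = boxes[keep], scores[keep], classes[keep]
    # Block 645: per-class non-maximum suppression on the calibrated scores.
    kept = batched_nms(boxes, scores, classes, iou_thresh)
    return boxes[kept], scores[kept], classes[kept]
```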
As used herein, the adverb "substantially" means that the shape, structure, measurement, quantity, time, etc. may deviate from the precisely described geometry, distance, measurement, quantity, time, etc. due to imperfections in materials, machining, manufacturing, data transmission, computational speed, etc.
In general, the described computing systems and/or devices may employ any of a number of computer operating systems, including, but by no means limited to, versions and/or varieties of the Ford SYNC® application, AppLink/Smart Device Link middleware, the Microsoft Automotive® operating system, the Microsoft Windows® operating system, the Unix operating system (e.g., the Solaris® operating system distributed by Oracle Corporation of Redwood Shores, California), the AIX UNIX operating system distributed by International Business Machines of Armonk, New York, the Linux operating system, the Mac OSX and iOS operating systems distributed by Apple Inc. of Cupertino, California, the BlackBerry OS distributed by BlackBerry, Ltd. of Waterloo, Canada, and the Android operating system developed by Google, Inc. and the Open Handset Alliance, or the QNX® CAR Platform for Infotainment offered by QNX Software Systems. Examples of computing devices include, without limitation, an on-board vehicle computer, a computer workstation, a server, a desktop, notebook, laptop, or handheld computer, or some other computing system and/or device.
Computers and computing devices generally include computer-executable instructions that may be executable by one or more computing devices such as those listed above. Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation and either alone or in combination, Java™, C, C++, Matlab, Simulink, Stateflow, Visual Basic, JavaScript, Perl, HTML, etc. Some of these applications may be compiled and executed on a virtual machine, such as the Java Virtual Machine, the Dalvik virtual machine, or the like. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer-readable media. A file in a computing device is generally a collection of data stored on a computer-readable medium, such as a storage medium, a random access memory, etc.
The memory may include computer-readable media (also referred to as processor-readable media) including any non-transitory (e.g., tangible) media that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer). Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical or magnetic disks, and other persistent memory. Volatile media may include, for example, dynamic Random Access Memory (DRAM), which typically constitutes a main memory. Such instructions may be transmitted by one or more transmission media, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor of the ECU. Common forms of computer-readable media include, for example, RAM, PROM, EPROM, FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Databases, data repositories, or other data stores described herein may include various kinds of mechanisms for storing, accessing, and retrieving various kinds of data, including a hierarchical database, a set of files in a file system, an application database in a proprietary format, a relational database management system (RDBMS), etc. Each such data store is generally included within a computing device employing a computer operating system such as one of those mentioned above, and is accessed via a network in any one or more of a variety of manners. A file system may be accessible from a computer operating system, and may include files stored in various formats. An RDBMS generally employs the Structured Query Language (SQL) in addition to a language for creating, storing, editing, and executing stored procedures, such as the PL/SQL language.
In some examples, system elements may be implemented as computer-readable instructions (e.g., software) on one or more computing devices (e.g., servers, personal computers, etc.), stored on a computer-readable medium (e.g., disk, memory, etc.) associated therewith. The computer program product may include such instructions stored on a computer-readable medium for performing the functions described herein.
With respect to the media, processes, systems, methods, heuristics, etc. described herein, it should be understood that, while the steps of such processes, etc. have been described as occurring in a certain ordered sequence, such processes may be practiced by executing the steps in an order different than that described herein. It should also be understood that certain steps may be performed concurrently, other steps may be added, or certain steps described herein may be omitted. In other words, the description of the processes herein is provided for the purpose of illustrating certain embodiments and should not be construed as limiting the claims in any way.
Accordingly, it is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is contemplated and anticipated that future developments will occur in the arts discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In summary, it is to be understood that the invention is capable of modification and variation and is limited only by the following claims.
Unless explicitly indicated to the contrary herein, all terms used in the claims are intended to be given their ordinary and customary meaning as understood by those skilled in the art. In particular, the use of singular articles such as "a," "an," "the," and the like are to be construed to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
According to an embodiment, there is provided a system having a computer including a processor and a memory storing instructions executable by the processor to identify data drift in a trained object detection Deep Neural Network (DNN) by: receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category; measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold; performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and upon determining that the temperature T exceeds a preset second threshold, identifying that the data drift has occurred.
According to an embodiment, the invention also features instructions for calibrating incoming data using the extracted temperature T when data drift is identified.
According to an embodiment, incoming data is calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T before a Sigmoid/Softmax layer.
According to an embodiment, the invention is further characterized by instructions for performing additional learning on the object detection DNN when the data drift is identified.
According to an embodiment, the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of samples conditioned on the IoU threshold, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
According to an embodiment, the specific IoU threshold is set the same as the IoU threshold used to train the object detection DNN.
According to an embodiment, the instructions for performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T comprise instructions for: retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category; determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold; correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and determining a single scalar parameter for the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
According to an embodiment, the preset first threshold is in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation dataset, and the preset second threshold is in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset.
According to an embodiment, the invention also features instructions for: performing non-maximum suppression on the calibrated confidence score using the corresponding bounding box prediction after the Sigmoid/Softmax layer to obtain a final detection; and actuating a vehicle component based on the object detection determination of the object detection DNN.
According to an embodiment, the instructions for correcting the class imbalance include instructions for: determining an average number of pre-NMS detection boxes in the non-BG classes as k; and extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.
According to the invention, a method of identifying data drift in a trained object detection Deep Neural Network (DNN) includes: receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category; measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold; performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and upon determining that the temperature T exceeds a preset second threshold, identifying that the data drift has occurred.
In one aspect of the invention, the method includes calibrating the incoming data using the extracted temperature T when the data drift is identified.
In one aspect of the invention, incoming data is calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer.
In one aspect of the invention, the method includes performing additional learning on the object detection DNN when the data drift is identified.
In one aspect of the invention, the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of samples conditioned on the IoU threshold, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
In one aspect of the invention, the specific IoU threshold is set the same as the IoU threshold used to train the object detection DNN.
In one aspect of the invention, performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T comprises: retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category; determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold; correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and determining a single scalar parameter for the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
In one aspect of the invention, the preset first threshold is in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation dataset, and the preset second threshold is in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset.
In one aspect of the invention, the method comprises: performing non-maximum suppression on the calibrated confidence score using the corresponding bounding box prediction after the Sigmoid/Softmax layer to obtain a final detection; and actuating a vehicle component based on the object detection determination of the object detection DNN.
In one aspect of the invention, correcting the class imbalance includes: determining an average number of pre-NMS detection boxes in the non-BG classes as k; and extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.

Claims (15)

1. A method of identifying data drift in a trained object detection Deep Neural Network (DNN) by:
receiving a dataset based on real-world usage, wherein the dataset includes a score associated with each category in an image, the categories including a Background (BG) category;
measuring an IoU-conditioned ECE (IoU-ECE) by calculating, in a white-box setting, an Expected Calibration Error (ECE) with detections from the dataset prior to non-maximum suppression (pre-NMS detections) conditioned on a specific intersection-over-union (IoU) threshold;
performing a white-box temperature scaling (WB-TS) calibration on the pre-NMS detection of the dataset to extract a temperature T upon determining that the IoU-ECE is greater than a preset first threshold; and
upon determining that the temperature T exceeds a preset second threshold, identifying that the data drift has occurred.
2. The method of claim 1, further comprising calibrating incoming data using the extracted temperature T when the data drift is identified.
3. The method of claim 2, wherein incoming data is calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer.
4. The method of claim 1, further comprising performing additional learning on the object detection DNN when the data drift is identified.
5. The method of claim 1, wherein the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of samples conditioned on the IoU threshold, M is the number of interval bins (M = 15), and B_m is the set of indices of samples whose prediction scores fall in the interval I_m = ((m−1)/M, m/M].
6. The method of claim 1, wherein the particular IoU threshold is set the same as an IoU threshold used to train the object detection DNN.
7. The method of claim 1 wherein performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T comprises:
retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category;
determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold;
Correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and
determining a single scalar parameter of the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
8. The method of claim 1, wherein the preset first threshold is in the range of 2 to 4 times the IoU-ECE value calculated from a held-out validation dataset and the preset second threshold is in the range of 2 to 4 times the temperature T extracted from the held-out validation dataset.
9. A method as in claim 3, further comprising:
Performing non-maximum suppression on the calibrated confidence scores with corresponding bounding box predictions after the Sigmoid/Softmax layer to obtain a final detection; and
Actuating a vehicle component based on the object detection determination of the object detection DNN.
10. The method of claim 7, wherein correcting the class imbalance comprises:
determining an average number of pre-NMS detection boxes in the non-BG classes as k; and
extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.
11. The method of claim 5 wherein performing the WB-TS calibration on the pre-NMS detection of the dataset to extract the temperature T comprises:
retrieving the dataset, wherein the dataset comprises a score associated with each object category in the image, the object categories comprising a Background (BG) category;
determining a background ground truth box in the dataset by comparing the ground truth box with a detection box generated by the object detection DNN using an intersection-over-union (IoU) threshold;
Correcting a category imbalance between a ground truth box in a ground truth category and a background ground truth box by updating the ground truth category to include a plurality of background ground truth boxes based on a number of ground truth boxes in the ground truth category; and
determining a single scalar parameter of the temperature T for all classes by optimizing for the Negative Log Likelihood (NLL) loss.
12. The method of claim 11, further comprising using the extracted temperature T to calibrate incoming data when the data drift is identified,
wherein incoming data is calibrated by uniformly scaling a logit vector associated with the pre-NMS detection of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer.
13. The method of claim 12, further comprising:
Performing non-maximum suppression on the calibrated confidence scores with corresponding bounding box predictions after the Sigmoid/Softmax layer to obtain a final detection; and
Actuating a vehicle component based on the object detection determination of the object detection DNN.
14. The method of claim 11, wherein correcting the class imbalance comprises:
determining an average number of pre-NMS detection boxes in the non-BG classes as k; and
extracting the top k pre-NMS detection boxes in the BG class using the corresponding model scores.
15. A computing device comprising a processor and a memory storing instructions executable by the processor to perform the method of one of claims 1 to 14.
CN202311600130.4A 2022-12-14 2023-11-28 Data drift identification for sensor systems Pending CN118196732A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18/080,799 2022-12-14
US18/080,799 US20240202503A1 (en) 2022-12-14 2022-12-14 Data drift identification for sensor systems

Publications (1)

Publication Number Publication Date
CN118196732A true CN118196732A (en) 2024-06-14

Family

ID=91278979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311600130.4A Pending CN118196732A (en) 2022-12-14 2023-11-28 Data drift identification for sensor systems

Country Status (3)

Country Link
US (1) US20240202503A1 (en)
CN (1) CN118196732A (en)
DE (1) DE102023133295A1 (en)

Also Published As

Publication number Publication date
DE102023133295A1 (en) 2024-06-20
US20240202503A1 (en) 2024-06-20

Similar Documents

Publication Publication Date Title
US10449956B2 (en) Object tracking by unsupervised learning
CN110936959B (en) On-line diagnosis and prediction of vehicle perception system
US11657635B2 (en) Measuring confidence in deep neural networks
CN112184844A (en) Vehicle image generation
US11829131B2 (en) Vehicle neural network enhancement
US11702044B2 (en) Vehicle sensor cleaning and cooling
US12020475B2 (en) Neural network training
US11574463B2 (en) Neural network for localization and object detection
CN112240767A (en) Vehicle location identification
CN115366885A (en) Method for assisting a driving maneuver of a motor vehicle, assistance device and motor vehicle
CN115959135A (en) Enhanced vehicle operation
US11610412B2 (en) Vehicle neural network training
US11745766B2 (en) Unseen environment classification
US20240202503A1 (en) Data drift identification for sensor systems
WO2023039193A1 (en) Search algorithms and safety verification for compliant domain volumes
CN114758313A (en) Real-time neural network retraining
US20210403056A1 (en) Convolution operator selection
CN114581865A (en) Confidence measure in deep neural networks
CN113159271A (en) Time CNN rear impact warning system
Ravishankaran Impact on how AI in automobile industry has affected the type approval process at RDW
CN112519779A (en) Location-based vehicle operation
US11166003B1 (en) Dynamic vibration sensor optics distortion prediction
US12046132B2 (en) Sensor localization
US20220374657A1 (en) Anomaly detection for deep neural networks
CN117115625A (en) Unseen environmental classification

Legal Events

Date Code Title Description
PB01 Publication