US12198440B2 - Vehicle perception by adjusting deep neural network confidence valves based on k-means clustering
- Publication number: US12198440B2 (application US17/872,112)
- Authority
- US
- United States
- Prior art keywords
- confidence scores
- perception
- class
- vehicle
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires 2043-07-25 (per the family table below)
Classifications
All codes fall under G—Physics, G06—Computing (G06F—Electric digital data processing; G06V—Image or video recognition or understanding). Leaf codes and definitions:

- G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06F18/15 — Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
- G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06V10/72 — Data preparation, e.g. statistical preprocessing of image or video features
- G06V10/763 — Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
- G06V10/764 — Recognition or understanding using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/776 — Validation; performance evaluation
- G06V10/7796 — Active pattern-learning based on specific statistical tests
- G06V10/806 — Fusion of extracted features (combining data at the sensor, preprocessing, feature extraction or classification level)
- G06V10/82 — Image or video recognition or understanding using neural networks
Definitions
- the present application generally relates to vehicle advanced driver-assistance systems (ADAS) and autonomous driving and, more particularly, to techniques for improved vehicle perception by adjusting deep neural network (DNN) confidence values based on k-means clustering.
- perception is typically performed by trained DNNs operating on sensor inputs (camera(s), LIDAR, RADAR, maps/GPS, etc.).
- these DNNs can predict object location, class/type, as well as confidence values.
- Confidence values are typically then used in “sensor fusion” to combine information from multiple sources in an effort to produce more accurate results.
- DNNs are trained using training datasets, which are typically limited in size and thus may not cover all possible scenarios. More specifically, trained DNNs in deployment could see something that never appeared in its training dataset(s), and thus the DNNs could report a potentially untrustworthy high confidence value. This potentially untrustworthy high confidence value could result in inaccurate object detection and/or sensor fusion outputs. Accordingly, while such conventional vehicle perception systems do work well for their intended purpose, there exists an opportunity for improvement in the relevant art.
- a perception system for a vehicle comprises a set of vehicle perception sensors configured to provide a set of inputs, wherein the set of vehicle perception sensors comprises at least a camera system configured to capture images of an environment external to the vehicle, and a controller configured to: obtain a training dataset represented by N training histograms, in an image feature space, corresponding to N training images, respectively; K-means cluster the N training histograms to determine K clusters with K respective cluster centers, wherein K and N are integers greater than or equal to one and K is less than or equal to N; compare the N training histograms to their respective K cluster centers to determine maximum in-class distances for each of the K clusters; apply a deep neural network (DNN) to input images of the set of inputs to output detected/classified objects with respective confidence scores; obtain adjusted confidence scores by adjusting the confidence scores output by the DNN based on distance ratios of (i) minimal distances of input histograms representing the input images from the K cluster centers to (ii) the maximum in-class distances; and perform sensor fusion based on the adjusted confidence scores.
- the K-means clustering is a vector quantization technique in which the N training histograms are N vectors that are partitioned into K clusters such that each of the N vectors belongs to a respective cluster of the K clusters having the nearest mean.
- the K-means clustering minimizes within-cluster variances (i.e., squared Euclidean distances), but not regular Euclidean distances.
- adjusting the confidence scores further comprises determining discount probability (DP) values based on the distance ratios, and adjusting the confidence scores based on the DP values.
- adjusting the confidence scores further comprises obtaining a threshold for determining if a sample is in-class or out-of-class, and applying a scaled sigmoid function based on the distance ratios and the threshold to compute the DP values.
- the scaled sigmoid function (S(x)) to calculate the DP values is:
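The equation itself did not survive extraction at this point in the record. As a hedged reconstruction consistent with the behavior described later in this document (DP = S = 1.0 for in-class samples, where x = 0, decaying toward 0.0 as the distance grows), one plausible form is

$$S(x) = \frac{2}{1 + e^{-Kx}}, \qquad x = \min\!\left(0,\; T - \frac{d}{d_{\text{max-in-class}}}\right),$$

where d is the minimal distance of the input histogram to the K cluster centers, T is the in-class threshold, and the scaling factor K is distinct from the cluster count K. This form is an assumption, not the patent's verbatim equation.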
- the sensor fusion includes fusing the detected objects/classifications and confidence scores for images captured by the camera system with detected objects/classifications and confidence scores for information gathered by a remainder of the set of vehicle perception sensors to improve object detection/classification accuracy and/or robustness.
- the remainder of the set of vehicle perception sensors includes at least one of another camera system, a LIDAR system, a RADAR system, and a map system.
- the perception method comprises receiving, by a controller of the vehicle and from a set of vehicle perception sensors of the vehicle, a set of inputs, wherein the set of vehicle perception sensors comprises at least a camera system configured to capture images of an environment external to the vehicle; obtaining, by the controller, a training dataset represented by N training histograms, in an image feature space, corresponding to N training images; K-means clustering, by the controller, the N training histograms to determine K clusters with K respective cluster centers, wherein K and N are integers greater than or equal to one and K is less than or equal to N; comparing, by the controller, the N training histograms to their respective K cluster centers to determine maximum in-class distances for each of the K clusters; applying, by the controller, a DNN to input images of the set of inputs to output detected/classified objects with respective confidence scores; obtaining, by the controller, adjusted confidence scores by adjusting the confidence scores output by the DNN based on distance ratios of (i) minimal distances of input histograms representing the input images from the K cluster centers to (ii) the maximum in-class distances; and performing, by the controller, sensor fusion based on the adjusted confidence scores.
- the K-means clustering is a vector quantization technique in which the N training histograms are N vectors that are partitioned into K clusters such that each of the N vectors belongs to a respective cluster of the K clusters having the nearest mean.
- the K-means clustering minimizes within-cluster variances but not Euclidean distances.
- adjusting the confidence scores further comprises determining DP values based on the distance ratios, and adjusting the confidence scores based on the DP values.
- adjusting the confidence scores further comprises obtaining a threshold for determining if a sample is in-class or out-of-class, and applying a scaled sigmoid function based on the distance ratios and the threshold to compute the DP values.
- the scaled sigmoid function (S(x)) to calculate the DP values is the same scaled sigmoid set forth above.
- the sensor fusion includes fusing the detected objects/classifications and confidence scores for images captured by the camera system with detected objects/classifications and confidence scores for information gathered by a remainder of the set of vehicle perception sensors to improve object detection/classification accuracy and/or robustness.
- the remainder of the set of vehicle perception sensors includes at least one of another camera system, a LIDAR system, a RADAR system, and a map system.
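The claims recite fusing detections and confidence scores across sensors but do not mandate a particular fusion rule. As an illustration only (an assumption, not the claimed method), a confidence-weighted vote over the adjusted scores for one matched object could look like:

```python
def fuse_confidences(detections):
    """detections: list of (class_label, adjusted_confidence) pairs, one per
    sensor, for detections already matched to the same physical object."""
    totals, weight_sum = {}, 0.0
    for label, conf in detections:
        totals[label] = totals.get(label, 0.0) + conf
        weight_sum += conf
    # Normalize so fused per-class scores sum to 1.0 across candidate labels.
    return {label: s / weight_sum for label, s in totals.items()}

# e.g., discounted camera, RADAR, and LIDAR reports for one matched object:
fused = fuse_confidences([("pedestrian", 0.90), ("pedestrian", 0.60), ("pole", 0.20)])
# -> {"pedestrian": ~0.88, "pole": ~0.12}
```

Because the camera's score has already been discounted by its DP value, an out-of-distribution camera detection carries proportionally less weight in the fused result.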
- FIG. 1 is a plot illustrating example K-means clusters, K cluster centers, and in-class and out-of-class samples according to the principles of the present application;
- FIG. 2 is a functional block diagram of a vehicle having an example perception system configured for object detection according to the principles of the present application;
- FIG. 3 is a flow diagram of an example vehicle perception method including object detection/classification according to the principles of the present application.
- FIG. 4 is a plot illustrating an example discount probability (DP) with an example in-class threshold (T) computed using an example scaled sigmoid function according to the principles of the present application.
- each training dataset is K-means clustered in the feature space of images. This includes computing histograms of images and using these histograms to cluster images. After K-means clustering, K cluster centers are obtained. All of the images are compared to the K cluster centers and the maximum in-class distances therefrom are recorded.
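A minimal sketch of this training step, assuming 32-bin grayscale intensity histograms as the image feature space and scikit-learn's KMeans (both illustrative choices not specified by the patent):

```python
import numpy as np
from sklearn.cluster import KMeans

def image_histogram(image, bins=32):
    """Normalized grayscale-intensity histogram used as the feature vector."""
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def fit_clusters(train_images, k):
    """K-means cluster the N training histograms and record, for each of the
    K clusters, the maximum distance of any member to its cluster center."""
    hists = np.stack([image_histogram(img) for img in train_images])  # (N, bins)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(hists)
    d_max_in_class = np.zeros(k)
    for j in range(k):
        members = hists[km.labels_ == j]
        d_max_in_class[j] = np.linalg.norm(
            members - km.cluster_centers_[j], axis=1).max()
    return km.cluster_centers_, d_max_in_class
```

The returned cluster centers and per-cluster maximum in-class distances are all that the deployment-time confidence adjustment needs to retain from training.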
- Referring now to FIG. 1, a plot 100 illustrating example K-means clusters, K cluster centers, and in-class and out-of-class samples according to the principles of the present application is illustrated.
- the x and y axes correspond to dimensions 1 and 2 in the image feature space (e.g., x/y coordinates in two-dimensional, or 2D images). To achieve a desired sample distribution for desired clustering, these axes could be normalized (i.e., varying scales/percentages relative to each other).
- three samples (X-shapes) in the lower left quadrant are identified as a first cluster (indicated by a star-shape) and three samples (circle-shapes) in the upper right quadrant are identified as a second cluster (also indicated by the star-shape).
- In-class and out-of-class samples are also illustrated by the diamond-shape and triangle-shape samples.
- in this example, N = 8 samples and K = 2 clusters, K and N being integers greater than or equal to one with N ≥ K.
- the K-means clustering is a vector quantization technique in which the N training histograms are N vectors that are partitioned into K clusters such that each of the N vectors belongs to a respective cluster of the K clusters having the nearest mean.
- the K-means clustering minimizes within-cluster variances but not Euclidean distances. It will be appreciated that this plot 100 is merely an example for illustrative/descriptive purposes and is in no way intended to limit the K-means clustering techniques herein.
- the vehicle 200 could be any suitable type of vehicle (a conventional engine-powered vehicle, a hybrid electric vehicle, a fully-electrified vehicle, etc.).
- the vehicle 200 generally comprises a powertrain 208 (e.g., an engine, electric motor(s), or some combination thereof, plus a transmission) configured to generate and transfer drive torque to a driveline 212 for vehicle propulsion.
- a controller 216 controls operation of the vehicle 200 , including controlling the powertrain 208 to generate a desired amount of drive torque based on a driver torque request received via a driver interface 220 (e.g., an accelerator pedal).
- the controller 216 is also configured to execute/perform one or more ADAS/autonomous driving features (e.g., up to level 4, or L4 autonomous driving), which generally includes controlling a set of one or more ADAS/autonomous actuator(s) based on information gathered from a plurality of perception sensors 228 .
- the perception system generally comprises the controller 216 , the ADAS/autonomous actuator(s) 224 , and the perception sensors 228 .
- Non-limiting examples of the ADAS/autonomous actuator(s) 224 include an accelerator actuator, a brake actuator, and a steering actuator. In other words, these actuator(s) 224 include actuators for aspects of vehicle control that would typically be handled by a human driver.
- Non-limiting examples of the perception sensors 228 include one or more cameras configured to capture images of an environment external to the vehicle 200 (e.g., a front-facing camera), a light detection and ranging (LIDAR) system, a radio detection and ranging (RADAR) system, and a map system (a high definition (HD) map system, a global navigation satellite system (GNSS) transceiver, etc.). The concept of “sensor fusion” will be discussed in greater detail below.
- Referring now to FIG. 3, a flow diagram of an example perception (e.g., object detection/classification) method 300 for a vehicle according to the principles of the present application is illustrated.
- vehicle 200 and its components will be referenced in describing the method 300 , but it will be appreciated that the method 300 could be applicable to any suitable vehicle.
- the controller 216 obtains a training dataset represented by N training histograms, in the image feature space, corresponding to N training images, respectively.
- the controller 216 K-means clusters the N training histograms to determine K clusters with K respective cluster centers, wherein K and N are integers greater than or equal to one and K is less than or equal to N.
- the controller 216 compares the N training histograms to their respective K cluster centers to determine maximum in-class distances (dmax-in-class) for each of the K clusters.
- FIG. 1 and the previous discussion herein further illustrate this K-means clustering and distance determination process.
- These steps 304 - 312 can be described as the training process for the DNN's confidence scores, whereas the following steps 316 - 328 can be described as the usage or implementation of the trained DNN (e.g., in sensor fusion).
- the controller 216 receives, from the perception sensors 228 , a set of inputs including at least input images captured by a camera system.
- the controller 216 applies a DNN to input images of the set of inputs to output detected/classified objects (histograms for the input images) with respective confidence scores.
- the controller 216 obtains adjusted confidence scores by adjusting the confidence scores output by the DNN.
- this confidence score adjustment process involves the computation and application of discount probability (DP) values, e.g., a potential negative adjustment to a respective confidence score, which is also partially shown in a plot 400 of FIG. 4 and discussed in greater detail below.
- a threshold (T) for determining if a particular sample is in-class or out-of-class is determined ( 324 b ).
- this threshold and the distance ratios are used to compute the DP values ( 324 c ).
- this includes applying a scaled sigmoid function (S(x)) as follows, where x is the input histograms representing the input images and K is a scaling factor:
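A minimal sketch of steps 324 a - 324 c, under the same assumed scaled-sigmoid form reconstructed earlier (the defaults T = 1.0 and scaling factor 5.0 are illustrative assumptions, not values from the patent):

```python
import numpy as np

def discount_probability(hist, centers, d_max_in_class, T=1.0, K_scale=5.0):
    """DP for one input-image histogram: 1.0 in-class, decaying toward 0.0."""
    dists = np.linalg.norm(centers - hist, axis=1)  # distance to each cluster center
    j = int(np.argmin(dists))                       # nearest cluster
    ratio = dists[j] / d_max_in_class[j]            # distance ratio (324a)
    x = min(0.0, T - ratio)                         # threshold test (324b): x = 0 iff in-class
    return 2.0 / (1.0 + np.exp(-K_scale * x))       # assumed scaled sigmoid S(x) (324c)

def adjust_confidence(dnn_confidence, hist, centers, d_max_in_class):
    """Adjusted confidence = raw DNN confidence discounted by the DP value."""
    return dnn_confidence * discount_probability(hist, centers, d_max_in_class)
```

With this form, an input histogram within the maximum in-class distance of its nearest cluster passes through undiscounted (DP = 1.0), while increasingly out-of-class inputs have their confidence scores pushed toward zero.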
- controller refers to any suitable control device or set of multiple control devices that is/are configured to perform at least a portion of the techniques of the present application.
- Non-limiting examples include an application-specific integrated circuit (ASIC), one or more processors and a non-transitory memory having instructions stored thereon that, when executed by the one or more processors, cause the controller to perform a set of operations corresponding to at least a portion of the techniques of the present application.
- the one or more processors could be either a single processor or two or more processors operating in a parallel or distributed architecture.
Description
and where x is the input histograms representing the input images, T is the threshold, K is a scaling factor, and dmax-in-class is the maximum in-class distance.
For in-class samples, d ≤ dmax-in-class, x = 0, and DP = S = 1.0. For out-of-class samples, x < 0 and DP = S < 1.0, and as d increases, DP (or S) will decrease and approach 0.0, as shown in FIG. 4.
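As a worked example under the assumed form above: with T = 1 and scaling factor K = 5, an out-of-class sample at d = 1.5 × dmax-in-class gives x = 1 − 1.5 = −0.5, so DP = 2/(1 + e^{2.5}) ≈ 0.15, and a raw DNN confidence of 0.90 would be discounted to roughly 0.14.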
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/872,112 (US12198440B2) | 2022-07-25 | 2022-07-25 | Vehicle perception by adjusting deep neural network confidence valves based on k-means clustering |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20240029442A1 (en) | 2024-01-25 |
| US12198440B2 (en) | 2025-01-14 |
Family
ID=89576901
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/872,112 (US12198440B2; active; anticipated expiration 2043-07-25) | Vehicle perception by adjusting deep neural network confidence valves based on k-means clustering | 2022-07-25 | 2022-07-25 |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US12198440B2 (en) |
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170075355A1 (en) * | 2015-09-16 | 2017-03-16 | Ford Global Technologies, Llc | Vehicle radar perception and localization |
| US20190108447A1 (en) | 2017-11-30 | 2019-04-11 | Intel Corporation | Multifunction perceptrons in machine learning environments |
| US20190258878A1 (en) | 2018-02-18 | 2019-08-22 | Nvidia Corporation | Object detection and detection confidence suitable for autonomous driving |
| US20200026960A1 (en) | 2018-07-17 | 2020-01-23 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
| US20230152801A1 (en) * | 2018-07-17 | 2023-05-18 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
| US20210352203A1 (en) * | 2019-01-25 | 2021-11-11 | Opple Lighting Co., Ltd. | Detection circuit, device and method for detecting light source flicker, and photoelectric detection device |
| US11532168B2 (en) * | 2019-11-15 | 2022-12-20 | Nvidia Corporation | Multi-view deep neural network for LiDAR perception |
| US20230065931A1 (en) * | 2020-04-06 | 2023-03-02 | Nvidia Corporation | Projecting images captured using fisheye lenses for feature detection in autonomous machine applications |
| US20230068046A1 (en) * | 2021-08-24 | 2023-03-02 | GM Global Technology Operations LLC | Systems and methods for detecting traffic objects |
| US20240071092A1 (en) * | 2022-08-24 | 2024-02-29 | Nec Laboratories America, Inc. | Object detection in driver assistance system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| | AS | Assignment | Owner name: FCA US LLC, MICHIGAN. ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LI, DALONG; PARANJPE, ROHIT S; HORTON, STEPHEN; SIGNING DATES FROM 20220725 TO 20220811; REEL/FRAME: 067149/0744 |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |