JP7224682B1 - Three-dimensional multi-object detection apparatus and method for autonomous driving - Google Patents
Three-dimensional multi-object detection apparatus and method for autonomous driving
- Publication number
- JP7224682B1 (application number JP2021198447A)
- Authority
- JP
- Japan
- Prior art keywords
- dimensional
- bev
- object detection
- point cloud
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/408—Radar; Laser, e.g. lidar
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
Description
110 Data input module
120 BEV image generation module
130 CNN-based learning module
140 Localization module
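The reference numerals above describe a single-stage dataflow: raw LiDAR input (110), BEV pseudo-image generation (120), CNN feature extraction (130), and regression/localization (140). A minimal sketch of that wiring is shown below; the class names, the `Detection` fields, and the callable interfaces are illustrative assumptions of this sketch, not structures defined by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    box3d: np.ndarray  # (7,) center x/y/z, size w/l/h, yaw -- hypothetical layout
    label: int
    score: float

class SingleStageDetector:
    """Minimal dataflow mirroring reference numerals 110-140."""

    def __init__(self, bev_encoder, backbone, head):
        self.bev_encoder = bev_encoder  # 120: point cloud -> BEV pseudo-image
        self.backbone = backbone        # 130: CNN-based feature extractor
        self.head = head                # 140: regression + localization head

    def __call__(self, raw_points):
        # 110: raw point cloud enters the pipeline unprocessed
        bev = self.bev_encoder(raw_points)
        features = self.backbone(bev)
        return self.head(features)      # -> list[Detection]
```

Each stage is injected as a plain callable so the skeleton stays agnostic to the actual encoder and network implementations.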
Claims (6)
- 1. A three-dimensional multi-object detection apparatus for detecting three-dimensional multiple objects using a LiDAR sensor, the apparatus comprising:
a data input module for receiving raw point cloud data from the LiDAR sensor;
a BEV image generation module for generating a BEV (Bird's Eye View) image from the raw point cloud data;
a learning module that performs deep-learning-based training to extract segmented feature images from the BEV image; and
a localization module that performs regression and localization operations to find 3D candidate boxes, and their corresponding classes, for detecting three-dimensional objects from the segmented feature images,
wherein the BEV image generation module generates the BEV image as a two-dimensional feature map containing, for each of a plurality of identically shaped three-dimensional cells into which the raw point cloud data is partitioned, four feature data encoding the height of the highest point in the cell, the density of points in the cell, the intensity corresponding to the reflectance of the highest point in the cell, and the distance from the origin to the farthest point in the cell.
- 2. The three-dimensional multi-object detection apparatus according to claim 1, wherein the BEV image generation module generates the BEV image by projecting the 3D raw point cloud data onto a 2D pseudo-image and discretizing it.
- 3. The three-dimensional multi-object detection apparatus according to claim 1, wherein the learning module performs CNN (Convolutional Neural Network)-based training.
- 4. A three-dimensional multi-object detection method, in a three-dimensional multi-object detection apparatus for detecting three-dimensional multiple objects using a LiDAR sensor, the method comprising:
a data input step of receiving raw point cloud data from the LiDAR sensor;
a BEV image generation step of generating a BEV (Bird's Eye View) image from the raw point cloud data;
a learning step of performing deep-learning-based training to extract segmented feature images from the BEV image; and
a localization step of performing regression and localization operations to find 3D candidate boxes, and their corresponding classes, for detecting three-dimensional objects from the segmented feature images,
wherein the BEV image generation step generates the BEV image as a two-dimensional feature map containing, for each of a plurality of identically shaped three-dimensional cells into which the raw point cloud data is partitioned, four feature data encoding the height of the highest point in the cell, the density of points in the cell, the intensity corresponding to the reflectance of the highest point in the cell, and the distance from the origin to the farthest point in the cell.
- 5. The three-dimensional multi-object detection method according to claim 4, wherein in the BEV image generation step, the BEV image is generated by projecting the 3D raw point cloud data onto a 2D pseudo-image and discretizing it.
- 6. The three-dimensional multi-object detection method according to claim 4, wherein in the learning step, CNN (Convolutional Neural Network)-based training is performed.
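The four-channel BEV encoding of claim 1 can be sketched as follows, treating each identically shaped 3D cell as one vertical column of a 2D grid. This is a minimal illustration, not the patented implementation: the grid extents, the 0.4 m cell size, and the function name `encode_bev` are assumptions of this sketch, and the density channel is left as a raw point count rather than normalized.

```python
import numpy as np

def encode_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), cell=0.4):
    """Encode a raw point cloud (N x 4: x, y, z, intensity) into a BEV map
    with the four per-cell features of claim 1: max point height, point
    density, intensity of the highest point, and farthest-point distance.
    Grid extents and cell size are illustrative, not from the patent."""
    W = round((x_range[1] - x_range[0]) / cell)  # cells along x
    H = round((y_range[1] - y_range[0]) / cell)  # cells along y
    bev = np.zeros((H, W, 4), dtype=np.float32)

    # Discretize x/y into cell indices and drop points outside the grid.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
    ix, iy, pts = ix[keep], iy[keep], points[keep]
    rng = np.linalg.norm(pts[:, :3], axis=1)  # distance from the sensor origin

    for i, j, p, r in zip(ix, iy, pts, rng):
        if bev[j, i, 1] == 0 or p[2] > bev[j, i, 0]:
            bev[j, i, 0] = p[2]   # channel 0: height of the highest point
            bev[j, i, 2] = p[3]   # channel 2: its reflectance intensity
        bev[j, i, 1] += 1.0       # channel 1: point count (density)
        bev[j, i, 3] = max(bev[j, i, 3], r)  # channel 3: farthest point
    return bev
```

The result is the 2D pseudo-image of claims 2 and 5, ready to feed a CNN backbone; a production encoder would vectorize the per-point loop.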
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020210108154A KR102681992B1 (ko) | 2021-08-17 | 2021-08-17 | Single-layer three-dimensional multi-object detection apparatus and method for autonomous driving |
KR10-2021-0108154 | 2021-08-17 |
Publications (2)
Publication Number | Publication Date |
---|---|
JP7224682B1 true JP7224682B1 (ja) | 2023-02-20 |
JP2023027736A JP2023027736A (ja) | 2023-03-02 |
Family
ID=78918653
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2021198447A Active JP7224682B1 (ja) | Three-dimensional multi-object detection apparatus and method for autonomous driving | 2021-08-17 | 2021-12-07 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230071437A1 (ja) |
EP (1) | EP4138044A1 (ja) |
JP (1) | JP7224682B1 (ja) |
KR (1) | KR102681992B1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116740669B (zh) * | 2023-08-16 | 2023-11-14 | 之江实验室 | Multi-view image detection method, apparatus, computer device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017166971A (ja) | 2016-03-16 | 2017-09-21 | 株式会社デンソー | Object detection device and object detection program |
JP2019532433A (ja) | 2016-10-11 | 2019-11-07 | カールタ インコーポレイテッド | Laser scanner with real-time online ego-motion estimation |
JP2020042009A (ja) | 2018-09-07 | 2020-03-19 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | Method, apparatus, and storage medium for filtering non-ground points from a point cloud |
WO2020253121A1 (zh) | 2019-06-17 | 2020-12-24 | 商汤集团有限公司 | Target detection method and apparatus, and intelligent driving method, device, and storage medium |
JP2021082296A (ja) | 2019-11-22 | 2021-05-27 | Samsung Electronics Co., Ltd. | Method and system for classifying three-dimensional objects |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101655606B1 (ko) | 2014-12-11 | 2016-09-07 | 현대자동차주식회사 | Multi-object tracking apparatus using LiDAR and method therefor |
US11592566B2 (en) * | 2019-08-15 | 2023-02-28 | Volvo Car Corporation | Vehicle systems and methods utilizing LIDAR data for road condition estimation |
US12116015B2 (en) * | 2020-11-17 | 2024-10-15 | Aurora Operations, Inc. | Automatic annotation of object trajectories in multiple dimensions |
-
2021
- 2021-08-17 KR KR1020210108154A patent/KR102681992B1/ko active IP Right Grant
- 2021-12-07 JP JP2021198447A patent/JP7224682B1/ja active Active
- 2021-12-08 US US17/545,237 patent/US20230071437A1/en active Pending
- 2021-12-10 EP EP21213697.2A patent/EP4138044A1/en active Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023117203A (ja) * | 2022-02-10 | 2023-08-23 | 本田技研工業株式会社 | Mobile object control device, mobile object control method, learning device, learning method, and program |
JP7450654B2 (ja) | 2022-02-10 | 2024-03-15 | 本田技研工業株式会社 | Mobile object control device, mobile object control method, learning device, learning method, and program |
CN115965824A (zh) * | 2023-03-01 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | Point cloud data annotation method, point cloud object detection method, device, and storage medium |
CN115965824B (zh) | 2023-03-01 | 2023-06-06 | 安徽蔚来智驾科技有限公司 | Point cloud data annotation method, point cloud object detection method, device, and storage medium |
CN116385452A (zh) * | 2023-03-20 | 2023-07-04 | 广东科学技术职业学院 | LiDAR point cloud panoptic segmentation method based on polar-coordinate BEV maps |
CN116664825A (zh) * | 2023-06-26 | 2023-08-29 | 北京智源人工智能研究院 | Self-supervised contrastive learning method and system for large-scene point cloud object detection |
Also Published As
Publication number | Publication date |
---|---|
US20230071437A1 (en) | 2023-03-09 |
EP4138044A1 (en) | 2023-02-22 |
KR102681992B1 (ko) | 2024-07-04 |
KR20230026130A (ko) | 2023-02-24 |
JP2023027736A (ja) | 2023-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7224682B1 (ja) | Three-dimensional multi-object detection apparatus and method for autonomous driving | |
US11475573B2 (en) | Sensor data segmentation | |
Rahman et al. | Notice of violation of IEEE publication principles: Recent advances in 3D object detection in the era of deep neural networks: A survey | |
Jebamikyous et al. | Autonomous vehicles perception (avp) using deep learning: Modeling, assessment, and challenges | |
Zhou et al. | T-LOAM: Truncated least squares LiDAR-only odometry and mapping in real time | |
Zheng et al. | Rcfusion: Fusing 4-d radar and camera with bird’s-eye view features for 3-d object detection | |
Erbs et al. | Moving vehicle detection by optimal segmentation of the dynamic stixel world | |
Benedek et al. | Positioning and perception in LIDAR point clouds | |
CN114325634A (zh) | Highly robust LiDAR-based traversable area extraction method for field environments | |
CN114782785A (zh) | Multi-sensor information fusion method and apparatus | |
Alaba et al. | A comprehensive survey of deep learning multisensor fusion-based 3d object detection for autonomous driving: Methods, challenges, open issues, and future directions | |
Valente et al. | Fusing laser scanner and stereo camera in evidential grid maps | |
Yang et al. | MonoGAE: Roadside monocular 3D object detection with ground-aware embeddings | |
CN118411507A (zh) | Semantic map construction method and system for scenes containing dynamic targets | |
CN114118247A (zh) | Anchor-free 3D object detection method based on multi-sensor fusion | |
Stäcker et al. | RC-BEVFusion: A plug-in module for radar-camera bird’s eye view feature fusion | |
Wang et al. | A Deep Analysis of Visual SLAM Methods for Highly Automated and Autonomous Vehicles in Complex Urban Environment | |
Dai et al. | Enhanced Object Detection in Autonomous Vehicles through LiDAR—Camera Sensor Fusion. | |
Liu et al. | A lightweight lidar-camera sensing method of obstacles detection and classification for autonomous rail rapid transit | |
CN116386003A (zh) | Three-dimensional object detection method based on knowledge distillation | |
Tas et al. | High-definition map update framework for intelligent autonomous transfer vehicles | |
Hazarika et al. | Multi-camera 3D object detection for autonomous driving using deep learning and self-attention mechanism | |
Madake et al. | Visualization of 3D Point Clouds for Vehicle Detection Based on LiDAR and Camera Fusion | |
Pravallika et al. | Deep Learning Frontiers in 3D Object Detection: A Comprehensive Review for Autonomous Driving | |
Tao et al. | 3D object detection algorithm based on multi-sensor segmental fusion of frustum association for autonomous driving |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-12-07 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621 |
2022-10-25 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131 |
2023-01-13 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523 |
| TRDD | Decision of grant or rejection written | |
2023-01-24 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01 |
2023-02-01 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61 |
| R150 | Certificate of patent or registration of utility model | Ref document number: 7224682; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150 |