JP2021522585A - Data segmentation using masks - Google Patents
Data segmentation using masks
- Publication number
- JP2021522585A (application JP2020559462A)
- Authority
- JP
- Japan
- Prior art keywords
- mask
- sensor data
- data
- voxel
- voxel space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
- G06T7/11—Region-based segmentation
- G06V10/26—Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G05D1/0088—Control of position, course, altitude or attitude of vehicles characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
- G05D1/0214—Control of position or course in two dimensions for land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221—Trajectory definition involving a learning process
- G05D1/0238—Position control using optical means, using obstacle or wall sensors
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N20/00—Machine learning
- G06N3/08—Neural networks; learning methods
- G06T7/181—Segmentation; edge detection involving edge growing; involving edge linking
- G06T7/187—Segmentation involving region growing; involving region merging; involving connected component labelling
- G06T7/194—Segmentation involving foreground-background segmentation
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06V10/764—Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/82—Image or video recognition using neural networks
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/64—Three-dimensional objects
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06T2207/10028—Range image; depth image; 3D point clouds
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30252—Vehicle exterior; vicinity of vehicle
Abstract
[Representative drawing] FIG. 4B
Description
This patent application claims priority to U.S. utility patent application No. 15/963,833, filed April 26, 2018, which is incorporated herein by reference in its entirety.
A. A system comprising: one or more processors; and one or more computer-readable media storing instructions executable by the one or more processors that, when executed, cause the system to perform operations comprising: capturing sensor data using a light detection and ranging (LIDAR) sensor of a vehicle; associating the sensor data with a voxel space comprising at least three dimensions; generating a two-dimensional representation of the voxel space; inputting the two-dimensional representation, comprising a number of image channels, to a machine learning algorithm; receiving, from the machine learning algorithm, a first mask representing an object in the voxel space in two dimensions; generating a second mask based at least in part on an expanded portion of the first mask, the expanded portion being based at least in part on at least one of a region growing algorithm, a size of the first mask, or an intersection with a third mask associated with another object; and segmenting the sensor data based at least in part on the second mask.
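The description above outlines a pipeline: LIDAR returns are binned into a voxel grid, and the vertical axis of the grid is collapsed into the channels of a two-dimensional "pseudo-image" that is fed to the machine learning algorithm. The following is a minimal sketch of that voxelization step; the grid size, the spatial extent, and the choice of a per-voxel hit count (with occupancy as the only image feature) are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np

def voxelize(points, grid=(4, 4, 4), extent=2.0):
    """Bin LIDAR points (N, 3) into a voxel grid and count hits per voxel.
    Points outside [0, extent) on any axis are dropped."""
    idx = np.floor(points / extent * np.array(grid)).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid)), axis=1)
    counts = np.zeros(grid, dtype=int)
    np.add.at(counts, tuple(idx[keep].T), 1)
    return counts

def to_pseudo_image(voxels):
    """Collapse the vertical axis into channels: an (X, Y, Z) voxel grid
    becomes an (X, Y, C) pseudo-image with one occupancy channel per
    height slice."""
    return (voxels > 0).astype(np.float32)
```

In the patent's terms, each (x, y) cell of the pseudo-image is a "pseudo-pixel" summarizing a column of voxels; richer channels (means, covariances, class probabilities) would be stacked alongside the occupancy slices.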
Claims (15)
1. A system comprising:
one or more processors; and
one or more computer-readable media storing instructions executable by the one or more processors that, when executed, cause the system to perform operations comprising:
capturing sensor data using a sensor on a vehicle;
associating the sensor data with a voxel space comprising at least three dimensions;
generating a two-dimensional representation of the voxel space;
inputting the two-dimensional representation to a machine learning algorithm;
receiving, from the machine learning algorithm, a first mask representing an object in the voxel space in two dimensions;
generating a second mask based at least in part on an expanded portion of the first mask, the expanded portion being based at least in part on at least one of a region growing algorithm, a size of the first mask, or an intersection with a third mask associated with another object; and
segmenting the sensor data based at least in part on the second mask.
2. The system of claim 1, wherein the machine learning algorithm is trained to output a mask associated with a detected object based at least in part on receiving captured LIDAR data and representing the detected object as having a first width and a first length, the mask having a second width less than or equal to the first width and a second length less than or equal to the first length.
3. The system of claim 1 or 2, wherein a pseudo-pixel is associated with a voxel of the voxel space, the pseudo-pixel comprising a column of voxels in the voxel space and features associated with the column of voxels.
4. The system of claim 1 or 2, wherein segmenting the sensor data comprises clustering one or more voxels of the voxel space within the second mask using the region growing algorithm.
5. A method comprising:
capturing, using a sensor, sensor data of an environment, the sensor data indicating an object in the environment;
associating the sensor data with a voxel space;
receiving a first mask associated with a portion of the voxel space, the first mask representing a region smaller in size than the object;
generating a second mask by expanding the first mask; and
segmenting the sensor data based at least in part on the second mask.
6. The method of claim 5, further comprising:
generating a trajectory for an autonomous vehicle based at least in part on segmenting the sensor data; and
controlling the autonomous vehicle to traverse the environment based at least in part on the trajectory.
7. The method of claim 5, further comprising:
inputting a two-dimensional representation of the voxel space to a machine learning algorithm; and
receiving an output of the machine learning algorithm as the first mask.
8. The method of claim 7, wherein the two-dimensional representation of the voxel space comprises an image having a number of channels based at least on a height of the voxel space and one or more features.
9. The method of claim 8, wherein the one or more features comprise at least one of:
an average of the sensor data;
a number of times the sensor data is associated with a voxel;
a covariance of the sensor data;
a probability of a voxel belonging to one or more classifications;
ray casting information associated with the voxel; or
an occupancy of the voxel.
10. The method of claim 7, wherein the two-dimensional representation comprises a pseudo-image having a length associated with a first dimension of the voxel space, a width associated with a second dimension of the voxel space, and a number of channels, the number of channels being based at least in part on a third dimension of the voxel space and on one or more features comprising an average of the sensor data, a covariance of the sensor data, a number of observations of the sensor data, an occupancy, or one or more probabilities associated with a semantic classification.
11. The method of claim 5, wherein the sensor comprises a light detection and ranging (LIDAR) sensor.
12. The method of claim 5, wherein the first mask is generated based at least in part on classification data associated with the sensor data, the classification data indicating at least one of a vehicle, a bicycle, or a pedestrian.
13. The method of claim 5, further comprising generating the second mask based at least in part on an intersection of an expanded portion of the first mask and a third mask associated with another object associated with the voxel space.
14. The method of claim 5, wherein segmenting the sensor data comprises associating one or more voxels of the voxel space associated with the second mask.
15. A non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to implement the method of any one of claims 5 to 14.
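Claims 5 and 13 turn on expanding an undersized first mask into a second mask and then keeping the sensor returns whose voxel column falls under it. Below is a toy sketch of those two steps, assuming a simple 4-connected binary dilation as the expansion and a plain per-cell lookup as the segmentation; the patent also contemplates region growing, mask-size heuristics, and intersections with other objects' masks, none of which are shown here.

```python
import numpy as np

def dilate(mask, iters=1):
    """Grow a binary 2D mask by one 4-connected ring per iteration
    (a minimal stand-in for expanding the first mask into the second)."""
    m = mask.astype(bool)
    for _ in range(iters):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # spread down
        grown[:-1, :] |= m[1:, :]   # spread up
        grown[:, 1:] |= m[:, :-1]   # spread right
        grown[:, :-1] |= m[:, 1:]   # spread left
        m = grown
    return m

def segment(cell_indices, mask):
    """Select the sensor returns whose (x, y) voxel-column index
    falls inside the (expanded) mask."""
    return mask[cell_indices[:, 0], cell_indices[:, 1]]
```

A single-cell mask dilated once covers the cell plus its four neighbors; returns outside that footprint are excluded from the object's segment.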
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/963,833 US10649459B2 (en) | 2018-04-26 | 2018-04-26 | Data segmentation using masks |
US15/963,833 | 2018-04-26 | ||
PCT/US2019/028667 WO2019209795A1 (en) | 2018-04-26 | 2019-04-23 | Data segmentation using masks |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2021522585A true JP2021522585A (ja) | 2021-08-30 |
JP7350013B2 JP7350013B2 (ja) | 2023-09-25 |
Family
ID=68292419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2020559462A Active JP7350013B2 (ja) | 2018-04-26 | 2019-04-23 | マスクを使用したデータセグメンテーション |
Country Status (5)
Country | Link |
---|---|
US (3) | US10649459B2 (ja) |
EP (1) | EP3784984A4 (ja) |
JP (1) | JP7350013B2 (ja) |
CN (2) | CN112041633B (ja) |
WO (1) | WO2019209795A1 (ja) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017063033A1 (en) * | 2015-10-12 | 2017-04-20 | Groundprobe Pty Ltd | Slope stability lidar |
US10649459B2 (en) | 2018-04-26 | 2020-05-12 | Zoox, Inc. | Data segmentation using masks |
US10810792B2 (en) * | 2018-05-31 | 2020-10-20 | Toyota Research Institute, Inc. | Inferring locations of 3D objects in a spatial environment |
US11995763B2 (en) * | 2018-07-02 | 2024-05-28 | Vayyar Imaging Ltd. | System and methods for environment mapping |
US20210165413A1 (en) * | 2018-07-26 | 2021-06-03 | Postmates Inc. | Safe traversable area estimation in unstructured free-space using deep convolutional neural network |
WO2020082382A1 (en) * | 2018-10-26 | 2020-04-30 | Intel Corporation | Method and system of neural network object recognition for image processing |
US10983201B2 (en) | 2018-10-29 | 2021-04-20 | Dji Technology, Inc. | User interface for displaying point clouds generated by a lidar device on a UAV |
DK201970115A1 (en) * | 2018-11-08 | 2020-06-09 | Aptiv Technologies Limited | DEEP LEARNING FOR OBJECT DETECTION USING PILLARS |
CN109685762A (zh) * | 2018-11-09 | 2019-04-26 | 五邑大学 | 一种基于多尺度深度语义分割网络的天线下倾角测量方法 |
US11010907B1 (en) * | 2018-11-27 | 2021-05-18 | Zoox, Inc. | Bounding box selection |
US10936902B1 (en) | 2018-11-27 | 2021-03-02 | Zoox, Inc. | Training bounding box selection |
US11294042B2 (en) * | 2018-12-31 | 2022-04-05 | Wipro Limited | Method and system for detecting presence of partial visual fault in Lidar sensor of vehicle |
US11521010B2 (en) * | 2019-01-23 | 2022-12-06 | Motional Ad Llc | Automatically choosing data samples for annotation |
CN111771206B (zh) * | 2019-01-30 | 2024-05-14 | 百度时代网络技术(北京)有限公司 | 用于自动驾驶车辆的地图分区系统 |
US11210554B2 (en) | 2019-03-21 | 2021-12-28 | Illumina, Inc. | Artificial intelligence-based generation of sequencing metadata |
US11347965B2 (en) | 2019-03-21 | 2022-05-31 | Illumina, Inc. | Training data generation for artificial intelligence-based sequencing |
US11593649B2 (en) | 2019-05-16 | 2023-02-28 | Illumina, Inc. | Base calling using convolutions |
CA3150615C (en) * | 2019-08-30 | 2024-01-16 | Deere & Company | System and method of control for autonomous or remote-controlled vehicle platform |
US20210181758A1 (en) | 2019-10-26 | 2021-06-17 | Zoox, Inc. | Object detection and tracking |
GB2591171B (en) | 2019-11-14 | 2023-09-13 | Motional Ad Llc | Sequential fusion for 3D object detection |
US11604465B2 (en) * | 2019-11-26 | 2023-03-14 | Zoox, Inc. | Correction of sensor data alignment and environment mapping |
US11462041B2 (en) | 2019-12-23 | 2022-10-04 | Zoox, Inc. | Pedestrians with objects |
US11789155B2 (en) | 2019-12-23 | 2023-10-17 | Zoox, Inc. | Pedestrian object detection training |
EP4107735A2 (en) | 2020-02-20 | 2022-12-28 | Illumina, Inc. | Artificial intelligence-based many-to-many base calling |
US20210278852A1 (en) * | 2020-03-05 | 2021-09-09 | Uatc, Llc | Systems and Methods for Using Attention Masks to Improve Motion Planning |
US11472442B2 (en) | 2020-04-23 | 2022-10-18 | Zoox, Inc. | Map consistency checker |
US20220067399A1 (en) * | 2020-08-25 | 2022-03-03 | Argo AI, LLC | Autonomous vehicle system for performing object detections using a logistic cylinder pedestrian model |
US11636685B1 (en) * | 2020-12-18 | 2023-04-25 | Zoox, Inc. | Multi-resolution top-down segmentation |
RU2767831C1 (ru) * | 2021-03-26 | 2022-03-22 | Общество с ограниченной ответственностью "Яндекс Беспилотные Технологии" | Способы и электронные устройства для обнаружения объектов в окружении беспилотного автомобиля |
CN112733817B (zh) * | 2021-03-30 | 2021-06-04 | 湖北亿咖通科技有限公司 | 一种高精度地图中点云图层精度的测量方法及电子设备 |
US11810225B2 (en) * | 2021-03-30 | 2023-11-07 | Zoox, Inc. | Top-down scene generation |
US11858514B2 (en) | 2021-03-30 | 2024-01-02 | Zoox, Inc. | Top-down scene discrimination |
US20220336054A1 (en) | 2021-04-15 | 2022-10-20 | Illumina, Inc. | Deep Convolutional Neural Networks to Predict Variant Pathogenicity using Three-Dimensional (3D) Protein Structures |
US20230027496A1 (en) * | 2021-07-22 | 2023-01-26 | Cnh Industrial America Llc | Systems and methods for obstacle detection |
US11787419B1 (en) | 2021-10-22 | 2023-10-17 | Zoox, Inc. | Robust numerically stable Kalman filter for autonomous vehicles |
US20230311930A1 (en) * | 2022-03-31 | 2023-10-05 | Zoox, Inc. | Capturing and simulating radar data for autonomous driving systems |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7333648B2 (en) * | 1999-11-19 | 2008-02-19 | General Electric Company | Feature quantification from multidimensional image data |
US7330528B2 (en) * | 2003-08-19 | 2008-02-12 | Agilent Technologies, Inc. | System and method for parallel image reconstruction of multiple depth layers of an object under inspection from radiographic images |
US9600933B2 (en) * | 2011-07-01 | 2017-03-21 | Intel Corporation | Mobile augmented reality system |
US8793046B2 (en) | 2012-06-01 | 2014-07-29 | Google Inc. | Inferring state of traffic signal and other aspects of a vehicle's environment based on surrogate data |
US9488492B2 (en) | 2014-03-18 | 2016-11-08 | Sri International | Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics |
WO2014167466A1 (en) * | 2013-04-09 | 2014-10-16 | Koninklijke Philips N.V. | Layered two-dimensional projection generation and display |
US20150071541A1 (en) * | 2013-08-14 | 2015-03-12 | Rice University | Automated method for measuring, classifying, and matching the dynamics and information passing of single objects within one or more images |
US10121082B2 (en) * | 2015-10-07 | 2018-11-06 | Honda Motor Co., Ltd. | System and method for providing laser camera fusion for identifying and tracking a traffic participant |
DE102016200656A1 (de) | 2016-01-20 | 2017-07-20 | Robert Bosch Gmbh | Verfahren zum Auswerten eines Umfeldes eines Fahrzeuges |
GB2567587B (en) | 2016-09-16 | 2021-12-29 | Motorola Solutions Inc | System and method for fixed camera and unmanned mobile device collaboration to improve identification certainty of an object |
EP3392798A1 (en) * | 2017-04-21 | 2018-10-24 | Delphi Technologies, Inc. | A method for the semantic segmentation of an image |
US10650531B2 (en) * | 2018-03-16 | 2020-05-12 | Honda Motor Co., Ltd. | Lidar noise removal using image pixel clusterings |
US10649459B2 (en) | 2018-04-26 | 2020-05-12 | Zoox, Inc. | Data segmentation using masks |
2018
- 2018-04-26: US US15/963,833 patent US10649459B2 (en), Active
2019
- 2019-04-23: JP JP2020559462A patent JP7350013B2 (ja), Active
- 2019-04-23: WO PCT/US2019/028667 patent WO2019209795A1 (en), Application Filing
- 2019-04-23: CN CN201980027927.5A patent CN112041633B (zh), Active
- 2019-04-23: EP EP19793082.9A patent EP3784984A4 (en), Pending
- 2019-04-23: CN CN202310538677.XA patent CN116563548A (zh), Pending
2020
- 2020-03-20: US US16/825,778 patent US11195282B2 (en), Active
2021
- 2021-12-03: US US17/541,580 patent US11620753B2 (en), Active
Non-Patent Citations (2)
Title |
---|
JUNG-UN KIM, ET AL.: "LiDAR based 3D object detection using CCD information", 2017 IEEE THIRD INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA, JPN6023009596, 2017, ISSN: 0005012222 * |
- MATTI LEHTOMAKI, ET AL.: "Object Classification and Recognition From Mobile Laser Scanning Point Clouds in a Road Environment", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 54, no. 2, JPN6023009595, 2016, XP011595766, ISSN: 0005012223, DOI: 10.1109/TGRS.2015.2476502 *
Also Published As
Publication number | Publication date |
---|---|
US10649459B2 (en) | 2020-05-12 |
JP7350013B2 (ja) | 2023-09-25 |
US11195282B2 (en) | 2021-12-07 |
CN112041633B (zh) | 2023-06-02 |
CN112041633A (zh) | 2020-12-04 |
US20190332118A1 (en) | 2019-10-31 |
WO2019209795A1 (en) | 2019-10-31 |
EP3784984A4 (en) | 2022-01-26 |
US20220156940A1 (en) | 2022-05-19 |
EP3784984A1 (en) | 2021-03-03 |
CN116563548A (zh) | 2023-08-08 |
US20200218278A1 (en) | 2020-07-09 |
US11620753B2 (en) | 2023-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11620753B2 (en) | Data segmentation using masks | |
US11353577B2 (en) | Radar spatial estimation | |
US11010907B1 (en) | Bounding box selection | |
US11169531B2 (en) | Trajectory prediction on top-down scenes | |
US11748909B2 (en) | Image-based depth data and localization | |
US10936902B1 (en) | Training bounding box selection | |
EP4077084A1 (en) | Prediction on top-down scenes based on object motion | |
WO2020112649A1 (en) | Probabilistic risk assessment for trajectory evaluation | |
CN114072841A (zh) | 根据图像使深度精准化 | |
US11379684B2 (en) | Time of flight data segmentation | |
US20210157325A1 (en) | Latency accommodation in trajectory generation | |
US11614742B2 (en) | Height estimation using sensor data | |
US20230169777A1 (en) | Center-based detection and tracking | |
US20220176988A1 (en) | Determining inputs for perception system | |
US11270437B1 (en) | Top-down segmentation pixel orientation and distance | |
US11761780B1 (en) | Determining data for semantic localization | |
CN117545674A (zh) | 用于识别路缘的技术 | |
US11636685B1 (en) | Multi-resolution top-down segmentation | |
US11906967B1 (en) | Determining yaw with learned motion model | |
WO2023114590A1 (en) | Identifying relevant objects within an environment | |
WO2022146622A1 (en) | Intermediate input for machine learned model | |
WO2023219893A1 (en) | Sensor calibration validation |
Legal Events
Date | Code | Title | Description
---|---|---|---
2022-04-25 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621
2023-03-14 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131
2023-06-13 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523
 | TRDD | Decision of grant or rejection written |
2023-08-15 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01
2023-09-12 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61
 | R150 | Certificate of patent or registration of utility model | Ref document number: 7350013; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150