CN111402256B - Three-dimensional point cloud target detection and attitude estimation method based on template - Google Patents

Three-dimensional point cloud target detection and attitude estimation method based on template

Info

Publication number
CN111402256B
Authority
CN
China
Prior art keywords
point cloud
dimensional point
template
target detection
rotation angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010287173.1A
Other languages
Chinese (zh)
Other versions
CN111402256A (en)
Inventor
王磊
吴伟龙
周建品
李争
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shiyan Intelligent Technology Guangzhou Co ltd
Original Assignee
Shiyan Intelligent Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiyan Intelligent Technology Guangzhou Co ltd
Priority to CN202010287173.1A
Publication of CN111402256A
Application granted
Publication of CN111402256B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G06T 7/50 Depth or shape recovery
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection

Abstract

The invention relates to a template-based three-dimensional point cloud target detection and attitude estimation method, which comprises the following steps: S1: constructing and training a neural network model; S2: inputting the three-dimensional point cloud to be subjected to target detection and attitude estimation into the neural network model trained in S1 for target detection and attitude estimation; S3: outputting the three-dimensional point cloud target detection and attitude estimation results. The invention solves the problem of performing target detection and attitude estimation simultaneously and the problem of the poor transferability of deep neural networks. The trained model is highly stable and transferable, and can be applied to template-based target detection and attitude estimation tasks involving objects beyond those in the training samples.

Description

Three-dimensional point cloud target detection and attitude estimation method based on template
Technical Field
The invention relates to the field of industrial part detection, and in particular to a template-based three-dimensional point cloud target detection and attitude estimation method.
Background
In unordered (bin-picking) grasping applications for industrial parts, template-based target detection and attitude estimation is one of the core technologies. Common template matching algorithms such as LINEMOD and PPF (Point-Pair Feature) can be divided into two steps, feature extraction and feature matching, performed in 2D or 3D. Common point cloud features include PPF, image gradient features, and the like. Because these are low-level visual features, the extracted features are easily disturbed by factors such as illumination conditions, shadows, missing points and occlusion, which leads to a large number of gross matching errors. The existing mainstream algorithms therefore eliminate these gross matching errors by establishing heuristic rules in order to obtain the final matching result. The use of heuristic rules in turn makes existing algorithms poorly generalizable across different data distributions and scenes, with many algorithm parameters and complex tuning.
To address these problems, a common solution divides target detection and pose estimation into two steps: a deep neural network first directly predicts the circumscribed rectangle or bounding box of the target on the image or point cloud, and template matching is then performed within that rectangle or bounding box using ICP or similar methods. Directly predicting the circumscribed rectangle or bounding box of the target is a common target detection approach, but because the neural network learns the shape or image characteristics of the target, the resulting model can only detect the targets it was trained on. Such a model has poor transferability, i.e. it must be retrained for each part, which makes large-scale deployment difficult. In addition, ICP-based point cloud registration needs a good initial value to achieve good results. Traditional neural-network-based target detection algorithms cannot provide a good initial attitude value, which increases the difficulty of ICP matching and reduces the accuracy of the whole system.
Disclosure of Invention
The invention provides a template-based three-dimensional point cloud target detection and attitude estimation method to overcome the inability of the prior art to perform three-dimensional point cloud target detection and attitude estimation simultaneously.
The method comprises the following steps:
S1: constructing and training a neural network model;
S2: inputting the three-dimensional point cloud to be subjected to target detection and attitude estimation into the neural network model trained in S1 for target detection and attitude estimation;
S3: outputting the three-dimensional point cloud target detection and attitude estimation results.
Preferably, the training of the neural network model in S1 includes the following steps:
S1.1: marking the three-dimensional coordinates and attitudes of the target object on the three-dimensional point cloud data as training samples;
S1.2: calculating a depth map of the three-dimensional point cloud;
S1.3: calculating image features of the three-dimensional point cloud depth map;
S1.4: obtaining, from the image features, the probabilities that the input depth map belongs to the corresponding rotation intervals and the probability of the end symbol;
S1.5: sampling one possible subinterval according to the probabilities obtained in S1.4, and then dividing the rotation angles of this subinterval along the x, y and z directions into N³ rotation angle subintervals;
wherein x, y and z are the three axes of a spatial rectangular coordinate system;
S1.6: performing the depth map calculation for the N³ rotation angle subintervals of S1.5 to obtain the three-dimensional point cloud depth maps corresponding to the N³ rotation angle subintervals, and inputting these depth maps into a GRU (Gated Recurrent Unit) to obtain the probability that the input depth map belongs to each corresponding rotation interval;
S1.7: repeating S1.5-S1.6 until the maximum number of repetitions is reached or the sampled end symbol equals 1, and obtaining the optimal rotation angle intervals in the three directions using beam search;
S1.8: training is carried out in two phases: in the first phase, the ground-truth rotation angle interval is used as the GRU input at each step; once this phase has converged, the GRU output of the previous step is used as the GRU input of the next step, and training continues until convergence (sketches of the beam search of S1.7 and the two-phase training of S1.8 are given after the network description below).
Preferably, the neural network model constructed in S1 is an RNN (Recurrent Neural Network) composed of two layers of GRUs.
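The patent does not spell out the beam search of S1.7; the following is a minimal sketch of the general idea only, keeping the few most probable rotation angle subintervals at each coarse-to-fine refinement level. The hypothetical score_fn stands in for the GRU step of S1.6, and beam_width and max_depth are illustrative parameters, not values specified by the patent.

```python
import heapq
import math

def beam_search(score_fn, init_interval, beam_width=3, max_depth=4):
    """score_fn(interval) returns (child_interval, probability) pairs for the
    N**3 finer subintervals, or an empty list once the end symbol dominates."""
    beam = [(0.0, init_interval)]                 # (negative log-probability, interval)
    for _ in range(max_depth):
        candidates = []
        for neg_logp, interval in beam:
            children = score_fn(interval)
            if not children:                      # end symbol: keep this interval as-is
                candidates.append((neg_logp, interval))
                continue
            for child, prob in children:
                candidates.append((neg_logp - math.log(prob + 1e-9), child))
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[0])
    return beam[0][1]                             # most probable rotation angle interval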
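Likewise, a schematic of the two-phase training regime of S1.8 (teacher forcing followed by feeding back the network's own predictions). The model interface, the cross-entropy loss and the data layout are assumptions made for illustration; the hypothetical model is taken to return logits over the N³+1 classes at each refinement step.

```python
import torch

def train_step(model, optimizer, scan_feat, template_feats, gt_intervals,
               teacher_forcing=True):
    """gt_intervals: (B, S) ground-truth subinterval indices, one per refinement step."""
    criterion = torch.nn.CrossEntropyLoss()
    loss = 0.0
    prev = gt_intervals[:, 0]                        # coarsest ground-truth interval
    for step in range(1, gt_intervals.shape[1]):
        logits = model(scan_feat, template_feats[step], prev)   # (B, N**3 + 1)
        loss = loss + criterion(logits, gt_intervals[:, step])
        if teacher_forcing:
            prev = gt_intervals[:, step]             # phase 1: ground truth as next input
        else:
            prev = logits.argmax(dim=-1)             # phase 2: model output as next input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()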
Preferably, S1.2 specifically comprises:
taking a plane perpendicular to the height direction of the point cloud as the projection plane, vertically projecting the point cloud obtained by scanning the target onto the projection plane, and gridding it to obtain a depth map;
and taking a plane perpendicular to the height direction of the point cloud as the projection plane, rotating the three-dimensional point cloud template to different attitudes and projecting it onto the projection plane to obtain a group of depth maps of the point cloud template under the different attitudes.
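For illustration, the following is a minimal sketch of the projection and gridding step described above, assuming the z axis is the height direction of the point cloud; the grid resolution and cell size are illustrative parameters, not values specified by the patent.

```python
import numpy as np

def point_cloud_to_depth_map(points, grid_size=128, cell=1.0):
    """Project a point cloud onto the plane perpendicular to the height (z)
    direction and grid it into a depth map (highest point per cell)."""
    xy = points[:, :2]                       # coordinates in the projection plane
    z = points[:, 2]                         # height values become the depth image
    origin = xy.min(axis=0)
    cols = np.clip(((xy[:, 0] - origin[0]) / cell).astype(int), 0, grid_size - 1)
    rows = np.clip(((xy[:, 1] - origin[1]) / cell).astype(int), 0, grid_size - 1)
    depth = np.full((grid_size, grid_size), z.min(), dtype=np.float32)
    np.maximum.at(depth, (rows, cols), z)    # keep the maximum height per grid cell
    return depth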
Preferably, in S1.3 the image features of the point cloud depth map are calculated using ResNet50.
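The patent only states that ResNet50 is used; one possible realisation is sketched below, where replicating the single-channel depth map to three channels, truncating the network before the classification layer and omitting input normalisation are assumptions made for illustration.

```python
import torch
import torchvision.models as models

resnet = models.resnet50(pretrained=True)
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])    # drop the final fc layer
backbone.eval()

def depth_map_features(depth_map):
    """depth_map: (H, W) float tensor; returns a (1, 2048) feature vector."""
    # replicate the single channel to three channels for the RGB-trained backbone
    x = depth_map.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)       # (1, 3, H, W)
    with torch.no_grad():
        feat = backbone(x)                                        # (1, 2048, 1, 1)
    return feat.flatten(1)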
Preferably, S1.4 specifically comprises:
inputting the image features obtained in S1.3 as hidden features into an RNN consisting of two layers of GRUs; the rotation angles along the x, y and z directions are each divided into N equal parts, so that the whole rotation space is divided into N³ rotation angle subintervals; a template depth map is calculated at the midpoint of the three rotation directions of each subinterval, giving N³ template depth maps under the attitudes of the corresponding rotation angle subintervals;
inputting all template depth maps under the attitudes of the corresponding rotation angle subintervals into the GRU, which outputs N³+1 values indicating the probability that the input depth map belongs to each corresponding rotation angle subinterval, plus the probability of ending (the end symbol).
Preferably, the depth map calculation for the N³ rotation angle subintervals in S1.6 comprises the following steps:
taking a plane perpendicular to the height direction of the point cloud as the projection plane, rotating the three-dimensional point cloud template to the attitudes of the subintervals, and projecting it onto the projection plane to obtain a group of depth maps of the three-dimensional point cloud template under the attitudes of the subintervals.
The invention can complete target detection and template matching at the same time. Unlike traditional target detection algorithms that predict circumscribed rectangles or bounding boxes, the method combines target detection and attitude estimation into a single initial-attitude classification problem, which reduces the difficulty of training the model. Moreover, the model works by comparing the two kinds of input point clouds (the scanned scene and the template) rather than learning features specific to the trained target, so it has stronger transferability than traditional target detection algorithms.
General template-learning algorithms often cannot swap templates, or cannot cope with the sharp increase in computation caused by a large number of templates. By generating the template attitude dictionary iteratively, the invention narrows the rotation angle interval of the attitude from coarse to fine while keeping the number of templates small, making template replacement possible. The template-based matching technique is therefore highly practical.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention solves the problems of synchronous target detection and attitude estimation and the problem of poor mobility of a deep neural network. The trained model has extremely high stability and mobility, and can be applied to various template-based target detection and posture estimation tasks except for training samples.
Drawings
Fig. 1 is a flowchart of the template-based three-dimensional point cloud target detection and attitude estimation method described in Example 1.
Fig. 2 is a flowchart of the model training described in Example 1.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment provides a template-based three-dimensional point cloud target detection and attitude estimation method. As shown in Fig. 1, the method comprises the following steps:
S1: constructing and training a neural network model. As shown in Fig. 2, the training of the neural network model comprises the following steps:
S1.1: marking the three-dimensional coordinates and attitudes of the target objects on the three-dimensional point cloud data as training samples.
S1.2: taking a plane perpendicular to the height direction of the point cloud as the projection plane, vertically projecting the point cloud obtained by scanning the target onto the projection plane, and gridding it to obtain a depth map.
S1.3: taking a plane perpendicular to the height direction of the point cloud as the projection plane, rotating the three-dimensional point cloud template to different attitudes and projecting it onto the projection plane to obtain a group of depth maps of the point cloud template under the different attitudes.
S1.4: calculating the image features of the scan point cloud depth map using ResNet50.
S1.5: inputting the image features obtained in S1.4 as hidden features into an RNN consisting of two layers of GRUs.
The rotation angles along the x, y and z directions are each divided into N equal parts, so that the whole rotation space is divided into N³ rotation angle subintervals. A template depth map is calculated at the midpoint of the three rotation directions of each subinterval, giving N³ template depth maps under the attitudes of the corresponding rotation angle subintervals; a sketch of this subdivision follows.
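A minimal sketch of how the rotation space could be split and the N³ midpoint attitudes enumerated; the angle range and the value of N are illustrative and not fixed by the patent.

```python
import numpy as np
from itertools import product

def subinterval_midpoints(lo, hi, n):
    """Split the rotation range [lo, hi) (degrees) on each of the x, y and z
    axes into n equal parts and return the n**3 midpoint rotations."""
    edges = np.linspace(lo, hi, n + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    return [(rx, ry, rz) for rx, ry, rz in product(mids, mids, mids)]

# First (coarsest) level: the full rotation range split into 4 parts per axis.
midpoints = subinterval_midpoints(-180.0, 180.0, 4)          # 64 candidate attitudes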
All template depth maps under the attitudes of the corresponding rotation angle subintervals are input into the GRU, which outputs N³+1 values indicating the probability that the input depth map belongs to each corresponding rotation angle subinterval, plus the probability of ending (the end symbol).
S1.6: sampling one possible subinterval according to the rotation interval probabilities obtained in S1.5, and then dividing this subinterval of the three rotation angles into N³ rotation angle subintervals. The N³ three-dimensional point cloud depth maps corresponding to these rotation angle subintervals are obtained as in S1.3 and input into the GRU to obtain the probability that the input depth map belongs to each corresponding rotation interval.
S1.7: S1.6 is repeated until the maximum number of repetitions is reached or the sampled end symbol equals 1. The optimal rotation angle intervals in the three directions are obtained using beam search.
S1.8: during training, the ground-truth rotation angle interval is first used in place of the GRU output as the input of the next step (teacher forcing); after training stabilises, the GRU output itself is used as the input of the next step, and training continues until convergence.
S2: inputting the three-dimensional point cloud to be subjected to target detection and attitude estimation into the neural network model trained in S1 for target detection and attitude estimation;
S3: outputting the three-dimensional point cloud target detection and attitude estimation results.
The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (6)

1. A template-based three-dimensional point cloud target detection and attitude estimation method is characterized by comprising the following steps:
S1: constructing and training a neural network model;
S2: inputting the three-dimensional point cloud to be subjected to target detection and attitude estimation into the neural network model trained in S1 for target detection and attitude estimation;
S3: outputting the three-dimensional point cloud target detection and attitude estimation results;
the training of the neural network model in S1 includes the following steps:
S1.1: marking the three-dimensional coordinates and attitudes of the target object on the three-dimensional point cloud data as training samples;
S1.2: calculating a depth map of the three-dimensional point cloud;
S1.3: calculating image features of the three-dimensional point cloud depth map;
S1.4: obtaining, from the image features, the probabilities that the input depth map belongs to the corresponding rotation intervals and the probability of the end symbol;
S1.5: sampling one possible subinterval according to the probabilities obtained in S1.4, and then dividing the rotation angles of this subinterval along the x, y and z directions into N³ rotation angle subintervals;
S1.6: performing the depth map calculation for the N³ rotation angle subintervals of S1.5 to obtain the three-dimensional point cloud depth maps corresponding to the N³ rotation angle subintervals, and inputting these depth maps into a GRU (Gated Recurrent Unit) to obtain the probability that the input depth map belongs to each corresponding rotation interval;
S1.7: repeating S1.5-S1.6 until the maximum number of repetitions is reached or the sampled end symbol equals 1, and obtaining the optimal rotation angle intervals in the three directions using beam search;
S1.8: during training, the ground-truth rotation angle interval is first used in place of the GRU output as the input of the next step; after training stabilises, the GRU output is used as the input of the next step, and training continues until convergence.
2. The template-based three-dimensional point cloud target detection and pose estimation method of claim 1, wherein the neural network model constructed in S1 is an RNN network consisting of two layers of GRUs.
3. The template-based three-dimensional point cloud target detection and pose estimation method of claim 2, wherein S1.2 specifically comprises:
taking a plane perpendicular to the height direction of the point cloud as the projection plane, vertically projecting the point cloud obtained by scanning a target onto the projection plane, and gridding it to obtain a depth map;
and taking a plane perpendicular to the height direction of the point cloud as the projection plane, rotating the three-dimensional point cloud template to different attitudes and projecting it onto the projection plane to obtain a group of depth maps of the point cloud template under the different attitudes.
4. The template-based three-dimensional point cloud target detection and pose estimation method of claim 3, wherein the image features of the S1.3 point cloud depth map are computed using ResNet 50.
5. The template-based three-dimensional point cloud target detection and pose estimation method of claim 4, wherein S1.4 specifically comprises:
inputting the image features obtained in S1.3 as hidden features into an RNN network consisting of two layers of GRUs; the rotation angles along the x, y and z directions are each divided into N equal parts, so that the whole rotation space is divided into N³ rotation angle subintervals; a template depth map is calculated at the midpoint of the three rotation directions of each subinterval, giving N³ template depth maps under the attitudes of the corresponding rotation angle subintervals;
inputting all template depth maps under the attitudes of the corresponding rotation angle subintervals into the GRU, which outputs N³+1 values representing the probability that the input depth map belongs to each corresponding rotation interval, plus an end symbol.
6. The template-based three-dimensional point cloud target detection and pose estimation method of claim 4 or 5, wherein the depth map calculation for the N³ rotation angle subintervals in S1.6 comprises the following steps:
taking a plane perpendicular to the height direction of the point cloud of the rotation angle subintervals as the projection plane, rotating the three-dimensional point cloud template to the attitudes of the subintervals, and projecting it onto the projection plane to obtain a group of depth maps of the three-dimensional point cloud template under the attitudes of the subintervals.
CN202010287173.1A 2020-04-13 2020-04-13 Three-dimensional point cloud target detection and attitude estimation method based on template Active CN111402256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010287173.1A CN111402256B (en) 2020-04-13 2020-04-13 Three-dimensional point cloud target detection and attitude estimation method based on template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010287173.1A CN111402256B (en) 2020-04-13 2020-04-13 Three-dimensional point cloud target detection and attitude estimation method based on template

Publications (2)

Publication Number Publication Date
CN111402256A CN111402256A (en) 2020-07-10
CN111402256B (en) 2020-10-16

Family

ID=71431530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010287173.1A Active CN111402256B (en) 2020-04-13 2020-04-13 Three-dimensional point cloud target detection and attitude estimation method based on template

Country Status (1)

Country Link
CN (1) CN111402256B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116638A (en) * 2020-09-04 2020-12-22 季华实验室 Three-dimensional point cloud matching method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018190805A1 (en) * 2017-04-11 2018-10-18 Siemens Aktiengesellschaft Depth image pose search with a bootstrapped-created database
CN109145864A (en) * 2018-09-07 2019-01-04 百度在线网络技术(北京)有限公司 Determine method, apparatus, storage medium and the terminal device of visibility region
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN109753885A (en) * 2018-12-14 2019-05-14 中国科学院深圳先进技术研究院 A kind of object detection method, device and pedestrian detection method, system
CN110428464A (en) * 2019-06-24 2019-11-08 浙江大学 Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method
CN110930452A (en) * 2019-10-23 2020-03-27 同济大学 Object pose estimation method based on self-supervision learning and template matching
CN110942512A (en) * 2019-11-27 2020-03-31 大连理工大学 Indoor scene reconstruction method based on meta-learning
CN110969660A (en) * 2019-12-17 2020-04-07 浙江大学 Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11017550B2 (en) * 2017-11-15 2021-05-25 Uatc, Llc End-to-end tracking of objects
CN110942515A (en) * 2019-11-26 2020-03-31 北京迈格威科技有限公司 Point cloud-based target object three-dimensional computer modeling method and target identification method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018190805A1 (en) * 2017-04-11 2018-10-18 Siemens Aktiengesellschaft Depth image pose search with a bootstrapped-created database
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN109145864A (en) * 2018-09-07 2019-01-04 百度在线网络技术(北京)有限公司 Determine method, apparatus, storage medium and the terminal device of visibility region
CN109753885A (en) * 2018-12-14 2019-05-14 中国科学院深圳先进技术研究院 A kind of object detection method, device and pedestrian detection method, system
CN110428464A (en) * 2019-06-24 2019-11-08 浙江大学 Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method
CN110930452A (en) * 2019-10-23 2020-03-27 同济大学 Object pose estimation method based on self-supervision learning and template matching
CN110942512A (en) * 2019-11-27 2020-03-31 大连理工大学 Indoor scene reconstruction method based on meta-learning
CN110969660A (en) * 2019-12-17 2020-04-07 浙江大学 Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation";Kyunghyun Cho,at el.;《arXiv》;20140903;第1-15页 *
"LiDAR-based Online 3D Video Object Detection with Graph-based Message Passing and Spatiotemporal Transformer Attention";Junbo Yin,at el.;《arXiv》;20200403;第1-10页 *
"Point Cloud Generation From Multiple Angles of Voxel Grids";Shumin Kong,at el.;《IEEE》;20191104;第436-488页 *
"基于机器学习的智能机器人环境视觉感知方法研究";陈旭展;《中国博士学位论文全文数据库 信息科技辑》;20200315(第3期);I140-25 *
"面向物流分拣任务的自主抓取机器人系统";马灼明等;《机械设计与研究》;20191231;第35卷(第6期);第10-16页 *

Also Published As

Publication number Publication date
CN111402256A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN108399649B (en) Single-picture three-dimensional face reconstruction method based on cascade regression network
CN106023298B (en) Point cloud Rigid Registration method based on local Poisson curve reestablishing
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
Becker Generation and application of rules for quality dependent façade reconstruction
JP5061350B2 (en) Motion capture system and three-dimensional reconstruction method of feature points in motion capture system
CN111612059A (en) Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars
CN113379898B (en) Three-dimensional indoor scene reconstruction method based on semantic segmentation
CN105427293A (en) Indoor scene scanning reconstruction method and apparatus
CN108154104A (en) A kind of estimation method of human posture based on depth image super-pixel union feature
Sohn et al. An implicit regularization for 3D building rooftop modeling using airborne lidar data
CN113139453A (en) Orthoimage high-rise building base vector extraction method based on deep learning
CN113012122B (en) Category-level 6D pose and size estimation method and device
Galvanin et al. Extraction of building roof contours from LiDAR data using a Markov-random-field-based approach
CN111681300B (en) Method for obtaining target area composed of outline sketch lines
JP7424573B2 (en) 3D model generation device based on 3D point cloud data
CN111476242A (en) Laser point cloud semantic segmentation method and device
CN108961385A (en) A kind of SLAM patterning process and device
Poullis Large-scale urban reconstruction with tensor clustering and global boundary refinement
CN107123138B (en) Based on vanilla-R point to the point cloud registration method for rejecting strategy
CN111402256B (en) Three-dimensional point cloud target detection and attitude estimation method based on template
CN104978583B (en) The recognition methods of figure action and device
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
Schnabel et al. Shape detection in point clouds
CN112926681B (en) Target detection method and device based on deep convolutional neural network
Denker et al. On-line reconstruction of CAD geometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant