CN112967296A - Point cloud dynamic region graph convolution method, classification method and segmentation method - Google Patents


Info

Publication number
CN112967296A
CN112967296A
Authority
CN
China
Prior art keywords
point cloud
information
convolution
map
convolution operation
Prior art date
Legal status
Granted
Application number
CN202110261653.5A
Other languages
Chinese (zh)
Other versions
CN112967296B (en)
Inventor
王勇
岳晨珂
汤鑫彤
Current Assignee
Sichuan Jiulai Technology Co.,Ltd.
Original Assignee
Chongqing University of Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN202110261653.5A
Publication of CN112967296A
Application granted
Publication of CN112967296B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud dynamic region graph convolution method, and a classification method and a segmentation method that use it. The invention adopts a new form of convolution operation for point clouds: according to the constructed point cloud graph structure, it aggregates point feature information from several different neighborhoods by a nonlinear method, so that each neuron can adaptively select its region size. Compared with prior schemes that analyze individual points, such as PointNet, the method constructs several different local neighborhood graph structures, lets each neuron adaptively select a suitable neighborhood receptive field size, and then performs a convolution-like operation over the connections between each point and its neighbor points to obtain local features. It therefore combines surrounding neighborhood information better, extracts local geometric information more effectively, and ultimately improves the accuracy of classifying or segmenting point cloud data.

Description

Point cloud dynamic region graph convolution method, classification method and segmentation method
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a point cloud dynamic region graph convolution method, a classification method and a segmentation method.
Background
Point cloud data contains rich semantic information and has characteristics such as high density and high precision, but because of its irregularity and disorder, semantic analysis of point cloud data remains a difficult challenge. Some earlier methods relied on manually extracted features with complex rules to solve such problems. With the recent surge of deep learning and machine learning, deep learning methods have also been introduced for the analysis and processing of point cloud data. Deep networks expect regularly shaped input, whereas point cloud data is fundamentally irregular and its spatial distribution is unaffected by the ordering of the points, so a common approach to processing point clouds with deep learning models is to first convert the raw point cloud data into data structures such as grids, voxels or trees. Some advanced deep learning networks such as PointNet and PointNet++ are designed specifically to deal with the irregularity of point clouds and can process raw point cloud data directly, without first converting it into a regular form. However, neither PointNet nor PointNet++ supports convolution operations, so they cannot effectively extract local geometric information.
Much current work focuses on processing point cloud data with convolution operations. One line of work directly extends 2D CNNs to the 3D domain, treating 3D space as a volumetric grid and operating on it with 3D convolutions. Although 3D convolutions perform well on point cloud classification and segmentation tasks, their high memory requirements and computational cost mean they still suffer from insufficient accuracy on large-scale datasets and in large scenes.
In summary, how to improve the accuracy of classifying or segmenting point cloud data becomes a problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
Aiming at the defects in the prior art, the technical problem actually solved by the invention is: improving the accuracy of classification or segmentation of point cloud data.
In order to solve the technical problems, the invention adopts the following technical scheme:
a point cloud dynamic region graph convolution method comprises the following steps:
s1, acquiring three-dimensional point cloud data X, X = {α_1, α_2, α_3, …, α_i, …, α_n}, where α_i represents the data of the i-th point, n represents the number of points in the three-dimensional point cloud data, α_i = {x_i, y_i, z_i}, and x_i, y_i and z_i denote the three-dimensional coordinates of α_i;
s2, performing two independent k nearest neighbor operations on the three-dimensional point cloud data X to obtain two local feature maps y and z, wherein k values of the two independent k nearest neighbor operations are different;
s3, fusing the two local feature maps y and z to obtain the fusion information T, T = Sum(y, z);
s4, pooling the fusion information T to obtain the feature information s_1, s_1 = MAX(T);
s5, performing compact dimensionality reduction on the feature information s_1 with a fully connected layer to obtain the compact feature s_2, s_2 = FC(s_1);
s6, adaptively selecting branch dimension information of the different regions from the compact feature with an attention mechanism, and normalizing the weights with softmax to obtain the normalized information a_1 and a_2:
a_1 = e^(FC_1(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
a_2 = e^(FC_2(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
where FC_1() and FC_2() denote the fully connected layer operations corresponding to y and z, respectively;
s7, multiplying the normalized information by the corresponding local feature maps and summing to obtain the feature map U, U = Sum(a_1*y, a_2*z).
Preferably, the k values of the two independent k-nearest neighbor operations in step S2 are 15 and 25, respectively.
A point cloud dynamic region graph classification method performs convolution operations with the above point cloud dynamic region graph convolution method, taking the feature map U as the feature obtained by each convolution operation.
A point cloud dynamic region graph segmentation method performs convolution operations with the above point cloud dynamic region graph convolution method, taking the feature map U as the feature obtained by each convolution operation.
In summary, compared with the prior art, the invention has the following technical effects:
the invention adopts a new convolution operation form aiming at point cloud, and aggregates point characteristic information of a plurality of different neighborhoods by a nonlinear method according to a constructed point cloud picture structure, so that the neuron can select the area size in a self-adaptive manner. Compared with the prior technical scheme of analyzing on a single point, such as PointNet, the method constructs a plurality of different local neighborhood map structures, enables each neuron to adaptively select the proper neighborhood receptive field size, then performs similar convolution operation by utilizing the connection between each point and the neighborhood points to obtain local characteristics, can better combine surrounding neighborhood information, more effectively extract local geometric information, and finally improves the accuracy of classifying or dividing the point cloud data.
Drawings
FIG. 1 is a flow chart of the point cloud dynamic region graph convolution method of the invention;
fig. 2 is a k-nearest-neighbor graph of a local point cloud space.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the invention discloses a point cloud dynamic region graph convolution method, comprising the following steps:
s1, acquiring three-dimensional point cloud data X, X = {α_1, α_2, α_3, …, α_i, …, α_n}, where α_i represents the data of the i-th point, n represents the number of points in the three-dimensional point cloud data, α_i = {x_i, y_i, z_i}, and x_i, y_i and z_i denote the three-dimensional coordinates of α_i;
s2, performing two independent k nearest neighbor operations on the three-dimensional point cloud data X to obtain two local feature maps y and z, wherein k values of the two independent k nearest neighbor operations are different;
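The k-nearest-neighbor operation that builds each local graph can be sketched in NumPy. This is an illustrative sketch only: the function name and the brute-force O(n²) distance computation are assumptions, not the patent's implementation.

```python
import numpy as np

def knn_graph(points, k):
    """Build the k-nearest-neighbor graph of a point cloud.

    points: (n, 3) array of coordinates alpha_i.
    Returns an (n, k) index array: row i lists the k neighbors of alpha_i.
    """
    diff = points[:, None, :] - points[None, :, :]  # (n, n, 3) pairwise offsets
    dist2 = (diff ** 2).sum(axis=-1)                # (n, n) squared distances
    np.fill_diagonal(dist2, np.inf)                 # a point is not its own neighbor
    return np.argsort(dist2, axis=1)[:, :k]         # indices of the k closest points
```

Running this twice, with k = 15 and k = 25, gives the two independent branches of step S2.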
as shown in fig. 2, in the k-nearest-neighbor graph of the local point cloud space, α_{j1}, α_{j2}, …, α_{jk} are the k neighbor points of α_i, and e_ij is the edge feature, defined as e_ij = h_θ(α_i, α_j), where θ is a trainable parameter and h_θ(α_i, α_j): R^C × R^C → R^C is a nonlinear function, R^C being the aggregated feature space. The output of the i-th point of the graph convolution can be expressed as:
α′_i = max_{j:(i,j)∈E} h_θ(α_i, α_j)
similar to the convolution operation on 2D images, α_i is regarded as the central pixel of a convolution region, and the points α_j as the block of pixels surrounding α_i. α_i and α_j form the directed edge e_ij in the graph structure, and the designed edge function is defined as h_θ(α_i, α_j) = h_θ(α_i, α_j − α_i). Such a structure combines global shape information with local neighborhood information and is implemented by an MLP, and the max function is chosen as the aggregation function.
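Under the edge function h_θ(α_i, α_j − α_i) described above, one graph-convolution layer with max aggregation can be sketched as follows. A single shared linear layer with ReLU stands in for the MLP h_θ; all names and weight shapes are illustrative assumptions.

```python
import numpy as np

def edge_conv(feats, neighbors, W):
    """Graph convolution over a k-NN graph with edge features
    e_ij = h_theta(f_i, f_j - f_i) and max aggregation.

    feats:     (n, C)      per-point input features
    neighbors: (n, k)      neighbor indices from the k-NN graph
    W:         (2C, C_out) weights of the shared MLP (a single layer here)
    """
    n, k = neighbors.shape
    f_j = feats[neighbors]                            # (n, k, C) neighbor features
    f_i = np.repeat(feats[:, None, :], k, axis=1)     # (n, k, C) central features
    edge = np.concatenate([f_i, f_j - f_i], axis=-1)  # global + local information
    h = np.maximum(edge @ W, 0.0)                     # shared MLP h_theta with ReLU
    return h.max(axis=1)                              # max aggregation over neighbors
```

Concatenating the central feature with the neighbor offset is what lets the layer mix global shape information (f_i) with local neighborhood information (f_j − f_i).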
S3, fusing the two local feature maps y and z to obtain the fusion information T, T = Sum(y, z);
S4, pooling the fusion information T to obtain the feature information s_1, s_1 = MAX(T);
S5, performing compact dimensionality reduction on the feature information s_1 with a fully connected layer to obtain the compact feature s_2, s_2 = FC(s_1);
Steps S3 to S5 integrally encode the information from the multiple branches and pass it on to the next step, enabling the neurons to adaptively adjust the k-neighborhood size. Finally, the fully connected network compactly reduces the feature dimension, which not only allows accurate adaptive region selection but also reduces the model size and improves computational efficiency.
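Steps S3 to S5 amount to a few array operations. A minimal sketch under the same assumptions (element-wise sum fusion, max pooling over points, one fully connected layer with ReLU; names and shapes are illustrative):

```python
import numpy as np

def fuse_and_squeeze(y, z, W_fc):
    """S3-S5: fuse the branch feature maps, pool, and compactly reduce.

    y, z: (n, C) local feature maps from the two k-NN branches
    W_fc: (C, d) weights of the squeeze fully connected layer
    """
    T = y + z                        # S3: fusion, T = Sum(y, z)
    s1 = T.max(axis=0)               # S4: max pooling over points, s1 = MAX(T)
    s2 = np.maximum(s1 @ W_fc, 0.0)  # S5: compact reduction, s2 = FC(s1)
    return s2
```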
S6, adaptively selecting branch dimension information of the different regions from the compact feature with an attention mechanism, and normalizing the weights with softmax to obtain the normalized information a_1 and a_2:
a_1 = e^(FC_1(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
a_2 = e^(FC_2(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
where FC_1() and FC_2() denote the fully connected layer operations corresponding to y and z, respectively;
S7, multiplying the normalized information by the corresponding local feature maps and summing to obtain the feature map U, U = Sum(a_1*y, a_2*z).
a_1 ∈ 1×C′, a_2 ∈ 1×C′, y ∈ n×C′, z ∈ n×C′, where C′ denotes the number of feature channels.
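With these channel dimensions, steps S6 and S7 reduce to a two-way channel-wise softmax over per-branch fully connected outputs followed by a weighted sum. A sketch with assumed weight shapes:

```python
import numpy as np

def select_and_fuse(y, z, s2, W1, W2):
    """S6-S7: softmax attention over the two branches and weighted fusion.

    y, z:   (n, C) branch feature maps
    s2:     (d,)   compact feature from the squeeze step
    W1, W2: (d, C) the per-branch fully connected layers FC1 and FC2
    Returns the feature map U of shape (n, C).
    """
    logits = np.stack([s2 @ W1, s2 @ W2])                 # (2, C) branch logits
    a1, a2 = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax: a1 + a2 = 1
    return a1 * y + a2 * z                                # S7: U = Sum(a1*y, a2*z)
```

A convenient sanity check: with identical branch weights the attention degenerates to a plain average of the two branches.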
In a specific implementation, the k values of the two independent k-nearest-neighbor operations in step S2 are 15 and 25, respectively.
The average class accuracy and overall accuracy obtained with different numbers of k-nearest-neighbor operations and different k values are shown in Table 2; accordingly, the invention preferably uses two k-nearest-neighbor operations, with k values of 15 and 25, respectively.
TABLE 2
(Table 2 is reproduced only as an image in the original publication.)
The invention adopts a new form of convolution operation for point clouds: according to the constructed point cloud graph structure, it aggregates point feature information from several different neighborhoods by a nonlinear method, so that each neuron can adaptively select its region size. Compared with prior schemes that analyze individual points, such as PointNet, the method constructs several different local neighborhood graph structures, lets each neuron adaptively select a suitable neighborhood receptive field size, and then performs a convolution-like operation over the connections between each point and its neighbor points to obtain local features. It therefore combines surrounding neighborhood information better, extracts local geometric information more effectively, and ultimately improves the accuracy of classifying or segmenting point cloud data.
The invention also discloses a point cloud dynamic region graph classification method, which performs convolution operations with the above point cloud dynamic region graph convolution method, taking the feature map U as the feature obtained by each convolution operation.
In order to verify the effect of the disclosed point cloud dynamic region graph classification method, the classification task is evaluated on the ModelNet40 dataset. The dataset contains 12311 mesh CAD models from 40 classes, of which 9843 models are used for training and 2468 for testing. Following the experimental settings of DGCNN and other models, 1024 points are uniformly sampled from each mesh surface, and only the three-dimensional coordinates of the sampled points are used as the input data of the network.
Four DRG modules are used to extract local geometric features, and the features computed by each DRG module are passed to the next module for further computation. In each DRG module, the two k-neighborhood branches use k = 15 and k = 25, respectively. The features obtained from the DRG modules are then concatenated, giving 64+64+128+256 = 512-dimensional per-point features. Global features are then obtained with global max pooling and global average pooling, respectively. Finally, two fully connected layers (512, 256) are used for classification.
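The classification pipeline just described (concatenate the four DRG outputs, pool globally, then two fully connected layers) can be sketched as follows. The 40-way output matches ModelNet40; the weights are random placeholders, not the trained model, and the function name is an assumption.

```python
import numpy as np

def classification_head(per_point_feats, n_classes=40, seed=0):
    """Sketch of the classification pipeline: concatenate the four DRG
    feature maps (64+64+128+256 = 512 dims), pool globally, classify.
    Weights are random placeholders, not a trained model.
    """
    rng = np.random.default_rng(seed)
    f = np.concatenate(per_point_feats, axis=1)          # (n, 512) per-point features
    g = np.concatenate([f.max(axis=0), f.mean(axis=0)])  # global max + average pooling
    for d in (512, 256, n_classes):                      # FC(512) -> FC(256) -> scores
        W = rng.standard_normal((g.shape[0], d)) * 0.01
        g = np.maximum(g @ W, 0.0) if d != n_classes else g @ W
    return g                                             # (n_classes,) class scores
```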
All layers use LeakyReLU and batch normalization. The experiments also compare different numbers of k-neighborhoods to select the optimal number, and the model is evaluated on the test dataset. An SGD optimizer with an initial learning rate of 0.1 is used, and the learning rate is decayed to 0.001. The batch size is 24 for training and 16 for testing. The experimental results are shown in Table 1.
TABLE 1
(Table 1 is reproduced only as an image in the original publication.)
The invention also discloses a point cloud dynamic region graph segmentation method, which performs convolution operations with the above point cloud dynamic region graph convolution method, taking the feature map U as the feature obtained by each convolution operation.
In order to verify the effect of the disclosed point cloud dynamic region graph segmentation method, a part segmentation task is performed on the ShapeNet dataset. The task assigns each point in the point cloud to one of several part category labels of the object. The dataset contains 16881 3D shapes from 16 object categories, with 50 parts in total; 2048 points are sampled in each training sample, again following the experimental protocol of DGCNN and others. The outputs of the three DRGConv modules are concatenated into 2048-point features and then feature-transformed by an MLP (256, 256, 128). The batch size, activation function, learning rate and other settings are the same as in the classification network.
Using the same evaluation scheme as PointNet, the IoU of a shape is calculated by averaging the IoUs of the different parts occurring in that shape, and the IoU of a category is obtained by averaging the IoUs of all shapes belonging to that category. Finally, the mean IoU (mIoU) is calculated by averaging the IoUs of all test shapes. The method is compared with PointNet, PointNet++, PointCNN, DGCNN and Kd-Net; the experimental results are shown in Table 3.
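The IoU scheme described above can be made concrete with a short sketch; the function names, part-label encoding, and the convention for absent parts are illustrative assumptions.

```python
import numpy as np

def shape_iou(pred, label, parts):
    """IoU of one shape: average the per-part IoUs over the parts of the
    shape's object category (PointNet-style evaluation)."""
    ious = []
    for p in parts:
        inter = np.sum((pred == p) & (label == p))
        union = np.sum((pred == p) | (label == p))
        # A part absent from both prediction and ground truth counts as 1.
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def mean_iou(preds, labels, parts_per_shape):
    """mIoU: average the shape IoUs over all test shapes."""
    return float(np.mean([shape_iou(p, l, pp)
                          for p, l, pp in zip(preds, labels, parts_per_shape)]))
```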
TABLE 3
(Table 3 is reproduced only as an image in the original publication.)
The above is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several changes and modifications can be made without departing from the technical solution, and the technical solution of the changes and modifications should be considered as falling within the scope of the claims of the present application.

Claims (4)

1. A point cloud dynamic region graph convolution method, characterized by comprising the following steps:
s1, acquiring three-dimensional point cloud data X, X = {α_1, α_2, α_3, …, α_i, …, α_n}, where α_i represents the data of the i-th point, n represents the number of points in the three-dimensional point cloud data, α_i = {x_i, y_i, z_i}, and x_i, y_i and z_i denote the three-dimensional coordinates of α_i;
s2, performing two independent k nearest neighbor operations on the three-dimensional point cloud data X to obtain two local feature maps y and z, wherein k values of the two independent k nearest neighbor operations are different;
s3, fusing the two local feature maps y and z to obtain the fusion information T, T = Sum(y, z);
s4, pooling the fusion information T to obtain the feature information s_1, s_1 = MAX(T);
s5, performing compact dimensionality reduction on the feature information s_1 with a fully connected layer to obtain the compact feature s_2, s_2 = FC(s_1);
s6, adaptively selecting branch dimension information of the different regions from the compact feature with an attention mechanism, and normalizing the weights with softmax to obtain the normalized information a_1 and a_2:
a_1 = e^(FC_1(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
a_2 = e^(FC_2(s_2)) / (e^(FC_1(s_2)) + e^(FC_2(s_2)))
where FC_1() and FC_2() denote the fully connected layer operations corresponding to y and z, respectively;
s7, multiplying the normalized information by the corresponding local feature maps and summing to obtain the feature map U, U = Sum(a_1*y, a_2*z).
2. The point cloud dynamic region graph convolution method of claim 1, wherein the k values of the two independent k-nearest-neighbor operations in step S2 are 15 and 25, respectively.
3. A point cloud dynamic region graph classification method, characterized in that convolution operations are performed with the point cloud dynamic region graph convolution method of claim 1 or 2, and the feature map U is taken as the feature obtained by each convolution operation.
4. A point cloud dynamic region graph segmentation method, characterized in that convolution operations are performed with the point cloud dynamic region graph convolution method of claim 1 or 2, and the feature map U is taken as the feature obtained by each convolution operation.
CN202110261653.5A 2021-03-10 2021-03-10 Point cloud dynamic region graph convolution method, classification method and segmentation method Active CN112967296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110261653.5A CN112967296B (en) 2021-03-10 2021-03-10 Point cloud dynamic region graph convolution method, classification method and segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110261653.5A CN112967296B (en) 2021-03-10 2021-03-10 Point cloud dynamic region graph convolution method, classification method and segmentation method

Publications (2)

Publication Number Publication Date
CN112967296A 2021-06-15
CN112967296B 2022-11-15

Family

ID=76277614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110261653.5A Active CN112967296B (en) 2021-03-10 2021-03-10 Point cloud dynamic region graph convolution method, classification method and segmentation method

Country Status (1)

Country Link
CN (1) CN112967296B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516663A (en) * 2021-06-30 2021-10-19 同济大学 Point cloud semantic segmentation method and device, electronic equipment and storage medium
CN113628217A (en) * 2021-08-12 2021-11-09 江南大学 Three-dimensional point cloud segmentation method based on image convolution and integrating direction and distance

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150006117A1 (en) * 2013-07-01 2015-01-01 Here Global B.V. Learning Synthetic Models for Roof Style Classification Using Point Clouds
CN106682233A (en) * 2017-01-16 2017-05-17 华侨大学 Method for Hash image retrieval based on deep learning and local feature fusion
CN107451616A (en) * 2017-08-01 2017-12-08 西安电子科技大学 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth
CN108198244A (en) * 2017-12-20 2018-06-22 中国农业大学 A kind of Apple Leaves point cloud compressing method and device
CN109035329A (en) * 2018-08-03 2018-12-18 厦门大学 Camera Attitude estimation optimization method based on depth characteristic
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109410321A (en) * 2018-10-17 2019-03-01 大连理工大学 Three-dimensional rebuilding method based on convolutional neural networks
CN110081890A (en) * 2019-05-24 2019-08-02 长安大学 A kind of dynamic K arest neighbors map-matching method of combination depth network
CN110188802A (en) * 2019-05-13 2019-08-30 南京邮电大学 SSD algorithm of target detection based on the fusion of multilayer feature figure
CN110197223A (en) * 2019-05-29 2019-09-03 北方民族大学 Point cloud data classification method based on deep learning
CN110443842A (en) * 2019-07-24 2019-11-12 大连理工大学 Depth map prediction technique based on visual angle fusion
CN111027559A (en) * 2019-10-31 2020-04-17 湖南大学 Point cloud semantic segmentation method based on expansion point convolution space pyramid pooling
CN111242208A (en) * 2020-01-08 2020-06-05 深圳大学 Point cloud classification method, point cloud segmentation method and related equipment
CN111476226A (en) * 2020-02-29 2020-07-31 新华三大数据技术有限公司 Text positioning method and device and model training method
CN111583263A (en) * 2020-04-30 2020-08-25 北京工业大学 Point cloud segmentation method based on joint dynamic graph convolution
CN111666836A (en) * 2020-05-22 2020-09-15 北京工业大学 High-resolution remote sensing image target detection method of M-F-Y type lightweight convolutional neural network
CN111723814A (en) * 2020-06-05 2020-09-29 中国科学院自动化研究所 Cross-image association based weak supervision image semantic segmentation method, system and device
CN111753698A (en) * 2020-06-17 2020-10-09 东南大学 Multi-mode three-dimensional point cloud segmentation system and method
CN111915619A (en) * 2020-06-05 2020-11-10 华南理工大学 Full convolution network semantic segmentation method for dual-feature extraction and fusion
CN112036447A (en) * 2020-08-11 2020-12-04 复旦大学 Zero-sample target detection system and learnable semantic and fixed semantic fusion method
CN112149725A (en) * 2020-09-18 2020-12-29 南京信息工程大学 Spectral domain graph convolution 3D point cloud classification method based on Fourier transform
CN112184548A (en) * 2020-09-07 2021-01-05 中国科学院深圳先进技术研究院 Image super-resolution method, device, equipment and storage medium
CN112329771A (en) * 2020-11-02 2021-02-05 元准智能科技(苏州)有限公司 Building material sample identification method based on deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHENGLI ZHAI ET AL.: "MULTI-SCALE DYNAMIC GRAPH CONVOLUTION NETWORK FOR POINT CLOUDS CLASSIFICATION", 《IEEE ACCESS》 *
YU Ting et al.: "Point Cloud Model Recognition and Classification Based on a K-Nearest-Neighbor Convolutional Neural Network", Laser & Optoelectronics Progress (《激光与光电子学进展》) *


Also Published As

Publication number Publication date
CN112967296B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN111161364B (en) Real-time shape completion and attitude estimation method for single-view depth map
CN112967296B (en) Point cloud dynamic region graph convolution method, classification method and segmentation method
CN111768415A (en) Image instance segmentation method without quantization pooling
CN112785526B (en) Three-dimensional point cloud restoration method for graphic processing
CN111695494A (en) Three-dimensional point cloud data classification method based on multi-view convolution pooling
CN110751195B (en) Fine-grained image classification method based on improved YOLOv3
CN112419191B (en) Image motion blur removing method based on convolution neural network
CN111401380A (en) RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization
Rios et al. Feature visualization for 3D point cloud autoencoders
CN111652273A (en) Deep learning-based RGB-D image classification method
CN114419413A (en) Method for constructing sensing field self-adaptive transformer substation insulator defect detection neural network
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
CN116091823A (en) Single-feature anchor-frame-free target detection method based on fast grouping residual error module
CN112509021A (en) Parallax optimization method based on attention mechanism
CN114373104A (en) Three-dimensional point cloud semantic segmentation method and system based on dynamic aggregation
CN115830375A (en) Point cloud classification method and device
CN113537119B (en) Transmission line connecting part detection method based on improved Yolov4-tiny
CN116188882A (en) Point cloud up-sampling method and system integrating self-attention and multipath path diagram convolution
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network
CN113808006B (en) Method and device for reconstructing three-dimensional grid model based on two-dimensional image
CN113723468B (en) Object detection method of three-dimensional point cloud
CN112365456B (en) Transformer substation equipment classification method based on three-dimensional point cloud data
CN115272673A (en) Point cloud semantic segmentation method based on three-dimensional target context representation
CN115131245A (en) Point cloud completion method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230628

Address after: No. 1811, 18th Floor, Building 19, Section 1201, Lushan Avenue, Wan'an Street, Tianfu New District, Chengdu, Sichuan, China (Sichuan) Pilot Free Trade Zone, 610213, China

Patentee after: Sichuan Jiulai Technology Co.,Ltd.

Address before: No. 69 lijiatuo Chongqing District of Banan City Road 400054 red

Patentee before: Chongqing University of Technology

TR01 Transfer of patent right