CN118154825A - Rotation-invariant point cloud data processing method


Info

Publication number: CN118154825A
Application number: CN202410586756.2A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张燕, 赵庆国, 田茂义, 李文君, 杨俊涛
Current Assignee: Shandong University of Science and Technology
Original Assignee: Shandong University of Science and Technology
Application filed by Shandong University of Science and Technology
Priority to CN202410586756.2A
Publication of CN118154825A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a rotation-invariant point cloud data processing method, belonging to the technical field of point cloud data processing and used for processing point cloud data. The method projects the point cloud into an eigenvector space to obtain new coordinate values; by constructing the principal coordinate space of the point cloud, the consistency of the finally extracted point cloud features is preserved to the greatest extent, and the proposed point cloud classification network reaches an accuracy with small training samples comparable to that obtained with large training samples. The method achieves rotation invariance of the point cloud, and its zero-parameter network module requires no additional parameters or network training, greatly reducing the parameter-tuning workload and improving the efficiency of feature extraction.

Description

Rotation-invariant point cloud data processing method
Technical Field
The invention discloses a rotation-invariant point cloud data processing method and belongs to the technical field of point cloud data processing.
Background
Automatic classification and recognition of point clouds is fundamental to point cloud data applications, yet automatic processing of point clouds is challenging because of high sensor noise, complex scenes, and the unordered, unstructured nature of the data, with uneven density distribution and arbitrary spatial position. Because of coordinate translation and, especially, rotation, the features extracted from point clouds of the same object under different coordinate systems are inconsistent, which greatly degrades point cloud classification and recognition accuracy. In view of this, the prior art designs rotation-invariant modules that pursue rotation invariance of the point cloud by training network layers, but the final effect falls short of expectations.
For the rotation-invariance problem, the prior art mainly relies on hand-crafted feature descriptors (such as the scale-invariant feature transform matching algorithm, SIFT) to process point cloud data. Although this approach achieves some effect, it has significant limitations. On one hand, features extracted by manually designed rules have weak expressive power, and the resulting models generalize poorly; on the other hand, selecting and using feature descriptors requires a certain expertise, which also limits the development of such methods.
With the development of deep learning, the prior art combines deep learning with point cloud processing to address the rotation problem. One approach processes the point cloud directly: PointNet, a deep learning model for unordered point cloud data, introduces a module named T-Net to learn an alignment transformation of the input. T-Net aims to transform the input point cloud into a canonical reference coordinate system, eliminating the influence of different poses and achieving rotation invariance. Although T-Net realizes rotation invariance of point cloud data to some extent, it suffers from high computational complexity, difficult parameter tuning, and overfitting. Another approach applies image-style convolution to point cloud data: the PointCNN network designs an X-Conv module that performs dynamic convolution on the point cloud and weights different points with adaptive weights so the network better captures local features, but X-Conv likewise does not achieve the desired effect.
Disclosure of Invention
The invention aims to provide a rotation-invariant point cloud data processing method to solve the prior-art problem that, due to coordinate translation and especially rotation, the features extracted from point clouds of the same object under different coordinate systems are inconsistent, greatly degrading point cloud classification and recognition accuracy.
A rotation-invariant point cloud data processing method comprises: inputting a point cloud, performing farthest point sampling, constructing local neighborhoods, constructing a local coordinate system with a designed local coordinate transformation module and rotation-invariant module, performing feature aggregation with an inverse distance weighting module, classifying through fully connected layers, and outputting the result.
Farthest point sampling comprises: S1, selecting an initial point: randomly select a point from the point cloud as the initial selected point;
S2, calculating distances: compute the Euclidean distance from all points to the selected points and take the point farthest from them as the new selected point;
S3, updating the selected point set: add the farthest point chosen in S2 to the selected point set;
S4, repeating the selection: repeat S2 and S3 until the number of selected points reaches the preset count.
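Steps S1-S4 can be sketched in a few lines of NumPy; the function name and array-based interface are illustrative, not part of the invention, and step S2 is read, as is standard for FPS, as the distance from each point to its nearest already-selected point:

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Steps S1-S4: pick m representative points from an (n, 3) array."""
    n = points.shape[0]
    selected = [np.random.randint(n)]           # S1: random initial point
    dist = np.full(n, np.inf)                   # distance to the selected set so far
    for _ in range(m - 1):
        d = np.linalg.norm(points - points[selected[-1]], axis=1)
        dist = np.minimum(dist, d)              # S2: refresh distances with the newest point
        selected.append(int(np.argmax(dist)))   # S2/S3: farthest point joins the set
    return points[selected]                     # S4: stop once m points are chosen
```

Because the distance of every already-selected point drops to zero, the loop never reselects a point and the m samples spread out across the cloud.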
Constructing a local neighborhood comprises a ball query: with a selected point as the center, set a radius and find the k points inside the ball closest to the center; if more than k points fall inside the ball, take the k closest to the center; if fewer than k, duplicate the point closest to the center until the preset count is reached.
The local coordinate transformation module obtains local coordinates by subtracting the center point coordinates from the neighborhood point coordinates.
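The ball query and the local coordinate transformation can be sketched together, assuming NumPy arrays; `ball_query_local` is a hypothetical name, not from the source:

```python
import numpy as np

def ball_query_local(points, center, radius, k):
    """Ball query followed by the local coordinate transformation: gather k
    neighbors of `center` within `radius` (duplicating the closest point when
    fewer than k fall inside the ball), then subtract the center coordinates."""
    d = np.linalg.norm(points - center, axis=1)
    order = np.argsort(d)                       # indices sorted by distance
    inside = order[d[order] <= radius]          # points inside the ball, closest first
    if len(inside) == 0:                        # degenerate case: fall back to nearest point
        inside = order[:1]
    if len(inside) >= k:
        idx = inside[:k]                        # keep the k closest points in the ball
    else:
        pad = np.full(k - len(inside), inside[0])
        idx = np.concatenate([inside, pad])     # duplicate the closest point up to k
    return points[idx] - center                 # local coordinates relative to the center
```

When `center` is itself a cloud point, the first returned row is the origin, which is exactly the translation-removal effect the module is designed for.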
The rotation-invariant module comprises:
T1, input three-dimensional point cloud data of size n rows by 3 columns, denoted X0; subtract the mean of each feature of X0 to de-center it and obtain a new matrix X1;
T2, compute the covariance matrix C1 from X1: C1 = X1^T X1 / (n - 1);
T3, compute the eigenvalues of C1 and the corresponding eigenvectors;
T4, sort the eigenvalues of C1 in descending order to obtain the eigenvector matrix corresponding to the sorted eigenvalues;
T5, project X0 into the sorted eigenvector space to obtain the projection values;
T6, take the absolute values of the projection values.
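Steps T1-T6 amount to a PCA projection followed by an absolute value; a minimal NumPy sketch under the assumption of the sample normalization 1/(n-1) for the covariance (the normalization does not affect the eigenvectors):

```python
import numpy as np

def rotation_invariant_coords(X0):
    """Zero-parameter rotation-invariant transform (steps T1-T6)."""
    n = X0.shape[0]
    X1 = X0 - X0.mean(axis=0)                   # T1: de-center each feature
    C1 = X1.T @ X1 / (n - 1)                    # T2: 3x3 covariance matrix
    vals, vecs = np.linalg.eigh(C1)             # T3: eigenvalues and eigenvectors
    vecs = vecs[:, np.argsort(vals)[::-1]]      # T4: sort axes, largest eigenvalue first
    proj = X0 @ vecs                            # T5: project into the sorted eigenvector space
    return np.abs(proj)                         # T6: absolute value removes the sign ambiguity
```

A rotation of the input only re-signs the principal axes (for non-degenerate eigenvalues), so the absolute projection values are unchanged under rotation, with no trained parameters anywhere.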
The inverse distance weighting module comprises:
L1, compute the Euclidean distance d_i between each neighborhood point and the center point, i = 1, ..., k, where k is the number of points in the neighborhood;
L2, compute the weight of each neighborhood point from its Euclidean distance to the center point: w_i = (1/d_i) / Σ_j (1/d_j);
L3, weight the feature f_i of each point: f'_i = w_i · f_i, where f'_i is the feature of point i after weighting;
L4, feature aggregation: aggregate the features of the neighborhood points to the center point: f'_0 = Σ_i f'_i, where f'_0 is the center point feature after aggregation.
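Steps L1-L4 can be sketched as follows, assuming per-neighborhood NumPy arrays; the small `eps` guarding against a zero distance is an added assumption, not from the source:

```python
import numpy as np

def idw_aggregate(neighbors, features, center, eps=1e-8):
    """Steps L1-L4: inverse-distance-weighted aggregation of neighborhood
    features into the center point."""
    d = np.linalg.norm(neighbors - center, axis=1) + eps  # L1: distances (eps avoids /0)
    inv = 1.0 / d
    w = inv / inv.sum()                                   # L2: w_i = (1/d_i) / sum_j (1/d_j)
    weighted = w[:, None] * features                      # L3: f'_i = w_i * f_i
    return weighted.sum(axis=0)                           # L4: f'_0 = sum_i f'_i
```

The weights sum to 1 and grow as a neighbor approaches the center, matching the stated design: the closer the neighborhood point, the greater its influence.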
Compared with the prior art, the invention has the following beneficial effects: it projects the point cloud into an eigenvector space to obtain new coordinate values, and by constructing the principal coordinate space of the point cloud it preserves the consistency of the finally extracted point cloud features to the greatest extent; the proposed point cloud classification network reaches an accuracy with small training samples comparable to that obtained with large training samples; the invention achieves rotation invariance of the point cloud, and its zero-parameter network module requires no additional parameters or network training, greatly reducing the parameter-tuning workload and improving the efficiency of feature extraction.
Drawings
FIG. 1 is a technical flow chart of the present invention;
FIG. 2 is a flow chart of farthest point sampling;
FIG. 3 is a flow chart of the rotation-invariant module;
FIG. 4 is a schematic diagram of farthest point sampling;
FIG. 5 is a schematic diagram of a local neighborhood;
FIG. 6 is a schematic diagram of the local coordinate transformation;
FIG. 7 is a schematic diagram of inverse distance weighting.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the present invention will be clearly and completely described below, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The application provides a point cloud rotation-invariant network consisting of downsampling, local neighborhood construction, local coordinate system construction, feature aggregation, and classification. First, the input point cloud is downsampled by farthest point sampling to obtain representative points, and each selected representative point serves as the center of a local neighborhood formed by ball query. Second, a local coordinate system is constructed and the rotation-invariant module is introduced to build the principal coordinate space, eliminating the influence of point cloud translation and preserving the consistency of the extracted point cloud features to the greatest extent.
For a group of input point clouds, representative points of the target number are obtained by farthest point sampling (FPS). The representative points serve as center points, so there are as many neighborhoods as center points; a ball query specifies the number of points per neighborhood and performs nearest-neighbor queries on the surrounding points. Farthest point sampling selects representative points from a point set and is mainly used to reduce point cloud density and computational complexity. The points selected by FPS are more representative, so fewer points can describe the entire dataset while preserving important structural information.
After downsampling and constructing local neighborhoods, the application designs a local coordinate transformation module and a rotation-invariant module to counter the influence of translation and rotation of the three-dimensional point cloud. The former subtracts the center point coordinates from the neighborhood point coordinates to obtain local coordinates, which improves alignment between point clouds and better captures local structure and detail. The latter projects the original point cloud onto the eigenvectors obtained by constructing the principal coordinate space of the point cloud, and the resulting projection values serve as new coordinate values. Unlike T-Net and X-Conv, it requires no network training: it is a zero-parameter network module.
The point cloud rotation-invariant module of the application works as follows: first obtain the de-centered matrix from the point cloud data, then compute the covariance matrix and its eigenvalues and eigenvectors; sort the eigenvalues in descending order to obtain the corresponding eigenvectors; then project the original point cloud data into the sorted eigenvector space to establish the principal coordinate space; and, considering the sign ambiguity of the eigenvector directions, take the absolute values of the projection values as the coordinate values of the original point cloud in the principal coordinate space.
Yet another way to construct the principal coordinate space is to obtain the eigenvectors by singular value decomposition, in the specific form A = U Σ V^T,
where the columns of U consist of the unit eigenvectors of AA^T, the columns of V consist of the unit eigenvectors of A^T A, and the diagonal elements of Σ are the square roots of the eigenvalues of AA^T (or A^T A), i.e. the singular values, arranged in descending order. The right singular vector matrix V is obtained directly from this decomposition, and the original data are then projected onto the eigenvector space to obtain the new coordinate values.
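A sketch of this SVD route, assuming NumPy; de-centering before the decomposition (so that V matches the eigenvectors of the covariance matrix) and the final absolute value are assumptions carried over from the eigen-decomposition route:

```python
import numpy as np

def principal_coords_svd(A):
    """Principal coordinate space via SVD: decompose the de-centered data as
    U @ diag(S) @ Vt; the rows of Vt are the right singular vectors, already
    sorted by singular value (largest first). Project the original data onto
    them and take absolute values, mirroring the eigen-decomposition route."""
    U, S, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
    return np.abs(A @ Vt.T)
```

Note that `np.linalg.svd` returns the singular values already sorted in descending order, so the explicit eigenvalue sort of step T4 is unnecessary here.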
After the local coordinate system is constructed, a multi-layer perceptron (MLP) lifts the feature dimension of each neighborhood several times. To cope with the disorder of the point cloud, some existing networks aggregate the dimension-lifted feature information by max pooling, but max pooling easily loses information.
To reduce this loss, better account for the feature information of each point, and realize feature aggregation within the neighborhood, the application designs an inverse distance weighting module. The module fully considers the feature information of the different points in a neighborhood and the influence of each neighborhood point on the center point. Its main idea is to assign weights according to the distance between each neighborhood point and the center point and to apply those weights to the features of each neighborhood point, so that each point contributes differently; the weighted features of the neighborhood points are finally accumulated into the center point's features, realizing neighborhood feature aggregation. Specifically, the closer a neighborhood point is to the center point, the larger its weight; the farther away, the smaller. Finally, the extracted global features are fed into a fully connected network for classification.
The present invention has been tested on the ModelNet40 dataset, which contains 40 object classes (e.g., airplane, table, plant) and 12,311 CAD models represented by triangular meshes, divided into 9,843 training samples and 2,468 test samples.
To better describe classification accuracy, the invention uses the overall accuracy (Overall Accuracy, OA) and the mean class accuracy (Mean Class Accuracy, mAcc) to measure model performance. The overall accuracy is the proportion of point clouds the model classifies correctly over the entire dataset, computed as:
OA = number of correctly classified samples / total number of samples
The mean class accuracy considers the classification performance of each category and is a finer-grained evaluation: first compute the classification accuracy of each category, then take their average as the model's performance metric:
mAcc = (1/C) Σ_{c=1}^{C} Acc_c
where Acc denotes classification accuracy, Acc = (TP + TN) / (TP + TN + FP + FN), and C is the number of categories; TP denotes samples predicted positive and actually positive; TN samples predicted negative and actually negative; FP samples predicted positive but actually negative; FN samples predicted negative but actually positive.
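The two metrics can be computed directly from label arrays; this sketch and its function names are illustrative, not from the source:

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """OA: fraction of samples classified correctly over the whole dataset."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def mean_class_accuracy(y_true, y_pred):
    """mAcc: per-class accuracy averaged over the classes present in y_true."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(accs))
```

For y_true = [0, 0, 1, 1, 1] and y_pred = [0, 1, 1, 1, 1], OA is 0.8 while mAcc is 0.75, since the minority class is only half correct; the gap illustrates why mAcc is the finer-grained metric.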
To verify classification accuracy, the invention performed classification experiments on the ModelNet40 dataset, obtaining an overall accuracy of 92.0%, which is 2.8% higher than PointNet, and a mean class accuracy of 88.8%, also 2.8% higher than PointNet, as shown in Table 1.
Table 1 Classification accuracy of network models on ModelNet40
In the table, MVCNN, VoxNet and PointNet are all prior-art methods.
The invention provides a novel point cloud rotation-invariant network that projects point clouds into an eigenvector space to obtain new coordinate values; by constructing the principal coordinate space of the point cloud, the consistency of the finally extracted point cloud features is preserved to the greatest extent. The invention fully addresses the point cloud rotation problem that point cloud processing network models must solve. At present, most methods adopt complex network layers and achieve rotation invariance through continuous training or parameter tuning, but such tuning is complicated and labor-intensive. The method provided by the invention realizes rotation invariance of the point cloud, and its zero-parameter rotation-invariant module requires no additional parameters or network training, greatly reducing the tuning workload and improving the efficiency of feature extraction.
The technical flow is shown in FIG. 1: input the point cloud, perform farthest point sampling, construct local neighborhoods, construct the local coordinate system, aggregate features, and output the result. The farthest point sampling flow is shown in FIG. 2: after a point cloud is input, an initial point is selected randomly, the Euclidean distances from all points to the selected points are computed, the point farthest from the selected points is chosen and added to the selected point set; if the target sampling number is reached, the result is output, otherwise the distance computation is repeated. The flow of the rotation-invariant module is shown in FIG. 3: using the zero-parameter point cloud rotation-invariant module, the point cloud is de-centered, the covariance matrix is computed, its eigenvalues and eigenvectors are calculated, the eigenvalues are sorted in descending order to obtain the corresponding eigenvector matrix, the data are projected into the sorted eigenvector space, the projection values are computed, absolute values are taken, and the result is output.
Fig. 4 is a schematic diagram of farthest point sampling. Select an initial point: randomly select a point as the selected sampling point, as in (a). Calculate distances: compute the Euclidean distance from all points to the selected point and find the point farthest from it as a new representative point, as in (b), (c) and (d). Update the selected point set: add the just-selected farthest point to the set, as in (e). Repeat the selection: repeat steps 2 and 3 until the required number of points is reached. Fig. 5 is a schematic diagram of a local neighborhood: p0 is the center point of the neighborhood, and p1, p2 and p3 are neighbors within it. Fig. 6 is a schematic diagram of the local coordinate transformation: (p1-p0), (p2-p0) and (p3-p0) are the neighbor points after local coordinate transformation. Fig. 7 is a schematic diagram of inverse distance weighting: (p0, f0) is the center point and its features, d_i is the Euclidean distance from neighbor i to the center point, w_i is the weight of each neighbor, and (p0, f'0) is the center point and its features after feature aggregation.
The above embodiments are only for illustrating the technical aspects of the present invention, not for limiting the same, and although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may be modified or some or all of the technical features may be replaced with other technical solutions, which do not depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A rotation-invariant point cloud data processing method, characterized by comprising: inputting a point cloud, performing farthest point sampling, constructing local neighborhoods, constructing a local coordinate system with a designed local coordinate transformation module and rotation-invariant module, performing feature aggregation with an inverse distance weighting module, classifying through fully connected layers, and outputting the result.
2. The rotation-invariant point cloud data processing method of claim 1, wherein performing farthest point sampling comprises: S1, selecting an initial point: randomly select a point from the point cloud as the initial selected point;
S2, calculating distances: compute the Euclidean distance from all points to the selected points and take the point farthest from them as the new selected point;
S3, updating the selected point set: add the farthest point chosen in S2 to the selected point set;
S4, repeating the selection: repeat S2 and S3 until the number of selected points reaches the preset count.
3. The rotation-invariant point cloud data processing method of claim 2, wherein constructing a local neighborhood comprises a ball query: with a selected point as the center, set a radius and find the k points inside the ball closest to the center; if more than k points fall inside the ball, take the k closest to the center; if fewer than k, duplicate the point closest to the center until the preset count is reached.
4. The rotation-invariant point cloud data processing method of claim 3, wherein the local coordinate transformation module obtains local coordinates by subtracting the center point coordinates from the neighborhood point coordinates.
5. The rotation-invariant point cloud data processing method of claim 4, wherein the rotation-invariant module comprises:
T1, inputting three-dimensional point cloud data of size n rows by 3 columns, denoted X0; subtracting the mean of each feature of X0 to de-center it and obtain a new matrix X1;
T2, computing the covariance matrix C1 from X1: C1 = X1^T X1 / (n - 1).
6. The rotation-invariant point cloud data processing method of claim 5, wherein the rotation-invariant module further comprises:
T3, computing the eigenvalues of C1 and the corresponding eigenvectors;
T4, sorting the eigenvalues of C1 in descending order to obtain the eigenvector matrix corresponding to the sorted eigenvalues.
7. The rotation-invariant point cloud data processing method of claim 6, wherein the rotation-invariant module further comprises:
T5, projecting X0 into the sorted eigenvector space to obtain the projection values;
T6, taking the absolute values of the projection values.
8. The rotation-invariant point cloud data processing method of claim 7, wherein the inverse distance weighting module comprises:
L1, computing the Euclidean distance d_i between each neighborhood point and the center point, i = 1, ..., k, where k is the number of points in the neighborhood;
L2, computing the weight of each neighborhood point from its Euclidean distance to the center point: w_i = (1/d_i) / Σ_j (1/d_j).
9. The rotation-invariant point cloud data processing method of claim 8, wherein the inverse distance weighting module further comprises:
L3, weighting the feature f_i of each point:
f'_i = w_i · f_i;
where f'_i is the feature of point i after weighting.
10. The rotation-invariant point cloud data processing method of claim 9, wherein the inverse distance weighting module further comprises:
L4, feature aggregation: aggregating the features of the neighborhood points to the center point:
f'_0 = Σ_i f'_i;
where f'_0 is the center point feature after aggregation.
CN202410586756.2A 2024-05-13 2024-05-13 Rotation-invariant point cloud data processing method Pending CN118154825A (en)

Priority Applications (1)

Application Number: CN202410586756.2A; Priority Date / Filing Date: 2024-05-13; Title: Rotation-invariant point cloud data processing method

Publications (1)

Publication Number: CN118154825A; Publication Date: 2024-06-07

Family ID: 91299390

Country Status (1): CN 118154825A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101447030A (en) * 2008-11-12 2009-06-03 山东理工大学 Method for quickly querying scattered point cloud local profile reference data
US20170193692A1 (en) * 2015-12-30 2017-07-06 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Three-dimensional point cloud model reconstruction method, computer readable storage medium and device
CN112101278A (en) * 2020-09-25 2020-12-18 湖南盛鼎科技发展有限责任公司 Hotel point cloud classification method based on k nearest neighbor feature extraction and deep learning
CN112164101A (en) * 2020-09-29 2021-01-01 北京环境特性研究所 Three-dimensional point cloud matching method and device
CN113362340A (en) * 2021-06-04 2021-09-07 中国计量大学 Dynamic space sphere searching point cloud K neighborhood method
WO2023060683A1 (en) * 2021-10-13 2023-04-20 东南大学 Three-dimensional point cloud model-based method for measuring surface flatness of prefabricated beam segment
CN116524301A (en) * 2023-05-06 2023-08-01 浙江大学 3D point cloud scene instance shape searching and positioning method based on contrast learning
CN116778220A (en) * 2023-04-27 2023-09-19 广东工业大学 Three-dimensional point cloud classification and segmentation method and network based on enhanced point cloud rotation invariance


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YINLONG LIU et al.: "Efficient Global Point Cloud Registration by Matching Rotation Invariant Features Through Translation Search", ECCV 2018, 6 October 2018 (2018-10-06), pages 460-474, XP047488446, DOI: 10.1007/978-3-030-01258-8_28 *
TANG ZHIRONG; LIU MINGZHE; JIANG YUE; ZHAO FEIXIANG; ZHAO CHENGQIANG: "Point cloud registration algorithm based on canonical correlation analysis", Chinese Journal of Lasers, no. 04, 8 January 2019 (2019-01-08)
WANG YUJIAN; WU MINGMING; GAO QIAN: "3D point cloud registration algorithm based on locality-preserving PCA", Optical Technique, no. 05, 15 September 2018 (2018-09-15)

Similar Documents

Publication Publication Date Title
CN109685152B (en) Image target detection method based on DC-SPP-YOLO
WO2020177432A1 (en) Multi-tag object detection method and system based on target detection network, and apparatuses
CN109993748B (en) Three-dimensional grid object segmentation method based on point cloud processing network
CN110287873B (en) Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment
CN108665491B (en) Rapid point cloud registration method based on local reference points
CN111401468B (en) Weight self-updating multi-view spectral clustering method based on shared neighbor
AU2020101435A4 A panoramic vision system based on the UAV platform
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN110543581A (en) Multi-view three-dimensional model retrieval method based on non-local graph convolution network
CN110781920B (en) Method for identifying semantic information of cloud components of indoor scenic spots
CN112328715A (en) Visual positioning method, training method of related model, related device and equipment
CN111368733B (en) Three-dimensional hand posture estimation method based on label distribution learning, storage medium and terminal
CN114120067A (en) Object identification method, device, equipment and medium
Liu et al. Scene recognition mechanism for service robot adapting various families: A cnn-based approach using multi-type cameras
CN111368637A (en) Multi-mask convolution neural network-based object recognition method for transfer robot
Zhang et al. AKConv: Convolutional kernel with arbitrary sampled shapes and arbitrary number of parameters
Wang et al. Ovpt: Optimal viewset pooling transformer for 3d object recognition
CN112418250A (en) Optimized matching method for complex 3D point cloud
CN118154825A (en) Rotation-unchanged point cloud data processing method
CN115690439A (en) Feature point aggregation method and device based on image plane and electronic equipment
Dalara et al. Entity Recognition in Indian Sculpture using CLAHE and machine learning
Wang et al. A geometry feature aggregation method for point cloud classification and segmentation
CN115100136A (en) Workpiece category and pose estimation method based on YOLOv4-tiny model
CN111414802B (en) Protein data characteristic extraction method
Caglayan et al. 3D convolutional object recognition using volumetric representations of depth data
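The abstract describes achieving rotation invariance by projecting the point cloud into its eigenvector (principal-component) space before feature extraction. The sketch below is a generic PCA canonicalization of that idea, not the patent's exact construction; the function name, the sample data, and the third-moment sign-fixing convention (added so the output is deterministic despite eigenvector sign ambiguity) are all illustrative assumptions.

```python
import numpy as np

def canonicalize(points):
    """Project a point cloud into its principal-component (eigenvector) space.

    Centering removes translation; projecting onto the covariance
    eigenvectors removes rotation. Eigenvector signs are fixed by an
    assumed third-moment convention to make the result deterministic.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)      # columns sorted by ascending eigenvalue
    proj = centered @ eigvecs             # new coordinates in eigenvector space
    # Resolve the per-axis sign ambiguity: make each axis's skew positive.
    signs = np.sign((proj ** 3).sum(axis=0))
    signs[signs == 0] = 1.0
    return proj * signs

# A random cloud and a rotated copy yield the same canonical coordinates,
# which is the rotation-invariance property the abstract relies on.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
a = canonicalize(pts)
b = canonicalize(pts @ R.T)
assert np.allclose(a, b, atol=1e-6)
```

Because any rotation of the input only rotates the covariance eigenvectors by the same amount, downstream features computed from these canonical coordinates are unchanged by the input's orientation, and the projection itself needs no learned parameters.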

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination