CN112767457A - Principal component analysis-based plane point cloud matching method and device - Google Patents
Principal component analysis-based plane point cloud matching method and device
- Publication number: CN112767457A (application number CN202110097200.3A)
- Authority
- CN
- China
- Prior art keywords
- point clouds
- feature point
- point cloud
- feature
- component analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/32—Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application discloses a plane point cloud matching method and device based on principal component analysis. The method comprises the following steps: extracting the principal components of two feature point clouds to obtain the principal directions of the two feature point clouds; performing initial registration on the principal directions of the two feature point clouds by using the PCA principal component analysis method; and intercepting pixels of unit length around the two feature point clouds, inputting the pixels into a CNN neural network for comparison, and determining a matching result of the two feature point clouds. The device comprises a principal component analysis module, an initial registration module, and a convolutional neural network module. The method and device register point cloud data of large volume robustly, and the time and space complexity of the algorithm is controllable. By combining the feature extraction of PCA with the pattern matching of CNN, efficient and high-resolution point cloud learning is achieved.
Description
Technical Field
The application relates to the technical field of computer data processing, and in particular to a plane point cloud matching method and device based on principal component analysis.
Background
In the fields of computer vision and SLAM (Simultaneous Localization and Mapping), point cloud data collected by a depth camera is often incomplete, rotated, translated, or misaligned, so local point clouds must be registered to obtain a complete three-dimensional point cloud. Merging the point sets acquired from all viewing angles into a unified coordinate system, that is, registering complete three-dimensional point cloud data, has long been a research hotspot and is a primary task of many applications, including object pose estimation, three-dimensional scene reconstruction, and visual SLAM.
Existing methods for registering point clouds by their geometric characteristics include: 1) constructing histograms of geometric features, for example the calculation method based on the Fast Point Feature Histogram (FPFH), which describes the spatial difference between a query point and its neighborhood points through a multidimensional histogram; and 2) distance-based ICP (Iterative Closest Point), which repeatedly selects corresponding point pairs by a least-squares method and calculates the optimal rigid-body transformation until the convergence accuracy required for correct registration is met. For point clouds with large changes in angle and scale, these methods easily produce wrong corresponding points or become trapped in local optima, causing registration to fail. There are also registration algorithms based on geometric shapes, such as 4PCS (4-Points Congruent Sets), a coarse point cloud registration method that establishes correspondences by finding congruent coplanar quadrilaterals in the two point sets; however, for point sets with small overlapping areas, it is often difficult to find correspondences. The Super4PCS algorithm, based on the 4PCS method, improves on this problem, but its time complexity is far higher than that of similar algorithms. These algorithms can obtain good results when registering point cloud data of simple scenes. In practical application scenarios, however, such as inter-frame registration and loop closure in SLAM, the point cloud data acquired from the depth camera often exhibits only partial overlap and large rotation and translation angles, and these algorithms then have difficulty achieving ideal results.
Disclosure of Invention
It is an object of the present application to overcome, or at least partially solve or mitigate, the above problems.
According to one aspect of the application, a plane point cloud matching method based on principal component analysis is provided, and comprises the following steps:
extracting the principal components of two feature point clouds to obtain the principal directions of the two feature point clouds;
performing initial registration on the principal directions of the two feature point clouds by using the PCA principal component analysis method;
and intercepting pixels of unit length around the two feature point clouds, inputting the pixels into a CNN neural network for comparison, and determining a matching result of the two feature point clouds.
Preferably, the method further comprises:
when the two feature point clouds are correctly matched, continuing to calculate the similarity of the two feature point clouds;
when the two feature point clouds are not correctly matched, outputting a matching result: the two feature point clouds are dissimilar.
Preferably, the initial registration of the principal directions of the two feature point clouds using PCA principal component analysis comprises:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds;
and performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
Preferably, performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the feature vector matrix corresponding to the two feature point clouds comprises:
respectively calculating the centroid coordinates of the source point cloud X and the target point cloud Y, and expressing the centroid coordinates as xmean and ymean;
respectively calculating covariance matrixes of a source point cloud X and a target point cloud Y, and expressing the covariance matrixes as Xcovar and Ycovar;
respectively carrying out eigenvalue decomposition on the two obtained covariance matrixes to obtain two corresponding eigenvector matrixes expressed as Xeigen and Yeigen,
wherein X = {x_i ∈ R^3 | i = 1, 2, …, M} and Y = {y_j ∈ R^3 | j = 1, 2, …, N}.
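The centroid, covariance, and eigendecomposition steps above can be sketched with NumPy; the function name and array layout below are illustrative assumptions, not part of the patent:

```python
import numpy as np

def pca_frame(points):
    """Return the centroid and eigenvector matrix of a 3-D point cloud.

    points: (N, 3) array whose rows are points x_i in R^3.
    """
    mean = points.mean(axis=0)                    # centroid (xmean / ymean)
    centered = points - mean
    covar = centered.T @ centered / len(points)   # 3x3 covariance (Xcovar / Ycovar)
    # eigh is the eigendecomposition for symmetric matrices; the columns of
    # eigvecs are the principal directions (Xeigen / Yeigen), with
    # eigenvalues returned in ascending order.
    eigvals, eigvecs = np.linalg.eigh(covar)
    return mean, eigvecs

X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
xmean, Xeigen = pca_frame(X)   # xmean = [0.5, 0.5, 0.0]; Xeigen is orthonormal
```

The eigenvector associated with the largest eigenvalue gives the dominant principal direction of the cloud.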
Preferably, performing the matrix transformation on the eigenvector matrices comprises:
calculating the translation matrix T from the obtained rotation matrix R according to T = ymean - R*xmean;
and transforming the source point cloud according to Xinit = R*X + T using the rotation matrix R and the translation matrix T.
Preferably, intercepting the pixels of the unit length around the two feature point clouds and inputting the pixels into the CNN neural network for comparison, and determining the matching result of the two feature point clouds includes:
taking the computed point cloud Xinit and the target point cloud Y as the inputs of the CNN neural network;
respectively obtaining a feature map of a source point cloud X and a feature map of a target point cloud Y by using a CNN neural network;
and calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y by using a cosine similarity algorithm.
Preferably, calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y using the cosine similarity algorithm comprises:
considering the two feature maps similar when the computed cosine similarity is smaller than a preset threshold, and dissimilar otherwise.
In another aspect, the invention also provides a plane point cloud matching device based on principal component analysis, which comprises:
the principal component analysis module is used for extracting principal components of the two feature point clouds to obtain principal directions of the two feature point clouds;
the initial registration module is used for performing initial registration on the main directions of the two feature point clouds by using a Principal Component Analysis (PCA) method;
and the convolutional neural network module is used for intercepting pixels of unit length around the two feature point clouds and inputting the pixels into the CNN neural network for comparison so as to determine a matching result of the two feature point clouds.
Preferably, the device further comprises: a judgment module configured to
When the two feature point clouds are correctly matched, continuing to calculate the similarity of the two feature point clouds;
and outputting a dissimilar matching result when the two feature point clouds are not matched correctly.
Preferably, the initial registration module initially registers principal directions of the two feature point clouds using PCA principal component analysis, including:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds;
and performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
The plane point cloud matching method and device based on principal component analysis reduce the excessive dependence on the quality and precision of data acquisition. The method and device register point cloud data of large volume robustly, and the time and space complexity of the algorithm is controllable. By combining the feature extraction of PCA with the pattern matching of CNN, efficient and high-resolution point cloud learning is achieved.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart diagram of a principal component analysis-based planar point cloud matching method according to one embodiment of the present application;
FIG. 2 is a schematic structural diagram of a planar point cloud matching device based on principal component analysis according to an embodiment of the present application;
FIG. 3 is another schematic structural diagram of a planar point cloud matching device based on principal component analysis according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a computing device according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Fig. 1 shows a plane point cloud matching method based on principal component analysis according to an embodiment of the present application, which may generally include steps S101 to S103:
S101, extracting the principal components of the two feature point clouds to obtain the principal directions of the two feature point clouds;
S102, performing initial registration on the principal directions of the two feature point clouds by using the PCA principal component analysis method;
S103, intercepting the pixels of unit length around the two feature point clouds, inputting the pixels into a CNN neural network for comparison, and determining the matching result of the two feature point clouds.
In the embodiment of the present invention, the method further includes:
when the two feature point clouds are correctly matched, continuing to calculate the similarity of the two feature point clouds;
when the two feature point clouds are not correctly matched, outputting a matching result: the two feature point clouds are dissimilar.
In the embodiment of the invention, when the two feature point clouds are correctly matched, the similarity can be continuously calculated. The mismatch can directly output the result: are not similar.
In the embodiment of the present invention, the step S102 of performing initial registration on the principal directions of the two feature point clouds by using a PCA principal component analysis method includes:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds;
and performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
In the embodiment of the invention, step S102 performs initial registration on the principal directions of the two feature point clouds X and Y by using the PCA principal component analysis method.
In the embodiment of the present invention, performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrix corresponding to the two feature point clouds includes:
respectively calculating the centroid coordinates of the source point cloud X and the target point cloud Y, and expressing the centroid coordinates as xmean and ymean;
respectively calculating covariance matrixes of a source point cloud X and a target point cloud Y, and expressing the covariance matrixes as Xcovar and Ycovar;
respectively carrying out eigenvalue decomposition on the two obtained covariance matrixes to obtain two corresponding eigenvector matrixes expressed as Xeigen and Yeigen,
wherein X = {x_i ∈ R^3 | i = 1, 2, …, M} and Y = {y_j ∈ R^3 | j = 1, 2, …, N}.
In the embodiment of the present invention, performing the matrix transformation on the eigenvector matrices includes:
calculating the translation matrix T from the obtained rotation matrix R according to T = ymean - R*xmean;
and transforming the source point cloud according to Xinit = R*X + T using the rotation matrix R and the translation matrix T.
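The patent does not spell out how the rotation matrix R is built from the eigenvector matrices. A common choice, assumed in the NumPy sketch below, is R = Yeigen * Xeigen^T, which maps the principal axes of X onto those of Y; eigenvector sign ambiguity can make this a reflection, so it is a coarse alignment only, and all names are illustrative:

```python
import numpy as np

def initial_registration(X, Y):
    """Coarse PCA alignment of source cloud X (M, 3) onto target cloud Y (N, 3)."""
    xmean, ymean = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - xmean, Y - ymean
    _, Xeigen = np.linalg.eigh(Xc.T @ Xc / len(X))   # principal axes of X
    _, Yeigen = np.linalg.eigh(Yc.T @ Yc / len(Y))   # principal axes of Y
    R = Yeigen @ Xeigen.T            # assumed construction of the rotation
    T = ymean - R @ xmean            # T = ymean - R * xmean
    X_init = X @ R.T + T             # Xinit = R * X + T, applied row-wise
    return X_init, R, T

rng = np.random.default_rng(0)
X = rng.random((50, 3))
Y = rng.random((60, 3))
X_init, R, T = initial_registration(X, Y)
# By construction, the centroid of X_init coincides with the centroid of Y.
```

Because T is defined from the centroids, the transformed cloud always shares its centroid with the target, while R aligns the principal directions.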
In the embodiment of the present invention, intercepting the pixels of the unit length around the two feature point clouds and inputting the pixels into the CNN neural network for comparison, and determining the matching result of the two feature point clouds includes:
taking the computed point cloud Xinit and the target point cloud Y as the inputs of the CNN neural network;
respectively obtaining a feature map of a source point cloud X and a feature map of a target point cloud Y by using a CNN neural network;
and calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y by using a cosine similarity algorithm.
In the embodiments of the present invention, Xinit represents the new point cloud obtained by the transformation.
In the embodiment of the invention, the calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y by using a cosine similarity algorithm comprises the following steps:
and considering the two feature maps similar when the computed cosine similarity is smaller than a preset threshold, and dissimilar otherwise.
In the embodiment of the invention, the preset threshold is determined empirically through a large number of experimental comparisons, selecting the cosine similarity threshold that gives the best classification effect.
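A minimal cosine similarity computation for two flattened feature maps can be sketched as follows; the vectors are illustrative, and as noted above the decision threshold itself is found experimentally:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity of two feature maps, flattened to vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fx = np.array([1.0, 2.0, 3.0])
fy = np.array([2.0, 4.0, 6.0])   # parallel to fx -> similarity 1.0
fz = np.array([3.0, 0.0, -1.0])  # orthogonal to fx -> similarity 0.0
sim_parallel = cosine_similarity(fx, fy)
sim_orthogonal = cosine_similarity(fx, fz)
```

The value ranges from -1 (opposite) through 0 (orthogonal) to 1 (identical direction), so the preset threshold partitions this range into the similar and dissimilar classes.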
The flow of the region-oriented point cloud feature matching method in the embodiment of the invention is as follows:
The model utilized comprises two components: PCA (principal component analysis) and a CNN (convolutional neural network).
Firstly, coarse matching is carried out. In the coarse matching stage, the principal components of the two feature point clouds are extracted using the principal component analysis (PCA) method to obtain the principal directions of the two point clouds. Let the source point cloud X and the target point cloud Y be X = {x_i ∈ R^3 | i = 1, 2, …, M} and Y = {y_j ∈ R^3 | j = 1, 2, …, N}. The flow of PCA-based initial registration is as follows:
a) respectively calculating the centroid coordinates of the two point cloud models: xmean and ymean;
b) respectively calculating the covariance matrices of the two point cloud models: Xcovar and Ycovar;
c) respectively performing eigenvalue decomposition on the two covariance matrices to obtain the two eigenvector matrices: Xeigen and Yeigen;
d) calculating the rotation matrix R from the two eigenvector matrices;
e) calculating the translation matrix: T = ymean - R*xmean;
f) transforming the source point cloud: Xinit = R*X + T.
In the matching judgment, the feature maps extracted by the CNN neural network are used. The pixels of unit length around each feature point are intercepted and input into the CNN neural network for comparison, and the CNN judges whether the feature points are correctly matched.
a) taking the computed point clouds Xinit and Y as the inputs of the neural network;
b) respectively obtaining the feature maps of the two inputs using the CNN neural network;
c) calculating the similarity between the feature maps using the cosine similarity algorithm.
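The patent specifies neither the CNN architecture nor the patch size. The sketch below stands in for steps a) to c) with a single untrained convolution layer, ReLU, and global average pooling; the filter count, kernel size, and 16x16 patch size are all illustrative assumptions, and a real implementation would use a trained network:

```python
import numpy as np

def conv2d_valid(img, kernels):
    """Naive valid-mode 2-D convolution: img (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def feature_map(patch, kernels):
    """One conv layer + ReLU + global average pooling -> K-dim descriptor."""
    act = np.maximum(conv2d_valid(patch, kernels), 0.0)   # ReLU
    return act.mean(axis=(1, 2))                          # global average pool

rng = np.random.default_rng(0)
kernels = rng.standard_normal((8, 3, 3))   # 8 random 3x3 filters (untrained)
patch_x = rng.random((16, 16))             # pixel patch around a feature point
patch_y = rng.random((16, 16))
fx = feature_map(patch_x, kernels)
fy = feature_map(patch_y, kernels)
sim = float(fx @ fy / (np.linalg.norm(fx) * np.linalg.norm(fy)))
```

The two descriptors fx and fy play the role of the feature maps in step b), and sim is the cosine similarity compared against the preset threshold.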
As shown in fig. 2, an embodiment of the present invention further provides a plane point cloud matching apparatus based on principal component analysis, including:
a principal component analysis module 100 configured to extract principal components of two feature point clouds to obtain principal directions of the two feature point clouds;
an initial registration module 200 configured to perform initial registration on principal directions of the two feature point clouds by using a Principal Component Analysis (PCA);
the convolutional neural network module 300 is configured to intercept pixels of unit length around the two feature point clouds and input the pixels into the CNN neural network for comparison, so as to determine a matching result of the two feature point clouds.
As shown in fig. 3, in the embodiment of the present invention, the apparatus further includes: a judging module 400 configured to
When the two feature point clouds are correctly matched, continuing to calculate the similarity of the two feature point clouds;
and outputting a dissimilar matching result when the two feature point clouds are not matched correctly.
In the embodiment of the present invention, the initial registration module 200 performing initial registration on the principal directions of the two feature point clouds by using the PCA principal component analysis method includes:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds;
and performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
An embodiment of the application also provides a computing device. Referring to fig. 4, the computing device comprises a memory 1120, a processor 1110, and a computer program stored in the memory 1120 and executable by the processor 1110. The computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements method steps 1131 for performing any of the methods according to the invention.
The embodiment of the application also provides a computer readable storage medium. Referring to fig. 5, the computer readable storage medium comprises a storage unit for program code provided with a program 1131' for performing the steps of the method according to the invention, which program is executed by a processor.
The embodiment of the application also provides a computer program product containing instructions. Which, when run on a computer, causes the computer to carry out the steps of the method according to the invention.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed by a computer, cause the computer to perform, in whole or in part, the procedures or functions described in accordance with the embodiments of the application. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A plane point cloud matching method based on principal component analysis comprises the following steps:
extracting the principal components of two feature point clouds to obtain the principal directions of the two feature point clouds;
performing initial registration on the principal directions of the two feature point clouds by using the PCA principal component analysis method;
and intercepting pixels of unit length around the two feature point clouds, inputting the pixels into a CNN neural network for comparison, and determining a matching result of the two feature point clouds.
2. The method of claim 1, further comprising:
when the two feature point clouds are correctly matched, continuing to calculate the similarity of the two feature point clouds;
when the two feature point clouds are not correctly matched, outputting a matching result: the two feature point clouds are dissimilar.
3. The method of claim 1 or 2, wherein initially registering principal directions of the two feature point clouds using PCA principal component analysis comprises:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds;
and performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
4. The method of claim 3, wherein performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain eigenvector matrices corresponding to the two feature point clouds comprises:
respectively calculating the centroid coordinates of the source point cloud X and the target point cloud Y, and expressing the centroid coordinates as xmean and ymean;
respectively calculating covariance matrixes of a source point cloud X and a target point cloud Y, and expressing the covariance matrixes as Xcovar and Ycovar;
respectively carrying out eigenvalue decomposition on the two obtained covariance matrixes to obtain two corresponding eigenvector matrixes expressed as Xeigen and Yeigen,
wherein X = {x_i ∈ R^3 | i = 1, 2, …, M} and Y = {y_j ∈ R^3 | j = 1, 2, …, N}.
5. The method of claim 4, wherein performing the matrix transformation on the eigenvector matrices comprises:
calculating the translation matrix T from the obtained rotation matrix R according to T = ymean - R*xmean;
and transforming the source point cloud according to Xinit = R*X + T using the rotation matrix R and the translation matrix T.
6. The method of claim 5, wherein pixels of unit length around the two feature point clouds are intercepted and input into a CNN neural network for comparison, and determining the matching result of the two feature point clouds comprises:
taking the computed point cloud Xinit and the target point cloud Y as the inputs of the CNN neural network;
respectively obtaining a feature map of a source point cloud X and a feature map of a target point cloud Y by using a CNN neural network;
and calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y by using a cosine similarity algorithm.
7. The method of claim 6, wherein calculating the similarity between the feature maps of the source point cloud X and the target point cloud Y using a cosine similarity algorithm comprises:
deeming the two feature point clouds similar when the calculated cosine similarity is smaller than a preset threshold, and dissimilar otherwise.
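The comparison in claims 6–7 reduces to a cosine similarity between flattened CNN feature maps. A minimal sketch (the function name is hypothetical, and the patent leaves the threshold value unspecified, so only the similarity computation is shown):

```python
import numpy as np

def cosine_similarity(feat_a, feat_b):
    """Cosine similarity between two feature maps of equal size,
    flattened to vectors before the dot product."""
    a, b = np.ravel(feat_a), np.ravel(feat_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Per claim 7, the resulting value is then compared against a preset threshold to decide whether the two feature point clouds match.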
8. A planar point cloud matching device based on principal component analysis, comprising:
a principal component analysis module configured to extract the principal components of the two feature point clouds to obtain the principal directions of the two feature point clouds;
an initial registration module configured to initially register the principal directions of the two feature point clouds using principal component analysis (PCA); and
a convolutional neural network module configured to crop pixels of unit length around the two feature point clouds and input them into the CNN neural network for comparison, so as to determine the matching result of the two feature point clouds.
9. The apparatus of claim 8, further comprising a judgment module configured to:
continue calculating the similarity of the two feature point clouds when the two feature point clouds are correctly matched; and
output a dissimilar matching result when the two feature point clouds are not correctly matched.
10. The apparatus of claim 8 or 9, wherein the initial registration module initially registering the principal directions of the two feature point clouds using PCA comprises:
performing eigenvalue decomposition on the covariance information of the two feature point clouds to obtain the eigenvector matrices corresponding to the two feature point clouds; and
performing a matrix transformation on the eigenvector matrices to register the principal directions of the two feature point clouds.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110097200.3A CN112767457A (en) | 2021-01-25 | 2021-01-25 | Principal component analysis-based plane point cloud matching method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112767457A (en) | 2021-05-07 |
Family
ID=75707192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110097200.3A Pending CN112767457A (en) | 2021-01-25 | 2021-01-25 | Principal component analysis-based plane point cloud matching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767457A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799763A (en) * | 2012-06-20 | 2012-11-28 | 北京航空航天大学 | Point cloud posture standardization-based method for extracting linear characteristic of point cloud |
CN109559338A (en) * | 2018-11-20 | 2019-04-02 | 西安交通大学 | A kind of three-dimensional point cloud method for registering estimated based on Weighted principal component analysis and M |
CN109872352A (en) * | 2018-12-29 | 2019-06-11 | 中国科学院遥感与数字地球研究所 | Power-line patrolling LiDAR data autoegistration method based on shaft tower characteristic point |
CN111028151A (en) * | 2019-12-03 | 2020-04-17 | 西安科技大学 | Point cloud data splicing method based on graph residual error neural network fusion |
CN112017220A (en) * | 2020-08-27 | 2020-12-01 | 南京工业大学 | Point cloud accurate registration method based on robust constraint least square algorithm |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114463396A (en) * | 2022-01-07 | 2022-05-10 | 武汉大学 | Point cloud registration method using plane shape and topological graph voting |
CN114463396B (en) * | 2022-01-07 | 2024-02-06 | 武汉大学 | Point cloud registration method utilizing plane shape and topological graph voting |
CN114723795A (en) * | 2022-04-18 | 2022-07-08 | 长春工业大学 | Bucket wheel machine unmanned operation positioning and mapping method based on improved nearest point registration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10776936B2 (en) | Point cloud matching method | |
Prakhya et al. | B-SHOT: A binary feature descriptor for fast and efficient keypoint matching on 3D point clouds | |
Micusik et al. | Structure from motion with line segments under relaxed endpoint constraints | |
CN106447601B (en) | Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation | |
JP2018523881A (en) | Method and system for aligning data | |
CN111145232A (en) | Three-dimensional point cloud automatic registration method based on characteristic information change degree | |
CN108765476B (en) | Polarized image registration method | |
CN107025449B (en) | Oblique image straight line feature matching method constrained by local area with unchanged visual angle | |
Chen et al. | Robust affine-invariant line matching for high resolution remote sensing images | |
CN107358629A (en) | Figure and localization method are built in a kind of interior based on target identification | |
Gedik et al. | 3-D rigid body tracking using vision and depth sensors | |
Du et al. | New iterative closest point algorithm for isotropic scaling registration of point sets with noise | |
Wu et al. | 3D scene reconstruction based on improved ICP algorithm | |
CN109613974B (en) | AR home experience method in large scene | |
CN112767457A (en) | Principal component analysis-based plane point cloud matching method and device | |
Andaló et al. | Efficient height measurements in single images based on the detection of vanishing points | |
CN112651408B (en) | Point-to-point transformation characteristic-based three-dimensional local surface description method and system | |
CN108447084B (en) | Stereo matching compensation method based on ORB characteristics | |
Ma et al. | Efficient rotation estimation for 3D registration and global localization in structured point clouds | |
Zhong et al. | Triple screening point cloud registration method based on image and geometric features | |
Famouri et al. | Fast shape-from-template using local features | |
CN110135474A (en) | A kind of oblique aerial image matching method and system based on deep learning | |
Wan et al. | A performance comparison of feature detectors for planetary rover mapping and localization | |
CN112614166A (en) | Point cloud matching method and device based on CNN-KNN | |
Dantanarayana et al. | Object recognition and localization from 3D point clouds by maximum-likelihood estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||