CN114037743A - Three-dimensional point cloud robust registration method for Terracotta Warriors based on a dynamic graph attention mechanism - Google Patents
- Publication number
- CN114037743A (application CN202111245398.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- warriors
- qin
- dimensional point
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism, comprising the following steps: step 1, acquiring three-dimensional point clouds of the Terracotta Warriors at different resolutions with a three-dimensional scanner; step 2, replacing the convolution layers with B-NHN-Conv and the deconvolution layers with B-NHN-ConvTr, and embedding a residual module and a dynamic graph attention mechanism into a U-Net network to obtain the point cloud registration network; step 3, inputting the multi-resolution point clouds of the Terracotta Warriors into the point cloud registration network and training it under the supervision of a circle loss function and an overlap loss function; step 4, extracting the three-dimensional point cloud features of the Terracotta Warriors with the trained registration network, and estimating the transformation matrix between the source and target point clouds with the RANSAC algorithm to complete the registration. The proposed registration method still learns robust features when the point cloud resolutions are mismatched and heavily contaminated by noise, and completes registration well even at low overlap.
Description
Technical Field
The invention relates to three-dimensional point cloud model registration technology, and in particular to a robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism.
Background
Point cloud registration plays an important role in projects such as the virtual restoration of the Terracotta Warriors and the intelligent museum of the Qin mausoleum, and the accuracy of the registration result is key to the subsequent three-dimensional reconstruction. Point cloud registration aims to compute a coordinate transformation that unifies point cloud data captured from different viewpoints into a single coordinate system through a rigid transformation, i.e. a rotation and a translation.
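A rigid transformation in this sense is simply a rotation matrix R plus a translation vector t applied to every point. A minimal numpy illustration with toy values (the rotation angle, translation, and random cloud below are hypothetical, not data from the patent):

```python
import numpy as np

# A rigid transform maps a source cloud into a target frame: p' = R @ p + t.
theta = np.pi / 6  # example: 30-degree rotation about the z-axis
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
t = np.array([0.5, -0.2, 1.0])  # example translation

source = np.random.rand(100, 3)   # toy stand-in for a scanned point cloud
aligned = source @ R.T + t        # apply the rigid transform row-wise
```

Because R is orthonormal, such a transform preserves all pairwise distances, which is what makes it "rigid".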
At present, most registration methods for the Terracotta Warriors and related cultural relics are refinements of traditional registration algorithms, the classic example being the Iterative Closest Point (ICP) algorithm. ICP alternates between two stages, correspondence search and transform estimation, repeating them until the best transformation between the point clouds is found. However, when the initial poses differ greatly, the point cloud resolutions are mismatched, the noise is strong, or the overlap is small, the method easily falls into a local optimum. In recent years, researchers have proposed deep-learning methods that learn robust features for computing corresponding points and then determine the transformation matrix with RANSAC or SVD, without iterating between correspondence estimation and transform estimation. However, such algorithms still struggle to handle partial overlap, density variation, and noise quickly and robustly.
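The two alternating ICP stages described above — nearest-neighbour correspondence search followed by a closed-form (Kabsch/SVD) transform estimate — can be sketched as follows. This is a textbook illustration, not the patent's method; the toy perturbation at the end is a hypothetical example:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch algorithm: least-squares R, t mapping src onto dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=20):
    """Alternate correspondence search and transform estimation."""
    cur = src.copy()
    for _ in range(iters):
        # Stage 1: brute-force nearest-neighbour correspondence search.
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        # Stage 2: closed-form transform estimation on the matches.
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    # The accumulated transform is the least-squares map src -> cur.
    return best_rigid_transform(src, cur)

# Toy check: recover a small, known rigid perturbation.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
R_est, t_est = icp(pts, pts @ R_true.T + 0.02)
```

With a large initial pose difference the nearest-neighbour matches in stage 1 are mostly wrong, which is exactly the local-optimum failure mode the paragraph describes.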
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism, one that remains robust under partial overlap and density changes.
To achieve the above object, the invention adopts the following technical solution:
A robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism comprises the following steps:
step 1, acquiring three-dimensional point clouds of the Terracotta Warriors at different resolutions with a three-dimensional scanner;
step 2, replacing the convolution layers with B-NHN-Conv and the deconvolution layers with B-NHN-ConvTr, and embedding a residual module and a dynamic graph attention mechanism into a U-Net network to obtain a point cloud registration network;
step 3, inputting the multi-resolution Terracotta Warrior point clouds into the point cloud registration network, and training the constructed network under the joint supervision of a circle loss function and an overlap loss function;
step 4, extracting the three-dimensional point cloud features of the Terracotta Warriors with the trained registration network, and estimating the transformation matrix between the source and target point clouds with the RANSAC algorithm to complete the registration.
Further, the U-Net network in step 2 further comprises an encoder module and a decoder module. The encoder module obtains features at multiple scales through repeated downsampling and converts all fully connected layers of the multi-layer perceptron into a series of fully convolutional layers with kernel size 1 × 1 × 1 and channel counts (64, 128, 256, 512). At each upsampling step, the decoder module fuses features of the same scale and channel count from the feature-extraction path through skip connections, reducing the information lost by downsampling.
Further, the residual module in step 2 is formed by several residual blocks connected in series, and skip-connects the input information to the output.
Further, the dynamic graph attention mechanism in step 2 comprises a stack of combined self-attention and cross-attention modules. Based on the attention weights $a_{ij}$, it finds the top-k edges for each query node and constructs a graph from only those edges and the corresponding nodes; at each layer of the dynamic graph attention mechanism, a new graph is constructed from the updated attention weights $a_{ij}$.
Compared with the prior art, the invention has the following technical effects:
the invention uses U-Net as the backbone framework, solve the gradient and disappear or the gradient explodes the problem through imbedding the residual block; the local features and the context features of the point clouds are aggregated by embedding a dynamic graph attention machine mechanism, so that multi-level and richer semantic representations can be obtained, and the network can be better helped to determine possible overlapping areas among the point clouds; potential changes of a characteristic mean value and a standard deviation are reduced or removed through B-NHN-Conv convolution operation, and robustness of three-dimensional characteristics to point density changes is improved; therefore, the point cloud registration network constructed by the invention can still learn robust features and well complete the registration of point clouds under low overlapping degree under the condition that the point cloud resolutions are not matched and contain a large amount of noise.
Further, converting all fully connected layers of the multi-layer perceptron into a series of fully convolutional layers with kernel size 1 × 1 × 1 improves the network's processing efficiency.
Drawings
FIG. 1 is a flow chart of the robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism of the invention;
FIG. 2 is a diagram of the point cloud registration network model;
FIG. 3 is a schematic diagram of the dynamic graph attention mechanism;
FIG. 4 is a schematic diagram of the residual module;
FIG. 5 is an initial pose diagram of the point clouds of two Terracotta Warrior heads;
FIG. 6 is the point cloud registration result for the two Terracotta Warrior heads;
FIG. 7 is an initial pose diagram of the point clouds of two Terracotta Warrior feet;
FIG. 8 is the point cloud registration result for the two Terracotta Warrior feet.
Detailed Description
The present invention will be explained in further detail with reference to examples.
Referring to figs. 1 to 4, the embodiment provides a robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism, which includes the following steps:
In step 2, B-NHN-Conv combines B-NHN normalization with a subsequent three-dimensional sparse convolution, tightly coupling the two; B-NHN-ConvTr is its transposed counterpart, combining B-NHN normalization with a three-dimensional sparse transposed convolution. The B-NHN normalization reduces or eliminates latent variation in the feature mean and standard deviation, improving the robustness of the three-dimensional features to point density changes.
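The patent does not spell out the B-NHN formula, so the following is only an illustrative sketch under an assumption: that the normalization standardizes each feature channel (removing mean and standard-deviation variation) before the convolution. The sparse 3D convolution is reduced here to its dense 1 × 1 equivalent, a matrix product on an (N points, C channels) feature matrix:

```python
import numpy as np

def normalize_then_project(feats, weight, bias, eps=1e-5):
    """Sketch of a 'normalize, then convolve' block. The per-channel
    standardization standing in for B-NHN is an assumption; the matrix
    product is the dense equivalent of a 1x1x1 convolution."""
    mean = feats.mean(axis=0, keepdims=True)
    std = feats.std(axis=0, keepdims=True)
    normed = (feats - mean) / (std + eps)   # remove mean/std variation
    return normed @ weight + bias

rng = np.random.default_rng(1)
feats = rng.random((128, 64))               # 128 points, 64 channels
W, b = rng.random((64, 32)), np.zeros(32)
out = normalize_then_project(feats, W, b)
```

The point of the normalization step is that the downstream convolution then sees features whose statistics no longer depend on how densely the object was sampled.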
The residual module is formed by several residual blocks connected in series and skip-connects the input information to the output. This alleviates the loss of effective feature information, makes the network easier to optimize, and helps establish the correspondence between the two point clouds.
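The skip connection at the heart of a residual block is just the addition of the input to the residual branch; a one-line sketch (the residual function below is a toy stand-in):

```python
import numpy as np

def residual_block(x, f):
    """Skip-connect the input to the output: y = x + F(x)."""
    return x + f(x)

x = np.ones(4)
y = residual_block(x, lambda v: 0.1 * v)   # toy residual branch F(x) = 0.1 x
```

Because the identity path passes gradients through unchanged, stacking such blocks avoids the vanishing-gradient problem mentioned earlier.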
The dynamic graph attention mechanism comprises a stack of combined self-attention and cross-attention modules, also referred to as dynamic graph attention modules. Based on the attention weights $a_{ij}$, it finds the top-k edges for each query node and constructs a graph from only those edges and the corresponding nodes; at each layer of the dynamic graph attention mechanism, a new graph is constructed from the updated attention weights $a_{ij}$.
For each feature node in the graph, the dynamic graph attention module first computes, by linear projection, a query vector $q_i \in \mathbb{R}^b$, a key vector $k_j \in \mathbb{R}^b$, and a value vector $v_j \in \mathbb{R}^b$, where $\mathbb{R}$ denotes the set of real numbers and $b$ is the feature dimension, used for graph structure update and attention aggregation:

$$q_i = W_1\,{}^{(l)}f_i^{Q} + b_1,\qquad k_j = W_2\,{}^{(l)}f_j^{S} + b_2,\qquad v_j = W_3\,{}^{(l)}f_j^{S} + b_3$$

where $\{Q, S\} \in \{X, Y\}^2$: when $Q = S$ the module is self-attention, and when $Q \neq S$ it is cross-attention; $W$ and $b$ are learnable projection parameters; ${}^{(l)}f_i^{Q}$ is the feature of keypoint $i$ in point cloud $Q$ at layer $l$ of the dynamic graph attention module and serves as the query node, while all nodes in point cloud $S$ serve as source nodes.

By computing the similarity of $q_i$ with each $k_j$, the weight coefficient $\alpha_{ij}$ of the corresponding $v_j$ is obtained, and the attention message is the weighted average of the $v_j$:

$$\alpha_{ij} = \operatorname{softmax}_j\!\left(\frac{q_i^{\top} k_j}{\sqrt{b}}\right),\qquad m_i = \sum_{j \in \varepsilon} \alpha_{ij}\, v_j$$

where $\alpha_{ij}$ measures how much attention feature ${}^{(l)}f_i^{Q}$ pays to feature ${}^{(l)}f_j^{S}$, and $\varepsilon \in \{\varepsilon_{self}, \varepsilon_{cross}\}$ is the edge set over which $j$ ranges.

Once all layers have been aggregated, the final feature of the node is

$$f_i = W f_i + b$$
the U-Net network further comprises an encoder module for extracting features and a decoder module for fusing the features, wherein the encoder module obtains a plurality of scale features through a plurality of downsampling, and converts all full connection layers in the multilayer perceptron into a series of full convolution layers with the kernel size of 1 × 1 × 1, and the number of channels is (64,128,256, 512); and the decoder module fuses the features with the same scale and the number of corresponding channels through the jump connection structure and the feature extraction part every time sampling is carried out, so that the information loss caused by down-sampling is reduced.
Step 3, inputting the multi-resolution three-dimensional point clouds of the Terracotta Warriors into the point cloud registration network, and training the constructed network under the joint supervision of a circle loss function and an overlap loss function.
Circle loss is a variant of the triplet loss commonly used in point cloud registration; for the aligned overlapping point cloud pair $X, Y$ it is

$$\mathcal{L}_{C}^{X}=\frac{1}{n_x}\sum_{x_i}\log\Big[1+\sum_{x_j\in\varepsilon_x(x_i)}e^{\beta_x^{ij}\left(d_{ij}-\Delta_x\right)}\cdot\sum_{x_k\in\varepsilon_n(x_i)}e^{\beta_n^{ik}\left(\Delta_n-d_{ik}\right)}\Big]$$

where $n_x$ is the number of points randomly sampled from point cloud $X$; $\varepsilon_x(x_i)$ is the set of all points of $Y$ inside the ball of radius $r_x$ centred at $x_i$, and $\varepsilon_n(x_i)$ the set of all points of $Y$ outside that ball; $d_{ij}$ denotes the distance between the two features in feature space; and $\Delta_x$, $\Delta_n$ are the positive and negative sample margins. The weight $\beta_x^{ij}$ is determined by the hyperparameter $\gamma$, the feature distance, and the positive margin $\Delta_x$; likewise the weight $\beta_n^{ik}$ is determined by $\gamma$, the feature distance, and the negative margin $\Delta_n$. $\mathcal{L}_{C}^{Y}$ is computed in the same way, so the final circle loss is $\mathcal{L}_{C}=\frac{1}{2}\big(\mathcal{L}_{C}^{X}+\mathcal{L}_{C}^{Y}\big)$.
The estimation of the overlap region is cast as binary classification, supervised with the overlap loss

$$\mathcal{L}_{o}^{X}=-\frac{1}{|X|}\sum_{x_i\in X}\Big[\bar{o}_{x_i}\log o_{x_i}+\big(1-\bar{o}_{x_i}\big)\log\big(1-o_{x_i}\big)\Big]$$

where the ground-truth label $\bar{o}_{x_i}$ indicates whether point $x_i$ lies in the overlap region, and $o_{x_i}$ is the label predicted by the network. $\mathcal{L}_{o}^{Y}$ is computed in the same way, and the final overlap loss is $\mathcal{L}_{o}=\frac{1}{2}\big(\mathcal{L}_{o}^{X}+\mathcal{L}_{o}^{Y}\big)$.
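The overlap loss is binary cross-entropy between predicted overlap scores and ground-truth overlap labels; a minimal sketch with toy labels:

```python
import numpy as np

def overlap_loss(pred, truth, eps=1e-7):
    """Binary cross-entropy over one point cloud's overlap predictions."""
    pred = np.clip(pred, eps, 1 - eps)   # keep log() finite
    return -np.mean(truth * np.log(pred) + (1 - truth) * np.log(1 - pred))

truth = np.array([1.0, 1.0, 0.0, 0.0])          # toy ground-truth labels
good = overlap_loss(np.array([0.9, 0.8, 0.1, 0.2]), truth)
bad = overlap_loss(np.array([0.2, 0.1, 0.9, 0.8]), truth)
```

Averaging this with the symmetric term over the other cloud yields the final overlap loss of the formula above.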
Step 4, inputting the source point cloud and the target point cloud into the trained point cloud registration network, where they pass through the encoder module and the dynamic graph attention module in turn and the decoder module outputs the extracted features; the transformation matrix is then computed with the RANSAC algorithm to complete the registration.
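The RANSAC stage pairs the learned features with robust transform estimation: repeatedly sample a few putative correspondences, fit a rigid transform, and keep the model with the most inliers. A simplified numpy sketch, assuming the putative correspondences (here index-aligned arrays, with some deliberately corrupted) are already given by feature matching:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def ransac_transform(src, dst, iters=200, thresh=0.05, seed=3):
    """src[i] is hypothesised to match dst[i]; RANSAC rejects the outlier
    matches and refits the transform on the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        pick = rng.choice(len(src), size=3, replace=False)  # minimal sample
        R, t = kabsch(src[pick], dst[pick])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(src[best_inliers], dst[best_inliers])

# Toy data: a known rigid transform with 20% corrupted correspondences.
rng = np.random.default_rng(4)
src = rng.random((40, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 0.0, -0.5])
dst[:8] = rng.random((8, 3)) + 5.0           # 8 clearly-wrong matches
R_est, t_est = ransac_transform(src, dst)
```

Unlike ICP, no iteration between correspondence search and transform estimation is needed: the learned features supply the matches once, and RANSAC filters them.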
Figs. 5 and 7 show the initial poses, captured by the three-dimensional scanner, of the point clouds of two Terracotta Warrior heads and of two Terracotta Warrior feet, respectively; figs. 6 and 8 show the corresponding registration results produced by the method of the invention. As the figures show, the method remains robust when the point clouds only partially overlap and their density varies.
Claims (4)
1. A robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism, characterized by comprising the following steps:
step 1, acquiring three-dimensional point clouds of the Terracotta Warriors at different resolutions with a three-dimensional scanner;
step 2, replacing the convolution layers with B-NHN-Conv and the deconvolution layers with B-NHN-ConvTr, and embedding a residual module and a dynamic graph attention mechanism into a U-Net network to obtain a point cloud registration network;
step 3, inputting the multi-resolution Terracotta Warrior point clouds into the point cloud registration network, and training the constructed network under the joint supervision of a circle loss function and an overlap loss function;
step 4, extracting the three-dimensional point cloud features of the Terracotta Warriors with the trained registration network, and estimating the transformation matrix between the source and target point clouds with the RANSAC algorithm to complete the registration.
2. The robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism according to claim 1, characterized in that the U-Net network in step 2 further comprises an encoder module and a decoder module; the encoder module obtains features at multiple scales through repeated downsampling and converts all fully connected layers of the multi-layer perceptron into a series of fully convolutional layers with kernel size 1 × 1 × 1 and channel counts (64, 128, 256, 512); at each upsampling step, the decoder module fuses features of the same scale and channel count from the feature-extraction path through skip connections, reducing the information lost by downsampling.
3. The robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism according to claim 1, characterized in that the residual module in step 2 is formed by several residual blocks connected in series, and skip-connects the input information to the output.
4. The robust three-dimensional point cloud registration method for the Terracotta Warriors based on a dynamic graph attention mechanism according to claim 1, characterized in that the dynamic graph attention mechanism in step 2 comprises a stack of combined self-attention and cross-attention modules; based on the attention weights $a_{ij}$, it finds the top-k edges for each query node and constructs a graph from only those edges and the corresponding nodes, and at each layer of the dynamic graph attention mechanism a new graph is constructed from the updated attention weights $a_{ij}$.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111245398.1A CN114037743B (en) | 2021-10-26 | 2021-10-26 | Three-dimensional point cloud robust registration method for Terracotta Warriors based on dynamic graph attention mechanism
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111245398.1A CN114037743B (en) | 2021-10-26 | 2021-10-26 | Three-dimensional point cloud robust registration method for Terracotta Warriors based on dynamic graph attention mechanism
Publications (2)
Publication Number | Publication Date |
---|---|
CN114037743A true CN114037743A (en) | 2022-02-11 |
CN114037743B CN114037743B (en) | 2024-01-26 |
Family
ID=80135357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111245398.1A Active CN114037743B (en) | 2021-10-26 | 2021-10-26 | Three-dimensional point cloud robust registration method for Qin warriors based on dynamic graph attention mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114037743B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115631341A (en) * | 2022-12-21 | 2023-01-20 | 北京航空航天大学 | Point cloud registration method and system based on multi-scale feature voting |
CN115631221A (en) * | 2022-11-30 | 2023-01-20 | 北京航空航天大学 | Low-overlapping-degree point cloud registration method based on consistency sampling |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108596961A (en) * | 2018-04-17 | 2018-09-28 | 浙江工业大学 | Point cloud registration method based on Three dimensional convolution neural network |
US20190147245A1 (en) * | 2017-11-14 | 2019-05-16 | Nuro, Inc. | Three-dimensional object detection for autonomous robotic systems using image proposals |
CN111046781A (en) * | 2019-12-09 | 2020-04-21 | 华中科技大学 | Robust three-dimensional target detection method based on ternary attention mechanism |
CN111882593A (en) * | 2020-07-23 | 2020-11-03 | 首都师范大学 | Point cloud registration model and method combining attention mechanism and three-dimensional graph convolution network |
CN112837356A (en) * | 2021-02-06 | 2021-05-25 | 湖南大学 | WGAN-based unsupervised multi-view three-dimensional point cloud joint registration method |
WO2021104056A1 (en) * | 2019-11-27 | 2021-06-03 | 中国科学院深圳先进技术研究院 | Automatic tumor segmentation system and method, and electronic device |
Non-Patent Citations (2)
Title |
---|
Tang Can; Tang Lianggui; Liu Bo: "A Survey of Image Feature Detection and Matching Methods", Journal of Nanjing University of Information Science and Technology (Natural Science Edition), no. 03 *
Ma Fufeng; Geng Nan; Zhang Zhiyi: "Research on 3D Morphological Registration of Plants Based on Neighborhood Geometric Feature Constraints", Computer Applications and Software, no. 09 *
Also Published As
Publication number | Publication date |
---|---|
CN114037743B (en) | 2024-01-26 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |