CN112509019A - Three-dimensional corresponding relation grouping method based on compatibility characteristics - Google Patents


Info

Publication number
CN112509019A
CN112509019A (application CN202011400682.7A; granted as CN112509019B)
Authority
CN
China
Prior art keywords
point cloud
compatibility
matching
calculating
target point
Prior art date
Legal status
Granted
Application number
CN202011400682.7A
Other languages
Chinese (zh)
Other versions
CN112509019B (en)
Inventor
杨佳琪 (Yang Jiaqi)
陈家豪 (Chen Jiahao)
张艳宁 (Zhang Yanning)
黄志强 (Huang Zhiqiang)
权思文 (Quan Siwen)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202011400682.7A
Publication of CN112509019A
Application granted
Publication of CN112509019B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image
    • G06T2207/10012 — Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional correspondence grouping method based on compatibility features. The method first obtains the initial matching set, i.e., the correspondences, between a source point cloud and a target point cloud by calculating key points, normal vectors and feature descriptors; it then extracts a compatibility feature for each matching pair by calculating compatibility values between different matching pairs, and obtains the final output category of each extracted feature through an MLP (multilayer perceptron) classification network. Results show that, compared with other existing methods, the method provided by the invention offers high accuracy, low time complexity and strong generalization.

Description

Three-dimensional corresponding relation grouping method based on compatibility characteristics
Technical Field
The invention belongs to the field of knowledge mining, and particularly relates to a three-dimensional corresponding relation grouping method.
Background
Three-dimensional correspondence grouping is crucial for many tasks based on local geometric feature matching, such as three-dimensional point cloud registration, three-dimensional target recognition and three-dimensional reconstruction. Its purpose is to classify the initial point-to-point correspondences between two 3D point clouds, obtained by matching local geometric descriptors, into inliers and outliers. Due to many factors, such as repetitive patterns, keypoint localization errors and data interference (including noise, limited overlap, clutter and occlusion), a large number of mismatches may be present in the initial match set, so mining the consistency of the scarce inliers and finding them is very challenging. Existing 3D correspondence grouping methods can be divided into two categories: group-based and individual-based. Group-based methods assume that inliers constitute a cluster in a particular domain; however, such a cluster is difficult to recover. In contrast, individual-based methods typically assign confidence scores to correspondences based on features or geometric constraints and then select the highest-scoring correspondences. These methods have the following drawbacks: their generalization over data sets with different application scenarios and data modalities is poor, and their accuracy is limited, which is crucial for successful three-dimensional registration from sparse correspondences.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a three-dimensional correspondence grouping method based on compatibility features. The method first obtains the initial matching set, i.e., the correspondences, between a source point cloud and a target point cloud by calculating key points, normal vectors and feature descriptors; it then extracts a compatibility feature for each matching pair by calculating compatibility values between different matching pairs, and obtains the final output category of each extracted feature through an MLP (multilayer perceptron) classification network. Results show that, compared with other existing methods, the method provided by the invention offers high accuracy, low time complexity and strong generalization.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
Step 1: calculating key points of the source point cloud and the target point cloud, recording the key points of the source point cloud as P^s and the key points of the target point cloud as P^t;
Step 2: calculating normal vectors of key points of the source point cloud and the target point cloud;
Step 3: calculating feature descriptors of the key points of the source point cloud and the target point cloud;
Step 4: calculating the corresponding relation between the key points of the source point cloud and the target point cloud, using each corresponding relation as a match, the matches forming the initial matching set, denoted C = {c_i}, where c_i = (p_i^s, p_i^t) with p_i^s ∈ P^s and p_i^t ∈ P^t; p_i^s is the i-th key point of the source point cloud and p_i^t is the i-th key point of the target point cloud; c_i is the match between p_i^s and p_i^t;
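For illustration only, the matching of step 4 can be sketched as a nearest-neighbor search in descriptor space; the NumPy implementation and the toy descriptors below are assumptions, not the patent's own code.

```python
import numpy as np

def build_initial_matches(desc_s, desc_t):
    """For each source descriptor, find the nearest target descriptor.

    desc_s: (Ns, D) source keypoint descriptors
    desc_t: (Nt, D) target keypoint descriptors
    Returns index pairs (i, j): match c_i = (p_i^s, p_j^t).
    """
    # Pairwise squared Euclidean distances between the two descriptor sets.
    d2 = ((desc_s[:, None, :] - desc_t[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(nearest)]

# Toy example: three 2-D descriptors per cloud.
ds = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dt = np.array([[0.0, 1.1], [0.1, 0.0], [1.0, 0.1]])
matches = build_initial_matches(ds, dt)
```

In practice the descriptors would be SHOT features of the keypoints, and the resulting pairs would then be filtered by the compatibility-feature classification of steps 5 and 6.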
Step 5: obtaining the compatibility features of the matches in the initial matching set;
Step 5-1: suppose c_i = (p_i^s, p_i^t) is any match in the initial matching set and c_j = (p_j^s, p_j^t) is another match; n is the normal vector at a key point p, and S(c_i, c_j) is the compatibility value between c_i and c_j; p_j^s is the j-th key point of the source point cloud and p_j^t is the j-th key point of the target point cloud; c_j is the match between p_j^s and p_j^t;
Step 5-2: defining a rigid distance constraint term:
S_dist(c_i, c_j) = | ||p_i^s - p_j^s|| - ||p_i^t - p_j^t|| |
where ||·|| denotes the Euclidean distance;
defining a normal vector included angle constraint term:
S_ang(c_i, c_j) = | acos(n_i^s · n_j^s) - acos(n_i^t · n_j^t) |
where n_i^s is the normal vector of the i-th key point of the source point cloud, n_j^s is the normal vector of the j-th key point of the source point cloud, n_i^t is the normal vector of the i-th key point of the target point cloud, and n_j^t is the normal vector of the j-th key point of the target point cloud;
Step 5-3: the compatibility value is defined as:
S(c_i, c_j) = exp(-S_dist(c_i, c_j)^2 / (2·α_dist^2)) · exp(-S_ang(c_i, c_j)^2 / (2·α_ang^2))
where α_dist is the distance parameter and α_ang is the normal vector included angle parameter;
step 5-4: calculating compatibility values between each match and all other matches in the initial matching set;
Step 5-5: sorting all compatibility values from large to small, and selecting the top K values as the compatibility features;
step 6: classifying the compatibility characteristics by adopting an MLP network;
Step 6-1: dividing the compatibility feature data into a training set and a test set, the ratio of training set to test set being a:1, and setting the mini-batch size for network training to B; adopting Focal Loss as the loss function;
step 6-2: training the MLP network by using a training set until the MLP network converges;
step 6-3: evaluating the converged MLP network classification result by using a test set; evaluation was performed using Precision, Recall and F-score, as defined below:
Precision definition:
Precision = C_inlier / C_group
Recall definition:
Recall = C_inlier / C_inlier^GT
F-score definition:
F-score = 2 · Precision · Recall / (Precision + Recall)
where C_group represents the number of samples judged positive by the network, i.e., judged to be correct matches; C_inlier represents the number of positive samples the network judges correctly; and C_inlier^GT represents the number of positive samples computed from the Ground Truth matrix, i.e., the actual number of correct matches in the initial matching set.
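The three metrics of step 6-3 can be computed directly from predicted and ground-truth labels; the following Python sketch (function name and toy labels are illustrative assumptions, not from the patent) mirrors the definitions above.

```python
def precision_recall_f(pred, gt):
    """Step 6-3 metrics from binary labels (1 = correct match).

    C_group  = positives predicted by the network,
    C_inlier = correctly predicted positives,
    C_gt     = actual positives from the Ground Truth matrix.
    """
    c_group = sum(pred)
    c_inlier = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    c_gt = sum(gt)
    precision = c_inlier / c_group if c_group else 0.0
    recall = c_inlier / c_gt if c_gt else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Toy example: 3 predicted positives, 2 of them actually correct, 3 true positives.
p, r, f = precision_recall_f([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```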
Preferably, the method of calculating the keypoints of the source point cloud and the target point cloud is the Harris3D keypoint method.
Preferably, the feature descriptors at the key points of the source point cloud and the target point cloud are computed with the SHOT descriptor.
Preferably, K is 50, a is 4, and B is 1024.
Due to the adoption of the three-dimensional correspondence grouping method based on compatibility features, the following beneficial effects are brought:
1. The method has low time complexity. The first source of time overhead lies in computing the three-dimensional point cloud matching relation, which is a necessary cost; the second lies in computing the compatibility features, but its cost is extremely low relative to the first. The network only needs to be trained once, after which it can be used to test other matching sets.
2. The method has high accuracy. Experiments on four different types of point cloud databases show that the accuracy of the method is higher than that of existing methods in most cases.
3. The method has strong robustness. With various kinds of interference added to the data, such as Gaussian noise and downsampling, the method shows stronger anti-interference capability and better results than other traditional methods.
4. The method has strong generalization. On compatibility features extracted from matching relations generated with different key points and descriptors, the invention still maintains high accuracy.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
FIG. 2 is a visualization of compatibility features of the method of the present invention, (a) is a visualization of correctly matched features, and (b) is a visualization of incorrectly matched features.
FIG. 3 shows the target point cloud and the images with Gaussian noise and random downsampling of the method of the present invention, (a) is the target original point cloud, (b) is the Gaussian noise with 0.3 times of the resolution of the point cloud added to the target original point cloud, and (c) is the target original point cloud randomly downsampled to 1/8.
FIG. 4 is a comparison graph of PRF results of different algorithms under the U3M test set according to an embodiment of the present invention, (a) is a Precision index graph, (b) is a Recall index graph, and (c) is an F-Score index graph.
FIG. 5 is a comparison graph of PRF results of U3M test sets generated under different interference conditions in the method of the present invention, (a) is a Precision index graph, (b) is a Recall index graph, and (c) is an F-Score index graph.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, the present invention provides a three-dimensional correspondence grouping method based on compatibility characteristics, which includes the following steps:
Step 1: calculating key points of the source point cloud and the target point cloud, recording the key points of the source point cloud as P^s and the key points of the target point cloud as P^t;
Step 2: calculating normal vectors of key points of the source point cloud and the target point cloud;
Step 3: calculating feature descriptors of the key points of the source point cloud and the target point cloud;
Step 4: calculating the corresponding relation between the key points of the source point cloud and the target point cloud, using each corresponding relation as a match, the matches forming the initial matching set, denoted C = {c_i}, where c_i = (p_i^s, p_i^t) with p_i^s ∈ P^s and p_i^t ∈ P^t; p_i^s is the i-th key point of the source point cloud and p_i^t is the i-th key point of the target point cloud; c_i is the match between p_i^s and p_i^t;
Step 5: obtaining the compatibility features of the matches in the initial matching set;
Step 5-1: suppose c_i = (p_i^s, p_i^t) is any match in the initial matching set and c_j = (p_j^s, p_j^t) is another match; n is the normal vector at a key point p, and S(c_i, c_j) is the compatibility value between c_i and c_j; p_j^s is the j-th key point of the source point cloud and p_j^t is the j-th key point of the target point cloud; c_j is the match between p_j^s and p_j^t;
Step 5-2: defining a rigid distance constraint term:
S_dist(c_i, c_j) = | ||p_i^s - p_j^s|| - ||p_i^t - p_j^t|| |
where ||·|| denotes the Euclidean distance;
defining a normal vector included angle constraint term:
S_ang(c_i, c_j) = | acos(n_i^s · n_j^s) - acos(n_i^t · n_j^t) |
where n_i^s is the normal vector of the i-th key point of the source point cloud, n_j^s is the normal vector of the j-th key point of the source point cloud, n_i^t is the normal vector of the i-th key point of the target point cloud, and n_j^t is the normal vector of the j-th key point of the target point cloud;
Step 5-3: the compatibility value is defined as:
S(c_i, c_j) = exp(-S_dist(c_i, c_j)^2 / (2·α_dist^2)) · exp(-S_ang(c_i, c_j)^2 / (2·α_ang^2))
where α_dist is the distance parameter and α_ang is the normal vector included angle parameter;
step 5-4: calculating compatibility values between each match and all other matches in the initial matching set;
Step 5-5: sorting all compatibility values from large to small, and selecting the top 50 values as the compatibility features;
step 6: classifying the compatibility characteristics by adopting an MLP network;
Step 6-1: dividing the compatibility feature data into a training set and a test set, the ratio of training set to test set being 4:1, and setting the mini-batch size for network training to 1024; adopting Focal Loss as the loss function;
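The 4:1 split and 1024-sample mini-batching of step 6-1 can be sketched as follows; the shuffling, the fixed seed and the function name are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def split_and_batch(features, labels, ratio=4, batch_size=1024, seed=0):
    """Shuffle, split train/test at ratio:1, and build training mini-batches."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    n_train = len(features) * ratio // (ratio + 1)
    train, test = idx[:n_train], idx[n_train:]
    batches = [(features[train[i:i + batch_size]],
                labels[train[i:i + batch_size]])
               for i in range(0, n_train, batch_size)]
    return batches, (features[test], labels[test])

# Toy example: 5000 matches, each with a 50-dimensional compatibility feature.
X = np.zeros((5000, 50))
y = np.zeros(5000, dtype=int)
batches, (X_test, y_test) = split_and_batch(X, y)
```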
step 6-2: training the MLP network by using a training set until the MLP network converges;
step 6-3: evaluating the converged MLP network classification result by using a test set; evaluation was performed using Precision, Recall and F-score, as defined below:
Precision definition:
Precision = C_inlier / C_group
Recall definition:
Recall = C_inlier / C_inlier^GT
F-score definition:
F-score = 2 · Precision · Recall / (Precision + Recall)
where C_group represents the number of samples judged positive by the network, i.e., judged to be correct matches; C_inlier represents the number of positive samples the network judges correctly; and C_inlier^GT represents the number of positive samples computed from the Ground Truth matrix, i.e., the actual number of correct matches in the initial matching set.
Preferably, the method of calculating the keypoints of the source point cloud and the target point cloud is the Harris3D keypoint method.
Preferably, the method for calculating the feature descriptors at the key points of the source point cloud and the target point cloud is a SHOT descriptor.
The specific embodiment is as follows:
the three-dimensional corresponding grouping algorithm based on the compatibility characteristics, provided by the embodiment of the invention, has the flow shown in fig. 1, and comprises the steps of reading point cloud data from a data set; calculating key points and normal vectors of the point cloud; extracting the characteristics of the key points and calculating a point cloud descriptor; performing feature matching on the source point cloud and the target point cloud; extracting compatibility characteristics of the matching set, and inputting the compatibility characteristics into an MLP classification network to obtain corresponding matched output categories; and carrying out a series of comparison tests, analysis experiments and generalization experiments on the network generated by training. The following describes the three-dimensional corresponding grouping algorithm based on compatibility characteristics according to the present invention.
Specifically, to group the initial matching pairs of two point clouds, i.e., to screen out the correct matches (inliers) and incorrect matches (outliers) in the initial matching set and to measure the quality of the grouping result, the key points of the source and target point clouds and the normal vectors at those key points are calculated first; the geometric information at the key points is then described by calculating feature descriptors. From the feature descriptors of the source and target point clouds, the corresponding relations between the key points of the two clouds can be calculated and used as matching pairs. The compatibility value measures the relation between two matching pairs, and its range is [0,1]: a correct match and another correct match are necessarily compatible, so their compatibility value is close to 1; a correct match and a wrong match are incompatible, so their compatibility value is close to 0; and two wrong matches are essentially incompatible, so their compatibility value is also close to 0. For example, if an initial matching set contains N matching pairs, then N-1 compatibility values can be calculated between a given pair and the other N-1 pairs, and the sorted Top-K compatibility values constitute that pair's compatibility feature. After compatibility features are extracted for all matching pairs in the initial matching set, classification can begin.
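The Top-K feature construction described above can be sketched as follows, assuming the pairwise compatibility matrix S has already been computed; the NumPy code and the toy matrix are illustrative assumptions.

```python
import numpy as np

def compatibility_features(S, K):
    """For each match, take its compatibility values with all other matches,
    sort them from large to small, and keep the top K as its feature.

    S: (N, N) symmetric compatibility matrix, S[i, j] = S(ci, cj)
    Returns an (N, K) feature array.
    """
    N = S.shape[0]
    feats = []
    for i in range(N):
        vals = np.delete(S[i], i)      # compatibility with the N-1 other matches
        vals = np.sort(vals)[::-1]     # sort from large to small
        feats.append(vals[:K])
    return np.stack(feats)

# Toy example: 4 matches, K = 2.
S = np.array([[1.0, 0.9, 0.2, 0.8],
              [0.9, 1.0, 0.1, 0.7],
              [0.2, 0.1, 1.0, 0.3],
              [0.8, 0.7, 0.3, 1.0]])
F = compatibility_features(S, K=2)
```

In the patent's setting K = 50, so each match yields a 50-dimensional feature vector that is fed to the MLP classifier.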
The classification method is as follows: a multilayer perceptron network is designed based on PyTorch and pre-trained on a large number of compatibility features generated from the U3M data set; building on the previous steps, the initial matching set is grouped using the trained network, and finally the Precision, Recall and F-Score of the network classification are calculated to measure the quality of the algorithm.
The following provisions are made: the key points of the source point cloud are recorded as P^s and the key points of the target point cloud as P^t; the initial matching set, calculated from the descriptors at the key points, is denoted C = {c_i}, where c_i = (p_i^s, p_i^t) with p_i^s ∈ P^s and p_i^t ∈ P^t.
1. To perform the initial matching calculation on the three-dimensional point clouds, the key points, normal vectors and descriptors of the point clouds must be calculated first. Key points are points, such as corner points, that exhibit the characteristics of the point cloud. Many methods exist for calculating key points; this embodiment uses Harris3D key points. Calculating descriptors is also called feature extraction; to further describe the features at the key points, this embodiment uses the SHOT descriptor. Since the Ground Truth (GT) matrix of the point clouds is known, the correct matches within the calculated initial matching can be obtained through the GT matrix, i.e., the number of inliers in the initial matching can be determined.
Inlier: defining the function
e(c_i) = || R·p_i^s + t - p_i^t ||
where R is the rotation transformation matrix and t is the translation transformation vector. When e(c_i) is less than a specific threshold, c_i is recorded as an inlier; otherwise it is an outlier. This embodiment sets the threshold to 7.5 times the point cloud resolution. Thus each match in the initial matching set can be labeled: a label value of 1 denotes a correct match and a label value of 0 an incorrect match. The actual point cloud matching relation contains many mismatches, and the purpose of the algorithm is to find as many correct matches as possible.
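The inlier labeling above, with e(c_i) = ||R·p_s + t - p_t|| thresholded at 7.5 times the point cloud resolution, can be sketched as follows; the NumPy code, the identity transform and the toy points are illustrative assumptions.

```python
import numpy as np

def label_matches(src_pts, tgt_pts, R, t, resolution):
    """Label each match ci = (src_pts[i], tgt_pts[i]) as inlier (1) or
    outlier (0) by the ground-truth residual against 7.5x the resolution."""
    thresh = 7.5 * resolution
    # e(ci) = || R @ p_s + t - p_t ||, computed for all matches at once.
    residuals = np.linalg.norm(src_pts @ R.T + t - tgt_pts, axis=1)
    return (residuals < thresh).astype(int)

# Toy example: identity transform, resolution 0.01 -> threshold 0.075.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tgt = np.array([[0.01, 0.0, 0.0], [1.5, 0.0, 0.0]])  # first close, second far
labels = label_matches(src, tgt, np.eye(3), np.zeros(3), resolution=0.01)
```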
2. After the initial matching set is obtained, the compatibility values within it must be calculated. The compatibility value measures the relation between matching pairs. Suppose c_i = (p_i^s, p_i^t) is one match and c_j = (p_j^s, p_j^t) is another match; n is the normal vector at a key point p, and S(c_i, c_j) is the compatibility value between c_i and c_j.
For the distance parameter α_dist, this embodiment selects 10 times the point cloud resolution; for the normal vector included angle parameter α_ang, this embodiment uses 10°. It can be seen that the value range of the compatibility value is [0,1], and S(c_i, c_j) is 1 only if both constraints are fully satisfied.
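One plausible realization of the compatibility value, consistent with the stated properties (range [0,1], equal to 1 only when both the rigid-distance and normal-angle constraints are fully satisfied), is a product of Gaussian kernels over the two constraint terms; the exact kernel form is an assumption here, not confirmed by the patent text.

```python
import numpy as np

def compatibility(pi_s, pj_s, pi_t, pj_t, ni_s, nj_s, ni_t, nj_t,
                  alpha_dist, alpha_ang):
    """Hypothetical S(ci, cj): product of Gaussian penalties on the
    rigid-distance and normal-angle violations."""
    # A rigid transform preserves pairwise distances between keypoints.
    s_dist = abs(np.linalg.norm(pi_s - pj_s) - np.linalg.norm(pi_t - pj_t))
    # It also preserves the angle between the two keypoint normals.
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    s_ang = abs(ang(ni_s, nj_s) - ang(ni_t, nj_t))
    return float(np.exp(-s_dist**2 / (2 * alpha_dist**2)) *
                 np.exp(-s_ang**2 / (2 * alpha_ang**2)))

# Two matches related by the identity transform are fully compatible.
pi = np.array([0.0, 0.0, 0.0]); pj = np.array([1.0, 0.0, 0.0])
ni = np.array([0.0, 0.0, 1.0]); nj = np.array([0.0, 1.0, 0.0])
s = compatibility(pi, pj, pi, pj, ni, nj, ni, nj,
                  alpha_dist=0.1, alpha_ang=np.deg2rad(10))
```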
3. Assuming the initial matching set contains N matches, N-1 compatibility values can be calculated for each match, and the invention selects the sorted Top-50 compatibility values as the match's compatibility feature. The invention experimentally demonstrates the distinguishability of the 50-dimensional compatibility features of correct and incorrect matches, as shown in fig. 2, where (a) is the visualization result of correctly matched features and (b) of incorrectly matched features. The bar-graph visualization of the features shows that correct and incorrect features are clearly distinguishable and can be used for classification by the multilayer perceptron network.
Generally, the inlier rate of the initial matching generated by calculating key points, normal vectors and feature descriptors may be low; that is, the initial matching contains a large number of wrong matches and only a small number of correct ones. From the definition of the compatibility value, only the values between correct matches are close to 1, so the compatibility values of each match must be sorted and the top K values selected as the compatibility feature, which ensures the accuracy and distinguishability of the feature. The value of K can be neither too large nor too small. If K is too large, the tail values of the compatibility features of correct and incorrect matches become similar, this tail data interferes with feature discrimination, the feature dimensionality is high, and the time complexity of the algorithm increases. If K is too small, incorrectly matched features differ little from correctly matched ones, which hinders the network's judgment. Experiments show that K = 50 gives good results.
4. The extracted compatibility feature data is preprocessed before the network is trained. The data is divided into a training set and a test set at a ratio of about 4:1, and the mini-batch size for network training is set to 1024. Because the inlier rate of the initial matching set is low, i.e., the proportion of correct matches is small, Focal Loss is adopted as the loss function; it addresses the severe imbalance between positive and negative samples by reducing the weight of the large number of easy negative samples during training, which can also be understood as hard-example mining. After the network is iteratively trained on the training data, a classification test is carried out on the test set, and the classification results are evaluated with Precision, Recall and F-Score.
The classification adopts an MLP network; the network structure has 6 layers (50-128-64-32-2), the input is the 50-dimensional compatibility feature, and the output is the probability with which the network judges the match correct or incorrect. The specific procedure is as follows: the U3M data set is divided at a training/test set ratio of 4:1, giving about 490k training samples and about 120k test samples; the mini-batch size during training is set to 1024. Since the inlier rate of the initial point cloud matching is very low, the ratio of positive to negative samples is unbalanced, so Focal Loss is used, defined as follows:
FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)
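For concreteness, the binary Focal Loss can be sketched as follows; the α and γ defaults are the commonly used values from the original Focal Loss formulation and are illustrative assumptions, since the patent does not state them.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """FL(pt) = -alpha_t * (1 - pt)**gamma * log(pt).

    p: predicted probability of the positive class, y: label in {0, 1}.
    The (1 - pt)**gamma factor down-weights easy, well-classified samples.
    """
    pt = p if y == 1 else 1.0 - p
    at = alpha if y == 1 else 1.0 - alpha
    return -at * (1.0 - pt) ** gamma * np.log(pt)

# An easy, correctly classified negative is penalized far less than a hard one.
easy = focal_loss(0.1, 0)   # confident and correct -> tiny loss
hard = focal_loss(0.9, 0)   # confident and wrong -> large loss
```

This down-weighting is why Focal Loss suits the heavily outlier-dominated initial matching sets described above.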
after convergence of the network, the algorithm is evaluated by testing the Precision, Recall, F-score results presented in the network by the set, which is defined below, where CgroupRepresents the number of positive samples (correct match) judged by the network, CinlierIndicating the number of positive samples that the network judges to be correct,
Figure BDA0002812187480000082
the number of positive samples (i.e. the actual number of correct matches in the initial match) calculated from the group Truth matrix is shown.
As shown in fig. 4, the PRF result graphs of different algorithms on the U3M test set (this method is abbreviated NN; the comparison methods are SS, NNSR, ST, RANSAC, GC, 3DHV, GTM, SI and CV) show that the grouping effect of the method proposed by the invention is better than that of the other methods.
In order to measure generalization, the invention adds interference, such as Gaussian noise and random downsampling, to the target point cloud data, as shown in fig. 3: (a) is the target original point cloud, (b) adds Gaussian noise of 0.3 times the point cloud resolution to the original point cloud, and (c) randomly downsamples the original point cloud to 1/8; test data are generated by calculating compatibility features from the correspondences between the source and target point clouds. Further test data are generated from compatibility features extracted with different key points and descriptors and tested with the same network, and test data are also generated on other databases. As shown in fig. 5, experiments comparing the Precision, Recall and F-score of the algorithm under these different test conditions show that the invention improves the accuracy and generalization of three-dimensional point cloud correspondence grouping.

Claims (4)

1. A three-dimensional corresponding relation grouping method based on compatibility characteristics is characterized by comprising the following steps:
Step 1: calculating key points of the source point cloud and the target point cloud, recording the key points of the source point cloud as P^s and the key points of the target point cloud as P^t;
Step 2: calculating normal vectors of key points of the source point cloud and the target point cloud;
Step 3: calculating feature descriptors of the key points of the source point cloud and the target point cloud;
Step 4: calculating the corresponding relation between the key points of the source point cloud and the target point cloud, using each corresponding relation as a match, the matches forming the initial matching set, denoted C = {c_i}, where c_i = (p_i^s, p_i^t) with p_i^s ∈ P^s and p_i^t ∈ P^t; p_i^s is the i-th key point of the source point cloud and p_i^t is the i-th key point of the target point cloud; c_i is the match between p_i^s and p_i^t;
Step 5: obtaining the compatibility features of the matches in the initial matching set;
Step 5-1: suppose c_i = (p_i^s, p_i^t) is any match in the initial matching set and c_j = (p_j^s, p_j^t) is another match; n is the normal vector at a key point p, and S(c_i, c_j) is the compatibility value between c_i and c_j; p_j^s is the j-th key point of the source point cloud and p_j^t is the j-th key point of the target point cloud; c_j is the match between p_j^s and p_j^t;
Step 5-2: defining a rigid distance constraint term:
S_dist(c_i, c_j) = | ||p_i^s - p_j^s|| - ||p_i^t - p_j^t|| |
where ||·|| denotes the Euclidean distance;
defining a normal vector included angle constraint term:
S_ang(c_i, c_j) = | acos(n_i^s · n_j^s) - acos(n_i^t · n_j^t) |
where n_i^s is the normal vector of the i-th key point of the source point cloud, n_j^s is the normal vector of the j-th key point of the source point cloud, n_i^t is the normal vector of the i-th key point of the target point cloud, and n_j^t is the normal vector of the j-th key point of the target point cloud;
Step 5-3: the compatibility value is defined as:
S(c_i, c_j) = exp(-S_dist(c_i, c_j)^2 / (2·α_dist^2)) · exp(-S_ang(c_i, c_j)^2 / (2·α_ang^2))
where α_dist is the distance parameter and α_ang is the normal vector included angle parameter;
step 5-4: calculating compatibility values between each match and all other matches in the initial matching set;
Step 5-5: sorting all compatibility values from large to small, and selecting the top K values as the compatibility features;
step 6: classifying the compatibility features with an MLP network;
step 6-1: dividing the compatibility feature data into a training set and a test set at a ratio of a:1; setting the mini-batch size for network training to B; adopting Focal Loss as the loss function;
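Focal Loss down-weights easy examples so training concentrates on hard ones, which matters here because correct matches are typically a small minority of the initial matching set. A minimal binary sketch (the α-balancing weight and the defaults γ = 2, α = 0.25 follow the original Focal Loss formulation, not this claim):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    With gamma = 0 and alpha = 0.5 it reduces to half the cross-entropy.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical safety for log()
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

Note how a confidently correct prediction (p_t close to 1) contributes almost nothing, while a hard example keeps a substantial gradient.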
step 6-2: training the MLP network by using a training set until the MLP network converges;
step 6-3: evaluating the classification result of the converged MLP network on the test set using Precision, Recall and F-score, defined as follows:

Precision = C_inlier / C_group

Recall = C_inlier / C_GT

F-score = (2 × Precision × Recall) / (Precision + Recall)

wherein C_group denotes the number of samples the network judges as positive, i.e. as correct matches; C_inlier denotes the number of positive samples the network judges correctly; and C_GT denotes the number of positive samples computed from the Ground Truth matrix, i.e. the number of actually correct matches in the initial matching set.
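The three evaluation measures of step 6-3 follow directly from the counts; a small sketch (the count names mirror the claim's C_inlier, C_group and C_GT, and the zero-denominator guards are an added convention):

```python
def grouping_metrics(c_inlier, c_group, c_gt):
    """Precision / Recall / F-score for correspondence grouping.

    c_inlier: positives the network judged correctly (true positives)
    c_group:  samples the network judged positive (predicted positives)
    c_gt:     actually correct matches in the initial set (ground truth)
    """
    precision = c_inlier / c_group if c_group else 0.0
    recall = c_inlier / c_gt if c_gt else 0.0
    denom = precision + recall
    f_score = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_score
```

For example, if the network flags 50 matches as correct, 40 of them truly are, and the ground truth contains 80 correct matches, Precision is 0.8 and Recall is 0.5.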
2. The method for grouping three-dimensional correspondences based on compatibility features according to claim 1, wherein the keypoints of the source point cloud and the target point cloud are computed with the Harris3D keypoint detector.
3. The method according to claim 1, wherein the feature descriptors at the keypoints of the source point cloud and the target point cloud are computed as SHOT descriptors.
4. The method according to claim 1, wherein K is 50, a is 4, and B is 1024.
CN202011400682.7A 2020-12-02 2020-12-02 Three-dimensional corresponding relation grouping method based on compatibility characteristics Active CN112509019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011400682.7A CN112509019B (en) 2020-12-02 2020-12-02 Three-dimensional corresponding relation grouping method based on compatibility characteristics


Publications (2)

Publication Number Publication Date
CN112509019A true CN112509019A (en) 2021-03-16
CN112509019B CN112509019B (en) 2024-03-08

Family

ID=74969737


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024103846A1 (en) * 2023-07-03 2024-05-23 西北工业大学 Three-dimensional registration reconstruction method based on multi-domain multi-dimensional feature map

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105118059A (en) * 2015-08-19 2015-12-02 哈尔滨工程大学 Multi-scale coordinate axis angle feature point cloud fast registration method
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN109493375A (en) * 2018-10-24 2019-03-19 深圳市易尚展示股份有限公司 The Data Matching and merging method of three-dimensional point cloud, device, readable medium


Non-Patent Citations (2)

Title
MASOUMEH REZAEI: "Deep learning-based 3D local feature descriptor from Mercator projections", COMPUTER AIDED GEOMETRIC DESIGN, 3 September 2019 (2019-09-03) *
ZHU, ANGFAN: "LRF-Net: Learning Local Reference Frames for 3D Local Shape Description and Matching", SENSORS, vol. 20, no. 18, 3 November 2020 (2020-11-03) *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant