CN113470085B - Improved RANSAC-based image registration method - Google Patents


Info

Publication number
CN113470085B
Authority
CN
China
Prior art keywords
characteristic point
target
point
feature
image
Prior art date
Legal status
Active
Application number
CN202110548022.1A
Other languages
Chinese (zh)
Other versions
CN113470085A (en)
Inventor
冯大政
曾晶晶
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110548022.1A priority Critical patent/CN113470085B/en
Publication of CN113470085A publication Critical patent/CN113470085A/en
Application granted granted Critical
Publication of CN113470085B publication Critical patent/CN113470085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a double-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature points whose number of matching point pairs among their k nearest neighbors is smaller than a pair-count threshold are taken as outliers; feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the second similarity threshold form the first feature point set; and feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the first similarity threshold form the second feature point set. The transformation matrix parameters and the inlier set are then obtained from the two feature point sets through an improved RANSAC algorithm, and the image pair to be matched is registered using the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.

Description

Image registration method based on improved RANSAC
Technical Field
The invention belongs to the technical field of image registration, and particularly relates to an image registration method based on improved RANSAC.
Background
In the process of image registration, after feature points are matched with the NNDR method, the obtained matching result is very rough and contains a large number of mismatching points, which affects the accuracy of image registration; further accurate matching therefore needs to be performed, and the prior art proposes using the Random Sample Consensus (RANSAC) algorithm for this fine matching.
The RANSAC algorithm is one of the most commonly used fine matching and model parameter estimation algorithms. During operation it randomly samples matching point pairs and then estimates the transformation model parameters from the sampled matching feature points. However, the number of iterations of the RANSAC algorithm is difficult to determine, and when the feature point set contains many mismatching points, the mismatching points are sampled repeatedly, so the number of iterations rises rapidly and both efficiency and accuracy suffer.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides an improved RANSAC-based image registration method. The technical problem to be solved by the invention is realized by the following technical scheme:
the invention provides an image registration method based on improved RANSAC, which comprises the following steps:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
step 3: for each target feature point pair of the initial feature point pairs, determining the k nearest neighbors of the first target feature point in the first image and the k nearest neighbors of the second target feature point in the second image according to distance;
the target characteristic point pairs comprise first target characteristic points and second target characteristic points, wherein the ith first characteristic point is matched with the ith second characteristic point;
step 4: comparing the number of matching pairs among the first feature points and the second feature points with a pair-count threshold, and determining the outliers among the target feature point pairs;
step 5: calculating the similarity between the first target feature point and the second target feature point;
step 6: determining a first feature point set and a second feature point set composed of the coarsely screened inliers by using the magnitude relation between the similarity and a first similarity threshold and a second similarity threshold;
the number of the interior points in the first characteristic point set is greater than that of the interior points in the second characteristic point set;
step 7: randomly selecting a preset number of target feature point pairs from the second feature point set as matrix-calculation feature point pairs for the current iteration;
step 8: calculating a transformation matrix of the image pair to be matched based on the position information of the matrix-calculation feature point pairs;
step 9: calculating the Euclidean distance error of each target feature point pair in the first feature point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is smaller than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration;
step 12: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is not less than the maximum number of the target characteristic point pairs, updating the maximum number of the target characteristic point pairs of the current iteration to be the number of the target characteristic point pairs in the third characteristic point set;
step 13: returning to the step 7 for the next iteration, until the maximum iteration times is reached, taking a third characteristic point set corresponding to the maximum target characteristic point number as an inner point set, and obtaining the inner point set and a transformation matrix corresponding to the inner point set;
step 14: and registering the image pair to be matched based on the transformation matrix.
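Steps 7 to 10 above can be sketched in a few lines. The patent does not fix the transformation model or the preset number of sampled pairs, so the sketch below assumes an affine model estimated by least squares; all function names are illustrative.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (n, 2) arrays of matched positions, n >= 3.
    Returns a 2x3 matrix A with dst ~= A @ [x, y, 1]^T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

def distance_errors(src, dst, A):
    """Euclidean distance error of each pair under the transform A (step 9)."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])
    return np.linalg.norm(X @ A.T - dst, axis=1)

# Matched pairs related by a pure translation of (2, -1): errors should vanish.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + np.array([2., -1.])
A = estimate_affine(src, dst)
err = distance_errors(src, dst, A)
third_set = [k for k in range(len(src)) if err[k] < 0.5]  # step 10 screening
```

Pairs whose error stays below the first error threshold would be collected into the third feature point set, as in step 10.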
Optionally, step 3 includes:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
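A minimal way to realize steps 31 and 32, assuming plain Euclidean distance over feature-point positions (the helper name is illustrative):

```python
import numpy as np

def k_nearest(points, idx, k):
    """Return the indices of the k nearest neighbours of points[idx],
    by Euclidean distance, excluding the point itself."""
    d = np.linalg.norm(points - points[idx], axis=1)
    order = np.argsort(d)
    return order[order != idx][:k]

pts = np.array([[0., 0.], [1., 0.], [5., 5.], [0.5, 0.1], [10., 0.]])
nn = k_nearest(pts, 0, 2)   # the two feature points closest to point 0
```

For large feature sets a k-d tree would replace the brute-force distance computation.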
Optionally, step 4 includes:
step 41: for each target feature point pair, determining the matched first feature points, the matched second feature points and the number of matching pairs;
wherein the i-th first feature point is matched with the i-th second feature point;
step 42: when the number of matching pairs of the first feature points and the second feature points is smaller than the pair-count threshold, determining the target feature point pair, consisting of the first target feature point corresponding to the first feature points and the second target feature point corresponding to the second feature points, as an outlier;
step 43: when the number of matching pairs of the first feature points and the second feature points is not smaller than the pair-count threshold, for each matched first feature point and second feature point, connecting the first feature point with the corresponding first target feature point to obtain a first vector of the first target feature point, and connecting the second feature point with the corresponding second target feature point to obtain a second vector of the second target feature point.
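Steps 41 to 43 can be illustrated as follows, under the simplifying assumption that the i-th feature point of the first image is matched with the i-th of the second, so a neighbour match survives exactly when an index appears in both k-nearest-neighbour sets (names are illustrative):

```python
import numpy as np

def matched_neighbour_pairs(nn1, nn2):
    """Indices of matched neighbour pairs shared by both k-NN sets;
    its length is the matching pair count compared with the threshold."""
    return sorted(set(nn1) & set(nn2))

def neighbour_vectors(P, i, shared):
    """Vectors obtained by connecting each shared neighbour to the target
    feature point (the first/second vectors of step 43)."""
    return np.array([P[j] - P[i] for j in shared])

nn1 = [3, 1, 7]                              # k-NN of the target in image 1
nn2 = [1, 3, 9]                              # k-NN of its match in image 2
shared = matched_neighbour_pairs(nn1, nn2)   # m = len(shared)
P1 = np.array([[0., 0.], [2., 0.], [5., 5.], [1., 1.]])
V1 = neighbour_vectors(P1, 0, shared)
```

If len(shared) falls below the pair-count threshold, the target pair is declared an outlier, as in step 42.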
Optionally, step 5 includes:
and calculating the similarity between the first target characteristic point and the second target characteristic point by using the first vector and the second vector.
Optionally, before step 5, the image registration method further includes:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include: the direction difference α_i and the scale ratio r_i, where α_i = mod(θ_i - θ_i', 2π) and r_i = σ_i/σ_i'; θ_i denotes the principal direction of the i-th first target feature point, θ_i' the principal direction of the i-th second target feature point, σ_i the scale of the i-th first target feature point, σ_i' the scale of the i-th second target feature point, and i = 1, 2, …, n indexes the target feature point pairs;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein the transformation formula is:

v_{i,j}' = r_i [cos α_i, -sin α_i; sin α_i, cos α_i] (p_{i,j}' - p_i')

where p_i' denotes the position of the second target feature point, p_{i,j}' the position of the j-th second feature point corresponding to it, and v_{i,j}' is the transformed second vector.
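Assuming the transformation rotates the second vector by the direction difference α_i and scales it by the ratio r_i (the original formula is reproduced only as an image), a small sketch:

```python
import numpy as np

def transform_second_vector(v2, alpha, r):
    """Rotate v2 by alpha and scale it by r."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s],
                  [s,  c]])
    return r * (R @ v2)

alpha = np.mod(np.pi / 3 - 0.0, 2 * np.pi)    # direction difference alpha_i
r = 2.0                                       # scale ratio r_i
v2 = np.array([1.0, 0.0])
v2t = transform_second_vector(v2, alpha, r)   # transformed second vector
```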
Optionally, step 6 includes:
step 61: determining target characteristic point pairs with similarity larger than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
the method comprises the following steps: 62: and determining the target characteristic point pairs with the similarity not greater than the first similarity threshold but greater than the second similarity threshold as the roughly screened interior points, and adding the first characteristic point set.
Optionally, step 6 further includes:
step 63: determining the target feature point pairs whose similarity is smaller than the second similarity threshold as outliers.
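Steps 61 to 63 amount to a three-way partition by two thresholds (t1 > t2); a small sketch with illustrative names:

```python
def partition_by_similarity(pairs, sims, t1, t2):
    """Similarity above t1: inlier added to both sets; in (t2, t1]:
    inlier added to the first set only; at or below t2: outlier."""
    first, second, outliers = [], [], []
    for pair, s in zip(pairs, sims):
        if s > t1:
            first.append(pair)
            second.append(pair)
        elif s > t2:
            first.append(pair)
        else:
            outliers.append(pair)
    return first, second, outliers

pairs = ["a", "b", "c", "d"]
sims = [0.9, 0.6, 0.3, 0.75]
first, second, out = partition_by_similarity(pairs, sims, t1=0.7, t2=0.5)
```

The first set is a superset of the second, which matches the statement that the first set holds more inliers.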
Optionally, step 11 includes:
step 111: determining the number of the maximum target characteristic point pairs of the current iteration;
step 112: and if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is less than the maximum number of the target characteristic point pairs, returning to the step 7 for the next iteration.
Optionally, after step 14, the image registration method further includes:
step 15: and calculating an evaluation index for evaluating the result after registration based on the inner point set and the transformation matrix.
Wherein the evaluation indexes comprise: the correct matching rate and the root mean square error.
The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a double-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature points whose number of matching point pairs among their k nearest neighbors is smaller than a pair-count threshold are taken as outliers; feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the second similarity threshold form the first feature point set; and feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the first similarity threshold form the second feature point set. The transformation matrix parameters and the inlier set are then obtained from the two feature point sets through an improved RANSAC algorithm, and the image pair to be matched is registered using the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.
The present invention will be described in further detail with reference to the drawings and examples.
Drawings
Fig. 1 is a flowchart of an improved RANSAC-based image registration method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a transformation relationship between two sets of line segments provided by an embodiment of the present invention;
FIG. 3 is an image of a translational change provided by an embodiment of the present invention;
FIG. 4 is an image of a change in viewing angle provided by an embodiment of the present invention;
FIG. 5 is an image of a zoom change provided by an embodiment of the present invention;
FIG. 6 is an image with a primarily changing viewing angle provided by an embodiment of the present invention;
FIG. 7 is an image with a primarily changing viewing angle provided by an embodiment of the present invention;
FIG. 8 is a graph of the matching results of the RANSAC algorithm on Church images;
FIG. 9 is a graph of the matching results of the R-RANSAC algorithm on Church images;
FIG. 10 is a graph of the matching results of the LO-RANSAC algorithm on Church images;
FIG. 11 is a graph of the matching results of the method of the present invention on Church images.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
As shown in fig. 1, the improved RANSAC-based image registration method provided by the present invention includes:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
step 3: for each target feature point pair of the initial feature point pairs, determining the k nearest neighbors of the first target feature point in the first image and the k nearest neighbors of the second target feature point in the second image according to distance;
the target characteristic point pairs comprise first target characteristic points and second target characteristic points;
step 4: determining the outliers among the target feature point pairs based on the magnitude relation between the number of matched target feature point pairs and the pair-count threshold;
step 5: calculating the similarity between the first target feature point and the second target feature point;
step 6: determining a first feature point set and a second feature point set composed of the coarsely screened inliers by using the magnitude relation between the similarity and a first similarity threshold and a second similarity threshold;
the number of the interior points in the first characteristic point set is greater than that in the second characteristic point set;
step 7: randomly selecting a preset number of target feature point pairs from the second feature point set as matrix-calculation feature point pairs for the current iteration;
step 8: calculating a transformation matrix of the image pair to be matched based on the position information of the matrix-calculation feature point pairs;
and the k neighbor matching similarity algorithm continuously removes a part of mismatching points based on the rough selection result by utilizing the position, scale and direction information of the feature points and the neighbor points thereof to obtain a feature point set with a higher correct matching point ratio.
The invention is still a registration algorithm based on point features, and the adopted features are SIFT features. Suppose a SIFT feature point is located at p = (x, y)^T; it can then be represented as

l = (p^T, θ, σ)^T (1-1)

where θ denotes the principal direction and σ the scale. Suppose (l_i, l_i'), i = 1, 2, …, n denote the feature point pairs, where

l_i = (p_i^T, θ_i, σ_i)^T (1-2)

l_i' = (p_i'^T, θ_i', σ_i')^T (1-3)

The direction difference of the two feature points is then

α_i = mod(θ_i - θ_i', 2π) (1-4)

and the scale ratio is

r_i = σ_i/σ_i' (1-5)
The direction difference approximately represents the rotation angle of the local area of the image to be registered, and the scale ratio approximately represents the scaling of that local area.
The idea of the k-nearest-neighbor matching similarity algorithm is to find, by distance, the k nearest feature points of l_i and l_i' in the feature point sets of the image pair to be matched. The matched point pairs within the two k-nearest-neighbor sets form a set, recorded as

H_i = {(l_{i,j}, l_{i,j}') | j = 1, 2, …, m} (1-6)

where (l_{i,j}, l_{i,j}') denotes the j-th matching point pair and m the number of matching point pairs among the k nearest neighbors of a feature point pair. The larger m is, the more matching point pairs there are among the k nearest neighbors, and the greater the probability that l_i and l_i' are a correct matching pair; m can therefore serve as one of the parameters describing whether the features match.
step 9: calculating the Euclidean distance error of each target feature point pair in the first feature point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is smaller than the maximum number of the target characteristic point pairs, returning to the step 7 for the next iteration;
step 12: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is not less than the maximum number of the target characteristic point pairs, determining that the maximum number of the target characteristic point pairs of the current iteration is the number of the target characteristic point pairs in the third characteristic point set;
step 13: returning to the step 7 to perform next iteration until the maximum iteration number is reached, and taking a third characteristic point set corresponding to the maximum target characteristic point number as an inner point set to obtain the inner point set and a transformation matrix corresponding to the inner point set;
step 14: and registering the image pair to be matched based on the transformation matrix.
Besides the matching number of the k-nearest-neighbor feature points, the invention can also use the scale and direction of the k-nearest-neighbor features for more accurate matching detection. Let p_i and p_i' be the positions of l_i and l_i'. Connecting the feature points by line segments to the feature points in the set H_i yields two sets of line segments. If l_i and l_i' are a correct matching pair, then after transformation the lengths and directions of corresponding line segments in the two sets should be approximately the same. FIG. 2 depicts the transformation relationship of the two line segment sets: the segment connecting p_i and p_{i,j} is represented as the vector v_{i,j}, and the segment connecting p_i' and p_{i,j}', after rotation by α_i and scaling by r_i, is represented as the vector v_{i,j}' (indicated by the dashed line). The similarity of the two vectors can also serve as one of the parameters describing whether the features match. The vectors v_{i,j} and v_{i,j}' can be expressed as:
v_{i,j} = p_{i,j} - p_i (1-7)

v_{i,j}' = r_i [cos α_i, -sin α_i; sin α_i, cos α_i] (p_{i,j}' - p_i') (1-8)
First, the similarity of the moduli and of the directions of the two vectors is calculated. The modulus similarity is measured by the ratio of the two vector moduli: the smaller of the two moduli is divided by the larger, giving a measurement value denoted d_{i,j}:

d_{i,j} = min(||v_{i,j}||_2, ||v_{i,j}'||_2) / max(||v_{i,j}||_2, ||v_{i,j}'||_2) (1-9)

The direction similarity g_{i,j} is expressed through the angle between the vectors:

g_{i,j} = (v_{i,j}^T v_{i,j}') / (||v_{i,j}||_2 ||v_{i,j}'||_2) (1-10)

where ||·||_2 denotes the two-norm of a vector.
The k-nearest-neighbor matching similarity s_i of (l_i, l_i') can be expressed as

s_i = (1/|H_i|) Σ_{j=1}^{m} [ μ d_{i,j} + (1 - μ) g_{i,j} ] (1-11)

where |·| denotes the cardinality of a set and μ is a parameter balancing the modulus similarity and the direction similarity: when μ is greater than 0.5 the modulus similarity has the greater influence on the matching result, and when μ is less than 0.5 the direction similarity has the greater influence; μ is set to 0.5 in the experiments of the invention. s_i lies in the range [0, 1], and the larger s_i is, the greater the probability that (l_i, l_i') is a correct matching pair.
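The similarity computation can be sketched as below. Since the formulas for d_{i,j}, g_{i,j} and s_i appear only as images in the source, their exact forms are assumptions here: the modulus similarity as the min/max ratio of the vector norms and the direction similarity as the cosine of the angle between the vectors, averaged with weight μ.

```python
import numpy as np

def knn_matching_similarity(V1, V2t, mu=0.5):
    """s_i from the first vectors V1 and the transformed second
    vectors V2t, both of shape (m, 2)."""
    n1 = np.linalg.norm(V1, axis=1)
    n2 = np.linalg.norm(V2t, axis=1)
    d = np.minimum(n1, n2) / np.maximum(n1, n2)   # modulus similarity
    g = np.sum(V1 * V2t, axis=1) / (n1 * n2)      # direction similarity
    return float(np.mean(mu * d + (1 - mu) * g))

V1 = np.array([[1.0, 0.0], [0.0, 2.0]])
s_perfect = knn_matching_similarity(V1, V1.copy())   # identical vector sets
s_flipped = knn_matching_similarity(V1, -V1)         # opposite directions
```

Identical vector sets give the maximal similarity, while reversed directions pull it down, which is the behaviour the screening relies on.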
The traditional algorithm only considers the distance characteristics of the feature points and the adjacent points thereof, and the k-nearest neighbor matching similarity algorithm considers the position, direction and scale characteristics of the feature points and the adjacent points thereof on the basis of the traditional algorithm, so that a large number of mismatching points can be effectively removed, and the subsequent registration work is facilitated.
After the feature point matching operation, the transformation relation between the images to be registered needs to be solved through a model parameter estimation algorithm. The RANSAC algorithm is one of the most commonly used model parameter estimation algorithms, and estimates transformation model parameters in an iterative manner. The algorithm mainly comprises the following steps:
1. Randomly select from the matching point sample set a subsample just large enough to compute the model parameters, and compute the model parameters.
2. Compute the distance between each pair of transformed corresponding points. Using this distance as the evaluation index, take feature points whose distance is smaller than a set threshold as inliers and the rest as outliers, and count the inliers; the more inliers, the better the quality of the obtained model.
3. Repeat the above steps, recording the model with the best quality, i.e. the one with the most inliers, and stop when the iteration limit is reached; the best model is the required transformation model.
The basic principle of the RANSAC algorithm is as follows. If the probability that each sample selected in a random draw is an inlier is w, the probability that a sample is an outlier is 1 - w. The probability that all samples in a randomly drawn subsample are correct matching points is w^n, where n denotes the number of samples drawn; the probability that at least one of the n sample pairs is a mismatching point is 1 - w^n; and the probability that no all-inlier subsample has been obtained after k iterations is (1 - w^n)^k. Let p be the probability that an all-inlier subsample is drawn within k iterations; then

1 - p = (1 - w^n)^k, with k → ∞, p → 1 (1-12)

k = log(1 - p) / log(1 - w^n) (1-13)

According to the above formulas, as the number of iterations grows, (1 - w^n)^k approaches 0 and the probability p approaches 1; that is, with enough iterations the RANSAC algorithm is certain to obtain the correct transformation model.
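The relation between the inlier probability w, the sample size n, the confidence p and the required iteration count can be checked numerically; a short sketch (the function name is illustrative):

```python
import math

def required_iterations(w, n, p=0.99):
    """Smallest k with 1 - (1 - w**n)**k >= p: iterations needed so that,
    with confidence p, at least one all-inlier sample of size n is drawn."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** n))

k_clean = required_iterations(w=0.8, n=4)   # high inlier ratio: few iterations
k_noisy = required_iterations(w=0.3, n=4)   # many mismatches: far more needed
```

This is exactly why a high proportion of mismatching points makes plain RANSAC slow, and why raising the correct-match ratio before sampling pays off.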
RANSAC performs fine matching iteratively while obtaining the parameters of the transformation model, but the number of iterations is difficult to determine: if it is too low, the correct model parameters cannot be found; if it is too high, the efficiency of the algorithm drops. As the proportion of mismatching points in the matching point set grows, the number of iterations required by the parameter estimation algorithm also grows rapidly, and the accuracy and efficiency of the algorithm fall noticeably. Because of these problems, researchers have proposed many improved RANSAC algorithms. Matas et al. proposed the Randomized RANSAC (R-RANSAC) algorithm, which checks candidate model parameters on a randomly sampled subset of the feature point data, discards models that fail the subset check, and verifies models that pass it against all the data, greatly reducing the computation and improving efficiency. Chum et al. proposed the Locally Optimized RANSAC (LO-RANSAC) algorithm, which improves accuracy through a local optimization step.
To alleviate these problems of RANSAC, the invention provides a RANSAC algorithm using double thresholds, which improves both efficiency and accuracy. In each iteration, sampling is performed from the result of the previous iteration and the result is corrected step by step, so the number of iterations can be reduced and a more accurate and stable matching result obtained.
During each iteration, two thresholds are set: a smaller threshold d_s and a larger threshold d_h. The set of feature point pairs whose post-transformation distance is smaller than d_s is denoted D_s, and the set whose distance is smaller than d_h is denoted D_h. In each iteration, the transformation model is computed by sampling from the feature point set D_s obtained in the previous iteration, and the quality of the model is evaluated by the number of feature points in D_h. Because the point set obtained with the small threshold has a higher correct matching rate, sampling from it yields a correct model with higher probability, and convergence is faster.
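A compact sketch of the double-threshold idea on synthetic data: models are fitted from the strict set D_s of the best model so far (falling back to all pairs while that set is still too small) and scored with the loose set D_h. The affine fit and the specific thresholds are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def dual_threshold_ransac(P1, P2, d_s=1.0, d_h=3.0, n_sample=3,
                          iters=200, seed=0):
    """Sample from the strict inlier set, score with the loose one."""
    rng = np.random.default_rng(seed)
    n = len(P1)
    hom = np.hstack([P1, np.ones((n, 1))])           # homogeneous P1
    best_Dh = np.array([], dtype=int)                # loose-set inliers
    best_Ds = np.array([], dtype=int)                # strict-set inliers
    best_A = None
    for _ in range(iters):
        pool = best_Ds if len(best_Ds) > n_sample else np.arange(n)
        idx = rng.choice(pool, size=n_sample, replace=False)
        X = np.hstack([P1[idx], np.ones((n_sample, 1))])
        A, *_ = np.linalg.lstsq(X, P2[idx], rcond=None)   # affine fit
        err = np.linalg.norm(hom @ A - P2, axis=1)
        D_s = np.nonzero(err < d_s)[0]    # strict threshold: sample pool
        D_h = np.nonzero(err < d_h)[0]    # loose threshold: model score
        if len(D_h) > len(best_Dh):
            best_Dh, best_Ds, best_A = D_h, D_s, A.T
    return best_Dh, best_A

# Identity transform plus two gross mismatches.
P1 = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.], [3., 1.],
               [9., 9.], [8., 1.]])
P2 = P1.copy()
P2[5] += 50.0
P2[6] -= 40.0
inliers, A = dual_threshold_ransac(P1, P2)
```

Once a good model is found, the strict set contains almost only correct matches, so later samples are drawn from a much cleaner pool, which is the source of the faster convergence claimed above.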
The detailed procedure of the method of the present invention is given below.
Table 1 improved RANSAC algorithm based on k-nearest neighbor similarity
[The pseudocode of Table 1 appears only as images in the original publication.]
The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a double-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature points whose number of matching point pairs among their k nearest neighbors is smaller than a pair-count threshold are taken as outliers; feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the second similarity threshold form the first feature point set; and feature points whose matching pair count exceeds the pair-count threshold and whose k-nearest-neighbor matching similarity exceeds the first similarity threshold form the second feature point set. The transformation matrix parameters and the inlier set are then obtained from the two feature point sets through an improved RANSAC algorithm, and the image pair to be matched is registered using the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.
As an alternative embodiment of the present invention, step 3 includes:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
As an optional embodiment of the present invention, step 4 includes:
step 41: for each target feature point pair, determining the matched first feature points, the matched second feature points and the number of matching pairs;
wherein the i-th first feature point is matched with the i-th second feature point;
step 42: when the number of matching pairs of the first feature points and the second feature points is smaller than the pair-count threshold, determining the target feature point pair, consisting of the first target feature point corresponding to the first feature points and the second target feature point corresponding to the second feature points, as an outlier;
step 43: when the number of matching pairs of the first feature points and the second feature points is not smaller than the pair-count threshold, for each matched first feature point and second feature point, connecting the first feature point with the corresponding first target feature point to obtain a first vector of the first target feature point, and connecting the second feature point with the corresponding second target feature point to obtain a second vector of the second target feature point.
The step 5 comprises the following steps: and calculating the similarity between the first target characteristic point and the second target characteristic point by using the first vector and the second vector.
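The patent does not spell out the similarity formula at this point; one plausible realization, shown purely for illustration, scores a target feature point pair by the mean cosine similarity between corresponding first and (transformed) second vectors:

```python
import numpy as np

def pair_similarity(first_vectors, second_vectors):
    """Mean cosine similarity between corresponding neighbor vectors.

    first_vectors, second_vectors: (m, 2) arrays of vectors from a target
    feature point to its m matched neighbor feature points.
    """
    a = np.asarray(first_vectors, dtype=float)
    b = np.asarray(second_vectors, dtype=float)
    na = np.linalg.norm(a, axis=1)
    nb = np.linalg.norm(b, axis=1)
    # Small epsilon guards against division by zero for degenerate vectors.
    cos = np.sum(a * b, axis=1) / (na * nb + 1e-12)
    return float(np.mean(cos))

# Identical vector sets give a similarity close to 1.
v = [(1.0, 0.0), (0.0, 2.0)]
print(pair_similarity(v, v))
```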
As an optional embodiment of the present invention, before step 5, the image registration method further includes:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include: the direction difference α_i and the scale ratio r_i, with α_i = mod(θ_i − θ_i′, 2π) and r_i = σ_i/σ_i′, where θ_i denotes the principal direction of the i-th first target feature point, θ_i′ denotes the principal direction of the i-th second target feature point, σ_i denotes the scale of the i-th first target feature point, σ_i′ denotes the scale of the i-th second target feature point, and i = 1, 2, …, n is the serial number of the target feature point pair;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein the transformation formula is:

v_i,j′ = r_i · R(α_i) · (p_i,j′ − p_i′)

where R(α_i) denotes the 2×2 rotation matrix through the angle α_i, p_i′ denotes the second target feature point, p_i,j′ denotes the j-th second feature point corresponding to the second target feature point, and v_i,j′ is the transformed second vector.
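Under the reading that the transformation rotates each second vector by the direction difference α_i and scales it by the ratio r_i, step 52 can be sketched as follows (an assumption for illustration, not the authoritative implementation):

```python
import math

def transform_second_vector(v, alpha, r):
    """Rotate the 2-D vector v by angle alpha and scale it by r."""
    x, y = v
    c, s = math.cos(alpha), math.sin(alpha)
    # Standard 2-D rotation followed by uniform scaling.
    return (r * (c * x - s * y), r * (s * x + c * y))

# Rotating (1, 0) by pi/2 and doubling its length gives (0, 2).
vx, vy = transform_second_vector((1.0, 0.0), math.pi / 2, 2.0)
print(round(vx, 6), round(vy, 6))
```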
As an alternative embodiment of the present invention, step 6 includes:
step 61: determining target characteristic point pairs with similarity greater than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
the method comprises the following steps: 62: and determining the target characteristic point pairs with the similarity not greater than the first similarity threshold but greater than the second similarity threshold as the roughly screened interior points, and adding the first characteristic point set.
As an optional implementation manner of the present invention, step 6 further includes:
and step 63: and determining the target characteristic point pairs with the similarity smaller than the second similarity threshold as the outer points.
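Steps 61-63 amount to a dual-threshold split of the candidate pairs; a minimal sketch with hypothetical names:

```python
def coarse_screen(pairs, similarities, t1, t2):
    """Split target feature point pairs by two similarity thresholds.

    Pairs above t1 go into both sets; pairs in (t2, t1] go into the
    first set only; pairs at or below t2 are discarded as outer points.
    """
    first_set, second_set = [], []
    for pair, sim in zip(pairs, similarities):
        if sim > t1:            # step 61: high confidence, both sets
            first_set.append(pair)
            second_set.append(pair)
        elif sim > t2:          # step 62: medium confidence, first set only
            first_set.append(pair)
        # step 63: sim <= t2 -> outer point, dropped
    return first_set, second_set

f, s = coarse_screen(["a", "b", "c"], [0.9, 0.6, 0.2], t1=0.8, t2=0.5)
print(f, s)
```

This yields the property stated above: the first feature point set always contains at least as many interior points as the second.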
As an alternative embodiment of the present invention, step 11 includes:
step 111: determining the maximum target characteristic point pair number;
step 112: and if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is less than the maximum number of the target characteristic point pairs, returning to the step 7 for the next iteration.
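The iteration of steps 7-13 — hypothesize on the smaller, higher-confidence second feature point set and verify on the larger first set — can be sketched as follows. The translation-only model and the names used here are illustrative stand-ins; the patent estimates a full transformation matrix from the sampled pairs:

```python
import random

def ransac_dual(first_set, second_set, sample_size, max_iter, err_threshold):
    """Dual-set RANSAC: sample from second_set, score on first_set.

    Each element is ((x, y), (x2, y2)); for illustration the model is a
    pure translation (a toy stand-in for the transformation matrix).
    """
    best_inliers, best_model = [], None
    for _ in range(max_iter):
        sample = random.sample(second_set, sample_size)   # step 7
        # Step 8 (toy): fit a translation as the mean offset of the sample.
        dx = sum(q[0] - p[0] for p, q in sample) / sample_size
        dy = sum(q[1] - p[1] for p, q in sample) / sample_size
        # Steps 9-10: collect pairs whose Euclidean error is small enough.
        inliers = [
            (p, q) for p, q in first_set
            if ((p[0] + dx - q[0]) ** 2 + (p[1] + dy - q[1]) ** 2) ** 0.5
            < err_threshold
        ]
        if len(inliers) > len(best_inliers):  # step 12: keep the largest set
            best_inliers, best_model = inliers, (dx, dy)
    return best_model, best_inliers

random.seed(0)
first = [((i, i), (i + 1.0, i + 2.0)) for i in range(8)] + [((0, 0), (9.0, 9.0))]
model, inliers = ransac_dual(first, first[:8], 2, 50, 0.5)
print(model, len(inliers))  # the outlier pair is rejected
```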
As an optional embodiment of the present invention, after step 14, the image registration method further comprises:
step 15: and calculating an evaluation index for evaluating the result after registration based on the inner point set and the transformation matrix.
Wherein the evaluation index comprises: the correct matching rate, the root mean square error, and the running time.
The invention adopts three evaluation indexes commonly used in the field of image registration:
(1) The Correct Matching Rate (CMR) measures the quality of a matching result: it is the number of correctly matched point pairs divided by the number of all matched point pairs, and can be calculated by the following formula

CMR = N_corr / N_c

wherein N_corr denotes the number of correctly matched point pairs after matching and N_c denotes the number of all matched point pairs. The larger the CMR value, the more correct matches the algorithm obtains and the more accurate the matching result.
(2) The Root Mean Square Error (RMSE) measures the quality of the transformation model and can be calculated by the following formula

RMSE = sqrt( (1/N) · Σ_{i=1}^{N} [ (x_i′ − x_i)² + (y_i′ − y_i)² ] )

wherein N denotes the number of all matching point pairs obtained after fine matching, and (x_i′, y_i′) and (x_i, y_i) denote the coordinates of the feature points in the image to be registered and in the reference image, respectively. The distance adopted by the invention is the Euclidean distance; the smaller the root mean square error, the better the transformation model and the more accurate the registration.
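A direct transcription of the RMSE formula (assuming corresponding point coordinates are compared after the estimated transformation has been applied):

```python
import math

def rmse(points_a, points_b):
    """Root mean square Euclidean distance between corresponding points."""
    n = len(points_a)
    total = sum(
        (xa - xb) ** 2 + (ya - yb) ** 2
        for (xa, ya), (xb, yb) in zip(points_a, points_b)
    )
    return math.sqrt(total / n)

print(rmse([(0, 0), (3, 4)], [(0, 0), (0, 0)]))
```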
(3) Running time: the shorter the running time, the higher the efficiency of the algorithm; the longer the running time, the lower the efficiency and the worse the performance.
The correct matching rate, root mean square error, and registration speed of RANSAC, its improved variants, and the algorithm provided by the invention are compared below.
The matching performance of the proposed image registration method is verified through simulation experiments. The experimental programs are written in MATLAB and run on a computer with an Intel Core i5-8250U CPU, 8.00 GB of memory, and a 64-bit Windows 10 operating system. The proposed algorithm is used to obtain the fine matching result of the feature points and the transformation model parameters, and RANSAC, R-RANSAC, and LO-RANSAC are selected for comparison. The probability of the comparison algorithms is set to 0.99, their distance threshold to 1, and the maximum number of iterations to 10000; the two distance thresholds of the proposed algorithm are set to 1 and 0.5. Algorithm performance is compared in terms of correct matching rate, root mean square error, and registration speed.
To verify the effectiveness of the improved RANSAC algorithm based on k-nearest-neighbor matching similarity, five groups of images are selected for the experiments. The reference image and the image to be registered differ by changes such as angle, translation, and scaling, and their details are similar and highly repetitive. Figs. 3, 4, and 5 involve translation, viewing-angle, and zoom changes, while Figs. 6 and 7 mainly involve viewing-angle changes. The experimental images contain a large number of similar textures, and the initial matching point set contains a large number of mismatched point pairs.
Fig. 8-11 show experimental results of Church using 4 registration algorithms, where the obtained matching point pairs are connected by straight lines, and the values in parentheses of the subheading of each result graph indicate the number of correct matching point pairs and the number of detected matching point pairs.
According to the matching result graphs, the RANSAC algorithm obtains 51 pairs of matching points, of which 32 pairs are correct matches; the R-RANSAC algorithm obtains 57 pairs, of which 38 are correct; the LO-RANSAC algorithm obtains 57 pairs, of which 39 are correct; and the algorithm of the present invention obtains 64 pairs, of which 53 are correct. The experimental results show that the accuracy of the proposed algorithm is the highest.
Table 2 shows the average results obtained by repeating the Church image experiment 500 times; it can be seen that the algorithm provided by the invention improves both accuracy and efficiency.
TABLE 2Church image registration results
Algorithm     Correct matching rate    Root mean square error    Time
RANSAC        62.95%                   0.7535                    2.9774
R-RANSAC      66.12%                   0.7832                    2.0017
LO-RANSAC     68.72%                   0.7473                    2.9890
Proposed      81.94%                   0.5902                    0.8186
Table 3 gives the correct matching rates of the 5 sets of experimental data under the four algorithms, Table 4 gives their root mean square errors, and Table 5 gives their running times; all of the following experimental results are averages over 500 repeated experiments.
TABLE 3 correct match rates of the four algorithms
(Table 3 is rendered as an image in the original document and its values are not reproduced here.)
TABLE 4 root mean square error of the four algorithms
(Table 4 is rendered as an image in the original document and its values are not reproduced here.)
TABLE 5 time of the four algorithms
(Table 5 is rendered as an image in the original document and its values are not reproduced here.)
The experimental results show that, compared with RANSAC, R-RANSAC, and LO-RANSAC, the algorithm provided by the invention performs best in both time and matching accuracy. For the Magdalen image the proposed algorithm is slightly inferior to LO-RANSAC, but over all data sets its mean root mean square error is the smallest; that is, the proposed method improves both speed and precision.
In the present invention, unless expressly stated or limited otherwise, the recitation of a first feature "on" or "under" a second feature may include the recitation of the first and second features being in direct contact, and may also include the recitation that the first and second features are not in direct contact, but are in contact via another feature between them. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. "beneath," "under" and "beneath" a first feature includes the first feature being directly beneath and obliquely beneath the second feature, or simply indicating that the first feature is at a lesser elevation than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples described in this specification can be combined and combined by those skilled in the art.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (8)

1. An improved RANSAC-based image registration method, comprising:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
and step 3: for each target feature point pair of the initial feature point pair, determining k neighboring first feature points of a first target feature point in the first image and k neighboring second feature points of a second target feature point in the second image by distance;
the target feature point pairs comprise first target feature points and second target feature points, and the h-th first feature point is matched with the f-th second feature point; h = f;
and 4, step 4: comparing the matching logarithm of the first characteristic point and the second characteristic point with a logarithm threshold value, and determining an outer point in the target characteristic point pair;
and 5: calculating the similarity between the first target characteristic point and the second target characteristic point;
step 6: determining a first characteristic point set and a second characteristic point set which are composed of the roughly screened interior points by using the size relationship between the similarity and a first similarity threshold value and a second similarity threshold value;
the number of the interior points in the first characteristic point set is greater than that in the second characteristic point set;
and 7: randomly selecting a preset number of target characteristic point pairs from the second characteristic point set as matrix calculation characteristic point pairs aiming at the current iteration;
and 8: calculating the position information of the characteristic point pairs based on the matrix, and calculating a transformation matrix of the image pair to be matched;
and step 9: calculating Euclidean distance errors of each target characteristic point pair in the first characteristic point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is less than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration;
step 12: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is not less than the maximum number of the target characteristic point pairs, updating the maximum number of the target characteristic point pairs of the current iteration to be the number of the target characteristic point pairs in the third characteristic point set;
step 13: returning to the step 7 for the next iteration, until the maximum iteration times is reached, taking a third characteristic point set corresponding to the maximum target characteristic point number as an inner point set, and obtaining the inner point set and a transformation matrix corresponding to the inner point set;
step 14: registering the image pair to be matched based on the transformation matrix;
the step 4 comprises the following steps:
step 41: for each pair of target feature points, determining a matched first feature point, a matched second feature point and a matched logarithm;
step 42: when the matching logarithm of the first characteristic point and the second characteristic point is smaller than a logarithm threshold value, determining a target characteristic point pair consisting of a first target characteristic point corresponding to the first characteristic point and a second target characteristic point corresponding to the second characteristic point as an outer point;
step 43: when the matched logarithm of the first characteristic point and the second characteristic point is not smaller than a logarithm threshold value, for the matched first characteristic point and the matched second characteristic point, connecting the first characteristic point with the corresponding first target characteristic point to obtain a first vector of the first target characteristic point, and connecting the second characteristic point with the corresponding second target characteristic point to obtain a second vector of the second target characteristic point;
the step 6 comprises the following steps:
step 61: determining target characteristic point pairs with similarity larger than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
step 62: determining the target feature point pairs whose similarity is not greater than the first similarity threshold but greater than the second similarity threshold as coarsely screened interior points, and adding them to the first feature point set.
2. The image registration method according to claim 1, wherein the step 3 comprises:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
3. The image registration method according to claim 1, wherein the step 5 comprises:
and calculating the similarity between the first target characteristic point and the second target characteristic point by using the first vector and the second vector.
4. The image registration method of claim 1, wherein prior to the step 5, the image registration method further comprises:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include: the direction difference α_i and the scale ratio r_i, with α_i = mod(θ_i − θ_i′, 2π) and r_i = σ_i/σ_i′, where θ_i denotes the principal direction of the i-th first target feature point, θ_i′ denotes the principal direction of the i-th second target feature point, σ_i denotes the scale of the i-th first target feature point, σ_i′ denotes the scale of the i-th second target feature point, and i = 1, 2, …, n is the serial number of the target feature point pair;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein the transformation formula is:

v_i,j′ = r_i · R(α_i) · (p_i,j′ − p_i′)

where R(α_i) denotes the 2×2 rotation matrix through the angle α_i, p_i′ denotes the second target feature point, p_i,j′ denotes the j-th second feature point corresponding to the second target feature point, and v_i,j′ is the transformed second vector.
5. The image registration method according to claim 1, wherein the step 6 further comprises:
and step 63: and determining the target characteristic point pairs with the similarity smaller than the second similarity threshold as the outer points.
6. The image registration method according to claim 1, wherein the step 11 comprises:
step 111: determining the number of the maximum target characteristic point pairs of the current iteration;
step 112: and if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is less than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration.
7. The image registration method of claim 1, wherein after the step 14, the image registration method further comprises:
step 15: and calculating an evaluation index for evaluating the result after the registration based on the inner point set and the transformation matrix.
8. The image registration method according to claim 7, wherein the evaluation index comprises: the correct matching rate, the root mean square error, and the running time.
CN202110548022.1A 2021-05-19 2021-05-19 Improved RANSAC-based image registration method Active CN113470085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110548022.1A CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110548022.1A CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Publications (2)

Publication Number Publication Date
CN113470085A CN113470085A (en) 2021-10-01
CN113470085B true CN113470085B (en) 2023-02-10

Family

ID=77870996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110548022.1A Active CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Country Status (1)

Country Link
CN (1) CN113470085B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934571B (en) * 2024-03-21 2024-06-07 广州市艾索技术有限公司 4K high-definition KVM seat management system
CN118506037B (en) * 2024-07-18 2024-09-20 中数智科(杭州)科技有限公司 Matching method for parts of bottom of railway vehicle

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872475A (en) * 2009-04-22 2010-10-27 中国科学院自动化研究所 Method for automatically registering scanned document images
CN102088569A (en) * 2010-10-13 2011-06-08 首都师范大学 Sequence image splicing method and system of low-altitude unmanned vehicle
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN106971404A (en) * 2017-03-20 2017-07-21 西北工业大学 A kind of robust SURF unmanned planes Color Remote Sensing Image method for registering
CN107103580A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 A kind of HDR three-dimensional image registration method
CN108401565B (en) * 2015-05-28 2017-12-15 西北工业大学 Remote sensing image registration method based on improved KAZE features and Pseudo-RANSAC algorithms
CN108022228A (en) * 2016-10-31 2018-05-11 天津工业大学 Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu
CN110097585A (en) * 2019-04-29 2019-08-06 中国水利水电科学研究院 A kind of SAR image matching method and system based on SIFT algorithm
CN110738695A (en) * 2019-10-12 2020-01-31 哈尔滨工业大学 image feature point mismatching and removing method based on local transformation model
CN111767960A (en) * 2020-07-02 2020-10-13 中国矿业大学 Image matching method and system applied to image three-dimensional reconstruction
CN111798453A (en) * 2020-07-06 2020-10-20 博康智能信息技术有限公司 Point cloud registration method and system for unmanned auxiliary positioning
CN111833249A (en) * 2020-06-30 2020-10-27 电子科技大学 UAV image registration and splicing method based on bidirectional point characteristics
CN112150520A (en) * 2020-08-18 2020-12-29 徐州华讯科技有限公司 Image registration method based on feature points


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration;Zahra Hossein-Nejad等;《Biomedical Signal Processing and Control》;20180831;325-338 *
Geometric feature matching method based on deep learning; Li Jian et al.; Computer Science; Jul. 2019; vol. 46, no. 7; pp. 274-279 *
Research on image mosaic technology based on feature points; Jiang Xiaohui; China Masters' Theses Full-text Database, Information Science and Technology; Feb. 15, 2016; vol. 2016, no. 2; I138-1358 *
Research on image mosaic technology based on accurate registration of feature points; Zhong Ming; China Masters' Theses Full-text Database, Information Science and Technology; Oct. 15, 2015; vol. 2015, no. 10; I138-319 *

Also Published As

Publication number Publication date
CN113470085A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN113012212B (en) Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN105844669B (en) A kind of video object method for real time tracking based on local Hash feature
CN113470085B (en) Improved RANSAC-based image registration method
CN107633226B (en) Human body motion tracking feature processing method
CN109697692B (en) Feature matching method based on local structure similarity
CN103403704B (en) For the method and apparatus searching arest neighbors
CN109344845B (en) Feature matching method based on triple deep neural network structure
CN103034982B (en) Image super-resolution rebuilding method based on variable focal length video sequence
CN110245683B (en) Residual error relation network construction method for less-sample target identification and application
CN107239792A (en) A kind of workpiece identification method and device based on binary descriptor
Mao et al. Uasnet: Uncertainty adaptive sampling network for deep stereo matching
CN113065525A (en) Age recognition model training method, face age recognition method and related device
CN111898428A (en) Unmanned aerial vehicle feature point matching method based on ORB
CN112364881B (en) Advanced sampling consistency image matching method
CN117274133A (en) Defect detection method, electronic device and storage medium
CN113379788A (en) Target tracking stability method based on three-element network
CN112418250B (en) Optimized matching method for complex 3D point cloud
CN113128518A (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN117253062A (en) Relay contact image characteristic quick matching method under any gesture
CN116630662A (en) Feature point mismatching eliminating method applied to visual SLAM
CN117274754A (en) Gradient homogenization point cloud multi-task fusion method
CN116958809A (en) Remote sensing small sample target detection method for feature library migration
CN113591704B (en) Body mass index estimation model training method and device and terminal equipment
CN113221914B (en) Image feature point matching and mismatching elimination method based on Jacobsad distance
CN103823889B (en) L1 norm total geometrical consistency check-based wrong matching detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant