CN113470085A - Image registration method based on improved RANSAC - Google Patents

Image registration method based on improved RANSAC

Info

Publication number
CN113470085A
Authority
CN
China
Prior art keywords
characteristic point
target
point
feature
pairs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110548022.1A
Other languages
Chinese (zh)
Other versions
CN113470085B (en)
Inventor
冯大政
曾晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110548022.1A priority Critical patent/CN113470085B/en
Publication of CN113470085A publication Critical patent/CN113470085A/en
Application granted granted Critical
Publication of CN113470085B publication Critical patent/CN113470085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a dual-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature point pairs whose number of matched pairs among their k nearest neighbors is smaller than a pair-count threshold are treated as outliers; pairs whose matched-pair count is not smaller than the threshold and whose matching similarity is greater than a first similarity threshold are added to both a first and a second feature point set, while pairs whose similarity lies between a second similarity threshold and the first are added to the first set only. Transformation matrix parameters and an inlier set are then obtained from the two feature point sets through the improved RANSAC algorithm, and the image pair to be matched is registered with the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.

Description

Image registration method based on improved RANSAC
Technical Field
The invention belongs to the technical field of image registration, and particularly relates to an image registration method based on improved RANSAC.
Background
In the process of image registration, after the NNDR (nearest neighbor distance ratio) method is used to match the feature points, the obtained matching result is very rough and contains a large number of mismatched points, which affects the accuracy of image registration; further accurate matching is therefore needed, and the prior art proposes using the Random Sample Consensus (RANSAC) algorithm to perform this accurate matching.
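As a concrete illustration, a minimal NNDR matching step can be sketched with OpenCV as follows; the ratio value 0.75 is a conventional choice and the function name is an assumption of this sketch, not something specified by the patent.

```python
# Minimal sketch of SIFT feature extraction plus NNDR initial matching.
import cv2

def nndr_match(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints carry position,
    kp2, des2 = sift.detectAndCompute(img2, None)   # scale and orientation
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)       # two nearest descriptors
    # NNDR test: accept a match only if it is clearly better than the runner-up
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return kp1, kp2, good
```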
The RANSAC algorithm is one of the most commonly used fine matching and model parameter estimation algorithms: it randomly samples matching point pairs and then estimates the transformation model parameters from the sampled matching feature points. However, the number of iterations of the RANSAC algorithm is difficult to determine, and when the feature point set contains many mismatched points, those points are repeatedly sampled, so the number of iterations rises rapidly and both efficiency and accuracy suffer.
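For reference, the classic RANSAC loop described here can be sketched as follows, using a least-squares affine model as the transformation; all names are illustrative assumptions, and this is not the patent's implementation.

```python
# Generic RANSAC sketch: sample, fit, count inliers, keep the best model.
import numpy as np

def ransac(src, dst, n_sample=3, threshold=1.0, max_iter=1000, seed=0):
    """src, dst: (n, 2) arrays of matched points. Returns model and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers, best_M = np.zeros(len(src), bool), None
    for _ in range(max_iter):
        idx = rng.choice(len(src), size=n_sample, replace=False)  # random subsample
        A = np.hstack([src[idx], np.ones((n_sample, 1))])
        M, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)          # fit affine model
        proj = np.hstack([src, np.ones((len(src), 1))]) @ M       # apply to all points
        inliers = np.linalg.norm(proj - dst, axis=1) < threshold  # distance test
        if inliers.sum() > best_inliers.sum():                    # keep best model
            best_inliers, best_M = inliers, M
    return best_M, best_inliers
```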
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides an image registration method based on improved RANSAC. The technical solution adopted by the invention is as follows:
the invention provides an image registration method based on improved RANSAC, which comprises the following steps:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
step 3: for each target feature point pair of the initial feature point pairs, determining the k nearest neighbors of the first target feature point in the first image and the k nearest neighbors of the second target feature point in the second image according to distance;
the target feature point pairs comprise first target feature points and second target feature points, wherein the ith first feature point is matched with the ith second feature point;
step 4: comparing the number of matched pairs of first and second feature points with a pair-count threshold, and determining the outliers among the target feature point pairs;
step 5: calculating the similarity between the first target feature point and the second target feature point;
step 6: determining a first feature point set and a second feature point set composed of the roughly screened inliers by using the magnitude relationship between the similarity and a first similarity threshold and a second similarity threshold;
wherein the number of inliers in the first feature point set is greater than that in the second feature point set;
step 7: for the current iteration, randomly selecting a preset number of target feature point pairs from the second feature point set as matrix calculation feature point pairs;
step 8: calculating a transformation matrix of the image pair to be matched based on the position information of the matrix calculation feature point pairs;
step 9: calculating the Euclidean distance error of each target feature point pair in the first feature point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is smaller than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration;
step 12: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is not less than the maximum number of the target characteristic point pairs, updating the maximum number of the target characteristic point pairs of the current iteration to be the number of the target characteristic point pairs in the third characteristic point set;
step 13: returning to step 7 for the next iteration until the maximum number of iterations is reached, then taking the third feature point set corresponding to the maximum number of target feature point pairs as the inlier set, thereby obtaining the inlier set and the transformation matrix corresponding to it;
step 14: and registering the image pair to be matched based on the transformation matrix.
Optionally, step 3 includes:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
Optionally, step 4 includes:
step 41: for each target feature point pair, determining the matched first feature points, the matched second feature points, and the number of matched pairs;
wherein the ith first feature point is matched with the ith second feature point;
step 42: when the number of matched pairs of first and second feature points is smaller than the pair-count threshold, determining the target feature point pair consisting of the first target feature point corresponding to the first feature points and the second target feature point corresponding to the second feature points as an outlier;
step 43: when the number of matched pairs of first and second feature points is not smaller than the pair-count threshold, for each matched first and second feature point, connecting the first feature point with the corresponding first target feature point to obtain a first vector of the first target feature point, and connecting the second feature point with the corresponding second target feature point to obtain a second vector of the second target feature point.
Optionally, step 5 includes:
and calculating the similarity between the first target characteristic point and the second target characteristic point by using the first vector and the second vector.
Optionally, before step 5, the image registration method further includes:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include the direction difference $\alpha_i$ and the scale ratio $r_i$, with $\alpha_i = \mathrm{mod}(\theta_i - \theta_i', 2\pi)$ and $r_i = \sigma_i/\sigma_i'$, where $\theta_i$ denotes the principal direction of the first target feature point, $\theta_i'$ the principal direction of the second target feature point, $\sigma_i$ the scale of the ith first target feature point, $\sigma_i'$ the scale of the second target feature point, and $i = 1, 2, \ldots, n$ the serial number of the target feature point pair;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein, the transformation formula is as follows:
$$v_{i,j}' = r_i\, R(\alpha_i)\,(p_{i,j}' - p_i')$$
where $p_i'$ denotes the second target feature point, $p_{i,j}'$ denotes the second feature point corresponding to the second target feature point, $R(\alpha_i)$ denotes the rotation matrix through the direction difference $\alpha_i$, and the transformed second vector is $v_{i,j}'$.
Optionally, step 6 includes:
step 61: determining target characteristic point pairs with similarity greater than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
the method comprises the following steps: 62: and determining the target characteristic point pairs with the similarity not greater than the first similarity threshold but greater than the second similarity threshold as the roughly screened interior points, and adding the first characteristic point set.
Optionally, step 6 further includes:
step 63: determining the target feature point pairs whose similarity is smaller than the second similarity threshold as outliers.
Optionally, step 11 includes:
step 111: determining the maximum number of target feature point pairs of the current iteration;
step 112: if the number of target feature point pairs in the third feature point set of the current iteration is less than the maximum number of target feature point pairs, returning to step 7 for the next iteration.
Optionally, after step 14, the image registration method further includes:
step 15: calculating evaluation indexes for evaluating the result after registration, based on the inlier set and the transformation matrix.
Wherein the evaluation indexes comprise: the correct matching rate, the root mean square error, and the running time.
The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a dual-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature point pairs whose number of matched pairs among their k nearest neighbors is smaller than a pair-count threshold are treated as outliers; pairs whose matched-pair count is not smaller than the threshold and whose matching similarity is greater than a first similarity threshold are added to both a first and a second feature point set, while pairs whose similarity lies between a second similarity threshold and the first are added to the first set only. Transformation matrix parameters and an inlier set are then obtained from the two feature point sets through the improved RANSAC algorithm, and the image pair to be matched is registered with the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a flowchart of an improved RANSAC-based image registration method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a transformation relationship between two sets of line segments provided by an embodiment of the present invention;
FIG. 3 is an image of a translational change provided by an embodiment of the present invention;
FIG. 4 is an image of a change in viewing angle provided by an embodiment of the present invention;
FIG. 5 is an image of a zoom change provided by an embodiment of the present invention;
FIG. 6 is an image with a primarily changing viewing angle provided by an embodiment of the present invention;
FIG. 7 is an image with a primarily changing viewing angle provided by an embodiment of the present invention;
FIG. 8 is a graph of the matching results of the RANSAC algorithm on Church images;
FIG. 9 is a graph of the matching results of the R-RANSAC algorithm on Church images;
FIG. 10 is a graph of the matching results of the LO-RANSAC algorithm on Church images;
FIG. 11 is a graph of the matching results of the method of the present invention on Church images.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
As shown in fig. 1, the image registration method based on improved RANSAC provided by the present invention includes:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
step 3: for each target feature point pair of the initial feature point pairs, determining the k nearest neighbors of the first target feature point in the first image and the k nearest neighbors of the second target feature point in the second image according to distance;
the target feature point pairs comprise first target feature points and second target feature points;
step 4: determining the outliers among the target feature point pairs based on the magnitude relation between the number of matched target feature point pairs and the pair-count threshold;
step 5: calculating the similarity between the first target feature point and the second target feature point;
step 6: determining a first characteristic point set and a second characteristic point set which are composed of the roughly screened interior points by using the size relationship between the similarity and a first similarity threshold value and a second similarity threshold value;
the number of the interior points in the first characteristic point set is greater than that in the second characteristic point set;
step 7: for the current iteration, randomly selecting a preset number of target feature point pairs from the second feature point set as matrix calculation feature point pairs;
step 8: calculating a transformation matrix of the image pair to be matched based on the position information of the matrix calculation feature point pairs;
and the k neighbor matching similarity algorithm continuously removes a part of mismatching points based on the rough selection result by utilizing the position, scale and direction information of the feature points and the neighbor points thereof to obtain a feature point set with a higher correct matching point ratio.
The invention is still a registration algorithm based on point features, and the adopted features are SIFT features. Suppose a SIFT feature point is located at $p = (x, y)^T$; it can then be represented as
$$l = (p^T, \theta, \sigma)^T \tag{1-1}$$
where $\theta$ represents the principal direction and $\sigma$ represents the scale. Suppose $(l_i, l_i')$, $i = 1, 2, \ldots, n$, denotes the feature point pairs, where
$$l_i = (p_i^T, \theta_i, \sigma_i)^T \tag{1-2}$$
$$l_i' = (p_i'^T, \theta_i', \sigma_i')^T \tag{1-3}$$
The direction difference of the two feature points is
$$\alpha_i = \mathrm{mod}(\theta_i - \theta_i', 2\pi) \tag{1-4}$$
and the scale ratio is
$$r_i = \sigma_i / \sigma_i' \tag{1-5}$$
The direction difference approximately represents the rotation angle of the local area of the images to be registered, and the scale ratio approximately represents the scaling of that local area.
The idea of the k-nearest-neighbor matching similarity algorithm is to find, by distance, the k nearest neighbor feature points of $l_i$ and of $l_i'$ in the feature point sets of the images to be matched. The matched point pairs contained in the two k-nearest-neighbor sets form a set, recorded as
$$H_i = \{(l_{i,j}, l_{i,j}') \mid j = 1, 2, \ldots, m\} \tag{1-6}$$
where $(l_{i,j}, l_{i,j}')$ represents the jth matching point pair and $m$ represents the number of matching point pairs among the k nearest neighbor feature points of a feature point pair. The larger $m$ is, the more matching point pairs there are among the k nearest neighbors and the greater the probability that $l_i$ and $l_i'$ are a matching pair, so $m$ can serve as one of the parameters describing whether the features match.
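The neighbor sets and the count $m$ of eq. (1-6) can be built with a standard k-d tree. The following is a minimal sketch of this step; SciPy, the aligned-array layout, and the function name are assumptions of this sketch rather than the patent's reference code.

```python
# Build H_i: matched pairs that appear in both k-neighbourhoods of pair i.
import numpy as np
from scipy.spatial import cKDTree

def neighbor_match_sets(P, P2, k=8):
    """P, P2: (n, 2) arrays of matched point positions (P[i] matches P2[i]).
    Returns, for each pair i, the indices j forming H_i and the count m_i."""
    tree1, tree2 = cKDTree(P), cKDTree(P2)
    H, m = [], np.zeros(len(P), dtype=int)
    for i in range(len(P)):
        # k+1 because the query point itself is returned as its own neighbour
        _, nn1 = tree1.query(P[i], k=k + 1)
        _, nn2 = tree2.query(P2[i], k=k + 1)
        common = (set(nn1) & set(nn2)) - {i}  # pairs matched in both neighbourhoods
        H.append(sorted(common))
        m[i] = len(common)
    return H, m
```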
step 9: calculating the Euclidean distance error of each target feature point pair in the first feature point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is smaller than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration;
step 12: if the number of target feature point pairs in the third feature point set of the current iteration is not less than the maximum number of target feature point pairs, updating the maximum number of target feature point pairs of the current iteration to the number of target feature point pairs in the third feature point set;
step 13: returning to step 7 for the next iteration until the maximum number of iterations is reached, then taking the third feature point set corresponding to the maximum number of target feature point pairs as the inlier set, thereby obtaining the inlier set and the transformation matrix corresponding to it;
step 14: and registering the image pair to be matched based on the transformation matrix.
Besides the number of matches among the k nearest neighbor feature points, the invention also uses the scale and direction of the k nearest neighbor features for more accurate matching detection. Let $p_i$ and $p_i'$ be the positions of $l_i$ and $l_i'$. Connecting each feature point by line segments to the feature points in the set $H_i$ yields two sets of line segments. If $l_i$ and $l_i'$ are a correct matching pair, then after transformation the corresponding line segments in the two sets should be approximately the same in scale and direction. FIG. 2 depicts the transformation relationship of the two line segment sets: the segment connecting $p_i$ and $p_{i,j}$ is represented as the vector $v_{i,j}$, and the segment connecting $p_i'$ and $p_{i,j}'$, after being rotated by $\alpha_i$ and scaled by $r_i$, is represented as the vector $v_{i,j}'$ (indicated by the dashed line). The similarity of the two vectors can likewise serve as one of the parameters describing whether the features match. The vectors $v_{i,j}$ and $v_{i,j}'$ can be expressed as:
$$v_{i,j} = p_{i,j} - p_i \tag{1-7}$$
$$v_{i,j}' = r_i\, R(\alpha_i)\,(p_{i,j}' - p_i') \tag{1-8}$$
where $R(\alpha_i)$ denotes the rotation matrix through the angle $\alpha_i$.
First, the similarity of the moduli and the directions of the two vectors is calculated. The modulus similarity is measured by the ratio of the two vector moduli: the invention divides the smaller of the two moduli by the larger to obtain the measure, denoted $d_{i,j}$:
$$d_{i,j} = \frac{\min(\|v_{i,j}\|_2,\ \|v_{i,j}'\|_2)}{\max(\|v_{i,j}\|_2,\ \|v_{i,j}'\|_2)} \tag{1-9}$$
The direction similarity $g_{i,j}$ is expressed through the angle between the vectors:
$$g_{i,j} = \frac{v_{i,j} \cdot v_{i,j}'}{\|v_{i,j}\|_2\, \|v_{i,j}'\|_2} \tag{1-10}$$
where $\|\cdot\|_2$ represents the two-norm of a vector.
The k-nearest-neighbor matching similarity $s_i$ of $(l_i, l_i')$ can be expressed as
$$s_i = \frac{1}{|H_i|} \sum_{j=1}^{m} \left[ \mu\, d_{i,j} + (1 - \mu)\, g_{i,j} \right] \tag{1-11}$$
where $|\cdot|$ represents the cardinality of a set and $\mu$ is a parameter balancing the modulus similarity and the direction similarity: when $\mu > 0.5$ the modulus similarity has the greater influence on the matching result, and when $\mu < 0.5$ the direction similarity does; $\mu$ is set to 0.5 in the experiments of the invention. $s_i$ lies in the range $[0, 1]$: the larger $s_i$ is, the greater the probability that $(l_i, l_i')$ is a correct matching pair.
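Under the equations as reconstructed above, the per-pair similarity can be computed as in the following sketch; the function and variable names are assumptions of this sketch.

```python
# Sketch of eqs. (1-4)-(1-11): k-nearest-neighbour matching similarity s_i.
import numpy as np

def knn_match_similarity(p, p2, theta, theta2, sigma, sigma2, nbr_p, nbr_p2, mu=0.5):
    """p, p2: positions of one matched pair; theta*, sigma*: principal
    directions and scales; nbr_p, nbr_p2: (m, 2) positions of the matched
    neighbour pairs in H_i. Returns the similarity s_i of eq. (1-11)."""
    alpha = np.mod(theta - theta2, 2 * np.pi)          # direction difference (1-4)
    r = sigma / sigma2                                 # scale ratio (1-5)
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])    # rotation by alpha
    v = nbr_p - p                                      # eq. (1-7)
    v2 = r * (nbr_p2 - p2) @ R.T                       # eq. (1-8), row-wise
    n1, n2 = np.linalg.norm(v, axis=1), np.linalg.norm(v2, axis=1)
    d = np.minimum(n1, n2) / np.maximum(n1, n2)        # modulus similarity (1-9)
    g = np.sum(v * v2, axis=1) / (n1 * n2)             # direction similarity (1-10)
    return np.mean(mu * d + (1 - mu) * g)              # eq. (1-11)
```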
The traditional algorithm only considers the distance characteristics of the feature points and the adjacent points thereof, and the k-nearest neighbor matching similarity algorithm considers the position, direction and scale characteristics of the feature points and the adjacent points thereof on the basis of the traditional algorithm, so that a large number of mismatching points can be effectively removed, and the subsequent registration work is facilitated.
After the feature point matching operation, the transformation relation between the images to be registered needs to be solved through a model parameter estimation algorithm. The RANSAC algorithm is one of the most commonly used model parameter estimation algorithms, and estimates transformation model parameters in an iterative manner. The algorithm mainly comprises the following steps:
1. Randomly select from the matching point sample set a subsample just large enough to compute the model parameters, and compute the model parameters from it.
2. Compute the distance between each pair of transformed corresponding points and use this distance as the evaluation index: feature points whose distance is smaller than the set threshold are inliers, otherwise they are outliers. Count the inliers; the larger the number of inliers, the better the quality of the obtained model.
3. Repeat the above steps, recording the model with the best quality, i.e., the one with the largest number of inliers, and exit when the iteration limit is reached; the best-quality model is the required transformation model.
The basic principle of the RANSAC algorithm is as follows. If the probability that each point selected in a random sample is an inlier is $w$, then the probability that a sample point is an outlier is $1 - w$. The probability that all points in a randomly sampled subsample are correct matches is $w^n$, where $n$ represents the number of randomly sampled points; the probability that at least one of the $n$ sampled point pairs is a mismatched point is $1 - w^n$; and the probability that no all-inlier subsample is obtained after $k$ iterations is $(1 - w^n)^k$. Letting $p$ be the probability that some subsample drawn within $k$ iterations consists entirely of inliers,
$$1 - p = (1 - w^n)^k, \qquad k \to \infty,\ p \to 1 \tag{1-12}$$
$$k = \frac{\log(1 - p)}{\log(1 - w^n)} \tag{1-13}$$
According to the above formulas, as the number of iterations tends to infinity, $(1 - w^n)^k$ approaches 0 and the probability $p$ approaches 1; that is, when the number of iterations is large enough, the RANSAC algorithm is certain to obtain the correct transformation model.
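Eq. (1-13) is easy to check numerically, as in this small sketch (the parameter values are illustrative):

```python
# Numeric check of eq. (1-13): iterations needed so that an all-inlier
# sample is drawn with probability p (the standard RANSAC bound).
import math

def ransac_iterations(p=0.99, w=0.5, n=4):
    """p: desired success probability; w: inlier ratio; n: sample size."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** n))

print(ransac_iterations(0.99, 0.5, 4))  # 72 iterations for a 50% inlier ratio
```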
RANSAC performs fine matching iteratively while obtaining the parameters of the transformation model, but the number of iterations is difficult to determine: if it is too low, the correct model parameters cannot be found; if it is too high, the algorithm's efficiency drops. As the proportion of mismatched points in the matching point set increases, the number of iterations required by the parameter estimation algorithm also grows rapidly, and the accuracy and efficiency of the algorithm decline markedly. Because of these problems, researchers have proposed many improved RANSAC algorithms. Matas et al. proposed the Randomized RANSAC (R-RANSAC) algorithm, which checks model parameters on a randomly sampled subset of the feature point data, rejects models that fail the subset check, and verifies the models that pass it against all the data, greatly reducing the amount of computation and improving efficiency. Chum et al. proposed the Locally Optimized RANSAC (LO-RANSAC) algorithm, which improves accuracy through a local optimization step.
In order to alleviate these problems of RANSAC, the invention provides a RANSAC algorithm using double thresholds, which improves both efficiency and accuracy. In this method, each iteration samples from the result of the previous iteration and gradually corrects the result, which reduces the number of iterations and yields a more accurate and stable matching result.
In each iteration, two thresholds are set: a smaller threshold $d_s$ and a larger threshold $d_h$. The set of feature point pairs whose distance after transformation is less than $d_s$ is denoted $D_s$, and the set whose distance after transformation is less than $d_h$ is denoted $D_h$. In each iteration, the transformation model is computed by sampling from the feature point set $D_s$ obtained in the previous iteration, and the quality of the model is evaluated by the number of feature points in the point set $D_h$. Because the point set obtained with the small threshold has a higher correct matching rate, the probability of obtaining a correct model by sampling from it is higher and convergence is faster.
The detailed procedure of the method of the present invention is given below.
Table 1 improved RANSAC algorithm based on k-nearest neighbor similarity
[Table 1 is rendered as an image in the original publication; it lists the step-by-step procedure of the improved algorithm described above.]
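Since Table 1 appears only as an image in the source, the following is a minimal sketch of the dual-threshold loop described above (steps 7-13 of the method), assuming an affine model fitted by least squares; the function names, the affine model choice, and the fallback when $D_s$ is too small to sample are assumptions of this sketch rather than details fixed by the patent. The two index sets would come from the k-nearest-neighbor similarity screening of steps 4-6.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst ((n, 2) arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])     # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)      # (3, 2) parameter matrix
    return M

def dual_threshold_ransac(P1, P2, first_set, second_set,
                          d_s=0.5, d_h=1.0, n_sample=3, max_iter=1000, seed=0):
    """P1, P2: (n, 2) matched positions (P1[i] matches P2[i]).
    first_set: indices of the coarsely screened first feature point set
    (evaluated with the large threshold d_h); second_set: indices of the
    high-confidence second set (sampled with the small threshold d_s)."""
    rng = np.random.default_rng(seed)
    first_set, Ds = np.asarray(first_set), np.asarray(second_set)
    best_inliers, best_M = np.array([], dtype=int), None
    ones = np.ones((len(first_set), 1))
    for _ in range(max_iter):
        sample = rng.choice(Ds, size=n_sample, replace=False)      # step 7
        M = estimate_affine(P1[sample], P2[sample])                # step 8
        err = np.linalg.norm(np.hstack([P1[first_set], ones]) @ M
                             - P2[first_set], axis=1)              # step 9
        inliers = first_set[err < d_h]                             # step 10
        if len(inliers) >= len(best_inliers):                      # steps 11-12
            best_inliers, best_M = inliers, M
            Ds = first_set[err < d_s]          # next iteration samples from D_s
            if len(Ds) < n_sample:             # fall back if D_s is too small
                Ds = np.asarray(second_set)
    return best_M, best_inliers                                    # step 13
```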
The invention provides an image registration method based on improved RANSAC, which fuses a k-nearest-neighbor matching similarity algorithm with a dual-threshold RANSAC algorithm. When inlier screening is performed with the k-nearest-neighbor matching similarity algorithm, feature point pairs whose number of matched pairs among their k nearest neighbors is smaller than a pair-count threshold are treated as outliers; pairs whose matched-pair count is not smaller than the threshold and whose matching similarity is greater than a first similarity threshold are added to both a first and a second feature point set, while pairs whose similarity lies between a second similarity threshold and the first are added to the first set only. Transformation matrix parameters and an inlier set are then obtained from the two feature point sets through the improved RANSAC algorithm, and the image pair to be matched is registered with the inlier set and the transformation matrix. The invention can therefore improve both registration accuracy and registration efficiency.
As an optional embodiment of the present invention, step 3 includes:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
As an optional embodiment of the present invention, step 4 includes:
step 41: for each target feature point pair, determining the matched first feature points, the matched second feature points, and the number of matched pairs;
wherein the ith first feature point is matched with the ith second feature point;
step 42: when the number of matched pairs of first and second feature points is smaller than the pair-count threshold, determining the target feature point pair consisting of the first target feature point corresponding to the first feature points and the second target feature point corresponding to the second feature points as an outlier;
step 43: when the number of matched pairs of first and second feature points is not smaller than the pair-count threshold, for each matched first and second feature point, connecting the first feature point with the corresponding first target feature point to obtain a first vector of the first target feature point, and connecting the second feature point with the corresponding second target feature point to obtain a second vector of the second target feature point.
Step 5 then comprises: calculating the similarity between the first target feature point and the second target feature point by using the first vector and the second vector.
As an optional embodiment of the present invention, before step 5, the image registration method further includes:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include the direction difference $\alpha_i$ and the scale ratio $r_i$, with $\alpha_i = \mathrm{mod}(\theta_i - \theta_i', 2\pi)$ and $r_i = \sigma_i/\sigma_i'$, where $\theta_i$ denotes the principal direction of the first target feature point, $\theta_i'$ the principal direction of the second target feature point, $\sigma_i$ the scale of the ith first target feature point, $\sigma_i'$ the scale of the second target feature point, and $i = 1, 2, \ldots, n$ the serial number of the target feature point pair;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein, the transformation formula is as follows:
$$v_{i,j}' = r_i\, R(\alpha_i)\,(p_{i,j}' - p_i')$$
where $p_i'$ denotes the second target feature point, $p_{i,j}'$ denotes the second feature point corresponding to the second target feature point, $R(\alpha_i)$ denotes the rotation matrix through the direction difference $\alpha_i$, and the transformed second vector is $v_{i,j}'$.
As an alternative embodiment of the present invention, step 6 includes:
step 61: determining target characteristic point pairs with similarity greater than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
the method comprises the following steps: 62: and determining the target characteristic point pairs with the similarity not greater than the first similarity threshold but greater than the second similarity threshold as the roughly screened interior points, and adding the first characteristic point set.
As an optional implementation manner of the present invention, step 6 further includes:
step 63: determining the target feature point pairs whose similarity is smaller than the second similarity threshold as outliers.
As an alternative embodiment of the present invention, step 11 includes:
step 111: determining the maximum number of target feature point pairs of the current iteration;
step 112: if the number of target feature point pairs in the third feature point set of the current iteration is less than the maximum number of target feature point pairs, returning to step 7 for the next iteration.
As an optional embodiment of the present invention, after step 14, the image registration method further comprises:
step 15: calculating evaluation indexes for evaluating the result after registration, based on the inlier set and the transformation matrix.
Wherein the evaluation indexes comprise: the correct matching rate, the root mean square error, and the running time.
The invention adopts three evaluation indexes commonly used in the field of image registration:
(1) The Correct Matching Ratio (CMR) measures the quality of a matching result: the number of correctly matched point pairs is divided by the number of all matched point pairs,
$$\mathrm{CMR} = \frac{N_{corr}}{N_c}$$
where $N_{corr}$ denotes the number of correctly matched point pairs after matching and $N_c$ denotes the total number of matched point pairs. The larger the CMR value, the more correct matches the algorithm obtains and the more accurate the matching result.
(2) The Root Mean Square Error (RMSE) measures the quality of the transformation model:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big\| T(x_i, y_i) - (x_i', y_i') \big\|_2^2}$$
where $N$ denotes the number of all matching point pairs obtained after fine matching, $T$ denotes the estimated transformation, and $(x_i', y_i')$ and $(x_i, y_i)$ denote the coordinates of the feature points in the image to be registered and in the reference image, respectively. The distance adopted by the invention is the Euclidean distance; the smaller the root mean square error, the better the transformation model and the more accurate the registration.
(3) Running time: the shorter the running time, the higher the algorithm's efficiency; the longer the running time, the lower the efficiency and the worse the performance.
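The two numeric indexes can be sketched as follows, with `transform` standing in for the estimated transformation (an assumption of this sketch, as are the function names):

```python
# Sketch of the CMR and RMSE evaluation indexes under the formulas above.
import numpy as np

def cmr(n_correct, n_total):
    """Correct Matching Ratio: correct pairs / all detected pairs."""
    return n_correct / n_total

def rmse(transform, ref_pts, reg_pts):
    """RMSE between transformed reference points (N, 2) and registered points (N, 2)."""
    residual = transform(ref_pts) - reg_pts
    return np.sqrt(np.mean(np.sum(residual ** 2, axis=1)))
```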
The correct matching rate, root mean square error and registration speed of RANSAC, its improved variants, and the algorithm proposed by the invention are compared below.
The matching performance of the proposed image registration method is verified through simulation experiments. The experimental programs are written in MATLAB and run on a computer with an Intel Core i5-8250U CPU, 8.00 GB of memory, and a 64-bit Windows 10 operating system. The proposed algorithm is used to obtain the fine matching result of the feature points and the transformation model parameters; RANSAC, R-RANSAC and LO-RANSAC are selected for comparison. The confidence probability of the comparison algorithms is set to 0.99, their distance threshold to 1, and the maximum number of iterations to 10000; the two distance thresholds of the proposed algorithm are set to 1 and 0.5. Algorithm performance is compared in terms of correct matching rate, root mean square error and registration speed.
In order to verify the effectiveness of the improved RANSAC algorithm based on k-nearest-neighbor matching similarity, five groups of images are selected for the experiments. The reference image and the image to be registered differ in viewing angle, translation, scaling and so on, and contain many similar, repeated details. FIG. 3, FIG. 4 and FIG. 5 involve translation, viewing angle and zoom changes, while FIG. 6 and FIG. 7 mainly involve viewing angle changes. The experimental images contain a large number of similar textures, and the initial matching point set contains a large number of mismatched point pairs.
FIGS. 8-11 show the experimental results of the four registration algorithms on the Church image, where the obtained matching point pairs are connected by straight lines and the values in parentheses in the subtitle of each result graph indicate the number of correct matching point pairs and the number of detected matching point pairs.
According to the matching result graph, 51 pairs of matching points are obtained by using a RANSAC algorithm, wherein 32 pairs are correct matching; obtaining 57 pairs of matching points by utilizing an R-RANSAC algorithm, wherein 38 pairs are correct matches; obtaining 57 pairs of matching points by using an LO-RANSAC algorithm, wherein 39 pairs are correct matches; the algorithm of the present invention is used to obtain 64 pairs of matching points, of which 53 pairs are correct matches. The experimental result shows that the accuracy of the algorithm provided by the invention is highest.
Table 2 shows the average result obtained by repeating the experiment of the Church image for 500 times, and it can be seen that the algorithm provided by the present invention is improved in both accuracy and efficiency.
TABLE 2 Church image registration results

Algorithm    Correct match rate    RMSE      Time
RANSAC       62.95%                0.7535    2.9774
R-RANSAC     66.12%                0.7832    2.0017
LO-RANSAC    68.72%                0.7473    2.9890
Proposed     81.94%                0.5902    0.8186
Table 3 gives the correct matching rates of the 5 groups of experimental data under the four algorithms, Table 4 the root mean square errors, and Table 5 the running times; all of the following results are averages over 500 repeated experiments.
TABLE 3 correct match rates for the four algorithms
[Table 3 is rendered as an image in the original publication.]
TABLE 4 root mean square error of the four algorithms
[Table 4 is rendered as an image in the original publication.]
TABLE 5 time of the four algorithms
[Table 5 is rendered as an image in the original publication.]
The experimental results show that, compared with RANSAC, R-RANSAC and LO-RANSAC, the proposed algorithm performs best in both time and matching accuracy. For the Magdalen image, the proposed algorithm is slightly inferior to the LO-RANSAC algorithm, but taking the results on all data together, its mean root mean square error is the smallest; that is, the method of the invention improves both speed and accuracy.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples described in this specification can be combined and combined by those skilled in the art.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. An improved RANSAC-based image registration method, comprising:
step 1: acquiring feature points of an image pair to be matched and description information of each feature point;
the image pair to be matched comprises a first image and a second image, and the description information comprises a scale, a position and a direction;
step 2: determining feature points matched with the second image in the feature points of the first image by using an NNDR algorithm based on the description information to obtain initial feature point pairs;
step 3: for each target feature point pair of the initial feature point pairs, determining k neighboring first feature points of a first target feature point in the first image and k neighboring second feature points of a second target feature point in the second image by distance;
the target feature point pairs comprise first target feature points and second target feature points, wherein the ith first feature point is matched with the ith second feature point;
step 4: comparing the number of matched pairs of first and second feature points with a pair-count threshold, and determining the outliers among the target feature point pairs;
step 5: calculating the similarity between the first target feature point and the second target feature point;
step 6: determining a first characteristic point set and a second characteristic point set which are composed of the roughly screened interior points by using the size relationship between the similarity and a first similarity threshold value and a second similarity threshold value;
the number of the interior points in the first characteristic point set is greater than that in the second characteristic point set;
step 7: for the current iteration, randomly selecting a preset number of target feature point pairs from the second feature point set as matrix calculation feature point pairs;
step 8: calculating a transformation matrix of the image pair to be matched based on the position information of the matrix calculation feature point pairs;
step 9: calculating the Euclidean distance error of each target feature point pair in the first feature point set based on the transformation matrix;
step 10: adding target characteristic point pairs with Euclidean distance errors smaller than a first error threshold value in the first characteristic point set into a third characteristic point set;
step 11: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is less than the maximum number of the target characteristic point pairs, returning to the step 7 to perform the next iteration;
step 12: if the number of the target characteristic point pairs in the third characteristic point set of the current iteration is not less than the maximum number of the target characteristic point pairs, updating the maximum number of the target characteristic point pairs of the current iteration to be the number of the target characteristic point pairs in the third characteristic point set;
step 13: returning to step 7 for the next iteration until the maximum number of iterations is reached, then taking the third feature point set corresponding to the maximum number of target feature point pairs as the inlier set, thereby obtaining the inlier set and the transformation matrix corresponding to it;
step 14: and registering the image pair to be matched based on the transformation matrix.
2. The image registration method according to claim 1, wherein the step 3 comprises:
step 31: determining the distance between each initial characteristic point pair according to the position relation of each initial characteristic point pair;
step 32: for each target feature point pair of the initial feature point pair, k neighboring first feature points of the first target feature point are determined in the first image and k neighboring second feature points of the second target feature point are determined in the second image by distance.
3. The image registration method according to claim 1, wherein the step 4 comprises:
step 41: for each target feature point pair, determining the matched first feature points, the matched second feature points, and the number of matched pairs;
wherein the ith first feature point is matched with the ith second feature point;
step 42: when the number of matched pairs of first and second feature points is smaller than the pair-count threshold, determining the target feature point pair consisting of the first target feature point corresponding to the first feature points and the second target feature point corresponding to the second feature points as an outlier;
step 43: when the number of matched pairs of first and second feature points is not smaller than the pair-count threshold, for each matched first and second feature point, connecting the first feature point with the corresponding first target feature point to obtain a first vector of the first target feature point, and connecting the second feature point with the corresponding second target feature point to obtain a second vector of the second target feature point.
4. The image registration method according to claim 3, wherein the step 5 comprises:
and calculating the similarity between the first target characteristic point and the second target characteristic point by using the first vector and the second vector.
5. The image registration method of claim 1, wherein prior to the step 5, the image registration method further comprises:
step 51: determining transformation parameters in a transformation formula;
wherein the transformation parameters include the direction difference $\alpha_i$ and the scale ratio $r_i$, with $\alpha_i = \mathrm{mod}(\theta_i - \theta_i', 2\pi)$ and $r_i = \sigma_i/\sigma_i'$, where $\theta_i$ denotes the principal direction of the first target feature point, $\theta_i'$ the principal direction of the second target feature point, $\sigma_i$ the scale of the ith first target feature point, $\sigma_i'$ the scale of the second target feature point, and $i = 1, 2, \ldots, n$ the serial number of the target feature point pair;
step 52: transforming the second vector by using a transformation formula for determining transformation parameters to obtain a transformed second vector;
wherein the transformation formula is:
$$v_{i,j}' = r_i\, R(\alpha_i)\,(p_{i,j}' - p_i')$$
where $p_i'$ denotes the second target feature point, $p_{i,j}'$ denotes the second feature point corresponding to the second target feature point, $R(\alpha_i)$ denotes the rotation matrix through the direction difference $\alpha_i$, and the transformed second vector is $v_{i,j}'$.
6. The image registration method according to claim 1, wherein the step 6 comprises:
step 61: determining target characteristic point pairs with similarity greater than a first similarity threshold value as roughly screened interior points, and adding the roughly screened interior points into a first characteristic point set and a second characteristic point set;
the method comprises the following steps: 62: and determining the target characteristic point pairs with the similarity not greater than the first similarity threshold but greater than the second similarity threshold as the roughly screened interior points, and adding the first characteristic point set.
7. The image registration method according to claim 6, wherein the step 6 further comprises:
step 63: determining the target feature point pairs whose similarity is smaller than the second similarity threshold as outliers.
8. The image registration method according to claim 1, wherein the step 11 comprises:
step 111: determining the maximum number of target feature point pairs of the current iteration;
step 112: if the number of target feature point pairs in the third feature point set of the current iteration is less than the maximum number of target feature point pairs, returning to step 7 for the next iteration.
9. The image registration method of claim 1, wherein after the step 14, the image registration method further comprises:
step 15: calculating an evaluation index for evaluating the result after the registration, based on the inlier set and the transformation matrix.
10. The image registration method according to claim 9, wherein the evaluation index includes: the correct matching rate, the root mean square error, and the running time.
CN202110548022.1A 2021-05-19 2021-05-19 Improved RANSAC-based image registration method Active CN113470085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110548022.1A CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110548022.1A CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Publications (2)

Publication Number Publication Date
CN113470085A true CN113470085A (en) 2021-10-01
CN113470085B CN113470085B (en) 2023-02-10

Family

ID=77870996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110548022.1A Active CN113470085B (en) 2021-05-19 2021-05-19 Improved RANSAC-based image registration method

Country Status (1)

Country Link
CN (1) CN113470085B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872475A (en) * 2009-04-22 2010-10-27 中国科学院自动化研究所 Method for automatically registering scanned document images
CN102088569A (en) * 2010-10-13 2011-06-08 首都师范大学 Sequence image splicing method and system of low-altitude unmanned vehicle
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN108401565B (en) * 2015-05-28 2017-12-15 西北工业大学 Remote sensing image registration method based on improved KAZE features and Pseudo-RANSAC algorithms
CN108022228A (en) * 2016-10-31 2018-05-11 天津工业大学 Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu
CN106971404A (en) * 2017-03-20 2017-07-21 西北工业大学 A kind of robust SURF unmanned planes Color Remote Sensing Image method for registering
CN107103580A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 A kind of HDR three-dimensional image registration method
CN110097585A (en) * 2019-04-29 2019-08-06 中国水利水电科学研究院 A kind of SAR image matching method and system based on SIFT algorithm
CN110738695A (en) * 2019-10-12 2020-01-31 哈尔滨工业大学 image feature point mismatching and removing method based on local transformation model
CN111833249A (en) * 2020-06-30 2020-10-27 电子科技大学 UAV image registration and splicing method based on bidirectional point characteristics
CN111767960A (en) * 2020-07-02 2020-10-13 中国矿业大学 Image matching method and system applied to image three-dimensional reconstruction
CN111798453A (en) * 2020-07-06 2020-10-20 博康智能信息技术有限公司 Point cloud registration method and system for unmanned auxiliary positioning
CN112150520A (en) * 2020-08-18 2020-12-29 徐州华讯科技有限公司 Image registration method based on feature points

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZAHRA HOSSEIN-NEJAD et al.: "A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration", Biomedical Signal Processing and Control
ZHONG MING: "Research on image stitching technology based on accurate registration of feature points", China Master's Theses Full-text Database, Information Science and Technology
JIANG XIAOHUI: "Research on image stitching technology based on feature points", China Master's Theses Full-text Database, Information Science and Technology
LI JIAN et al.: "Geometric feature matching method based on deep learning", Computer Science

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934571A (en) * 2024-03-21 2024-04-26 广州市艾索技术有限公司 4K high-definition KVM seat management system
CN117934571B (en) * 2024-03-21 2024-06-07 广州市艾索技术有限公司 4K high-definition KVM seat management system

Also Published As

Publication number Publication date
CN113470085B (en) 2023-02-10

Similar Documents

Publication Publication Date Title
CN109887015B (en) Point cloud automatic registration method based on local curved surface feature histogram
CN103403704B (en) For the method and apparatus searching arest neighbors
CN103034982B (en) Image super-resolution rebuilding method based on variable focal length video sequence
CN109344845B (en) Feature matching method based on triple deep neural network structure
CN111145227B (en) Iterative integral registration method for space multi-view point cloud of underground tunnel
CN111429494B (en) Biological vision-based point cloud high-precision automatic registration method
CN110245683B (en) Residual error relation network construction method for less-sample target identification and application
CN107341824B (en) Comprehensive evaluation index generation method for image registration
CN112364881B (en) Advanced sampling consistency image matching method
CN108846845B (en) SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering
CN113470085B (en) Improved RANSAC-based image registration method
CN111127532B (en) Medical image deformation registration method and system based on deep learning characteristic optical flow
CN112418250B (en) Optimized matching method for complex 3D point cloud
CN111383281A (en) Video camera calibration method based on RBF neural network
CN113379788A (en) Target tracking stability method based on three-element network
CN113128518A (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN117253062A (en) Relay contact image characteristic quick matching method under any gesture
CN116630662A (en) Feature point mismatching eliminating method applied to visual SLAM
CN113221914B (en) Image feature point matching and mismatching elimination method based on Jacobsad distance
CN115331021A (en) Dynamic feature extraction and description method based on multilayer feature self-difference fusion
US11645827B2 (en) Detection method and device for assembly body multi-view change based on feature matching
CN112529021B (en) Aerial image matching method based on scale invariant feature transformation algorithm features
CN107220580B (en) Image identification random sampling consistent algorithm based on voting decision and least square method
CN113160284B (en) Guidance space-consistent photovoltaic image registration method based on local similar structure constraint
CN115019136B (en) Training method and detection method of target key point detection model for resisting boundary point drift

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant