CN112150520A - Image registration method based on feature points

Info

Publication number
CN112150520A
CN112150520A
Authority
CN
China
Prior art keywords
matching, points, image, model, point
Prior art date
Legal status
Pending
Application number
CN202010828690.5A
Other languages
Chinese (zh)
Inventor
顾军
贺广强
张恩明
张会柱
Current Assignee
Xuzhou Huaxun Technology Co ltd
Original Assignee
Xuzhou Huaxun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xuzhou Huaxun Technology Co ltd filed Critical Xuzhou Huaxun Technology Co ltd
Priority to CN202010828690.5A
Publication of CN112150520A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20048: Transform domain processing
    • G06T2207/20064: Wavelet transform [DWT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image registration method based on feature points, and relates to the technical field of image registration. The method comprises the following parts: a. preprocessing a reference image and an image to be registered with a wavelet transform threshold denoising method; b. extracting feature points with the SIFT algorithm; c. describing the feature points with a deformation dimension-reduction method; d. performing coarse matching of the feature points according to cosine similarity; e. eliminating partial mismatches with an improved RANSAC algorithm to obtain matching point pairs of higher precision. Preprocessing the images with wavelet transform threshold denoising before feature extraction removes part of the noise in the images and improves the purity of the feature points; describing the extracted feature points with the deformation dimension-reduction method reduces the dimensionality of the SIFT descriptor and shortens the running time of the algorithm; and optimizing the matched feature point pairs with a method combining coarse matching and fine matching improves the matching accuracy of the algorithm.

Description

Image registration method based on feature points
Technical Field
The invention relates to the technical field of image registration, in particular to an image registration method based on feature points.
Background
Image registration is the process of matching and superimposing two or more images acquired at different times, with different sensors (imaging devices), or under different conditions, and it is widely applied in computer vision and image processing. Image registration mainly comprises three parts: (1) feature extraction; (2) feature matching; (3) parameter estimation. Current image registration methods fall mainly into two classes: 1) region-based registration methods; 2) feature-based registration methods. Region-based methods mainly use the gray-level information of the images to establish a similarity measure between the two images, and then estimate the parameters of the transformation model that maximize (or minimize) the similarity measure, thereby registering the images. Feature-based methods extract features from the image, such as points, lines, and planes, and compare and analyze them against the image to be matched to obtain a matching result. Compared with region-based methods, feature-based methods are insensitive to illumination and rotation, involve less computation, and are more efficient. Since feature points are universal and easy to extract, the invention conducts its registration research from the feature-point perspective. Commonly used feature point extraction algorithms include the SUSAN, Harris, Moravec, and SIFT operators. The SIFT operator is not only invariant to scale, rotation, viewpoint, and illumination, but also maintains good matching performance under target motion, occlusion, noise, and similar factors.
On feature point registration, Myronenko et al. proposed the well-known coherent point drift (CPD) algorithm, which solves the feature point matching problem from the perspective of probability density estimation and handles erroneous and missing feature points well. The algorithm also exploits the fast Gauss transform and low-rank matrix approximation to reduce computational complexity and increase speed, but its registration accuracy is not ideal. To further improve the registration accuracy of remote sensing images, the flexibility of the non-subsampled contourlet transform (NSCT) in image decomposition has been combined with the effectiveness of the SIFT algorithm in feature description, but this approach is time-consuming. Liu et al. proposed the restricted spatial order constraints (RSOC) algorithm, which employs a robust graph matching technique to remove false matches, improving the matching accuracy at the cost of longer running time. Yang et al. proposed the global and local mixed distance-thin plate spline (GLMDTPS) registration algorithm, which mainly treats the difference of global and local structural features as a linear assignment problem; its matching precision still leaves room for improvement. Zhang Wenyu et al. proposed a CenSurE-star-based scene matching algorithm for unmanned aerial vehicles, addressing the redundant points, poor real-time performance, and weak robustness to geometric transformation of traditional scene matching algorithms based on local invariant features; it improves the matching precision but sacrifices time.
Disclosure of Invention
In order to overcome the disadvantages of the prior art, the present invention provides an image registration method based on feature points. The invention is realized by the following technical scheme. An image registration method based on feature points comprises the following specific steps:
(1) Preprocessing a reference image and an image to be registered by adopting a wavelet transform threshold denoising method;
(2) extracting feature points by using an SIFT algorithm;
(3) describing the feature points by adopting a deformation dimension reduction method;
(4) performing rough matching on the feature points according to the cosine similarity;
(5) and eliminating partial mismatching by adopting an improved RANSAC algorithm to obtain a matching point pair with higher matching precision.
Preferably, the specific steps of preprocessing the reference image and the image to be registered with the wavelet transform threshold denoising method are:
(1) Wavelet decomposition of the two-dimensional signal: compute the wavelet transform of the noisy signal, select a suitable wavelet basis and number of decomposition levels J, and decompose the image to obtain the corresponding wavelet decomposition coefficients.
(2) Threshold quantization of the high-frequency coefficients: for each level from 1 to J, select a suitable threshold and threshold coefficient, and threshold-quantize the high-frequency coefficients obtained by the decomposition to obtain estimated wavelet coefficients.
(3) Two-dimensional wavelet reconstruction: perform the inverse wavelet transform, reconstructing with the level-J low-frequency (scale) coefficients and the threshold-quantized high-frequency (wavelet) coefficients of each level to obtain the denoised signal.
Preferably, the SIFT algorithm extracts the feature points mainly as follows:
(1) constructing a Gaussian scale space
In order to find stable feature points in different scale spaces, the SIFT algorithm uses Gaussian difference kernels of different scales to generate a difference-of-Gaussian scale space (DoG):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
G(x,y,σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) (2)
The Gaussian convolution kernel is the only linear kernel that can realize a scale transformation, so the scale space of a two-dimensional image is defined as:
L(x,y,σ)=G(x,y,σ)*I(x,y) (3)
wherein (x, y) are the spatial coordinates of the image pixel, I(x, y) represents the pixel value of the original image, σ determines the smoothness of the image, G(x, y, σ) is a scale-variable Gaussian function, k is the scale space factor, and L(x, y, kσ) is the Gaussian-smoothed image at the corresponding scale;
(2) detect local extreme points
In the Gaussian difference scale space, each pixel is compared with its N neighbors at the same scale and the M corresponding points at the scales directly above and below, so that extreme points are detected in both scale space and the two-dimensional image space; if the point is the maximum or minimum among these neighbors in its own layer and the two adjacent layers of the Gaussian difference scale space, it is taken as a local extreme point, and in this way all extreme points across the different scale spaces are detected.
Preferably, the specific steps of describing the feature points with the deformation dimension-reduction method are:
(1) After the SIFT algorithm extracts the key points, a circular neighborhood of radius 6 pixels is used to count the gradient magnitudes and directions of the pixels, so as to determine a main direction and an auxiliary direction.
(2) The neighborhood is divided into four sectors and a ring, and each sub-region is taken as a seed point; the gradient magnitude and direction of each pixel in a sub-region are counted, and after Gaussian weighting the gradients are assigned to the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4.
(3) The four sectors are numbered 1, 2, 3, 4 in clockwise order and the ring is numbered 5, giving 5 seed points, each carrying 8 direction-vector components; finally a 5 × 8 = 40-dimensional feature descriptor is generated.
Preferably, the specific steps of coarse matching of the feature points are: assume the n-dimensional feature vectors of a preliminary matching point pair are A and B, where A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]; A^T denotes the transpose of A, and the cosine similarity of the preliminary matching point pair is denoted cos θ:
cos θ = (A^T·B)/(‖A‖·‖B‖) = Σ(i=1..n) AiBi / (√(Σ(i=1..n) Ai²)·√(Σ(i=1..n) Bi²)) (4)
The cosine value between the feature vectors of a matching point pair, i.e. the value of cos θ, is computed; when cos θ is greater than or equal to 0.9, the similarity of the pair in direction is considered high and the pair is kept as a correct match; otherwise the pair is a false match and is discarded.
Preferably, the specific steps of eliminating partial mismatching by using an improved RANSAC algorithm to obtain a matching point pair with higher matching precision are as follows:
(1) Estimating the mathematical model: in each iteration, in order to select the best model, different transformation model parameters are calculated according to equation (5), where q is the minimum number of matching points required to calculate the model parameters and p is the number of parameters in the transformation model. Equation (6) calculates the transformation model from the transformation parameters: three random matching points are selected to compute the transformation parameters, where a, b, c, d, e, f are the transformation parameters, (x1, y1) are the coordinates of a matching point in the reference image, and (x1', y1') are the corresponding coordinates in the image to be matched. The transformation model in the image to be matched is written HPe, where H is the transformation parameter matrix and Pe is a matching point in the reference image;
[Equation (5), given in the original only as an image: it relates q, the minimum number of matching points required to compute the model parameters, to p, the number of transformation model parameters; for the affine model used here, p = 6 and q = 3.]
x1' = a·x1 + b·y1 + c; y1' = d·x1 + e·y1 + f (6)
(2) Judging the other points: after the transformation model parameters are calculated, for each matching point in the reference image the distance dis(Pi, HPei) is computed in the image to be matched; the maximum value max and the minimum value min are recorded, and the mean value mean of the distances and the threshold to be compared are calculated, where Pi is the ith matching point in the image to be matched, HPei is the ith reference point mapped through the transformation model, and i = 1, ..., m indexes the matching points. The distance from every other point P in the model to the reference model HPe is calculated and compared with the threshold; if the distance is less than the threshold the point is considered an interior point that conforms to the model, otherwise it is an exterior point;
min(dis) = min{dis(Pi, HPei)} (i = 1, 2, ..., m) (7);
max(dis) = max{dis(Pi, HPei)} (i = 1, 2, ..., m) (8);
mean(dis) = (1/m)·Σ(i=1..m) dis(Pi, HPei) (9);
[Equation (10), given in the original only as an image: the comparison threshold, computed from min(dis), max(dis) and mean(dis).]
(3) Processing the model: the number of interior points in the model is compared with an expected value; if it is greater than the expected value, or the preset maximum number of iterations is reached, the model is re-estimated; if it is less than the expected value, the model is taken as a candidate model, samples are re-selected, and the above steps are repeated;
(4) After N iterations, the model is re-estimated with the sample having the largest number of interior points to obtain the final result.
The invention has the beneficial effects that:
(1) Aiming at the problem that the feature points extracted by existing image registration algorithms are impure and easily disturbed by noise and similar factors, the invention first preprocesses the two images to be matched with a wavelet transform threshold denoising method, eliminating part of the external interference and thereby improving the matching accuracy.
(2) Aiming at the reduced matching speed caused by the 128-dimensional descriptor of the SIFT algorithm, a deformation dimension-reduction method is proposed for describing the extracted feature points, shortening the running time of the algorithm and thereby improving the matching efficiency.
(3) Aiming at the mismatched point pairs present in the matching result, the invention matches and optimizes the feature points with a method combining coarse matching and fine matching, thereby improving the matching accuracy.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of detecting spatial extreme points of different scales;
FIG. 3(a) is a square descriptor diagram;
FIG. 3(b) is a circular descriptor diagram;
FIG. 4 is a diagram of the circular region divided into 2 × 2 + 1 sub-regions in place of the rectangular region.
Detailed Description
As shown in FIG. 1, the feature point-based image registration method mainly aims to improve image registration efficiency, reduce the time consumed by the registration process, and improve matching accuracy. The method mainly comprises the following steps: a. preprocessing the reference image and the image to be registered with a wavelet transform threshold denoising method; b. extracting feature points with the SIFT algorithm; c. describing the feature points with a deformation dimension-reduction method; d. performing coarse matching of the feature points according to cosine similarity; e. eliminating partial mismatches with an improved RANSAC algorithm to obtain matching point pairs of higher precision.
a. Pretreatment of
In the process of acquiring an image, a high-quality image cannot always be obtained owing to the limitations of hardware and environment, and image quality directly affects the efficiency of the algorithm and the final result, so the image must be preprocessed before its features are extracted and matched. The wavelet coefficients corresponding to the signal carry the important information of the signal: they are large in amplitude but few in number, whereas the wavelet coefficients corresponding to noise are uniformly distributed, many in number, and small in amplitude. The invention therefore preprocesses the image with a wavelet transform threshold denoising method, whose main steps are as follows (a code sketch follows the list):
(1) Wavelet decomposition of the two-dimensional signal: compute the wavelet transform of the noisy signal, select a suitable wavelet basis and number of decomposition levels J, and decompose the image to obtain the corresponding wavelet decomposition coefficients.
(2) Threshold quantization of the high-frequency coefficients: for each level from 1 to J, select a suitable threshold and threshold coefficient, and threshold-quantize the high-frequency coefficients obtained by the decomposition to obtain estimated wavelet coefficients.
(3) Two-dimensional wavelet reconstruction: perform the inverse wavelet transform, reconstructing with the level-J low-frequency (scale) coefficients and the threshold-quantized high-frequency (wavelet) coefficients of each level to obtain the denoised signal.
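A minimal Python sketch of these three steps using the PyWavelets library follows. The wavelet basis (sym4), the number of levels J = 2, and the soft universal-threshold rule with a median-based noise estimate are illustrative assumptions; the description above does not fix these choices.

```python
# Sketch of the three-step wavelet threshold denoising described above.
# Assumptions: sym4 basis, J = 2 levels, soft universal threshold with a
# median-based noise estimate; the description leaves these choices open.
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="sym4", level=2):
    # (1) Two-dimensional wavelet decomposition: approximation coefficients
    # plus (horizontal, vertical, diagonal) detail coefficients per level.
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]

    # (2) Threshold quantization of the high-frequency (detail) coefficients.
    # Noise scale estimated from the finest diagonal subband (median / 0.6745).
    sigma = np.median(np.abs(details[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl)
               for lvl in details]

    # (3) Two-dimensional wavelet reconstruction from the level-J scale
    # coefficients and the thresholded wavelet coefficients of each level.
    return pywt.waverec2([approx] + details, wavelet)
```

A per-level threshold, as the description allows, would simply replace the single thresh with one value per element of details.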
b. Extracting feature points
The SIFT operator is not only invariant to scale, rotation, viewpoint, and illumination, but also maintains good matching performance under target motion, occlusion, noise, and similar factors. The invention therefore adopts the SIFT algorithm to extract feature points (a usage sketch follows this subsection); the main steps are:
(1) constructing a Gaussian scale space
In order to find stable feature points in different scale spaces, the SIFT algorithm uses Gaussian difference kernels of different scales to generate a difference-of-Gaussian scale space (DoG):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
G(x,y,σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) (2)
The Gaussian convolution kernel is the only linear kernel that can realize a scale transformation, so the scale space of a two-dimensional image is defined as:
L(x,y,σ)=G(x,y,σ)*I(x,y) (3)
where (x, y) are the spatial coordinates of the image pixel, I(x, y) represents the pixel value of the original image, and σ determines the smoothness of the image: a large σ corresponds to a coarse scale (low resolution), and vice versa. G(x, y, σ) is a scale-variable Gaussian function.
(2) Detect local extreme points
In the Gaussian difference scale space, each pixel is compared with 26 points: its 8 neighbors at the same scale and the 9 × 2 corresponding points at the scales directly above and below, which guarantees that extreme points are detected in both scale space and the two-dimensional image space. If the point is the maximum or minimum among the 26 neighbors of its own layer and the two adjacent layers of the Gaussian difference scale space, it is taken as a local extreme point; in this way all extreme points across the different scale spaces are detected.
Because the DoG values are sensitive to noise and edges, curve fitting of the DoG function in scale space is required to improve the stability of the key points, and unstable low-contrast extreme points are removed. Furthermore, since a peak of the DoG function has different principal curvatures across the edge and along the edge, edge responses must also be excluded.
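The DoG pyramid construction, 26-neighbor extremum detection, low-contrast rejection, and edge-response rejection above are the standard SIFT detection pipeline, so a sketch can simply call OpenCV's implementation; the threshold values shown are OpenCV's illustrative defaults, not values fixed by the invention.

```python
# Sketch: OpenCV's SIFT performs the DoG construction, 26-neighbor extremum
# detection and the contrast/edge rejection described above. The thresholds
# are illustrative defaults; the invention does not fix their values.
import cv2

def extract_sift_keypoints(gray_image):
    sift = cv2.SIFT_create(
        contrastThreshold=0.04,  # removes unstable low-contrast extrema
        edgeThreshold=10,        # removes edge responses via the principal-curvature ratio
    )
    # Keypoints only; the descriptor is computed by the deformed
    # dimension-reduction method of the next subsection.
    return sift.detect(gray_image, None)
```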
c. Describing feature points
In order to maintain rotation invariance, the SIFT algorithm must rotate the square region of the image to be matched to the direction of the key point, so that its main direction is parallel to that of the reference image. As a result, not all pixels in the two regions overlap; obviously the pixels near the corners of the square fall outside the overlap region. If square regions of the same size are used, the pixels used to generate the feature descriptors do not overlap completely, which produces larger errors. To avoid these errors, a square region larger than that of the reference image must be rotated, as shown in FIG. 3(a); this operation, however, increases the number of pixels to be rotated and lengthens the running time. Moreover, the key point direction is quantized in increments of 10°, and the resulting quantization error cannot be eliminated. If the square region is changed to a circle, the pixels in the rotated region remain the same even when the main direction carries an error, and the circle itself has better rotation invariance, as shown in FIG. 3(b). It is therefore proposed herein to use a circular region instead of a square one.
On this basis, the dimensionality of the feature vectors is reduced and the descriptor generation regions are re-partitioned: the circular region is divided into 2 × 2 + 1 sub-regions in place of the rectangular region (as shown in FIG. 4), shortening the running time of the algorithm and improving its efficiency. The main steps are as follows (a sketch of the descriptor computation follows the list):
(1) After the SIFT algorithm extracts the key points, a circular neighborhood of radius 6 pixels is used to count the gradient magnitudes and directions of the pixels, so as to determine a main direction and an auxiliary direction.
(2) The neighborhood is divided into four sectors and a ring, and each sub-region is taken as a seed point. The gradient magnitude and direction of each pixel in a sub-region are counted, and after Gaussian weighting the gradients are assigned to the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4.
(3) The four sectors are numbered 1, 2, 3, 4 in clockwise order and the ring is numbered 5, giving 5 seed points, each carrying 8 direction-vector components. Finally a 5 × 8 = 40-dimensional feature descriptor is generated.
(4) To reduce the effect of illumination variations, the feature descriptor generated above is normalized.
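A hedged numpy sketch of steps (1) to (4) follows. The description does not fix how the four sectors and the ring partition the circle, the inner split radius, or the Gaussian weighting scale, so the convention used here (2 × 2 quadrants on the inner disk of radius 3, the annulus from 3 to 6 as the ring, σ = 3) is an assumption, as are the helper inputs mag and ori (precomputed gradient magnitude and orientation maps).

```python
# Hedged sketch of the 40-dimensional deformed descriptor: a circular
# neighborhood of radius 6 split into 2x2 quadrant sectors plus an outer
# ring (5 seed regions), each holding an 8-bin orientation histogram.
# The inner radius, Gaussian sigma and region convention are assumptions.
import numpy as np

def circular_descriptor(mag, ori, cx, cy, main_dir,
                        radius=6, r_inner=3.0, sigma=3.0):
    """mag/ori: gradient magnitude/orientation maps (radians); (cx, cy) is
    the key point, assumed at least `radius` pixels from the image border."""
    hist = np.zeros((5, 8))  # 4 sectors + 1 ring, 8 direction bins each
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r = np.hypot(dx, dy)
            if r > radius:
                continue  # keep only the circular neighborhood
            # Pixel angle relative to the main direction: rotation invariance.
            theta = (np.arctan2(dy, dx) - main_dir) % (2 * np.pi)
            if r <= r_inner:  # inner disk -> quadrant sectors 0..3
                region = min(int(theta // (np.pi / 2)), 3)
            else:             # outer annulus -> the ring, region 4
                region = 4
            # Gradient direction relative to the main direction, pi/4 bins.
            rel = (ori[cy + dy, cx + dx] - main_dir) % (2 * np.pi)
            obin = int(rel // (np.pi / 4)) % 8
            w = np.exp(-r * r / (2 * sigma * sigma))  # Gaussian weighting
            hist[region, obin] += w * mag[cy + dy, cx + dx]
    v = hist.ravel()  # 5 x 8 = 40-dimensional descriptor
    return v / (np.linalg.norm(v) + 1e-12)  # step (4): normalization
```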
d. Coarse matching of feature points
Cosine similarity, i.e. cosine distance, measures the similarity of two vectors by the cosine of the angle between them, and thereby the difference between two individuals. Compared with the Euclidean distance, it puts more weight on the difference in direction between the two vectors. The invention therefore uses cosine similarity for coarse matching of the feature points. Assume the n-dimensional feature vectors of a preliminary matching point pair are A and B, where A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]; A^T denotes the transpose of A, and the cosine similarity of the preliminary matching point pair is denoted cos θ:
cos θ = (A^T·B)/(‖A‖·‖B‖) = Σ(i=1..n) AiBi / (√(Σ(i=1..n) Ai²)·√(Σ(i=1..n) Bi²)) (4)
The cosine value between the feature vectors of a matching point pair, i.e. the value of cos θ, is computed; when cos θ is greater than or equal to 0.9, the similarity of the pair in direction is considered high and the pair is kept as a correct match; otherwise the pair is a false match and is discarded.
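A direct transcription of this coarse test into numpy, with the 0.9 threshold taken from the text:

```python
# Coarse match test from equation (4): keep a preliminary pair when the
# cosine of the angle between its descriptors is at least 0.9.
import numpy as np

def coarse_match(A, B, threshold=0.9):
    cos_theta = float(A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
    return cos_theta >= threshold  # True -> retained as a correct pair
```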
e. Fine matching of feature points
RANSAC (random sample consensus) is a robust parameter estimation method. Image registration generally produces some mismatches, and the RANSAC algorithm can separate the correct and incorrect subsets of the initial matches, removing part of the mismatched point pairs and improving the matching accuracy. The invention proposes an improvement aimed at the threshold setting problem of the RANSAC algorithm; the main steps are as follows (a code sketch follows the list):
(1) A mathematical model is estimated. To select the best model in a given iteration, different transformation model parameters are computed in each iteration according to equation (5), where q is the minimum number of matching points needed to compute the model parameters and p is the number of parameters in the transformation model. The transformation model is computed from the transformation parameters (as shown in equation (6)): the invention selects three random matching points (under affine transformation) to compute the transformation parameters, where a, b, c, d, e, f are the transformation parameters, (x1, y1) are the coordinates of a matching point in the reference image, and (x1', y1') are the corresponding coordinates in the image to be matched. The transformation model in the image to be matched is written HPe, where H is the transformation parameter matrix and Pe is a matching point in the reference image.
[Equation (5), given in the original only as an image: it relates q, the minimum number of matching points required to compute the model parameters, to p, the number of transformation model parameters; for the affine model used here, p = 6 and q = 3.]
x1' = a·x1 + b·y1 + c; y1' = d·x1 + e·y1 + f (6)
(2) The other points are judged. After the transformation model parameters are calculated, for each matching point in the reference image the distance dis(Pi, HPei) is computed in the image to be matched; the maximum value max and the minimum value min are recorded, and the mean value mean of the distances and the threshold to be compared are calculated. Here Pi is the ith matching point in the image to be matched, HPei is the ith reference point mapped through the transformation model, and i = 1, ..., m indexes the matching points. The distance from every other point P in the model to the reference model HPe is calculated and compared with the threshold; if the distance is less than the threshold, the point is considered an interior point that conforms to the model, otherwise it is an exterior point.
min(dis) = min{dis(Pi, HPei)} (i = 1, 2, ..., m) (7)
max(dis) = max{dis(Pi, HPei)} (i = 1, 2, ..., m) (8)
mean(dis) = (1/m)·Σ(i=1..m) dis(Pi, HPei) (9)
[Equation (10), given in the original only as an image: the comparison threshold, computed from min(dis), max(dis) and mean(dis).]
(3) The model is processed. The number of interior points in the model is compared with an expected value; if it is greater than the expected value, or the preset maximum number of iterations is reached, the model is re-estimated. If it is less than the expected value, the model is taken as a candidate model, the samples are re-selected, and the above steps are repeated.
(4) After N iterations, the model is re-estimated with the sample having the largest number of interior points to obtain the final result.
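A hedged numpy sketch of this loop follows. The affine estimation and the residual bookkeeping follow steps (1) to (4) above; since equation (10) is reproduced in the original only as an image, the midpoint of min(dis) and max(dis) used as the adaptive threshold is an assumption, as is the fixed iteration count.

```python
# Hedged sketch of the improved RANSAC of steps (1)-(4). The adaptive
# threshold (midpoint of min and max residuals) stands in for equation (10),
# whose exact form is given in the original only as an image.
import numpy as np

def estimate_affine(src, dst):
    # Least-squares solve of equation (6): [x y 1] @ P = [x' y'], where the
    # 3x2 matrix P holds the six parameters a..f.
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P

def improved_ransac(src, dst, n_iters=1000, seed=0):
    """src/dst: (m, 2) arrays of matched coordinates (reference / to-be-matched)."""
    rng = np.random.default_rng(seed)
    m = len(src)
    best_model, best_inliers = None, np.empty(0, dtype=int)
    ones = np.ones((m, 1))
    for _ in range(n_iters):
        # Step (1): q = 3 random matches determine the p = 6 affine parameters.
        sample = rng.choice(m, size=3, replace=False)
        model = estimate_affine(src[sample], dst[sample])
        # Step (2): residual distances dis(Pi, HPei) for every match.
        dis = np.linalg.norm(np.hstack([src, ones]) @ model - dst, axis=1)
        threshold = 0.5 * (dis.min() + dis.max())  # assumed form of eq. (10)
        inliers = np.flatnonzero(dis < threshold)  # interior points
        # Step (3): keep the model with the most interior points so far.
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # Step (4): re-estimate from the largest interior point set.
    if len(best_inliers) >= 3:
        return estimate_affine(src[best_inliers], dst[best_inliers]), best_inliers
    return best_model, best_inliers
```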

Claims (6)

1. An image registration method based on feature points, characterized in that the method comprises the following specific steps:
(1) Preprocessing a reference image and an image to be registered by adopting a wavelet transform threshold denoising method;
(2) extracting feature points by using an SIFT algorithm;
(3) describing the feature points by adopting a deformation dimension reduction method;
(4) performing rough matching on the feature points according to the cosine similarity;
(5) and eliminating partial mismatching by adopting an improved RANSAC algorithm to obtain a matching point pair with higher matching precision.
2. The feature point-based image registration method according to claim 1, wherein the specific steps of preprocessing the reference image and the image to be registered with the wavelet transform threshold denoising method are:
(1) Wavelet decomposition of the two-dimensional signal: compute the wavelet transform of the noisy signal, select a suitable wavelet basis and number of decomposition levels J, and decompose the image to obtain the corresponding wavelet decomposition coefficients.
(2) Threshold quantization of the high-frequency coefficients: for each level from 1 to J, select a suitable threshold and threshold coefficient, and threshold-quantize the high-frequency coefficients obtained by the decomposition to obtain estimated wavelet coefficients.
(3) Two-dimensional wavelet reconstruction: perform the inverse wavelet transform, reconstructing with the level-J low-frequency (scale) coefficients and the threshold-quantized high-frequency (wavelet) coefficients of each level to obtain the denoised signal.
3. The feature point-based image registration method according to claim 2, wherein:
the SIFT algorithm mainly comprises the following steps of:
(1) constructing a Gaussian scale space
In order to find stable feature points in different scale spaces, the SIFT algorithm uses Gaussian difference kernels of different scales to generate a difference-of-Gaussian scale space (DoG):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
G(x,y,σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) (2)
The Gaussian convolution kernel is the only linear kernel that can realize a scale transformation, so the scale space of a two-dimensional image is defined as:
L(x,y,σ)=G(x,y,σ)*I(x,y) (3)
wherein (x, y) are the spatial coordinates of the image pixel, I(x, y) represents the pixel value of the original image, σ determines the smoothness of the image, G(x, y, σ) is a scale-variable Gaussian function, k is the scale space factor, and L(x, y, kσ) is the Gaussian-smoothed image at the corresponding scale;
(2) detect local extreme points
In the Gaussian difference scale space, each pixel is compared with its N neighbors at the same scale and the M corresponding points at the scales directly above and below, so that extreme points are detected in both scale space and the two-dimensional image space; if the point is the maximum or minimum among these neighbors in its own layer and the two adjacent layers of the Gaussian difference scale space, it is taken as a local extreme point, and in this way all extreme points across the different scale spaces are detected.
4. The feature point-based image registration method according to claim 3, wherein the specific steps of describing the feature points with the deformation dimension-reduction method are:
(1) After the SIFT algorithm extracts the key points, a circular neighborhood of radius 6 pixels is used to count the gradient magnitudes and directions of the pixels, so as to determine a main direction and an auxiliary direction.
(2) The neighborhood is divided into four sectors and a ring, and each sub-region is taken as a seed point; the gradient magnitude and direction of each pixel in a sub-region are counted, and after Gaussian weighting the gradients are assigned to the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4.
(3) The four sectors are numbered 1, 2, 3, 4 in clockwise order and the ring is numbered 5, giving 5 seed points, each carrying 8 direction-vector components; finally a 5 × 8 = 40-dimensional feature descriptor is generated.
5. The feature point-based image registration method according to claim 4, wherein the specific steps of coarse matching of the feature points are: assume the n-dimensional feature vectors of a preliminary matching point pair are A and B, where A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]; A^T denotes the transpose of A, and the cosine similarity of the preliminary matching point pair is denoted cos θ:
cos θ = (A^T·B)/(‖A‖·‖B‖) = Σ(i=1..n) AiBi / (√(Σ(i=1..n) Ai²)·√(Σ(i=1..n) Bi²)) (4)
The cosine value between the feature vectors of a matching point pair, i.e. the value of cos θ, is computed; when cos θ is greater than or equal to 0.9, the similarity of the pair in direction is considered high and the pair is kept as a correct match; otherwise the pair is a false match and is discarded.
6. The feature point-based image registration method according to claim 5, wherein the specific steps of eliminating partial mismatches with the improved RANSAC algorithm to obtain matching point pairs of higher precision are:
(1) Estimating the mathematical model: in each iteration, in order to select the best model, different transformation model parameters are calculated according to equation (5), where q is the minimum number of matching points required to calculate the model parameters and p is the number of parameters in the transformation model. Equation (6) calculates the transformation model from the transformation parameters: three random matching points are selected to compute the transformation parameters, where a, b, c, d, e, f are the transformation parameters, (x1, y1) are the coordinates of a matching point in the reference image, and (x1', y1') are the corresponding coordinates in the image to be matched. The transformation model in the image to be matched is written HPe, where H is the transformation parameter matrix and Pe is a matching point in the reference image;
[Equation (5), given in the original only as an image: it relates q, the minimum number of matching points required to compute the model parameters, to p, the number of transformation model parameters; for the affine model used here, p = 6 and q = 3.]
x1' = a·x1 + b·y1 + c; y1' = d·x1 + e·y1 + f (6)
(2) Judging the other points: after the transformation model parameters are calculated, for each matching point in the reference image the distance dis(Pi, HPei) is computed in the image to be matched; the maximum value max and the minimum value min are recorded, and the mean value mean of the distances and the threshold to be compared are calculated, where Pi is the ith matching point in the image to be matched, HPei is the ith reference point mapped through the transformation model, and i = 1, ..., m indexes the matching points. The distance from every other point P in the model to the reference model HPe is calculated and compared with the threshold; if the distance is less than the threshold the point is considered an interior point that conforms to the model, otherwise it is an exterior point;
min(dis) = min{dis(Pi, HPei)} (i = 1, 2, ..., m) (7);
max(dis) = max{dis(Pi, HPei)} (i = 1, 2, ..., m) (8);
mean(dis) = (1/m)·Σ(i=1..m) dis(Pi, HPei) (9);
[Equation (10), given in the original only as an image: the comparison threshold, computed from min(dis), max(dis) and mean(dis).]
(3) Processing the model: the number of interior points in the model is compared with an expected value; if it is greater than the expected value, or the preset maximum number of iterations is reached, the model is re-estimated; if it is less than the expected value, the model is taken as a candidate model, samples are re-selected, and the above steps are repeated;
(4) After N iterations, the model is re-estimated with the sample having the largest number of interior points to obtain the final result.
CN202010828690.5A 2020-08-18 2020-08-18 Image registration method based on feature points Pending CN112150520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010828690.5A CN112150520A (en) 2020-08-18 2020-08-18 Image registration method based on feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010828690.5A CN112150520A (en) 2020-08-18 2020-08-18 Image registration method based on feature points

Publications (1)

Publication Number Publication Date
CN112150520A 2020-12-29

Family

ID=73888857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010828690.5A Pending CN112150520A (en) 2020-08-18 2020-08-18 Image registration method based on feature points

Country Status (1)

Country Link
CN (1) CN112150520A (en)


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033578A (en) * 2021-03-30 2021-06-25 上海星定方信息科技有限公司 Image calibration method, system, terminal and medium based on multi-scale feature matching
CN113221914B (en) * 2021-04-14 2022-10-11 河海大学 Image feature point matching and mismatching elimination method based on Jacobsad distance
CN113221914A (en) * 2021-04-14 2021-08-06 河海大学 Image feature point matching and mismatching elimination method based on Jacobsad distance
CN113470085A (en) * 2021-05-19 2021-10-01 西安电子科技大学 Image registration method based on improved RANSAC
CN113470085B (en) * 2021-05-19 2023-02-10 西安电子科技大学 Improved RANSAC-based image registration method
CN113516184A (en) * 2021-07-09 2021-10-19 北京航空航天大学 Mismatching elimination method and system for image feature point matching
CN113592930A (en) * 2021-08-04 2021-11-02 桂林电子科技大学 Spatial heterodyne interference image registration preprocessing method
CN113671499A (en) * 2021-08-06 2021-11-19 南京航空航天大学 SAR and optical image matching method based on extraction of echo matrix map
CN113723428A (en) * 2021-08-19 2021-11-30 珠海格力节能环保制冷技术研究中心有限公司 Image feature matching method, device and system and PCB visual detection equipment
CN114972453A (en) * 2022-04-12 2022-08-30 南京雷电信息技术有限公司 Improved SAR image region registration method based on LSD and template matching
CN114972453B (en) * 2022-04-12 2023-05-05 南京雷电信息技术有限公司 Improved SAR image region registration method based on LSD and template matching
CN115797381A (en) * 2022-10-20 2023-03-14 河南理工大学 Heterogeneous remote sensing image registration method based on geographic blocking and hierarchical feature matching
CN115797381B (en) * 2022-10-20 2024-04-12 河南理工大学 Heterogeneous remote sensing image registration method based on geographic segmentation and hierarchical feature matching
CN116844142A (en) * 2023-08-28 2023-10-03 四川华腾公路试验检测有限责任公司 Bridge foundation scouring identification and assessment method
CN116844142B (en) * 2023-08-28 2023-11-21 四川华腾公路试验检测有限责任公司 Bridge foundation scouring identification and assessment method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination