CN104700400A - High-precision image matching method based on feature point training - Google Patents
- Publication number: CN104700400A (application CN201510046829.XA)
- Authority: CN (China)
- Prior art keywords: point, image, feature point, feature, registration
- Prior art date: 2015-01-30
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention relates to a high-precision image matching method based on feature point training. Before image registration, feature point matching is performed between the standard image and images to be registered captured at different rotation angles, and the feature points that match accurately at every angle are extracted as the reference feature points for the actual detection process. The feature points selected in this way make it possible to base the transformation model on points with higher localization accuracy, improving the registration precision, while also reducing the computational load of feature point matching and speeding up the algorithm.
Description
Technical Field
The invention relates to feature point matching methods, and in particular to a high-precision image matching method based on feature point training.
Background Art
Image registration is a fundamental problem in machine vision and has long been an active and challenging research topic. It refers to finding correspondences between two or more images of the same scene taken at different times, from different viewpoints, or by different sensors. Registration methods fall mainly into two categories: intensity-based (grayscale) methods and feature-based methods.
Intensity-based registration, also known as correlation registration, registers images with a two-dimensional sliding template in the spatial domain; different algorithms differ mainly in the correlation criterion they adopt. Feature-based registration first extracts features from the original images and then establishes correspondences between the feature points of the two images, offering high robustness; the SURF algorithm is a representative example. Image registration based on SURF is now widely applied, for example to medical and remote sensing images. The SURF algorithm first extracts and matches the feature points of the images, then uses the matched pairs to compute the parameters of the registration transformation model, so that the image to be registered is transformed accurately onto the standard image, achieving registration. Feature point matching is the key to image registration, and its accuracy directly determines the accuracy of the subsequent registration. Using the feature point neighborhood information contained in the SURF descriptors, the K-nearest-neighbor method finds the two best potential matches for each feature point, and the best matching pairs are selected by the ratio between the best and second-best match distances.
In actual image registration, however, the image to be registered is rotated by varying angles relative to the standard image. Although SURF feature point detection is rotation invariant, the localization accuracy of the extracted feature points still varies as the rotation angle changes. To improve registration accuracy, the key problem is how to extract feature points that are unaffected by changes in image angle.
Summary of the Invention
The purpose of the present invention is to provide a high-precision image matching method based on feature point training, which selects feature points unaffected by changes in image rotation angle, thereby improving the accuracy of image registration.
The present invention solves this technical problem through the following technical solution:
A high-precision image matching method based on feature point training, characterized in that: before image registration, feature point matching is performed between the standard image and images to be registered at different angles, and feature points with high localization accuracy are extracted as the reference feature points for the actual detection process.
Moreover, the reference feature points are selected as follows: the SURF feature point extraction algorithm is used to extract the feature points of two images at different rotation angles, each set is matched against the feature points of the standard image to obtain the matched feature point sets in the standard image, and the X and Y coordinates of the matched points are recorded; the two successfully matched sets are then analyzed, and the feature points whose X and Y coordinates are identical in both sets are selected as the reference feature points.
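By way of illustration only, this training step can be sketched in Python as follows. The sketch assumes an OpenCV build that ships the non-free SURF module (cv2.xfeatures2d); the Hessian threshold is an illustrative choice, while the 0.65 distance ratio and the three-decimal coordinate comparison follow the values reported later in the embodiment.

```python
# Hedged sketch of the baseline-point training step. Assumes an OpenCV
# build with the non-free SURF module (cv2.xfeatures2d); the Hessian
# threshold and the 3-decimal rounding are illustrative choices.
import cv2

def matched_standard_points(standard_img, rotated_img, ratio=0.65):
    """Standard-image SURF keypoint coordinates that survive the
    distance-ratio test against one rotated training image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_s, des_s = surf.detectAndCompute(standard_img, None)
    kp_r, des_r = surf.detectAndCompute(rotated_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pts = []
    for pair in matcher.knnMatch(des_s, des_r, k=2):
        if len(pair) < 2:
            continue
        best, second = pair
        if best.distance <= ratio * second.distance:  # distance-ratio test
            pts.append(kp_s[best.queryIdx].pt)
    return pts

def baseline_points(standard_img, rot_ccw_img, rot_cw_img):
    """Keep standard-image points whose X and Y coordinates coincide in
    both rotated-image match sets (compared to 3 decimal places, matching
    the precision reported in the embodiment)."""
    key = lambda p: (round(p[0], 3), round(p[1], 3))
    a = matched_standard_points(standard_img, rot_ccw_img)
    b = matched_standard_points(standard_img, rot_cw_img)
    common = {key(p) for p in a} & {key(p) for p in b}
    return [p for p in a if key(p) in common]
```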
Moreover, the rotation angles lie between 2° and 10°.
Moreover, the image registration uses the SURF feature point detection algorithm, with the reference feature points serving as the basis for computing the parameters of the image registration transformation model, so that the image to be registered is transformed accurately onto the standard image.
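A minimal sketch of this registration step follows. The patent does not fix the transformation model family, so the similarity model (rotation, translation, uniform scale) and the RANSAC estimator used here are assumptions.

```python
# Illustrative registration step: the baseline correspondences drive the
# transformation-model fit. The similarity model and RANSAC are assumed;
# the text only specifies that the baseline points feed this computation.
import cv2
import numpy as np

def register_to_standard(test_img, pts_test, pts_baseline, out_size):
    """pts_test / pts_baseline: corresponding Nx2 point arrays
    (test image -> standard image); out_size: (width, height)."""
    src = np.asarray(pts_test, dtype=np.float32)
    dst = np.asarray(pts_baseline, dtype=np.float32)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(test_img, M, out_size)
```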
The advantages and positive effects of the present invention are:
1. By verifying the matching performance of feature points, the method selects points that match accurately under all tested rotation conditions, with coordinates that agree to three decimal places. These feature points describe the image features strongly, localize precisely, and adapt better to feature point matching at different rotation angles.
2. The feature points selected by the present invention make it possible to base the transformation model on points with higher localization accuracy, improving registration precision; at the same time they reduce the computational load of feature point matching and speed up the algorithm.
Brief Description of the Drawings
Figure 1 shows the feature points of the standard image matched under a 2° counterclockwise rotation;
Figure 2 shows the feature points of the standard image matched under a 2° clockwise rotation;
Figure 3 shows the distribution of the matched feature points in the standard image under the different rotation angles;
Figure 4 shows the distribution of the coordinate differences computed for the coincident and non-coincident feature points;
Figure 5 shows the 9×9 box filter templates;
Figure 6 shows the selection of the dominant orientation of a feature point;
Figure 7 shows the composition of the feature descriptor.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The following embodiments are illustrative only, not restrictive, and do not limit the protection scope of the present invention.
A high-precision image matching method based on feature point training. To study how much the localization accuracy of feature points is affected by different transformations, this embodiment uses a test image rotated 2° counterclockwise and a test image rotated 2° clockwise.
First, the SURF feature point extraction algorithm extracts the feature points of the two differently rotated images, and each set is matched against the feature points of the standard image, yielding the sets of matched feature points in the standard image. As shown in Figures 1 and 2, differently shaped markers indicate the standard-image feature points matched at each rotation angle.
Further analysis of the two successfully matched feature point sets shows that some feature points have identical X and Y coordinates in both sets (the points in Figure 3 where the two marker types coincide exactly). These points, with identical X and Y coordinates, are the feature points that match accurately at both rotation angles.
Table 1 lists the coordinates of some of the coincident points.

Table 1. Coordinates of coincident points
The experimental results show that the sets of feature points successfully matched under the different rotation conditions differ considerably. At the same time, the two sets of matched pairs partially intersect: some feature points match accurately under both rotation conditions, with coordinates agreeing to three decimal places. This result indicates that these feature points describe the image features strongly, localize precisely, and adapt better to feature point matching at different rotation angles.
To further verify the feature-description capability of these points, a rotation-adaptability experiment was carried out. The coincident and non-coincident feature points shown in Figure 3 (all of them feature points of the standard image) were used separately as matching reference points and matched against the feature points extracted from a test image rotated 10° clockwise. The successfully matched feature points were then processed as follows (a code sketch follows the two steps):
(1) According to image rotation theory, compute the theoretical coordinates of the matched feature points in each of the two reference sets after the image is rotated by 10°;

(2) Compute the differences in the X and Y directions between the theoretical coordinates obtained in (1) and the coordinates of the corresponding matched feature points in the 10°-rotated test image, giving the localization deviations of these feature points. The results are shown in Figure 4.
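Steps (1) and (2) can be written out directly. The sketch below assumes that the theoretical coordinates are obtained by rotating points about the image centre with a standard rotation matrix; the centre of rotation and the angle-sign convention are assumptions, since the text does not state them.

```python
# Sketch of steps (1) and (2): theoretical coordinates after a 10-degree
# rotation, and the X/Y deviations from the measured matches (cf. Fig. 4).
# Centre-of-rotation and angle-sign conventions are assumptions.
import numpy as np

def rotate_points(pts, angle_deg, center):
    """Coordinates of pts after rotating the image by angle_deg about center."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return (np.asarray(pts, float) - center) @ R.T + center

def localization_deviation(measured_pts, reference_pts, angle_deg, center):
    """X/Y differences between measured matches in the rotated test image
    and the theoretically rotated reference coordinates."""
    theory = rotate_points(reference_pts, angle_deg, center)
    return np.asarray(measured_pts, float) - theory
```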
Figure 4 shows that the differences between the coordinates of the coincident feature points extracted from the standard image and their theoretically computed coordinates are relatively small and concentrated near the origin, indicating high localization accuracy. By contrast, the localization deviations of the non-coincident feature points are considerably larger. This result shows that feature points differ measurably in localization accuracy.
Therefore, before image registration, feature point matching can be performed between the standard image and images to be registered at different angles, and the feature points with high localization accuracy extracted as the reference feature points for the actual detection process. This approach makes it possible to base the transformation model on feature points with higher localization accuracy, improving registration precision, while also reducing the computational load of feature point matching and speeding up the algorithm.
To show that, before registration, the standard-image feature points that match the images to be registered at all tested angles can serve as reference feature points in the actual detection process, a further analysis was carried out. Two training images to be registered, at different angle combinations, were matched against the standard image; the standard-image feature points that matched both training images were extracted; and a test image at yet another angle was used to verify the validity of the extracted feature points, as shown in Table 2.

Table 2. Analysis of feature point matching accuracy
Table 2 shows that the average distance between the coincident points and their theoretical coordinates is smaller than that of the non-coincident points, and only about half as large. This further demonstrates the validity of selecting, as reference points for the subsequent detection process, those standard-image feature points that can be matched with images at different angles.
The reference feature points finally obtained are used to compute the parameters of the registration transformation model in SURF-based image registration, so that the image to be registered is transformed accurately onto the standard image, achieving the registration.
The SURF feature point detection algorithm is a fast, robust feature extraction algorithm proposed on the basis of the SIFT algorithm. It is not only robust to image rotation, translation, scaling, and noise, but also copes well with illumination changes, viewpoint changes, and image blur. Moreover, by introducing the integral image and box filters, its computation runs roughly three times faster, giving superior overall performance.
The SURF feature point detector determines feature point locations from the local extrema of the determinant of the Hessian matrix. For a point $\mathbf{x}=(x,y)$ in image $I$ at scale $\sigma$, the Hessian matrix is defined as

$$H(\mathbf{x},\sigma)=\begin{bmatrix} L_{xx}(\mathbf{x},\sigma) & L_{xy}(\mathbf{x},\sigma)\\ L_{xy}(\mathbf{x},\sigma) & L_{yy}(\mathbf{x},\sigma)\end{bmatrix},$$

where $L_{xx}(\mathbf{x},\sigma)$ is the convolution of the Gaussian second-order derivative $\partial^{2}g(\sigma)/\partial x^{2}$ with the image $I$ at point $\mathbf{x}$, and $L_{xy}$, $L_{yy}$ are defined analogously.
The SURF algorithm approximates the second-order Gaussian filters with box filters, constructing a fast Hessian matrix, and uses integral images to accelerate the convolutions and thus the computation.
Figure 5 shows the 9×9 box filter templates. To make SURF features scale invariant, a scale space is constructed, and local extrema are found in each scale layer via the determinant of the fast Hessian matrix. To this end, the SURF algorithm convolves the source image with box filters of different sizes to obtain feature responses in different scale spaces. After box filtering, the response in the x direction is denoted $D_{xx}$, in the xy direction $D_{xy}$, and in the y direction $D_{yy}$. Since the box filters are approximate estimates of the second-order Gaussian filters, a scale factor $\omega$ ($\omega \approx 0.9$) is introduced, and the determinant of the fast Hessian matrix becomes

$$\Delta H = D_{xx}(\mathbf{x})\,D_{yy}(\mathbf{x}) - \bigl(\omega D_{xy}(\mathbf{x})\bigr)^{2},$$

where $\Delta H$ is the box filter response over the region around image point $I(x,y)$; extremum detection is performed on $\Delta H$.
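For illustration, the response $\Delta H$ at the smallest (9×9) filter size can be computed as below. The lobe layouts follow the standard SURF box kernels, and normalization by filter area is omitted, so this is a sketch rather than the exact filters of Figure 5.

```python
# Sketch of the fast-Hessian response at the 9x9 scale, using the standard
# SURF box-kernel layouts; area normalisation is omitted for brevity.
import numpy as np
from scipy.ndimage import convolve

def fast_hessian_response(img, w=0.9):
    img = img.astype(np.float64)
    dxx = np.zeros((9, 9))
    dxx[2:7, 0:3], dxx[2:7, 3:6], dxx[2:7, 6:9] = 1, -2, 1   # three x-lobes
    dyy = dxx.T                                              # transposed layout
    dxy = np.zeros((9, 9))
    dxy[1:4, 1:4] = dxy[5:8, 5:8] = 1                        # positive quadrants
    dxy[1:4, 5:8] = dxy[5:8, 1:4] = -1                       # negative quadrants
    Dxx, Dyy, Dxy = (convolve(img, k) for k in (dxx, dyy, dxy))
    return Dxx * Dyy - (w * Dxy) ** 2                        # Delta H
```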
After the local extrema of the response in each scale layer are found via the fast Hessian matrix, each local extremum is compared against a 3×3×3 neighborhood formed by its 8 neighbors at the same scale and the 9 points at each of the two adjacent scales. Only if the value at the local extremum is greater than (or less than) all 26 neighbors is the point kept as a candidate feature point. To localize candidate feature points at sub-pixel accuracy, interpolation is performed in scale space and image space, yielding stable feature point positions and scale values.
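The 26-neighbour comparison admits a compact expression over a (scale, row, col) response stack; the sketch below handles maxima only, minima being treated symmetrically.

```python
# 3x3x3 non-maximum suppression over a (scale, row, col) response stack;
# a candidate must exceed all 26 neighbours (minima handled symmetrically).
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_extrema(responses, threshold=0.0):
    peak = responses == maximum_filter(responses, size=3)
    return np.argwhere(peak & (responses > threshold))
```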
The quadratic fitting function used for the interpolation is

$$D(\mathbf{x}) = D + \frac{\partial D}{\partial \mathbf{x}}^{T}\mathbf{x} + \frac{1}{2}\,\mathbf{x}^{T}\,\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x}.$$

Setting the derivative of this function to zero gives the extremum location

$$\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}},$$

and the value of the function at the extremum is

$$D(\hat{\mathbf{x}}) = D + \frac{1}{2}\,\frac{\partial D}{\partial \mathbf{x}}^{T}\hat{\mathbf{x}}.$$

In the experiments, candidate feature points with $D(\hat{\mathbf{x}}) < 0.03$ are discarded.
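The refinement amounts to one Newton step on the fitted quadratic. In the sketch below, the finite-difference stencils for the gradient and Hessian are conventional choices not prescribed by the text.

```python
# Sub-pixel/sub-scale refinement by one Newton step on the quadratic fit;
# finite-difference stencils for gradient and Hessian are standard choices.
import numpy as np

def refine_extremum(D, s, r, c):
    """D: (scale, row, col) response stack; returns (offset x_hat, D(x_hat))."""
    g = 0.5 * np.array([D[s+1, r, c] - D[s-1, r, c],
                        D[s, r+1, c] - D[s, r-1, c],
                        D[s, r, c+1] - D[s, r, c-1]])
    H = np.empty((3, 3))
    H[0, 0] = D[s+1, r, c] - 2*D[s, r, c] + D[s-1, r, c]
    H[1, 1] = D[s, r+1, c] - 2*D[s, r, c] + D[s, r-1, c]
    H[2, 2] = D[s, r, c+1] - 2*D[s, r, c] + D[s, r, c-1]
    H[0, 1] = H[1, 0] = 0.25 * (D[s+1, r+1, c] - D[s+1, r-1, c]
                                - D[s-1, r+1, c] + D[s-1, r-1, c])
    H[0, 2] = H[2, 0] = 0.25 * (D[s+1, r, c+1] - D[s+1, r, c-1]
                                - D[s-1, r, c+1] + D[s-1, r, c-1])
    H[1, 2] = H[2, 1] = 0.25 * (D[s, r+1, c+1] - D[s, r+1, c-1]
                                - D[s, r-1, c+1] + D[s, r-1, c-1])
    x_hat = -np.linalg.solve(H, g)
    value = D[s, r, c] + 0.5 * g @ x_hat   # D(x_hat)
    return x_hat, value                    # discard if value < 0.03
```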
Description of SURF Feature Points
To preserve the rotation invariance of the feature points, a dominant orientation is determined for each feature point once its position is fixed. Within a circular region of radius 6σ centered on the feature point (σ being the feature point's scale), Haar wavelet responses (with wavelet side length 4σ) are computed in the x and y directions and weighted with a Gaussian of scale 2σ, so that responses closer to the feature point contribute more. A sector of angle π/3 is then swept around the full circle, and the direction of the vector sum of the horizontal and vertical responses with the largest magnitude within the sector is taken as the dominant orientation of the feature point (Figure 6).
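Given the Gaussian-weighted Haar responses of the samples inside the 6σ circle, the π/3 sector sweep can be sketched as follows; the 5° scan step is an assumed discretization.

```python
# Sliding pi/3 sector scan for the dominant orientation. Inputs are arrays
# of Gaussian-weighted Haar responses (dx, dy) of the samples in the
# 6-sigma circle; the 5-degree scan step is an illustrative choice.
import numpy as np

def dominant_orientation(dx, dy, step_deg=5):
    ang = np.arctan2(dy, dx)
    best_norm, best_dir = -1.0, 0.0
    for a0 in np.deg2rad(np.arange(0, 360, step_deg)):
        in_sector = ((ang - a0) % (2 * np.pi)) < np.pi / 3
        sx, sy = dx[in_sector].sum(), dy[in_sector].sum()
        if sx * sx + sy * sy > best_norm:       # longest summed vector wins
            best_norm, best_dir = sx * sx + sy * sy, np.arctan2(sy, sx)
    return best_dir
```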
Then, centered on the feature point, a square window of side length 20σ is constructed with one side perpendicular to the dominant orientation, and the window is divided into 4×4 sub-regions. In each sub-region, the horizontal and vertical Haar wavelet responses, denoted $d_x$ and $d_y$, are computed at 5σ×5σ sample points and likewise weighted with a Gaussian of scale 2σ to increase robustness to geometric transformations. The responses and their absolute values are then summed over each sub-region to form $\sum d_x$, $\sum d_y$, $\sum|d_x|$, $\sum|d_y|$, so that each sub-region contributes a four-dimensional feature description vector

$$V=\left(\sum d_x,\ \sum d_y,\ \sum|d_x|,\ \sum|d_y|\right).$$

Each feature point thus yields a 4×4×4 = 64-dimensional feature vector. To ensure invariance to illumination, the feature vector is normalized, giving the final SURF descriptor (Figure 7).
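The assembly of the 64-dimensional descriptor can be illustrated as below, assuming the Haar responses have already been sampled on a 20×20 grid over the oriented 20σ window (4×4 sub-regions of 5×5 samples each).

```python
# Descriptor assembly sketch: dx, dy are 20x20 arrays of Gaussian-weighted
# Haar responses over the oriented window (an assumed sampling); each 5x5
# sub-region contributes (sum dx, sum dy, sum|dx|, sum|dy|).
import numpy as np

def surf_descriptor(dx, dy):
    v = []
    for i in range(4):
        for j in range(4):
            bx = dx[5*i:5*(i+1), 5*j:5*(j+1)]
            by = dy[5*i:5*(i+1), 5*j:5*(j+1)]
            v += [bx.sum(), by.sum(), np.abs(bx).sum(), np.abs(by).sum()]
    v = np.asarray(v)                  # 4 x 4 x 4 = 64 dimensions
    return v / np.linalg.norm(v)       # normalise for illumination invariance
```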
Matching of Feature Points
Feature point matching is the key to image registration, and its accuracy directly determines the accuracy of the subsequent registration. Using the feature point neighborhood information contained in the SURF descriptors, the K-nearest-neighbor method finds the two best potential matches for each feature point, and the best matching pairs are selected by the ratio between the best and second-best match distances.
Let $M_1$ and $M_2$ be the sets of feature points to be matched, extracted by SURF from images $I_1$ and $I_2$ respectively. For any feature point $m_{1i}$ in $M_1$, let $m_{2j}$ and $m'_{2j}$ be the two feature points in $M_2$ with the smallest Euclidean distances to $m_{1i}$, with corresponding distances $d_{ij}$ and $d'_{ij}$. If $d_{ij} \le \alpha\, d'_{ij}$ ($\alpha = 0.65$ in the experiments), then $m_{1i}$ and $m_{2j}$ are accepted as a matching point pair.
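Written out directly on descriptor matrices, the distance-ratio test reads as follows (brute-force L2 distances; α = 0.65 as in the text, the rest a sketch).

```python
# The distance-ratio test on raw descriptor matrices (brute force, L2);
# alpha = 0.65 follows the text, everything else is illustrative.
import numpy as np

def ratio_match(des1, des2, alpha=0.65):
    """Return (i, j) index pairs: des1[i] matched to des2[j]."""
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :2]            # two best per row
    rows = np.arange(len(des1))
    best, second = d[rows, nearest[:, 0]], d[rows, nearest[:, 1]]
    return [(i, nearest[i, 0]) for i in rows[best <= alpha * second]]
```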
The above method yields all matching point pairs between the standard image and the image to be registered, where $S$ and $D$ denote the matched feature point sets of the standard image and the image to be registered respectively. Because some of these pairs are mismatches, they are purified here using a triangle-similarity method. Three matching point pairs $P_1$ and $Q_1$, $P_2$ and $Q_2$, $P_3$ and $Q_3$ are drawn at random from the feature point sets $S$ and $D$, forming the virtual triangles $\Delta P_1P_2P_3$ and $\Delta Q_1Q_2Q_3$ as a triangle pair, with side lengths $lp_1$, $lp_2$, $lp_3$ and $lq_1$, $lq_2$, $lq_3$. If the corresponding side-length ratios agree within a tolerance $\varepsilon$, i.e.

$$\left|\frac{lp_1}{lq_1}-\frac{lp_2}{lq_2}\right|\le\varepsilon,\qquad \left|\frac{lp_2}{lq_2}-\frac{lp_3}{lq_3}\right|\le\varepsilon,\qquad \left|\frac{lp_1}{lq_1}-\frac{lp_3}{lq_3}\right|\le\varepsilon,$$

then the selected matching pairs are all valid; otherwise a mismatched pair is present and is rejected, purifying the feature point pairs. In the experiments of this embodiment, the tolerance $\varepsilon$ between the side-length ratios is set to 0.02.
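The per-triple check can be sketched as below; how failing pairs are accumulated and rejected across many random triples is left open by the text and is an implementation choice.

```python
# Per-triple purification check: the three side-length ratios of the
# virtual triangles must agree within tol = 0.02 (cf. the embodiment).
import numpy as np

def triangle_consistent(P, Q, idx, tol=0.02):
    """P, Q: Nx2 matched point arrays; idx: indices of three matched pairs."""
    (p1, p2, p3), (q1, q2, q3) = P[list(idx)], Q[list(idx)]
    lp = np.array([np.linalg.norm(p1 - p2), np.linalg.norm(p2 - p3),
                   np.linalg.norm(p3 - p1)])
    lq = np.array([np.linalg.norm(q1 - q2), np.linalg.norm(q2 - q3),
                   np.linalg.norm(q3 - q1)])
    r = lp / lq                        # corresponding side-length ratios
    return r.max() - r.min() <= tol    # all three ratios agree within tol
```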
Although the embodiments and drawings of the present invention are disclosed for illustrative purposes, those skilled in the art will appreciate that various substitutions, changes, and modifications are possible without departing from the spirit and scope of the invention and the appended claims; therefore, the scope of the present invention is not limited to what is disclosed in the embodiments and drawings.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510046829.XA CN104700400A (en) | 2015-01-30 | 2015-01-30 | High-precision image matching method based on feature point training |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104700400A true CN104700400A (en) | 2015-06-10 |
Family
ID=53347482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510046829.XA (CN104700400A, pending) | High-precision image matching method based on feature point training | 2015-01-30 | 2015-01-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104700400A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101276465A (en) * | 2008-04-17 | 2008-10-01 | 上海交通大学 | Wide-angle image automatic stitching method |
US20120281146A1 (en) * | 2010-11-11 | 2012-11-08 | Hitoshi Yamada | Image processing device, image processing method, and program for image processing |
CN103489195A (en) * | 2013-10-11 | 2014-01-01 | 胡晓彤 | Method for detecting can body pattern direction of metal can |
CN103979118A (en) * | 2014-04-01 | 2014-08-13 | 浙江大学 | Airfoil wall plate digital positioning method and positioning device |
CN103955931A (en) * | 2014-04-29 | 2014-07-30 | 江苏物联网研究发展中心 | Image matching method and device |
Non-Patent Citations (3)
Title |
---|
Zhu Shiyuan: "Research on Robust Digital Image Watermarking Algorithms Against Geometric Attacks", China Master's Theses Full-text Database, Information Science and Technology *
Hu Xiaotong, Yang Wei: "A Sub-region Image Registration Method Based on Fast Fourier Transform", Journal of Tianjin University of Science and Technology *
Yuan Weiqi et al.: "Iris Recognition Algorithm Based on Stable Feature Points Extracted from Sequence Images", Chinese Journal of Scientific Instrument *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105701766A (en) * | 2016-02-24 | 2016-06-22 | 网易(杭州)网络有限公司 | Image matching method and device |
CN105701766B (en) * | 2016-02-24 | 2019-03-19 | 网易(杭州)网络有限公司 | Image matching method and device |
CN110443796A (en) * | 2019-08-15 | 2019-11-12 | 厦门盈趣科技股份有限公司 | Graph testing method and relevant apparatus |
CN114565781A (en) * | 2022-02-25 | 2022-05-31 | 中国人民解放军战略支援部队信息工程大学 | Image matching method based on rotation invariance |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20150610 |