CN108550165A - A kind of image matching method based on local invariant feature - Google Patents

An image matching method based on local invariant features

Info

Publication number
CN108550165A
CN108550165A CN201810221834.3A
Authority
CN
China
Prior art keywords
feature
image
matching
point
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810221834.3A
Other languages
Chinese (zh)
Inventor
管凤旭
谷凤姣
严浙平
徐健
杜雪
高帅
邱天畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201810221834.3A priority Critical patent/CN108550165A/en
Publication of CN108550165A publication Critical patent/CN108550165A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image matching method based on local invariant features: (1) compute the integral image of the initial image and the determinant of the Hessian matrix; (2) build a scale-space pyramid and locate the feature points; (3) determine the dominant orientation of each feature point with Haar wavelets, completing feature point extraction; (4) compute the rotation-invariant LBP features of the image region around each feature point and construct the feature descriptor; (5) perform coarse feature matching with the nearest-neighbor method under the Euclidean distance; (6) remove the mismatched points remaining after coarse matching with the random sample consensus (RANSAC) method, completing fine feature matching. Under the condition of guaranteed matching time and accuracy, the method of the invention also retains a degree of robustness when the image undergoes scale, illumination and rotation changes.

Description

An Image Matching Method Based on Local Invariant Features

Technical Field

The invention relates to an image feature extraction and image processing method, and more specifically to an image matching method.

Background Art

Image matching is an image analysis and processing technique that aligns two images of the same scene and determines the correspondence between them. It is widely used in navigation, map and terrain matching, biometric recognition, character recognition, medical image analysis, computer vision and other fields. In practice, the images to be matched are often acquired by different sensors at different times and under different conditions, so they differ in translation, scale, rotation, illumination, noise, viewpoint and so on, which poses a great challenge to image matching methods. Correlation methods based on pixel gray values (e.g. SSAD, NNPROD) have the drawback of being sensitive to changes in image scale, rotation, illumination and viewpoint; by contrast, methods based on local features (e.g. corners, SURF (Speeded-Up Robust Features) feature points) perform better. Local invariant image features are invariant under a variety of image transformations (geometric, photometric, etc.), have low redundancy, require no prior segmentation of the image, and are distinctive; they have therefore been widely used in image matching, object recognition, image classification, image retrieval and related fields.

Using local features in image matching converts the complicated matching problem into a problem of measuring distances between feature vectors, which improves both the speed and the robustness of the algorithm. Image matching based on local invariant features generally proceeds by first detecting the feature point set of the image, then generating a feature vector from each feature point and its local neighborhood, and finally measuring the similarity between feature vectors to complete the matching. SIFT (Scale-Invariant Feature Transform), published in IJCV in 2004, and SURF, later proposed to improve the running speed of SIFT using the Hessian matrix and Haar wavelets, are the two most representative local feature algorithms in the field of image matching, but they are inefficient and not very accurate on images with illumination changes. The LBP (Local Binary Patterns) method, which is robust to illumination changes, is a binary description of the relative magnitude of a pixel of a grayscale image and its surrounding pixels; it was originally applied to image texture description. In recent years many extended LBPs have been proposed; LBP is simple to compute and offers partial invariance to scale, rotation and brightness changes.

In "An Image Matching Algorithm Combining SIFT and Rotation-Invariant LBP", rotation-invariant LBP features are used to describe the local image region around each SIFT keypoint. The method is very robust to image transformations such as scale, rotation and illumination changes, but its computation speed cannot satisfy high real-time requirements.

In the patent document titled "An Image Matching Method Combining LBP Feature Extraction and SURF Feature Extraction", the Haar feature descriptor of the original SURF method is retained when describing feature points and is combined with the rotation-invariant LBP descriptor. This gives the method better matching results than the original SURF method, but maintaining two feature descriptions increases its complexity and reduces its speed.

As noted in "A Survey of Local Invariant Features", the matching methods for local invariant features most used in image and video retrieval, object recognition and related fields are threshold-based matching, nearest-neighbor matching, and matching based on the nearest-neighbor distance ratio. Each of the three has advantages and disadvantages: threshold-based matching is simple and computationally cheap, while matching based on the nearest-neighbor distance ratio is highly accurate.

Because of the various geometric and photometric transformations between images, noise, quantization error, and possibly similar local structures within an image, feature matching based on a similarity measure may still produce false matches. The random sample consensus method, first proposed by Fischler and Bolles in 1981, estimates the parameters of a mathematical model from a sample data set that contains anomalous data and thereby obtains the valid samples. It is widely used in image registration and stitching: by repeated iteration it finds the optimal parametric model for a data set containing "outliers" (points that do not fit the optimal model).

Summary of the Invention

The purpose of the present invention is to provide an image matching method based on local invariant features that, while guaranteeing matching time and accuracy, also remains robust when the image undergoes scale, illumination and rotation changes.

The purpose of the present invention is achieved as follows:

(1) Compute the integral image of the initial image and the determinant of the Hessian matrix;

(2) Build a scale-space pyramid and locate the feature points;

(3) Determine the dominant orientation of each feature point with Haar wavelets, completing feature point extraction;

(4) Compute the rotation-invariant LBP features of the image region around each feature point and construct the feature descriptor;

(5) Perform coarse feature matching with the nearest-neighbor method under the Euclidean distance;

(6) Remove the mismatched points remaining after coarse matching with the RANSAC method, completing fine feature matching.

Compared with the background art, the main features of the present invention are:

1. The rotation-invariant LBP feature is introduced into the keypoint description, yielding a descriptor that is simple to compute and low-dimensional, and strengthening the illumination robustness of the matching method.

2. In the feature point extraction stage, the feature point detector provided by SURF is adopted: the Hessian matrix and the integral image enable fast computation, which preserves matching performance while effectively remedying the slow computation of the gradient-based SIFT method.

3. In the feature description stage, the rotation-invariant LBP descriptor replaces the Haar feature descriptor of the original SURF method. It copes well with rotation, illumination and viewpoint changes without affecting the speed of image matching.

4. To improve matching accuracy, the more accurate Euclidean-distance nearest-neighbor method is used for coarse matching.

5. In the fine matching stage, the RANSAC method removes the false matches present in the coarse matching result, further improving matching accuracy.

The present invention studies the key problems of local feature point extraction and description, feature matching and false-match removal, and proposes an image matching method based on local invariant features that guarantees good matching performance while being robust to scale, illumination and rotation. Under the condition of guaranteed matching time and accuracy, it retains a degree of robustness when the image undergoes scale, illumination and rotation changes.

The beneficial effects of the present invention are: in the feature point extraction stage, the Hessian matrix and the integral image allow fast computation and the scale image pyramid is constructed, guaranteeing the speed and scale invariance of image matching; in the feature description stage, the rotation-invariant LBP feature gives the keypoints a simple, low-dimensional description and strengthens robustness to scale and illumination; in the coarse corner matching stage, the Euclidean-distance nearest-neighbor method makes corner matching faster; in the fine corner matching stage, the RANSAC method removes the mismatched points left by coarse matching and optimizes the result, making corner matching more accurate.

Brief Description of the Drawings

Figure 1 is a flowchart of the present invention.

Detailed Description of the Embodiments

To make the purpose, technical solution and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawing.

With reference to Figure 1, the image matching method of the present invention based on local feature point extraction and description comprises the following steps:

Step 1: image feature point extraction, comprising:

(1) Compute the integral image of the initial image and the determinant of the Hessian matrix

Traverse the initial image to be matched to compute its integral image, and compute the determinant of the Hessian matrix at every point of the image.

For a given pixel (x, y) of image 1 to be matched, with gray value f(x, y), the Hessian matrix H(f(x, y)) of that pixel is:

H(f(x,y)) = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix} \quad (1)

The discriminant det(H) of the H matrix (short for Hessian matrix, likewise below) is:

\det(H) = \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 \quad (2)

where \partial^2 f/\partial x^2, \partial^2 f/\partial y^2 and \partial^2 f/\partial x \partial y are the second-order partial derivatives of the pixel value f(x, y).

The value of the discriminant is the product of the eigenvalues of the H matrix; all points can be classified according to the sign of this value, and the sign of the discriminant determines whether the point is an extremum.

An H matrix can be computed for every pixel. To make the feature points scale-independent, the image must be Gaussian-filtered before the H matrix is constructed. Computing the H matrix after filtering, the Hessian matrix H(x, σ) of pixel x at scale σ is defined as:

H(x,\sigma) = \begin{bmatrix} L_{xx}(x,\sigma) & L_{xy}(x,\sigma) \\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma) \end{bmatrix} \quad (3)

where L_{xx}(x,\sigma) is the convolution of the second-order derivative \partial^2 g(\sigma)/\partial x^2 of the Gaussian kernel with image 1 at point x, L_{xy}(x,\sigma) and L_{yy}(x,\sigma) are computed in the same way, and g(\sigma) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right).

In this way a value of the H determinant can be computed for every pixel of the image and used to discriminate feature points. Replacing L_{xx}, L_{xy}, L_{yy} by the results D_{xx}, D_{xy}, D_{yy} of convolving box filters with the image yields the approximate Hessian matrix H_{approx}, whose determinant det(H_{approx}) is:

\det(H_{approx}) = D_{xx} D_{yy} - (0.9 D_{xy})^2 \quad (4)
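As a brief illustration (not the patent's own code), the integral-image trick that makes the box-filter responses cheap, and the determinant approximation of formula (4), can be sketched in Python. The function names `integral_image`, `box_sum` and `det_hessian_approx` are chosen here for illustration:

```python
import numpy as np

def integral_image(img):
    # S[i, j] = sum of img[:i, :j]; zero-padded on top/left so any
    # rectangular box sum needs only 4 table lookups.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(S, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1) using the integral image S.
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

def det_hessian_approx(Dxx, Dyy, Dxy):
    # Formula (4): det(H_approx) = Dxx*Dyy - (0.9*Dxy)^2.
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```

Because every box-filter response is a handful of `box_sum` calls, the cost of evaluating D_xx, D_yy, D_xy is independent of the filter size, which is what makes the scale pyramid of the next step cheap.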

(2) Build the scale-space pyramid and locate the feature points

To obtain scale invariance, a scale-space pyramid must be built. The image size is kept fixed, and the pyramid is built by changing the template size of the box filter. Specifically, the image scale-space pyramid has four octaves and each octave is filtered four times. The size of the first template of octave 1 is given as 9×9, and adjacent templates within octave 1 differ by 6 pixels, so the first to fourth templates of octave 1 are 9×9, 15×15, 21×21 and 27×27. Adjacent templates differ by 12 pixels in octave 2, by 24 pixels in octave 3 and by 48 pixels in octave 4, and the first template of each octave equals the second template of the previous octave. The first to fourth templates are therefore 15×15, 27×27, 39×39 and 51×51 in octave 2; 27×27, 51×51, 75×75 and 99×99 in octave 3; and 51×51, 99×99, 147×147 and 195×195 in octave 4. The approximate scale corresponding to each filtering can be computed with formula (5):

\sigma_{approx} = 1.2 \times \frac{N}{9} \quad (5)

where \sigma_{approx} denotes the scale and N is the current template size (the 9×9 base template corresponds to the scale 1.2).
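The octave and template bookkeeping above can be checked with a short sketch (function names are ours, not the patent's):

```python
def surf_template_sizes(octaves=4):
    # Octave o starts at the second template of the previous octave and
    # grows in steps of 6 * 2**o pixels, as described in the text.
    sizes, start, step = [], 9, 6
    for _ in range(octaves):
        sizes.append([start + k * step for k in range(4)])
        start = sizes[-1][1]   # first template of next octave = second of this one
        step *= 2
    return sizes

def sigma_approx(template_size):
    # Formula (5): the 9x9 base template corresponds to scale 1.2.
    return 1.2 * template_size / 9
```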

Feature point localization: a threshold is set for the Hessian response, and all points below the threshold are removed; non-maximum suppression then selects as feature points those whose responses are either greater than or smaller than those of all surrounding points in their own scale layer and the adjacent layers; finally a three-dimensional quadratic function is fitted to determine the position and scale of each feature point precisely.
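A minimal sketch of the 3×3×3 non-maximum suppression (for brevity it keeps maxima only and omits the quadratic refinement; names are illustrative):

```python
import numpy as np

def local_extrema(stack, threshold):
    # stack: Hessian responses at 3 adjacent scales, shape (3, H, W).
    # A pixel of the middle layer is kept when its response exceeds
    # `threshold` and is strictly greater than its 26 neighbours in the
    # 3x3x3 block spanning the layer above, below, and its own layer.
    _, H, W = stack.shape
    points = []
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            v = stack[1, r, c]
            if v <= threshold:
                continue
            block = stack[:, r - 1:r + 2, c - 1:c + 2]
            if v >= block.max() and (block == v).sum() == 1:
                points.append((r, c))
    return points
```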

(3) Determine the dominant orientation of the feature points with Haar wavelets

In the circular neighborhood centred on the feature point with radius 6σ (σ being the scale of the feature point), compute the Haar wavelet responses in the X and Y directions with a Haar wavelet template of side length 4σ, and weight the filtered region with a Gaussian of standard deviation 2σ. Then, centred on the feature point, sweep a sector of central angle π/3 once around the circular neighborhood, summing the Haar wavelet responses of the image points covered by the sector in each π/3 window; the direction of the largest total response is taken as the dominant orientation of the feature point. Here X and Y are the two axes of the planar Cartesian coordinate system of the circular neighborhood.
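The sliding-sector vote can be sketched as follows (a simplified illustration: the Haar responses `dx`, `dy` and their angles are assumed precomputed, and the Gaussian weighting and sampling pattern are omitted; names are ours):

```python
import numpy as np

def dominant_orientation(angles, dx, dy, window=np.pi / 3, steps=72):
    # Slide a pi/3 sector around the circle; at each position sum the
    # (dx, dy) responses whose angle falls inside the sector, and return
    # the direction of the largest resulting vector.
    angles = np.mod(angles, 2 * np.pi)
    best_norm, best_dir = -1.0, 0.0
    for k in range(steps):
        lo = 2 * np.pi * k / steps
        inside = np.mod(angles - lo, 2 * np.pi) < window
        sx, sy = dx[inside].sum(), dy[inside].sum()
        norm = sx * sx + sy * sy
        if norm > best_norm:
            best_norm, best_dir = norm, np.arctan2(sy, sx)
    return best_dir
```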

Step 2: feature descriptor generation, comprising:

The basic LBP feature is a binary description of a 3×3 neighborhood, and its drawback is rotation dependence. To obtain rotation invariance, the present invention describes the region around each keypoint with rotation-invariant LBP features.

Let p_i(r, c, σ, θ) be a keypoint obtained in the keypoint extraction of Step 1, where (r, c) are its position coordinates in the original image and σ and θ are its scale and orientation respectively. According to the value of σ, a 9×9 image region centred on p_i is taken on the corresponding layer of the Gaussian pyramid in which p_i lies as the region to be described. To obtain rotation invariance, the region is rotated to the reference orientation according to θ. The rotation-invariant LBP description of the region proceeds as follows:

(1) Within the 8×8 portion of the region to be described, compute, for each pixel p_j, the rotation-invariant LBP feature centred on it, denoted lbp_j (j = 1, 2, …, 64).

(2) Intuitively, the farther a pixel p_j lies from the centre, the less information it contributes to the description of p_i, so lbp_j is weighted with a coefficient w_j:

w_j = \exp\!\left\{-\left[(r_j - r_i)^2 + (c_j - c_i)^2\right]/(2\sigma_0^2)\right\}/(2\pi\sigma_0^2) \quad (6)

where (r_j, c_j) and (r_i, c_i) are the coordinates of the pixel p_j and the centre point p_i within the image region to be described, and σ_0 is a chosen constant.

(3) All the computed weighted LBP feature values are assembled into a one-dimensional vector, denoted T_i:

T_i = [\,w_1 \cdot lbp_1 \;\; w_2 \cdot lbp_2 \;\; \cdots \;\; w_{64} \cdot lbp_{64}\,] \quad (7)

(4) To eliminate the influence of illumination changes, T_i is normalized:

T_i \leftarrow T_i / \lVert T_i \rVert \quad (8)

In summary, the resulting 64-dimensional vector T_i is the description of the region around the keypoint p_i.
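Steps (1)–(4) above can be sketched as follows. This is a minimal illustration, not the patent's code: the 8×8 grid of centres is taken from an 8×8 neighbourhood padded by one pixel so that every 3×3 window exists, and the `>=` comparison convention and function names are our assumptions:

```python
import numpy as np

def lbp_rot_invariant(patch3):
    # Rotation-invariant LBP of the centre of a 3x3 patch: threshold the
    # 8 neighbours against the centre, then take the minimum code over
    # all 8 circular shifts of the bit string.
    c = patch3[1, 1]
    nb = [patch3[0, 0], patch3[0, 1], patch3[0, 2], patch3[1, 2],
          patch3[2, 2], patch3[2, 1], patch3[2, 0], patch3[1, 0]]
    bits = [1 if v >= c else 0 for v in nb]
    return min(sum(b << i for i, b in enumerate(bits[r:] + bits[:r]))
               for r in range(8))

def describe_keypoint(region8, sigma0=2.0):
    # region8: 8x8 neighbourhood of the keypoint. Each pixel's
    # rotation-invariant LBP is weighted by the Gaussian of formula (6)
    # and the 64-vector is L2-normalised as in formula (8).
    padded = np.pad(np.asarray(region8, dtype=float), 1, mode='edge')
    ci = padded.shape[0] / 2.0          # centre of the padded region
    vec = []
    for r in range(1, 9):
        for c in range(1, 9):
            lbp = lbp_rot_invariant(padded[r - 1:r + 2, c - 1:c + 2])
            d2 = (r - ci) ** 2 + (c - ci) ** 2
            w = np.exp(-d2 / (2 * sigma0 ** 2)) / (2 * np.pi * sigma0 ** 2)
            vec.append(w * lbp)
    vec = np.asarray(vec)
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec
```

Taking the minimum over circular bit shifts is what makes the per-pixel code invariant to 90° patch rotations, which the assertion below exercises.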

Step 3: coarse image feature matching, comprising:

Once the keypoint description vectors have been generated, the present invention uses the distance

d(T_A, T_B) = \lVert T_A - T_B \rVert_1 = \sum_{i=1}^{n} |a_i - b_i| \quad (9)

as the similarity measure between keypoints, where T_A = [a_1\; a_2\; \cdots\; a_n] and T_B = [b_1\; b_2\; \cdots\; b_n] are the description vectors of keypoints A and B respectively. Matching strategy: take a keypoint A in image 1 and find in image 2 the two keypoints B and C whose description vectors are nearest to that of A. If the ratio of the nearest distance \lVert T_A - T_B \rVert_1 to the second-nearest distance \lVert T_A - T_C \rVert_1 is smaller than a threshold t, i.e.

\lVert T_A - T_B \rVert_1 / \lVert T_A - T_C \rVert_1 < t \quad (10)

then keypoint A is considered to match the nearest keypoint B.
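The nearest-neighbour ratio test of formulas (9)–(10) fits in a few lines (L1 distance as in the notation above; names and the default threshold are illustrative):

```python
import numpy as np

def match_descriptors(desc1, desc2, t=0.7):
    # For each descriptor in desc1, find its two nearest neighbours in
    # desc2 under the L1 distance of formula (9); accept the match only
    # when nearest < t * second-nearest, the ratio test of formula (10).
    matches = []
    for i, d in enumerate(desc1):
        dists = np.abs(desc2 - d).sum(axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < t * dists[k]:
            matches.append((i, int(j)))
    return matches
```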

Step 4: fine image feature matching, comprising:

The random sample consensus method estimates the parameters of a mathematical model from a sample data set containing anomalous data and so obtains the valid samples. Its basic assumption is that the sample contains both correct data (data that fit the model) and anomalous data (data that do not fit the model), and that, given a correct set of data, there is a way to compute the model parameters that fit those data.

Let f be the set of corresponding matched feature points of the two images 1 and 2 to be stitched. The random sample consensus method removes mismatched points as follows:

(1) Arbitrarily select three pairs of non-collinear matched points from the matched feature point set f and compute their transformation matrix M.

(2) Use the transformation matrix M to compute corresponding points; e.g. for the pair (a, b), compute b' = M(a).

(3) Evaluate the distance between b' and b: if it is within the threshold, (a, b) is an inlier, otherwise an outlier.

(4) If the number of inliers satisfies the condition, exit; if after k iterations the number of inliers has not met the requirement, keep the largest inlier set found. If the exit condition is not satisfied, return to (1) of Step 4.
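Steps (1)–(4) can be sketched as a RANSAC loop over affine models (a simplified illustration with a fixed iteration count; the affine model choice and all names are our assumptions, since three non-collinear point pairs determine an affine map exactly):

```python
import numpy as np

def ransac_affine(src, dst, iters=200, thresh=3.0, seed=0):
    # RANSAC: repeatedly sample 3 correspondences, solve for the affine
    # map M with [x y 1] @ M = [x' y'], count points whose transfer error
    # |M(a) - b| is below `thresh`, and keep the model with most inliers.
    rng = np.random.default_rng(seed)
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])       # n x 3 homogeneous points
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        try:
            M = np.linalg.solve(A[idx], dst[idx])   # 3 x 2 affine matrix
        except np.linalg.LinAlgError:
            continue                    # sampled points were collinear
        err = np.linalg.norm(A @ M - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Matches flagged False in the returned mask are the mismatched points to discard; in production one would also re-estimate M on the final inlier set.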

Claims (1)

1. An image matching method based on local invariant features, characterized in that: (1) the integral image of the initial image and the determinant of the Hessian matrix are computed; (2) a scale-space pyramid is built and the feature points are located; (3) the dominant orientation of each feature point is determined with Haar wavelets, completing feature point extraction; (4) the rotation-invariant LBP features of the image region around each feature point are computed and the feature descriptor is constructed; (5) coarse feature matching is performed with the nearest-neighbor method under the Euclidean distance; (6) the mismatched points remaining after coarse matching are removed with the random sample consensus method, completing fine feature matching.
CN201810221834.3A 2018-03-18 2018-03-18 A kind of image matching method based on local invariant feature Pending CN108550165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810221834.3A CN108550165A (en) 2018-03-18 2018-03-18 A kind of image matching method based on local invariant feature


Publications (1)

Publication Number Publication Date
CN108550165A true CN108550165A (en) 2018-09-18

Family

ID=63516598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810221834.3A Pending CN108550165A (en) 2018-03-18 2018-03-18 A kind of image matching method based on local invariant feature

Country Status (1)

Country Link
CN (1) CN108550165A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727239A * 2018-12-27 2019-05-07 北京航天福道高技术股份有限公司 Registration method of inspection map and reference map based on SURF feature
CN109815822A * 2018-12-27 2019-05-28 北京航天福道高技术股份有限公司 Inspection figure components target identification method based on Generalized Hough Transform
CN109815822B * 2018-12-27 2024-05-28 北京航天福道高技术股份有限公司 Patrol diagram part target identification method based on generalized Hough transformation
CN109816674A * 2018-12-27 2019-05-28 北京航天福道高技术股份有限公司 An edge extraction method of registration map based on Canny operator
CN110189368A * 2019-05-31 2019-08-30 努比亚技术有限公司 Method for registering images, mobile terminal and computer readable storage medium
CN110189368B * 2019-05-31 2023-09-19 努比亚技术有限公司 Image registration method, mobile terminal and computer readable storage medium
CN110208795A * 2019-06-13 2019-09-06 成都汇蓉国科微系统技术有限公司 A kind of low slow small target detection identifying system of mobile platform high-precision and method
CN110208795B * 2019-06-13 2021-10-15 成都汇蓉国科微系统技术有限公司 High-precision low-speed small target detection and identification system and method for mobile platform
CN111238488A * 2020-03-18 2020-06-05 湖南云顶智能科技有限公司 Aircraft accurate positioning method based on heterogeneous image matching
CN115588204A * 2022-09-23 2023-01-10 神州数码系统集成服务有限公司 Single character image matching and identifying method based on DS evidence theory

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903237A (en) * 2014-03-21 2014-07-02 上海大学 Dual-frequency identification sonar image sequence splicing method
CN104933434A (en) * 2015-06-16 2015-09-23 同济大学 Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method
CN105608671A (en) * 2015-12-30 2016-05-25 哈尔滨工业大学 Image connection method based on SURF algorithm
CN106657789A (en) * 2016-12-29 2017-05-10 核动力运行研究所 Thread panoramic image synthesis method
CN107481273A (en) * 2017-07-12 2017-12-15 南京航空航天大学 A kind of Spacecraft Autonomous Navigation rapid image matching method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180918