CN109697692B - Feature matching method based on local structure similarity - Google Patents
- Publication number: CN109697692B (application CN201811634213.4A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T3/147 — Transformations for image registration using affine transformations
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
Abstract
The invention provides a feature matching method based on local structure similarity, used to solve the problem of unsatisfactory matching results caused by noise interference during image registration. The steps are: Step 1, perform feature extraction and initial matching on the two images to be matched; Step 2, build the neighborhood affine coefficient matrix of the feature points; Step 3, for each match in the initial matching set, compute the difference between the neighborhood affine coefficient matrices of its associated feature points; Step 4, optimize the neighborhood affine coefficient matrix to obtain the degree of local structure difference; Step 5, according to the local structure difference value of the feature points associated with each match, set a comparison threshold and determine the final feature matching pairs as the matching relationship of the images to be matched. The invention technically overcomes the complex optimization and slow convergence of the prior art and effectively improves matching efficiency.
Description
Technical field
The invention belongs to the field of image processing, and in particular relates to image processing when matching is insufficiently accurate under the influence of image noise; specifically, it is a feature matching method based on local structure similarity.
Background
With the rapid development of multimedia technology, images have become an important carrier of information, and digital image processing has grown increasingly important; within it, image matching has been a focus of attention in recent years. Other research directions in digital image processing, such as image recognition, image retrieval, object recognition, and object tracking, are all further developed on the basis of image matching, so progress in image matching can drive the development of digital image processing as a whole. Image matching, however, is both a research hotspot and a research difficulty. The goal of matching is to find exact correspondences for the same objects across images, but the process is subject to many limitations. The images to be matched may come from different photographic equipment, different shooting scenes, or even different shooting eras. Differences in storage devices, changes in viewpoint and illumination, and background clutter introduce object distortion, geometric deformation, and noise interference, all of which pose a severe test for image matching technology.
In recent years, scholars have studied image matching extensively and achieved good academic results. A typical image matching method is based on image feature points and feature description vectors: an initial matching set is obtained by computing distances between feature descriptors. Because commonly used feature extraction algorithms are highly discriminative and scale invariant, a large fraction of the initial matching set obtained from feature descriptors consists of correct matching pairs. Subsequent algorithms then remove incorrect matches from the initial set by imposing geometric or relational constraints, yielding the final correct matching set. The most common approach uses a graph matching algorithm to optimize the matching set: vertices in the graph represent image feature points, edges represent associations between feature points, an energy function expresses point-to-point and edge-to-edge similarity, and the matching constraints are enforced by minimizing this function. On this basis, a secondary optimization selects 3 neighbor points around each key point to represent it linearly, applies the resulting coefficient matrix to the corresponding matching point, substitutes into the energy function, and solves by linear programming to determine the final matching result. However, the selection of neighbor points in this method is inflexible, and the subsequent solution process is too complicated and time-consuming to achieve efficient matching.
Summary of the invention
To address the problems of the above matching methods, the present invention provides a feature matching method based on local linear structure similarity. Compared with the prior art, this method flexibly exploits image structure information, simplifies the computation during optimization, and greatly improves matching precision and recall.
Purpose of the invention: the present invention aims to remedy the shortcomings of existing matching methods by proposing a feature matching method based on local structure similarity.
Technical solution: the present invention is a feature matching method based on local structure similarity. The method extracts image feature descriptors to obtain an initial matching set, linearly represents each key point by its adjacent points to obtain a coefficient matrix, and uses the coefficient matrix to measure the geometric consistency of the local regions of the key points in each matching pair. This consistency serves as a confidence criterion: matching pairs with low confidence are deleted and those with high confidence are retained, determining the final matching set. The method comprises the following steps:
Step 1, perform feature extraction and initial matching on the two images to be matched to obtain an initial matching correspondence set;
Step 2, for the feature points determined by the initial matching set obtained in Step 1, determine the neighbor points of each feature point and from them build the neighborhood affine coefficient matrix of each feature point;
Step 3, for each match in the initial matching set, compute the difference between the neighborhood affine coefficient matrices of its associated feature points, and use this difference to represent the local structure similarity between the feature points associated with the match. The smaller the difference value, the higher the local structure similarity; the larger the difference, the lower the local structure similarity;
Step 4, optimize the neighborhood affine coefficient matrix: define a function of the neighborhood affine coefficient matrix that measures the degree of local structure difference, find the neighborhood affine coefficient matrix at which this function attains its extremum, and substitute this coefficient matrix into the function to compute the degree of local structure difference of the feature points;
Step 5, according to the local structure difference value of the feature points associated with each match obtained in Step 4, set a comparison threshold, retain matches whose difference value is below the threshold, delete matches whose difference value is not below the threshold, and take the final feature matching pairs as the matching relationship of the images to be matched.
In more detail, the steps of the present invention are as follows:
Step 1, image feature extraction and initial matching: extract local feature points of the images to be matched, i.e. find distinguishable key points that are robust to image transformations and have a high detection repeatability. Perform gradient statistics over the local region of each detected feature point to complete its feature description. For any feature point, compute the Euclidean distance between its descriptor and the descriptors of the other feature points, and select the feature point with the smallest distance as its matching point. All feature points together with their matching points form the initial image matching set;
Step 2, build the neighborhood affine coefficient matrix of the key points in each matching pair: on the basis of the initial matching set obtained in Step 1, for each local feature point in the matching set, find the other feature points within a certain range of it as its neighborhood feature points, linearly represent the key point with these neighborhood feature points, and obtain the corresponding affine coefficient matrix;
Step 3, measure the degree of local structure difference based on the neighborhood affine coefficient matrices: for each match in the initial matching set obtained in Step 1, obtain via Step 2 the neighborhood affine coefficient matrices of the feature points associated with the match, and compute the difference between the two matrices. This difference value represents the degree of local structure difference of the feature points associated with the match: the smaller the value, the higher the local structure similarity; the larger the value, the lower the similarity;
Step 4, neighborhood affine coefficient matrix optimization: define a function of the neighborhood affine coefficient matrix that measures the degree of local structure difference. The function is the sum of two data terms, each being the difference between a feature point of a match in the initial matching set and the affine combination formed with its neighborhood affine coefficient matrix. The goal of the solution is to make the function value as close to zero as possible: obtain the neighborhood affine coefficient matrix at the extremum of the function and substitute it back into the function to compute the degree of local structure difference of the feature points;
Step 5, compare the local structure difference value of each match from Step 4 against a threshold: when the difference value is below the specified threshold, the corresponding feature point pair is a correct matching pair; when it is not below the threshold, the pair is a false matching pair. Remove the false matching pairs from the initial matching correspondence set, and output all remaining matching pairs as the final correct matching result.
In more detail, Step 1 is implemented as follows. Step 1.1, image feature extraction: use the classic scale-invariant feature transform (SIFT) descriptor to extract image features. Take a pair of images to be matched, I and I', as input to the SIFT algorithm to obtain SIFT feature points and the 128-dimensional feature description vector of each feature point;
Step 1.2, initial feature matching: with the feature vectors obtained in Step 1.1, let Xi and Xj be individual feature description vectors from the two images I and I' respectively, and let d(Xi, Xj) be their Euclidean distance. If and only if the ratio of the distance between Xi and every other feature description vector to d(Xi, Xj) is greater than a set threshold do we consider Xi and Xj a possible match; when the ratio is less than or equal to the threshold, the corresponding feature points are considered unmatched. The distance between feature description vectors is

d(Xi, Xj) = sqrt( Σk (Xik − Xjk)² ),

where k indexes the dimensions of the 128-dimensional feature description vectors, ranging over [1, 128], Xik is the k-th component of Xi, and Xjk is the k-th component of Xj;
In other words, this step is: given a pair of images to be matched, measure the distance from every feature description vector in one image to each feature description vector in the other image, and select the feature point with the smallest distance value as its initial match; each matching pair then corresponds to a feature region in the images to be matched. All initial matching pairs together constitute the initial matching set of the images to be matched.
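As a rough illustration of Step 1.2, the nearest-neighbour matching on descriptor Euclidean distance can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: `initial_matches` is a hypothetical helper, and the 128-dimensional descriptors are random stand-ins for real SIFT descriptors.

```python
import numpy as np

def initial_matches(desc1, desc2):
    """Nearest-neighbour matching on descriptor Euclidean distance.

    desc1: (n1, 128) array, desc2: (n2, 128) array.
    Returns (i, j) pairs: for each descriptor in desc1, the index of
    its closest descriptor in desc2.
    """
    # Pairwise Euclidean distances, d[i, j] = ||desc1[i] - desc2[j]||
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i]))) for i in range(len(desc1))]

# Toy stand-ins for SIFT descriptors (the patent uses 128-D SIFT vectors)
rng = np.random.default_rng(0)
desc2 = rng.normal(size=(5, 128))
desc1 = desc2[[2, 0, 4]] + 0.01 * rng.normal(size=(3, 128))  # noisy copies
print(initial_matches(desc1, desc2))  # -> [(0, 2), (1, 0), (2, 4)]
```

A production version would also apply the ratio test described above before accepting a pair.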
Step 2 is as follows. Step 2.1, select key neighborhood points: with the initial matching set from Step 1.2, let an arbitrary match M in the set correspond to the feature point pair (p, p'), where p belongs to image I and p' is the point in image I' matched with p. Find the other matches in the region near M, and let their corresponding feature point pairs be (qi, qi'). When the spatial distance between p and qi and, simultaneously, between p' and qi' is less than a specified threshold, we consider qi a neighbor point of p and qi' a neighbor point of p'; when either distance is greater than or equal to the threshold, p and qi (and p' and qi') do not have a neighbor relationship. The neighbor relationship is judged by:
||p − qi|| < τ and ||p' − qi'|| < τ,
where τ is the similarity threshold between a key point and its adjacent points. τ is set so that a key point p in the image to be matched has k nearest neighbors on average; k can be chosen according to the local neighborhood information of the key points so as to represent their structural information more accurately. The index i runs over the neighbor points of p, with range [1, k]; in experiments, τ can be adjusted to change k flexibly. By evaluating the neighbor relationship between each feature point and every other feature point, the neighbor point set of each feature point is obtained; that is, this step determines the neighbor points of the image points to be matched;
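The neighbor-selection rule above can be sketched as follows, assuming the matched keypoint coordinates are stored as two aligned arrays (row m of `pts1` and `pts2` is the pair of match m); `neighbor_sets` is an illustrative helper, not from the patent.

```python
import numpy as np

def neighbor_sets(pts1, pts2, m, tau):
    """Indices i of the other matches whose points satisfy the patent's
    joint rule  ||p - q_i|| < tau  and  ||p' - q_i'|| < tau,
    where (p, p') is the matched pair with index m."""
    p, p2 = pts1[m], pts2[m]
    return [i for i in range(len(pts1)) if i != m
            and np.linalg.norm(pts1[i] - p) < tau
            and np.linalg.norm(pts2[i] - p2) < tau]

pts1 = np.array([[0., 0.], [1., 0.], [0., 1.], [9., 9.]])
pts2 = pts1 + 2.0          # pure translation: local structure preserved
print(neighbor_sets(pts1, pts2, 0, tau=2.0))  # -> [1, 2]
```

In practice one would tune tau until each keypoint averages the desired k neighbors, as the text describes.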
Step 2.2, build the neighborhood affine coefficient matrix of the key points in each matching pair: with the neighbor points determined in Step 2.1, denote the neighbor point set of the feature point p corresponding to an arbitrary match M in image I as {q1, ..., qk}, where k is the number of neighbor points of p and qi is the i-th neighbor of p. Following Locally Linear Embedding theory, the geometric structure near the feature point p can be characterized by its neighborhood affine coefficient matrix w = [w1, ..., wi, ...]T, so that p is expressed as an affine combination of its determined neighbor point set: p = Σ wi qi, with Σ wi = 1. Similarly, the matching feature point p' of p in image I' is characterized by the neighborhood affine coefficient matrix w' = [w'1, ..., w'i, ...]T of p', giving p' = Σ w'i q'i with Σ w'i = 1. This completes the construction of the key points' neighborhood affine coefficient matrices;
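The affine coefficients of Step 2.2 can be computed with the standard Locally-Linear-Embedding least-squares solve (minimize ||p − Σ wi qi||² subject to Σ wi = 1). This is a sketch under the usual LLE formulation; the regularization term is a common numerical safeguard for the case k > dimension, not something the patent specifies.

```python
import numpy as np

def affine_weights(p, neighbors, reg=1e-9):
    """LLE-style weights: minimise ||p - sum_i w_i q_i||^2
    subject to sum_i w_i = 1.  neighbors: (k, d) array of q_i."""
    Z = neighbors - p                           # shift p to the origin
    G = Z @ Z.T                                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))     # regularise if singular
    w = np.linalg.solve(G, np.ones(len(G)))     # solve G w = 1
    return w / w.sum()                          # enforce sum-to-one

p = np.array([1.0, 1.0])
q = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
w = affine_weights(p, q)
print(np.round(w, 3), np.round(w @ q, 3))  # weights reconstruct p exactly
```

Because the weights sum to one, the representation is invariant to translation of the whole neighborhood, which is what lets matched neighborhoods be compared across images.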
Step 3 is as follows, judging the local structure similarity of matched points: the processing of the image feature points in Step 2 yields the neighborhood affine coefficient matrices w and w' of the feature point pair (p, p') associated with any match M in the initial matching set. Because the neighborhood affine coefficient matrix describes the local geometric structure of a feature point, agreement of the neighborhood coefficient matrices of a matched feature point pair indicates that the corresponding match is correct: when w = w', the matching pair is retained as a correct match; when the two are unequal, the pair is judged a false match and deleted;
Step 4 is as follows, optimizing the neighborhood affine coefficient matrix of the feature points: to optimize the matrix, find an optimal neighborhood affine coefficient matrix w* that is linearly combined with the neighbor point set of the feature point p, subtract the result from p, and take the norm of the difference; then linearly combine the same optimal coefficient vector w* with the neighbor point set of the matching point p' of p, subtract from p', and take the norm. Define the matrix co-error J as the sum of these two norms, representing the degree of local structure difference of the matching point pair. The specific form of J is

J(w*) = ||p − Σ w*i qi||² + ||p' − Σ w*i q'i||², with Σ w*i = 1.
For a correct match mapping the same object in the images to be matched, the associated feature points within one image are usually spatially adjacent and, due to physical constraints, share a similar topological structure across the two images, so the goal in solving for the matrix co-error is to make the value of J as small as possible. By contrast, for a false match the geometric structure expressed by a feature point and the feature points in its neighboring region is hard to keep consistent across different images. The matrix co-error J therefore provides a good criterion for judging the local structure similarity of feature points, i.e. the accuracy of the matches in the initial matching set.
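Since J uses one shared weight vector to reconstruct both p and p', it can be computed by stacking the coordinates of the two images and performing a single LLE-style solve. The sketch below is one illustrative reading of the definition, not the patent's verbatim algorithm; the regularization is again only a numerical safeguard.

```python
import numpy as np

def co_error(p, q, p2, q2, reg=1e-9):
    """Matrix co-error J = ||p - sum w_i q_i||^2 + ||p' - sum w_i q_i'||^2,
    minimised over one shared weight vector w with sum_i w_i = 1.
    q, q2: (k, d) neighbour coordinates of p and p'."""
    P = np.concatenate([p, p2])          # stacked "point" in 2d dims
    Q = np.hstack([q, q2])               # stacked neighbours, (k, 2d)
    Z = Q - P
    G = Z @ Z.T
    G += reg * np.trace(G) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    w /= w.sum()
    return float(np.sum((P - w @ Q) ** 2))

p  = np.array([1.0, 1.0]);  q  = np.array([[0., 0.], [2., 0.], [1., 3.]])
p2 = p + 5.0                             # correct match: same local
q2 = q + 5.0                             # structure, just translated
p3 = np.array([9.0, -4.0])               # wrong match: structure broken
print(round(co_error(p, q, p2, q2), 6), co_error(p, q, p3, q2) > 1.0)
```

A correct match (translated copy of the neighborhood) drives J to essentially zero, while displacing p' alone leaves a large residual, matching the discriminative behavior described above.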
Step 5 is as follows: based on the degree of local structure difference of the feature points of each match obtained in Step 4, i.e. the value of the matrix co-error, set a small threshold. When the matrix co-error of a feature point is below the threshold, the match associated with this feature point is judged correct and retained in the final matching set; otherwise, when the matrix co-error exceeds the threshold, the match is judged false and deleted. Finally, all retained matches form the final image matching result set.
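The final thresholding of Step 5 is then a simple filter over the co-error values; the match indices, J values, and the threshold `eps` below are all hypothetical.

```python
def filter_matches(matches, co_errors, eps):
    """Keep a match only when its matrix co-error is below the threshold."""
    return [m for m, j in zip(matches, co_errors) if j < eps]

matches = [(0, 3), (1, 7), (2, 5)]      # hypothetical (index-in-I, index-in-I') pairs
co_errors = [0.002, 4.1, 0.0005]        # hypothetical J values per match
print(filter_matches(matches, co_errors, eps=0.01))  # -> [(0, 3), (2, 5)]
```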
Beneficial technical effects
The feature matching method provided by the present invention solves the problem of unsatisfactory matching results caused by noise interference during image registration, using feature matching based on local structure similarity. It comprises the following steps: Step 1, perform feature extraction and initial matching on the two images to be matched to obtain the initial matching correspondence set; Step 2, for the feature points determined by the initial matching set, determine the neighbor points of each feature point and from them build its neighborhood affine coefficient matrix; Step 3, for each match in the initial matching set, compute the difference between the neighborhood affine coefficient matrices of its associated feature points, and use this difference to represent the local structure similarity between those feature points; Step 4, optimize the neighborhood affine coefficient matrix, define a function of the matrix that measures the degree of local structure difference, take its extremum, and further obtain the degree of local structure difference; Step 5, according to the local structure difference value of the feature points associated with each match, set a comparison threshold, retain matches whose difference value is below the threshold, delete matches whose difference value is not below the threshold, and take the final feature matching pairs as the matching relationship of the images to be matched.
The present invention is an image matching method based on local structure similarity that provides an effective remedy for the noise interference that current image matching technology cannot avoid. It defines the local structure description of a feature point by the affine relationship between the feature point and its neighbor points, and deletes false matches by comparing the local structures of matched feature points, thereby removing noise. The invention rests mainly on the idea that matching pairs mapping the same object in the images to be matched usually lie in adjacent regions; a noise point cannot simultaneously satisfy both the neighborhood-range constraint and the similar-local-structure constraint, so noise can be effectively excluded. Moreover, in its concrete implementation the invention flexibly exploits the structural information of the images themselves, gives an affine-relationship constraint that realizes image matching accurately, and simplifies the computation, technically overcoming the complex optimization and slow convergence of the prior art and effectively improving matching efficiency.
Brief description of the drawings
Fig. 1 is the basic flowchart of the method of the invention.
Fig. 2 illustrates the relationship between a feature point and its neighbor points.
Fig. 3 shows the experimental matching results.
Detailed implementation
The present invention is further described below with reference to the accompanying drawings:
Referring to Fig. 1, a matching method based on local structure similarity processes a pair of images to be matched in the following sequence:
Step 1, perform feature extraction and initial matching on the two images to be matched to obtain the initial matching correspondence set. Step 2, for the feature points determined by the initial matching set obtained in Step 1, determine the neighbor points of each feature point and from them build its neighborhood affine coefficient matrix. Step 3, for each match in the initial matching set, compute the difference between the neighborhood affine coefficient matrices of its associated feature points, and use this difference to represent the local structure similarity between those feature points: the smaller the difference value, the higher the local structure similarity; the larger the difference, the lower the similarity. Step 4, optimize the neighborhood affine coefficient matrix: define a function of the matrix that measures the degree of local structure difference, find the matrix at which the function attains its extremum, and substitute it into the function to compute the degree of local structure difference of the feature points. Step 5, according to the local structure difference value of the feature points associated with each match obtained in Step 4, set a comparison threshold, retain matches whose difference value is below the threshold, delete matches whose difference value is not below the threshold, and take the final feature matching pairs as the matching relationship of the images to be matched.
In more detail, the specific steps are as follows. Step 1, image feature extraction and initial matching: extract local feature points of the images to be matched, i.e., distinguishable key points that are robust to image transformations and have high detection repeatability. Compute gradient statistics over the local region of each detected feature point to build its feature descriptor. For any feature point, compute the Euclidean distance between its descriptor and the descriptors of the other feature points, and select the feature point with the smallest distance as its matching point. The set of all feature points together with their matching points forms the initial matching set of the images.
Step 2, build the neighborhood affine coefficient matrix of the key points in each matching pair: based on the initial matching set obtained in step 1, for each local feature point in the set, find the other feature points within a certain range of it as its neighborhood feature points, represent the key point linearly by these neighborhood points, and obtain the corresponding affine coefficient matrix.
Step 3, measure the degree of local structure difference based on the neighborhood affine coefficient matrices: for each match in the initial matching set of step 1, take the neighborhood affine coefficient matrices of its two associated feature points obtained in step 2 and compute their difference value; this value expresses the degree of local structure difference of the pair. The smaller the difference value, the higher the local structure similarity; the larger the difference, the lower the similarity.
Step 4, neighborhood affine coefficient matrix optimization: define a function of the neighborhood affine coefficient matrix that measures the degree of local structure difference. The function is the sum of two data terms, each being the difference between a feature point of the matching pair in the initial matching set and the affine combination of its neighbors formed with the coefficient matrix. The goal of solving the formula is to drive the function value as close to zero as possible: obtain the neighborhood affine coefficient matrix at the extreme point of the function and substitute it back into the function to compute the degree of local structure difference of the feature points.
Step 5, compare the local structure difference value of each match from step 4 against a threshold: when the value is below the specified threshold, the corresponding feature point pair is a correct matching pair; when it is not below the threshold, the pair is a false matching pair. Remove the false matching pairs from the initial correspondence set and output all remaining pairs as the final correct matching result.
As shown in the flowchart of Figure 1, the method is a serial matching process: first extract the image feature points and their descriptors and obtain the initial matching set from descriptor similarity; then find the neighbor points of every feature point, build the neighborhood affine coefficient matrices of the key points of the matches in the initial set, and define a matrix co-error formula to optimize those matrices; finally keep the matches whose matrix co-error is below the specified threshold, delete those whose co-error is not below it, and take the resulting feature matching set as the matching result of the images to be matched.
Specifically, as shown in Figure 1, the invention discloses a matching method based on local structure similarity, comprising mainly the following steps:
Step 1, image feature extraction and initial matching: extract the feature points and feature descriptors of the images to be matched, and obtain the initial matching set based on descriptor similarity.
Step 1.1, image feature extraction: the classic scale-invariant feature transform (SIFT) descriptor is used to extract image features. A pair of images to be matched, I and I', is fed to the SIFT algorithm for feature extraction, yielding the SIFT feature points and the 128-dimensional feature descriptor vector of each point.
Step 1.2, initial feature matching: with the descriptors of step 1.1, let Xi and Xj be single feature descriptor vectors of the two images I and I' respectively, and let d(Xi, Xj) be their Euclidean distance. Xi and Xj are considered a possible match if and only if the ratio of the distance between Xi and every other descriptor vector to d(Xi, Xj) is greater than a set threshold; when the ratio is not greater than the threshold, the corresponding feature points are considered unmatched. The distance between descriptor vectors is:
d(X_i, X_j) = \sqrt{\sum_{k=1}^{128} (X_{ik} - X_{jk})^2},

where X_{ik} denotes the k-th component of the vector X_i, and the threshold is taken as 1.1.
Given an image pair to be matched, measure the distance between every feature descriptor vector of one image and the descriptor vectors of the other image, and take the feature point with the smallest distance as the matching point. In this way each matching pair corresponds to a feature region of the images to be matched. All initial matching pairs together constitute the initial matching set of the images.
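As an illustration of step 1.2, the ratio-test matching can be sketched as follows (a minimal NumPy sketch over hypothetical descriptor arrays; the SIFT extraction of step 1.1 is assumed to have already produced the descriptors):

```python
import numpy as np

def initial_matches(desc1, desc2, min_ratio=1.1):
    """Nearest-neighbour matching with the ratio test of step 1.2.

    desc1: (n1, d) descriptor vectors of image I; desc2: (n2, d) of image I'.
    A pair (i, j) is kept only when the distance from X_i to every other
    descriptor exceeds min_ratio times the best distance d(X_i, X_j).
    """
    matches = []
    for i, x in enumerate(desc1):
        dist = np.sqrt(((desc2 - x) ** 2).sum(axis=1))  # Euclidean distances
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[second] > min_ratio * dist[best]:       # distinctive match
            matches.append((i, int(best)))
    return matches
```

With 128-dimensional SIFT descriptors this reproduces the construction of the initial matching set; 1.1 is the threshold value stated above.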
Step 2, build the neighborhood affine coefficient matrix of the key points in each matching pair (further explained with Figure 2). Step 2.1, select the key neighborhood points: from the initial matching set of step 1.2, let any match M correspond to the feature point pair (p, p'), where p belongs to the image I to be matched and p' is the point of image I' matched with p. Find the other matches in the region adjacent to M, and let their corresponding feature point pairs be (qi, qi'). When the spatial distance between p and qi and, simultaneously, between p' and qi' is below a specified threshold, qi is taken as a neighbor point of p and qi' as a neighbor point of p'; when either distance is not below the threshold, p and qi (and likewise p' and qi') are not neighbors. The neighbor relation is judged by the following formula:
||p − q_i|| < τ  and  ||p' − q_i'|| < τ,
where τ is the threshold on the distance between a key point and its adjacent points, set to 10. τ is chosen to guarantee that a key point p of the image to be matched has k nearest neighbors on average; k can be set according to the local neighborhood information of the key points so as to represent their structural information more accurately. The index i runs over the neighbor points of p, with range [1, k]; in the experiments, adjusting the value of τ flexibly changes k, which lets the experiments achieve better results.
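The two-sided neighbor test of step 2.1 can be sketched as follows (NumPy sketch; the coordinate arrays are illustrative, with row i of pts1 matched to row i of pts2):

```python
import numpy as np

def neighbor_indices(p, p_prime, pts1, pts2, tau=10.0):
    """Indices i with ||p - q_i|| < tau AND ||p' - q_i'|| < tau.

    pts1, pts2: (n, 2) coordinates of the matched feature points in
    images I and I'. A point q_i counts as a neighbour of p only if its
    match q_i' is simultaneously close to p', as required by step 2.1.
    """
    d1 = np.linalg.norm(pts1 - p, axis=1)        # distances in image I
    d2 = np.linalg.norm(pts2 - p_prime, axis=1)  # distances in image I'
    return np.where((d1 < tau) & (d2 < tau))[0]
```

The caller excludes the point itself from its own neighborhood when building the neighbor set.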
Step 2.2, build the neighborhood affine coefficient matrix of the key points in each matching pair: with the neighbor points determined in step 2.1, write the set of neighbor points of the feature point p corresponding to any match M in image I as N_p = {q_1, ..., q_k}, where k is the number of neighbor points of p and q_i is its i-th neighbor. Following the theory of Locally Linear Embedding, the geometric structure near the feature point p can be characterized by its neighborhood affine coefficient matrix w = [w_1, ..., w_i, ...]^T, so that p is expressed as an affine combination of its determined neighbor set N_p: p = Σ w_i q_i, with Σ w_i = 1. For the matching feature point p' of p in image I', the neighborhood affine coefficient matrix w' = [w'_1, ..., w'_i, ...]^T of p' likewise gives p' = Σ w'_i q'_i, with Σ w'_i = 1. This completes the construction of the neighborhood affine coefficient matrices of the key points.
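The affine (sum-to-one) weights of step 2.2 can be computed with the standard Locally Linear Embedding recipe (a sketch under the same formulation; the small regularization keeps the Gram matrix invertible, in the spirit of the C + εI term introduced later in step 4):

```python
import numpy as np

def affine_weights(p, Q, eps=1e-3):
    """Weights w with p ≈ sum_i w_i q_i and sum_i w_i = 1 (step 2.2).

    p: (2,) feature point; Q: (k, 2) its neighbour points q_i.
    Solves the constrained least squares via the Gram matrix of the
    neighbours centred on p, as in Locally Linear Embedding.
    """
    Z = Q - p                                   # centre neighbours on p
    G = Z @ Z.T                                 # (k, k) Gram matrix
    G = G + eps * np.trace(G) * np.eye(len(Q))  # regularise: G may be singular
    w = np.linalg.solve(G, np.ones(len(Q)))
    return w / w.sum()                          # enforce sum_i w_i = 1
```

For example, the midpoint of two neighbors receives weights (0.5, 0.5).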
Step 3, judge the local structure similarity of the matching points: the processing of the image feature points in step 2 yields, for any match M in the initial matching set, the neighborhood affine coefficient matrices w and w' of its associated feature point pair (p, p'). Because the neighborhood affine coefficient matrix describes the geometric properties of the local structure of a feature point, agreement of the two matrices indicates that the corresponding matching pair is correct: when w = w', the pair is kept as a correct matching pair; when the two are not equal, the pair is judged a false matching pair and is deleted.
Step 4, optimize the neighborhood affine coefficient matrix of the feature points: to perform the optimization, seek an optimal neighborhood affine coefficient matrix w*, form its linear combination with the neighbor set N_p of the feature point p, subtract the combination from p, and take the norm of the difference; likewise combine w* with the neighbor set N_p' of the matching point p' of p, subtract the combination from p', and take the norm. The matrix co-error J is defined as the sum of these two norms and expresses the degree of local structure difference of the matching point pair. Its specific form is:

J(w*) = ||p − Σ_i w*_i q_i||² + ||p' − Σ_i w*_i q'_i||²,  subject to Σ_i w*_i = 1.
The magnitude of J expresses the degree of difference between the local structures of the feature points p and p': the smaller the value of J, the more similar the local structures of p and p'. In principle, correct matches mapping the same object in the images to be matched tend to cluster in neighboring regions and, owing to physical constraints, share a similar topology across the images, so their J values are small. The geometric layout of the matching points near a false match, by contrast, can hardly remain consistent across different images, so the corresponding J value is large. The matrix co-error J therefore provides a good way to evaluate the correctness of a match. Set a small comparison threshold: when J is below the threshold, the corresponding match is considered correct and is retained in the initial matching set; when J is not below the threshold, the match is considered false and is deleted from the set.
The validity of the formula for J rests on the assumption that the neighborhood affine coefficient matrix w* is invariant to local geometric changes between images; in the feature matching literature, corresponding local regions are usually assumed to be affine invariant. The proof is as follows: for any match M in the initial matching set with point pair (p, p'), let a neighboring match M_i correspond to the point pair (q_i, q_i'). The feature point p' can be approximated from its matching point p by a 2×2 rotation-scaling matrix A and a 2×1 translation vector t; since feature points in a neighboring region undergo a similar transformation, each neighbor q_i of p is likewise mapped to its match q_i' by the same A and t. Then:
p' − Σ_i w_i q_i' ≈ (A p + t) − Σ_i w_i (A q_i + t) = A (p − Σ_i w_i q_i),

where the constraint Σ_i w_i = 1 cancels the translation and ensures invariance under the transformation. This proves that the neighborhood affine coefficient matrix remains invariant to local geometric changes between images: if p = Σ_i w_i q_i, then p' = Σ_i w_i q_i'. If severe image distortion occurs near the key point p or p', the relation above may not hold; fortunately, severe image distortion is not the common case. In summary, this proves that when the match M and its neighboring matches M_i are correct matches, the local region structure is affine invariant.
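The translation-cancelling argument above can be checked numerically (a toy example; the points, weights, and the similarity transform A, t are made up for illustration):

```python
import numpy as np

# A point p written as an affine combination of three neighbours q_i,
# with weights summing to 1 (here p is the centroid, so w_i = 1/3).
p = np.array([1.0, 1.0])
Q = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
w = np.full(3, 1.0 / 3.0)
assert np.allclose(w @ Q, p)

# Apply one local rotation-scaling A and translation t to p and all q_i,
# as assumed for a correct match and its neighbouring correct matches.
theta = 0.3
A = 1.7 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, -2.0])
p_prime = A @ p + t
Q_prime = Q @ A.T + t            # each row q_i' = A q_i + t

# Since sum_i w_i = 1, the translation cancels:
# sum_i w_i q_i' = A(sum_i w_i q_i) + t = A p + t = p'.
assert np.allclose(w @ Q_prime, p_prime)
```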
Since the initial matching set contains false matches, jointly constructing each matched feature point pair together with the linear combination of its respective neighborhood in each image reduces the error caused by noise; even purely false matches can be handled by the optimization, which gives the matching stronger robustness under noise interference. The optimization proceeds as follows:
The formula for J can be written as

J(w) = wᵀ C w,

where X = [p − q_1, ..., p − q_i, ...], Y = [p' − q'_1, ..., p' − q'_i, ...] and C = Xᵀ X + Yᵀ Y. Introducing a Lagrange multiplier λ to enforce the constraint Σ_i w_i = 1, the formula becomes

J(w) = wᵀ C w − λ (1ᵀ w − 1),
where 1 = [1, ..., 1]ᵀ is an |N|×1 column vector. Taking the gradient of J with respect to w and setting it to zero, the value of w* can be calculated as

w* = C⁻¹ 1 / (1ᵀ C⁻¹ 1).
Solving this formula requires the explicit inverse of the matrix C, which by definition is symmetric positive semidefinite. However, because the number of neighboring matches of a match M is generally greater than two, C can be singular. To keep the formula linearly solvable, in practice a small multiple of the identity matrix I is further added to C as a regularization term:
C_new = C + εI,
Compared with the trace of C, ε is set to a small value, 10⁻³·tr(C) throughout the experiments. The weight vector w* is then obtained by solving the linear system C_new w = 1 and adjusting w so that Σ_i w_i = 1. After this processing, substituting the computed w* into the formula for the matrix co-error J yields the corresponding J value, and thus the local structure difference value of the feature point pair of each match.
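The optimization of step 4 can be sketched end to end (NumPy sketch; C is built here from row vectors p − q_i, which is equivalent to the column convention Xᵀ X + Yᵀ Y in the text, and all point arrays are illustrative):

```python
import numpy as np

def co_error(p, p_prime, Q, Q_prime, eps=1e-3):
    """Matrix co-error J of one match (p, p') with neighbour pairs (Q, Q').

    Builds C from the rows X_i = p - q_i and Y_i = p' - q_i',
    regularises it as C + eps*tr(C)*I, solves for the weight vector,
    normalises it so that sum_i w_i = 1, and returns
    J = ||p - sum_i w_i q_i||^2 + ||p' - sum_i w_i q_i'||^2.
    """
    X = p - Q                                    # (k, 2) rows p - q_i
    Y = p_prime - Q_prime                        # (k, 2) rows p' - q_i'
    C = X @ X.T + Y @ Y.T                        # (k, k), symmetric PSD
    C = C + eps * np.trace(C) * np.eye(len(Q))   # C_new = C + eps*I term
    w = np.linalg.solve(C, np.ones(len(Q)))      # solve C_new w = 1
    w = w / w.sum()                              # enforce sum_i w_i = 1
    return np.sum((p - w @ Q) ** 2) + np.sum((p_prime - w @ Q_prime) ** 2)
```

A geometrically consistent pair yields J near zero, while an inconsistent one yields a large J; this is what the threshold of step 5 separates.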
Step 5 is based on the degree of local structure difference of the feature points of each match obtained in step 4, i.e., the value of the matrix co-error. Set a small threshold, here taken as 8: when the matrix co-error of a feature point is below the threshold, the match associated with the point is considered correct and is kept for the final matching set; otherwise, when the matrix co-error is not below the threshold, the associated match is considered false and is deleted. Finally, the set of all retained matches is output as the final image matching result set.
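The final filtering of step 5 then reduces to a one-line threshold test (sketch; the (match, J) pairs are hypothetical placeholders for the output of step 4):

```python
# Keep matches whose matrix co-error J is below the threshold (8 above).
threshold = 8.0
scored = [(("p1", "p1'"), 0.4), (("p2", "p2'"), 12.3), (("p3", "p3'"), 2.9)]
final_matches = [m for m, J in scored if J < threshold]
```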
Example
The experimental hardware environment of the invention is an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz with 8 GB of memory running Microsoft Windows 7 Ultimate; the programming environment is Visual Studio 2015 and 64-bit MATLAB R2016a. The test images (see the experimental matching results in Figure 3) come from the multi-target object matching benchmark image set published online by Seoul National University (SNU).
All images of the SNU set are used: six groups of images to be matched, named Books, Bulletins, Jigsaws, Mickeys, Minnies and Toys. Each image contains several objects, and between each pair of objects to be matched there are viewpoint changes, illumination differences and variations of the objects themselves, all of which make matching on these images highly challenging.
On the basis of the implemented matching technique, the invention is compared in precision and recall with other matching methods: feature matching with alternating Hough and inverted Hough transforms (HV), discrete tabu search for graph matching (DTS), reweighted random walks for graph matching (RRWM), and spatial matching as an ensemble of weak geometric relations (EWGR). All methods use the SIFT descriptor for feature description, start from an initial matching set, and aim to remove the false matches it contains. The parameters of the compared methods are those giving their best experimental performance, and the performance of each method is reported as precision and recall. The comparison results are as follows:
As the data in the table show, the matching method of the invention outperforms the other matching methods in both precision and recall, which demonstrates the significance of the invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811634213.4A CN109697692B (en) | 2018-12-29 | 2018-12-29 | Feature matching method based on local structure similarity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109697692A CN109697692A (en) | 2019-04-30 |
CN109697692B true CN109697692B (en) | 2022-11-22 |
Family
ID=66233012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811634213.4A Expired - Fee Related CN109697692B (en) | 2018-12-29 | 2018-12-29 | Feature matching method based on local structure similarity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109697692B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245671B (en) * | 2019-06-17 | 2021-05-28 | 艾瑞迈迪科技石家庄有限公司 | Endoscope image feature point matching method and system |
CN110472543B (en) * | 2019-08-05 | 2022-10-25 | 电子科技大学 | Mechanical drawing comparison method based on local connection feature matching |
CN110659654A (en) * | 2019-09-24 | 2020-01-07 | 福州大学 | A method for checking and anti-plagiarism of painting based on computer vision |
CN110807797B (en) * | 2019-10-22 | 2022-03-22 | 中国测绘科学研究院 | Multi-source heterogeneous surface entity and point entity matching method considering global optimization and storage medium thereof |
CN110874849B (en) * | 2019-11-08 | 2023-04-18 | 安徽大学 | Non-rigid point set registration method based on local transformation consistency |
CN111461196B (en) * | 2020-03-27 | 2023-07-21 | 上海大学 | Fast and Robust Image Recognition and Tracking Method and Device Based on Structural Features |
CN112348105B (en) * | 2020-11-17 | 2023-09-01 | 贵州省环境工程评估中心 | Unmanned aerial vehicle image matching optimization method |
CN114399422B (en) * | 2021-12-02 | 2023-08-08 | 西安电子科技大学 | A Registration Method of Remote Sensing Image Based on Local Information and Global Information |
CN114299312B (en) * | 2021-12-10 | 2024-12-20 | 中国科学技术大学 | A line segment matching method and matching system |
CN114882260B (en) * | 2022-05-31 | 2025-04-22 | 济南大学 | A graph matching method and system |
CN115049847B (en) * | 2022-06-21 | 2024-04-16 | 上海大学 | ORB descriptor-based feature point local neighborhood feature matching method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103839253A (en) * | 2013-11-21 | 2014-06-04 | 苏州盛景空间信息技术有限公司 | Arbitrary point matching method based on partial affine transformation |
CN105354578A (en) * | 2015-10-27 | 2016-02-24 | 安徽大学 | Multi-target object image matching method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7382897B2 (en) * | 2004-04-27 | 2008-06-03 | Microsoft Corporation | Multi-image feature matching using multi-scale oriented patches |
- 2018-12-29: application CN201811634213.4A filed in China, granted as CN109697692B; status: not active (Expired - Fee Related)
Non-Patent Citations (1)
Title |
---|
"Graph matching algorithm combining intensity-order local feature description"; Bao Wenxia et al.; Journal of Harbin Engineering University; 2015-01-09 (No. 03); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN109697692A (en) | 2019-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109697692B (en) | Feature matching method based on local structure similarity | |
Hossein-Nejad et al. | An adaptive image registration method based on SIFT features and RANSAC transform | |
Quan et al. | Deep feature correlation learning for multi-modal remote sensing image registration | |
CN105844669B (en) | A kind of video object method for real time tracking based on local Hash feature | |
CN111242221B (en) | Image matching method, system and storage medium based on image matching | |
Si et al. | Dense registration of fingerprints | |
CN106682700B (en) | Block-based fast matching method based on keypoint description operator | |
CN113361542A (en) | Local feature extraction method based on deep learning | |
CN106548462A (en) | Non-linear SAR image geometric correction method based on thin-plate spline interpolation | |
Ma et al. | Image feature matching via progressive vector field consensus | |
CN105551022A (en) | Image error matching detection method based on shape interaction matrix | |
CN111753119A (en) | Image searching method and device, electronic equipment and storage medium | |
CN109800787B (en) | Image Template Matching Method Based on Relative Feature Distance Error Metric | |
Zhang et al. | An efficient image matching method using Speed Up Robust Features | |
CN103955950A (en) | Image tracking method utilizing key point feature matching | |
CN112446431B (en) | Feature point extraction and matching method, network, device and computer storage medium | |
CN117455967A (en) | A large-scale point cloud registration method based on deep semantic graph matching | |
Shi et al. | Robust image registration using structure features | |
CN109840529B (en) | An Image Matching Method Based on Local Sensitive Confidence Evaluation | |
Giang et al. | Topicfm+: Boosting accuracy and efficiency of topic-assisted feature matching | |
CN105956581B (en) | A Fast Initialization Method of Facial Feature Points | |
Dai et al. | FMAP: Learning robust and accurate local feature matching with anchor points | |
Jiang et al. | Improving sparse graph attention for feature matching by informative keypoints exploration | |
CN110705569A (en) | Image local feature descriptor extraction method based on texture features | |
CN109785372A (en) | Basis matrix robust estimation method based on soft decision optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20221122 |